I need some help in figuring out what I need to do for the following math problem (hopefully this is the right place for this question):

We have a survey that asks respondents to rate certain criteria. These criteria need to be changed at regular intervals based on several factors, one of which is the variance we're seeing for each criterion. To make this process easier, I want to create a score based on variance that tells the user at a glance whether the variance is okay or not.

We will calculate the variance for a criterion on a rating scale that can change from one survey to another (e.g. 0 to 10, 1 to 10, 0 to 7), and then I want to convert the calculated variance for that criterion to a 1 to 100 scale. I want the maximum possible variance to map to 100 on the scale and the minimum variance (zero) to map to 1.
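To make this concrete, here's a rough sketch of what I imagine, assuming the maximum possible variance on a scale from min to max is ((max - min) / 2)^2 (which, if I understand correctly, occurs when responses split evenly between the two endpoints); please correct me if that assumption, or the linear mapping, is the wrong approach:

```python
import statistics

def variance_score(responses, scale_min, scale_max):
    """Map the population variance of survey responses onto a 1-100 scale.

    Assumption: the maximum possible variance on a bounded scale occurs
    when responses split evenly between the two endpoints, giving
    ((scale_max - scale_min) / 2) ** 2.
    """
    var = statistics.pvariance(responses)
    max_var = ((scale_max - scale_min) / 2) ** 2
    # Linear map: variance 0 -> score 1, maximum variance -> score 100
    return 1 + 99 * (var / max_var)

# On a 0-10 scale: identical answers give the minimum score
print(variance_score([5, 5, 5, 5], 0, 10))    # 1.0
# Half at 0, half at 10 gives the maximum score
print(variance_score([0, 0, 10, 10], 0, 10))  # 100.0
```

Is a simple linear mapping like this reasonable, or is there a better-behaved transformation for this kind of score?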

If the thing we are measuring has a small variance, then we may not want to continue measuring it, and it would show up as a low number on the 1 to 100 scale. For example, I could set a rule that says if the variance score falls below 50, we need to evaluate that criterion to determine whether we should continue measuring it.

If the thing we are measuring has a large variance, then we do want to continue measuring it, and its score would be a large number on the 1 to 100 scale.

Converting the variance to a 1 to 100 scale lets me quickly and uniformly communicate whether we have good or poor variance and whether the user needs to take action based on what we're seeing.

Hopefully I've done an adequate job in communicating what I'm trying to do here. Can anyone help me with the math for this problem?

Thanks for your help!