How do I apply the concept of “weight” to scale values? What’s breaking my brain is that the default value for scales is 1, and that you (usually) don’t want to go below 0. So weird.
How do you describe this relation with math?? In this case the weight would be 0.5:
input => output
0.25 => 0.625
0.5 => 0.75
1 => 1
2 => 1.5
3 => 2
5 => 3
Multiplying by a weight of 0.5 would change the default (1 would become 0.5), so that doesn’t work.
I could subtract 1 first, then apply the weight, then add 1 again. But that only works for scaling up.
What’s the formula here??
Pretty sure that works everywhere, not just for scaling up.
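The subtract-1/weight/add-1 step is just a linear interpolation between 1 (no scaling) and the raw scale: weighted = (1 - weight) * 1 + weight * scale, which rearranges to (scale - 1) * weight + 1, so it behaves the same whether the scale is above or below 1.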
Here’s a quick Python script as a demonstration:
weight = 0.5

for i in range(30):
    scale = i / 20
    weighted = ((scale - 1) * weight) + 1
    print(f'{scale:.2f} => {weighted}')
For me, this prints:
0.00 => 0.5
0.05 => 0.525
0.10 => 0.55
0.15 => 0.575
0.20 => 0.6
0.25 => 0.625
0.30 => 0.65
0.35 => 0.675
0.40 => 0.7
0.45 => 0.725
0.50 => 0.75
0.55 => 0.775
0.60 => 0.8
0.65 => 0.825
0.70 => 0.85
0.75 => 0.875
0.80 => 0.9
0.85 => 0.925
0.90 => 0.95
0.95 => 0.975
1.00 => 1.0
1.05 => 1.025
1.10 => 1.05
1.15 => 1.075
1.20 => 1.1
1.25 => 1.125
1.30 => 1.15
1.35 => 1.175
1.40 => 1.2
1.45 => 1.225
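As a quick sanity check, here’s a minimal sketch (the weighted_scale helper name is my own) that confirms the same formula reproduces the example values from the original question:

def weighted_scale(scale, weight):
    # Interpolate between 1 (the identity scale) and the raw scale value.
    return (scale - 1) * weight + 1

# Values from the input => output table in the question, with weight = 0.5.
# These happen to be exact in floating point, so == comparison is safe here.
examples = [(0.25, 0.625), (0.5, 0.75), (1, 1), (2, 1.5), (3, 2), (5, 3)]
for scale, expected in examples:
    assert weighted_scale(scale, 0.5) == expected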