Example problem
A casino offers you a gamble with a 1% chance of winning on each try. How many tries will it take to win at least once?
Solution
For this example, I chose 95% confidence, a willingness to be wrong once in twenty. The chance of still not having won after some number of tries is 0.99^tries, so set that equal to the chance to be wrong and solve:
tries = log(chanceToBeWrong) / log(chanceOfFailureEachTry) = log(1/20) / log(99%) ≅ 300 tries
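A quick Python sketch of that calculation (the function and variable names are just my own labels, nothing standard):

import math

def tries_needed(win_chance, confidence):
    # Smallest number of tries so that P(win at least once) >= confidence.
    chance_to_be_wrong = 1 - confidence            # e.g. 0.05 for 95% confidence
    chance_of_failure_each_try = 1 - win_chance    # e.g. 0.99 for a 1% game
    return math.ceil(math.log(chance_to_be_wrong) / math.log(chance_of_failure_each_try))

print(tries_needed(0.01, 0.95))  # 299, i.e. roughly 300 tries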
But what about that decision to choose 95% confidence?
It was an arbitrary choice, governed by nothing more than a general rule of thumb. Can we do better?
Here's what I've figured out so far
Confidence is a continuous real variable, a Real in the range 0-1.
In this example problem it is tied to the number of tries by confidence = 1 - 0.99^tries, i.e. one minus the chance of still not having won (see the sketch below)
If I choose confidence = 0.5, then I'm making a prediction of the future designed to be wrong half the time
If I choose confidence = 1, then it would take infinite tries
So the range of admissible values for confidence appears to be the open interval (0.5, 1), endpoints excluded
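To see how sensitive the answer is to that choice of confidence, here's a small Python sketch (my own throwaway code, same formula as above) tabulating the tries needed at a 1% win chance:

import math

# Tries needed at a 1% win chance for a range of confidence levels.
# The count grows without bound as confidence approaches 1.
for confidence in (0.5, 0.9, 0.95, 0.99, 0.999):
    tries = math.ceil(math.log(1 - confidence) / math.log(0.99))
    print(f"confidence {confidence}: {tries} tries")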