> Question: A fair die rolling a 6 twice in a row is more likely than rolling 1-2-3-4-5-6 in sequence
Two 6s in a row is 1/36 chance (1/6)^2
1-2-3-4-5-6 is a 1/46656 chance (1/6)^6
The website claims they are the same probability:
> Same probability: 1/46,656 — Both outcomes have exactly the same probability: (1/6)^6 = 1/46,656. This illustrates the representativeness heuristic — random-looking sequences feel more probable than ordered ones.
The website's "answer" is wrong: was the question supposed to be rolling a 6 six times in a row?
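Quick sanity check of the arithmetic with exact fractions (Python):

```python
from fractions import Fraction

def seq_prob(n):
    """Probability of any ONE specific ordered sequence of n fair-die rolls."""
    return Fraction(1, 6) ** n

two_sixes = seq_prob(2)     # 6-6
ordered_run = seq_prob(6)   # 1-2-3-4-5-6
six_sixes = seq_prob(6)     # 6-6-6-6-6-6

print(two_sixes)                  # 1/36
print(ordered_run)                # 1/46656
print(six_sixes == ordered_run)   # True: any two length-6 sequences are equally likely
```

So the website's claim is only correct for two sequences of the same length; 6-6 vs 1-2-3-4-5-6 is comparing a length-2 event to a length-6 event.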
You're right, that's a mistake in how I phrased the question. It should say "six times in a row" not "twice in a row". Fixing it now! Thanks for pointing that out!
Yeah, most likely it was trying to illustrate a bias of human perception: that 1-2-3-4-5-6 would seem more probable than six 6s.
A better way to illustrate this bias is with coin flips. People will tell you that the odds of 6 heads are rarer than the odds of 3 tails followed by 3 heads. The difficulty is understanding whether they mean "in order" or "as a group".
If it's in order, the odds are the same. Every ordering of H/T has the same probability, but humans will see "all heads" and think that's rarer. The important bit is whether there's a clear understanding of the ordering.
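The order-vs-group distinction works out like this (a small sketch in Python):

```python
from fractions import Fraction
from math import comb

half = Fraction(1, 2)

# Any specific ORDERED sequence of 6 fair coin flips has the same probability:
p_hhhhhh = half ** 6   # H H H H H H
p_ttthhh = half ** 6   # T T T H H H, identical as an ordered sequence

# "As a group": 3 tails and 3 heads in ANY order covers C(6,3) = 20 orderings
p_three_each = comb(6, 3) * half ** 6

print(p_hhhhhh == p_ttthhh)     # True
print(p_three_each / p_hhhhhh)  # 20: the unordered event is 20x as likely
```

That factor of 20 is exactly the gap between the two readings of the question.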
The slider disappearing when sliding between extremes is very confusing. I think the slider should be the only thing displayed; remove the buttons entirely.
Maybe I don't know enough about "calibration" in a technical sense, but it seems like this quiz can't really distinguish between factual knowledge and calibration skill?
Is this type of quiz reproducible for individuals and across various cross-sections of the population?
Are there studies on this? Is the quiz based on these studies?
Great question. Calibration is specifically about whether your confidence in an answer matches your accuracy, not whether you know the answer. Someone who knows a lot but is always 90% confident would still score poorly if they're wrong 20% of the time, for example.
In terms of research, Tetlock's Expert Political Judgment and Superforecasting were the foundation. He ran a 20-year study showing that domain experts were barely better than chance at long-range predictions. The Brier score was the standard metric in that research.
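For anyone curious, the Brier score is just the mean squared error between your stated probability and the 0/1 outcome. A minimal sketch (the site's exact scoring may differ):

```python
def brier(forecasts):
    """Mean squared error between stated probability and outcome (0 or 1).
    0.0 is perfect; always answering 50% gives 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# (stated probability that the statement is true, actual outcome)
# 90% confident on every question, right on 8 of 10:
confident = [(0.9, 1)] * 8 + [(0.9, 0)] * 2
print(round(brier(confident), 3))  # 0.17
```

Lower is better, and the squared penalty is what makes confident misses so expensive.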
I see, that makes a lot of sense. Maybe the UI should reflect this? Have one button for True or False or Uncertain, and then the slider for confidence in the answer?
That's a really good UX idea. I can see how it's not the most intuitive now. Separating the direction from the confidence level would make it much clearer. Adding that to my list.
There's a bias, I think. Since the title says it's about how bad I am at estimating, I leaned towards counterintuitive answers. That got me quite a high score. I think the test set should also include intuitive facts (or maybe I was just lucky).
As much as it is counterintuitive, that is actually a valid calibration strategy. If you notice the questions lean slightly towards counterintuitive and adjust for it, that IS better calibration! But you raise a fair point about framing bias from the title.
The Brier score and "diagnosis" are shown immediately, no signup needed. The email is optional and only if you want to see the calibration curve and the question breakdown sent to you. I'll make that clearer!
Made a few changes based on feedback from this thread: full results are now shown immediately with no email gate, the UX now has true/false/uncertain buttons plus a confidence slider, I cleaned up the quiz result page, and I fixed the die probability question. Thanks for all the honest feedback!
Wait, so roughly is it rewarding being confident when correct, and penalizing being confident when wrong? Meaning that the highest score is only achievable if you answer fully confident true or false, and get all 10 correct?
If so, isn't that conflating knowledge with over/under confidence?
Your point on scoring is correct: if you're 100% confident and right on everything, you score a perfect 0. The calibration insight is in how you handle the questions where you don't know the answer. Say you're highly knowledgeable, 95% confident on everything, and get 2 wrong; now compare that to someone who says they're 70% confident on those same two questions. Your score comes out worse, which indicates you're overconfident compared to the other person!
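The 95%-vs-70% example works out like this under a plain Brier score (a sketch; assuming 10 questions and the same 2 misses for both people):

```python
def brier(forecasts):
    """Mean squared error between stated probability and 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Both answer 10 questions and miss the same 2; only stated confidence differs.
overconfident = [(0.95, 1)] * 8 + [(0.95, 0)] * 2
calibrated    = [(0.95, 1)] * 8 + [(0.70, 0)] * 2

print(round(brier(overconfident), 4))  # 0.1825
print(round(brier(calibrated), 4))     # 0.1, better despite identical knowledge
```

Same knowledge, same accuracy, but the hedged forecaster scores noticeably better.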
How are they different? If you "know" something, you are 100% confident in it, which gives you an easy 0 for this question (or a surprising 1). Philosophically, the problem is more that there is no difference between confidently and modestly wrong in terms of consequences of binary decisions.
Just pushed a fix for that! You should be able to see everything without inputting your email now. I've made a note about font size, thank you for the feedback.
I’ve taken the quiz but wasn't compelled to sign up. The site feels manipulative, e.g., the “show me all the questions” link is tiny and hidden between two larger boxes, and even then it only shows 2 questions with a signup CTA. Maybe that’s best-practice growth hacking these days, but to me it’s a manipulative turnoff. If you’d simply given me all the questions and answers, I would have signed up for more, especially with the discount code. Otherwise, how am I supposed to even know what I’m signing up for? Every interaction I’ve had with the site so far has been a sales attempt, so mostly I expect more of those.
Update at 2 hours: 1350+ quiz takers! 50% overconfident, 40% well-calibrated, and 10% underconfident. The average score is around 0.228, with the best score still at 0.007 (nearly perfect). The pattern so far is people are most overconfident in the 70-90% range, but are right closer to ~55% of the time.
I think this might be conflating confidence with accuracy. I tried leaving the slider in the middle (nominally the least confident position) and it gave a score of 0.25 and diagnosed it as 'overconfident'.
The Brier score is pathological when the guess is 0.5: regardless of the outcome, it will be equal to 0.25, so if you define "better than random" as having a score < 0.25, actually acting randomly makes you "overconfident".
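Easy to verify: under a plain Brier score, answering 0.5 on everything scores exactly 0.25 regardless of the outcomes, so a "< 0.25 is better than random" cutoff does mislabel the maximally uncertain strategy:

```python
def brier(forecasts):
    """Mean squared error between stated probability and 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Answering 50% on every question scores 0.25 no matter what actually happens
all_true  = [(0.5, 1)] * 10
all_false = [(0.5, 0)] * 10
print(brier(all_true))   # 0.25
print(brier(all_false))  # 0.25
```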
Update: 400+ quiz takers now... insane. Best Brier score so far is 0.007 (nearly perfect calibration). The worst came in at 0.600. Average is 0.230, still just better than a coin flip. Where did you land?
That's the second best score I've seen today out of 700+ quiz takers! Exceptional calibration. The confidence angle is the whole point, people don't know how far off they actually are until they see the hard data!
Interesting data from the quiz so far: 160+ quiz takers! The average is 0.239 (barely better than a coin flip at 0.25), but almost everyone indicates they are confident in their answers.
The variance is normal, the questions pull from a pool of 138 questions so far. 0.177 is strong. Setting everything to 50% would just get you 0.25, so you did way better on the first attempt. The goal isn't 50/50 on everything, only on the occasions where you are not confident that you are right.
Yeah! I’m confident when I give an answer. In a real life scenario I would actually research the ones I’m not so sure about - but having a confident first take narrows down that research a lot.
It is very disappointing that you can't see what you got right or wrong without giving out your email. I'm not even sure if one would learn from the email or whatever the calibration result is.
I'm happy for you if it works but I sure feel cheated. I hope others also feel it's against the spirit of a Show HN. But maybe it's just me.
Just checked and everything is up. That might just be a console warning, but shouldn't affect the quiz. Can you try a hard refresh (ctrl+shift+R)? If that still doesn't work, what browser are you on?
I didn't find the questions very representative of estimation. That is, maybe if you happen to know many of the random facts about the world they were based on, then applying them might be a relevant test of the ability to estimate. I really felt more like I was making uneducated guesses (0.155). I suppose I was expecting more "ping pong balls in an airplane" style questions.
The point I was going for was more so how people handle questions they don't know the answer to. Someone that is "well-calibrated" would set things they are uncertain about at closer to 50% instead of guessing one way or the other (overconfident). That score is excellent, so it suggests you did exactly that!
I unsubscribe from mails that aren't useful to me day-to-day because they're distracting.
Other than that it seems like a cool idea. I'd recommend slightly bigger fonts. I often have this issue with Gemini.
https://taketest.xyz/confidence-calibration
The same site also has something with a fixed confidence level: https://taketest.xyz/ci-calibration
>0.188
Slightly above avg - yay
As a test of general knowledge it was interesting. The confidence angle was the most interesting part, though.
Heck yeah.
Manifest fetch from https://www.convexly.app/manifest.json failed, code 403