No. of Recommendations: 14
.. there is an argument that it is nothing more than an artifact of the way the initial study was done:
https://www.scientificamerican.com/article/the-dun...

Ignore my post here (which descends even further OT) if you want to relate the Dunning-Kruger effect to investing, which Texirish already nicely started to do. In the above article, Eric Gaze claims that the Dunning-Kruger effect is an artifact of its own design rather than an artifact of human nature.
Eric Gaze, a senior lecturer in mathematics, designed a counter-experiment to support his claim. I am pretty sure that he got the maths right, and his conclusion matched what he expected. However, it seems he got the interpretation of his own counter-experiment wrong, owing to how the random data was formulated.
Rather than using real people he used imaginary people with random data: firstly (1) they all received completely random scores in their tests, and secondly (2) they also made random predictions as to how they scored compared to the others.
Eric Gaze's hypothesis was that the Dunning-Kruger effect is an artifact of research design, not human thinking, and he and his colleagues set out to show that it can be produced using randomly generated data.
He indeed confirmed his hypothesis, writing in his summary: “To establish the Dunning-Kruger effect is an artifact of research design, not human thinking, my colleagues and I showed it can be produced using randomly generated data.”
He then showed that, just as with real people, the low-scoring imaginary people - even with this random data - overestimated their scores.
Rather than “randomly generated data” producing a balanced population, he created an insanely delusional population - by construction, their predictions have a 0% correlation to their actual scores. Real humans aren’t great at predicting how they rank, but thank goodness, they are not *that* bad.
But instead of a population of accurate predictors, Mr. Gaze put his imaginary delusional people (noting that the two-step randomization produces far worse predictors than we get with real humans) through the Dunning-Kruger experiment, wanting to show that even with random data the same Dunning-Kruger effect would be produced. And what do you know - that’s what it did, because the randomly sampled data was even more insane than real humans.
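To make this concrete, here is a minimal sketch (my own illustration, not Gaze's actual code) of the two-step randomization as described above: scores and self-predictions drawn independently at random. The bottom quartile's self-predictions still average around the 50th percentile, manufacturing a massive apparent overestimation:

```python
import random

random.seed(0)
N = 10_000

# Each imaginary person gets an actual percentile and an independent,
# completely uncorrelated self-predicted percentile - the two-step
# randomization described above.
actual = [random.uniform(0, 100) for _ in range(N)]
predicted = [random.uniform(0, 100) for _ in range(N)]

people = sorted(zip(actual, predicted))   # sort by actual score
bottom = people[: N // 4]                 # bottom quartile by score

avg_actual = sum(a for a, _ in bottom) / len(bottom)
avg_pred = sum(p for _, p in bottom) / len(bottom)

print(f"bottom quartile: actual ~{avg_actual:.1f}, predicted ~{avg_pred:.1f}")
# The bottom quartile's true percentile averages roughly 12.5, while their
# random predictions still average roughly 50 - a huge apparent
# "overestimation" that was baked into the data, not discovered by it.
```

The ~37-point gap here is purely an artifact of the 0% correlation, which is exactly the point.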
He never points out that the randomly generated people are dreadful predictors of their own competence, and the impression is left that they are just random, thus ordinary, people.
If this point had clicked, I expect Gaze would have called the counter-experiment off. In using random data, Gaze is not producing a neutral population (in the way that random stock pickers are neutral - they perform as well as the average), but rather a baseline of humans dramatically poor (by construction of the data) at predicting their relative level of competence.
Gaze wanted to deduce that if the Dunning-Kruger experiment produces similar results with random data as it does with real people, then it is the experiment design causing the results, not human nature.
I can understand Gaze's inspiration. Sometimes when doing experiments, you produce results that are inherent to the design of the experiment itself. For example, the bottom 25% of performers in the Dunning-Kruger experiment - even in an imaginary population quite capable of accurately predicting their relative ranking - would still slightly over-predict their scores. Selecting the lower 25% by score produces a sub-sample that, by definition, sits well below the average, so any noise at all between their predictions and their measured scores skews that group towards a larger average over-prediction of their rank than the population as a whole. However, this skew is far milder than what occurs in the experimental results using real humans, showing that the Dunning-Kruger effect is genuine.
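A quick sketch of that selection skew (again my own illustration, with an assumed noise level): everyone predicts their true ability rank perfectly, but the test that ranks them adds a little luck. The bottom quartile by test score then shows a small apparent overestimate - a few percentile points, nothing like the gap random predictions produce:

```python
import random

random.seed(1)
N = 10_000

# Everyone knows their true ability rank exactly, but the test that ranks
# them adds a little luck (sigma = 10 score points - an assumption).
ability = [random.uniform(0, 100) for _ in range(N)]
test_score = [a + random.gauss(0, 10) for a in ability]

def percentiles(xs):
    # Percentile rank (0-100) of each element within the list.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    pct = [0.0] * len(xs)
    for rank, i in enumerate(order):
        pct[i] = 100 * rank / (len(xs) - 1)
    return pct

ability_pct = percentiles(ability)    # their perfectly accurate self-prediction
test_pct = percentiles(test_score)    # what the experiment actually measures

bottom = [i for i in range(N) if test_pct[i] < 25]   # bottom quartile by test
avg_measured = sum(test_pct[i] for i in bottom) / len(bottom)
avg_self = sum(ability_pct[i] for i in bottom) / len(bottom)

print(f"measured ~{avg_measured:.1f}, self-prediction ~{avg_self:.1f}")
# Even perfect self-knowledge of ability yields a small apparent
# overestimate in the bottom quartile, far short of the ~37-point gap
# the random predictions produce.
```

The bottom quartile by test score partly consists of people who had bad luck on the test, so their (correct) self-assessments sit a little above their measured rank - a design-induced skew, but a mild one.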
If he wanted to contradict the experiment he would need to construct the counter-experiment differently. He could use an imaginary population of humans actually fairly good at working out their level of competence (rather than a population dreadfully bad at it, as he did) - which could easily be defined mathematically - and then try to show that the Dunning-Kruger experiment produces similar results to the real population. That would genuinely show that the interpretation of the experiment (the lower-scoring 25% greatly overestimating themselves) is at fault. It wouldn’t, though. Unlike the randomly generated data, the lower 25% here would know they were the lower 25%.
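That better counter-experiment is easy to define mathematically, as suggested. A sketch (with an assumed, modest prediction error of 12 percentile points): the bottom quartile's average self-prediction lands near the bottom, not near the middle, so the strong Dunning-Kruger pattern does not appear from the design alone:

```python
import random

random.seed(2)
N = 10_000

# Hypothetical well-calibrated population: each person's self-predicted
# percentile is their true percentile plus a modest error (sigma = 12
# percentile points - an assumption - clamped to the 0-100 range).
actual_pct = [100 * i / (N - 1) for i in range(N)]
predicted_pct = [min(100.0, max(0.0, p + random.gauss(0, 12)))
                 for p in actual_pct]

bottom = [i for i in range(N) if actual_pct[i] < 25]   # bottom quartile
avg_actual = sum(actual_pct[i] for i in bottom) / len(bottom)
avg_pred = sum(predicted_pct[i] for i in bottom) / len(bottom)

print(f"bottom quartile: actual ~{avg_actual:.1f}, predicted ~{avg_pred:.1f}")
# Unlike the random-data population (whose bottom quartile "predicts"
# around 50 on average), these people know they are near the bottom.
```

Had Gaze's random data behaved like this population, his counter-experiment would have failed to reproduce the effect - which is the outcome I'd expect from any realistically calibrated population.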
- Manlobbi