
Orton's Arm

Community Member
  • Posts

    7,013

Everything posted by Orton's Arm

  1. Your inability to distinguish between statistical fact and mental disorders makes you entirely unqualified to continue in this discussion.
  2. Regression toward the mean of the error term causes those who obtain extreme scores the first time around to, on average, obtain somewhat less extreme scores upon being retested. This is because those with very high scores on the first test are disproportionately lucky, and those with very low scores are disproportionately unlucky.
  3. Suppose you were looking at a population with 10,000 people with true I.Q.s of 130, 1,000 people with true I.Q.s of 140, and 100 people with true I.Q.s of 150. Any given person has a 10% chance of getting lucky and scoring 10 points too high, or unlucky and scoring 10 points too low. Of the 1,000 140s, 800 will be scored correctly with a 140 on the I.Q. test. Of the 100 150s, 10 will get unlucky and score a 140 on the test. And of the 10,000 130s, 1,000 will get lucky and score a 140 on the test. The people who scored a 140 on the test consist of the following:
     - 800 people with a true I.Q. of 140
     - 1,000 people with a true I.Q. of 130
     - 10 people with a true I.Q. of 150
     If you ask this group of 1,810 people to retake the test, the 800 140s will (on average) get 140s the second time around, the 10 150s will (on average) get 150s the second time around, and the 1,000 130s will get an average score of 130 on their second try. If you average all this out ((800 × 140 + 1,000 × 130 + 10 × 150) / 1,810 ≈ 134.5), you'll see that the group as a whole will score lower than 140 the second time around.
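The arithmetic in the post above can be checked directly. A minimal sketch in Python, using only the population sizes and the 10% luck probabilities stated in the post:

```python
# Check of the post's worked example. The population sizes, scores,
# and 10% up/down luck probabilities are taken from the post itself.
population = {130: 10_000, 140: 1_000, 150: 100}
p_lucky = p_unlucky = 0.10  # +10 or -10 points on any one sitting

# Expected number of people from each true-I.Q. group who score 140:
scored_140 = {
    130: population[130] * p_lucky,                     # lucky 130s
    140: population[140] * (1 - p_lucky - p_unlucky),   # accurate 140s
    150: population[150] * p_unlucky,                   # unlucky 150s
}
total = sum(scored_140.values())  # 1810 people scored a 140

# On a retest, each group averages its true I.Q., so the group mean is:
retest_mean = sum(iq * n for iq, n in scored_140.items()) / total
print(int(total), round(retest_mean, 1))  # 1810 134.5
```

The group's expected retest average works out to 243,500 / 1,810 ≈ 134.5, below the 140 the whole group scored the first time.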
  4. The normal distribution for I.Q.s is (at least supposed to be) centered around 100. 130 is closer to the center of that distribution than 150. Hence, there are more people with I.Q.s of 130 than of 150.
  5. Where did I say the normal distribution "stops" at a certain point? The "natural law of gravity" isn't relevant to a discussion of I.Q. test scores. Even the "super genius" at the top is capable of getting lucky and scoring better than his or her true I.Q.; or of getting unlucky and scoring lower.
  6. I make no apology for the fact that the subset is chosen arbitrarily. In fact, that's a necessary step in achieving the phenomenon I've been describing. The only "regression toward the mean" question that's relevant to the underlying eugenics discussion is this: "Suppose someone scores a 140 on an I.Q. test. How well can this person be expected to do upon retaking the test?" The answer is that people who score 140s on I.Q. tests are more likely to be lucky 130s than unlucky 150s, and therefore tend to do less well upon retaking the test. The same logic applies to any other arbitrarily selected score--individual people tend to regress toward the mean upon being retested.
     As for your suggestion that I set up Gaussian distributions of scores and error, I already did that in my simulation. I did not, however, "integrate over all space," because doing so would provide no help in answering the question of how an individual with a high I.Q. test score would tend to do upon being retested. First I created a Gaussian population. Each member was given an error term based on a random number converted into its appropriate, probability-based spot in a Gaussian distribution. (For example, the number 0.5 would put you at the midpoint of the Gaussian error distribution; the number 0.84 would put you roughly one standard deviation to the right of the distribution's mean, etc.) I wanted to answer the question, "Do people who get high scores on I.Q. tests do equally well the second time around?" Therefore, I only retested those population members whose original scores were above my threshold limit.
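The simulation described in the post can be sketched along the following lines. The specific parameters (true scores distributed N(100, 15), error distributed N(0, 5), retest threshold of 130) are illustrative assumptions, not the poster's actual values:

```python
import random
from statistics import mean

# Sketch of the described simulation, under assumed parameters:
# true scores ~ N(100, 15), test error ~ N(0, 5), retest threshold 130.
random.seed(0)
N, POP_SD, ERR_SD, THRESHOLD = 100_000, 15, 5, 130

true_iq = [random.gauss(100, POP_SD) for _ in range(N)]
first = [t + random.gauss(0, ERR_SD) for t in true_iq]

# Keep only those whose first score cleared the threshold, then
# retest them with a fresh, independent error draw.
high = [(t, f) for t, f in zip(true_iq, first) if f >= THRESHOLD]
second = [t + random.gauss(0, ERR_SD) for t, _ in high]

print(round(mean(f for _, f in high), 1), round(mean(second), 1))
```

On a typical run the retest mean of the selected group comes out a few points below its first-test mean, which is the regression effect the post describes: selecting on a noisy high score over-samples the lucky.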
  7. After the mess you made with the word "binomial," your repeated attempts to tell me that I don't know the definitions of specific words carry no credibility. Stick to arguing with logical concepts, please. As for the phenomenon I've been describing, it's obvious that a score of 140 is more likely to signal a lucky 130 than an unlucky 150. Take a group of people who scored a 140 on an I.Q. test. When the lucky 130s in the group retake the test, they will, on average, get 130s. When the unlucky 150s retake the test, they will, on average, get 150s. Because the lucky 130s outnumber the unlucky 150s, the group's score on the retest will be a little closer to the population mean than it was the first time around. As far as the eugenics angle of things goes, it's true that variation in I.Q. test scores would dampen the effects of a program. A true 140 who got lucky and scored a 150 would be given greater incentives to have kids than a true 150 who got unlucky and scored a 140. The program wouldn't be perfect, but it would be a lot better than the present efforts to improve the quality of the gene pool. "What present efforts?" you ask. Exactly my point.
  8. I think we both agree someone who takes an I.Q. test twice will not get the same score each time. Whether these differences are caused by measurement error or by natural variation in a person's underlying ability to think is not relevant to the mathematical phenomenon I've been describing. I've tacitly assumed that if someone were to take an I.Q. test 1,000 times, the results would be normally distributed and centered around his or her true I.Q. I believe you're working with the same understanding. Given the logic of the above paragraph, it's possible for someone to obtain an I.Q. score higher than his or her true I.Q. (which I refer to as getting "lucky") or a score lower than his or her true I.Q. (getting "unlucky"). For the purposes of the phenomenon I've been describing, it doesn't matter whether the good or bad "luck" is caused by measurement error or by random variation in someone's underlying ability to think. Either way, it's possible for someone who scored a 140 on an I.Q. test to be a lucky 130 or an unlucky 150. Your dice example doesn't refute this; nor does your widget example, nor does your attempt to debate the meanings of words like "variance" and "measurement error."
  9. If you repeatedly roll a die and average the results, you should expect an average score of 3.5. If you were to repeatedly give someone an I.Q. test and average the results, you'd obtain what I've been referring to as the person's "true I.Q." Suppose you wanted to make these two examples similar enough that the concepts learned in the die example could be usefully applied to the I.Q. test example. The "true value" of someone's I.Q. has been defined as how well the person would do over the course of many I.Q. tests. To make the die example similar, you'd have to define the die's "true value" as its expected average over the course of many rolls. When a person's actual I.Q. score differed from his underlying true I.Q., you'd treat it the same way you'd treat a die's actual roll differing from the underlying average roll. It's a confusing example that probably set the discussion back several pages--not least because it created the opportunity for my views to be mischaracterized. The logic of the phenomenon I've been describing is more easily understood in reference to I.Q. scores, or 40-yard dash times, than it is with the dice-rolling example.
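The analogy drawn in the post above can be made concrete: a fair die's "true value" is its expectation of 3.5, and a long run of rolls averages out close to it, just as many retests of one person are assumed here to average out near his or her "true I.Q." A small sketch:

```python
import random

# A fair die's "true value" is its expectation, (1+2+...+6)/6 = 3.5.
# A long run of rolls averages out close to that expectation, the way
# many retests of one person would average out near the "true I.Q."
random.seed(1)
die_expectation = sum(range(1, 7)) / 6
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(die_expectation, round(sum(rolls) / len(rolls), 2))
```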
  10. In answer to the mathematical background question, I've taken classes in calculus and vector analysis. In addition, I've taken stats classes at the undergraduate and graduate levels. In the stats class that was the most rigorous, challenging, and focused on a strong understanding of the fundamentals, I obtained the second-highest final average in the class. Perhaps more importantly, my standardized test scores are good enough not merely for Mensa, but for this organization. When I understand a logical concept clearly--as is the case for the phenomenon I've been describing--Ramius's hooting and hollering does not tempt me to alter my understanding. The only thing that would cause me to reconsider my views is a logical, compelling explanation as to why the phenomenon I've been describing is not something that actually happens. Such an explanation has not been, and cannot be, provided. I've heard of dice, of widgets of unknown distributions, and of other things not relevant to the topic at hand. But in a population where 130s significantly outnumber 150s, a score of 140 is more likely to signal a lucky 130 than an unlucky 150. The implications of this simple fact have been ignored or misunderstood by those who have been arguing against me.
  11. It's appalling that you're so severely unable to understand even the most basic logical concepts. Just about anyone ought to be able to see that a score of 140 on an I.Q. test is more likely to signal a lucky 130 than an unlucky 150. There are simply more 130s available for getting lucky than 150s available for getting unlucky. Therefore, people who get 140s on I.Q. tests will, on average, score somewhat lower upon retaking the test. Your repeated rejection of such an obvious fact merely destroys whatever shred of credibility you had left after defending syhuang.
  12. The bad attitude of the above post is one of the reasons why you're uncoachable. Even if you dropped the attitude, you'd still be uncoachable, because you don't have the brains to be properly coached.
  13. I've written that variation in a person's underlying I.Q. and measurement error can both cause the phenomenon I've been describing. I don't remember trying to persuade anyone that variance and error were the same thing. I do, however, remember you insisting that repeatedly rolling a pair of dice would create a binomial distribution. The following definition is from page 376 of Introduction to the Practice of Statistics by David Moore and George McCabe: If you don't even know what a binomial distribution is, you're not exactly in the best position to pretend you have a better understanding of words like "variance" and "error" than I have.
  14. No I'm not rambling. I'm pointing out that there are differences between the widget example Bungee Jumper described, and the I.Q. test example that's relevant to the underlying discussion. Your inability to grasp those differences merely reinforces the point I'm about to make. Think of a football team whose players lack fundamental football instincts. The head coach tries to teach the players these instincts, but ultimately fails/gives up. Eventually, he has to start teaching them more advanced stuff, to get them ready for the regular season. But all those complex offenses and defenses will never be executed properly, because the players failed to grasp the underlying fundamentals. I compare that to your situation. You lack a firm grasp of fundamental statistical concepts. You don't have the patience, the interest, or quite frankly the brains to remedy this defect. Nonetheless, someone came along and attempted to teach you more advanced stuff. This extra knowledge made you more arrogant, but it didn't correct your underlying problem understanding the basics. You're fine as long as you can blindly follow the methodology you've been taught. But whenever you have to think on your feet, you fail, because you lack a firm grasp of the fundamentals of statistics.
  15. The widget example and the I.Q. test example are different. In the I.Q. test example, you know the underlying population's I.Q. distribution. Based on how individual people score from one test to the next, you have a pretty good idea of what the standard deviation of the error term is. If someone scores a 160 on an I.Q. test, you can confidently say this person is more likely to be a lucky 150 than an unlucky 170, because of your awareness of the underlying distribution. You also know that it's quite possible for a 150 or a 170 to score a 160 on an I.Q. test, based on observation of how people's scores differ from one test to the next.
  16. You obviously know a lot more about throwing insults at people than you know about stats. Then again, you know absolutely nothing about stats, so it's not like I'm paying you that huge a compliment about your ability to throw insults at people.
  17. Ramius, you try so hard, yet are so consistent in coming up short. I almost feel bad for you. Almost. If you could make an I.Q. test without measurement error, and if people's I.Q.s didn't vary based on time of day, the amount of rest, etc., then someone who scored a 140 on an I.Q. test would score a 140 upon retaking the test. Introduce measurement error into the above example, and you create the possibility that someone who scored a 140 on an I.Q. test is either a lucky 130 or an unlucky 150. Of these two possibilities, the lucky 130 is more likely than the unlucky 150, because there are more 130s available for getting lucky than 150s available for getting unlucky.
  18. Yes, it moves towards that person's individual I.Q. score. But if you have a room full of people who scored a 140 on an I.Q. test, there will be more people with I.Q.s of 130 who got lucky on the test, than people with true I.Q.s of 150 who got unlucky. Therefore, the people in the room will, on average, score lower than 140 upon retaking the test. This means that if you have an individual person who scored a 140 on the I.Q. test, the most likely outcome for retaking the test is less than 140.
  19. This is exactly what I've been trying to say for the last 30 pages. Thank you.
  20. A good post. Variation in a person's results from one test to the next can be caused by measurement error, or by the variation in the underlying true I.Q. that you've described. Actual variation between one I.Q. test and the next is probably caused by some combination of measurement error and random fluctuations in underlying I.Q. In terms of the phenomenon I've been describing, it doesn't really matter whether random variation is caused by measurement error or by fluctuation in the underlying true I.Q. Either way, someone who obtained a high score on an I.Q. test will, on average, obtain a somewhat lower score upon retaking the test.
  21. He wasn't being racist. Are you even familiar with Fraggle Rock? It's a show about characters that look like this.
  22. If you want an example of something that's due to error, read your own posts for a change. That should kill off a few of your brain cells, assuming you have any left. I've been arguing--consistently--that if you have a non-uniform underlying distribution, and if you have measurement error, those who obtain extreme scores the first time around will tend to score somewhat closer to the mean upon retaking the test. Your objections to this have displayed only your own stupidity and ignorance. The idea that any objection you're possibly capable of raising would even tempt me to do a flip-flop is utterly laughable.
  23. I think that we're close to being in agreement. What's creating the appearance of difference here is how we're going about describing expected value. I've been taught to think of expected value in probabilistic terms. For example, consider a project with two possible outcomes. Outcome 1 produces $0, and outcome 2 produces $100. There's a 20% chance of outcome 1, and an 80% chance of outcome 2. The expected outcome for the project is 20% * $0 + 80% * $100 = $80. What you're calling the "expected outcome" of the retake, I would call the "median outcome" of the retake. But other than this difference, we're on the same page.
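The two-outcome calculation in the post above, written out as a probability-weighted sum:

```python
# Probability-weighted ("expected") value of the two-outcome project:
# a 20% chance of $0 and an 80% chance of $100.
outcomes = [(0.20, 0), (0.80, 100)]  # (probability, payoff in dollars)
expected_value = sum(p * payoff for p, payoff in outcomes)
print(expected_value)  # 80.0
```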