March 9, 2006

You don't have to be litigious to feel entitled to more.

My heart goes out to the kids who got their SAT scores screwed up:
A day after the College Board notified colleges that it had misreported the scores of 4,000 students who took the SAT exam in October, an official of the testing organization disclosed that some of the errors were far larger than initially suggested.

With college counselors and admissions officials scrambling to take a second look at student scores in the final weeks before they mail out acceptances and rejections, Chiara Coletti, the College Board's vice president for public affairs, said that 16 students out of the 495,000 who took the October exam had scores that should have been more than 200 points higher.

"There were no changes at all that were more than 400 points," Ms. Coletti said. She did not say how many students had errors that big. The three-section test has a maximum score of 2400.
Imagine how much anguish was caused to kids getting such incomprehensible scores and how much they were harmed in the admissions process.
The board said yesterday that it had finished notifying high schools and students about discrepancies. It said it would return the fees the affected students had paid to take the exam and to send the results to colleges and scholarship organizations....

"I hardly think a refund of the test fee will make up for that pain," Mr. Poch said, "and in this litigation-driven society, I wonder how long it will take for a class-action suit to emerge."
You don't have to be litigious to feel entitled to more than a return of the fee. Bring on the lawsuit!

UPDATE: The NYT puts up a second article on the controversy:
The scoring errors disclosed this week on thousands of the College Board's SAT tests were made by a company that is one of the largest players in the exploding standardized testing business, handling millions of tests each year.

The mistakes by the company, Pearson Educational Measurement, raised fresh questions about the reliability of the kinds of high-stakes tests that increasingly dominate education at all levels. Neither Pearson, which handles state testing across the country, nor the powerful College Board detected the scoring problems until two students came forward with complaints.

"The story here is not that they made a mistake in the scanning and scoring, but that they seem to have no fail safe to alert them directly and immediately of a mistake," said Marilee Jones, dean of admissions at the Massachusetts Institute of Technology. " To depend on test takers who challenge the scores to learn about system failure is not good."

48 comments:

Ann Althouse said...

CB: When you're offered ridiculously low compensation, you don't have to be litigious to be outraged. You deserve more; you deserve to be treated decently. To fail to see that is to be a doormat.

Telecomedian said...

I was very fortunate to score a 1280 back in the 1600-scale days. That was enough to get me into pretty much any school I wanted, but may have kept me out of some honors programs, scholarships, or grants. Using basic math, though, a 200-point difference on the 2400-point scale is about 133 points on the old scale, meaning I could have been over 1400. That score *DEFINITELY* would have earned more scholarships and resulted in fewer student loans. Somebody would be paying for that funding differential.
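
Spelled out, the conversion is just a linear rescaling (a back-of-the-envelope sketch; the old and new scales don't really map one-to-one, and the function name here is mine):

    # Rough linear rescaling from the new 2400-point scale to the old
    # 1600-point scale. Back-of-the-envelope only; the real
    # section-by-section correspondence isn't this clean.
    def to_old_scale(points_on_2400):
        return points_on_2400 * 1600 / 2400

    print(to_old_scale(200))  # ~133.3 old-scale points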

As long as some schools continue to base a large portion of their admissions process on standardized tests, these testing agencies need to be held accountable for such mistakes. A 200-point swing is large enough to deny a candidate access to the top schools and the benefits of such a degree.

This could get very, very ugly.

Nick said...

"You don't have to be litigious to feel entitled to more than a return of the fee. Bring on the lawsuit!"

A lawyer and law professor saying that?! I'm shocked... absolutely shocked to my very core... got to keep the market growing after all.

jeff said...

I think that in addition to refunding all fees, they should allow any student whose results were incorrect to pick up to 20 organizations (colleges, scholarships, what have you) to send free copies to.

And pay any re-application fees.

Telecomedian said...

I am firmly with Ann on this subject. CB, I think you're missing the point - it's not about entitlement, it's about earning. These test takers earned a higher grade than they received. This isn't the same situation as "grading on a curve" or grade inflation.

This subject directly affects the education and future earning potential of a group of students. If they were denied entry to their first-choice school over a couple hundred SAT points, it's a very big deal. I worked in my college admissions office as a sophomore, so I've seen such score differentials matter.

200 points is the difference between Cornell and SUNY-Ithica. I'm not saying SUNY-Ithica is a bad school, but it would be foolish to deny the potential that comes from a degree from a famous school.

Let's put it this way - nobody's ever been impressed that I got my degree from Towson State University.

Joe said...

Ann said: "When you're offered a ridiculously low compensation, you don't have to be litigious to be outraged. You deserve more to be treated decently.To fail to see that is to be a doormat."

Please let me do a hypothetical: let's say there's fine print, right there on the test, that says, in effect, that the results are not to be taken as definitive without [sufficient redundant safeguards mentioned here], much like the EULAs that come with software. Then do we blame/hold responsible the schools that use the numbers to make decisions? Or the state governments who make the schools use the numbers by law? The employers that use the schools' reputations to make hiring decisions? Their CEOs? Do we blame the *students* who use the numbers to decide what schools to apply to?

How exactly do these numbers determine the student's future anyway? Do they have more of an effect than, say, the student's parents or friends or even teachers? Shouldn't we hold those people even *more* responsible for the student's educational opportunities? At this point it seems my hypothetical doesn't change anything: shall we sue all of the above?

Ann, I think *that* is what is so upsetting about our litigious society. It never seems to end.

Laura Reynolds said...

My tenth-grade daughter recently took the PSAT, and based on her score she is likely to get a very high score on her SATs when the time comes. Along with everything else, that would open up significant life-impacting opportunities.

Yes, these people should be held accountable; they have carved out a huge role in the education system, and that carries a lot of responsibility. You are only at this point in life one time.

Ann Althouse said...

CB: I definitely think "litigious" has a negative connotation, but I didn't think "entitlement" had one!

Jennifer said...

Think, also, about the students who didn't even bother to apply to the schools they really wanted to go to because they felt their SAT scores were too low. When in fact they weren't. Now it's too late for them.

How do you adequately remedy that?

Brent said...

I agree that the College Board owes big time to the wrongly-scored test-takers. For many students, it is the Holy Grail for ultimate success: entrance into the "Best" University.

We all "know" from real-life that the "best" colleges and universities practically guarantee a "successful" life/career. But it is also true that several focused studies show that for the determined student, actual college choice ranks low in his/hers actual reasons for success.

Jay Matthews (education reporter for the Washington Post) and Diane Ravitch (Brookings Institution, Hoover Institution) have both studied the personality/determination/college matrix and found that we place way too much emphasis on the importance of "Ivy" schools, for example.

And with the current situation at Harvard, it's not impossible to imagine it losing its exalted #1 luster over, say, the next 5 - 15 years.

My daughter (SATs 1360), accepted at NYU and planning on it for a music/business career, found a closer-to-home college, Chapman University in California, with a higher "success" rate for its music/business graduates -- in both prestige of placements and first-year earnings -- than NYU's.
In fact, more Chapman opera graduates actually end up at the Met in NYC than NYU's! While this does not denigrate an NYU education, it does give pause to think about the "whole education-career" path emphasis we all seem to agree on in America today.

Jim Gust said...

The larger problem is that this error was discovered by accident, and we still don't know how it happened. It wasn't as simple as marking a correct answer for a given question as incorrect, it appears dependent on the scoring technology, and in an unpredictable way.

What makes us think that any of the other SAT tests were any more accurate? Or that the SAT IIs are free from error?

I've long wondered how the College Board has escaped regulatory scrutiny by the government.

But as to the lawsuit, the damages look pretty speculative to me.

Mom said...

There is already at least one attorney looking for clients for an action against the College Board on www.collegeconfidential.com, a discussion site for college admissions. There are also affected students who are reporting in that forum that the discrepancies on their scores are greater than the discrepancies to which the College Board admitted.

So, the lawsuits are on their way, and rightly so -- but I agree with jim gust that damages are going to be awfully speculative. I don't doubt that the affected students were hurt by this, but how, exactly, can the injury be quantified? Even a 2400 SAT score doesn't guarantee admission at the more competitive schools nowadays. And even if it did, it isn't clear that attending a more competitive school confers much of an economic advantage. There's at least one study out there suggesting that students of Ivy League caliber who choose to attend less prestigious colleges do just as well as the Ivy League cohort does in later life.

Freeman Hunt said...

They should definitely be compensated for more than the fee.

When I went to college, I had a scholarship that was entirely based on my SAT score. Had my score been 200 points lower, I would not have qualified for that scholarship and would have been out over $16,000 per year. That is more than $64,000 over four years of education--ouch!
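
The arithmetic, spelled out with my numbers above (a trivial sketch, but it shows what rides on one scoring error):

    # Cumulative value of a score-contingent scholarship over a
    # four-year degree, using the figures from this comment.
    per_year = 16_000
    years = 4
    print(per_year * years)  # 64,000 dollars riding on the score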

That is significant damage even without taking into account missed admissions and the possibility of narrowed future opportunities.

Unknown said...

I agree with Ann. They hold themselves out as the arbiters of college ability and then cannot even score the test correctly. The damage done to kids and their parents, in terms of tuition, scholarships, and future earnings, is huge.

I would love to see a lawsuit. Discovery will be illuminating, to say the least. I can't wait to see how much the executives make, how they farm out scoring to low bidders, how they ensure accurate grading (especially for the essays), if they do at all.

Freeman Hunt said...

I don't understand why they added the essay section. College applications usually require an essay, so why was this needed?

No matter what system one devises, essay grading can never be as objective as the regular multiple choice verbal and math sections. I thought that the whole point of the SAT was to be the objective test part of the college admissions process.

reader_iam said...

I absolutely agree with the substance of this post; too much rides on SATs, especially for higher-achieving students in competitive situations, for this to be sloughed off as an "oopsie"!

The College Board bears some responsibility in redressing this wrong for the students who earned the higher scores, and I think Jeff's proposal is a good start.

That said, Ann, I must respectfully disagree: entitlement does indeed have a negative connotation and, frankly, perhaps even more so than "litigious" (probably because the overly developed sense of entitlement running rampant through our society has led to the overly litigious environment).

That's my take, anyway.

Mom said...

Freeman, one reason I've heard for the addition of the essay section on the SAT is that many students today get extensive help with their application essays. Some are even using ghostwriters. Thus, the colleges have no idea whether the polished essays they receive have anything to do with a student's true ability to write.

The SAT essay, on the other hand, has to be written under testing conditions and with no help, so it should provide a truer glimpse of the student's capacity. I have also heard, though I'm not sure it's true, that admissions officers have the ability to look at the SAT essay itself and not just the scores. This would allow them to make their own assessment of the caliber of the student's writing.

I predict that the College Board fiasco will prompt more colleges to announce that they will no longer require SAT scores from applicants. Several well-known schools have already done this -- Mt. Holyoke, Bates, Sarah Lawrence, and others. Maybe they already knew something the rest of us didn't know about the reliability of the SAT scoring system!

howzerdo said...

Telecomedian,
Not a big deal, and it doesn't have a lot to do with your point, but as a SUNY alum, former system administrator, and current instructor at a campus...I knew knowing all 64 campuses by heart would come in handy. There are four state-operated colleges located right at Cornell, but their admissions do not differ from Cornell's private side, and they are not called SUNY Ithaca. There is also Ithaca College - a private school in, where else, Ithaca, NY. But SUNY Ithica? There is no such SUNY campus, as far as I know.

On the issue of the SAT score mistakes: I know I would have been very, very upset when I was applying to college if this had happened to me. On the other hand, I did well on the test, but my scores played very little role in my applications; other factors influenced where I went to school (including finances and distance from home). But I have never regretted my choices and I love my alma mater. The same may not be true for students today; tuition is enormous and financial aid seems to be more of a factor for most students. As in all things, I am left with the same feeling, though: "Chill out." People (and organizations) make mistakes. Perfection is a myth. I would never litigate, and I never feel entitled, because we quite simply aren't entitled to much of anything. Not that the College Board shouldn't own up to the errors, take responsibility, and do whatever they can for the students who were harmed; they most definitely should, because that is the right thing to do.
Gina (I decided to change from being one of the several posters using "me" as an identity)

Balfegor said...

The SAT essay, on the other hand, has to be written under testing conditions and with no help, so it should provide a truer glimpse of the student's capacity.

What I've heard is that actually, your score on the SAT essay corresponds best with how long your essay is. "Well-written," after all, is kind of subjective, once you get over basic hurdles like correct usage and proper grammar. It might be useful for the admissions people, I suppose, at least as far as technical writing skills go. A lot of the kinds of errors that would show up here -- structure, grammar, spelling, etc. -- would be cleared up in editing in a real piece of writing, so it's not really representative, I think, of the ability to write, just the ability to write a fairly clean first draft.

I absolutely agree with the substance of this post; too much rides on SAT's, especially for higher achieving students in competitive situations for this to be sloughed off as an "oopsie"!

I think that outside of set-formula situations, like the University of California (I think), and some scholarship programs mentioned here in the thread, the SAT score probably is not that important. Especially for high-achieving students, who will probably have plenty of other indicators of their academic potential. I think it's probably most important for marginal admits, where a low SAT score (e.g. down to 1000 from 1200) looks more like low-potential. Possibly also for lower-achieving students who nevertheless achieve brilliant SAT scores. A B- or C student who gets a 1500 looks considerably more impressive than a B- or C student who gets a 1300, so the lowered score may cost him a crucial second-look by the admissions committee.

As far as getting into Harvard or MIT or what-have-you, I suppose the high-achiever may be injured. But probably not as injured as he thinks.

Greg D said...

Um, folks, don't most people take the SAT as Juniors?

Which is to say, they'll be applying next year, and this doesn't affect them at all (other than scaring the daylights out of them)?

Now, some of those people were probably Seniors, taking the test a second time in the hopes of getting their scores up. This certainly could have harmed them.

What I'd like to know is how long they've known that the differences were greater than they originally said. Everyone who lied about that (if anyone did) must be fired. If they don't fire those people, the company needs to be sued, and lose, since letting those people stay is an endorsement of their actions.

Mom said...

Greg, the problem was with the scoring of the October 2005 test. Among the students I know, at least, juniors generally take the SAT in the winter or spring. The fall administration is mostly for seniors who want to get their junior-year scores up. I'm sure some early-bird juniors also take the test in October, but the newspaper interviews I've seen with affected students were all with seniors.

balfegor, I saw the same article you did about longer essays getting higher scores on the SAT writing test. I showed it to my son before he took the test, too!

MadisonMan said...

As far as getting into Harvard or MIT or what-have-you, I suppose the high-achiever may be injured. But probably not as injured as he thinks.

Or she.

Craig said...

Just to be contrary: a big class action against the test administrators will merely result in the costs of the awards, and of additional protective measures, being passed along to future test-takers.

Assuming that there are some students, especially lower-income students particularly concerned with financial aid, who will be unable or unwilling to take the test twice at the higher cost, any lawsuit is not so much about compensating these test-takers as it is about choosing between the current students and the students of the future.

The costs of the service of eradicating errors, as with most everything else, increase the more you consume, just as the benefits decline. Common sense tells us, then, that there might well be a point where the "proper" number of grading and scoring errors is some positive number. If that is the case, then query whether a lawsuit is good sense.
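
A toy version of that marginal argument, with invented numbers (nothing here is calibrated to real grading costs; it only shows how the cost-minimizing error count can be positive):

    # Invented cost curves: preventing the last few errors gets very
    # expensive, while each error allowed does a fixed amount of harm.
    harm_per_error = 100.0

    def prevention_cost(errors_allowed):
        # Cost of driving errors down to this level; blows up near zero.
        return 1000.0 / (errors_allowed + 1)

    totals = {e: prevention_cost(e) + harm_per_error * e for e in range(10)}
    best = min(totals, key=totals.get)
    print(best, round(totals[best], 1))  # the optimum tolerates a few errors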

Nerdstar said...

My roommate used to work for Pearson Educational Measurement. There have been a couple of times they messed up students' scores on state standardized tests for high schoolers. These resulted in successful lawsuits. I'd expect the same to be true this time.

In scoring the 2005 online tests for Virginia, Pearson made errors that resulted in 60 students being told they had flunked tests that they actually passed. The company offered scholarships worth $5,000 to five seniors whose graduations were affected.

In 2000, a Pearson scoring error caused 8,000 Minnesota students to flunk and led to a $7 million settlement.

Balfegor said...

But it's like the posters above have said: you get this one shot when you are 18.

More likely 17 for most students, I expect. And a nontrivial number of students get to take it multiple times -- I took it at 10, 13, and 15, and my siblings have both taken it more than once, though I forget how many times.

It can help a lot of kids show that they CAN do college-level work, even though their high school grades were bad or their high school was bad, or whatever

I'm not sure about this. This might be the inference they draw from a decent performance on the SAT, but as you suggest (re: correlation with success in college), it's probably not a correct inference. If it gives them confidence and they're otherwise prepared though, I suppose that's a good thing on balance.

The most egregious (known) case was a kid who took The Princeton Review and got something like a 400-point score increase.

You teach SAT courses, so I expect you'll disagree, but I think those courses are rip-offs. I bet he'd have got the same benefit from just sitting down and doing a few practice tests -- get a sense of what the problems are going to be, and calm his nerves.

Balfegor said...

Nothing wrong with Illinois, but a degree from Princeton is a ticket, forever, in our society.

Oh, how I hope so . . . haha. I didn't go to Princeton, but some close relations are there now. And of course, one always worries no matter where one's kin are. And the reason I worry is that Princeton alone is not the meal ticket. You still have to work and do well to transform that Princeton degree into real success later on down the road.

Balfegor said...

My scores at 10, 13, and 15 were 1310, 1480, and 1600 respectively. I'll stick with doing practice tests on the dining table, thanks.

(I did, however, do Bar-Bri to prep for my bar exam . . . so I suppose I can't be too high up on my horse here. My excuse will be that Bar-Bri covers substantive material, and the California Bar is much, much scarier)

Balfegor said...

people are brainwashed by the idea that ETS tests are uncoachable.

I don't think they're uncoachable -- there are test-taking strategies (even dumb ones -- my favourite is to total up all the letters from my answers, and then fill in the questions I don't know with the letters I have too few of. I did that on the MBE and hey! I passed!). And time management, knowing how to narrow down choices, knowing when to just guess, and so on, are all helpful.
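
That dumb trick is mechanical enough to write down, if anyone cares (a toy sketch; the names are mine, and I'm not claiming answer keys really come out letter-balanced):

    from collections import Counter

    CHOICES = ["A", "B", "C", "D", "E"]

    def fill_blind_guesses(answers):
        # answers: chosen letters, with None for questions left blank.
        # Fill each blank with whichever letter is currently least used,
        # on the (shaky) theory that answer letters are roughly balanced.
        counts = Counter(a for a in answers if a is not None)
        filled = []
        for a in answers:
            if a is None:
                a = min(CHOICES, key=lambda c: counts[c])
                counts[a] += 1
            filled.append(a)
        return filled

    print(fill_blind_guesses(["A", "B", None, "A", None]))
    # -> ['A', 'B', 'C', 'A', 'D']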

I just don't think they're worth $1100 of training from Princeton Review or Kaplan or whatever, when you can probably get the basic strategies from books in your local bookstore, or even your local library. It is not a hard test. It is not even a particularly tricky test. It is certainly not a substantive test, unless you don't know how to read, or don't know 8th grade math. The test-taker just needs to focus, have good basic skills, and not make careless errors.

----

That said, on an almost unrelated note -- as an experienced test tutor, how would you tutor a child who's having difficulty with just the basics of mathematics? Do you get students like that?

I participate in a program where we tutor young schoolchildren in basic subjects, and what I find is that my student just cannot do basic math if it's a word problem. She sees two numbers and just guesses what operation she's supposed to do. Or actually, she just adds them together. It's like she has no grasp of the concepts behind the operations.

Any tips on how to get around this?

Balfegor said...

Finally, I do have some really good ways to teach math. Going into it here would be absurd. You are going to laugh, but there is a helluva book out there called Math Smart (and Math Smart Junior), by The Princeton Review, natch.

I'll take a peek next time I'm at Borders. Thanks.

Eli Blake said...

I'd like to defend the SAT.

First, errors of this type are very rare.

Second, they release every question on the test several months after the last test is taken, unlike a number of state-mandated tests, including some with very high stakes (like our Arizona test that students have to pass before getting a diploma). Those questions are quietly tucked away, and if some were in error, they are never examined.

Third, the SAT's makers have freely admitted when they have made a mistake. I know that in the case of our state-mandated test, when they have made a mistake, it has been dragged out of them kicking and screaming, and then they are reluctant to do more than adjust the scores in their own secretive way and ask people to 'trust' them on this.

Yes, the SAT screwed up, but they are still light years ahead in the area of openness, and on something this important, I think that the students and others involved deserve openness.

Beth said...

the kind of people who would agree to grade essays on an assembly line

Ouch! That just sounds shady. I haven't done one of these grading marathons, but a number of my colleagues do them every year. I have no clue what kind of people you think they are, Seven, but I can tell you these folks are competent instructors who don't make enough money and do what they can to make ends meet. There's really no call to malign them.

Abc said...

I'm with Elizabeth and Eli Blake here. I am of the opinion that the SAT is quite good at weeding people out, and I think that that's how most schools use it. For example, there is a reasonably decent correlation between SAT scores and intelligence. I would venture to guess that, just as on the LSAT, the correlation is quite high at the lower end of the scale and not so high at the higher end of the scale.

To clarify some things about the third section: The essay portion of the SAT is part of the new writing skills section, which is what the old SAT II Writing used to be. Since many colleges required the SAT II Writing in addition to the SAT, anyway, ETS decided to make it a part of the SAT.

As to the person whose daughter chose Chapman over NYU for music business... That is certainly her choice and I wish her the best of luck. While that may be a good strategy (looking at career numbers) for choosing a graduate or professional program, it's a terrible way to choose a college. How the hell are you supposed to know when you're 17 or 18 that the thing you want to study then is exactly what you want to do the rest of your life? What if your daughter decides to study, say, philosophy in her junior year? Then she'd probably get a pretty third-rate philosophy education at Chapman compared to NYU's. Sure, the engineering school at Purdue is substantially better than the one at Harvard, but what if you decide that mechanical engineering is not for you and you want to study history, or even physics, instead? Oops, you're stuck at a pretty mediocre place.

Finally, if a person doesn't know how to do the very basic math on the SAT Math section, they DON'T belong at Stanford. Kids in other countries would probably laugh at the SAT Math section. Sorry, but the problems are just not that tricky. They really aren't. And if you're one of those garbage-in, garbage-out types who can't compute 50% off at Macy's without busting out the calculator, then, again, you deserve to be tripped up on the SAT and not do well.

Having said all this, what happened here is absolutely horrendous. I'm still curious exactly how this error happened. I hope that there actually is a lawsuit, so that there can be discovery and we can learn what occurred. ETS should be held financially liable. But let's not pile on the SAT as this horrible thing and start exaggerating how Baltimore County Community College is a better school than Harvard.

Balfegor said...

The SAT is a seriously flawed instrument. It should not be used in our meritocracy. We can do better.

How?

High school grades are, frankly, next to worthless, because once you get into the AP or Honours classes, you get A's or high B's pretty much just for showing up, and they're not commensurable between schools. Teacher recommendations are, with a few exceptions, going to be pretty much all the same: gushing. An interview isn't going to get you a good sense of a student's potential, just their polish. At the moment, the SAT (or the ACT) is the only commensurable measure we have for student potential across the population.

The blog Number 2 Pencil is run by a psychometrician, who's considered the problem several times (though not in the most recent posts re: this SAT scoring error). The SAT is a flawed instrument, but it's by far the best instrument we have. If we had more psychometricians, and more funding to run larger samples and tests (to improve cross-population consistency, etc.), we could surely produce a better one, but the number of trained psychometricians we produce each year, in our country, is apparently measured in the single digits. We don't have the men, and we don't have the money.

My thought is that if we adopted a university examination system like the Bac or the Juken, the way other civilised countries have done, we could avoid this. I don't know about the Bac, but the Japanese and the Korean universities have entrance examinations testing substantive materials. They're susceptible to training (hence juku and yobikou), and people spend years cramming for them (sometimes more than one -- an uncle of mine failed the first time, and retook the next year, to get into Seoul National), but they are more comprehensive tests than the SAT, and they test the real preparation of the student, not just his ability to do simple math problems and read sample passages. But they're much, much more grueling than the SAT. And there's no way our universities would go to the time and expense of developing that kind of examination, when using such an examination goes so strongly against the admissions philosophy they were preaching for most of the past century (and would almost certainly exacerbate the racial and class inequalities in higher education). And there's no way middling American students could be expected to do passably well on such an exam -- not when so many end up taking remedial composition or remedial algebra in their first year at University. It would not work.

Telecomedian said...

Gina -

I mentioned SUNY-Ithaca because it is a fictional school. I didn't want to offend anybody who went to a real SUNY. When I transferred to Towson, I seriously considered SUNY-Oswego. I decided I'd rather be a deejay than a TV weatherman, so I went to Towson.

An old girlfriend went to Cornell, and her fellow med-school students called it SUNY-Ithaca just to shine her on.

Balfegor said...

And why do you need PSYCHOMETRICIANS to design a test about whether someone can succeed in college? You WANT substance-less tests to determine futures?

You want them for substantive tests too, ideally. They're there to determine whether the testing construct is consistent across different test-taker populations. This is an issue in test design even when the test construct is supposed to cover substantive material -- test design effects can make such a test highly unreliable.

If the SAT turns out to be crap, well, there goes the one "concrete number" these people can point to showing how much smarter they are than the run of humanity.

No, there's still IQ, obviously. And the SAT is what, 70%, 80% correlated with IQ results? So it's more or less the same thing. There are loads of other tests you can take if you want to get into MENSA -- I don't think they even use SAT scores, do they?

And for me, bwahaha~, I can always just brag about my LSAT or non-subject-matter GRE score, since they test pretty much the same thing, if I'm bent on boasting of how much smarter I am than the common run of humanity.

That said, in an earlier thread on this blog I've said -- and I believe -- that the SAT score tests mostly concentration and focus, not "intelligence" in any meaningful sense.

Why is it necessary to have a test like this at all?

Because, as I said above, grades are worthless. In more detail, this is because they are a) not comparable between schools, b) not comparable between teachers, and c) not comparable between classes. As I mentioned above, grading systems also (in my experience) tend to over-reward people in honours or AP classes, because practically everyone gets an A, regardless of their work. Teachers' recommendations are unreliable in the same degree; the most you'll weed out is the person who cannot manage to find two teachers to give him a decent recommendation. A writing sample can be so heavily massaged by a writing coach as to be wholly unreliable as an indicator of the student's actual writing ability. The SAT, or a standardised test like it (like the ACT), is administered under controlled circumstances and provides a regular metric with which to compare students from vastly different backgrounds.

I think people who do well on the SAT desperately need the SAT to be legitimate because it's SUPPOSED to be a test you can't study for

Or rather, because it is a test you don't have to study for. And that's not an advantage for smart people (or grinds) who'd mostly study anyway if they thought it was necessary, but an advantage for everyone else.

For most people, almost any substantive test is going to require some amount of cramming, because unless it tests the bare minimal material (like the ACT), it's going to cover some material that you may not have had emphasised in your own studies. And cramming is going to create much, much more of a class disparity than the SAT. And people are going to cram no matter what.

You think this is a problem?:

The problem is with the rich 520 kid who can pay a grand or two to get to a 640 and the kid who lives in a single-wide who can't.

How on earth would that be remedied by a substantive test, where the rich child has been able to attend a private school and get a decent education, and the child in the "single-wide" has spent his life going to one of our execrable public schools? Especially when the rich kid is then going to go to a special cram-school that will spend a year specially teaching him the material on the test.

I'm in DC right now, and we pump those public schools full of money only to produce children who, on average, on any substantive exam, are going to flunk as badly as it is possible to flunk. They don't do well on the SAT either, but they will do better on the SAT than they can hope to on any substantive exam, because they are more likely to have at least the minimum skills necessary to give it a try.

The idea of kids taking the SAME test, when they are PREPARED radically differently, is about the dumbest thing ever. I am frankly surprised that all you brilliant people with such high SAT scores can't see this.

Part of this, I expect, is that the test does not seem hard or tricky to me at all, whereas you think it's a tricky test and seem to take the tack that getting a good score (or improving a mediocre score) is largely about navigating those tricks. Apart from a few elementary test-taking strategies, I never got this impression, and as I've said, I took the test three times.

I don't think I was ever conscious of a test question being especially tricky. They certainly weren't difficult. If you sat down with each question taken in isolation, most students (capable of reading and capable of 8th grade math, mind) would be able to get it right. It's just that the test is several hours long and administered in a high pressure environment (because they all think it determines their futures). And it's timed, so if you're slow at math or reading, say, or have difficulty remembering things, you're at a major disadvantage because you will have little time to go back and correct your errors.

Now, since I never did any specialised test prep course, it may be that I was taking the long-way around for all the questions, and there was a shortcut that could have made it all much easier. Well -- fine. I can't claim to know all the ins-and-outs of the test's design, after all, since I played no role in designing it. But you don't need those shortcuts. You can do just fine without them. So the fact that some rich twit has paid out $1000 to get Princeton Review to teach him the shortcuts doesn't really seem a massive injustice to me. It gives him a boost, but at least you can still compete.

Balfegor said...

But if some people pay to prepare for it and others don't, the desired leveling effect is not achieved. So, it's not really rectifying any injustice, is it?

Are you saying there is no rectification whatsoever? I mean, if that's your argument, then sure, that's a fine reason to scrap it. But it's a human instrument. To expect it to provide 100% levelling is unrealistic -- if it just makes the grade a bit less steep, that seems like a pretty significant accomplishment. And one which a more substantive test, like the ACT, would reverse.

After all, at least as I see it, the greatest inequality between the top private schools and urban public schools is that urban public schools don't equip their students with substantive knowledge.

Your argument about grades is utterly superfluous.

How? The point is

1) We want a uniform measure, and
2) We want one that lets people who got piddly educations compete against privileged people who got posh educations.
3) Grades and essays aren't that measure, so
4) We need a test. And
5) A substantive test is going to privilege those privileged people just like school did, because it's just like a substantive test in school. Ergo
6) It needs to be nonsubstantive, and
7) Assume at most a common core of very basic competence (e.g. competence at the junior high level), because
8) Doing one of those tests with the shapes and whatnot is too weird.

You may disagree with the argument, but how hard can it be to understand?

I.Q. is a rudimentary, primitive, and justly criticized measuring device.

Sure, now the people studying intelligence use g instead. But they're all correlated anyhow. I have perfect pitch too, lalala.

What, then, is it good for, other than stroking the ego of people who are intelligent?

I've been taking this all in good humour, but you seem really stoked up about high scorers reaping psychic rewards . . . has someone been belittling you for your SAT score? :)

Balfegor said...

If the test correlates with intelligence HOW IS THIS POSSIBLE EVEN ONCE, let alone commonly?

This would probably be because most people don't take a test prep class. I knew some people who did, but most people at my school did not, to my knowledge. It may be different in different places, and in different communities. It may even be different now, almost 10 years on. But my sister is going through the college admissions thing right now, and I don't think so. At least, not in our town.

To the extent that there are predictable patterns in a test, you can trick the test by memorising those patterns and spitting them back. That's a defect of the testing construct, yes, but it's not a crippling defect unless everyone taking the test is taking it by memorising those patterns and avoiding the testing construct. This is not the case.

To use a slightly different analogy, suppose you have an exam that tests your aim and your reflexes by popping up targets and having you shoot them. Now, these results will be well correlated with your actual aim and reflexes. But if there's patterns to the way the targets come up, and you know that, you can memorise those and anticipate (rather than react to) the next target location, and get a better score. So for you, the test will be less well correlated (although it will still test your aim and reflexes in some degree).

With the SAT, such an effect is already evident, since test-takers are supposed to experience an average gain of 100 points when taking the test a second time anyhow. So acclimation, or test-specific learning of a sort, is going on. And the SAT is a noisy test too, in a sense, because your score can be significantly affected by repeated careless errors.
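
To illustrate what I mean by "noisy," here's a made-up toy (every figure invented, nothing calibrated to the real test; assume each careless slip costs about ten scaled points):

    import random
    random.seed(0)

    # One person, one true ability level, many sittings: careless slips
    # alone move the final number around.
    def one_sitting(true_score=1200, questions=120, slip_rate=0.03):
        slips = sum(random.random() < slip_rate for _ in range(questions))
        return true_score - 10 * slips

    scores = [one_sitting() for _ in range(10_000)]
    mean = sum(scores) / len(scores)
    spread = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    print(round(mean), round(spread))  # roughly 1164 with a ~19-point spread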

All the supporters of the SAT end up saying the same thing as you: "well, ahem, I did really well on the SAT and I didn't take a class." The subtext is egregiously there: "I am better." It's very tiring.

Uh, no, the subtext is "you don't need to take a course to do just fine." If it were a test where you needed to do prep to do well unless you were a genius, that would be one thing. I mean, if it were like the Juken or the Bac, where you do need to do prep. Or the Bar. But you don't. It's like the LSAT, not the Bar.

Who writes the questions? Your vaunted psychometricians (MEASURERS, by the way, not testers)?

Er, yes . . . "metrician" would tend to suggest that "metrics" and "measurement" are involved. I mean, that's kind of what the tests are for, no? Psychometricians are involved in test design with the state achievement tests introduced under NCLB too. I don't think they actually write the raw test material, though, just cross-check the consistency of the test construct.

---

That said, I've been proceeding here on the assumption that Princeton Review and Kaplan really do help students significantly, which is something I don't particularly believe, because I believe those programs are bilking rich marks with low scores. Of the four or five students I knew in high school who attended those programs, at least two did see their scores increase when retaking the exam (I never knew the scores for the others), but the increases were on the order of 100 or 150 points -- not particularly marked, given that the test itself is already rather noisy. Given that in the three or four practice tests I did before taking the actual test, I typically posted gains of the same order between my first and final practice tests, with a lot of variability in final score, I thought they'd just thrown away their money. Or their parents had, rather.

If you can consistently get 400-point increases or whatever out of your pupils, then bully for you. It wasn't that way for the people I knew who went to SAT prep cram school.

Balfegor said...

Actually -- there is something I don't understand in your (Seven Machos') position:

You object to the SAT. The SAT is flawed, clearly, because if you're coached, you can get a better score.

And then you say the ACT is superior.

But what, am I supposed to believe coaching won't get you a better score on the ACT?

I'm certain it will, and that the effect will be even more pronounced than on the SAT, since the ACT is designed with a greater knowledge component.

As I hinted above, I wouldn't actually mind if our admissions system switched to real substantive admissions examinations, along the lines of the systems in France, Japan, Korea, Germany, and even England -- aren't the O-levels and A-levels subject-matter examinations? But we (or more particularly, college administrators) realise perfectly well that such a system would, given the atrocious state of our nation's public schooling, lock most of the urban and rural poor out of higher education. It would also privilege the rich, who are able to afford cram school for their children, even more than they are already privileged. And that's an unacceptable result -- for them at least, though I confess it doesn't distress me nearly as much, because I am a heartless sort, and think public schools should have their failure rubbed in their faces to destroy the power of the teachers' unions etc. etc. and increase public pressure for a complete overhaul of pre-college public education. But I digress.

So we try to test "potential" rather than actual achievement, which is why we get a largely non-substantive IQ test like the SAT. We can try to make the SAT less subject to gaming -- in fact, psychometricians try to do this year by year, along with attempting to remove sources of unfair bias in test construction (e.g. the famous "regatta" question). But I don't see why the unfair privilege of the rich is an argument to switch to a test that, while predicting success rather better, because it is substantive, is going to privilege the rich even more.

Mom said...

This comment thread has gotten pretty old, but I just stumbled over a new article on the SAT scoring error that might interest those who have been following the story. Now the company is saying that rainy weather in the Northeast during the week of the disputed test may have caused the problem. Part of the article is below. The rest is here:

http://www.chron.com/disp/story.mpl/ap/nation/3713987.html

The company that scans the answer sheets for the SAT college entrance exam said Thursday that wet weather may have damaged 4,000 tests that were given the wrong scores.

Abnormally high moisture content in some answer sheets caused them to expand so they could not be read properly at a scanning center in Austin, Texas, said Pearson Educational Measurement.

The affected test day, Oct. 8, coincided with the beginning of a week of heavy rain in the Northeast, where most of the tests came from. Rain that weekend forced hundreds of people to evacuate their homes. As much as 10 inches fell on New Jersey.

"When there's moisture in the paper, it actually grows," said Pearson spokesman David Hakensen. That causes the ovals students fill in "to move just slightly, enough so that it will be out of registration for the scanning head to read the answers."

Balfegor said...

Well, balfegor, is it fair to say that you don't even know what is on the test that you are so vigorously defending?

They added an essay section, and they fiddled with the verbal section since I took it. But other than that, I don't think they've changed it dramatically.

The ACT actually tests what you learn in school. It is a test of substantive knowledge. It is harder -- and a lot more boring -- to coach, though certainly possible.

You've, uh, never heard of cram school? I guess we don't have them as much here in the US, but back in Korea and Japan, they're a regular feature. You know that to get into a good high school, you're going to have to pass a barrage of subject-matter exams, so you go to cram school. You do the same thing (but oh so much more) for the university exams. It's like with Bar-Bri and the Bar exam. It's more boring to coach, and probably more expensive, but if Japan and Korea are any guide, we'll see a lot more coaching, it will be rather more expensive, and it will exacerbate the performance gaps we already see.

In the nation's best private schools, a very, very high percentage of kids prep.

I didn't go to one. I don't think most people go to the best private high schools, really. You're talking about, like, Groton and Phillips Exeter and whatnot?

Test-takers absolutely do NOT gain an average of 100 points when they take the test a second time.

You're right; the study I'm thinking of apparently said only 43 points (from 1998). And looking at more recent literature, it sounds like even that is probably an overstatement. Looking here, I'm seeing a study that seems to show average gains on the order of 30 points from coaching.

Where do you get this crap?

The news mostly. Here, I think, is the IQ-SAT correlation paper.

Abc said...
This comment has been removed by a blog administrator.
Abc said...

Seven Machos,

Where do YOU get this stuff? Do you even know what correlation is? As in, what it means mathematically? It's obvious you don't. A correlation of 0.7 or 0.8 means that roughly half to two-thirds of the variability in SAT scores can be explained by variability in IQ (variance explained is the square of the correlation, so 0.49 to 0.64). This is over large populations. So yes, a particular kid with an SAT score of 520 may be no smarter than a particular kid with an SAT score of 460, but overall, there is a good chance that he is. Second, even then, that difference could be explainable by other factors besides IQ (hence the remaining third to half of the variation not attributable to IQ).
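
If the numbers help, this is just the definition, nothing empirical:

    # Variance explained is the square of the correlation coefficient.
    for r in (0.7, 0.8):
        print(f"r = {r}: r^2 = {r * r:.2f} of the variance explained")
    # r = 0.7: r^2 = 0.49 of the variance explained
    # r = 0.8: r^2 = 0.64 of the variance explained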

As for the ACT, you are missing balfegor's point. Yes, it tests what some kids in lily-white Minnesota public schools learn in school. Why don't you go visit Harlem and see if the kids there learn that stuff. The ACT is just as easy to coach; it's just that the industry isn't as big as the one for the SAT, since the ACT is used almost exclusively by small midwestern liberal arts colleges and midwestern state schools.

Oh, and if the ACT is a better test, why is there an extremely high correlation between scores on it and those on the SAT? In fact, colleges even have little converter tables for students who only report ACT scores.

Oh, and as for your response to me about the slight variation in scores between, say, 720 and 660: you again aren't looking at the actual percentile numbers. Scores between 660 and 720 are much more compressed; in fact, the number of questions you need to get wrong to drop from 720 to 660 is usually higher than the number needed to drop from 780 to 720. The latter drop can occur from a difference of just 3-4 questions; the former takes about 6-8. So the difference is somewhat big when there are only about 55 questions on the entire math section, or 70 questions on the entire reading-skills section. And by the way, people applying to Stanford or most other prestigious schools don't get weeded out based on getting just a 720 on some section. Most schools use SATs as minimal cutoffs, and for the prestigious ones, those cutoffs tend to be in the low 700's. See a sample scoring sheet here:

https://satonlinecourse.collegeboard.com/digital_assets/pdfs/eri/scoring_2005-2006.pdf

Yes, of course it would be a shame if the cutoff were, say, 700 and someone didn't get in because of a 690. But a) it doesn't quite work that strictly, and b) cutoffs are cutoffs for a reason. It would be equally sad if the GPA cutoff were 3.85 and someone had a 3.83 (a difference of maybe just one grade in one semester).

Finally, remember, I didn't say that ETS should not be held liable. Of course it should be. But just as we don't stop arresting people simply because some people get wrongly arrested, we shouldn't stop using the SAT because some paper expanded due to high humidity and a relatively small sample of students got their scores messed up. Yes, ETS should be held liable, and I feel horrible for those students. But scrapping the entire system is just plain silly.

Abc said...

Small addendum... when I was discussing the 660 and 720 example, I neglected to take note of the "guessing penalty," but the point generally remains the same. The difference between 720 and 780 is a bit smaller, in raw-score terms, than that between 660 and 720. In general, small differences in the middle of the scale represent a greater difference in actual test performance (as measured by number correct) than similar differences on the high end of the scale.
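
With made-up numbers (this is NOT the real College Board table -- see the scoring sheet linked above for that), the compression looks something like this:

    # Hypothetical raw-score -> scaled-score table, invented to show the
    # shape: up high, a few raw points cost many scaled points; mid-scale,
    # the same scaled drop takes more raw-score mistakes.
    conversion = {55: 800, 54: 780, 53: 760, 52: 740, 51: 720,
                  50: 710, 49: 700, 48: 690, 47: 680, 46: 670, 45: 660}

    def raw_gap(hi, lo):
        inv = {scaled: raw for raw, scaled in conversion.items()}
        return inv[hi] - inv[lo]

    print(raw_gap(780, 720))  # 3 raw points separate 780 from 720
    print(raw_gap(720, 660))  # 6 raw points separate 720 from 660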

Balfegor said...

As for your tables, those are gross generalizations over millions of tests.

Usually, in statistics, millions of samples is a feature, not a bug.

Anyhow, I was reviewing the studies I linked, and fairness compels me to note that the coaching-effect study appears to use the PSAT as a baseline performance variable. I strongly suspect that increases the error in their calculations, because my recollection of the PSAT is that it's shorter, so individual test-taker errors there are more likely to have an outsize effect on the final score, and it's also something many students (again, from my recollection) don't take particularly seriously.

ACT score improvements take more time and effort. Important typo.

That would make sense, and ideally, coaching for the ACT will actually teach you things if you didn't get them in high school. But rich people are still going to get an increased bonus relative to poor people without the same opportunities if we switch to the ACT or an ACT-type substantive exam, because they will have the time and money to enroll in these longer-term cramschool classes, and poor people, just like today, won't.

Balfegor said...

The flaws in the SAT could ALL be corrected simply if the test were made optional.

But it is, no? In most places. I think there are some school districts where it's mandatory now as a substitute for a graduation exam, but in most places it's entirely voluntary. Well, except that very few schools will take you without an SAT or some other exam, like the ACT.

Schools and groups of schools (like, say, the Ivy League or the Big 10) could very easily put together their own tests.

Yes, but they'd probably farm it out to a group with experience in test design. As a matter of fact, as far as I can see, the College Board is just such a group of schools (well, a really big group), and they farmed test design out to ETS, which developed the SAT.

I know what you mean there --

Smaller subsets of those institutions could put together their own tests, yes. And as I've said above, I wouldn't be opposed to that -- that's the system they have in Korea and Japan, where schools (and even departments within schools) have their own special entrance examinations that you sit for. There are also some smaller associations of institutions, including big names like Waseda and Keio, that have gathered together to produce standard basic tests.

But I don't think the major institutions have been persuaded that there's a need for that, or that they'll get results they like better by doing so.

That said, some smaller institutions -- including one which, for various reasons, is actually my sister's first choice -- do not require an SAT score for admission.

Balfegor said...

Are the schools going to step up and waive the deadline for affected students?

I think it's way past the deadlines now. What they ought to do is say they'll perform a new review of their files, and extend offers if they feel they are justified in light of the corrected record, and if they can handle the additional students -- but there aren't that many students affected, so other than the big schools where everyone applies, and the tiny schools where one more student means no more beds, available space probably won't be that much of a problem. The schools admit more than they can handle anyhow, and bet on consistent year-to-year yield.
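
The bet, in back-of-the-envelope form (all numbers invented):

    import math

    # If a school wants a class of 1,500 and historically about 40% of
    # admitted students enroll, it sends offers assuming the yield holds.
    target_class = 1500
    historical_yield = 0.40  # invented figure

    offers = math.ceil(target_class / historical_yield)
    print(offers)  # 3750 offers to fill 1500 seats, if the yield holds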

Mom said...

"Are the schools going to step up and waive the deadline for affectd students?"

Interesting question! Most of the tougher schools had January 1 application deadlines. What would one of those schools do, I wonder, if it were contacted this week by an affected student who said, "I just found out my October SATs were 200 points higher than I thought. If I'd known that in December, I would have sent you an application, because your college would have been my first choice if I had thought I could get in. Will you accept my application now, even though the regular deadline is past, because the glitch wasn't my fault?"

Under these unusual circumstances, some schools might say yes. If I were that student, I'd be tempted to try it.