The Test That Wasn't Really About Testing
Every year, millions of American high school students spend months preparing for a test that was never designed to measure their preparation. The SAT, long a gatekeeper of college admissions, was created in 1926 with a fundamentally different goal: identifying what psychologists of the era believed was innate, unchangeable intelligence.
The common assumption today is that the SAT measures academic aptitude — your ability to succeed in college based on what you've learned and how well you think. But the test's creators had something much stranger in mind. They wanted to find students with superior inherited mental capacity, regardless of their education or background.
When Psychology Met Prejudice
Carl Brigham, the Princeton psychologist who adapted the SAT from the Army Alpha intelligence tests given to World War I recruits, believed intelligence was largely genetic and that different racial and ethnic groups had inherited different levels of mental ability. His 1923 book "A Study of American Intelligence" argued that recent immigrants from Southern and Eastern Europe were intellectually inferior to earlier Anglo-Saxon settlers.
Photo: Carl Brigham
This wasn't fringe thinking at the time. The eugenics movement, which promoted the idea that human traits like intelligence were fixed and hereditary, had significant support in American universities and government. Brigham and his colleagues designed the SAT to identify students with the best genetic intellectual endowment, not those who had worked hardest or learned most effectively.
The original SAT questions deliberately avoided testing specific knowledge that could be taught or learned through study. Instead, they focused on what Brigham believed were measures of pure reasoning ability — the kind that supposedly couldn't be improved through education or effort.
The Science Fell Apart, But the Test Remained
By the 1930s, the scientific foundation for the SAT had collapsed. Brigham himself renounced his earlier views about racial intelligence differences, calling his previous work "pretentious" and "without foundation." Research in psychology and genetics was making it clear that intelligence was far more complex, malleable, and shaped by environment than early eugenicists had believed.
But the SAT had already become institutionalized. Harvard president James Conant, who championed the test in the 1930s and 1940s, reframed it as a tool for identifying talented students from all backgrounds — a way to democratize elite education rather than restrict it. The test's purpose had shifted, but its fundamental structure remained the same.
Photo: James Conant
This created a lasting contradiction. A test designed to measure inherited, unchangeable ability was now being used to evaluate students' educational achievement and college readiness. The mismatch between design and purpose helps explain many of the persistent problems with standardized testing that educators and students still struggle with today.
Why the Myth of Objectivity Persists
The SAT's transformation from eugenics tool to college admissions standard happened gradually, and most Americans never learned about its original purpose. Instead, the test acquired a reputation for scientific objectivity and fairness. Numbers feel neutral. Multiple-choice questions seem unbiased. The scoring process appears mathematical and precise.
This perception of objectivity serves powerful interests. Colleges can use SAT scores to make admissions decisions that appear merit-based rather than subjective. Students and parents can believe that test preparation and hard work will lead to higher scores and better opportunities. The testing industry can market prep courses and materials as ways to unlock potential.
But research consistently shows that SAT scores correlate more strongly with family income than with college performance. Students from wealthy families score higher on average, not because they're inherently smarter, but because they have access to better schools, test preparation, and educational resources from an early age.
The Test That Measures What It Wasn't Meant To
Today's SAT has been revised multiple times and bears little resemblance to Brigham's original version. The College Board, which administers the test, explicitly rejects the hereditarian assumptions of its creators and emphasizes that scores can improve with practice and preparation.
Yet the fundamental tension remains. A test originally designed to identify fixed intellectual capacity is now used to evaluate educational achievement and college readiness. This helps explain why SAT scores often fail to predict college success as well as simpler measures like high school grades, which more directly reflect sustained effort and learning.
The Real Lesson About Standardized Testing
The SAT's history reveals something important about how supposedly objective measures can carry forward the biases and assumptions of their creators, even after those assumptions have been scientifically discredited. When we treat test scores as neutral measures of ability or achievement, we're often perpetuating ideas about intelligence and merit that have deeper, more problematic roots.
This doesn't mean standardized tests are inherently evil or useless. But it does mean we should be skeptical when any measurement claims to be purely objective or merit-based. The numbers might be precise, but the assumptions behind them — about what should be measured, how it should be measured, and what the results mean — are always human choices shaped by the values and blind spots of particular times and places.
The next time someone argues that standardized test scores represent pure merit or objective ability, remember that America's most important standardized test was designed by eugenicists who believed intelligence was inherited and unchangeable. The scores might look scientific, but the story behind them is far more human, and far more complicated, than most people realize.