Authors: Cathy O'Neil
In the years before the rankings, for example, college-bound students could sleep a bit better knowing that they had applied to a so-called safety school, a college with lower entrance standards. If students didn’t get into their top choices, including the long shots (stretch schools) and solid bets (target schools), they’d get a perfectly fine education at the safety school—and maybe transfer to one of their top choices after a year or two.
The concept of a safety school is now largely extinct, thanks in great part to the U.S. News ranking. As we saw in the example of TCU, it helps in the rankings to be selective. If an admissions office is flooded with applications, it’s a sign that something is going right there. It speaks to the college’s reputation. And if a college can reject the vast majority of those candidates, it’ll probably end up with a higher caliber of students. Like many of the proxies, this metric seems to make sense. It follows market movements.
But that market can be manipulated. A traditional safety school, for example, can look at historical data and see that only a small fraction of the top applicants ended up going there. Most of them got into their target or stretch schools and didn’t need what amounted to an insurance policy. With the objective of boosting its selectivity score, the safety school can now reject the excellent
candidates that, according to its own algorithm, are most likely not to matriculate. This process is far from exact. And the college, despite the work of the data scientists in its admissions office, no doubt loses a certain number of top students who would have chosen to attend. Those are the ones who learn, to their dismay, that so-called safety schools are no longer a sure bet.
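The "yield protection" logic described above can be sketched as a toy model — a hypothetical illustration with invented data, not the actual software any admissions office runs. The school tabulates, from historical records, what fraction of admits in each score band actually enrolled, then rejects otherwise-excellent applicants from bands whose historical yield is too low:

```python
# Hypothetical sketch of "yield protection." All data and the
# threshold below are invented for illustration.

def historical_yield_by_band(past_admits):
    """past_admits: list of (sat_band, enrolled: bool).
    Returns the fraction of admits in each band who enrolled."""
    counts = {}
    for band, enrolled in past_admits:
        admitted, came = counts.get(band, (0, 0))
        counts[band] = (admitted + 1, came + (1 if enrolled else 0))
    return {band: came / admitted for band, (admitted, came) in counts.items()}

def decide(applicant_band, yields, floor=0.15):
    """Reject qualified applicants from bands whose historical yield
    falls below the floor -- they are 'most likely not to matriculate',
    so rejecting them inflates the selectivity statistic."""
    return "admit" if yields.get(applicant_band, 0.0) >= floor else "reject"

# Invented history: top scorers rarely enrolled at this safety school.
history = [("1400+", False)] * 18 + [("1400+", True)] * 2 \
        + [("1200-1399", True)] * 12 + [("1200-1399", False)] * 8

yields = historical_yield_by_band(history)
print(decide("1400+", yields))      # prints "reject" (yield only 0.10)
print(decide("1200-1399", yields))  # prints "admit"  (yield 0.60)
```

The perversity is visible in the toy version: the stronger the applicant's band, the likelier the rejection — exactly the inversion that makes safety schools unsafe.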
The convoluted process does nothing for education. The college suffers. It loses the top students—the stars who enhance the experience for everyone, including the professors. In fact, the former safety school may now have to allocate some precious financial aid to enticing some of those stars to its campus. And that may mean less money for the students who need it the most.
It’s here that we find the greatest shortcoming of the U.S. News college ranking. The proxies the journalists chose for educational excellence make sense, after all. Their spectacular failure comes, instead, from what they chose not to count: tuition and fees. Student financing was left out of the model.
This brings us to the crucial question we’ll confront time and again. What is the objective of the modeler? In this case, put yourself in the place of the editors at U.S. News
in 1988. When they were building their first statistical model, how would they know when it worked? Well, it would start out with a lot more credibility if it reflected the established hierarchy. If Harvard, Stanford, Princeton, and Yale came out on top, it would seem to validate their model, replicating the informal models that they and their customers carried in their own heads. To build such a model, they simply had to look at those top universities and count what made them so special. What did they have in common, as opposed to the safety school in the next town? Well, their students had stratospheric SATs and graduated like clockwork. The alumni were rich
and poured money back into the universities. By analyzing the virtues of the name-brand universities, the ratings team created an elite yardstick to measure excellence.
Now, if they incorporated the cost of education into the formula, strange things might happen to the results. Cheap universities could barge into the excellence hierarchy. This could create surprises and sow doubts. The public might receive the U.S. News
rankings as something less than the word of God. It was much safer to start with the venerable champions on top. Of course they cost a lot. But maybe that was the price of excellence.
By leaving cost out of the formula, it was as if U.S. News had handed college presidents a gilded checkbook. They had a commandment to maximize performance in fifteen areas, and keeping costs low wasn’t one of them. In fact, if they raised prices, they’d have more resources for addressing the areas where they were being measured.
Tuition has skyrocketed ever since. Between 1985 and 2013,
the cost of higher education rose by more than 500 percent, nearly four times the rate of inflation. To attract top students, colleges, as we saw at TCU, have gone on building booms, featuring glass-walled student centers, luxury dorms, and gyms with climbing walls and whirlpool baths. This would all be wonderful for students and might enhance their college experience—if they weren’t the ones paying for it, in the form of student loans that would burden them for decades. We cannot place the blame for this trend entirely on the U.S. News rankings. Our entire society has embraced not only the idea that a college education is essential but the idea that a degree from a highly ranked school can catapult a student into a life of power and privilege. The U.S. News WMD fed on these beliefs, fears, and neuroses. It created powerful incentives that have encouraged spending while turning a blind eye to skyrocketing tuitions and fees.
As colleges position themselves to move up the U.S. News
charts, they manage their student populations almost like an investment portfolio. We’ll see this often in the world of data, from advertising to politics. For college administrators, each prospective student represents a series of assets and usually a liability or two. A great athlete, for example, is an asset, but she might come with low test scores or a middling class rank. Those are liabilities. She might also need financial aid, another liability. To balance the portfolio, ideally, they’d find other candidates who can pay their way and have high test scores. But those ideal candidates, after being accepted, might choose to go elsewhere. That’s a risk, which must be quantified. This is frighteningly complex, and an entire consulting industry has risen up to “optimize recruitment.”
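The portfolio framing can be made concrete with a toy calculation — the candidates and numbers below are entirely invented, not any real consultancy's model. Each admitted student carries an expected net tuition (sticker price minus aid, weighted by the chance she enrolls), and the office balances the pool's expected revenue against its academic-profile targets:

```python
# Toy "enrollment portfolio" with invented candidates.
# Each tuple: (description, sat, net_tuition_if_enrolled, p_enroll)
pool = [
    ("full-pay, high scores", 1480, 55_000, 0.30),  # likely lured elsewhere
    ("athlete, needs aid",    1150, 20_000, 0.85),
    ("solid local applicant", 1300, 40_000, 0.60),
]

# Expected values across the pool: revenue, headcount, and the
# SAT average of those who actually show up (enrollment-weighted).
expected_revenue = sum(net * p for _, _, net, p in pool)
expected_heads   = sum(p for _, _, _, p in pool)
avg_sat_expected = sum(sat * p for _, sat, _, p in pool) / expected_heads

print(f"expected revenue: ${expected_revenue:,.0f}")   # $57,500
print(f"expected enrollees: {expected_heads:.2f}")     # 1.75
print(f"expected avg SAT: {avg_sat_expected:.0f}")     # 1258
```

Even this three-student sketch shows the tension the consultants are paid to manage: the full-pay, high-SAT admit who lifts both revenue and averages is precisely the one least likely to come.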
Noel-Levitz, an education consulting firm, offers a predictive analytics package called ForecastPlus, which allows administrators to rank enrollment prospects by geography, gender, ethnicity, field of study, academic standing, or “
any other characteristic you desire.” Another consultancy, RightStudent, gathers and sells data to help colleges target the most promising candidates for recruitment. These include students who can pay full tuition, as well as others who might be eligible for outside scholarships. For some of these, a learning disability is a plus.
All of this activity takes place within a vast ecosystem surrounding the U.S. News
rankings, whose model functions as the de facto law of the land. If the editors rejigger the weightings on the model, paying less attention to SAT scores, for example, or more to graduation rates, the entire ecosystem of education must adapt. This extends from universities to consultancies, high school guidance departments, and, yes, the students.
Naturally, the rankings themselves are a growing franchise. The U.S. News & World Report magazine, long the company’s sole business, has withered away, disappearing from print in 2010. But the rating business continues to grow, extending into medical schools, dental schools, and graduate programs in liberal arts and engineering. U.S. News even ranks high schools.
As the rankings grow, so do efforts to game them. In a 2014 U.S. News ranking of global universities, the mathematics department at Saudi Arabia’s King Abdulaziz University landed in seventh place, right behind Harvard. The department had been around for only two years but had somehow leapfrogged ahead of several giants of mathematics, including Cambridge and MIT.
At first blush, this might look like a positive development. Perhaps MIT and Cambridge were coasting on their fame while a hardworking insurgent powered its way into the elite. With a pure reputational ranking, such a turnaround would take decades. But data can bring surprises to the surface in a hurry.
Algorithms, though, can also be gamed. Lior Pachter, a computational biologist at Berkeley, looked into it. He found that the Saudi university had contacted a host of mathematicians whose work was highly cited and had offered them $72,000 to serve as adjunct faculty. The deal, according to a recruiting letter Pachter posted on his blog, stipulated that the mathematicians had to work three weeks a year in Saudi Arabia. The university would fly them there in business class and put them up at a five-star hotel. Conceivably, their work in Saudi Arabia added value locally. But the university also required them to change their affiliation on the Thomson Reuters academic citation website, a key reference for the U.S. News
rankings. That meant the Saudi university could claim the publications of their new adjunct faculty as its own. And since citations were one of the algorithm’s primary inputs, King Abdulaziz University soared in the rankings.
Students in the Chinese city of Zhongxiang had a reputation for acing the national standardized test, or gaokao, and winning places in China’s top universities. They did so well, in fact, that
authorities began to suspect they were cheating. Suspicions grew in 2012, according to a report in Britain’s Telegraph, when provincial authorities found ninety-nine identical copies of a single test.
The next year, as students in Zhongxiang arrived to take the exam, they were dismayed to be funneled through metal detectors and forced to relinquish their mobile phones. Some surrendered tiny transmitters disguised as pencil erasers. Once inside, the students found themselves accompanied by fifty-four investigators from different school districts. A few of these investigators crossed the street to a hotel, where they found groups positioned to communicate with the students through their transmitters.
The response to this crackdown on cheating was volcanic. Some two thousand stone-throwing protesters gathered in the street outside the school. They chanted, “We want fairness. There is no fairness if you don’t let us cheat.”
It sounds like a joke, but they were absolutely serious. The stakes for the students were sky high. As they saw it, they faced a chance either to pursue an elite education and a prosperous career or to stay stuck in their provincial city, a relative backwater. And whether or not it was the case, they had the perception that others were cheating. So preventing the students in Zhongxiang from cheating was unfair. In a system in which cheating is the norm, following the rules amounts to a handicap. Just ask the Tour de France cyclists who were annihilated for seven years straight by Lance Armstrong and his doping teammates.
The only way to win in such a scenario is to gain an advantage and to make sure that others aren’t getting a bigger one. This is the case not only in China but also in the United States, where high school admissions officers, parents, and students find themselves
caught in a frantic effort to game the system spawned by the U.S. News model.
An entire industry of coaches and tutors thrives on the model’s feedback loop and the anxiety it engenders. Many of them cost serious money. A four-day “application boot camp,” run by
a company called Top Tier Admissions, costs $16,000 (plus room and board). During the sessions, the high school juniors develop their essays, learn how to “ace” their interviews, and create an “activity sheet” to sum up all the awards, sports, club activities, and community work that admissions officers are eager to see.
Sixteen thousand dollars may sound like a lot of money. But much like the Chinese protesters in Zhongxiang, many American families fret that their children’s future success and fulfillment hinge upon acceptance to an elite university.
The most effective coaches understand the admissions models at each college so that they can figure out how a potential student might fit into their portfolios. A California-based entrepreneur, Steven Ma, takes this market-based approach to an extreme.
Ma, founder of ThinkTank Learning, places the prospective students into his own model and calculates the likelihood that they’ll get into their target colleges. He told Bloomberg Businessweek, for example, that an American-born senior with a 3.8 GPA, an SAT score of 2000, and eight hundred hours of extracurricular activities had a 20.4 percent shot at getting into New York University, and a 28.1 percent chance at the University of Southern California. ThinkTank then offers guaranteed consulting packages. If that hypothetical student follows the consultancy’s coaching and gets into NYU, it will cost $25,931, or $18,826 for USC. If he’s rejected, it costs nothing.
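The "no acceptance, no fee" guarantee looks less like a gamble once you multiply it out. Reading Ma's quoted figures as an expected-value calculation is my inference, not something the company confirms, but the arithmetic is striking: each price times its admission probability yields almost exactly the same expected revenue per client.

```python
# Expected revenue per client under the guaranteed pricing,
# using the probabilities and prices quoted in the text.
offers = {
    "NYU": (0.204, 25_931),  # (admission probability, fee if admitted)
    "USC": (0.281, 18_826),
}
for school, (p, fee) in offers.items():
    # The client pays only on acceptance, so expected revenue = p * fee.
    print(f"{school}: expected revenue ${p * fee:,.0f}")
# prints:
# NYU: expected revenue $5,290
# USC: expected revenue $5,290
```

Both packages come out to roughly $5,290 per client — the harder admit simply carries a proportionally higher sticker price, so the consultancy's risk washes out over many customers.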