Thirteenth in an ongoing series about the places where science and religion meet. The previous episode is here.
For a more realistic portrait than Kurzweil’s of what a future dominated by technology might look like, one plausible place to start is with our present domination by technology, and how it is already transforming us as human beings. For example, consider the following quote: “Making robberies into larcenies. Making rapes disappear. You juke the stats, and majors become colonels.”
If you ever watched The Wire (2002–2008), the ultrarealistic, groundbreaking HBO series about life in the underbelly of Baltimore, Maryland, you might recall that memorable statement, delivered as former detective Roland Pryzbylewski connected the dots between the way police departments and school systems both work to simulate progress when none is actually occurring. Later, in an interview with PBS’s Bill Moyers, series creator David Simon made the connection more explicit, in his own words:
You show me anything that depicts institutional progress in America, school test scores, crime stats, arrest reports, arrest stats, anything that a politician can run on, anything that somebody can get a promotion on. And as soon as you invent that statistical category, 50 people in that institution will be at work trying to figure out a way to make it look as if progress is actually occurring when actually no progress is. And this comes down to Wall Street. I mean, our entire economic structure fell behind the idea that these mortgage-based securities were actually valuable. And they had absolutely no value. They were toxic. And yet, they were being traded and being hurled about, because somebody could make some short-term profit. In the same way that a police commissioner or a deputy commissioner can get promoted, and a major can become a colonel, and an assistant school superintendent can become a school superintendent, if they make it look like the kids are learning, and that they’re solving crime.
Anywhere there are statistics, Simon claims, there are people abusing them. But the question he does not ask is, “Why are statistics so prevalent?” Why has our society become so oriented around statistics to the point that they make the difference between success and failure, promotion or demotion, profit or loss, in so many different realms of life?
One important answer to this question is provided by reporter Esther Kaplan’s essay, “The Spy Who Fired Me,” in the March 2015 issue of Harper’s Magazine, where she details the increasing reach of computer-driven management into areas from cosmetics to package delivery:
In industry after industry, this data collection is part of an expensive, high-tech effort to squeeze every last drop of productivity from corporate workforces, an effort that pushes employees to their mental, emotional, and physical limits; claims control over their working and nonworking hours; and compensates them as little as possible, even at the risk of violating labor laws. In some cases, these new systems produce impressive results for the bottom line… In other cases, however, the return on investment isn’t so clear.
The centerpiece of Kaplan’s exposé is the data collection practiced on United Parcel Service (UPS) drivers, an innovation that was described a bit more optimistically, just a year earlier, by National Public Radio’s Morning Edition, as a way to improve efficiency and productivity. The general concept is called telematics: a version of the internet of things adapted for the workplace. Inside the new UPS trucks, sensors and transmitters are wired up to doors, seat belts, scanners, odometers, speedometers, other dashboard instruments, and pretty much anything else that could possibly have a chip inserted into it. All the while, this massive wave of data is sent to a central location in Paramus, New Jersey, where a computer management system combs through it for patterns.
The basic pattern for “big data” analysis, as for any statistical system, is this: First you get the numbers, then you look for patterns, and then you look for ways to make the numbers better. What this means in practice, however, is that the human beings on the other side of the numbers are asked to get continually better at the things the numbers measure, and only those things. The statistics are what the computer sees, and all it can see are the statistics. At first, this potentially creates wins without losses, but gradually, as the pressure to keep improving the numbers continues, the losses start to add up as well.
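To make that dynamic concrete, here is a small, purely hypothetical sketch (in Python, not drawn from the article or from UPS’s actual system): a single measured statistic is ratcheted upward each quarter, real efficiencies are exhausted first, and after that the only way to hit the target is to cut corners the computer cannot see. All of the numbers and function names are invented for illustration.

```python
# Toy illustration only: a workforce "optimized" against one measured statistic.
# All values and names here are hypothetical, not taken from the article.

def measured_score(route_efficiency, corner_cutting):
    # The only thing the computer sees: stops completed per day.
    return 85 + 20 * route_efficiency + 15 * corner_cutting

def unmeasured_value(corner_cutting):
    # What the computer cannot see: packages actually reaching customers,
    # safe lifting, workdays of reasonable length. Degrades as corners get cut.
    return 1.0 - 0.8 * corner_cutting

route_efficiency = 0.0   # real gains available early (better routes)
corner_cutting = 0.0     # "juking" available once real gains run out

for quarter in range(1, 9):
    target = 85 + 3 * quarter  # the quota ratchets upward every quarter
    # Real improvements are exhausted first...
    while measured_score(route_efficiency, corner_cutting) < target and route_efficiency < 1.0:
        route_efficiency += 0.05
    # ...then the only way left to hit the number is to cut unmeasured corners.
    while measured_score(route_efficiency, corner_cutting) < target and corner_cutting < 1.0:
        corner_cutting += 0.05
    print(f"Q{quarter}: stats={measured_score(route_efficiency, corner_cutting):.0f} "
          f"(target {target}), unmeasured value={unmeasured_value(corner_cutting):.2f}")
```

The only point of the toy model is that the measured score can keep rising indefinitely while the unmeasured value quietly falls, which is exactly the trade described below for the UPS drivers, and for Simon’s majors who become colonels.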
So why does the pressure build? Why do the numbers always have to go up? The answer, as given by David Simon, is that someone is always looking for a promotion, and the way to get there is through the numbers. Numbers seem solid and objective. There is a common saying that “the numbers don’t lie,” and most people intuitively believe it to be true. It is also easy to see trends with numbers; they reduce complex situations to easily read metrics. If the numbers go down, that is bad; if they stay steady, that is neutral; if they go up, that is good. And since no one ever got ahead by being associated with things that are neutral or bad, the numbers have got to go up, and keep on going up.
There is often a limit, however, to how much gain these methods can actually produce. After the obvious efficiencies have been captured, there may be little more that data analysis can do for you. But that is a reality that cannot be accepted: not by a company that has already invested heavily in an expensive data monitoring system; not by the vendors that sold it, and that will continue to sell upgrades, service the system, patch bugs, and run trainings; and not by all the upper-level people who advocated for the purchase and whose jobs are on the line based on the machine’s performance. So more blood must be squeezed out of the stone, which means that something has got to give, and what gives is anything not measured directly by the numbers.
In the case of the UPS drivers, the number of packages they are supposed to deliver daily went from 85 to over 100 (and counting). That is the number that the computer sees, the number that went up. The initial efficiencies gained by the computer came from plotting better, more efficient routes. But the next set of efficiencies were all pulled out of the drivers, who, in order to meet their new quotas, needed to be in constant, unceasing motion, without regard to things like safely lifting heavy packages, taking breaks, or keeping their workdays to a reasonable length. In other words, they needed to be able to work like machines.
Then, as the quotas grew ever more impossible to meet, the next step was as inevitable as it could possibly be: drivers found ways (as David Simon described) to cheat the system by “juking the stats.” In other words, they found something that may not (that does not) serve any of the larger goals of the enterprise, that may, in fact, work against them, yet that still shows up as a positive number. In this case, what got sacrificed at this later stage (at least as described by Kaplan) was any extra effort to make sure packages get into the hands of the people to whom they are addressed.
The machine can monitor how many stops a driver makes, what route he takes, whether or not he wears a seat belt, and any of a hundred other things. But it does not yet have a way to monitor the customer. It does not know whether the customer is home or not. So an extra efficiency the driver can gain is to not wait for any customer who does not open the door immediately. Drivers are judged on how many of their stops they make; it would be patently unfair to judge them based on whether their customers are at home or not (which is completely out of their hands). So a driver who needs to juke her own personal stats can appear to be a miracle of efficiency by never waiting more than a few seconds for someone to open the door. No more agonizing minutes standing by the door, being judged by the machine, as an elderly customer struggles with the lock, or as someone inside rouses from a nap, or rushes to make it to the door from a distant part of the house. Slap a “sorry we missed you” Post-it note on the door, and be on your merry (or not so merry) way.
As it turns out, what the computers do not see—what they cannot see, what is invisible both to the computer and to all those at the upper level of management who see through the eyes of the computer—are all the purely human interactions of any job. And depending on what the job is, it can be the core competencies of the profession that end up neglected.
As so memorably dramatized in The Wire, two key helping professions that have been entirely colonized by statistical management are police work and education, to the detriment of both. For example, consider again the “broken windows” theory. In its original form, it was all about community-building, and the ways in which that manifests in the small details of police work, as a way to build the human relationship between officers and those they serve. But as transformed by statistical management, it became a race to rack up record numbers of arrests, often to the destruction of all positive relationships between the police and the communities they work in.
If there is an American institution that has been even more heavily damaged by computerized management, however, it is the educational system. Twenty years ago, in the early 1990s, a new type of standardized test was adopted by many states across the nation. There were dual political pressures animating the push for these tests. On the one hand, there were parents and activists at lower socioeconomic levels, who were worried that the high school diplomas their children were receiving were essentially worthless pieces of paper, founded on social promotion, and unsecured by any baseline standards or guarantees of basic academic achievement or competency. On the other hand, parents and school districts at the upper socioeconomic levels were worried that their children’s degrees were being devalued by junk diplomas from the poorer schools.
At the same time, there were companies that had already come to understand that computerized testing in educational institutions was a low-cost, high-profit cash-cow enterprise. College and graduate admission tests like the SAT, ACT, and GRE were already anchoring entire industries around things such as test preparation, tutoring, and supplemental materials; and all that in addition to the rivers of cash flowing from the administration of the tests themselves. With political pressure from both upper and lower socioeconomic levels, and the promise of money to be made, high-stakes proficiency testing sailed through state legislatures all across the country. It became the literal law of the land with the signing of the No Child Left Behind Act in 2002, which tied federal funding for education to universal state adoption of high-stakes testing (or its equivalent).
To call the results mixed would be overly kind. After twenty years of teaching to the test, there has been no improvement in student achievement; if anything, the converse, as an emphasis on deeper comprehension has been replaced with short-term rote memorization. Teachers’ satisfaction with their jobs has plummeted. The tests have been shown to correlate most with socioeconomic class, not academic achievement, so while they may simulate a meritocracy, they actually reinforce inequity. And, as the pressures to continuously improve the numbers have grown to unbearable levels, they have resulted in epidemics of “stats-juking,” in which teachers, principals, and superintendents have all succumbed to the temptation to fake better results. This, in addition to ruining the careers and reputations of any number of otherwise competent administrators, has set a horrible moral example for their students, among whom cheating has become increasingly normalized.
This all accords with “Campbell’s Law” (named after social scientist Donald Campbell), which states, “The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it was intended to monitor.” Juking the stats may feel like beating the system, but it actually represents the ultimate capitulation: time, effort, and often money, not to mention moral standards and professional reputations, all expended (some might say squandered), not in the pursuit of any goal beneficial to humanity, but solely to please the machine.
The schools, repositories of the future, represent perhaps the most visibly destructive evidence of computer mismanagement. But the long arm of statistics has also touched other institutions more subtly and invisibly. For example, patrons of many libraries across the nation have noticed a gradual change over the past few decades that at first may have been hard to identify. The shelves suddenly seemed sparse and empty. Books that had been old friends for decades went missing in action from the catalog system. And reference searches that had once been like walks in a magic garden of forking pathways were now more like being stranded in a labyrinth full of dead-end passages. The reason, often, was a new policy that any book that had (for example) not been checked out in more than five years would be culled from the collection. This was a statistic that would have been difficult to track before the computer era, but now it was a number someone could look at, and make sweeping decisions around. And an institution whose entire raison d’être was archival was prostituted to novelty and currency; a repository of human wisdom entrusted to the value judgments of a computerized manager.
References
Moyers, Bill, with David Simon, Bill Moyers Journal, PBS, April 17, 2009.
Kaplan, Esther, “The Spy Who Fired Me: The Human Costs of Workplace Monitoring,” Harper’s Magazine, March 2015.
Goldstein, Jacob, “To Increase Productivity, UPS Monitors Drivers’ Every Move,” Morning Edition, April 17, 2014.
The Mynabirds, “Numbers Don’t Lie,” What We Lose in the Fire, We Gain in the Flood, 2010.
Henningfeld, Diane Andrews, editor, Standardized Testing, Greenhaven Press, Farmington Hills, 2008.
Nichols, Sharon L. and David C. Berliner, Collateral Damage: How High-Stakes Testing Corrupts America’s Schools, Harvard Education Press, Cambridge, 2007.
Campbell, Donald, “Assessing the Impact of Planned Social Change,” Social Research and Public Policies: The Dartmouth/OECD Conference, Gene Lyons, editor, Dartmouth College Public Affairs Center, 1975.
Chris Sunami writes the blog The Pop Culture Philosopher, and is the author of several books, including the social justice–oriented Christian devotional Hero For Christ. He is married to artist April Sunami, and lives in Columbus, Ohio.
I love this blog, thank you. I work in mental health and over the past few years have seen a shift to what is being called “data driven treatment.” There are some benefits to this approach, but also some real concerns, many of which you outline nicely in your article. So thank you!
Thanks for the comment! You’re right, this shift is being seen across nearly all fields right now, including the most “human-focused” ones.