When Silicon Valley prognosticators warn about the existential dangers of A.I., that danger is often described as existing in an imminent future. We’re told to fear artificial general intelligence: algorithms and technologies that achieve some level of simulated sentience. Such systems, we’re warned – often with allusions to Terminator or The Matrix – may view us as a threat. Nick Bostrom goes so far as to argue that such a ‘superintelligence’ is all but inevitable, and that it will crush anything in its way. Eric Schmidt and Henry Kissinger similarly just came out with a book predicting these kinds of systems are only “10 to 20 years” away.
No doubt, these prognosticators are right. We should be careful developing A.G.I. But worrying about the future overlooks the profound ways even simple code can alter the patterns of human behavior. To understand how, just consider our relationship with that humblest of grains: wheat.
Reconsidering the Agricultural Revolution
Relative to humans with our complex social behaviors, wheat is a simple organism. Compelled by its DNA, it germinates, takes in sunlight, grows, gets pollinated by the wind, drops seeds which germinate, and so on. In the conventional telling of human history, around 12,000 years ago, humans learned to grow wheat, which allowed them to give up their hunter-gatherer ways, settle into cities, and birth civilization.
But what if it wasn’t so simple or positive? Or as Yuval Noah Harari puts it in Sapiens: instead of humans domesticating wheat, what if wheat domesticated humans?
To make this case, Harari points to archeological evidence suggesting that the average hunter-gatherer had a higher quality of life than their average farmer descendant, at least until a few centuries ago. Not bound to a plot of land, pre-agricultural humans roamed widely. They ate a wider variety of foods and could move to wherever food was plentiful, providing some defense against the vicissitudes of the weather. This produced longer life expectancies and less malnourishment. Hunter-gatherers also enjoyed a relatively more egalitarian existence, living in smaller groups than their post-agricultural descendants, for whom slavery and serfdom were often the order of the day. Given this, why would anyone voluntarily give up a higher quality of life?
Enter wheat. As a living organism, wheat is programmed by its DNA to propagate itself. And what better way to do this than to get another organism to assist? As humans discovered they could grow and consume it, they did. And as calorie intakes rose, so did individuals’ capacity to have more kids. But more kids meant more mouths to feed, which meant more land had to be dedicated to growing it. As humans became more tied to land, populations increased, fomenting greater competition for resources. This led to more complex systems of living and governance. Sustaining and defending such systems required armies, which in turn meant that farmers had to grow not just enough for themselves, but for soldiers and administrators as well. Often this was accomplished through systems of slavery, serfdom, or systematized inequality, the consequences of which are still with us today.
The point is: wheat and similar subsistence crops like maize, rice and soy didn’t possess advanced intelligence. Domesticating humans didn’t require complex planning, malign intent, or pseudo-consciousness. On the contrary, all it took was a relatively simple program to impact human behavior on a massive, historical scale.
We’ve Seen This Movie Before
This brings me to my concern about today’s “dumb” A.I.: we’re being naive if we think the only risk is in the future. Already, A.I. is having a profound impact on how we live, communicate and coordinate action on small and large scales. As Meredith Broussard has observed, as racial and gender biases find their way into hiring algorithms, we risk such technology amplifying and reinforcing systems of structural inequality. And as I’ve written about at length, insofar as such algorithms increasingly filter the information and entertainment we see, they lock us into information bubbles that distort our notion of shared reality. Then, when we try to break free and look up from our devices, such algorithms hit us with a well-timed, contextually relevant notification to pull us back in. Finally, as more of our friends’ lives happen online, it’s increasingly difficult to avoid this “metaverse,” much like wheat made it difficult to avoid getting tied to one place.
It doesn’t have to be this way, however. In his book critiquing liberalism, Patrick Deneen points out that while most people think the Amish don’t use any modern technology, this isn’t entirely true. Many Amish businesspeople, for example, use cell phones – although they continue to ban phones in the home. The Amish, it turns out, are very deliberate about what technology to permit. They think long and hard about the impact it will have on the community and their way of life. Does it align with and advance their community’s values, or does it detract from them? If the former, they may allow it in certain situations.
We could learn a lot from this approach. Instead of assuming that technological progress is inherently positive and inevitable, and that the only risks are in the future as Silicon Valley’s leaders would have us believe, we might ask: is this strengthening our democratic community or hampering it? Is this helping people achieve their potential, or holding people back? Based on the answers, we may decide on a very different regulatory approach.
Of course, the alternative is to carry on as we did with wheat. Then maybe one day someone will write a book asking: did we control the algorithms, or did they control us?