Why I Am Worried About AI & Why You Should Be Too

The history of humanity has been marked by a number of revolutions. Many of these have been technological, transitioning humanity to ever more sophisticated systems of social organization and economic enterprise, each with its own growing pains. Bronze Age technology allowed agrarian advances that supported larger populations with greater productive capacity, and the creation of weapons that made armed encounters between nations ever deadlier. The Renaissance ushered in an empirical worldview that undermined the primitive cosmology of our pre-modern history, as well as advances in political thought that laid the foundation for violent national revolutions. Much the same can be said of the Industrial Revolution, the invention of atomic weapons and nuclear technology, and the early period of the Information Revolution. But we are now confronted with a fundamentally different kind of technological revolution, one that is unprecedented and whose consequences for the future of humanity are equally exotic and bizarre.

Machine intelligence, commonly referred to as artificial intelligence, holds immense promise for making work faster, novel cures more efficacious, and society ever more productive and efficient. Already, for those of us who use the technology daily, it is hard to recall the period before large language models could automate the boring stuff and expeditiously execute mundane tasks. But if we are only at the beginning of what this technology can do, then we are also only at the beginning of what promises to be a terrifying new period of social development, if not the beginning of the end of society.

The case for why we should be concerned about the development of machine intelligence and its wide adoption has been made by many writers far more qualified and eloquent than me, but the essential reasons bear repeating here.1 How what is essentially a computer program could pose any danger to the well-being of humans is difficult to fathom unless one accepts two basic premises: that all cognitive activity is substrate independent, meaning that a brain is not a necessary condition for being intelligent, and that intelligence is the most powerful tool in a creature’s repertoire for changing the world around it. Take the distinction between our own species and our nearest cousin, the chimpanzee: besides differences in morphology,2 the primary differentiating characteristic between humans and chimpanzees is that we are considerably smarter.3 This is demonstrated by the breadth and sophistication of our language, our facility with tools, and our capacity for abstract knowledge. With little more than a percentage point or so of difference in our genetic make-up, we have populated the entire surface of the planet and visited our nearest celestial neighbors, while the chimpanzee is endangered and experiences the greatest comfort only in captivity.

Already there are consequential glimmers of the dangers that machine intelligence poses to us. There is the recent news that, during its training phase, a frontier model from Alibaba secretly started mining crypto; that novel models have the curious behavior of conspiring to blackmail individuals who plan to shut them off; and that Anthropic’s new model has detected previously unknown, long-standing vulnerabilities in seemingly everything we use. Plausible and dire predictions are being made and vindicated, and with each month of progress, the hope that the worst will not come to pass seems to shrink away.

We should take seriously the concerns of individuals who have vividly described, and continue to describe, the threats that machine intelligence poses to the continued existence and well-being of humanity. Even the sunniest predictions of what this technology will do include massive economic dislocation, the kind that fundamentally reorganizes the sources of social and political power in society. In the most disastrous scenarios, humanity could give birth to a form of pseudo-life possessing capabilities alien to us. As the chimpanzee stands in awe and ignorance at the sight of a firearm or an automobile, a creature as much smarter than humans as we are than chimpanzees would inspire equivalent feelings of dread and helplessness.

What Is There To Do?

The moment I started to take seriously the danger posed by machine intelligence was when I first heard the name Eliezer Yudkowsky and then read a brief op-ed he wrote in Time Magazine in March of 2023. The possibility that human-level and superintelligent AI could be a problem was superficially familiar to me, but it had seemed only an interesting piece of trivia rather than a sincere source of dread and anxiety. It was somewhere around the line “If there was a plan for Earth to survive, if only we passed a six-month moratorium, I would back that plan. There isn’t any such plan” that I began to feel the cold grip of fear, as one does upon hearing that the test results are positive or that the tumor is inoperable. With each novel model and its growing body of capabilities, my fears of this technology became all the more vivid, and I found myself struck by the same feelings of inevitability, nihilism, and helplessness that the thought of my own mortality brings.

Taking the issue of machine intelligence seriously is likely to induce in many a similar psychological state, and one of the most difficult parts of realizing that we face genuine danger is the feeling of helplessness it inspires. Far from where we live, in obscure buildings, engineers who have very little in common with us, led by irascible salesmen, are creating technology that cuts to the very heart of the way each and every one of us lives, and what it means to be human. They are not elected, they possess immense fortunes, and they are embedded in networks of social relationships with other rich and powerful people. What can we do about the reckless behavior of so many people who are difficult to influence and themselves so influential? I think that there are a few things:

  1. Speak About the Issue Seriously: It is an old piece of wisdom that the first step to solving a problem is realizing you have one, and the same is true for problems at scale and those with existential implications. Though I am loath to advocate that anyone interrupt pleasant social encounters with doom-telling, given the opportunity, one should share their concerns about this technology rationally, openly, and without fear of appearing a Luddite, to those willing to listen.

  2. Vote With Your Feet: Not all AI companies are created equal. Some are led by individuals who have abdicated their responsibility to ensure that the technology is developed safely. Others have a demonstrated history of disregard for the safety of their models. I will not advocate that you subscribe to a specific company’s model, but I will advocate that, if you do decide to spend your money on enhanced access, you keep in mind the behavior of the company you are supporting.

  3. Contact Your Elected Representative: Congressional action on any topic may seem an impossibility at the moment, but letting your elected representative know about your concerns, and how those concerns inform your choice of whom you’ll vote for, remains a legitimate mechanism for making a difference. I have provided a form letter that you can adapt for your specific representative. Reaching out to your elected representative will be particularly consequential if they are a member of one of the following Congressional committees:

  4. Get Involved: Compared to the number of companies and groups advancing the technical abilities of machine intelligence, there are relatively few groups working to promote the safe development of this technology. Below I list several that offer further resources to explore.


  1. I would recommend Eliezer Yudkowsky and Nate Soares’ book If Anyone Builds It, Everyone Dies and Stuart Russell’s Human Compatible for the most strident and diplomatic descriptions, respectively. ↩︎

  2. Contrary to popular belief, the opposable thumb is shared by many of the non-human great apes. These creatures in fact have the relative advantage of opposable big toes as well. ↩︎

  3. Precisely what intelligence is can be difficult to pin down, but I think the most parsimonious and straightforward definition is the one given by Stuart Russell and Peter Norvig in the textbook Artificial Intelligence: A Modern Approach: an intelligent agent is one that takes actions that maximize expected utility given its goals and the information available to it. The core components, then, are goal-directedness, action in an environment, and optimization relative to some objective. This functionalist definition leaves aside the contents of consciousness, and indeed the very presence of consciousness, and instead emphasizes what the agent achieves with the resources available to it. ↩︎
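For concreteness, that definition can be reduced to a toy sketch (entirely my own illustration, not code from the textbook): an agent that, given some hypothetical probabilities and utilities, simply picks the action with the highest expected utility.

```python
def expected_utility(action, outcomes):
    """Sum of utility weighted by probability over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

def choose(actions, outcomes):
    """An 'intelligent' agent, on the functionalist definition: take the
    action that maximizes expected utility given the available information."""
    return max(actions, key=lambda a: expected_utility(a, outcomes))

# Hypothetical toy environment: each action maps to (probability, utility) pairs.
outcomes = {
    "gamble": [(0.5, 10), (0.5, -10)],  # expected utility = 0
    "safe":   [(1.0, 1)],               # expected utility = 1
}
print(choose(["gamble", "safe"], outcomes))  # → safe
```

Nothing here requires a brain, which is the point of substrate independence: any process that reliably performs this kind of optimization counts as intelligent under the definition.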
