The societal disruption of artificial intelligence

Who decided the world should be disrupted by AI? Do you recall receiving a voter pamphlet on the pros and cons of AI development and deployment? Was I the only one who missed election day?

The truth of the matter is that the most impactful decisions about AI are being made by a few people with little to no input from the rest of us. That is a recipe for unrest if I’ve ever heard one.

A couple dozen AI researchers think there’s a chance that AI could lead to unprecedented human flourishing. So, they have taken it upon themselves to develop ever more advanced AI models.

At the same time, they have freely admitted that their control over the technology itself, and over its potential side effects, is increasingly limited.

Is it any surprise that more than a few folks feel disenchanted with a governing system that purports to give power to the people but, in practice, empowers computer scientists to more or less unilaterally throw society into a potential doom loop?

It’s as if we’d been asked what we wanted for dinner, answered, “Thai,” and then been told we could choose between pepperoni and Canadian bacon. That’s not a choice. That’s not power. That’s democratic gaslighting.

A functioning democracy should not leave decisions that may create irreversible harm for generations to a room of computer scientists.

Not only have our elected officials allowed a small set of AI labs to introduce humankind-altering technology with no input from you and me; now they are asking these same unrepresentative and unelected tech leaders for advice on how best to regulate it.

It’s again worth noting that some of us, perhaps many of us, think AI should not have been introduced at this point or at least not at this scale.

If you’re still with me and you still agree with me, you might be lamenting the fact that it’s already too late. Whatever influence we wield now over the development of AI will have an insignificant impact on its long-term trajectory. Worse, there’s a chance that if we succeed in halting the deployment of AI models, China or (fill in the blank “bad guy” country) will just keep advancing their own models and eventually use those models against us in some war or economic contest.

Such arguments are flimsier than a cheese-filled crust. I’d rather live in a U.S. that has strong communities where people perform meaningful work, still use their critical thinking skills, and trust their social institutions than a U.S. that leads the world in AI.

We need to shift the narrative from “how do we shape the development of AI?” to “when and under what conditions should we permit limited uses of AI?” In the interim, it’s fine for our officials to consult AI experts and leaders, but voters should be the ones determining when and how AI changes our society.