AI in the Age of Autocracy

20 January 2025

As I write this, I am in Abu Dhabi, United Arab Emirates, for COLING 2025, where I am giving a tutorial. This is my first time in the Middle East and the Gulf States. It’s clean, glitzy, efficient, and quiet. The coffee is excellent. Pictures of Bollywood megastars on their island playgrounds dot the airport terminal. There are also prominent portraits of the current ruling sheikhs in the lobby of my hotel. It strikes me that a strong predictor of whether you’re in a democracy or an autocracy is whether pictures of the chief executive feature prominently in ostensibly private buildings. That is, in an autocracy, frequent reminders of who holds the power intrude into the private sphere. As if by some sympathetic magic, a picture also reminds you that you are always, at least implicitly, under surveillance.

The host institution of COLING 2025 is the Mohammed Bin Zayed University of AI (MBZUAI). This university was established only in 2019 and quickly rocketed to become one of the premier AI institutions in the world, now ranking in the top 20 for CV, NLP, ML, and AI. Recently the Gulf States have poured substantial resources into local AI development, and as a result universities in this region, MBZUAI in particular, have attracted a large number of AI faculty from other parts of the world, particularly Eastern Europe and Australia. The pay is good, you get large-scale resources, and the field gets points for a global, more diverse footprint and starts paying attention to your home institution and host country. What more could an AI researcher want? Presumably most of the money for this comes from oil profits, and it’s hard to think of a better investment than AI if what you want is a large short-term return.

But the investment is surely not all economic in its ambitions. A brief Wikipedia search shows that the UAE severely restricts freedom of speech and freedom of the press. Enforced disappearances, torture, and secret prisons have been reported. Perhaps it’s an oversight that Wikipedia is not blocked in the UAE. In fairness, the UAE launched some reforms of personal law in 2020, but it still ranks near the bottom in many international metrics of press freedom and human rights.

What does this have to do with AI? For a while, when I was really just getting into the field, AI seemed full of unspoken promise. Like many technologies before it, the promise was one of reducing the human drudgery of tedious labor, opening up avenues for creativity to those who previously lacked them, creating new cures for old diseases, and, yes, a promise of “democratization” of technology that brought power previously reserved for the few to the many. But with the great leaps in AI driven largely by scaling, it turns out that meaningful advances in AI cost a lot, and only a few institutions have the resources to back continued growth. A Gulf petrostate is one of them, and so it can supply the resources (compute infrastructure, access to data, etc.) that allow researchers the prospect of making a meaningful impact. Talent usually follows money (I’m not immune to this; this is a job—I ain’t researching machine learning for free, folks). But this concentration of power and resources, especially where we see outsize investments being made in AI, points to a too-frequently unspoken warning for our field: AI is the autocrat’s best friend.

It’s not hard to see why. AI allows autocracies to automate surveillance. Movements, activities, and patterns of behavior can be tracked. It’s devilishly simple to create an NLP model that can scan local media for content critical of a country’s rulers in order to shut it down. And the funny thing is that despite all the money being shoveled into making AI better, an AI censor doesn’t actually have to be all that good. False positives lead to completely innocuous statements being shut down by the AI—or the human censor behind it—and that has the effect of creating a climate of fear and self-censorship. It makes it seem like no matter what you say, you risk the ever-present wrath of the state, and so it’s better not to say anything.

At the other end of the AI pipeline lie the people who hold the resources: those with the technical know-how, infrastructure, and data capabilities to actually build the AI technologies—LLMs and vision models—that most of the rest of us engage with as researchers or end-users. These are the big tech companies: OpenAI, Google, Meta (née Facebook), Amazon, some smaller players like Anthropic, and Chinese counterparts like Tencent and Baidu. The need for scale and rapacious growth creates strong incentives for these organizations to strike deals with any state or institution that will have them, and that often involves making concessions to autocratic tendencies or policy. So we end up in awkward situations, such as Microsoft doing extensive business with the US government while Bing in China censors more heavily than native Chinese search engines. Simply put: you have to make money to make AI, and when it comes to dealing with autocracies, you have to make nice to make money.

When I left the US on the 17th of January, Joe Biden, who in his farewell address delivered a prescient, Eisenhower-esque, if tragically belated, warning about the dangers of the tech-industrial complex, was still the president. When I return on the 21st, the president of the US will, once again, be Donald Trump. There have been many dismaying sights along the path to Trump’s political resurrection, but perhaps the most dismaying as a scientist (and as one in this field particularly) has been the parade of tech CEOs lining up to kiss Trump’s ring. It doesn’t take much to see why. Like it or not, the next 4 years are going to be critical for AI—either the technology will soar to heights never before seen, or the law of diminishing returns will finally catch up to us and the bubble will burst. Thus the folks whose entire capital is tied up in their AI infrastructure all either want to keep the piece of the AI pie they already have, or they want a larger piece to insure against the pie shrinking. Trump himself was remarkably perceptive when he tweeted (or “Truthed” or whatever) “EVERYONE WANTS TO BE MY FRIEND!” For all the talk we hear about the optimism tech feels about Trump 2.0, there’s clearly another dynamic at work here. When one man not only has the ability to choose winners and losers based on personal pique, but has also promised through his prior actions and statements to do just that, those afraid of being one of the losers suddenly find themselves about-facing to ingratiate themselves with him. You have to make nice to make money. It doesn’t take a particularly astute political observer to see Mark Zuckerberg’s motivation to donate a million dollars to the inauguration party of a man who once threatened him with imprisonment: in an autocracy, if you don’t make nice, you might go to jail.

I will not name more names here. I don’t need to. When I say the words “AI baron,” a few names certainly spring to mind unbidden. Like the oil and railroad barons of the last century, every one of these people made their fortunes in large part due to government largesse, either in terms of outright contracts or favorable regulation and legislation. There is a bright gilded line from an owner’s need to protect their substantial investment in AI to appeasing an aspiring autocrat who views the US government as a tool for his personal glorification. Now we see that the vision of a democratic AI future—one that empowers every individual and enables us to build a better society from the bottom up—is fading rapidly. It is not gone, but the guiding light is dimmer and the battle will be uphill on unfriendly terrain. I am desperately afraid that the golden age of democracy that was the latter half of the 20th century was in fact a blip, and that the near-term future is less free and less open than the past we enjoyed. If that comes to pass, a large part of it may be attributable to the rise of AI. The last few months have been clarifying: we finally see the AI barons for what they are, in all their naked glory. The next four years are likely to make the promise vs. the peril of AI much clearer as well.

The above represents my personal thoughts and opinions alone. I do not express them in my capacity as an employee of the State of Colorado or a recipient of federal research funding. I am grateful to live in a country where, for the moment at least, I have the de jure and de facto freedom to express them.