Is Elon Musk planning to use artificial intelligence to run the US government? That seems to be his plan, but experts say it is a “very bad idea”.
Musk has fired tens of thousands of federal government employees through his Department of Government Efficiency (DOGE), and he reportedly requires the remaining workers to send the department a weekly email featuring five bullet points describing what they accomplished that week.
Since that will no doubt flood DOGE with hundreds of thousands of such emails, Musk is relying on artificial intelligence to process the responses and help determine who should remain employed. Part of the plan is reportedly also to replace many government workers with AI systems.
It’s not yet clear what any of these AI systems look like or how they work—something Democrats in the United States Congress are demanding details about—but experts warn that utilising AI in the federal government without robust testing and verification of these tools could have disastrous consequences.
“To use AI tools responsibly, they need to be designed with a particular purpose in mind. They need to be tested and validated. It’s not clear whether any of that is being done here,” says Cary Coglianese, a professor of law and political science at the University of Pennsylvania.
Coglianese says that if AI is being used to make decisions about who should be terminated from their job, he’d be “very sceptical” of that approach. He says there is a very real potential for mistakes to be made, for the AI to be biased and for other potential problems.
“It’s a very bad idea. We don’t know anything about how an AI would make such decisions [including how it was trained and the underlying algorithms], the data on which such decisions would be based, or why we should believe it is trustworthy,” says Shobita Parthasarathy, a professor of public policy at the University of Michigan.
Those concerns don’t seem to be holding back the current government, especially with Musk – a billionaire businessman and close adviser to US President Donald Trump – leading the charge on these efforts.
The US Department of State, for instance, is planning on using AI to scan the social media accounts of foreign nationals to identify anyone who may be a Hamas supporter in an effort to revoke their visas. The US government has not so far been transparent about how these kinds of systems might work.
Undetected harms
“The Trump administration is really interested in pursuing AI at all costs, and I would like to see a fair, just and equitable use of AI,” says Hilke Schellmann, a professor of journalism at New York University and an expert on artificial intelligence. “There could be a lot of harms that go undetected.”
AI experts say there are many ways in which government use of AI can go wrong, which is why it needs to be adopted carefully and conscientiously. Coglianese says governments around the world, including the Netherlands and the United Kingdom, have had problems with poorly executed AI that made mistakes or showed bias and, as a result, wrongfully denied residents welfare benefits they needed, for instance.
In the US, the state of Michigan used AI to find fraud in its unemployment system, and it incorrectly identified thousands of cases of alleged fraud. Many of those denied benefits were dealt with harshly, including being hit with multiple penalties and accusations of fraud. Some people were arrested, and others ended up filing for bankruptcy. After five years, the state admitted the system was faulty, and a year later it refunded $21m to residents wrongly accused of fraud.
“Most of the time, the officials purchasing and deploying these technologies know little about how they work, their biases and limitations, and errors,” says Parthasarathy. “Because low-income and otherwise marginalised communities tend to have the most contact with governments through social services [such as unemployment benefits, foster care, law enforcement], they tend to be affected most by problematic AI.”
AI has also caused problems in government when it’s been used in the courts to determine things like someone’s parole eligibility or in police departments when it’s been used to try to predict where crime is likely to occur.
Schellmann says that the AI used by police departments is typically trained on historical data from those departments, and that can cause the AI to recommend over-policing areas that have long been overpoliced, especially communities of colour.
AI does not understand anything
One of the problems with potentially using AI to replace workers in the federal government is that there are so many different kinds of jobs in the government that require specific skills and knowledge. An IT person in the Department of Justice might have a very different job from one in the Department of Agriculture, for example, even though they have the same job title. An AI programme, therefore, would have to be complex and highly trained to even do a mediocre job at replacing a human worker.
“I don’t think you can randomly cut people’s jobs and then [replace them with any AI],” says Coglianese. “The tasks that those people were performing are often highly specialised and specific.”
Schellmann says you could use AI to do parts of someone’s job that might be predictable or repetitive, but you can’t just completely replace someone. That would theoretically be possible if you were to spend years developing the right AI tools to do many, many different kinds of jobs – a very difficult task and not what the government appears to be currently doing.
“These workers have real expertise and a nuanced understanding of the issues, which AI does not. AI does not, in fact, ‘understand’ anything,” says Parthasarathy. “It’s a use of computational methods to find patterns, based on historical data. And so it is likely to have limited utility, and even reinforce historical biases.”
The administration of former US President Joe Biden issued an executive order in 2023 focused on the responsible use of AI in government and how AI would be tested and verified, but this order was rescinded by the Trump administration in January. Schellmann says this has made it less likely that AI will be used responsibly in government or that researchers will be able to understand how AI is being utilised.
All of this said, if AI is developed responsibly, it can be very helpful. AI can automate repetitive tasks so workers can focus on more important things, or help workers solve problems they’re struggling with. But it needs time to be deployed in the right way.
“That’s not to say we couldn’t use AI tools wisely,” says Coglianese. “But governments go astray when they try to rush and do things quickly without proper public input and thorough validation and verification of how the algorithm is actually working.”