
Palantir exec says its work is on par with the Manhattan Project

Comparing AI to the most lethal weapon in human history isn’t comforting.


An executive at the deeply controversial and highly problematic artificial intelligence company Palantir has told employees the work they’re doing is “this generation’s Manhattan Project.” According to Business Insider, the comments came during an all-hands meeting earlier this month and relate to Palantir’s work on Project Maven, an initiative that proved too spicy even for Google.

A quick recap: Project Maven is an AI-for-military-drones initiative Google backed away from after protests from employees. Palantir is the Peter Thiel-backed company that stepped up to fill the Google-shaped void, and first came to infamy for using its tech to help Immigration and Customs Enforcement (ICE) identify and deport undocumented immigrants. The Manhattan Project led to the development of the world’s first nuclear weapons during World War II. Yeah, so nothing to worry about here at all.

Everything to fear, including fear itself — There’s lots to worry about with drones automagically selecting targets. First, false positives. Second, biases baked into the algorithm that selects targets. Third, accountability when things inevitably go awry and there’s collateral damage. If we’ve learned anything from consumer-focused identification tech, it’s that this sort of technology needs to be regulated or it will, unavoidably, be abused. Thinking military applications will be any more responsible without oversight is madness.

Heck, even Google CEO Sundar Pichai thinks AI needs regulation, and even the company that long since shelved its “Don’t be evil” aspirations thinks that building software that makes killing people more efficient is something to be handled with extreme caution. That alone should serve as a warning. But so too should any company comparing its work to building nukes.

Nukes always lead to more nukes — The Manhattan Project was a lot of things: it was a vanity project for the U.S. to do some scientific-prowess dick-swinging, it was an opportunity to spend an enormous sum of money on cutting-edge research and development, and it was a way to end WWII in the face of an enemy who couldn’t be expected to surrender without an unprecedented and unimaginably horrific show of might.

With all of the above achieved, Albert Einstein nonetheless rued his role in setting the Manhattan Project in motion. The atomic bomb may have ended the war, but it set the stage for a new sort of arms race, one where a precarious détente exists only thanks to the threat of mutually assured destruction.

Palantir’s AI-powered discrimination services and automated slaughter machines, meanwhile, are more likely to start another war than end one. Instead of offering bragging rights, Palantir’s products position the U.S. as a selfish and xenophobic bully, and rather than act as a deterrent, they’re set to breed anti-U.S. sentiment while encouraging U.S. rivals to ramp up their own, equivalent, and possibly even less discriminate responses.

We can't stop it, so we need to control it — Yes, there’ll always be defense contracts. The military-industrial complex demands constant innovation and relies on incessant efforts to find new ways to spend R&D budgets. If Palantir doesn’t build AI military tech, someone else will. But in dishing out these inevitable and enviable contracts, governments need to decide what sort of companies they want to partner with, and how success is measured so that profitability isn’t the only metric.

Most importantly, though, these sorts of contracts ought to come with the most restrictive and prescriptive guidelines. It’s not about stifling innovation; it’s about keeping the lid on Pandora’s box.