AI Goes to War: Palantir Wants to Join the US Army

Oliver Thansan
27 April 2023 Thursday 21:51

Artificial intelligence could decisively transform the battlefield of future wars, a prospect that raises a controversial question about applying the technology to unethical ends, such as attacking other human beings. This is the premise of the latest presentation by Palantir, an American software company specializing in big data analysis.

After years of selling its domestic surveillance services to US Immigration and Customs Enforcement, Palantir is now working to bring its artificial intelligence tools into the Pentagon and the US military.

A few days ago, the company unveiled a demo video of its latest offering, the Palantir Artificial Intelligence Platform (AIP). Although the system itself is designed simply to integrate Large Language Models (LLMs) such as OpenAI's GPT-4 or Google's BERT into private networks, the first thing the company chose to demonstrate was its application to the modern battlefield.
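Palantir has not published AIP's internals, but the general idea of keeping model calls inside a private network is usually realized with a self-hosted gateway that the model traffic never leaves. The sketch below illustrates that pattern only; the gateway URL, model identifier, and endpoint path are invented for illustration and do not reflect Palantir's actual interfaces.

```python
# Illustrative sketch only: an in-network "LLM gateway" that keeps prompts and
# responses inside a controlled enclave. GATEWAY_URL, MODEL_ID, and the
# /v1/complete endpoint are hypothetical, not Palantir's AIP API.
import json
import urllib.request

GATEWAY_URL = "https://llm-gateway.internal.example"  # assumed self-hosted endpoint
MODEL_ID = "gpt-4"  # whichever model the enclave runs or proxies

def ask_model(prompt: str) -> str:
    """Send a prompt to the in-network gateway and return the model's reply."""
    payload = json.dumps({"model": MODEL_ID, "prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        GATEWAY_URL + "/v1/complete",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["text"]

if __name__ == "__main__":
    print(ask_model("Summarize today's sensor reports for the monitored sector."))
```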

In Palantir's presentation, a military operator tasked with monitoring the Eastern European theater of operations discovers enemy forces concentrating near the border and responds by asking a ChatGPT-style digital assistant to help deploy reconnaissance drones, devise response tactics to the perceived aggression, and even orchestrate the jamming of enemy communications.

AIP is shown helping to estimate enemy composition and capabilities, launching a Reaper drone on a reconnaissance mission in response to the operator's request for better imagery, and suggesting appropriate responses to the discovery of an armored element.

The presentation also specifies that "LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used legally and ethically." To that end, AI operation will rest on three principles: AIP will be deployed on a classified system; users will be able to adjust the scope and actions of each LLM and network asset; and "guardrails" will prevent the system from performing unauthorized actions.
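The company does not spell out how those "guardrails" are enforced, but one common pattern is to treat every model suggestion as a proposal that must pass a scope check and human approval before any asset is tasked. The sketch below assumes that pattern; the action names and permission lists are invented for illustration, not taken from AIP.

```python
# Illustrative guardrail pattern, assuming LLM output is only a proposal:
# each proposed action is checked against an operator-defined scope and then
# requires explicit human confirmation. All action names are hypothetical.
ALLOWED_ACTIONS = {"task_recon_drone", "request_imagery"}     # operator-scoped whitelist
RESTRICTED_ACTIONS = {"jam_communications", "engage_target"}  # never auto-approved

def review_proposal(action: str, approved_by_human: bool) -> bool:
    """Return True only if the proposed action is in scope and human-approved."""
    if action in RESTRICTED_ACTIONS:
        return False  # blocked outright, regardless of model confidence
    if action not in ALLOWED_ACTIONS:
        return False  # outside the scope granted to this LLM and network asset
    return approved_by_human  # a person stays in the loop for every execution

# Example: the model proposes tasking a reconnaissance drone.
print(review_proposal("task_recon_drone", approved_by_human=True))    # True
print(review_proposal("jam_communications", approved_by_human=True))  # False
```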