In 2012, Palantir quietly embedded itself into the daily operations of the New Orleans Police Department. There were no public announcements. No contracts made available to the city council. Instead, the surveillance company partnered with a local nonprofit to sidestep oversight, gaining access to years of arrest records, licenses, addresses, and phone numbers, all to build a shadowy predictive policing program.
Palantir’s software mapped webs of human relationships, assigned residents algorithmic “risk scores,” and helped police generate “target lists,” all without public knowledge. “We very much like to not be publicly known,” a Palantir engineer wrote in an internal email later obtained by The Verge.
After years spent quietly powering surveillance systems for police departments and federal agencies, the company has rebranded itself as a frontier AI firm, selling machine learning platforms designed for military dominance and geopolitical control.
“AI is not a toy. It is a weapon,” said CEO Alex Karp. “It will be used to kill people.”
That’s not hyperbole. Palantir’s Artificial Intelligence Platform (AIP) is already active in Ukraine, where it’s used to process real-time battlefield data, automate targeting decisions, manage logistics, and even support war crimes documentation. In the U.S., Palantir is behind TITAN, the Army’s $480 million battlefield intelligence system that fuses satellite data, drone feeds, and sensor input to suggest next moves. The company is also building the Maven Smart System (MSS), an upgraded version of Project Maven, a controversial AI program originally dropped by Google after employee protests.
Founded in 2003 with seed funding from In-Q-Tel, the CIA’s venture arm, Palantir was co-created by Peter Thiel with the goal of applying Silicon Valley’s data tools to national security. Its name, a nod to the omniscient crystal balls in The Lord of the Rings, hinted at its ambitions: to see everything. For years, it remained mostly behind the scenes, supporting U.S. defense and intelligence operations under layers of classification.
But its client list, and its confidence, have grown. The company now openly courts defense ministries, NATO allies, and intelligence agencies around the world. Its executives include former Pentagon brass and intelligence insiders. Its pitch: unlike Big Tech rivals, Palantir embraces the realities of warfare. It’s not afraid to build AI that makes decisions with lethal consequences.
Back home, those same systems are quietly shaping domestic policy.
In 2023, Immigration and Customs Enforcement signed a $96 million contract with Palantir to upgrade its Integrated Case Management (ICM) platform with AI-driven analytics, facial recognition, and network mapping. The system is designed to conduct “complete target analysis,” flagging individuals based on biometrics, social ties, and predictive risk models. “Amazon Prime, but with human beings,” one ICE official described it.
Civil liberties groups call it algorithmic profiling at scale. The ACLU accuses Palantir of building the backbone of a “pre-crime” state. Amnesty International warns its tools enable “a vast and complex immigration enforcement and detention regime.”
Despite recent forays into healthcare, energy, and finance, Palantir’s biggest contracts still come from governments preparing for conflict. Its AI platforms are battlefield operating systems. In an AI arms race where other tech giants tread carefully, Palantir has carved out an identity by going all-in.
What remains unclear is who gets to decide how these systems are used, or whether the public even knows they exist. Palantir has always operated at the edge of visibility. Now it’s standing in the spotlight, but the playbook hasn’t changed.
Just ask New Orleans.