Palantir’s relationship to privacy depends heavily on exactly where you draw the creepy line. They collect data to make inferences about behavior, and in their intelligence work that means collecting data to identify potential terrorists. Their users certainly consume more data than they would under a manual counterterrorism approach, but the result is that less of it gets looked at by humans. So the tradeoff is between abstract but extensive privacy violations (your phone and text metadata, financial transactions, and other behaviors all feed into their model) and literal but rarer ones (someone manually reviewing those same records to decide whether your Venmo transaction with the memo “Dinner at Afghan restaurant” indicates that you might be training with the Taliban). Which is worse: the small chance that a human manually snoops through your personal information because you got unlucky, or the near-certainty that an algorithm reviews your behavior and, with no human intervention, flags it as entirely innocuous?
Palantir is certainly sensitive to political shifts; they say as much in the S-1, and have said so elsewhere, too. But the picture is not quite what one might expect. They started generating revenue in 2008. During Obama’s second term, revenue compounded at 37% annually, reaching $466m in 2016. In 2017, growth slowed to just 11%, and under Trump it has averaged 17% annually. In other words, their fastest growth came under Obama, not Trump.
The way they describe their views, and the way they contrast them with other tech companies, is that they’re ultimately deferring to what voters want. As Alex Karp puts it: