From 'Don't Be Evil' to Drone Deals: Silicon Valley’s Reckless AI Arms Race


In 2018, Google vowed never to develop AI for weapons or surveillance. OpenAI pledged its technology would “benefit all humanity,” not warfare. Fast forward to 2025: both companies have erased these red lines, quietly rewriting their ethical policies to pursue military contracts. Meanwhile, the Pentagon and intelligence agencies openly admit they’re feeding classified data to opaque AI systems for targeting, surveillance, and cyber operations. This seismic shift—masked as “responsible innovation”—raises urgent questions: When tech giants discard their own principles for profit, and militaries weaponize AI behind a veil of secrecy, who holds power over algorithms that could reshape global conflict?

In 2024 and 2025, major AI companies made consequential shifts in their ethical guardrails, abandoning earlier commitments to avoid military and surveillance applications. These reversals reveal a troubling pattern of companies quietly rewriting their own accountability commitments, undermining trust in corporate AI governance.

OpenAI's Military Pivot

  • January 2024: Removed explicit bans on "military and warfare" uses from its usage policies [1][5], replacing them with a vague prohibition against "harm" [1].
  • December 2024: Announced defense partnerships, including a drone defense system with Anduril designed to "rapidly synthesize time-sensitive data" for battlefield use [1]. This directly contradicts its earlier stance that military work conflicted with its mission to "benefit all humanity" [1][5].
  • Current justification: Claims that defensive systems like drone interception "protect people" while denying that it develops weapons [1][2], even though defense contracts inherently support warfare infrastructure.

Google's Eroded Safeguards

  • February 2025: Deleted three key prohibitions from its AI principles [3][4][6][8]:
    1. Technologies causing "overall harm"
    2. Weapons designed to injure people
    3. Surveillance violating international norms
  • New standard: Now pursues projects where "likely overall benefits substantially outweigh foreseeable risks" [3][6], a subjective metric that enables military contracting.
  • Contradiction: Maintains claims about "promoting privacy and security" while removing explicit surveillance restrictions [3][4][8], a move critics call "vague language favoring profit over safeguards" [7].

Why Trust Erodes

  1. Strategic ambiguity: Both companies replaced clear prohibitions with flexible terms like "national security use cases" [2][5] and "benefits outweigh risks" [3][6], allowing subjective reinterpretation.
  2. Financial incentives: OpenAI seeks Pentagon contracts amid $5B losses [1], while Google aims to compete in the U.S.-China AI arms race [4][6].
  3. Hypocrisy on "democratic values": While invoking "freedom" and "human rights" [1][4][6], their systems now power:
    • Israel's AI-assisted targeting in Gaza [2]
    • U.S. drone defense networks [1]
    • Classified military analytics via Microsoft/Palantir partnerships [5]

The Bigger Pattern

Silicon Valley’s military pivot reflects:

  • Industry normalization: Following Microsoft's and Amazon's lead into lucrative defense contracts [1][5]
  • Reduced employee pushback: Since Russia's invasion of Ukraine, tech workers have shown less resistance to defense work [1]
  • Geopolitical posturing: Framing AI dominance as a democratic imperative against China [4][6], despite the ethical trade-offs

As Google's 2018 employee protests over Project Maven showed [3][6][8], internal principles once acted as accountability checks. Their removal signals that corporate self-governance is subservient to market demands, a reality that renders any current "principles" inherently provisional. Without binding regulation, these mutable policies offer little assurance against harmful AI deployments.

Recent developments confirm that U.S. military and intelligence agencies have accelerated classified AI deployments since 2023, with multiple public disclosures about using confidential or classified data to train and operate AI systems:

Direct Admissions of Classified AI Use

  1. Project Linchpin (2024):
    The Army's flagship AI program explicitly uses classified data to develop targeting systems and battlefield analytics. Its collaboration with Project Manager Intelligence Systems and Analytics (PM IS&A) focuses on "critical Army modernization initiatives" requiring classified intelligence feeds [6].
  2. CENTCOM's AI Targeting (2023-2024):
    U.S. Central Command confirmed using computer vision algorithms to analyze classified satellite imagery and signals intelligence for "anomalous behavior detection" in conflict zones such as Iraq and Syria [5][8].
  3. Gemini-Classified (2025):
    Google's upcoming air-gapped AI for military and intelligence agencies will process Top Secret/Sensitive Compartmented Information (TS/SCI). A Mandiant executive noted that a "large percentage" of defense agencies requested access during private briefings [4].

Policy Shifts Enabling Classified Data Sharing

  • The 2024 National Security Memorandum (§3.3(f)) mandates that agencies share classified AI testing results with the NSA's AI Security Center, including evaluations of AI's ability to "accelerate offensive cyber operations" [7].
  • DOD Directive 3000.09 revisions (2023) explicitly permit AI models to ingest classified mission data for autonomous weapons systems, provided they follow "responsible AI" guidelines [5].

Operational Examples

  • Targeting Acceleration:
    Army Brig. Gen. Ronnie Anderson confirmed that AI now reduces threat identification from "minutes to seconds" by analyzing classified imagery banks and signals intelligence [8].
  • AI-Enhanced Surveillance:
    NSA's AI Security Center conducts "rapid, systematic, classified testing" of models using intercepted communications and cyber threat data [7].

Industry-Military Data Pipelines

Private contractors like Palantir and Anduril (partnered with OpenAI) process classified DOD data through "trusted infrastructure" under programs like Project Linchpin [3][6].

While these programs claim adherence to Responsible AI frameworks [5][7], the lack of public oversight mechanisms for classified systems raises concerns about accountability. The military's stance appears to prioritize tactical advantage over transparency, leveraging commercial AI advancements while shielding data inputs and outputs behind classification barriers [4][7].

The erosion of corporate AI ethics isn’t just about broken promises—it’s about enabling a future where unaccountable systems automate life-and-death decisions. OpenAI and Google’s policy reversals, paired with the military’s classified AI deployments, create a dangerous feedback loop: companies profit from defense contracts, militaries gain tools for asymmetric warfare, and the public loses insight into technologies that could destabilize global security. Without binding regulations or transparency, these partnerships risk normalizing AI as a shadowy instrument of power rather than a force for collective good. As one Google engineer anonymously warned during the policy overhaul: “Principles you delete today become weapons someone else deploys tomorrow.” The time to demand accountability is now—before “move fast and break things” becomes “move fast and break us.”

Timeline of Events (2018-2025)

2018:

  • Google employees protest Project Maven, a Pentagon contract using Google's AI to analyze drone footage.
  • Sundar Pichai, Google CEO, outlines Google's initial AI principles, including a section on "AI applications we will not pursue." These included technologies causing harm, weapons designed to injure people, and surveillance violating international norms.
  • Google announces it will not renew its Project Maven contract when it expires in 2019.

December 2018:

  • Google challenges other tech firms building AI to follow its lead and develop responsible tech that "avoids abuse and harmful outcomes."

2023:

  • Prior to Spring 2023: Lattice AI is founded and begins building Latus.
  • Spring 2023: The U.S. Army launches xTechPrime.
  • DOD Directive 3000.09 revisions permit AI models to ingest classified mission data for autonomous weapons systems.
  • U.S. Central Command confirms using computer vision algorithms to analyze classified satellite imagery and signals intelligence for "anomalous behavior detection."
  • December 2023: U.S. Army xTech Program launches xTechScalable AI.

January 2024:

  • OpenAI quietly removes language from its usage policies that prohibited the use of its products for "military and warfare."

March 2024:

  • U.S. Army xTech Program launches xTechScalable AI 2 at the SXSW conference.
  • Young Bang, ASA(ALT) principal deputy, asks Matt Willis and his team to use the prize competition structure to seek solutions from small businesses for defending against adversarial AI threats.

Fiscal Year 2024:

  • Army SBIR invests nearly $10 million in five small businesses aligned with Project Linchpin thrust areas.

August 2024:

  • Microsoft and Palantir partner to make AI and analytics services available to U.S. defense and intelligence agencies in classified environments.

September 2024:

  • JMC's "Data Analytics Booster Month" training series takes place.

October 2024:

  • Google announces it will offer a version of its Gemini AI model capable of working within classified environments early the next year.
  • Army launches a pilot to explore generative AI for acquisition activities.
  • The article "Operationalizing Science at JMC with Artificial Intelligence and Machine Learning" is published.

December 2024:

  • OpenAI announces defense partnerships, including a drone defense system with Anduril.

February 4, 2025:

  • Google updates its AI principles, deleting its previous pledges not to use AI for weapons or surveillance tools.

February 2025:

  • This article, "From 'Don't Be Evil' to Drone Deals: Silicon Valley's Reckless AI Arms Race," is published.

Early 2025:

  • Google is slated to release Gemini-Classified, an air-gapped AI for military and intelligence agencies that processes Top Secret/Sensitive Compartmented Information (TS/SCI).

2025 (Projected):

  • Army SBIR estimates $45 million in contract awards for Project Linchpin, out of an estimated $114 million in awards for Army SBIR's larger AI/ML portfolio.

Citations:

  1. https://www.technologyreview.com/2024/12/04/1107897/openais-new-defense-contract-completes-its-military-pivot/
  2. https://apnews.com/article/israel-palestinians-ai-weapons-430f6f15aab420806163558732726ad9
  3. https://www.theregister.com/2025/02/05/google_ai_principles_update/
  4. https://www.cnbc.com/2025/02/04/google-removes-pledge-to-not-use-ai-for-weapons-surveillance.html
  5. https://finance.yahoo.com/news/openai-quietly-pitching-products-u-145321695.html
  6. https://www.businessinsider.com/google-changes-its-ai-policy-defense-tech-2025-2
  7. https://epic.org/google-rolls-back-responsible-ai-principles-breaking-promise-to-limit-military-use-of-its-products/
  8. https://www.aljazeera.com/economy/2025/2/5/chk_google-drops-pledge-not-to-use-ai-for-weapons-surveillance
  9. https://www.codepink.org/aiwarfare
  10. https://fortune.com/2024/10/17/openai-is-quietly-pitching-its-products-to-the-u-s-military-and-national-security-establishment/
  11. https://fortune.com/2025/02/19/israel-microsoft-openai-raises-questions-powerful-tech/
  12. https://www.wired.com/story/google-responsible-ai-principles/
  13. https://fortune.com/2024/11/27/ai-companies-meta-llama-openai-google-us-defense-military-contracts/
  14. https://www.washingtonpost.com/technology/2025/02/04/google-ai-policies-weapons-harm/
  15. https://www.wired.com/story/openai-anduril-defense/
  16. https://ai.google/responsibility/principles/
  17. https://www.cnn.com/2025/02/04/business/google-ai-weapons-surveillance/index.html
  18. https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
  19. https://www.amnesty.org/en/latest/news/2025/02/global-googles-shameful-decision-to-reverse-its-ban-on-ai-for-weapons-and-surveillance-is-a-blow-for-human-rights/
  20. https://www.bloomberg.com/news/articles/2025-02-04/google-removes-language-on-weapons-from-public-ai-principles
  21. https://www.washingtontimes.com/news/2025/feb/18/israel-deploys-u-made-ai-models-war-concerns-arise-tech-role-lives-die/
  22. https://www.cnbc.com/2025/01/30/openai-partners-with-us-national-laboratories-on-scientific-research.html
  23. https://www.business-humanrights.org/en/latest-news/ap-exposes-big-tech-ai-systems-direct-role-in-warfare-amid-israels-war-in-gaza/
  24. https://www.insidegovernmentcontracts.com/2025/02/january-2025-ai-developments-transitioning-to-the-trump-administration/
  25. https://www.markey.senate.gov/imo/media/doc/letter_to_google_on_ai_principles_revisions2.pdf
  26. https://techcrunch.com/2025/02/04/google-removes-pledge-to-not-use-ai-for-weapons-from-website/
  27. https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work/
  28. https://www.businessinsider.com/google-employees-slam-company-after-it-ditches-ai-weapons-pledge-2025-2
