Introduction

Artificial intelligence (AI) offers immense potential to benefit humanity, but it also creates opportunities for malicious actors to exploit these technologies for harmful purposes. As AI becomes more integrated into daily life, understanding and mitigating these threats is crucial. This article delves into the current landscape of AI-enabled threats.
In 2018, Google vowed never to develop AI for weapons or surveillance. OpenAI pledged its technology would “benefit all humanity,” not warfare. Fast forward to 2025: both companies have erased these red lines, quietly rewriting their ethical policies to pursue military contracts. Meanwhile, the Pentagon and intelligence agencies openly admit their appetite for commercial AI capabilities.
Apple has discontinued its Advanced Data Protection feature, which provided end-to-end encryption for iCloud backups, in the United Kingdom following reported pressure from British authorities under updated surveillance laws. The move marks a significant development in the ongoing debate over privacy versus national security, and comes as governments globally push for expanded access to encrypted communications under similar legal mandates.
Encrypted messaging apps like Signal have become critical tools for journalists, activists, military personnel, and privacy-conscious users worldwide. However, Google's Threat Intelligence Group has revealed that the Russia-aligned hacking groups UNC5792 and UNC4221 have weaponized Signal's device-linking feature, turning one of its core privacy functions into an espionage vector.