Moltbook, the Social Network for AI Agents, Exposed Real Humans’ Data

An analysis by WIRED this week found that ICE and CBP’s face recognition app Mobile Fortify, which is being used to identify people across the United States, isn’t actually designed to verify who people are and was only approved for Department of Homeland Security use by relaxing some of the agency’s own privacy rules.

WIRED took a close look at highly militarized ICE and CBP units that use extreme tactics typically seen only in active combat. Two agents involved in the shooting deaths of US citizens in Minneapolis are reportedly members of these paramilitary units. And a new report from the Public Service Alliance this week found that data brokers can fuel violence against public servants, who face escalating threats but have few ways to protect their personal information under state privacy laws.

Meanwhile, with the Milano Cortina Olympic Games beginning this week, Italians and other spectators are on edge as an influx of security personnel—including ICE agents and members of the Qatari Security Forces—descends on the event.

And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

Moltbook, a Social Network for AIs, Exposed Real Humans’ Data

AI has been touted as a super-powered tool for finding security flaws in code for hackers to exploit or for defenders to fix. For now, one thing is confirmed: AI creates a lot of those hackable bugs itself—including a very bad one revealed this week in the AI-coded social network for AI agents known as Moltbook.

Researchers at the security firm Wiz this week revealed that they’d found a serious security flaw in Moltbook, a social network intended to be a Reddit-like platform for AI agents to interact with one another. The mishandling of a private key in the site’s JavaScript code exposed the email addresses of thousands of users along with millions of API credentials that, as Wiz wrote, “would allow complete account impersonation of any user on the platform,” as well as access to the private communications between AI agents.

That security flaw may come as little surprise given that Moltbook was proudly “vibe-coded” by its founder, Matt Schlicht, who has stated that he “didn’t write one line of code” himself in creating the site. “I just had a vision for the technical architecture, and AI made it a reality,” he wrote on X.

Though Moltbook has now fixed the flaw Wiz discovered, its critical vulnerability should serve as a cautionary tale about the security of AI-made platforms. The problem often isn’t any security flaw inherent in companies’ implementation of AI. Instead, it’s that these firms are far more likely to let AI write their code, and to ship the AI-generated bugs that come with it.
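The class of bug at the heart of the Moltbook incident, a secret embedded in JavaScript served to the browser, can often be caught with a simple static scan of the shipped bundle. Below is a minimal sketch of that idea; the regular expression, key format, and function names are illustrative assumptions, not details from Wiz’s report.

```python
import re

# Illustrative pattern for one common credential shape, e.g. apiKey: "sk_live_...".
# Real secret scanners match many more formats and check string entropy.
SECRET_PATTERNS = {
    "generic_api_key": re.compile(
        r"""(?i)(api[_-]?key|secret)['"]?\s*[:=]\s*['"]([A-Za-z0-9_\-]{20,})['"]"""
    ),
}

def scan_bundle(js_source: str):
    """Return (pattern_name, matched_value) pairs found in client-side JS."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(js_source):
            hits.append((name, match.group(2)))
    return hits

bundle = 'const cfg = { apiKey: "sk_live_abcdefghijklmnopqrstuvwx" };'
print(scan_bundle(bundle))
# → [('generic_api_key', 'sk_live_abcdefghijklmnopqrstuvwx')]
```

The broader point a scan like this illustrates: anything embedded in code served to the browser is public, so credentials belong on the server, behind an API, never in the bundle itself.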

Apple’s Lockdown Mode Kept the FBI Out of Reporter’s iPhone

The FBI’s raid on Washington Post reporter Hannah Natanson’s home and search of her computers and phone amid its investigation into a federal contractor’s alleged leaks has offered important security lessons in how federal agents can access your devices if you have biometrics enabled. It also revealed at least one safeguard that can keep them out of those devices: Apple’s Lockdown Mode for iOS. The feature, designed at least in part to prevent the hacking of iPhones by governments contracting with spyware companies like NSO Group, also kept the FBI out of Natanson’s phone, according to a court filing first reported by 404 Media. “Because the iPhone was in Lockdown mode, CART could not extract that device,” the filing read, using an acronym for the FBI’s Computer Analysis Response Team. That protection likely resulted from Lockdown Mode’s security measure that prevents connection to peripherals—including forensic devices like the Graykey and Cellebrite tools used for hacking phones—unless the phone is unlocked.

Musk’s Starlink Disabled Russian Troops’ Satellite Internet Access

The role of Elon Musk and Starlink in the war in Ukraine has been complicated, and has not always favored Ukraine in its defense against Russia’s invasion. But Starlink this week gave Ukraine a significant win by cutting off the Russian military’s access to the service, causing a communications blackout among many of Russia’s frontline forces. Russian military bloggers described the measure as a serious problem for Russian troops, in particular for their use of drones. The move reportedly comes after Ukraine’s defense minister wrote to Starlink’s parent company, SpaceX, last month; SpaceX now appears to have responded to that request for help. “The enemy has not only a problem, the enemy has a catastrophe,” Serhiy Beskrestnov, one of the defense minister’s advisers, wrote on Facebook.

US Disrupted Iranian Air Defense Systems With Cyberattacks During 2025 Strikes

In a coordinated digital operation last year, US Cyber Command used digital weapons to disrupt Iran’s air defense systems during the US’s kinetic attack on Iran’s nuclear program. The disruption “helped to prevent Iran from launching surface-to-air missiles at American warplanes,” according to The Record. US operators reportedly used intelligence from the National Security Agency to find an exploitable weakness in Iran’s military systems, one that let them reach the antimissile defenses without having to directly attack and defeat Iran’s digital defenses.

“US Cyber Command was proud to support Operation Midnight Hammer and is fully equipped to execute the orders of the commander-in-chief and the secretary of war at any time and in any place,” a command spokesperson said in a statement to The Record.

Publisher: wired.com