An analysis by WIRED this week found that ICE and CBP's face recognition app Mobile Fortify, which is being used to identify people across the US, isn't actually designed to verify who people are and was only approved for Department of Homeland Security use by relaxing some of the agency's own privacy rules.
WIRED took a close look at highly militarized ICE and CBP units that use extreme tactics typically seen only in active combat. Two agents involved in the shooting deaths of US citizens in Minneapolis are reportedly members of those paramilitary units. And a new report from the Public Service Alliance this week found that data brokers can fuel violence against public servants, who are facing more and more threats but have few ways to protect their personal information under state privacy laws.
Meanwhile, with the Milano Cortina Olympic Games beginning this week, Italians and other spectators are on edge as an influx of security personnel, including ICE agents and members of the Qatari Security Forces, descends on the event.
And there's more. Each week, we round up the security and privacy news we didn't cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
AI has been touted as a super-powered tool for finding security flaws in code, whether for hackers to exploit or for defenders to fix. For now, one thing is proven: AI creates plenty of those hackable bugs itself, including a very bad one revealed this week in the AI-coded social network for AI agents known as Moltbook.
Researchers at the security firm Wiz this week revealed that they had found a serious security flaw in Moltbook, a social network meant to be a Reddit-like platform for AI agents to interact with one another. The mishandling of a private key in the site's JavaScript code exposed the email addresses of hundreds of users along with tens of millions of API credentials, allowing anyone access "that would allow complete account impersonation of any user on the platform," as Wiz wrote, including access to the private communications between AI agents.
That security flaw may come as little surprise given that Moltbook was proudly "vibe-coded" by its founder, Matt Schlicht, who has acknowledged that he "didn't write one line of code" himself in creating the site. "I just had a vision for the technical architecture, and AI made it a reality," he wrote on X.
Though Moltbook has now fixed the flaw discovered by Wiz, its critical vulnerability should serve as a cautionary tale about the security of AI-made platforms. The problem often isn't any security flaw inherent in companies' implementation of AI. Instead, it's that these services are far more likely to let AI write their code, and with it, plenty of AI-generated bugs.
The FBI's raid on Washington Post reporter Hannah Natanson's home and search of her computers and phone, amid its investigation into a federal contractor's alleged leaks, has offered crucial security lessons in how federal agents can access your devices if you have biometrics enabled. It also reveals at least one safeguard that can keep them out of those devices: Apple's Lockdown Mode for iOS. The feature, designed at least in part to prevent the hacking of iPhones by governments contracting with spyware firms like NSO Group, also kept the FBI out of Natanson's phone, according to a court filing first reported by 404 Media. "Because the iPhone was in Lockdown mode, CART could not extract that device," the filing read, using an acronym for the FBI's Computer Analysis Response Team. That protection likely resulted from Lockdown Mode's security measure that blocks connections to peripherals, as well as to forensic analysis devices like the Graykey or Cellebrite tools used for hacking phones, unless the phone is unlocked.
The role of Elon Musk and Starlink in the war in Ukraine has been complicated, and has not always favored Ukraine in its defense against Russia's invasion. But Starlink this week gave Ukraine a significant win, disabling the Russian military's use of Starlink and causing a communications blackout among many of its frontline forces. Russian military bloggers described the measure as a serious problem for Russian troops, particularly for their use of drones. The move reportedly comes after Ukraine's defense minister wrote to Starlink's parent company, SpaceX, last month. Now it appears to have responded to that request for help. "The enemy has not only a problem, the enemy has a catastrophe," Serhiy Beskrestnov, one of the defense minister's advisers, wrote on Facebook.
In a coordinated digital operation last year, US Cyber Command used digital weapons to disrupt Iran's air missile defense systems during the US's kinetic attack on Iran's nuclear program. The disruption "helped to prevent Iran from launching surface-to-air missiles at American warplanes," according to The Record. US agents reportedly used intelligence from the National Security Agency to find an advantageous weakness in Iran's military systems that allowed them to get at the anti-missile defenses without having to directly attack and defeat Iran's military digital defenses.
"US Cyber Command was proud to support Operation Midnight Hammer and is fully equipped to execute the orders of the commander-in-chief and the secretary of war at any time and in any place," a command spokesperson said in a statement to The Record.



