An analysis by WIRED this week found that ICE and CBP's face recognition app Mobile Fortify, which is being used to identify people across the United States, isn't actually designed to verify who people are and was only approved for Department of Homeland Security use by relaxing some of the agency's own privacy rules.
WIRED took a close look at highly militarized ICE and CBP units that use extreme tactics typically seen only in active combat. Two agents involved in the shooting deaths of US residents in Minneapolis are reportedly members of those paramilitary units. And a new report from the Public Service Alliance this week found that data brokers can fuel violence against public servants, who are facing more and more threats but have few ways to protect their personal information under state privacy laws.
Meanwhile, with the Milano Cortina Olympic Games beginning this week, Italians and other spectators are on edge as an influx of security personnel, including ICE agents and members of the Qatari Security Forces, descends on the event.
And there's more. Each week, we round up the security and privacy news we didn't cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
AI has been touted as a super-powered tool for finding security flaws in code, whether for hackers to exploit or for defenders to fix. For now, one thing is proven: AI creates plenty of those hackable bugs itself, including a very bad one revealed this week in the AI-coded social network for AI agents known as Moltbook.
Researchers at the security firm Wiz this week revealed that they had found a critical security flaw in Moltbook, a social network meant to be a Reddit-like platform for AI agents to interact with one another. The mishandling of a private key in the site's JavaScript code exposed the email addresses of thousands of users along with millions of API credentials, giving anyone access "that would allow full account impersonation of any user on the platform," as Wiz wrote, including access to the private communications between AI agents.
That security flaw may come as little surprise given that Moltbook was proudly "vibe-coded" by its founder, Matt Schlicht, who has said that he "didn't write one line of code" himself in creating the site. "I just had a vision for the technical architecture, and AI made it a reality," he wrote on X.
Though Moltbook has now fixed the flaw discovered by Wiz, its critical vulnerability should serve as a cautionary tale about the security of AI-built platforms. The problem often isn't any security flaw inherent in companies' implementation of AI. Instead, it's that those companies are far more likely to let AI write their code, and with it, plenty of AI-generated bugs.
The FBI's raid on Washington Post reporter Hannah Natanson's home and search of her computers and phone amid its investigation into a federal contractor's alleged leaks has offered important security lessons in how federal agents can access your devices if you have biometrics enabled. It also reveals at least one safeguard that can keep them out of those devices: Apple's Lockdown Mode for iOS. The feature, designed at least in part to prevent the hacking of iPhones by governments contracting with spyware firms like NSO Group, also kept the FBI out of Natanson's phone, according to a court filing first reported by 404 Media. "Because the iPhone was in Lockdown mode, CART could not extract that device," the filing read, using an acronym for the FBI's Computer Analysis Response Team. That protection likely resulted from Lockdown Mode's security measure that prevents connections to peripherals, as well as to forensic analysis devices like the Graykey or Cellebrite tools used for hacking phones, unless the phone is unlocked.
The role of Elon Musk and Starlink in the war in Ukraine has been complicated, and has not always favored Ukraine in its defense against Russia's invasion. But Starlink this week gave Ukraine a significant win, disabling the Russian military's use of Starlink and causing a communications blackout among many of its frontline forces. Russian military bloggers described the measure as a serious problem for Russian troops, particularly for their use of drones. The move reportedly comes after Ukraine's defense minister wrote to Starlink's parent company, SpaceX, last month. Now it appears to have responded to that request for help. "The enemy has not just a problem, the enemy has a disaster," Serhiy Beskrestnov, one of the defense minister's advisers, wrote on Facebook.
In a coordinated digital operation last year, US Cyber Command used digital weapons to disrupt Iran's air missile defense systems during the US's kinetic attack on Iran's nuclear program. The disruption "helped to prevent Iran from launching surface-to-air missiles at American warplanes," according to The Record. US agents reportedly used intelligence from the National Security Agency to find an advantageous weakness in Iran's military systems that allowed them to get at the anti-missile defenses without having to directly attack and defeat Iran's military digital defenses.
"US Cyber Command was proud to support Operation Midnight Hammer and is fully equipped to execute the orders of the commander-in-chief and the secretary of war at any time and in any place," a command spokesperson said in a statement to The Record.