A beforehand unknown vulnerability in OpenAI ChatGPT allowed delicate dialog information to be exfiltrated with out consumer data or consent, in response to new findings from Test Level.
“A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content,” the cybersecurity firm mentioned in a report printed as we speak. “A backdoored GPT could abuse the same weakness to obtain access to user data without the user’s awareness or consent.”
Following accountable disclosure, OpenAI addressed the difficulty on February 20, 2026. There isn’t any proof that the difficulty was ever exploited in a malicious context.
Whereas ChatGPT is constructed with varied guardrails to forestall unauthorized information sharing or generate direct outbound community requests, the newly found vulnerability bypasses these safeguards fully by exploiting a aspect channel originating from the Linux runtime utilized by the unreal intelligence (AI) agent for code execution and information evaluation.
Particularly, it abuses a hidden DNS-based communication path as a “covert transport mechanism” by encoding data into DNS requests to get round seen AI guardrails. What’s extra, the identical hidden communication path could possibly be used to ascertain distant shell entry contained in the Linux runtime and obtain command execution.
Within the absence of any warning or consumer approval dialog, the vulnerability creates a safety blind spot, with the AI system assuming that the atmosphere was remoted.
As an illustrative instance, an attacker might persuade a consumer to stick a malicious immediate by passing it off as a technique to unlock premium capabilities free of charge or enhance ChatGPT’s efficiency. The menace will get magnified when the method is embedded inside customized GPTs, because the malicious logic could possibly be baked into it versus tricking a consumer into pasting a specifically crafted immediate.
“Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation,” Test Level defined. “As a result, the leakage did not trigger warnings about data leaving the conversation, did not require explicit user confirmation, and remained largely invisible from the user’s perspective.”
With instruments like ChatGPT more and more embedded in enterprise environments and customers importing extremely private data, vulnerabilities like these underscore the necessity for organizations to implement their very own safety layer to counter immediate injections and different surprising habits in AI methods.
“This research reinforces a hard truth for the AI era: don’t assume AI tools are secure by default,” Eli Smadja, head of analysis at Test Level Analysis, mentioned in a press release shared with The Hacker Information.

“As AI platforms evolve into full computing environments handling our most sensitive data, native security controls are no longer sufficient on their own. Organizations need independent visibility and layered protection between themselves and AI vendors. That’s how we move forward safely — by rethinking security architecture for AI, not reacting to the next incident.”
The event comes as menace actors have been noticed publishing internet browser extensions (or updating present ones) that have interaction within the doubtful apply of immediate poaching to silently siphon AI chatbot conversations with out consumer consent, highlighting how seemingly innocent add-ons might change into a channel for information exfiltration.
“It almost goes without saying that these plugins open the doors to several risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums,” Expel researcher Ben Nahorney mentioned. “In the case of organizations where employees may have unwittingly installed these extensions, they may have exposed intellectual property, customer data, or other confidential information.”
Command Injection Vulnerability in OpenAI Codex Results in GitHub Token Compromise
The findings additionally coincide with the invention of a important command injection vulnerability in OpenAI’s Codex, a cloud-based software program engineering agent, that might have been exploited to steal GitHub credential information and in the end compromise a number of customers interacting with a shared repository.

“The vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter,” BeyondTrust Phantom Labs researcher Tyler Jespersen mentioned in a report shared with The Hacker Information. “This can result in the theft of a victim’s GitHub User Access Token – the same token Codex uses to authenticate with GitHub.”
The difficulty, per BeyondTrust, stems from improper enter sanitization when processing GitHub department names throughout activity execution on the cloud. Due to this inadequacy, an attacker might inject arbitrary instructions via the department title parameter in an HTTPS POST request to the backend Codex API, execute malicious payloads contained in the agent’s container, and retrieve delicate authentication tokens.
“This granted lateral movement and read/write access to a victim’s entire codebase,” Kinnaird McQuade, chief safety architect at BeyondTrust, mentioned in a publish on X. It has been patched by OpenAI as of February 5, 2026, after it was reported on December 16, 2025. The vulnerability impacts the ChatGPT web site, Codex CLI, Codex SDK, and the Codex IDE Extension.
The cybersecurity vendor mentioned the department command injection method is also prolonged to steal GitHub Set up Entry tokens and execute bash instructions on the code assessment container every time @codex is referenced in GitHub.
“With the malicious branch set up, we referenced Codex in a comment on a pull request (PR),” it defined. “Codex then initiated a code review container and created a task against our repository and branch, executing our payload and forwarding the response to our external server.”
The analysis additionally highlights a rising threat the place the privileged entry granted to AI coding brokers may be weaponized to offer a “scalable attack path” into enterprise methods with out triggering conventional safety controls.
“As AI agents become more deeply integrated into developer workflows, the security of the containers they run in – and the input they consume – must be treated with the same rigor as any other application security boundary,” BeyondTrust mentioned. “The attack surface is expanding, and the security of these environments needs to keep pace.”



