Google on Thursday said it observed the North Korea-linked threat actor known as UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on its targets, as various hacking groups continue to weaponize the tool to accelerate different phases of the cyber attack life cycle, enable information operations, and even conduct model extraction attacks.
“The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance,” Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News. “This actor’s target profiling included searching for information on major cybersecurity and defense companies and mapping specific technical job roles and salary information.”
The tech giant’s threat intelligence team characterized this activity as a blurring of the boundaries between routine professional research and malicious reconnaissance, allowing the state-backed actor to craft tailored phishing personas and identify soft targets for initial compromise.
UNC2970 is the moniker assigned to a North Korean hacking group that overlaps with a cluster tracked as Lazarus Group, Diamond Sleet, and Hidden Cobra. It is best known for orchestrating a long-running campaign codenamed Operation Dream Job, which targets the aerospace, defense, and energy sectors with malware by approaching victims under the pretext of job openings.
GTIG said UNC2970 has “consistently” focused on targeting the defense sector and impersonating corporate recruiters in its campaigns.

UNC2970 is far from the only threat actor to have misused Gemini to improve its capabilities and move from initial reconnaissance to active targeting at a faster clip. Some of the other hacking crews that have integrated the tool into their workflows are as follows –
- UNC6418 (Unattributed), to conduct targeted intelligence gathering, specifically seeking out sensitive account credentials and email addresses.
- Temp.HEX aka Mustang Panda (China), to compile a dossier on specific individuals, including targets in Pakistan, and to gather operational and structural data on separatist organizations in various countries.
- APT31 aka Judgement Panda (China), to automate vulnerability analysis and generate targeted testing plans by claiming to be a security researcher.
- APT41 (China), to extract explanations from open-source tool README.md pages, as well as troubleshoot and debug exploit code.
- UNC795 (China), to troubleshoot their code, conduct research, and develop web shells and scanners for PHP web servers.
- APT42 (Iran), to facilitate reconnaissance and targeted social engineering by crafting personas that induce engagement from targets, as well as to develop a Python-based Google Maps scraper, build a SIM card management system in Rust, and research the use of a proof-of-concept (PoC) for a WinRAR flaw (CVE-2025-8088).
Google also said it detected a malware family called HONESTCUE that leverages Gemini’s API to outsource the generation of its next-stage functionality, along with an AI-generated phishing kit codenamed COINBAIT that is built using Lovable AI and masquerades as a cryptocurrency exchange for credential harvesting. Some aspects of COINBAIT-related activity have been attributed to a financially motivated threat cluster dubbed UNC5356.

“HONESTCUE is a downloader and launcher framework that sends a prompt via Google Gemini’s API and receives C# source code as the response,” it said. “However, rather than leveraging an LLM to update itself, HONESTCUE calls the Gemini API to generate code that operates the ‘stage two’ functionality, which downloads and executes another piece of malware.”
The fileless second stage of HONESTCUE then takes the generated C# source code obtained from the Gemini API and uses the legitimate .NET CSharpCodeProvider framework to compile and execute the payload directly in memory, leaving no artifacts on disk.
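The compile-and-run-in-memory pattern GTIG describes is conceptually simple. The benign Python sketch below (not the actual malware, which uses C# and .NET's CSharpCodeProvider) illustrates the generic idea: source code arrives as text — here a hardcoded string standing in for an API response — and is compiled and executed entirely in memory, so no payload file ever touches disk.

```python
# Benign stand-in for source code received over an API; the real
# HONESTCUE receives C# from the Gemini API instead.
generated_source = '''
def stage_two():
    return "ran from memory"
'''

# compile() turns the text into a code object and exec() runs it inside
# an in-memory namespace -- nothing is written to the filesystem.
namespace = {}
exec(compile(generated_source, "<generated>", "exec"), namespace)
result = namespace["stage_two"]()
print(result)
```

Because the "payload" exists only as a string and a code object, disk-based antivirus scanning never sees an artifact to inspect, which is precisely why fileless stagers favor this technique.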
Google has also called attention to a recent wave of ClickFix campaigns that leverage the public sharing feature of generative AI services to host realistic-looking instructions for fixing a common computer issue and ultimately deliver information-stealing malware. The activity was flagged in December 2025 by Huntress.
Lastly, the company said it identified and disrupted model extraction attacks, which systematically query a proprietary machine learning model to extract knowledge and build a substitute model that mirrors the target’s behavior. In one large-scale attack of this kind, Gemini was targeted with over 100,000 prompts posing a series of questions aimed at replicating the model’s reasoning ability across a broad range of tasks in non-English languages.
Last month, Praetorian devised a PoC extraction attack in which a replica model achieved an accuracy rate of 80.1% simply by sending a series of 1,000 queries to the victim’s API, recording the outputs, and training on them for 20 epochs.
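The query-record-train loop behind such extraction attacks can be sketched in a few lines. This is a toy illustration, not Praetorian's actual PoC: the "victim" here is a hypothetical linear function with private parameters, whereas real attacks target neural models, but the workflow — send queries, record the responses, fit a replica for a fixed number of epochs — is the same.

```python
import random

# Hypothetical "victim" model behind an API: its parameters are private,
# but its behavior is fully observable through query/response pairs.
def victim_api(x):
    return 3.0 * x + 1.0  # secret weights the attacker never sees

# Step 1: send 1,000 queries and record the outputs.
random.seed(0)
dataset = [(x, victim_api(x)) for x in (random.uniform(-5, 5) for _ in range(1000))]

# Step 2: train a replica on the recorded pairs for 20 epochs (SGD).
w, b, lr = 0.0, 0.0, 0.01
for _ in range(20):
    for x, y in dataset:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# The replica's parameters converge toward the victim's private ones.
print(w, b)
```

Every query/response pair becomes a labeled training example for the replica, which is exactly the point researcher Farida Shafik makes below: keeping the weights private does not protect a model whose behavior is exposed through its API.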
“Many organizations assume that keeping model weights private is sufficient protection,” security researcher Farida Shafik said. “But this creates a false sense of security. In reality, behavior is the model. Every query-response pair is a training example for a replica. The model’s behavior is exposed through every API response.”