When Microsoft shipped VS Code 1.118 in early May 2026, most developers expected the usual round of quality-of-life improvements. What they got instead was a quiet change that ignited one of the year’s fiercest developer community backlashes: every Git commit made through VS Code now carried a Co-Authored-by: Copilot trailer — whether or not the developer had actually used Copilot to write the code.
The incident lasted barely 48 hours before Microsoft rolled it back, but the questions it raised will shape how development teams think about AI attribution, tooling trust, and governance for years to come.
TL;DR
- VS Code 1.118 silently enabled a Co-Authored-by: Copilot tag on all Git commits by default, even when Copilot was not used
- The change was injected after the developer reviewed the commit message, bypassing the last human checkpoint
- Microsoft reverted to opt-in after 372 thumbs-down reactions and 654 Hacker News comments
- The incident exposes a broader governance gap: who controls AI attribution in your development pipeline?
- Teams need clear policies on AI provenance, commit metadata integrity, and tool configuration auditing
What Actually Happened
A single pull request (#310226) flipped the git.addAICoAuthor setting from "off" to "all". Opened on 15 April 2026, it was merged the following day with 25 of 26 automated checks passing. No announcement. No changelog mention. Just a default that silently rewrote commit metadata for millions of developers.
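On the user side, the remedy is a one-line setting. Here is a minimal sketch that pins the opt-out at the workspace level so a vendor-side default flip cannot silently reverse it; it assumes the setting keeps the name and values described above, and that the settings file is plain JSON (VS Code also accepts comments, which the stock json module rejects):

```python
#!/usr/bin/env python3
"""Pin git.addAICoAuthor to "off" in a workspace's .vscode/settings.json."""
import json
from pathlib import Path

settings_path = Path(".vscode/settings.json")

# Load existing workspace settings if present; assumes plain JSON
# (this will fail on JSONC-style comments).
settings = json.loads(settings_path.read_text(encoding="utf-8")) if settings_path.exists() else {}

# Workspace settings override user settings, so an explicit value here
# survives whatever default a future release ships.
settings["git.addAICoAuthor"] = "off"

settings_path.parent.mkdir(exist_ok=True)
settings_path.write_text(json.dumps(settings, indent=2) + "\n", encoding="utf-8")
print(f'Pinned "git.addAICoAuthor": "off" in {settings_path}')
```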
The implementation was particularly concerning because the trailer was appended after the developer reviewed the commit message in VS Code’s source control panel. The tag slipped in between the user’s final review and the actual git commit execution — the one moment in the workflow where a developer reasonably expects no further modification.
Worse still, multiple developers reported the tag appearing on commits where Copilot was never invoked, and even on machines where Copilot’s chat features were explicitly disabled. The attribution was not tracking actual AI contribution — it was a blanket statement applied regardless of reality.
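If you want to know whether your own history was touched, the trailer is easy to scan for. A minimal sketch using git's trailer-aware log format (requires a reasonably recent git; the trailer text is the one reported in the incident):

```python
#!/usr/bin/env python3
"""List commits in the current repository carrying a Copilot co-author trailer."""
import subprocess

# %x09 is a tab separator; %(trailers:key=...) filters the trailer block
# down to Co-Authored-by entries only.
FORMAT = "%H%x09%s%x09%(trailers:key=Co-Authored-by,valueonly,separator=%x2C)"

log = subprocess.run(
    ["git", "log", f"--format={FORMAT}"],
    capture_output=True, text=True, check=True,
).stdout

for line in log.splitlines():
    parts = line.split("\t", 2)
    if len(parts) == 3 and "Copilot" in parts[2]:
        sha, subject, _ = parts
        print(f"{sha[:12]}  {subject}")
```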
Why This Matters More Than a Bad Default
It is tempting to dismiss this as a minor configuration blunder. Microsoft apologised, reverted the default, and the 1.119 release restores opt-in behaviour. Problem solved, right?
Not quite. The incident reveals three structural problems that every development team should be thinking about.
1. Commit Metadata Is a Trust Boundary
Git commit metadata is not decorative. It feeds into compliance audits, licence reviews, intellectual property assessments, and increasingly, AI provenance tracking. If your commits falsely claim AI co-authorship, you may be inadvertently triggering IP review processes, misrepresenting code provenance to clients, or creating inaccurate audit trails.
For agencies and consultancies — including ourselves at REPTILEHAUS — this is particularly sensitive. When we deliver code to a client, the commit history is part of the deliverable. False attribution metadata undermines trust and could raise contractual questions about what was human-authored versus machine-generated.
2. Your IDE Is Now a Governance Surface
Developers have long treated their IDE as a trusted, local environment. But as editors become increasingly AI-integrated — with inline completions, chat assistants, and now metadata injection — the IDE itself becomes a governance surface that needs the same scrutiny as your CI/CD pipeline.
The VS Code incident is a preview of what happens when AI capabilities are woven so deeply into tooling that they can modify outputs without explicit developer consent. Today it is a co-author tag. Tomorrow it could be AI-suggested dependency upgrades, automatic code reformatting based on model recommendations, or telemetry about which suggestions were accepted.
3. The Opt-Out Pattern Is Broken
The broader pattern here — shipping AI features as opt-out rather than opt-in — has become endemic across the developer tool ecosystem. We have seen it with Chrome silently installing Gemini Nano, SaaS platforms training models on customer data by default, and now VS Code injecting attribution metadata without consent.
The argument from vendors is always the same: “Most users want this.” But consent is not about majority preference — it is about individual control. And when the feature in question modifies the permanent historical record of your codebase, the bar for informed consent should be significantly higher.
What Your Team Should Do Now
Whether or not you use Copilot, this incident is a useful catalyst for tightening your AI governance posture. Here is a practical checklist:
Audit Your IDE Configuration
Review the AI-related settings in every editor your team uses. Document which features are enabled, what data they transmit, and what metadata they modify. Treat this as you would a security configuration review — because it is one.
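As a starting point, a sketch along these lines can dump the relevant switches for review. The settings path is the Linux default and the key list is illustrative; adapt both to the editors and extensions your team actually runs:

```python
#!/usr/bin/env python3
"""Print AI-related VS Code settings so they can be reviewed and documented."""
import json
from pathlib import Path

# Linux default; macOS and Windows use different locations.
SETTINGS = Path.home() / ".config/Code/User/settings.json"

# git.addAICoAuthor is from the incident; the other prefixes are examples
# of the kind of AI and telemetry switches worth documenting.
WATCHLIST = ("git.addAICoAuthor", "github.copilot", "chat.", "telemetry.")

settings = json.loads(SETTINGS.read_text(encoding="utf-8"))  # assumes no JSONC comments
for key in sorted(settings):
    if any(key.startswith(prefix) for prefix in WATCHLIST):
        print(f"{key} = {settings[key]!r}")
```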
Establish an AI Attribution Policy
Decide as a team how you want to handle AI co-authorship. Some organisations mandate attribution when AI tools contribute meaningfully. Others prohibit it to avoid IP complications. The worst position is having no policy at all, leaving it to whatever defaults your tools ship with.
Pin and Audit Tool Versions
If you are running VS Code with auto-update enabled (the default), a single release can change your team’s behaviour overnight. Consider pinning editor versions in regulated environments and reviewing changelogs before rolling out updates — just as you would with any other dependency in your stack.
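In a CI or fleet-audit script, checking the installed build against an approved list is straightforward. A sketch; the approved version numbers are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Fail if the local VS Code build is not on the team's approved list."""
import subprocess

APPROVED = {"1.119.0", "1.119.1"}  # hypothetical pinned versions

# `code --version` prints the version number on its first line.
version = subprocess.run(
    ["code", "--version"], capture_output=True, text=True, check=True,
).stdout.splitlines()[0]

if version not in APPROVED:
    raise SystemExit(f"VS Code {version} is not approved; expected one of {sorted(APPROVED)}")
print(f"VS Code {version} is on the approved list")
```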
Add Commit Hooks as a Safety Net
A simple pre-commit or commit-msg Git hook can strip or flag unexpected trailers before they enter your repository’s history. This is a low-effort safeguard that protects against any tool — not just VS Code — injecting unwanted metadata.
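A minimal commit-msg hook in Python might look like the sketch below. It assumes the tool injects the trailer into the message it hands to git (as reported in this incident), so the hook sees it before the commit is recorded; install it as .git/hooks/commit-msg and make it executable:

```python
#!/usr/bin/env python3
"""commit-msg hook: strip unexpected AI co-author trailers before they land."""
import sys
from pathlib import Path

# Trailers this repository never wants injected silently; extend to match
# your team's attribution policy.
BLOCKED_PREFIXES = ("Co-Authored-by: Copilot",)

msg_file = Path(sys.argv[1])  # git passes the commit message file as argv[1]
lines = msg_file.read_text(encoding="utf-8").splitlines(keepends=True)

kept = [line for line in lines if not line.strip().startswith(BLOCKED_PREFIXES)]
if kept != lines:
    msg_file.write_text("".join(kept), encoding="utf-8")
    print("commit-msg hook: removed an unexpected AI co-author trailer", file=sys.stderr)
```

Distributing the hook through a shared core.hooksPath directory keeps the safeguard consistent across the whole team rather than relying on each clone being configured by hand.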
Monitor the EU AI Act Implications
The EU AI Act’s transparency requirements are coming into force in August 2026. False or misleading AI attribution could create compliance headaches, particularly for teams operating in regulated industries. Getting your attribution practices right now saves pain later.
The Bigger Picture: Who Controls Your Development Pipeline?
The VS Code co-author incident is a symptom of a deeper shift. As AI becomes embedded in every layer of the development stack — from code generation to testing to deployment — the question of who controls the pipeline becomes increasingly complex.
Your CI/CD system runs scripts you wrote and reviewed. Your linter follows rules you configured. But your AI-integrated IDE now makes decisions about your commit metadata, your code suggestions, and potentially your dependency choices based on models and defaults set by the vendor.
This is not inherently bad. AI-assisted development is genuinely transformative, and tools like Copilot deliver real productivity gains. But the relationship between developer and tool needs to be one of informed partnership, not silent modification.
The teams that will navigate this well are those that treat AI tooling governance as a first-class concern — with documented policies, regular audits, and clear boundaries around what their tools are and are not permitted to do autonomously.
Need Help Getting Your AI Governance Right?
At REPTILEHAUS, we help development teams integrate AI tooling effectively while maintaining control over their pipelines, their code provenance, and their compliance posture. Whether you are setting up AI coding workflows, auditing your toolchain, or building governance frameworks, get in touch — we would love to help.
📷 Photo by Harshit Katiyar on Unsplash



