
AI Coding Assistant Governance: Policy Template for Enterprise Teams

How to roll out GitHub Copilot, Cursor, and Claude Code in an enterprise without leaking secrets, exposing IP, or contaminating the codebase — a template policy, pre-commit hooks, and CI gates.

The Conversation Every VP Engineering Is Having

It's 2026 and the question is no longer "should we allow AI coding assistants." Every serious engineering org already has them — officially or unofficially. The question is how to run them responsibly: which tools, in which repos, with what telemetry, with what guardrails, and what happens when someone accidentally pastes a customer's PII into a prompt.

This post is a template for that conversation. It's based on policies we've written with clients ranging from Series B SaaS companies to regulated industries. It covers Copilot, Cursor, Claude Code, Windsurf, and whatever else ends up installed on someone's laptop by next Tuesday. We won't pretend there's a single right answer, but there's a pretty consistent shape to a workable one.

First Principles

Five principles that should anchor the policy:

  1. Treat AI assistants like any other developer tool — subject to security review, procurement, and SSO.
  2. Assume prompts leave your network — because they do, unless you've configured a private endpoint.
  3. Generated code is your code — the model isn't liable, you are.
  4. License risk is real — verbatim GPL code in suggestions is unlikely, but possible.
  5. Opt-in by team, not by individual — pilots, then tiers, then general availability.

The Risks, Plainly

Before the policy, name the risks. Every engineer should be able to recite these.

| Risk | What it looks like | Blast radius |
| --- | --- | --- |
| Secret exfiltration | API key in a prompt gets logged | High |
| PII exposure | Customer data pasted for debugging | High (regulatory) |
| IP leakage | Proprietary algorithms in context window | High (competitive) |
| License contamination | GPL code suggested, merged into proprietary codebase | Medium |
| Supply chain | Suggested package name is a typosquat | Medium |
| Quality / hallucination | Confident nonsense shipped to production | Medium |
| Overreliance | Juniors skip learning fundamentals | Long-term |
| Telemetry | Vendor retains keystrokes or code snippets | Medium (contract) |

Good governance is the set of controls that reduces each of these risks to an acceptable level without making the tools unusable.

The Policy Template

Below is a policy document you can adapt. Treat it as a starting point.

1. Scope

This policy applies to all engineering staff, contractors, and embedded consultants using AI coding assistants in connection with company code. It covers IDE plugins (GitHub Copilot, Cursor, Windsurf, Zed AI), CLI assistants (Claude Code, Aider, OpenAI Codex CLI), chat-based assistants used for code (ChatGPT, Claude.ai), and any other tool that sends source code or technical context to a third-party model.

2. Approved Tools

Only tools on the approved list may be used with company code. The approved list is reviewed quarterly and maintained by the platform and security teams.

As of this version:

  • GitHub Copilot Business or Enterprise — enterprise tenancy, SSO, zero-retention enabled.
  • Cursor Business — with custom OpenAI/Anthropic endpoints and privacy mode.
  • Claude Code — with Anthropic business tier and default data handling.
  • GitHub Copilot Chat (via the same Business tenancy).

Personal accounts and free tiers are not permitted for any work-related use. Tools not on this list require a security review before use.

3. Allowed And Forbidden Contexts

AI assistants may be used in repositories marked ai: allowed or ai: limited in the repository catalog. They may not be used in repositories marked ai: forbidden.

Allowed contexts

  • Application code (frontend, backend, services).
  • Infrastructure code (Terraform, Helm, Kubernetes manifests).
  • Internal tooling and scripts.
  • Test code and test data generation (synthetic data only).

Forbidden contexts

  • Repositories containing customer data exports.
  • Repositories containing cryptographic implementations that have not been approved by security.
  • Repositories containing signing keys, KMS bootstrap, or root-of-trust material.
  • Any repository or file that contains PII, PHI, or regulated data.
  • Security research against our own systems without written approval.
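
The classification lives in the repository catalog, so it can be read by tooling as well as by humans. A minimal sketch of a catalog entry, with illustrative file path and field names:

# catalog/payments-service.yaml (illustrative path and schema)
repo: payments-service
owner: team-payments
ai_classification: limited        # one of: allowed, limited, forbidden
forbidden_paths:                  # paths assistants and PRs must not touch
  - services/crypto/
  - customer-exports/

The same entry can drive both the pre-commit path check and the OPA merge gate described later, so the classification is declared once and enforced in both places.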

4. Prompt Hygiene

Engineers must not paste into any AI assistant prompt:

  • API keys, tokens, passwords, or other secrets.
  • Customer data, even partial or anonymized.
  • Personally identifiable information.
  • Non-public financial data or forecasts.
  • Non-public roadmap or HR information.

When in doubt, redact.

5. Generated Code Review

All AI-generated or AI-modified code is subject to the same review process as human code: PR review, CI checks, and security scanning. Engineers are responsible for the code they commit regardless of origin.

6. License Hygiene

Engineers may not knowingly accept suggestions that replicate substantial portions of external source code. Where the tool offers a public-code filter (Copilot's duplication detection), it must be enabled. Suggestions that appear to be verbatim third-party code should be rejected and reported.

7. Telemetry

Vendors' telemetry and retention settings are centrally configured to minimize exposure:

  • Copilot: Organization policy with "Allow suggestions matching public code" set to Block, and "Prompt and suggestion collection" disabled.
  • Cursor: Privacy mode enforced via configuration management.
  • Claude Code: default data handling (no training on business data), with Claude Code settings managed centrally rather than left to individual developers.

8. Incidents

Any suspected secret leak, PII exposure, or license concern involving an AI assistant must be reported to the security team within 24 hours. The standard incident response runbook applies.

9. Review And Change

This policy is reviewed every 6 months or when a material change in tooling or regulation occurs.

Making It Enforceable

A policy nobody reads is no policy. Enforcement comes from three layers: pre-commit hooks on the developer laptop, CI checks on every PR, and OPA policy checks that gate merges based on the repository catalog.

Pre-commit Hook: Block Secrets Before They Hit Git

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.21.0
    hooks:
      - id: gitleaks
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ["--baseline", ".secrets.baseline"]
  - repo: local
    hooks:
      - id: block-ai-forbidden-paths
        name: block AI-forbidden paths
        entry: scripts/check-ai-allowed-paths.sh
        language: script
        pass_filenames: true

A small script, scripts/check-ai-allowed-paths.sh (the entry point of the local hook above), that refuses commits touching forbidden paths:

#!/usr/bin/env bash
set -euo pipefail

FORBIDDEN=(
  "services/crypto/"
  "secrets/"
  "customer-exports/"
)

for file in "$@"; do
  for path in "${FORBIDDEN[@]}"; do
    if [[ "$file" == "$path"* ]]; then
      echo "refusing commit: $file is in an AI-forbidden path"
      exit 1
    fi
  done
done

CI Check: Detect AI-Generated Code With Questionable Provenance

You cannot perfectly detect AI-authored code, but you can detect obvious tells: suspicious copyright headers, large blocks with no matching history, or new dependencies that weren't part of the PR description.

name: ai-governance

on:
  pull_request:

jobs:
  secret-scan:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  license-scan:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - uses: fossas/fossa-action@v3
        with:
          api-key: ${{ secrets.FOSSA_API_KEY }}

  dependency-review:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: moderate
          deny-licenses: "GPL-3.0,AGPL-3.0"
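
None of these jobs covers the first tell, suspicious copyright headers. A further job can grep the lines added in a PR for license markers; this is a sketch, assuming the base branch is available on the runner (fetch-depth: 0) and that a human reviews anything it flags:

  header-scan:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: flag license markers in added lines
        run: |
          # Only inspect lines added in this PR, not the whole tree.
          added=$(git diff --unified=0 "origin/${{ github.base_ref }}...HEAD" | grep -E '^\+[^+]' || true)
          if echo "$added" | grep -qiE 'copyright \(c\)|spdx-license-identifier|gnu (lesser )?general public license'; then
            echo "possible third-party header in added code; verify provenance before merging"
            exit 1
          fi

It will flag legitimate internal headers too; treat a failure as a prompt to check provenance, not as proof of contamination.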

OPA Policy: Gate Merges Based On Repo Catalog

Every repository has an entry in a central catalog with a classification. OPA checks that the current repo is allowed to use AI assistants and that PRs touching forbidden paths don't slip through.

package ai.repo_policy

import rego.v1

default allow_merge := false

allow_merge if {
  input.repo.ai_classification in {"allowed", "limited"}
  count(forbidden_changes) == 0
}

forbidden_changes contains path if {
  some change in input.changes
  path := change.path
  startswith(path, "services/crypto/")
}

forbidden_changes contains path if {
  some change in input.changes
  path := change.path
  contains(path, "customer-exports")
}
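
OPA does not define the input shape; repo.ai_classification and changes[].path are ours, and CI has to assemble them before evaluation. A sketch of a job that builds the input from the catalog entry and the PR's changed files, then evaluates the policy with opa eval (catalog and policy paths illustrative):

  repo-policy:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: open-policy-agent/setup-opa@v2
      - name: evaluate ai.repo_policy
        run: |
          # Read this repo's classification from its catalog entry (path illustrative).
          classification=$(yq '.ai_classification' "catalog/${{ github.event.repository.name }}.yaml")
          # Build {repo: {...}, changes: [{path: ...}, ...]} from the files changed in this PR.
          git diff --name-only "origin/${{ github.base_ref }}...HEAD" \
            | jq -R '{path: .}' \
            | jq -s --arg cls "$classification" '{repo: {ai_classification: $cls}, changes: .}' \
            > input.json
          # Fail the job unless allow_merge evaluates to true.
          opa eval --data policy/ai_repo_policy.rego --input input.json \
            --format=json 'data.ai.repo_policy.allow_merge' \
            | jq -e '.result[0].expressions[0].value == true'

Running the same opa eval locally against a hand-written input.json is a cheap way to test catalog or policy changes before they reach CI.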

Rollout Strategy

Enforcement is the last step. The first is getting the tool into engineers' hands without breaking anything. We recommend a three-tier rollout.

Tier 0: Pilot (4-6 weeks)

Pick one team, 5-10 engineers, on a non-customer-facing service. Give them the tool, the policy, and a direct channel to the platform team. Measure:

  • PR throughput
  • Defect rate
  • Time-to-merge
  • Qualitative feedback (was it a net positive?)

No enforcement yet. Observe only.

Tier 1: Opt-In Teams (6-8 weeks)

Open enrollment to teams that opt in. Require the policy acknowledgment, require pre-commit hooks, enable CI gates in their repos. Expand the approved tool list based on pilot feedback.

Tier 2: General Availability (from the third quarter of the rollout)

All engineering. Policy in the handbook. CI gates enforced everywhere. Central dashboard for tool licenses and utilization. Incident runbook updated to cover AI-related incidents.

Measuring Whether It's Working

Governance without measurement is vibes. The metrics we track in client engagements:

  • Adoption rate — % of engineering with active seats.
  • Acceptance rate — for Copilot, the fraction of suggestions accepted (see the collection sketch after this list).
  • Policy violations caught — by pre-commit and CI per month.
  • Security incidents involving AI — hopefully zero.
  • Developer satisfaction — via quarterly survey.
  • DORA metrics before/after — did lead time improve?
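
Adoption and acceptance numbers do not have to be collected by hand. GitHub exposes org-level Copilot metrics over its REST API; below is a sketch of a scheduled workflow that pulls them weekly. The token and variable names are illustrative, and the response schema should be checked against current docs before dashboards depend on specific fields.

# .github/workflows/copilot-metrics.yml (illustrative)
name: copilot-metrics

on:
  schedule:
    - cron: "0 6 * * 1"   # weekly, Monday 06:00 UTC

jobs:
  fetch:
    runs-on: ubuntu-24.04
    steps:
      - name: pull org-level Copilot metrics
        run: |
          # Requires a token with Copilot metrics read access for the org.
          curl --fail -sS \
            -H "Authorization: Bearer ${{ secrets.COPILOT_METRICS_TOKEN }}" \
            -H "Accept: application/vnd.github+json" \
            "https://api.github.com/orgs/${{ vars.GH_ORG }}/copilot/metrics" \
            -o "copilot-metrics-$(date +%F).json"
      - uses: actions/upload-artifact@v4
        with:
          name: copilot-metrics
          path: copilot-metrics-*.json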

Common Pitfalls

Things we've seen go wrong:

  • Policy written in legalese, nobody reads it. Fix: one page, plain English.
  • Approved tool list too restrictive. Engineers use their personal accounts. Fix: be generous but require SSO and privacy settings.
  • Enforcement only at CI. Secrets already in git history. Fix: pre-commit is non-negotiable.
  • No rollback plan. A tool rolled out widely and then found unsuitable is painful. Fix: pilot first.
  • Treating the tool as a productivity miracle. Unrealistic ROI expectations lead to disappointment. Fix: measure honestly.

Next Steps

AI coding assistants are now part of the default developer toolkit. A governance framework for them is part of the default security and engineering operations posture. Start with the principles, adapt the policy, install the hooks, and roll out by tier. If you want help writing a tailored policy, piloting with a team, or building the enforcement pipeline, get in touch.
