Repository-level AI Policy File (AI-POLICY.txt) for Client Compliance #4880
denniscoorn-paqt started this conversation in Ideas
We work with multiple clients who have different contractual requirements about AI code generation. Some clients explicitly prohibit AI coding agents due to NDAs, regulatory compliance, or IP concerns. Others actively encourage it.
Currently, there's no way to signal these restrictions at the repository level before a coding agent starts operating. This creates risk: a developer can unknowingly run an agent in a repository where AI-generated code is contractually prohibited.
This is different from .codexignore (which controls which files are read) or custom instructions (which guide the LLM's behavior). We need a pre-execution check that happens before any LLM interaction.

Proposed Solution
Introduce an AI-POLICY.txt file (similar to robots.txt) that Codex CLI checks during initialization. A policy file might look like the sketch below.
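As an illustration only, here is a minimal AI-POLICY.txt borrowing robots.txt syntax; the exact directive names are an assumption of this sketch, not a finalized format:

    # AI-POLICY.txt (hypothetical example; directives follow robots.txt conventions)
    # This repository is covered by client NDAs -- no AI coding agents allowed.
    Disallow: /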
Behavior

- You run codex in a repository.
- Codex CLI checks for an AI-POLICY.txt file in the repository root.
- If a Disallow: / directive is present, it shows a warning before any LLM interaction takes place. (A sketch of this pre-execution check follows below.)
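A minimal sketch of what that initialization check could look like, assuming a root-level AI-POLICY.txt and the Disallow: / convention above (the file name, directive, and warning text are all illustrative, not an existing Codex CLI API):

    import sys
    from pathlib import Path

    POLICY_FILE = "AI-POLICY.txt"  # hypothetical file name from this proposal

    def ai_disallowed(repo_root: str) -> bool:
        """Return True if the repository's policy file disallows AI agents."""
        policy_path = Path(repo_root) / POLICY_FILE
        if not policy_path.is_file():
            return False  # no policy file: behave exactly as today
        for raw in policy_path.read_text(encoding="utf-8").splitlines():
            line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
            if line.lower() == "disallow: /":
                return True
        return False

    if __name__ == "__main__":
        if ai_disallowed("."):
            # Illustrative wording; warn-and-confirm vs. hard abort is a design choice.
            print("WARNING: AI-POLICY.txt disallows AI coding agents in this "
                  "repository. Stopping before any LLM interaction.", file=sys.stderr)
            sys.exit(1)

Because the check runs before any model call, nothing from the repository is sent anywhere when the policy says no.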
Why This Approach?

Like robots.txt, this is a gentleman's agreement with an audit trail: it doesn't technically block anything, but it gives compliance teams a clear, checkable record that the repository declared its restrictions.

Use Cases

Agencies and consultancies working across many client repositories with differing contractual requirements (NDAs, regulatory compliance, IP concerns) need a per-repository signal rather than a global setting.
Downsides Of Existing Solutions?

.codexignore controls which files are read, but it can't express "don't operate in this repository at all". Custom instructions guide the LLM's behavior, but they only apply once an LLM interaction has already started.
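For contrast, a sketch of what .codexignore can express, assuming gitignore-style patterns (an assumption based on its name, not a confirmed spec):

    # .codexignore -- hides individual paths from the agent
    secrets/
    client-contracts/
    # No pattern here can say "do not run the agent in this repo at all".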
Bottom line: a simple, clear signal that says "this specific repository has contractual restrictions on AI usage", checked before any code is generated, would be a very useful feature.
Thoughts? Would this be useful for others managing multi-client or compliance-sensitive projects?
Replies: 1 comment

- I think this would be a very useful feature. Without a pre-check like this, many companies will probably hesitate to adopt the tool.