Playbook series: Creating clear AI policies and guardrails
October 7, 2025 // 4 min read
By creating clear, practical policies and a framework for tiered tool usage, companies can build the trust necessary to empower employees and safely scale AI adoption.
Published via GitHub Executive Insights | Authored by Matt Nigh, Program Manager Director of AI for Everyone
Our playbook on building an AI-powered workforce focused on the idea that adoption is a change-management challenge. The foundation of that change is trust, and trust is built on clear, practical policies.
Without clear rules for AI adoption and usage, employees face a paralyzing choice: They can either avoid using powerful new AI tools for fear of breaking a rule they don't know exists, or they can forge ahead, potentially exposing the company to security risks and data leaks. The first hinders innovation and the second creates avoidable risk.
This post provides a practical blueprint for creating an AI policy framework — effective AI guardrails that will empower your employees to experiment, innovate, and adopt new tools safely in this new AI era.
Start with a data classification standard
Before you can set rules for tools, you must have clear rules for your data. A policy that says "don't use sensitive data in public AI tools" is meaningless if your employees don't have a shared understanding of what "sensitive" means.
This is why a simple, clear data classification standard is critical. It gives employees a mental model to assess risk on the fly. You don't need a complex system; a few tiers are usually sufficient. For example:
- Public: Information that is already public or approved for public release.
- Internal: Non-public company information that is generally available to all employees but would not be shared externally (e.g., internal announcements, project plans).
- Confidential: Sensitive information accessible only to specific teams or individuals (e.g., unannounced financial results, employee PII, pre-release source code).
With this foundation in place, your AI usage policies become instantly more practical and actionable for every employee.
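If it helps to make the tiers concrete, they can also be captured in a small machine-readable form that internal tooling or a wiki page could reuse. The sketch below is a hypothetical illustration rather than part of any formal standard: the DataClass enum and the example labels are assumptions, and the example content simply restates the tier descriptions above.

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Hypothetical encoding of the classification tiers described above."""
    PUBLIC = 1        # Already public or approved for public release
    INTERNAL = 2      # Non-public, generally available to all employees
    CONFIDENTIAL = 3  # Restricted to specific teams or individuals

# Illustrative labels an internal tool or wiki page might attach to content.
EXAMPLE_LABELS = {
    "approved press release": DataClass.PUBLIC,
    "internal project plan": DataClass.INTERNAL,
    "unannounced financial results": DataClass.CONFIDENTIAL,
}
```

Ordering the tiers by sensitivity makes it easy to express rules like "this tool can handle anything up to Internal."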
A simple framework for a complex landscape
The AI tool ecosystem is evolving at a dizzying pace. To avoid creating a complex web of rules that no one can follow, we recommend a simple, three-tiered framework that categorizes tools based on risk and data sensitivity. Below is an example of how we tier our own tooling.
Tier 1: Approved and enterprise-ready tools
This is your "green light" category. These are the tools that have been fully vetted by your Security, Legal, and IT teams. They have enterprise-grade security controls and a formal contract in place. For us, this includes tools like GitHub Copilot, Microsoft 365 Copilot, Slack AI, Zoom AI, and others. These tools should cover the bulk of your use cases and serve as the primary AI tools your organization uses.
- The policy: Employees can use these tools with internal and confidential data.
- Why it matters: This provides a powerful, secure, and versatile set of default tools that can meet the vast majority of employee needs, giving them a safe sandbox in which to work and innovate.
Tier 2: Unvetted public tools
This category includes the vast world of free, public AI models and services that do not have a formal enterprise agreement with your company. These should be reserved for edge cases.
- The policy: The rule here is simple and absolute: Public data only. Employees must not input any proprietary company code, customer data, or internal-only information into these tools.
- Why it matters: This policy allows for low-risk experimentation and learning. Employees can still use these tools to explore new capabilities, learn prompting techniques, and stay current with state-of-the-art technology, all without exposing sensitive information.
Tier 3: Local-only AI tools
A growing category of powerful AI tools can run entirely on an employee's local machine without sending any data to the cloud (e.g., LM Studio, MacWhisper).
- The policy: These are generally safe for a wide range of data, but with two key caveats. Employees must ensure that any features that could transmit data (like cloud sync or automatic transcription services) are disabled. For use with highly sensitive data, a quick consultation with your IT/Security team should still be required.
- Why it matters: This empowers advanced users to leverage powerful models for sensitive tasks, while still maintaining a crucial security checkpoint.
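To show how the tiers above could translate into something checkable, here is a minimal sketch that maps example tools to tiers and answers "can I use this tool with this data?" It repeats the hypothetical DataClass enum from the earlier sketch so it stands on its own; the Tier, MAX_DATA_CLASS, TOOL_TIERS, and is_allowed names are likewise illustrative assumptions, not an actual GitHub system.

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Same hypothetical classification tiers as the earlier sketch."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

class Tier(IntEnum):
    """The three tool tiers described above."""
    APPROVED = 1    # Tier 1: vetted, enterprise-ready tools
    UNVETTED = 2    # Tier 2: public tools with no enterprise agreement
    LOCAL_ONLY = 3  # Tier 3: tools that run entirely on the local machine

# Highest data classification each tier may handle, restating the tier policies.
MAX_DATA_CLASS = {
    Tier.APPROVED: DataClass.CONFIDENTIAL,
    Tier.UNVETTED: DataClass.PUBLIC,
    # Simplification: highly sensitive data still warrants an IT/Security check,
    # and data-transmitting features (e.g., cloud sync) must be disabled.
    Tier.LOCAL_ONLY: DataClass.CONFIDENTIAL,
}

# Illustrative registry; the tool names are the examples mentioned in this post.
TOOL_TIERS = {
    "GitHub Copilot": Tier.APPROVED,
    "Microsoft 365 Copilot": Tier.APPROVED,
    "LM Studio": Tier.LOCAL_ONLY,
}

def is_allowed(tool: str, data: DataClass) -> bool:
    """Return True if the named tool's tier permits this classification of data."""
    tier = TOOL_TIERS.get(tool)
    if tier is None:
        return False  # Unlisted tools default to "ask your security team first"
    return data <= MAX_DATA_CLASS[tier]
```

For example, is_allowed("GitHub Copilot", DataClass.CONFIDENTIAL) returns True, while any unlisted public chatbot falls back to "ask first." Even a toy table like this can double as the source for the central, easy-to-scan list discussed later in this post.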
Handling special cases
No framework can perfectly capture every scenario. A mature policy acknowledges and plans for exceptions, as you will inevitably encounter tools that don't fit neatly into the three tiers.
- Experimental and pilot programs: Your most innovative teams will build or discover new AI tools that aren't ready for a full enterprise rollout. Create a specific "Experimental" category for these tools. Keep the policy strict; consider treating them like Tier 2 tools (public data only). Their existence, however, creates a sanctioned space for R&D, allowing innovation to flourish without waiting for a full security review and giving these tools a clear path to graduate to Tier 1 once proven.
- Department-specific tools: A one-size-fits-all toolset is a myth. Your sales team may need a specialized AI-powered CRM tool that is irrelevant to engineering. Your policy should include a clear, lightweight process for individual departments to get approval for their specific tools. This provides flexibility and ensures teams have what they need to succeed.
Making your policies usable
The best policies are useless if no one knows they exist or if they are too difficult to understand. To make your guardrails effective, you must make them accessible.
- Create a central, simple list: Don't bury your policies in a 50-page document. Create a simple, easy-to-scan page on your internal wiki that lists the approved tools for each tier, including any special cases.
- Provide clear points of contact: Designate a specific place where employees can ask questions and get fast answers. For us, a dedicated Slack channel (#security-help) where employees can ping the security team directly has been invaluable.
- Socialize and educate: Regularly communicate your policies in company all-hands meetings, newsletters, and through your AI Advocate network. The goal is to make the "rules of the road" a part of the company's shared knowledge.
By implementing clear guardrails, you are not slowing down your AI adoption. You are building the foundation of trust and safety that gives your employees the freedom and confidence to move fast, experiment boldly, and unlock the full potential of AI.
Putting it all together
Creating effective AI guardrails isn't about locking things down — it's about unlocking safe, scalable adoption. Clear policies give employees the confidence to move fast without second-guessing what’s allowed. They turn ambiguity into alignment.
With a strong data classification model, a practical tool-tiering framework, and a lightweight evaluation process, you equip your organization to keep pace with AI’s rapid evolution — without sacrificing trust, security, or momentum.
Want to learn more about the strategic role of AI and other innovations at GitHub? Explore Executive Insights for more thought leadership on the future of technology and business.