Packmind seamlessly captures your engineering playbook and turns it into AI context, guardrails, and governance.
🔥🔥🔥 Enterprise AI middleware, alternative to unifyapps, n8n, lyzr
FSPEC, the Spec-Driven Multi-Agent Coding Factory, is infrastructure for the "Dark Factory": the emerging model of fully autonomous software development in which AI agents handle all implementation while humans focus on defining what to build and why.
Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle hooks, parallel execution, async guardrails, conditional routing, and tool-level permissions.
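The "lifecycle hooks" idea above can be sketched generically: middlewares see the prompt before the model call and the response after it. This is an illustrative sketch only; the `Middleware` class, hook names, and `RedactSecrets` example below are hypothetical and are not Pydantic AI's actual API.

```python
# Conceptual sketch of hook-style middleware around an agent call.
# The hook names and classes here are illustrative, not a real library API.
from typing import Callable

class Middleware:
    def before_request(self, prompt: str) -> str:   # pre-call hook
        return prompt
    def after_response(self, response: str) -> str:  # post-call hook
        return response

class RedactSecrets(Middleware):
    def before_request(self, prompt: str) -> str:
        # Guard hook: strip a fake API key before it reaches the model.
        return prompt.replace("sk-test-123", "[REDACTED]")

def run_agent(prompt: str, middlewares: list[Middleware],
              model: Callable[[str], str]) -> str:
    for m in middlewares:            # pre-call hooks run in order
        prompt = m.before_request(prompt)
    response = model(prompt)
    for m in reversed(middlewares):  # post-call hooks run in reverse order
        response = m.after_response(response)
    return response

echo = lambda p: f"echo: {p}"
print(run_agent("my key is sk-test-123", [RedactSecrets()], echo))
# prints "echo: my key is [REDACTED]"
```

Running post-call hooks in reverse order mirrors how onion-style middleware stacks (e.g. WSGI/ASGI) unwind, so the outermost guard sees both the first request and the last response.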
Validate that supporting text quotes in your data actually appear in their cited references
Mechanical enforcement tools to prevent AI agents from bypassing established project standards.
A Python implementation of the VETTING (Verification and Evaluation Tool for Targeting Invalid Narrative Generation) framework for LLM safety and educational applications.
L0: The Missing Reliability Substrate for AI. Streaming-first. Reliable. Replayable. Deterministic. Multimodal. Retries. Continuation. Fallbacks (provider & model). Consensus. Parallelization. Guardrails. Atomic event logs. Byte-for-byte replays.
A secure, governable AI gateway for Splunk with operational guardrails. An alternative to Splunk AI Assistant focused on safety, compliance, and predictable results using a 'Configuration as Code' approach.
SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.
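The "unit testing for your AI's output" idea can be sketched in a few lines: express each policy rule as data, then assert that a model's output violates none of them. This is a conceptual sketch only; SpecGuard itself is a CLI, and the `POLICY` table and `violations` helper below are hypothetical names, not its interface.

```python
# Illustrative sketch of policy-as-executable-test; not SpecGuard's API.
import re

# Each policy rule is a named pattern the AI's output must never contain.
POLICY = {
    "no_email_addresses": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "no_us_phone_numbers": r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",
}

def violations(output: str) -> list[str]:
    """Return the names of every policy rule the output violates."""
    return [name for name, pattern in POLICY.items()
            if re.search(pattern, output)]

# The "unit test": fails loudly if the model leaked contact details.
model_output = "Sure! Please reach out to our support team for help."
assert violations(model_output) == []
```

Real tools in this space typically go beyond regexes (semantic checks, LLM-as-judge), but the enforcement shape is the same: policy in, pass/fail out, runnable in CI.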
DecipherGuard: Understanding and Deciphering Jailbreak Prompts for a Safer Deployment of Intelligent Software Systems
Modular and safe prompt templates for GPT agents (resume, HR, guardrails)
An educational example showing how to build a guardrailed, tool-augmented AI assistant in C# (.NET 10) using Ollama, with deterministic validation, tool constraints, timeouts, and safe fallbacks.
Structured guardrails for Claude Code — scope control, complexity limits, automated review, smart commits. 8 skills, 3 agents, auto-loaded rules.
🤖 Build a guardrailed, tool-augmented AI assistant in C# with deterministic boundaries for safe, reliable outputs and local chat capabilities.
🛡 Secure LLM apps by managing untrusted content through a fast, local, model-agnostic pipeline with shared security checks.
preamble.md is a security policy file that governs AI agent behavior. It defines what agents can do, what requires approval, and what is forbidden.
🛡️ Enforce AI behavior guidelines with SpecGuard, a tool that turns policies into executable tests for reliable and scalable AI output management.
AI Agent Guardrails on Elasticsearch — Trust Gate that verifies LLM responses against organizational experience before delivery