A FastAPI-based AI agent that helps users with GibsonAI and Memori documentation using Agno agents and session-based persistent memory.
- Session-Based Memory: Each browser session gets its own agent instance with isolated conversation memory
- Persistent Sessions: Maintains conversation context across multiple requests within a session
- Flexible Knowledge Base: Automatically loads and indexes documentation from multiple GitHub sources
- Web Search: Uses DuckDuckGo for additional context when documentation isn't sufficient
- RESTful API: Clean session management with automatic cleanup
- FastAPI: High-performance web framework with automatic interactive documentation and CORS support
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure environment variables: Copy `.env.example` to `.env` and update the values:

  ```bash
  cp .env.example .env  # or just edit .env directly
  ```

  Set your OpenAI API key and GitHub documentation URLs:

  ```bash
  # .env file
  OPENAI_API_KEY=your-openai-api-key-here
  GITHUB_DOCS_URLS=https://raw.githubusercontent.com/GibsonAI/memori/refs/heads/main/docs/llms.txt
  ```
- Run the server:

  ```bash
  uvicorn main:app --reload
  ```

- Open your browser:
  - API: http://localhost:8000
  - Docs: http://localhost:8000/docs
- Start a Session: Each browser/client starts a new session to get a unique session ID
- Separate Agent Instances: Each session gets its own agent instance for true isolation
- Shared Memory: Agents for the same user share conversation memory across sessions
- Automatic Cleanup: Old sessions are automatically cleaned up to prevent memory leaks (see the sketch after this list)
- Session Persistence: Conversation context is maintained throughout the session
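As a rough illustration of the automatic cleanup step, a periodic sweep over per-session activity timestamps is enough; the names and the exact one-hour policy below are assumptions, not the project's actual implementation:

```python
import time

SESSION_TTL_SECONDS = 60 * 60  # assumed idle timeout of one hour

# Hypothetical map of session_id -> timestamp of the session's last request.
last_activity: dict[str, float] = {}

def touch(session_id: str) -> None:
    """Record activity so the session is not considered stale."""
    last_activity[session_id] = time.time()

def cleanup_stale_sessions() -> list[str]:
    """Drop sessions idle longer than the TTL and return the removed IDs."""
    now = time.time()
    stale = [sid for sid, ts in last_activity.items() if now - ts > SESSION_TTL_SECONDS]
    for sid in stale:
        del last_activity[sid]
    return stale
```

Running cleanup_stale_sessions() from a background task keeps long-running deployments from accumulating abandoned sessions.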
The system implements a hybrid approach that provides both session isolation and memory continuity:
- Same user_id = Shared conversation memory across all sessions
- Different session_id = Separate agent instances for isolation
- Cross-session memory = Users can continue conversations across browser sessions
- Memory isolation = Different users cannot access each other's memories
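A minimal sketch of how this hybrid model can be wired up server-side; the registry, SessionEntry, and create_agent names are illustrative assumptions rather than the project's actual code:

```python
import time
from dataclasses import dataclass, field

def create_agent(user_id: str, session_id: str):
    # Placeholder for the real Agno agent factory (sketched further below);
    # a plain dict keeps this example runnable on its own.
    return {"user_id": user_id, "session_id": session_id}

@dataclass
class SessionEntry:
    agent: object
    user_id: str
    last_used: float = field(default_factory=time.time)

# One agent per session_id; agents built with the same user_id share
# conversation memory, which is what gives cross-session continuity.
sessions: dict[str, SessionEntry] = {}

def get_or_create_agent(session_id: str, user_id: str):
    """Return the agent bound to this session, creating it on first use."""
    entry = sessions.get(session_id)
    if entry is None:
        entry = SessionEntry(agent=create_agent(user_id, session_id), user_id=user_id)
        sessions[session_id] = entry
    entry.last_used = time.time()
    return entry.agent
```

From the frontend, the same flow looks like this: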
```javascript
// 1. Start a new session
const sessionResponse = await fetch('/session/start', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ user_id: 'optional_user_id' })
});
const { session_id } = await sessionResponse.json();

// 2. Ask questions using the session ID
const askResponse = await fetch('/ask', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: 'How do I use GibsonAI?',
    session_id: session_id,
    user_id: 'optional_user_id'
  })
});
const { answer } = await askResponse.json();
```

- POST /session/start - Start a new chat session
- GET /sessions/{session_id}/info - Get session information
- GET /sessions/{session_id}/memory - View session conversation memory
- POST /ask - Ask questions (requires session_id)
- GET / - API overview and health check
- GET /health - Detailed health check with session count
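The same endpoints can be exercised quickly from Python with requests; apart from session_id and answer, the JSON shapes returned by the info and memory endpoints are not documented here, so the sketch simply prints them:

```python
import requests

BASE_URL = "http://localhost:8000"  # default uvicorn host/port from the setup above

# 1. Start a session and keep its ID.
session = requests.post(f"{BASE_URL}/session/start", json={"user_id": "demo_user"}).json()
session_id = session["session_id"]

# 2. Ask a question inside that session.
reply = requests.post(
    f"{BASE_URL}/ask",
    json={"question": "What is Memori?", "session_id": session_id, "user_id": "demo_user"},
).json()
print(reply["answer"])

# 3. Inspect the session; these response shapes are assumptions, so print them as-is.
print(requests.get(f"{BASE_URL}/sessions/{session_id}/info").json())
print(requests.get(f"{BASE_URL}/sessions/{session_id}/memory").json())
```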
You can configure multiple GitHub documentation sources by separating URLs with commas in the .env file:
```bash
GITHUB_DOCS_URLS=https://raw.githubusercontent.com/GibsonAI/memori/refs/heads/main/docs/llms.txt,https://raw.githubusercontent.com/GibsonAI/another-repo/main/docs/api.txt,https://raw.githubusercontent.com/GibsonAI/third-repo/main/README.md
```

- OPENAI_API_KEY: Your OpenAI API key (required)
- GITHUB_DOCS_URLS: Comma-separated list of GitHub raw file URLs (required)
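A sketch of how the server might turn that variable into raw documentation text before indexing; the function name and fetching approach are assumptions, and only the comma-separated format comes from the configuration above:

```python
import os
import requests

def load_github_docs() -> dict[str, str]:
    """Fetch each configured raw GitHub file and return a {url: text} map."""
    urls = [u.strip() for u in os.environ["GITHUB_DOCS_URLS"].split(",") if u.strip()]
    docs: dict[str, str] = {}
    for url in urls:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        docs[url] = resp.text
    return docs

# The fetched text would then be chunked and indexed into the shared
# LanceDB knowledge base described in the architecture notes below.
```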
See frontend_example.js for a complete JavaScript client example that demonstrates:
- Session management
- Conversation flow
- Error handling
- User interface integration
```javascript
// Basic usage
const client = new GibsonAIClient();
await client.startSession('user123');
const answer = await client.askQuestion('What is GibsonAI?');
```

- Session-Based Agents: Each session gets a unique Agno agent instance (see the sketch after this list)
- Shared Memory: Same user_id enables memory sharing across sessions using Agno's built-in memory system
- Knowledge Base: Shared LanceDB vector store with GitHub documentation
- DuckDuckGo Tools: Web search for additional context
- Automatic Cleanup: Background cleanup of old sessions (1+ hour)
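A rough sketch of what building such a per-session agent could look like; the Agno import paths and constructor parameters below are assumptions based on typical Agno usage, not code taken from this project:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

def create_agent(user_id: str, session_id: str) -> Agent:
    """Build one agent per session; sharing the user_id is what lets
    Agno tie conversation memory together across that user's sessions."""
    return Agent(
        model=OpenAIChat(id="gpt-4o"),  # assumed model choice
        tools=[DuckDuckGoTools()],      # web search fallback when docs are not enough
        user_id=user_id,
        session_id=session_id,
        add_history_to_messages=True,   # carry conversation context between turns
        # The shared LanceDB-backed knowledge base would also be attached here;
        # omitted to keep the sketch short.
    )
```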
- True Session Isolation: Each browser session gets its own agent instance
- Memory Continuity: Conversation history persists across browser sessions for the same user
- Memory Efficiency: Automatic cleanup prevents memory leaks
- Scalable: Can handle multiple concurrent users with isolated contexts
- RESTful Design: Clean API design following REST principles
- Frontend Ready: Easy integration with web applications
- Best of Both Worlds: Session isolation + memory continuity