A privacy-first AI desktop client for Linux that unifies multiple cloud AI providers and local models into a single conversational interface.
- Multi-Provider Support: OpenAI, Ollama (local), Groq, HuggingFace, OpenRouter
- Offline Capability: Full functionality with local Ollama models
- Streaming Responses: Real-time token-by-token response rendering
- Privacy-First: No telemetry, local key storage, optional encryption
- Intelligent Routing: Automatic model selection based on request requirements (see the sketch after this list)
- Extensible Architecture: Plugin system for custom providers and tools
- Modern UI: Dark theme with PyQt6 interface
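As a rough illustration of the routing idea (the real logic lives in core/provider_router.py; the requirement flags and provider ordering below are illustrative assumptions, not the actual API):

```python
from dataclasses import dataclass

@dataclass
class Requirements:
    """Illustrative request constraints; the real router's criteria may differ."""
    offline_only: bool = False
    prefer_fast: bool = False

def pick_provider(req: Requirements, available: list) -> str:
    """Choose a provider name from the configured ones (sketch only)."""
    if req.offline_only and "ollama" in available:
        return "ollama"   # local models, no network required
    if req.prefer_fast and "groq" in available:
        return "groq"     # low-latency hosted inference
    return available[0]   # otherwise fall back to the first configured provider

# Example: an offline-only request is routed to the local Ollama backend.
print(pick_provider(Requirements(offline_only=True), ["openai", "ollama", "groq"]))
```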
- Clone the repository:

```bash
git clone https://github.com/yourusername/chat-linux-client.git
cd chat-linux-client
```

- Run the installation script:

```bash
./scripts/install.sh
```

- Run the application:

```bash
./scripts/run.sh
```

Alternatively, install manually:

- Create a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Run the application:

```bash
python main.py
```

Configure API keys through the application settings or by setting environment variables:
- GROQ_API_KEY: Groq API key
- HUGGINGFACE_API_KEY: HuggingFace API key
- OPENROUTER_API_KEY: OpenRouter API key
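At startup the client can pick these up from the environment; the snippet below is a minimal sketch of that lookup (the mapping and function name are illustrative, not the actual utils/key_handler.py API):

```python
import os

# Environment variables consulted for each provider (illustrative mapping).
PROVIDER_ENV_VARS = {
    "groq": "GROQ_API_KEY",
    "huggingface": "HUGGINGFACE_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

def load_api_keys():
    """Return whichever keys are set; missing ones come back as None."""
    return {name: os.environ.get(var) for name, var in PROVIDER_ENV_VARS.items()}

for name, key in load_api_keys().items():
    print(f"{name}: {'configured' if key else 'not set'}")
```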
For offline AI support, install Ollama:
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

Then pull models:

```bash
ollama pull llama2
ollama pull mistral
```
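To confirm the local server is reachable before launching the client, you can query Ollama's model list (this uses Ollama's standard REST endpoint; the default port 11434 is assumed):

```python
import json
import urllib.request

# List models from the local Ollama server (default port 11434 assumed).
OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"

try:
    with urllib.request.urlopen(OLLAMA_TAGS_URL, timeout=3) as resp:
        models = json.load(resp).get("models", [])
    print("Ollama is running; installed models:")
    for model in models:
        print(" -", model.get("name"))
except OSError as exc:
    print("Ollama not reachable:", exc)
```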
- Select Model: Choose your preferred AI provider and model from the dropdown
- Configure Settings: Set up API keys and preferences in the settings menu
- Start Chatting: Type your message and press Enter or click Send
- View History: Chat history is automatically saved and can be exported
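For context, a minimal sketch of how a conversation could be written out as JSON for export (the actual schema and location used by storage/history_manager.py may differ; the path and field names here are assumptions):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical history location; the real history_manager may use another path/schema.
HISTORY_DIR = Path.home() / ".local" / "share" / "chat-linux-client" / "history"

def export_conversation(messages, title="Untitled chat"):
    """Write a conversation to a timestamped JSON file and return its path."""
    HISTORY_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = HISTORY_DIR / f"{stamp}.json"
    path.write_text(json.dumps({"title": title, "messages": messages}, indent=2))
    return path

print(export_conversation([{"role": "user", "content": "Hello"}]))
```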
```
chat-linux-client/
|
|--- main.py # Application entry point
|--- requirements.txt # Python dependencies
|--- README.md # This file
|
|--- core/ # Core AI provider logic
| |--- api_client.py # Base API client interface
| |--- ollama_client.py # Ollama local AI client
| |--- groq_client.py # Groq API client
| |--- huggingface_client.py # HuggingFace API client
| |--- openrouter_client.py # OpenRouter API client
| |--- provider_router.py # Intelligent routing engine
| |--- settings.py # Configuration management
| |--- model_manager.py # Model information and selection
|
|--- ui/ # User interface
| |--- main_window.py # Main PyQt6 window
|
|--- storage/ # Data persistence
| |--- history_manager.py # Chat history storage
| |--- config_manager.py # Application configuration
|
|--- utils/ # Utility modules
| |--- markdown_renderer.py # Markdown to HTML rendering
| |--- key_handler.py # Secure API key storage
| |--- system_checks.py # Environment validation
|
|--- styles/ # UI styling
| |--- dark.qss # Dark theme stylesheet
|
|--- assets/ # Static assets
| |--- icon.png # Application icon
|
|--- scripts/ # Build and run scripts
| |--- install.sh # Installation script
| |--- run.sh # Application launcher
| |--- build_appimage.sh # AppImage build script
|
|--- packaging/ # Distribution packaging
| |--- chatgpt-client.desktop # Desktop entry
| |--- AppImageBuilder.yml # AppImage configuration
```
The system follows a modular architecture with clear separation of concerns:
- UI Layer: PyQt6 desktop interface with dark theme
- Routing Engine: Intelligent model selection and request routing
- Provider Layer: Multiple AI provider implementations behind a common client interface (sketched below)
- Storage Layer: Persistent chat history and configuration
- Utility Layer: Helper functions and system integration
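To make the layering concrete, here is a stripped-down sketch of what the provider abstraction might look like (the real interface lives in core/api_client.py; the method names and signatures below are assumptions):

```python
from abc import ABC, abstractmethod
from typing import Iterator, List

class BaseAPIClient(ABC):
    """Hypothetical provider contract; Ollama, Groq, etc. clients would implement it."""

    @abstractmethod
    def list_models(self) -> List[str]:
        """Return the model identifiers this provider can serve."""

    @abstractmethod
    def stream_chat(self, model: str, messages: List[dict]) -> Iterator[str]:
        """Yield response tokens one at a time for the UI to render incrementally."""

class EchoClient(BaseAPIClient):
    """Toy implementation used only to show the contract."""

    def list_models(self):
        return ["echo"]

    def stream_chat(self, model, messages):
        for token in messages[-1]["content"].split():
            yield token + " "

client = EchoClient()
print("".join(client.stream_chat("echo", [{"role": "user", "content": "hello streaming world"}])))
```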
```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Run tests
pytest tests/

# Build an AppImage
./scripts/build_appimage.sh

# Format code
black .

# Lint code
flake8 .

# Type checking
mypy .
```

- Python: 3.8 or higher
- Operating System: Linux (Ubuntu 20.04+, Fedora 35+, Arch Linux)
- Memory: 4GB RAM minimum (8GB recommended)
- Storage: 500MB free space
- Optional: Ollama for local AI models
- Import Errors: Ensure virtual environment is activated
- API Connection: Check internet connectivity and API keys
- Ollama Not Found: Install Ollama for local model support
- Permission Errors: Check file permissions for config/data directories
Run comprehensive system checks:

```bash
python main.py --check-system
```

Application logs are stored in:
- Linux: ~/.local/share/chat-linux-client/logs/
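As a rough illustration of the kind of validation --check-system performs (the actual checks live in utils/system_checks.py and may differ; everything below is an assumption):

```python
import shutil
import sys
import urllib.request

# Illustrative environment checks; the real ones live in utils/system_checks.py.
def check_python():
    """The README requires Python 3.8 or newer."""
    return sys.version_info >= (3, 8)

def check_ollama():
    """Return True if a local Ollama server answers on the default port."""
    try:
        urllib.request.urlopen("http://localhost:11434/api/tags", timeout=2)
        return True
    except OSError:
        return False

def check_disk(minimum_mb=500):
    """The README asks for at least 500MB of free space."""
    return shutil.disk_usage(".").free >= minimum_mb * 1024 * 1024

for name, ok in [("python", check_python()), ("ollama", check_ollama()), ("disk", check_disk())]:
    print(f"{name}: {'ok' if ok else 'FAIL'}")
```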
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For issues and questions:
- Check the troubleshooting section
- Run system checks for diagnostics
- Create an issue on the project repository
Planned features:
- Voice interface (speech-to-text + TTS)
- Local RAG knowledge system
- Multi-window chat sessions
- Agent-based task automation
- System tray background assistant mode
- Plugin marketplace
- Custom theme support