# Building an AI Agent for PDF Question Answering with LangChain and Ollama

This repository contains an example of building an AI agent that can answer questions from PDF documents using LangChain and Ollama. The agent uses the gemma3:4b model to process and respond to queries based on the content of the PDF files.
## Prerequisites

- Python 3.8+
- Ollama installed and running
- The following Ollama models pulled:
  - gemma3:4b
  - nomic-embed-text:latest
## Installation

- Clone this repository:

  ```sh
  git clone git@github.com:woliveiras/reader-agent.git
  cd reader-agent
  ```

- Pull the required models with Ollama:

  ```sh
  ollama pull gemma3:4b
  ollama pull nomic-embed-text:latest
  ```

- Create and activate a virtual environment:

  ```sh
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install the dependencies:

  ```sh
  pip install -r requirements.txt
  ```

- Create the data directory:

  ```sh
  mkdir data
  ```

- Download the PDF files:

  Add some PDF files to the `data` directory. You can use any PDF files you want to test the agent.
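Before starting the agent, it can help to confirm that the `data` directory actually contains PDFs. A minimal sketch (the `list_pdfs` helper is illustrative, not part of the repository):

```python
from pathlib import Path


def list_pdfs(data_dir: str = "data") -> list[str]:
    """Return the sorted names of all PDF files in the given directory."""
    return sorted(p.name for p in Path(data_dir).glob("*.pdf"))


if __name__ == "__main__":
    pdfs = list_pdfs()
    if not pdfs:
        print("No PDFs found in data/ -- add some before running the agent.")
    else:
        print(f"Found {len(pdfs)} PDF(s): {', '.join(pdfs)}")
```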
## Usage

To run the agent and test its functionality, execute the following command:

```sh
python agent.py
```

This will start the agent, which will process the PDF files in the `data` directory and allow you to ask questions about their content.
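Under the hood, an agent like this typically splits the PDFs into chunks, embeds each chunk with nomic-embed-text, and at query time retrieves the chunks most similar to the question before asking gemma3:4b to answer from them. The retrieval step can be illustrated with toy vectors; the helper names and numbers below are illustrative, not the actual `agent.py` code:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def top_chunks(query_vec, chunk_vecs, chunks, k=2):
    """Return the k chunks whose embeddings are most similar to the query."""
    scored = sorted(
        zip(chunks, chunk_vecs),
        key=lambda cv: cosine_similarity(query_vec, cv[1]),
        reverse=True,
    )
    return [chunk for chunk, _ in scored[:k]]


# Toy example: in the real agent, these vectors come from nomic-embed-text.
chunks = [
    "Invoices are due in 30 days.",
    "The warranty lasts two years.",
    "Returns need a receipt.",
]
vecs = [[1.0, 0.1], [0.2, 1.0], [0.9, 0.3]]
print(top_chunks([1.0, 0.0], vecs, chunks, k=2))
```

The retrieved chunks are then inserted into the prompt sent to the language model, which is what lets a small local model answer questions about documents it was never trained on.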
## Project Structure

- `agent.py`: The main script that initializes the agent and handles user queries.
- `requirements.txt`: The list of Python dependencies required to run the agent.
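The exact contents of `requirements.txt` are defined by the repository, but a LangChain + Ollama setup like this one typically depends on packages along these lines (the names below are an assumption; check the actual file in the repo):

```txt
langchain
langchain-community
langchain-ollama
pypdf
```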