Building an AI Agent for PDF Question Answering with LangChain and Ollama

This repository contains an example of building an AI agent that can answer questions about PDF documents using LangChain and Ollama. The agent uses the gemma3:4b model to answer queries based on the content of the PDF files, with nomic-embed-text providing the embeddings.

Requirements

  • Python 3.8+
  • Ollama installed and running
  • The following Ollama models pulled:
    • gemma3:4b
    • nomic-embed-text:latest

Setup

  1. Clone this repository
git clone git@github.com:woliveiras/reader-agent.git
cd reader-agent
  2. Pull the required models with Ollama
ollama pull gemma3:4b
ollama pull nomic-embed-text:latest
  3. Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  4. Install dependencies
pip install -r requirements.txt
  5. Create the data directory
mkdir data
  6. Add the PDF files

Add some PDF files to the data directory. You can use any PDF files you want to test the agent.

Running the Examples

To run the agent and test its functionality, execute the following command:

python agent.py

This will start the agent, which will process the PDF files in the data directory and allow you to ask questions about their content.
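The actual implementation lives in agent.py; as a rough orientation, a minimal retrieval-based question-answering loop built around these two models could look like the sketch below. Everything in it (the directory loader, chunk sizes, the FAISS vector store, the RetrievalQA chain, and the interactive loop) is an assumption for illustration, not the repository's code.

```python
# Minimal sketch of a PDF question-answering agent with LangChain + Ollama.
# All structural choices here are assumptions, not the repository's agent.py.
from langchain_community.document_loaders import PyPDFDirectoryLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_ollama import OllamaEmbeddings, ChatOllama
from langchain.chains import RetrievalQA

# Load every PDF found in the data directory.
documents = PyPDFDirectoryLoader("data").load()

# Split the pages into overlapping chunks so each fits the embedding model.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(documents)

# Embed the chunks with nomic-embed-text and index them in an in-memory FAISS store.
embeddings = OllamaEmbeddings(model="nomic-embed-text:latest")
vector_store = FAISS.from_documents(chunks, embeddings)

# Answer questions with gemma3:4b, retrieving the most relevant chunks as context.
llm = ChatOllama(model="gemma3:4b")
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vector_store.as_retriever())

while True:
    question = input("Ask a question (or 'exit'): ")
    if question.strip().lower() == "exit":
        break
    print(qa_chain.invoke({"query": question})["result"])
```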

Files

  • agent.py: The main script that initializes the agent and handles user queries.
  • requirements.txt: The list of Python dependencies required to run the agent; an illustrative example follows this list.
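The file in the repository is the source of truth. If you are reproducing a similar setup from scratch, the packages needed by the sketch above would be roughly the following (versions and exact package names are assumptions):

```text
langchain
langchain-community
langchain-ollama
langchain-text-splitters
pypdf
faiss-cpu
```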

References

Building an AI Agent for PDF Question Answering with LangChain and Ollama
