Build a Python + ChatGPT-3.5 Chatbot in 10 Minutes
Today, I’m at peace with the existence of ChatGPT — but when it first launched, I was terrified. My mother, a fourth-grade teacher, promptly informed me (a career writer and technical content manager) that I’d soon be out of a job. Years later, still employed, I now strive to make ChatGPT my ally. Not only have I come to appreciate how helpful it can be, but I also figure it’s wise not to be enemy No. 1 when the robots take over.
That said, why am I rambling on about this? Well, understanding the enemy was a step forward in befriending it. Working with AI is an incredibly valuable skill. And this extends past just asking ChatGPT questions (which I now love). Learning how to code with or alongside the model is the next step in really harnessing the power of GPT.
I designed this tutorial to help beginners get started in understanding ChatGPT’s API and response logic. It won’t teach you how to build a fancy application on top of GPT, but it will help you understand how to write code that interacts with the API from OpenAI, the creator of ChatGPT. Since this is a no-frills tutorial, I built it using a Jupyter Notebook. Jupyter Notebooks are great for prototyping and testing ideas. If you are unfamiliar with Jupyter Notebooks, check out this guide on getting started.
Get Started With OpenAI
- Sign up at platform.openai.com.
- Access the API dashboard.
- Go to your API keys page, click “Create new secret key” and copy the key immediately. You won’t be able to view it again later.
- Add billing information.
- While OpenAI offers a free trial, I added a payment method. A $10 credit is usually enough to experiment with GPT-3.5.
- Note: You’ll need either the free trial or active billing to use GPT-3.5.
Building the Chatbot With ChatGPT-3.5
Import Required Libraries and Dependencies
Paste the following into your first Jupyter Notebook cell:
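A minimal version of that cell, assuming you’re installing with pip from inside the notebook, looks like this:

```python
# Install the OpenAI client library and python-dotenv from within the notebook.
!pip install openai python-dotenv
```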
openai lets you interact with OpenAI’s models, like GPT-3.5. python-dotenv securely manages sensitive information like API keys, loading them from a .env file into your Python environment.
Import the Modules
Use the following code to import openai, os and dotenv, which help configure your environment and interact with the API:
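In a new cell, that looks like this:

```python
import os  # read environment variables

import openai  # OpenAI API client
from dotenv import load_dotenv  # loads variables from a .env file
```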
Load Your API Key
Make sure you’ve created an API key in your OpenAI dashboard. You’ll now set up a .env file to store it securely:
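Create a file named .env in the same folder as your notebook. The variable name OPENAI_API_KEY is a common convention I’m assuming here; any name works as long as you read back the same one later:

```
OPENAI_API_KEY=sk-proj-api-key-here
```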
Paste your actual API key where it says sk-proj-api-key-here. Next, load the key in your notebook:
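Assuming the variable name from the .env file above, a short cell takes care of it:

```python
load_dotenv()  # pull values from the .env file into the environment
openai.api_key = os.getenv("OPENAI_API_KEY")  # hand the key to the OpenAI client
```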
Define the Chatbot Function
This is where we define how the chatbot communicates with OpenAI’s API and handles the conversation.
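Here’s a sketch of that function. The model name gpt-3.5-turbo is my assumption (any GPT-3.5 chat model will do), and the temperature value is explained below:

```python
def chat_with_gpt(messages):
    # Send the conversation history so far and ask for the next reply.
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed GPT-3.5 chat model
        messages=messages,
        temperature=0.7,        # lower = more predictable, higher = more creative
    )
    # The generated text lives in the first choice's message content.
    return response.choices[0].message.content
```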
The method openai.chat.completions.create() is the API call designed specifically for interactive chat. It lets you communicate with GPT-3.5 by sending a list of messages and receiving a relevant response based on the ongoing conversation.
GPT models like GPT-3.5 don’t actually remember anything between API calls; the chat endpoint is stateless. What makes a session feel coherent is that you send the earlier parts of the conversation along with each new request. Our chat_with_gpt function accepts a list of messages, which serves as the ongoing dialogue history, so the assistant can respond in context.
Each message in the list includes a "role" value to tell the model who the message is from:
"user"for the person asking questions"assistant"for GPT responses"system"sets behavior instructions (optional)
The temperature value controls the randomness or creativity of the assistant’s response. The API accepts values from 0 to 2, though most people stay between 0 and 1: lower values produce more predictable responses, while higher values are more random and creative. A value of 0.7 is a common middle ground between the two.
The API returns a response object. From there, we extract the assistant’s reply with choices[0].message.content, which refers to the actual text generated by GPT.
Run the Chat Loop
Now we’ll make the chatbot interactive so the user can have a continuous conversation with the model.
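A loop along these lines does the job; the system message is optional and its wording is just an example:

```python
# Optional system message that sets the assistant's behavior.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")

    # Typing "exit" or "quit" ends the chat.
    if user_input.lower() in ("exit", "quit"):
        print("Chatbot: Goodbye!")
        break

    # Add the user's message to the history, get GPT-3.5's reply, and store it too.
    messages.append({"role": "user", "content": user_input})
    reply = chat_with_gpt(messages)
    messages.append({"role": "assistant", "content": reply})

    print("Chatbot:", reply)
```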
This section sets up an infinite loop that keeps the chatbot running until the user types exit or quit. The loop itself only keeps the conversation going; what makes the chatbot “aware” of past messages is the messages list, which accumulates everything that’s already been said. That preserved context allows GPT-3.5 to respond in more natural, meaningful ways.
Here’s what’s happening step by step:
- The chatbot prompts the user for input using input().
- If the user types “exit” or “quit”, the loop breaks and the chatbot says goodbye.
- If the user does not type “exit” or “quit”, messages.append(...) adds the user’s last message to the conversation history.
- The function chat_with_gpt(messages) is called to get the assistant’s response.
- The assistant’s reply is also added to the history using messages.append(...).
- Finally, the response is printed to the console so the user can read it.
Befriending the Robots
This project gives you a basic understanding of how to build a chatbot using GPT-3.5. While it’s just the beginning, you now know how to authenticate with the OpenAI API, maintain a conversation loop and use Jupyter Notebooks to test and run code. It’s a great first step toward more advanced AI development and befriending the robots.