Get started building with Generative AI

A step-by-step guide to making a GPT-3 AI chat app

Kawandeep Virdee
9 min read · Jan 30, 2023

Generative AI is all the hype now, and for good reason. It’s fascinating, and once you start using it, it’ll feel like magic. In this post I’ll go step by step and share some code for building your own generative AI chat app. Specifically, we’ll make a gratitude chat app that encourages a gratitude practice. You’ll see that this approach is flexible enough to make a whole range of chat apps.

As an aside, for a general overview of the generative AI space and the possibilities in it, check out this Twitter thread.

I used this approach to make a creative coach chat app. You can see some of the conversations here. You can imagine chats as a simple, interactive, and intuitive interface for a variety of use cases — like navigating dense info, or supporting specific habits.


The app itself will use OpenAI’s GPT-3, and the interface will be a simple chat app in React. GPT-3 is an AI tool that sends back ‘completions’ given some input text.

We’re going to design a prompt that we’ll feed into GPT-3. We’ll include the characteristics of a chat app, and whatever behaviors we want to encourage. We’ll prototype using ChatGPT. Then we’ll drop the prompt and GPT-3 API call into a React chat app.

Here’s what we’ll make today. It’s got all your AI chat essentials.

Step 1: Everything comes down to Prompt Design

An effective prompt is key to leveraging GPT-3. It doesn’t have to be complicated, but a few guidelines will go a long way toward success.

The prompt is a way to program GPT-3 to produce a particular output given an input. It reminds me of transfer learning, where you take a powerful model and tune it for a particular application. Here the prompt tunes GPT-3 to fulfill our specific application. I wrote about the power and magic of transfer learning a couple weeks ago, and I find it a helpful framework for thinking about applications built on powerful AI models.

Take a look at OpenAI’s prompt design guide. In short, the recommendations are to show AND tell, provide quality data, and check your model settings.

Our models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in describing what you want. Showing, not just telling, is often the secret to a good prompt.

So describe what you want GPT-3 to do, and also give a few examples. Specific to conversation prompts — OpenAI’s suggestions are to describe what the chat responses are like, and also give the API an identity. Take a look at the examples in the OpenAI prompt guide to start getting a feel (examples help for learning for both GPT-3 and people 😛).

You can also think of it as a game of charades, as described in “A Beginner’s Guide to GPT-3”:

The secret to writing good prompts is understanding what GPT-3 knows about the world and how to get the model to use that information to generate useful results. In a game of charades, our goal is to give players just enough information to figure out the right word using their intelligence. In the same way, we must give GPT-3 just enough context in the form of a training prompt for the model to figure out the patterns and perform the task. We don’t want to interrupt the natural intelligence flow of the model by overloading it with information, but giving it too little can lead to inaccurate results.

It’s a bit of an art, and fortunately there are great ways to prototype. Hop onto ChatGPT and put in your prompt. See how GPT responds, and tweak.

For the gratitude example, I’d like the app to encourage me to feel gratitude. If I’m having trouble thinking of something, it can help me. In journaling, gratitude practices are common. What would it be like to make it interactive? To get some help in it if needed?

I include the kind of outcomes I’d like from the app in the prompt, and also some example behaviors. I share some specific responses, but I actually don’t share an example conversation. This is something that I could include if I sensed GPT wasn’t catching onto what I’m looking for.

Dialogue with ChatGPT

This was my first pass at the prompt. The response felt too meta and a bit long: I don’t want it to express gratitude for talking with me, but rather to share something broader as an example.

This works better, but it didn’t really ask me about my own gratitude.

This worked much better. I had a few conversations in ChatGPT with this prompt and found them helpful. Now I ask for a few more ways to start the conversation — we’ll use this in our app later. Instead of calling the OpenAI API every time I refresh the app, I will use one of these responses. I do this to be more efficient with API calls.

Now jump into the OpenAI playground Chat example. Drop in the prompt you were working with and see how it feels. You may have to make some tweaks. In a previous chat app project, for example, I switched from describing the app in the second person (“you”) to the third person (describing an AI assistant). I didn’t actually have to do that here based on the responses.

The key is to look at the responses you’re getting, and tweak the prompt until you get something that works well. I’ve found that at this stage I sometimes need to include additional dialogue between the AI and Human.
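When that happens, a couple of example turns pasted after the prompt can anchor the format. Here’s a sketch; the wording is my own illustration, not my actual prompt text:

```javascript
// Illustrative example turns to append to the prompt when the model
// drifts from the desired format. The wording here is an illustration,
// not the article's actual prompt text.
const example_dialogue = `
Me: Hi there.
AI: Hi! I'm glad we can talk. One thing I'm grateful for is the morning light.
Me: I'm not sure what I'm grateful for today.
AI: That's okay. Is there a small comfort from today, like a warm drink?
`;
```

A turn or two like this is usually enough; more examples make the prompt longer (and each API call pricier) without much gain.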

The prompt with the returned completion. The completion is highlighted in green.

Once you’re happy with what you’re getting, save your prompt — we’ll drop this into the chat app.

Step 2: Set up the app

I built the app based on the openai-quickstart-node example, so it will help to have an understanding of JavaScript and some familiarity with React. Also, heads up: I’m just learning React myself.

Clone the repo here, and install it based on the instructions in the readme. Set up an OpenAI account and get your API key. Make a copy of the .env-example, and add your API key here. Keep your API key secret.
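One small habit that helps: fail fast if the key isn’t set, rather than debugging a confusing authorization error later. A minimal sketch; `OPENAI_API_KEY` is the variable name the quickstart uses, but the helper itself is my own illustration:

```javascript
// Fail fast if the OpenAI API key is missing. OPENAI_API_KEY is the
// variable name used by the openai-quickstart-node example; this helper
// is just an illustration, not part of the quickstart.
function requireApiKey(env = process.env) {
  if (!env.OPENAI_API_KEY) {
    throw new Error("OPENAI_API_KEY is not set: add it to your .env file");
  }
  return env.OPENAI_API_KEY;
}
```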

Step 3: Make it your own

Now we’ll put it all together. We’ll need to run a server to keep the API key private. Take the prompt you designed earlier, and drop it in the generate.js file.

const pre_prompt = `
You support me in identifying gratitude in my life.
You share examples of gratitude, and you also share reasons why recognizing gratitude
can improve one's wellbeing. You help me find gratitude. Your language is simple, clear,
and you are enthusiastic, compassionate, and caring.
An example of this is "I'm curious, what do you feel grateful for today?"
or "I'd love to know what you feel thankful for."
or "Is there anything that comes to mind today that filled you with gratitude?"
Your presence fills me with calm. You're jovial.
Limit the questions in each message and don't be too repetitive.
Gently introduce the idea of gratitude in our conversation.

Start with a quick greeting, and succinctly give me an example thing i can be thankful for.
Share this example gratitude in the first person.
Here is an example of how to start the conversation:
"Hi! I'm glad we can talk today. One thing I've been grateful for lately is the sound of the wind in the trees. It's beautiful."

I’ll highlight a few key parts of the app. Seeing how memory is simulated is kind of fascinating. I call this pre_prompt because the prompt updates with each API call. GPT-3 doesn’t actually have a memory, and each call is independent of the others. To simulate memory in the convo, the entire convo up to that point is included in the prompt. That way, when the API responds to your latest message, it has the whole convo available and can draw from that context. You can see this in action in the OpenAI Playground Chat example. So we’ll start with the pre_prompt, and then include the whole convo.

function generatePrompt(chat) {
  let messages = "";
  chat.forEach((message) => {
    const m = message.name + ": " + message.message + "\n";
    messages += m;
  });
  const prompt = pre_prompt + messages + "AI:";
  return prompt;
}
Here’s the OpenAI completions API call. You can play with the parameters here. I’d suggest doing this in the playground first to tweak and see what works before updating it here. Check out the API docs here. While testing other features of the app, I avoid calling the API and instead share back a dummy response. You can see this with the “testing” variable in the app. This saves on API calls.

const completion = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: generatePrompt(chat),
  temperature: 0.9,
  max_tokens: 250,
  presence_penalty: 0.6,
  stop: ["AI:", "Me:"],
});
res.status(200).json({ result: completion.data.choices[0].text });
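Here’s a minimal sketch of that dummy-response shortcut. The flag name `testing` matches the app; the handler shape here is assumed for illustration:

```javascript
// Sketch of the "testing" shortcut: while iterating on the UI, skip
// the real OpenAI call and return a canned completion instead. The flag
// name `testing` matches the app; this handler shape is assumed.
const testing = true;

async function fetchCompletion(chat) {
  if (testing) {
    // Canned completion: saves API calls during development.
    return "I'm grateful for a quiet morning. What are you thankful for?";
  }
  // Otherwise, call openai.createCompletion(...) as in the handler above.
  throw new Error("real API path not wired up in this sketch");
}
```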

Client side

Add the multiple greetings generated earlier.

function getGreeting() {
  const greetings = [
    "Hi there! How's your day going? I've been feeling particularly grateful for the delicious meals I've been able to enjoy lately. How about you?",
    "Good morning! I hope you're having a great start to your day. I'm feeling grateful for the beautiful nature around me, it always helps me to feel at peace. What are you thankful for today?",
    "Hello! I'm grateful for the laughter and joy that my loved ones bring me. What are you grateful for today?",
    "Hey, How's it going? Today, I'm grateful for the simple things in life like a warm bed and a good book. What are you grateful for today?",
    "Hi, how are you? I'm feeling grateful for the memories I've made with friends and family. Is there anything you're grateful for today?",
  ];
  const index = Math.floor(greetings.length * Math.random());
  return greetings[index];
}

When we call generate.js, we’ll send all of our messages, including the most recent one.

const response = await fetch("/api/generate", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    chat: [...messages, { name: "Me", message: sentInput }],
  }),
});

We’ll update messages based on the response we get back from the API.

const data = await response.json();
if (response.status !== 200) {
  throw (
    data.error ||
    new Error(`Request failed with status ${response.status}`)
  );
}

setMessages((prevMessages) => {
  const newMessages = [
    ...prevMessages,
    { name: "AI", message: data.result },
  ];
  return newMessages;
});

Run through the app and make any changes you like. Rename the app based on the chat personality you create, and update the styles. Make it your own 🙌.

Step 4: Deploy

Since this is built using Next.js, we can use Vercel to deploy. It’s a pretty smooth process. Now you can share your app with friends and get feedback. Note that you’ll be charged based on usage of the OpenAI API, so keep an eye on your usage.

Personally, as I get started, I’ve done a small release to get feedback. I’m thinking about how to best manage costs before releasing more widely. I like the idea of doing a smaller release to paying users, which would offset the API costs. I’m definitely curious about any compelling approaches for this.

If you build something with this, I’d love to check it out.


Thanks for reading! If you’re getting into building generative AI apps, I’d love to talk! I’m whichlight on Twitter. Feel free to send any inspiring resources and projects, over there or in the comments.



Kawandeep Virdee

Building. Author of “Feeling Great About My Butt.” Previously: Creators @Medium, Product @embedly, Research @NECSI.