Set up OpenAI on your local machine

Setting up the OpenAI API Key

When working locally with the OpenAI API, you need to set up an API key. The API key is a unique identifier that authenticates your requests to the OpenAI language models and services. It acts as a secure credential granting you authorized access to the API endpoints, so keep it secret and do not share it.

To set up an OpenAI API key, follow these steps:

  1. Go to the OpenAI API Settings page and navigate to the API Keys section (Settings > API Keys).
  2. Click on the “Create new secret key” button.
  3. Leave the Permissions set to the default value (All permissions).
  4. Give your key a descriptive name and click “Create secret key”.
  5. Copy the generated secret key. This is the key you’ll use to authenticate your API requests.
  6. Store the key securely, as you would with any other sensitive credential. Do not share or commit this key to version control.

Once you have your API key, you can set it as an environment variable or pass it directly to the OpenAI Python library when making API calls.
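For example, if you prefer to pass the key directly rather than reading it from the environment, the client constructor accepts an api_key argument. This is a minimal sketch (keep in mind that hard-coding a key in your source file makes it easy to leak, so the environment variable approach described below is generally safer):

from openai import OpenAI

# Option 1: pass the key explicitly (not recommended for shared code)
client = OpenAI(api_key="sk-proj-...")  # replace with your own key

# Option 2: rely on the OPENAI_API_KEY environment variable
client = OpenAI()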

Setting the API key as an environment variable

In your VSCode workspace, create a new file called .env. In this file, add the following line:

OPENAI_API_KEY=<your-api-key>

where <your-api-key> is the API key you copied in the previous step. It should look like this:

OPENAI_API_KEY=sk-proj-...
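If your workspace is a Git repository, it is also a good idea to exclude the .env file from version control, for example by adding this line to a .gitignore file in the workspace root:

.env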

Install the OpenAI Python Package

To use the OpenAI API in Python, we need to install the OpenAI Python package. You can do this by running the following command in your terminal:

pip install openai

You should see output like this:

Collecting openai
  Downloading openai-1.55.1-py3-none-any.whl.metadata (24 kB)
Collecting anyio<5,>=3.5.0 (from openai)
  Using cached anyio-4.6.2.post1-py3-none-any.whl.metadata (4.7 kB)
Collecting distro<2,>=1.7.0 (from openai)
  Using cached distro-1.9.0-py3-none-any.whl.metadata (6.8 kB)
Collecting httpx<1,>=0.23.0 (from openai)

and then:

Installing collected packages: typing-extensions, tqdm, sniffio, jiter, idna, h11, distro, certifi, annotated-types, pydantic-core, httpcore, anyio, pydantic, httpx, openai
Successfully installed annotated-types-0.7.0 anyio-4.6.2.post1 certifi-2024.8.30 distro-1.9.0 h11-0.14.0 httpcore-1.0.7 httpx-0.27.2 idna-3.10 jiter-0.8.0 openai-1.55.1 pydantic-2.10.2 pydantic-core-2.27.1 sniffio-1.3.1 tqdm-4.67.1 typing-extensions-4.12.2
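To quickly confirm that the package is available, you can check its version from Python (the exact version number will depend on when you install it):

import openai

# Print the installed library version, e.g. 1.55.1
print(openai.__version__)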

Install the dotenv package

To load the API key from the .env file, we need the python-dotenv package. You can install it by running the following command in your terminal:

pip install python-dotenv
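As a quick sanity check that the key is picked up from the .env file, you can run a small snippet like the following. It only reports whether the variable is set, without printing the key itself:

import os
from dotenv import load_dotenv

# Load variables from the .env file into the process environment
load_dotenv()

# Should print True if OPENAI_API_KEY was found
print(os.getenv("OPENAI_API_KEY") is not None)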

Testing the OpenAI API

You can test the OpenAI API by creating a new Python file (e.g., test-openai.py) and running the following code:

from dotenv import load_dotenv
from openai import OpenAI

# Load the API key from the .env file into the environment
load_dotenv()

# The client reads OPENAI_API_KEY from the environment automatically
client = OpenAI()

# Send a simple chat request to the gpt-4o-mini model
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "What is the weather in Bern?"}]
)

print(response)

To run Python code in VSCode, you can either run the entire file or select the code that you want to run and press Shift+Enter. You may be asked to install the ipykernel package the first time you run a Python file. If so, go ahead and select the Install option. Alternatively, you can install it by running pip install ipykernel in your terminal.
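Alternatively, you can run the file directly from the terminal:

python test-openai.py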

If you run the code, you should see output like this:

ChatCompletion(
    id='chatcmpl-AYDrMDAY2RokWtkN47ljomaZML0DH',
    choices=[
        Choice(
            finish_reason='stop',
            index=0,
            logprobs=None,
            message=ChatCompletionMessage(
                content='I\'m sorry, but I cannot provide real-time weather updates. To get the current weather in Bern, I recommend checking a reliable weather website or app for the latest forecast and conditions.',
                refusal=None,
                role='assistant',
                audio=None,
                function_call=None,
                tool_calls=None
            )
        )
    ],
    created=1732719792,
    model='gpt-4o-mini',
    object='chat.completion',
    service_tier=None,
    system_fingerprint='fp_0705bf87c0',
    usage=CompletionUsage(
        completion_tokens=37,
        prompt_tokens=24,
        total_tokens=61,
        completion_tokens_details=CompletionTokensDetails(
            accepted_prediction_tokens=0,
            audio_tokens=0,
            reasoning_tokens=0,
            rejected_prediction_tokens=0
        ),
        prompt_tokens_details=PromptTokensDetails(
            audio_tokens=0,
            cached_tokens=0
        )
    )
)

The ChatCompletion object is the response received from the OpenAI API when you make a request for text completion or generation. It contains several key pieces of information:

  1. id: A unique identifier for the specific API request.
  2. choices: A list of possible completions generated by the AI model. In this case, there is only one choice.
  3. choices[0].message.content: The actual text generated by the AI model in response to your prompt.
  4. created: The Unix timestamp of when the response was created.
  5. model: The name of the AI model used to generate the response.
  6. usage: Details about the number of tokens used for the prompt and the generated text.

To put it simply, the ChatCompletion object represents the AI’s response to your input, including the generated text and some metadata about the request and the model used.

If you just want to get the text of the response, you can access it with response.choices[0].message.content.

print(response.choices[0].message.content)

In this case, you should see the following output:

I'm sorry, but I cannot provide real-time weather updates. 
To get the current weather in Bern, I recommend checking 
a reliable weather website or app for the latest forecast 
and conditions.
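The other fields of the ChatCompletion object can be read in the same way, for example the model name and the token usage:

# Metadata from the same response object
print(response.model)               # e.g. gpt-4o-mini
print(response.usage.total_tokens)  # e.g. 61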

Testing the OpenAI API with a Jupyter Notebook

You can also test the OpenAI API with a Jupyter Notebook. To do this, create a new Jupyter Notebook and insert the following code in individual cells.

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "What is the weather in Bern?"}]
)

print(response)
print(response.choices[0].message.content)

Example workspace

For convenience, you can download an example workspace here:

ai-coding-workshop

Once you have downloaded the ZIP file, unzip it and open the folder in VSCode. You can then create a virtual environment and install the dependencies. Once you have done this, you can run the code in the Jupyter Notebook cells or the Python file as described above.
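For example, on macOS or Linux the setup might look like this (assuming the workspace contains a requirements.txt file listing the dependencies; the exact commands depend on your operating system and Python installation):

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt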
