Integrating OpenAI's GPT with .NET Core: Building AI-Powered C# Applications

Explore the transformative potential of integrating OpenAI's language models like GPT-3.5-Turbo into .NET Core apps. This guide walks you through setting up, understanding the API, and harnessing the potential of AI in C# applications.

Harness the Power of Artificial Intelligence in .NET Core with OpenAI's Language Models.

Do you recall the amazement we felt hearing the modem's dial-up sounds, attempting our first connection to the Internet? Now, two decades later, we're not just talking about faster speeds, but an entirely new realm of possibilities with Artificial Intelligence. Let's explore how to integrate OpenAI into .NET Core apps.

In this post, we’ll delve into one of the most compelling offerings in the Artificial Intelligence realm: OpenAI’s gpt-3.5-turbo, a highly advanced language model that's making substantial strides in natural language understanding and generation. While the fourth iteration, GPT-4, offers even more capabilities, it requires a paid account with OpenAI. As such, we'll focus on gpt-3.5-turbo to ensure that all developers, regardless of their OpenAI account status, can follow along and integrate this language model into their applications.

Despite the model's complexity, integrating it into .NET Core using C# is straightforward, especially via the OpenAI REST API.

While there are numerous packages available that simplify the integration of the OpenAI API into .NET Core applications, it’s crucial to first understand the underlying principles of how this interaction works. This blog post intentionally avoids the use of these pre-packaged solutions.

By building from the ground up, you'll learn how a .NET Core application integrates with the OpenAI API, from request construction to response processing. This hands-on approach will help you appreciate the details involved and equip you with the knowledge to troubleshoot issues, optimize performance, and adapt to changes in the API over time.

Set up the .NET Core console app

We begin our journey to integrate artificial intelligence into C# applications using the OpenAI API. It's a snap to access the OpenAI API with .NET 7, and today I'll walk you through this straightforward process in detail.

If you're new to .NET and need some guidance on setting it up, check out my beginner-friendly guide here.

To prepare your console app for the OpenAI API integration example, execute the following commands.

# Create a new console app
$ dotnet new console -o openai-csharp-example
$ cd openai-csharp-example

# Add the required dependencies
$ dotnet add package Newtonsoft.Json
$ dotnet add package Microsoft.Extensions.Configuration
$ dotnet add package Microsoft.Extensions.Configuration.UserSecrets
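After running these commands, your openai-csharp-example.csproj should contain package references similar to the following (the version numbers shown here are illustrative and will depend on when you run the commands):

```xml
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
  <PackageReference Include="Microsoft.Extensions.Configuration" Version="7.0.0" />
  <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="7.0.0" />
</ItemGroup>
```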

Setting up the OpenAI API

To interact with the OpenAI API, you’ll need an API key. This key is a unique identifier that grants your application permission to access the API services. You can acquire this key by registering an account on the OpenAI platform and navigating to the ‘API Keys’ section in your account settings.

Once you have your API key, it’s time to integrate it into your application. The key is typically added as a field in a class specifically built for interacting with the OpenAI API, often referred to as the service class. This class not only holds the API key but also encompasses the methods for sending requests and handling responses from the API.

Then initialize the Secret Manager and add your OpenAI API key as follows:

$ dotnet user-secrets init
$ dotnet user-secrets set "OpenAI:ApiKey" "your_api_key"
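Under the hood, the Secret Manager stores these values outside your project tree in a plain-text secrets.json file (on Linux/macOS under ~/.microsoft/usersecrets/&lt;user_secrets_id&gt;/, on Windows under %APPDATA%\Microsoft\UserSecrets\&lt;user_secrets_id&gt;\). After the command above, it looks like this:

```json
{
  "OpenAI:ApiKey": "your_api_key"
}
```

This keeps the key out of source control while letting the configuration system read it like any other setting.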

You can find a detailed explanation of how the Secret Manager works here.

Create the OpenAI API Service

Building a separate service to interact with the OpenAI API allows us to encapsulate the logic related to API requests and responses within a dedicated service class, thereby promoting better code organization, maintainability, and scalability.

Let’s create a new class named OpenAIService in the root directory of our project. This service class acts as a gateway between our application and the OpenAI API, isolating the specifics of the API interaction such as request formatting and response parsing.

It aligns with the principles of good object-oriented design and encapsulation by hiding the details of the API interaction and exposing a method, SendPromptAndGetResponse(), which serves as a clean interface to the rest of our application. This method takes a prompt and returns the corresponding response from the API, abstracting away the complexities involved in the process.

using System.Text;
using Newtonsoft.Json;

public class OpenAIService
{
    private readonly HttpClient _httpClient;
    private readonly string _apiKey;

    public OpenAIService(HttpClient httpClient, string apiKey)
    {
        _httpClient = httpClient ?? throw new ArgumentNullException(nameof(httpClient));
        _apiKey = apiKey ?? throw new ArgumentNullException(nameof(apiKey));
    }

    public async Task<string> SendPromptAndGetResponse(string prompt)
    {
        const string requestUri = "https://api.openai.com/v1/chat/completions";
        var requestBody = new
        {
            temperature = 0.2,
            model = "gpt-3.5-turbo",
            messages = new[]
            {
                new
                {
                    role = "system",
                    content = "You are a helpful assistant."
                },
                new
                {
                    role = "user",
                    content = prompt
                }
            }
        };

        // Authenticate every request with a bearer token.
        _httpClient.DefaultRequestHeaders.Authorization =
            new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", _apiKey);

        var response = await _httpClient.PostAsync(
            requestUri,
            new StringContent(JsonConvert.SerializeObject(requestBody), Encoding.UTF8, "application/json"));

        response.EnsureSuccessStatusCode();

        var responseBody = JsonConvert.DeserializeObject<ResponseBody>(await response.Content.ReadAsStringAsync());
        return responseBody.Choices[0].Message.Content.Trim();
    }
}

The constructor of OpenAIService takes HttpClient and apiKey as parameters. This usage of dependency injection allows us to supply dependencies from outside the class, increasing the flexibility and testability of our code.

The SendPromptAndGetResponse() method is where the magic happens. It constructs a request, sends it to the OpenAI API, and processes the response. The method encapsulates these steps and presents a straightforward interface to the rest of our application: give it a prompt, and it will return the response.
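To make this concrete, here is roughly what the serialized request body looks like on the wire for a hypothetical prompt of "Hello!" (this is simply the JSON that the anonymous object above produces):

```json
{
  "temperature": 0.2,
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ]
}
```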

In essence, OpenAIService is an example of the Facade pattern, which provides a simplified interface to a complex subsystem. Here, the subsystem is the interaction with the OpenAI API, and OpenAIService is the facade that simplifies that interaction for the rest of our application.

Next, we’ll look at how to use OpenAIService in the context of a chatbot application.

Understanding the OpenAI Chat Completion API

Before we dive further into the coding, let’s take a pause and better understand the core concepts that make the OpenAI Chat Completion API work. Getting these fundamentals right will go a long way in efficiently interacting with the API.

Interpreting API Responses: When we send a chat completion request to OpenAI, the response comes in a JSON format, which includes an array of ‘choices’. To convert this JSON data into a usable format in our C# application, we use a process called deserialization. For this purpose, we’ve implemented the ResponseBody class:

public class Message
{
    public string Content { get; set; }
}

public class Choice
{
    public Message Message { get; set; }
}

public class ResponseBody
{
    public List<Choice> Choices { get; set; }
}

In the above classes, Choice corresponds to the 'choices' we get from the API, and Message represents individual messages. The ResponseBody class holds a list of Choice objects, which effectively forms a roadmap to traverse the JSON response.
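For reference, a raw chat completion response from the API looks roughly like this (abridged; the id, created, and usage values are illustrative). Our ResponseBody class simply picks out choices[0].message.content and ignores the rest:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1694268190,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 19, "completion_tokens": 10, "total_tokens": 29 }
}
```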

Tokens Demystified: Think of tokens as building blocks of conversation. In OpenAI’s language model, a token can range from a single character to a whole word. Understanding tokens is crucial because they influence both the cost and the maximum limit of your API usage. Every message to and from the API consumes tokens, affecting how much you pay and how long your conversations can be.

The Role of Choices: When we send a prompt to the API, it can generate multiple completions or ‘choices’ based on the prompt. Each of these completions provides a unique direction for the conversation to proceed. By manipulating these choices, we can engineer our chatbot’s responses to provide a dynamic user experience.

Recognizing these key concepts of the OpenAI Chat Completion API is paramount to leverage its full potential efficiently. It’s all about striking a balance between providing a rich conversational experience and managing the cost associated with token usage.

Building a Chat Session to Maintain Context

When designing our chat application, it’s essential to consider how we handle the conversational context. As humans, our understanding of a conversation depends on the messages that have been previously exchanged — we remember previous statements and use them to inform our responses. In similar fashion, for our AI to generate meaningful responses, it needs access to previous prompts and replies. This is what we refer to as maintaining the conversation context.

To handle this in our console chatbot, we introduce a new class ChatSession.

public class ChatSession
{
    private readonly List<object> _messages;

    public ChatSession()
    {
        _messages = new List<object>
        {
            new
            {
                role = "system",
                content = "You are a helpful assistant."
            }
        };
    }

    public void AddMessage(string role, string content)
    {
        _messages.Add(new { role, content });
    }

    public object[] GetMessages() => _messages.ToArray();
}

This class plays a crucial role in preserving the conversational context throughout the user's interaction with the AI. The ChatSession object stores every message exchanged during the conversation, ensuring that each new message can be understood in the full context of what has been said before.

When a ChatSession is first initialized, it's provided with a system message that sets the tone for the AI's responses. As the conversation progresses, each message – both from the user and the AI – is added to the _messages list within ChatSession. The AddMessage() method helps to encapsulate this operation, taking as parameters the role ("user" or "assistant") and the content of the message.

Now, let’s modify our main chat loop to make use of ChatSession. To do that, create or update Program.cs as follows:

using Microsoft.Extensions.Configuration;

internal class Program
{
    private static async Task Main(string[] args)
    {
        // Load the API key from the Secret Manager.
        var builder = new ConfigurationBuilder()
            .AddUserSecrets<Program>();

        var configuration = builder.Build();
        var apiKey = configuration["OpenAI:ApiKey"];

        using var httpClient = new HttpClient();
        var openAIService = new OpenAIService(httpClient, apiKey);

        var chatSession = new ChatSession();

        while (true)
        {
            Console.Write("You: ");
            var userInput = Console.ReadLine();

            if (string.IsNullOrWhiteSpace(userInput))
            {
                Console.WriteLine("Input can't be empty. Please try again.");
                continue;
            }

            chatSession.AddMessage("user", userInput);

            try
            {
                var response = await openAIService.SendPromptAndGetResponse(chatSession.GetMessages());
                Console.WriteLine($"OpenAI: {response}");
                chatSession.AddMessage("assistant", response);
            }
            catch (Exception ex)
            {
                Console.WriteLine($"An error occurred: {ex.Message}");
                break;
            }
        }
    }
}

After the user's input is received, it's added to the ChatSession object as a user message. When a response is obtained from the AI, this too is added to the ChatSession as an assistant message.

In line with our newly established ChatSession class, it’s essential we adapt the SendPromptAndGetResponse() method in our OpenAIService class. This adaptation is crucial for maintaining conversation context and feeding it to our OpenAI model.

Our revised method now takes an IEnumerable<object> parameter, which is our list of messages. The list comprises the system, user, and assistant messages that together encapsulate the complete conversation history. This list forms the content of the messages field in the request body for the OpenAI API.

The SendPromptAndGetResponse() method now looks like this:

public async Task<string> SendPromptAndGetResponse(IEnumerable<object> messages)
{
    const string requestUri = "https://api.openai.com/v1/chat/completions";
    var requestBody = new
    {
        temperature = 0.2,
        model = "gpt-3.5-turbo",
        messages = messages.ToList()
    };

    _httpClient.DefaultRequestHeaders.Authorization =
        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", _apiKey);

    var response = await _httpClient.PostAsync(
        requestUri,
        new StringContent(JsonConvert.SerializeObject(requestBody), Encoding.UTF8, "application/json"));

    response.EnsureSuccessStatusCode();

    var responseBody = JsonConvert.DeserializeObject<ResponseBody>(await response.Content.ReadAsStringAsync());
    return responseBody.Choices[0].Message.Content.Trim();
}

With this implementation, our chatbot becomes more conversational, understanding and responding appropriately to user inputs in the full context of the conversation. By encapsulating the context management within the ChatSession class, our main application flow remains clean, focused, and maintainable, further underscoring the importance and benefits of good object-oriented design.
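One caveat worth noting: because the entire ChatSession history is resent with every request, long conversations will eventually exceed the model's context limit (about 4,000 tokens for gpt-3.5-turbo) and steadily increase your token costs. A simple mitigation, sketched below under the assumption that dropping the oldest exchanges is acceptable for your use case, is to trim the history while always preserving the initial system message. The TrimToLast() method is a hypothetical addition, not part of the ChatSession class above:

```csharp
using System;
using System.Collections.Generic;

public class TrimmableChatSession
{
    private readonly List<object> _messages = new()
    {
        new { role = "system", content = "You are a helpful assistant." }
    };

    public void AddMessage(string role, string content) =>
        _messages.Add(new { role, content });

    // Keep the system message plus only the most recent maxHistory messages.
    public void TrimToLast(int maxHistory)
    {
        var excess = _messages.Count - 1 - maxHistory;
        if (excess > 0)
        {
            // Remove the oldest non-system messages (index 0 is the system message).
            _messages.RemoveRange(1, excess);
        }
    }

    public object[] GetMessages() => _messages.ToArray();
}
```

You could call TrimToLast() in the chat loop before each request; more sophisticated strategies, such as summarizing older turns, build on the same idea.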

Running and Testing Your OpenAI-Powered Chatbot

With the chatbot implemented and the OpenAI API service integrated, it’s time to put our console application to the test. Remember, you can interact with your chatbot directly through the console, making it easy to provide inputs and see the AI’s responses.

  1. Open a terminal window.
  2. Navigate to your project’s directory.
  3. To run the application, use the dotnet run command. This command builds and runs your application in one step.

Your chatbot is now waiting for your input. Type in a question or statement and hit Enter to see how the AI responds. For example, you might ask:

Translate 'Today is beautiful weather.' into German.

You can now talk to the AI. As you can see in the following example output, the model recognizes the context of your input as in a human-to-human dialog.

You: Translate 'Today is beautiful weather.' into German.
OpenAI: Heute ist schönes Wetter.
You: Now to French.
OpenAI: Aujourd'hui, il fait beau.
You: Say it in Spanish.
OpenAI: Hoy hace buen tiempo.

Experiment with different kinds of prompts to see how the AI responds. This is a great opportunity to gauge its capabilities, understand its limitations, and get a sense of how it might be used in a more complex application.

Keep in mind that although this tutorial focuses on creating a simple chatbot, the principles and methods used here are applicable to a wide range of applications. Whether you’re building an intelligent virtual assistant for an app, an automated content generator, or a tool for answering customer inquiries, OpenAI’s GPT models can provide a significant boost in functionality and user experience.

Wrapping up

We’ve just built a console application in .NET Core that interacts with the OpenAI API using the gpt-3.5-turbo model (or, if you have a paid account with at least one payment made, GPT-4). This application serves as a basic chatbot that can receive prompts from a user and generate intelligent responses.

In this journey, we have explored the foundational concepts of the OpenAI API, from tokens and choices to the structure of the chat completion API. This knowledge, combined with hands-on experience, will serve as a solid base as you venture further into the world of AI with OpenAI.

If you would like to reference the complete code, you can access it on my GitHub repository: openai-csharp-example.

Bear in mind, Artificial Intelligence, especially within the .NET ecosystem, is constantly evolving, and it’s an exciting field to be a part of. Don’t stop exploring, and don’t stop learning.
