Using Go to Translate Text using the Groq API (Fast Inference)


In this blog post, I'll walk you through the process of using Go to translate text using Groq's Fast Inference API.

What is Groq?

Groq is a company focused on making AI inference much faster than what we're used to. They've built custom chips called Language Processing Units (LPUs) that are purpose-built for running large language models, and they can generate hundreds of tokens per second. That's far quicker than the GPUs typically used for AI workloads.

(Groq vs. ChatGPT response speed comparison)

This super fast speed is great for lots of things. It can help make:

  • Translation apps that work right away

  • Chatbots that talk to you without waiting

  • Trading programs that make decisions super fast

The best part? Groq has made its fast inference API available for anyone to use.

Groq supports several open-source LLMs, making it a great alternative to OpenAI's GPT models.
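Because Groq's API follows the OpenAI chat completions format, you can give it a quick try with curl before writing any Go (assuming your API key is already exported as GROQ_API_KEY, which we'll set up below):

curl https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3-8b-8192", "messages": [{"role": "user", "content": "Say hello in Tagalog"}]}'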

By the end of this tutorial, you'll have a clear understanding of how to set up your environment, make Groq API requests, and handle the responses to translate text. Let's dive in!

Prerequisites

Before we start, ensure you have the following:

  • A basic understanding of the Go programming language.

  • Go installed on your machine. I am using the latest version, Go 1.22 (you can verify yours with the command shown after this list).

  • An API key for the Groq Fast Inference API.
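You can verify your Go installation with:

go version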

Getting Your Groq API Key

To interact with the Groq Fast Inference API, you'll need an API key. Follow these steps to get your API key:

  1. Create an account at https://groq.com, or log in if you already have one.

  2. Visit https://console.groq.com/keys. This page lists all your API keys.

  3. Click Create API Key and give the key a display name you like.

  4. Copy and store the newly generated API key in a secure place, as you'll need it for the following steps.

Setting Up Your Go Environment

Create a new Go project and initialize a Go module:

mkdir go-groq
cd go-groq
go mod init go-groq

Next, create a file named main.go and add the following code:

package main

import (
    "errors"
    "fmt"
    "os"
)

func main() {

    apiKey := os.Getenv("GROQ_API_KEY")
    if apiKey == "" {
        err := errors.New("GROQ_API_KEY needs to be set as an environment variable")
        panic(err)
    }

    groqClient := &GroqClient{ApiKey: apiKey}
    textToTranslate := "Kim Ji-won was born on October 19, 1992, in Geumcheon District, Seoul, South Korea, and has an elder sister who is two years older than her. While still a teenager in 2007, she was scouted on the street and signed with an entertainment agency, she subsequently became a trainee for over three years while preparing for her debut. During her first year of junior high school, she spent six months to a year studying in Chicago, Illinois, United States, where her maternal relatives lived."

    systemPrompt := "you are a professional language translator. " +
        "only respond with the translated text. do not explain."
    prompt := fmt.Sprintf("translate this text to Tagalog: %s", textToTranslate)

    translatedText, err := groqClient.ChatCompletion(LLMModelLlama370b, systemPrompt, prompt)
    if err != nil {
        fmt.Println(err)
    }

    if translatedText != nil {
        fmt.Println(*translatedText)
    }
}

Implementing the GroqClient

Next, we'll implement the GroqClient that will handle communication with the Groq Fast Inference API. Create a new file named groq_client.go and add the following code:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

const (
    apiBaseUrl = "https://api.groq.com/openai"
    SYSTEM     = "system"
    USER       = "user"

    LLMModelLlama38b       = "llama3-8b-8192"
    LLMModelLlama370b      = "llama3-70b-8192"
    LLMModelMixtral8x7b32k = "mixtral-8x7b-32768"
    LLMModelGemma7b        = "gemma-7b-it"
)

type GroqClient struct {
    ApiKey string
}

type GroqMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type ChatCompletionRequest struct {
    Messages    []GroqMessage `json:"messages"`
    Model       string        `json:"model"`
    Temperature float64       `json:"temperature"`
    MaxTokens   int           `json:"max_tokens"`
    TopP        float64       `json:"top_p"`
    Stream      bool          `json:"stream"`
    Stop        interface{}   `json:"stop"`
}

type ChatCompletionResponse struct {
    Id      string `json:"id"`
    Object  string `json:"object"`
    Created int    `json:"created"`
    Model   string `json:"model"`
    Choices []struct {
        Index   int `json:"index"`
        Message struct {
            Role    string `json:"role"`
            Content string `json:"content"`
        } `json:"message"`
        Logprobs     interface{} `json:"logprobs"`
        FinishReason string      `json:"finish_reason"`
    } `json:"choices"`
    Usage struct {
        PromptTokens     int     `json:"prompt_tokens"`
        PromptTime       float64 `json:"prompt_time"`
        CompletionTokens int     `json:"completion_tokens"`
        CompletionTime   float64 `json:"completion_time"`
        TotalTokens      int     `json:"total_tokens"`
        TotalTime        float64 `json:"total_time"`
    } `json:"usage"`
    SystemFingerprint string `json:"system_fingerprint"`
    XGroq             struct {
        Id string `json:"id"`
    } `json:"x_groq"`
}

func (g *GroqClient) ChatCompletion(llmModel string, systemPrompt string, prompt string) (*string, error) {

    llm := llmModel

    if llmModel == "" {
        // default to Llama 3 8B if no model is specified
        llm = LLMModelLlama38b
    }
    groqMessages := make([]GroqMessage, 0)

    if systemPrompt != "" {
        systemMessage := GroqMessage{
            Role:    SYSTEM,
            Content: systemPrompt,
        }
        groqMessages = append(groqMessages, systemMessage)
    }

    if prompt != "" {
        userMessage := GroqMessage{
            Role:    USER,
            Content: prompt,
        }
        groqMessages = append(groqMessages, userMessage)
    } else {
        return nil, fmt.Errorf("prompt is required")
    }

    chatCompletionRequest := &ChatCompletionRequest{
        Messages:    groqMessages,
        Model:       llm,
        Temperature: 0,
        MaxTokens:   1024,
        TopP:        1,
        Stream:      false,
        Stop:        nil,
    }

    chatCompletionRequestJson, err := json.Marshal(chatCompletionRequest)
    if err != nil {
        return nil, err
    }

    //send http post request
    chatCompletionUrl := "/v1/chat/completions"
    finalUrl := fmt.Sprintf("%s%s", apiBaseUrl, chatCompletionUrl)

    req, err := http.NewRequest(http.MethodPost, finalUrl, bytes.NewBuffer(chatCompletionRequestJson))
    if err != nil {
        return nil, err
    }

    //set headers
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", g.ApiKey))

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }

    // Always close the response body so the underlying connection is released
    defer func() {
        if cerr := resp.Body.Close(); cerr != nil {
            fmt.Println("Error closing response body:", cerr)
        }
    }()

    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("unexpected status code: %d, reason: %s", resp.StatusCode, resp.Status)
    }

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }

    chatCompletionResp := &ChatCompletionResponse{}

    err = json.Unmarshal(body, chatCompletionResp)
    if err != nil {
        return nil, err
    }

    var content string
    if len(chatCompletionResp.Choices) > 0 {
        content = chatCompletionResp.Choices[0].Message.Content
    } else {
        return nil, fmt.Errorf("no choices")
    }

    return &content, nil
}

Breaking Down the GroqClient Code

Let's go through the GroqClient code step by step to understand how it works.

Constants and Structs

First, we define some constants and structs used throughout the code:

const (
    apiBaseUrl = "https://api.groq.com/openai"
    SYSTEM     = "system"
    USER       = "user"

    LLMModelLlama38b       = "llama3-8b-8192"
    LLMModelLlama370b      = "llama3-70b-8192"
    LLMModelMixtral8x7b32k = "mixtral-8x7b-32768"
    LLMModelGemma7b        = "gemma-7b-it"
)

type GroqClient struct {
    ApiKey string
}

type GroqMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type ChatCompletionRequest struct {
    Messages    []GroqMessage `json:"messages"`
    Model       string        `json:"model"`
    Temperature float64       `json:"temperature"`
    MaxTokens   int           `json:"max_tokens"`
    TopP        float64       `json:"top_p"`
    Stream      bool          `json:"stream"`
    Stop        interface{}   `json:"stop"`
}

type ChatCompletionResponse struct {
    Id      string `json:"id"`
    Object  string `json:"object"`
    Created int    `json:"created"`
    Model   string `json:"model"`
    Choices []struct {
        Index   int `json:"index"`
        Message struct {
            Role    string `json:"role"`
            Content string `json:"content"`
        } `json:"message"`
        Logprobs     interface{} `json:"logprobs"`
        FinishReason string      `json:"finish_reason"`
    } `json:"choices"`
    Usage struct {
        PromptTokens     int     `json:"prompt_tokens"`
        PromptTime       float64 `json:"prompt_time"`
        CompletionTokens int     `json:"completion_tokens"`
        CompletionTime   float64 `json:"completion_time"`
        TotalTokens      int     `json:"total_tokens"`
        TotalTime        float64 `json:"total_time"`
    } `json:"usage"`
    SystemFingerprint string `json:"system_fingerprint"`
    XGroq             struct {
        Id string `json:"id"`
    } `json:"x_groq"`
}

Here, we define constants for the API base URL, message roles, and model names. The GroqClient struct holds the API key, while the GroqMessage, ChatCompletionRequest, and ChatCompletionResponse structs define the request and response formats.
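To make the request format concrete, here is the JSON body (abridged) that our example request from main.go marshals to. The struct tags above determine these key names:

{
  "messages": [
    {
      "role": "system",
      "content": "you are a professional language translator. only respond with the translated text. do not explain."
    },
    {
      "role": "user",
      "content": "translate this text to Tagalog: Kim Ji-won was born ..."
    }
  ],
  "model": "llama3-70b-8192",
  "temperature": 0,
  "max_tokens": 1024,
  "top_p": 1,
  "stream": false,
  "stop": null
}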

ChatCompletion Function

Let's break down the ChatCompletion function. This function calls the LLM that translates the text we send it. You can use any of the models supported by Groq; I have already listed the text-based models in the constants we defined earlier.
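For example, to target Mixtral instead of the Llama 3 70B model we used in main.go, you just pass a different constant:

// Same call as in main.go, but using the Mixtral model instead
translatedText, err := groqClient.ChatCompletion(LLMModelMixtral8x7b32k, systemPrompt, prompt)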

💡 Check out the Groq documentation for the full list of supported models.

Here is the full function again, annotated with comments:

func (g *GroqClient) ChatCompletion(llmModel string, systemPrompt string, prompt string) (*string, error) {
    // Determine the model to use
    llm := llmModel
    if llmModel == "" {
        llm = LLMModelLlama38b
    }

    // Create messages slice
    groqMessages := make([]GroqMessage, 0)

    // Add system message if provided
    if systemPrompt != "" {
        systemMessage := GroqMessage{
            Role:    SYSTEM,
            Content: systemPrompt,
        }
        groqMessages = append(groqMessages, systemMessage)
    }

    // Add user prompt message
    if prompt != "" {
        userMessage := GroqMessage{
            Role:    USER,
            Content: prompt,
        }
        groqMessages = append(groqMessages, userMessage)
    } else {
        return nil, fmt.Errorf("prompt is required")
    }

    // Create request payload
    chatCompletionRequest := &ChatCompletionRequest{
        Messages:    groqMessages,
        Model:       llm,
        Temperature: 0,
        MaxTokens:   1024,
        TopP:        1,
        Stream:      false,
        Stop:        nil,
    }

    // Serialize request to JSON
    chatCompletionRequestJson, err := json.Marshal(chatCompletionRequest)
    if err != nil {
        return nil, err
    }

    // Construct the final URL
    chatCompletionUrl := "/v1/chat/completions"
    finalUrl := fmt.Sprintf("%s%s", apiBaseUrl, chatCompletionUrl)

    // Create HTTP request
    req, err := http.NewRequest(http.MethodPost, finalUrl, bytes.NewBuffer(chatCompletionRequestJson))
    if err != nil {
        return nil, err
    }

    // Set headers
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", g.ApiKey))

    // Execute HTTP request
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }

    // Always close the response body so the underlying connection is released
    defer func() {
        if cerr := resp.Body.Close(); cerr != nil {
            fmt.Println("Error closing response body:", cerr)
        }
    }()

    // Check response status code
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("unexpected status code: %d, reason: %s", resp.StatusCode, resp.Status)
    }

    // Read response body
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }

    // Parse response JSON
    chatCompletionResp := &ChatCompletionResponse{}
    err = json.Unmarshal(body, chatCompletionResp)
    if err != nil {
        return nil, err
    }

    // Extract the translated text
    var content string
    if len(chatCompletionResp.Choices) > 0 {
        content = chatCompletionResp.Choices[0].Message.Content
    } else {
        return nil, fmt.Errorf("no choices")
    }

    return &content, nil
}

Determine the Model to Use: We start by setting the model to use for the request. If none is provided, we default to LLMModelLlama38b.

// Determine the model to use
llm := llmModel
if llmModel == "" {
    llm = LLMModelLlama38b
}

Create Messages Slice: We create a slice to hold the messages for the chat completion request.

// Create messages slice
groqMessages := make([]GroqMessage, 0)

Add System Message: If a system prompt is provided, we add it as a system message.

// Add system message if provided
if systemPrompt != "" {
    systemMessage := GroqMessage{
        Role:    SYSTEM,
        Content: systemPrompt,
    }
    groqMessages = append(groqMessages, systemMessage)
}

Add User Prompt Message: We add the user prompt as a message. If the prompt is empty, we return an error.

// Add user prompt message
if prompt != "" {
    userMessage := GroqMessage{
        Role:    USER,
        Content: prompt,
    }
    groqMessages = append(groqMessages, userMessage)
} else {
    return nil, fmt.Errorf("prompt is required")
}

Create Request Payload: We construct the request payload with the messages, model, and other parameters.

// Create request payload
chatCompletionRequest := &ChatCompletionRequest{
    Messages:    groqMessages,
    Model:       llm,
    Temperature: 0,
    MaxTokens:   1024,
    TopP:        1,
    Stream:      false,
    Stop:        nil,
}

Serialize Request to JSON: We serialize the request payload to JSON format.

// Serialize request to JSON
chatCompletionRequestJson, err := json.Marshal(chatCompletionRequest)
if err != nil {
    return nil, err
}

Construct the Final URL: We build the final URL for the API endpoint.

// Construct the final URL
chatCompletionUrl := "/v1/chat/completions"
finalUrl := fmt.Sprintf("%s%s", apiBaseUrl, chatCompletionUrl)

Create HTTP Request: We create a new HTTP POST request with the serialized JSON payload.

// Create HTTP request
req, err := http.NewRequest(http.MethodPost, finalUrl, bytes.NewBuffer(chatCompletionRequestJson))
if err != nil {
    return nil, err
}

Set Headers: We set the necessary headers, including the authorization header with the API key.

// Set headers
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", g.ApiKey))

Execute HTTP Request: We send the HTTP request using an HTTP client.

// Execute HTTP request
client := &http.Client{}
resp, err := client.Do(req)
if err != nil {
    return nil, err
}

Close the Response Body and Check the Status Code: We defer closing the response body right after the request succeeds, so the underlying connection is released on every return path. We then check the response status code to ensure it's 200 (OK).

// Always close the response body so the underlying connection is released
defer func() {
    if cerr := resp.Body.Close(); cerr != nil {
        fmt.Println("Error closing response body:", cerr)
    }
}()

// Check response status code
if resp.StatusCode != http.StatusOK {
    return nil, fmt.Errorf("unexpected status code: %d, reason: %s", resp.StatusCode, resp.Status)
}

Read Response Body: We read the full response body into memory.

// Read response body
body, err := io.ReadAll(resp.Body)
if err != nil {
    return nil, err
}

Parse Response JSON: We parse the response JSON into a ChatCompletionResponse struct.

// Parse response JSON
chatCompletionResp := &ChatCompletionResponse{}
err = json.Unmarshal(body, chatCompletionResp)
if err != nil {
    return nil, err
}

Extract Translated Text: We extract the translated text from the response and return it.

// Extract the translated text
var content string
if len(chatCompletionResp.Choices) > 0 {
    content = chatCompletionResp.Choices[0].Message.Content
} else {
    return nil, fmt.Errorf("no choices")
}

return &content, nil

Running the Application

Before running the application, make sure to set the GROQ_API_KEY environment variable with your Groq API key. You can do this in your terminal:

export GROQ_API_KEY=your_api_key_here
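If you're on Windows, set it in PowerShell instead:

$Env:GROQ_API_KEY = "your_api_key_here"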

Now, run your Go application:

go run main.go groq_client.go
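Since both files belong to the same main package, you can also run the whole package with:

go run .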

If everything is set up correctly, you should see the translated text printed in your terminal.

Download the Source Code on GitHub

The full source code for this tutorial is available on GitHub. Don't forget to star it.

In this blog post, we've walked through the process of translating text with Go and the Groq Fast Inference API. We covered how to set up your environment, make API requests, and handle the responses to perform text translation. By following these steps, you can easily integrate the Groq Fast Inference API into your Go applications for various language processing tasks. Happy coding!


You might also want to check out my other blog post about Groq function calling.


If you're interested in learning more about integrating Go with Generative AI, follow this blog for more tutorials and insights. This is just the start!

I do live coding on Twitch and YouTube. You can follow me there if you'd like to ask questions when I go live. I also post on LinkedIn, so feel free to connect with me there as well.