Groq’s LPU Inference Engine has excelled in recent independent Large Language Model (LLM) benchmarks, setting a new standard for AI inference with its speed and efficiency. By integrating Cleek with GroqCloud, you can easily leverage Groq’s technology to accelerate large language model inference in Cleek.

Groq’s LPU Inference Engine sustained 300 tokens per second in internal benchmark tests, and in benchmarks by ArtificialAnalysis.ai, Groq outperformed other providers in both throughput (241 tokens per second) and total time to receive 100 output tokens (0.8 seconds).

This document will guide you through using Groq in Cleek:

### Obtaining a GroqCloud API Key

First, you need to obtain an API Key from the GroqCloud Console.

*Get GroqCloud API Key*

Create an API Key in the API Keys menu of the console.

*Save GroqCloud API Key*

Safely store the key from the pop-up, as it will only be shown once. If you accidentally lose it, you will need to create a new key.
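Once you have the key, you can verify it works outside of Cleek. Below is a minimal Python sketch using only the standard library; it assumes Groq's OpenAI-compatible Chat Completions endpoint (`https://api.groq.com/openai/v1/chat/completions`), and the model id in the commented usage is an example — check the console for currently available models. The helper name `build_chat_request` is ours, not part of any SDK.

```python
import json
import urllib.request

# Groq exposes an OpenAI-compatible Chat Completions endpoint.
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated chat-completion request for GroqCloud."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # The key from the GroqCloud Console; read it from the
            # environment rather than hard-coding it in source files.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send the request (requires a valid key and network access):
# import os
# req = build_chat_request(
#     os.environ["GROQ_API_KEY"],
#     "llama-3.1-8b-instant",  # example model id; check the console
#     "Say hello in one word.",
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If the request returns a `401` error, the key is missing or invalid and you should create a new one in the console.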

### Configure Groq in Cleek

You can find the Groq configuration option in Settings -> Language Model, where you can enter the API Key you just obtained.

*Groq service provider settings*

Next, select a Groq-supported model in the assistant’s model options, and you can experience the powerful performance of Groq in Cleek.
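If you are unsure which model ids your key can access when picking a model, you can query Groq's model list programmatically. This is a sketch against the OpenAI-compatible `/openai/v1/models` endpoint; the helper names are ours, and it assumes your key is in the `GROQ_API_KEY` environment variable.

```python
import json
import urllib.request

# Groq's OpenAI-compatible model-listing endpoint.
MODELS_URL = "https://api.groq.com/openai/v1/models"

def extract_model_ids(models_response: dict) -> list[str]:
    """Pull the model ids out of an OpenAI-style list-models response."""
    return [m["id"] for m in models_response["data"]]

def list_groq_models(api_key: str) -> list[str]:
    """Return the model ids available to this GroqCloud API key."""
    req = urllib.request.Request(
        MODELS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))

# Usage (requires a valid key and network access):
# import os
# print(list_groq_models(os.environ["GROQ_API_KEY"]))
```

Any id returned here should also appear among the Groq-supported models selectable in Cleek's assistant settings.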