Using Groq in Cleek
Groq’s LPU Inference Engine has excelled in the latest independent Large Language Model (LLM) benchmark, redefining the standard for AI solutions with its remarkable speed and efficiency. By integrating Cleek with Groq Cloud, you can now easily leverage Groq’s technology to accelerate the operation of large language models in Cleek.
Groq’s LPU Inference Engine achieved a sustained speed of 300 tokens per second in internal benchmark tests, and according to benchmark tests by ArtificialAnalysis.ai, Groq outperformed other providers in terms of throughput (241 tokens per second) and total time to receive 100 output tokens (0.8 seconds).
This document will guide you through using Groq in Cleek:
Obtain a GroqCloud API Key

First, create an API Key in the API Keys menu of the GroqCloud console. Safely store the key shown in the pop-up, as it will only appear once. If you lose it, you will need to create a new key.
Configure Groq in Cleek

You can find the Groq configuration option in Settings -> Language Model, where you can enter the API Key you just obtained. Next, select a Groq-supported model in the assistant's model options, and you can experience the powerful performance of Groq in Cleek.
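If you want to sanity-check your API Key outside Cleek, note that Groq exposes an OpenAI-compatible REST API. The sketch below builds a chat-completion request using only Python's standard library; the model name is illustrative (pick any Groq-supported model), and `GROQ_API_KEY` is assumed to be set in your environment.

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible chat-completions endpoint.
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_groq_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for Groq."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The key comes from the GroqCloud console (see the API Key step above);
# "llama-3.1-8b-instant" is an illustrative model name.
req = build_groq_request(
    os.environ.get("GROQ_API_KEY", "gsk_..."),
    "llama-3.1-8b-instant",
    "Hello",
)
print(req.full_url)  # → https://api.groq.com/openai/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (or pasting the same key into Cleek's settings) confirms the key is valid.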