SpecAtlas

Groq


Groq is an ultra-low-latency inference API for open models (Llama, Mixtral, Gemma); commercial use is permitted, and API inputs are not used to train base models.

Groq serves open-weight models (Meta Llama, Mixtral, Gemma, Qwen, etc.) on custom LPU hardware for very low latency. API inputs are not used to train the underlying base models. The free tier has rate limits; the paid tier offers higher throughput.
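Groq exposes an OpenAI-compatible HTTP API. The sketch below builds (but does not send) a chat-completion request; the endpoint path and the model name are assumptions for illustration and should be checked against the current Groq API docs.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible chat-completions endpoint; verify against Groq docs.
API_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_request(prompt: str, model: str = "llama-3.1-8b-instant") -> urllib.request.Request:
    """Build (but do not send) a chat-completion request.

    The model name is an example, not a guaranteed current identifier.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )


req = build_request("Say hello in one word.")
# With a real GROQ_API_KEY in the environment, urllib.request.urlopen(req)
# would send the call and return a JSON completion.
```

Separating request construction from sending keeps the sketch runnable without an API key.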

Basics

| Field | Value |
| --- | --- |
| slug | groq |
| type | API / developer platform |
| status | active |
| last checked | 2026-04-18 |
| official site | https://groq.com |
| Key | Value | Condition | Source | Checked |
| --- | --- | --- | --- | --- |
| commercial_use_allowed | yes | | Groq Terms of Service | 2026-04-18 |
| training_use_of_input | no | Groq serves open models; it does not train base models on API inputs. | Groq Terms of Service | 2026-04-18 |
| api_available | yes | | Groq API Documentation | 2026-04-18 |
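The free-tier rate limits noted above typically surface as HTTP 429 responses. A generic retry sketch (assuming a standard Retry-After header, which is common HTTP practice rather than a documented Groq guarantee):

```python
import time
import urllib.error
import urllib.request


def retry_delay(headers, attempt: int) -> float:
    """Honor a Retry-After header when present, else exponential backoff."""
    return float(headers.get("Retry-After") or 2 ** attempt)


def request_with_backoff(req: urllib.request.Request, max_retries: int = 3) -> bytes:
    """Send a request, retrying on HTTP 429 up to max_retries times."""
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            # Re-raise anything that is not a rate limit, or the final failure.
            if err.code != 429 or attempt == max_retries - 1:
                raise
            time.sleep(retry_delay(err.headers, attempt))
```

Keeping the delay computation in its own function makes the backoff policy easy to test and swap out.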

Primary sources

- Groq Terms of Service
- Groq API Documentation

→ full Sources page

FAQ

Q. What models does Groq serve?
Open-weight models from Meta, Mistral, Google (Gemma), Alibaba (Qwen), and others. Groq does not train its own base models; it accelerates inference.