Groq, Inc. is an American artificial intelligence (AI) company that builds an AI accelerator application-specific integrated circuit (ASIC), which it calls the Language Processing Unit (LPU), and related hardware to accelerate the inference performance of AI workloads.
Examples of AI workloads that run on Groq's LPU include large language models (LLMs), image classification, anomaly detection, and predictive analysis.
Groq is headquartered in Mountain View, CA, and has offices in San Jose, CA, Liberty Lake, WA, Toronto, and London, as well as remote employees throughout North America and Europe.
Groq received seed funding from Social Capital's Chamath Palihapitiya, with a $10 million investment in 2017 and soon after secured additional funding.
In April 2021, Groq raised $300 million in a series C round led by Tiger Global Management and D1 Capital Partners. Current investors include The Spruce House Partnership, Addition, GCM Grosvenor, Xⁿ, Firebolt Ventures, GlobalCapital, and Tru Arrow Partners, as well as follow-on investments from TDK, XTX Markets, Boardman Bay Capital Management, and Infinitum Partners. After Groq's series C funding round, it was valued at over $1 billion, making the startup a unicorn.
On March 1, 2022, Groq acquired Maxeler Technologies, a company known for its dataflow systems technologies.
On August 16, 2023, Groq selected Samsung Electronics' foundry in Taylor, Texas to manufacture its next-generation chips on Samsung's 4-nanometer (nm) process node. This was the first order at this new Samsung chip factory.
On February 19, 2024, Groq soft-launched a developer platform, GroqCloud, to attract developers to the Groq API and rent access to its chips. On March 1, 2024, Groq acquired Definitive Intelligence, a startup known for offering a range of business-oriented AI solutions, to help with its cloud platform.
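As a rough illustration of how a developer might rent inference through the Groq API described above, the sketch below assembles an HTTP request in the familiar OpenAI-style chat-completions shape. The endpoint URL, model identifier, and header layout here are illustrative assumptions, not documented specifics from this article; the request is constructed but deliberately not sent.

```python
import json

# Assumed GroqCloud endpoint, following the OpenAI-compatible
# chat-completions convention (illustrative, not from the article).
API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(api_key: str, prompt: str, model: str = "llama2-70b-4096"):
    """Assemble headers and a JSON body for a chat-completion call.

    The model identifier is a hypothetical example; a real caller
    would pick a model listed on GroqCloud.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(body)

headers, body = build_request("sk-example", "Explain LPUs in one sentence.")
print(json.loads(body)["model"])
```

A real client would POST `body` with `headers` to `API_URL` using any HTTP library; keeping the request assembly separate from transport makes the payload easy to inspect and test.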
Groq raised $640 million in a series D round led by BlackRock Private Equity Partners in August 2024, valuing the company at $2.8 billion.
In May 2025, Groq announced a deal with Bell Canada to expand national AI infrastructure, becoming the exclusive inference provider for Bell's sovereign AI network.
The LPU features a functionally sliced microarchitecture, where memory units are interleaved with vector and matrix computation units.
In addition to its functionally sliced microarchitecture, the LPU can also be characterized by its single core, deterministic architecture.
The first generation of the LPU (LPU v1) yields a computational density of more than 1 TeraOp/s per square mm of silicon for its 25×29 mm 14 nm chip operating at a nominal clock frequency of 900 MHz.
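The density figure above implies a total throughput for the whole die. A back-of-the-envelope check, treating the stated "more than 1 TeraOp/s per square mm" as a lower bound:

```python
# LPU v1 figures quoted above; the density is a stated lower bound.
die_width_mm = 25
die_height_mm = 29
density_tops_per_mm2 = 1.0  # "more than 1 TeraOp/s per square mm"

die_area_mm2 = die_width_mm * die_height_mm               # 725 mm^2
implied_total_tops = die_area_mm2 * density_tops_per_mm2  # >725 TeraOps/s
print(die_area_mm2, implied_total_tops)  # 725 725.0
```

So the quoted density implies more than 725 TeraOps/s across the full die.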
Groq currently hosts a variety of open-source large language models running on its LPUs for public access. Access to these demos is available through Groq's website. The LPU's performance while running these open-source LLMs has been independently benchmarked by ArtificialAnalysis.ai in comparison with other LLM providers. The LPU's measured performance is shown in the table below:
Language Processing Unit LLM Performance

Model Name    Tokens/second (T/s)    Latency (seconds)
Llama2-70B    253 T/s                0.3 s
—             473 T/s                0.3 s
—             826 T/s                0.3 s
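The table's two metrics combine naturally into an estimate of wall-clock response time: latency (time to first token) plus the token count divided by throughput. A small sketch using the Llama2-70B row; the 1000-token response length is an assumption chosen for illustration.

```python
# Estimate total generation time from the benchmark figures above:
# time-to-first-token (latency) plus n_tokens / throughput.
def generation_time_s(n_tokens: int, tokens_per_s: float, latency_s: float) -> float:
    return latency_s + n_tokens / tokens_per_s

# Llama2-70B row: 253 T/s throughput at 0.3 s latency.
t = generation_time_s(1000, 253.0, 0.3)
print(round(t, 2))  # 4.25 -> about 4.25 s for a 1000-token response
```

At these throughputs, total response time is dominated by generation rather than latency for long outputs.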