The Groq AI model is gaining popularity on social media, challenging ChatGPT’s dominance and drawing comparisons to Elon Musk’s similarly named model, Grok.
Groq, the most recent artificial intelligence (AI) product to hit the market, is sweeping social media with its lightning-fast response times and custom hardware that may eliminate the need for GPUs.

Groq became an instant phenomenon after its public benchmark tests went viral on the social media platform X, showing that its computation and response speed outperformed the popular AI chatbot ChatGPT.
The first public demo using Groq: a lightning-fast AI Answers Engine.
— Matt Shumer (@mattshumer_) February 19, 2024
It writes factual, cited answers with hundreds of words in less than a second.
More than 3/4 of the time is spent searching, not generating!
The LLM runs in a fraction of a second. https://t.co/dVUPyh3XGV https://t.co/mNV78XkoVB pic.twitter.com/QaDXixgSzp
This speed comes from the Groq team’s bespoke application-specific integrated circuit (ASIC) designed for large language models (LLMs), which lets the system generate approximately 500 tokens per second. By comparison, GPT-3.5, the model behind the publicly accessible version of ChatGPT, generates approximately 40 tokens per second.
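A quick back-of-the-envelope calculation shows what those throughput figures mean in practice. The sketch below uses the 500 and 40 tokens-per-second numbers quoted above; the 650-token answer length is an illustrative assumption (roughly a few hundred words), not a benchmarked figure.

```python
def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to generate num_tokens at a steady throughput."""
    return num_tokens / tokens_per_second

# Assumed length of a "hundreds of words" answer, in tokens.
answer_tokens = 650

groq_seconds = generation_time(answer_tokens, 500)  # quoted Groq throughput
gpt35_seconds = generation_time(answer_tokens, 40)  # quoted GPT-3.5 throughput

print(f"Groq LPU: {groq_seconds:.2f} s")   # ~1.3 s
print(f"GPT-3.5:  {gpt35_seconds:.2f} s")  # ~16.3 s
```

Under these assumptions, the same answer takes roughly a second on Groq versus a quarter of a minute on GPT-3.5, which is the order-of-magnitude gap driving the viral comparisons.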
Groq Inc, the model’s developer, claims to have invented the first language processing unit (LPU) to run its model, as opposed to the limited and expensive graphics processing units (GPUs) commonly used to run AI models.
Wow, that's a lot of tweets tonight! FAQs responses.
— Groq Inc (@GroqInc) February 19, 2024
• We're faster because we designed our chip & systems
• It's an LPU, Language Processing Unit (not a GPU)
• We use open-source models, but we don't train them
• We are increasing access capacity weekly, stay tuned pic.twitter.com/nFlFXETKUP
However, the company behind Groq is not new: it was founded in 2016 and trademarked the Groq name. Last November, as Elon Musk’s own AI model, also called Grok (but spelled with a “k”), gained popularity, the original Groq’s engineers published a blog post chiding Musk over his choice of name:
“We understand why you might wish to adopt our name. You enjoy fast things (rockets, hyperloops, one-letter company names), and our Groq LPU Inference Engine is the most efficient way to execute large language models (LLMs) and other generative AI applications. However, we must request that you quickly select another name.
Since Groq went viral on social media, neither Musk nor the Grok page on X has commented on the similarities in the names of the two tools.
Nonetheless, many users on the network have begun to compare the LPU model to other popular GPU-based models.
One AI developer described Groq as a “game changer” for companies that demand low latency, the delay between submitting a request and receiving a response.
side by side Groq vs. GPT-3.5, completely different user experience, a game changer for products that require low latency pic.twitter.com/sADBrMKXqm
— Dina Yerlan (@dina_yrl) February 19, 2024
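The latency these comparisons hinge on is straightforward to measure: time the interval between issuing a request and getting the complete response back. A minimal sketch, using a stand-in workload rather than any real model API:

```python
import time

def measure_latency(request_fn) -> float:
    """Wall-clock seconds between issuing a request and receiving the
    full response. request_fn is any callable that sends one request
    and blocks until the response is complete."""
    start = time.perf_counter()
    request_fn()
    return time.perf_counter() - start

# Stand-in workload; a real test would call a model endpoint here.
latency = measure_latency(lambda: sum(range(1_000_000)))
print(f"latency: {latency * 1000:.1f} ms")
```

For chat-style products, time-to-first-token often matters more than total completion time, since it determines how quickly the user sees anything at all; measuring it requires a streaming response rather than the blocking call sketched here.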
Another user said that Groq’s LPUs could offer a “massive improvement” over GPUs in serving the needs of AI applications in the future, and that they could also be a solid alternative to the “high-performing hardware” of Nvidia’s in-demand A100 and H100 chips.
The emergence of Groq has injected new energy into the AI space. While the ChatGPT vs. Groq battle continues, the bigger picture is the rapid development of AI capabilities. Both projects push the boundaries of what’s possible, and their rivalry could lead to even more groundbreaking advancements.
However, it’s crucial to remember the ethical and societal implications of powerful AI. As AI continues to evolve, responsible development and transparent communication are paramount. Groq’s success is a reminder that innovation doesn’t happen in a vacuum, and collaboration and open discussions are essential for shaping a positive future with AI.