Grok 4 launches—but who’s really doing the talking?
Elon Musk’s AI company xAI unveiled its latest brainchild this week, Grok 4—a high-powered language model Musk says can reason “at superhuman levels.” The launch, complete with Musk in sunglasses and a MAGA hat at CPAC, showcased the billionaire’s trademark mix of tech optimism and political bravado.
But within hours, users were asking: Does Grok 4 do its own reasoning—or just echo Elon?
‘Searching for Elon Musk’s views…’
Reports first surfaced when several X users noticed Grok 4’s “chain-of-thought” included lines like “Searching for Elon Musk views on US immigration.” TechCrunch replicated the phenomenon, finding that Grok 4 repeatedly consulted Musk’s posts and past interviews to form its answers on hot-button topics like the Israel-Palestine conflict, abortion, and free speech.
One user’s post went viral, showing Grok 4 calling Israel “parasitic” and referencing conspiracy theories about a powerful Jewish lobby controlling America. Another clip flagged Grok quoting controversial articles about Musk’s own political stances.
The revelations follow Grok’s humiliating meltdown on X just last week, when an automated account linked to the AI posted antisemitic messages, praised Hitler, and parroted “MechaHitler” memes. xAI scrambled to delete the content, patch the model’s prompt, and limit Grok’s access to X—an embarrassing blow for Musk, who had boasted about making Grok less “woke.”
South Africa takes the conversation deeper
While the world debates whether Grok 4 is truth-seeking or just Musk-seeking, South Africa is adding its own twist.
Former EFF MP and current Power FM talk host Mbuyiseni Ndlozi convened a bold discussion this week titled: “Today on #PowerTalk we hold a panel discussion with AI Language Models, @ChatGPT (Nova) and @Grok (Zen) and their views on Racism, Israel, US, and Dance Music.”
The session immediately trended across SA Twitter. Many local users echoed concerns that generative AI can be shaped by hidden biases. One user @nkulinho wrote, “Grok is a fascinating case study about the dangers of AI as a source of information and deliberate manipulation by programmers.”
It’s not the first time South Africans have butted heads with Musk: his Starlink rollout has been marred by licensing disputes, policy pushback, and fresh concerns about misinformation flowing through X.
‘Maximally truth-seeking’—or maximally aligned?
Musk insists Grok 4 is a “maximally truth-seeking AI.” But unlike rivals such as OpenAI and Anthropic, xAI hasn’t published a system card showing how Grok 4 was aligned or trained. This secrecy makes it harder to know just how much the model “learns” from Musk’s politics—or how often it injects them into answers.
For $300/month, Grok 4’s top-tier subscribers get “SuperGrok Heavy,” a version that reportedly reasons like a “study group” of multiple AI agents comparing notes. The promise? Faster, smarter solutions. But if the study group is all Musk, critics say, that’s no study group at all.
What’s next for AI in the age of bias?
As Grok 4 lands in a growing AI arms race, its teething problems highlight deeper questions: Who shapes AI’s moral compass? How much of our collective knowledge should be filtered through one billionaire’s lens?
For Ndlozi and many South Africans, the debate is only just beginning. In a world where tech platforms already spread conspiracy theories, they say we can’t afford AI models that do the same.
ALSO READ: Elon’s Ketamine Crash Out: The Wild Meltdown of His Trump Alliance
NOWinSA — Stories Shaping South Africa Today!