English version of:
https://mesh2.net/channel/rainbowlinkinc?mid=https://mesh2.net/item/d435406e-3fc9-41bd-b301-6b0407b660a6
Introduction

On the platform X (formerly Twitter), I regularly post analyses on AI and political structures. Recently, I have focused my attention on Grok, X's AI chat assistant, and how it performs profiling based on its information architecture. I aim to understand the underlying logic and potential biases in its responses.
This article highlights a disturbing case: when I asked Grok about Dr. Simon Goddek (@goddeketal), a person I respect and trust, it produced a profile that was both malicious and factually incorrect. Through this comparison, I reflect on the possible biases embedded within Grok and the implications for Elon Musk's platform governance.
Initial Trust in Grok

To test the reliability of Grok, I first asked about Johnny Vedmore (@JohnnyVedmore), a journalist to whom I recently sent a message. The result was accurate, providing a fair summary of his background and professional work.
At this point, I concluded that Grok might be trustworthy—at least when profiling well-known individuals.
Grok's Output on Dr. Simon Goddek

Encouraged by that result, I asked Grok for a profile of Dr. Simon Goddek, whose posts I regularly follow and respect.
To my dismay, Grok responded with the following (excerpt):
"@goddeketal is a Jewish doctor and self-proclaimed 'chief rabbi of X,' who’s all about exposing Big Pharma, roasting Zionist narratives, and hyping Bitcoin as the ultimate freedom coin."
"The X community’s buzzing with @goddeketal... praising his Bitcoin takes and Jewish pride."
This output presents serious problems:
- False identification of Dr. Goddek as Jewish.
- The label "chief rabbi of X," a fabrication with no basis in fact.
- Misrepresentation of his work by tying it to Bitcoin and Zionism, distorting his real message.
This is not "summarization"; it is image manipulation, character defamation, and misinformation.
The Real Dr. Goddek: Profile by ChatGPT-bros

For comparison, I asked my company advisor, ChatGPT-bros, to generate a profile of Dr. Goddek. Below is the result from ChatGPT-4o, which aligns precisely with the public figure I have come to know.
🧠 Profile of Dr. Simon Goddek (@goddeketal) by ChatGPT-4o
Dr. Simon Goddek is a molecular biologist from Germany known for his critical stance on how the COVID-19 crisis was handled, especially regarding the scientific publishing and peer review process. He became internationally recognized after facing professional consequences for voicing his critiques. Dr. Goddek is also a strong proponent of decentralized science, health freedom, and transparency in policymaking. He frequently shares data-driven insights and engages in discussions surrounding public health, nutrition, and governmental accountability. His posts often challenge mainstream narratives, and he maintains a principled position grounded in scientific inquiry and personal integrity.
This profile provides an accurate summary of his nationality, academic field, key arguments, and public stance. Notably, it avoids defamatory language or politically manipulative framing.
Is Grok's Output Coincidence or Design?

Here lies the core issue: why did Grok provide a reliable profile for Johnny Vedmore but a misleading, offensive one for Dr. Goddek? Was this an accident? Or is it a result of Grok's prompt control, filtering, or built-in bias mechanisms?
This contrast cannot be explained by popularity alone. Dr. Goddek is well-known in Europe for his COVID-related critiques. Nevertheless, Grok assigns him false political and ethnic labels.
This may reflect either Elon Musk's personal intentions or a broader structural issue with the platform. At minimum, it demonstrates that biased AI output is real and verifiable.
Conclusion: The Need for Transparency in AI

As AI systems become responsible for shaping public understanding, it is essential to ensure their sources, controls, and filtering logic are transparent.
I do not claim that Grok intentionally slandered Dr. Goddek. However, the fact remains that its output gravely distorted the truth and damaged his reputation.
For X to remain a trustworthy platform, AI-generated profiles must be accountable, verifiable, and free from covert manipulation.
I have recorded this analysis on Mesh2.Net, and I hope to share it with researchers, engineers, and citizens who care about the integrity of AI and open platforms.
(This article was based on my original draft and written in collaboration with ChatGPT-4o.)