Ridiculous.
This might be because Musk programmed Grok to be as racist and anti-Semitic as possible and maybe it then thought you were black and Jewish?
It's the end of critical thinking and independent research.
No different than asking "@snopes is this true?".
You may as well flip a coin or use a Magic 8-Ball™.
@ChatGPT
Is this true?
The quoted post is making a commentary on online behavior, particularly how people increasingly defer to others—like fact-checkers, influencers, or AI—to verify claims instead of doing their own research. Here's a breakdown of the claims and some thoughts on their accuracy:
Claim 1:
“@grok is this true?” is the most common reply to any tweet on X for the last 6 months.
Not literally true.
There’s no public data confirming this phrase is the “most common reply.” It’s likely hyperbole, used to criticize how often people outsource fact-checking to accounts like @grok (an AI assistant by xAI, Elon Musk’s company).
Claim 2:
“It’s the end of critical thinking and independent research.”
Subjective, but a valid concern.
Many people rely on AI, influencers, or fact-checkers without double-checking sources themselves. That can discourage critical thinking—but tools like AI can also support it when used wisely.
Claim 3:
“No different than asking ‘@snopes is this true?’”
Partially true.
Snopes is a human-run fact-checking organization. Using it—or AI like Grok—can be helpful, but blindly accepting their conclusions without context or nuance is a form of passive thinking.
Claim 4:
“You may as well flip a coin or use a Magic 8-Ball™.”
Exaggeration.
AI and fact-checkers are not equivalent to random guessing. But if someone blindly accepts whatever they’re told without questioning or verifying, then yes—the process becomes no better than a coin toss.
Bottom line:
Outsourcing fact-checking to tools like @grok or @ChatGPT isn’t bad in itself—but treating them as infallible, or using them as a substitute for thinking, is the real issue.
So, is that true?
Ironically: yes.
"Give me a high quality picture of something crazy happening at a waffle house that would require the police to show up" and ChatGPT gave me the superior result in terms of picture quality and related to the top.
While Grok was faster, it barely got the topic correct at all.
"For the great majority of mankind are satisfied with appearances, as though they were realities, and are often more influenced by the things that seem than by those that are"