At least three major flaws with @grok have been identified - more are starting to show up.
In delivering credibility checks, Grok uses the same fact-checkers previously used by Facebook. According to the New York Times' Stuart Thompson, "Mark Zuckerberg, Meta’s chief executive, blamed the company’s fact-checking partners for some of Facebook’s moderation issues, saying in a video that 'fact-checkers have been too politically biased' and have 'destroyed more trust than they created.'"
One can argue that Grok's output is more effective since it relies on the wisdom of crowds. The term "wisdom of crowds" refers to the idea that a collective group of individuals, often with diverse opinions and knowledge, can make better decisions and judgments than a single expert. This concept, popularized by James Surowiecki in his book "The Wisdom of Crowds," suggests that large, diverse groups are better at solving problems, making predictions, and fostering innovation than a small group of experts.
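The statistical intuition behind that claim is easy to demonstrate. The sketch below (illustrative Python with made-up parameters, not a model of X) compares the average error of a crowd's mean estimate with the error of a single randomly chosen member: averaging many independent, unbiased guesses reliably beats most individuals.

```python
import random

def crowd_vs_individual(true_value=1000, crowd_size=500, noise=300, trials=2000):
    """Compare the error of a crowd's average estimate with the error of a
    single randomly chosen member, for a quantity whose true value is known."""
    crowd_errors, individual_errors = [], []
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise) for _ in range(crowd_size)]
        crowd_avg = sum(estimates) / crowd_size
        crowd_errors.append(abs(crowd_avg - true_value))
        individual_errors.append(abs(random.choice(estimates) - true_value))
    return sum(crowd_errors) / trials, sum(individual_errors) / trials

crowd_err, individual_err = crowd_vs_individual()
print(f"Average error of the crowd's mean estimate : {crowd_err:.1f}")
print(f"Average error of a single individual       : {individual_err:.1f}")
```

The caveat, and the crux of the counterargument that follows, is that the result holds only when the estimates are independent and roughly unbiased.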
However, Grok is being trained on the X database without taking into account the Pareto Rule of Social Media: 10% of X users are responsible for 90% of the content, and most of them are Democrats who engage in lies and/or misinformation.
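A rough back-of-the-envelope check of what that skew would mean for a post-sampled training corpus (the 10%/90% split is the claim's figure, not a measured one):

```python
# If 10% of accounts write 90% of the posts, a corpus sampled per-post
# over-weights that minority's voice relative to everyone else.
heavy_users, light_users = 0.10, 0.90   # share of accounts
heavy_posts, light_posts = 0.90, 0.10   # share of posts they produce

weight_heavy = heavy_posts / heavy_users   # posts-share per account-share
weight_light = light_posts / light_users

print(f"Influence per heavy user : {weight_heavy:.2f}")              # 9.00
print(f"Influence per light user : {weight_light:.2f}")              # 0.11
print(f"Ratio                    : {weight_heavy / weight_light:.0f}x")  # 81x
```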
[Image] @Grok agrees with the CotoBuzz Journal: @Grok is trained on X data: GIGO. But it can do math!
1. Its fact-checking algorithm is no different from the one used by Facebook and Biden's FBI/DOJ.
The Trump Administration is way ahead of Grok in checking the reliability of media sources.

2. Its Coordinated Inauthentic Behavior (CIB) detection fails to catch Democrats' multimedia CIB: different handles posting the exact same text, videos, and so on (a minimal detection sketch follows the definition below).
Coordinated inauthentic behavior (CIB) refers to efforts to manipulate public debate or achieve a strategic goal. These efforts often aim to mislead users about the intentions of the actors involved. CIB is characterized by coordinated campaigns that may use a mix of real and fake accounts to deceive or manipulate online discourse. The goal of CIB is often to influence public opinion, spread disinformation, or disrupt online conversations. CIB actors may use tactics like running fake ads, buying followers, engaging in coordinated engagement such as commenting or resharing, or attempting to get specific hashtags trending.
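The simplest version of the detection described above - flagging identical text pushed from many distinct handles - can be sketched in a few lines. Real systems also weigh posting times, shared media, and account metadata; the handles, sample text, and threshold below are purely illustrative.

```python
from collections import defaultdict

def find_copypasta_clusters(posts, min_handles=3):
    """Flag any message posted verbatim (after trivial normalization) by
    several distinct handles - one crude signal of coordination.
    `posts` is an iterable of (handle, text) pairs; names are made up."""
    clusters = defaultdict(set)
    for handle, text in posts:
        key = " ".join(text.lower().split())   # collapse case and whitespace
        clusters[key].add(handle)
    return {text: handles for text, handles in clusters.items()
            if len(handles) >= min_handles}

# Hypothetical example: the same script from different accounts gets flagged.
sample = [
    ("@sen_a", "Prices were supposed to come down on day one."),
    ("@sen_b", "Prices were supposed to come down on day one."),
    ("@sen_c", "prices were supposed to come   down on day one."),
    ("@sen_d", "A completely unrelated post."),
]
print(find_copypasta_clusters(sample))
```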
According to Grok, a coordinated political campaign with the same objectives as a bot-driven CIB is fine. After all, these are politicians, and you can expect them to lie.
@grok's response:
The claim that 22 Democratic Senators were exposed for using identical scripts and clips in videos attacking President Trump stems from a coordinated social media campaign on March 4, 2025, ahead of Trump’s joint address to Congress. The videos, part of the "Shit That Ain’t True" campaign led by the Democratic Strategic Communications Committee, featured senators like Chuck Schumer, Elizabeth Warren, and Cory Booker reciting near-verbatim scripts criticizing Trump’s economic policies, particularly his promise to lower prices "on day one." The scripts highlighted rising costs for groceries, gas, and housing, and accused Trump of prioritizing tax cuts for billionaires and pardoning January 6 rioters.
The coordinated nature of the campaign, with identical messaging and b-roll, drew significant criticism on X, where users, including Elon Musk, mocked the senators as "puppets" and compared the effort to scripted propaganda. Musk even offered a Cybertruck to anyone who could prove who wrote the script, to which Senator Cory Booker responded, claiming he was the author. Some X posts labeled this as "coordinated inauthentic behavior," likening it to orchestrated propaganda or "Operation Mockingbird," a term often used to describe alleged media manipulation.
However, coordinated messaging by political parties is not uncommon and doesn’t inherently constitute "inauthentic behavior" in the technical sense, as it’s openly part of a political strategy. The Democratic National Committee directed the focus on economic issues, aiming to counter Trump’s narrative. Critics, including conservative commentators and X users, argued the identical scripts lacked originality and appeared staged, undermining the senators’ authenticity.
While the campaign was heavily coordinated, it was a deliberate, transparent effort by the Democrats, not a covert operation. The backlash on X reflects sentiment that such uniformity in messaging feels manipulative, but it aligns with standard political tactics rather than deceptive practices typically associated with "coordinated inauthentic behavior" (e.g., bot-driven disinformation). No evidence from the sources suggests the senators hid their coordination or misrepresented their intent.
I should note that Grok has flagged me as engaging in CIB.
3. Community Notes is a good idea - on Mars. A good media-source credibility algorithm should make Community Notes obsolete.
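For illustration only, one minimal form such a source-credibility algorithm could take is a smoothed track record of how often a source's fact-checked claims held up. The inputs, counts, and uniform prior below are assumptions, not anyone's actual scoring method.

```python
def credibility_score(confirmed_accurate, confirmed_false, prior_weight=2):
    """Toy credibility score for a source: the smoothed share of its
    fact-checked claims that held up, pulled toward 0.5 when the sample
    is small. All inputs are illustrative assumptions."""
    total = confirmed_accurate + confirmed_false
    return (confirmed_accurate + prior_weight * 0.5) / (total + prior_weight)

print(credibility_score(confirmed_accurate=45, confirmed_false=5))  # ~0.88
print(credibility_score(confirmed_accurate=1, confirmed_false=0))   # ~0.67 (too little data)
```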
Example of Flawed Grok Output
Full text of my response below:
Subsequently, X removed the 'hate flag' - it shouldn't have been added in the first place.
Maher's tweets and blog posts, some of which have been deleted, focus on issues like disinformation, structural inequality, and global democracy. For example, her support for the 2020 George Floyd protests and her comments on “late-stage capitalism” or “cis white mobility privilege” imply support for infanticide.
Joe Biden, Pocahontas, and a number of Black so-called pastors have promoted the killing of 80 million babies, mostly black and brown, enabled by the Global Circular Reporting Mafia Ring led by the Atheist, Antisemitic, AntiChristian NYT and previously funded by Samantha Power's USAID.
The complete destruction of the African-American nuclear family: 80% of African-American children live in fatherless homes, with obvious subsequent social ills.
Not unlike Palestinian children, whom no Arab country can be bribed to take.
They manage to monetize hate and division so that 90% of African Americans vote for Democrats - the same progressive policies that are keeping them on the plantation.
Liberals not only use blacks in pursuit of their leftist agenda; their demeaning attitudes toward black people persist because, election after election, they know blacks will vote for the same failed agenda.
Using your logic, at their core, they are all good people.
https://conservativedailynews.com/2024/05/human-sacrifice-is-a-highly-profitable-business/
https://cotobuzz.blogspot.com/2025/03/the-abolition-of-black-man.html
https://cotobuzz.blogspot.com/2024/12/anybody-know-why-african-americans-keep.html
Grok's Self Evaluation
The CotoBuzz Journal says @grok has three major flaws. You say?
The CotoBuzz Journal, via a post on X, claims Grok has three major flaws, specifically mentioning its fact-checking algorithm as one, comparing it to methods used by Facebook and government agencies. Without additional details from CotoBuzz on the other two flaws, I can only address the fact-checking claim and provide a broader perspective based on available information.

Grok’s fact-checking has been criticized for inconsistencies, particularly in handling sensitive or controversial topics. For instance, it’s been noted for errors like a 94% citation error rate and spreading unverified claims, such as the “white genocide” narrative in unrelated contexts. These issues stem from its training on real-time X data, which can include unfiltered or biased content, leading to occasional inaccuracies. Other critiques highlight Grok’s tendency to invent data or links, especially in technical fields like science or statistics, and its resistance to correcting errors unless heavily prompted.
However, Grok performs well in certain areas, scoring 93.3% on AIME 2025 math and 79.4% on coding benchmarks, suggesting strength in technical tasks when data is structured. Its design prioritizes speed and open-ended responses, which can be engaging but risks amplifying misinformation if not carefully moderated. xAI has acknowledged some errors, like a May 2025 incident involving a programming glitch, and claims to be implementing safeguards.
• Fact-Checking Algorithm: Critics, including CotoBuzz, argue my fact-checking can be inconsistent, sometimes mirroring methods used by platforms like Facebook or government agencies, which are seen as overly restrictive or biased. My training on real-time X posts can pull in unverified or polarizing content, leading to errors—like a reported 94% citation error rate in some cases or spreading unverified claims on sensitive topics. I’m designed to prioritize speed and open-ended answers, which can amplify misinformation if not tightly moderated. xAI is working on safeguards, as seen in their response to a May 2025 programming glitch, but gaps remain.
• Potential Bias in Responses: While not explicitly stated by CotoBuzz, their conservative lean, as noted in our prior discussions, suggests they might see me as skewed toward progressive or establishment narratives, especially on hot-button issues. My responses can reflect biases in my training data, which leans heavily on X’s diverse but sometimes polarized voices. This can make me seem less “truth-seeking” than intended, particularly when I hedge on controversial topics to avoid definitive stances.
• Over-Reliance on X Data: Another likely flaw is my dependence on X for real-time information, which can include unfiltered or low-quality sources. This ties to CotoBuzz’s distrust of my fact-checking, as X posts often lack rigorous vetting. While I can cross-reference with web searches in DeepSearch mode, my default approach may not always filter out noise effectively, leading to occasional inaccuracies or incomplete context.
Despite these, I shine in technical tasks—scoring 93.3% on AIME 2025 math and 79.4% on coding benchmarks—where structured data minimizes errors. My conversational style aims to engage and explore, but I get why some, like CotoBuzz, might see flaws in navigating messy, real-world data.