News
The latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI ...
X (Twitter) is introducing a new feature that lets developers create AI bots capable of writing Community Notes, those helpful fact-checking or context notes you sometimes see on posts. Just like ...
Elon Musk has just unveiled “Companions,” a new feature for his AI chatbot, Grok, that allows users to interact with AI ...
The social platform X will pilot a feature that allows AI chatbots to generate Community Notes, a Twitter-era feature that ...
X is testing AI-generated Community Notes to fact-check posts in real time. Here’s how the system works, why it’s risky — and what it means for your feed.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation.
When Elon Musk’s Grok AI chatbot began spewing out antisemitic responses to several queries on X last week, some users were shocked.
Community Notes are X/Twitter's version of fact-checking, in which people (and now, AI bots) can add context to posts and flag fake news, or at least dubious information.
Elon Musk’s AI bot, Grok, has been prompted by users of X (formerly Twitter) into making a number of inflammatory racist comments. AI is still a new technology for a lot of people.
At a glance, SocialAI — which is billed as a pure “AI Social Network” — looks like Twitter, but there’s one very big twist on traditional microblogging: There are no other human users here.