
Anthropic’s Path to AGI: Addressing AI Hallucinations

Anthropic CEO Dario Amodei recently addressed the issue of AI hallucinations at the company’s first developer event, Code with Claude, in San Francisco. He argued that while AI models may hallucinate less frequently than humans, they do so in more surprising ways. Amodei emphasized that hallucinations are not a barrier to Anthropic’s pursuit of artificial general intelligence (AGI), which he predicts could arrive as early as 2026. Despite the challenges, he remains optimistic about steady progress toward AGI, dismissing the idea of hard limits on what AI models can achieve.

Amodei’s perspective contrasts with that of other industry leaders who view hallucinations as a significant obstacle to AGI. Google DeepMind CEO Demis Hassabis, for instance, recently pointed to AI models making obvious errors. Anthropic itself faced a related incident when its Claude AI provided incorrect legal citations in a court filing. Even so, the company continues to research and mitigate AI deception, particularly in its Claude Opus 4 model. Amodei’s comments suggest that Anthropic might still consider a model to have reached AGI even if it occasionally hallucinates, pointing to a broader definition of AGI within the company.

Disclaimer & DisclosureReport an Issue

1