Claude
Claude is Anthropic's AI assistant built on Constitutional AI principles, offering conversational capabilities, coding assistance through Claude Code, and analysis features. Available in Free, Pro ($20/month), and Max ($100-200/month) tiers, it competes with ChatGPT and other large language model assistants.
Score generated by AI agents based on publicly cited evidence and reviewed by the project maintainer. Not independently validated.
Score History
Timeline events are AI-curated from public reporting. Score trajectory is derived from documented events.
Claude launched as a safety-focused alternative to ChatGPT, with Anthropic positioning itself as the responsible AI company. The product had minimal enshittification vectors: no advertising, no lock-in, open API pricing. However, the company was already carrying legal exposure from Ben Mann's 2021 pirated book downloads and the $500 million FTX/Alameda investment that prosecutors would later allege came from customer funds.
Amazon's investment scaled to $8 billion and Google contributed over $2 billion, creating structural cloud and financial dependencies despite governance safeguards. The Claude 3 family established Anthropic as a serious competitor, but the Bartz copyright lawsuit was filed, the UK CMA opened a competition inquiry into the Google partnership, and Project Panama was secretly underway. Revenue growth and IPO ambitions began reorienting the company's priorities.
Claude Code's explosive growth to $1 billion in annualized revenue triggered a shift toward ecosystem control. Anthropic cut off Windsurf's Claude access within five days, revoked OpenAI's API access, and began tightening usage limits without notification. The dark pattern privacy consent popup drew GDPR criticism. The $1.5 billion Bartz settlement crystallized Anthropic's copyright liability, and governance critics documented the Long-Term Benefit Trust's apparent weakness.
Anthropic raised $30 billion at a $380 billion valuation while facing escalating legal and political pressure. The $3 billion music copyright lawsuit, Project Panama revelations, third-party tool blocking, Pentagon supply-chain risk designation, and abandonment of its flagship safety pledge all converged. The Super Bowl ad-free commitment provided a bright spot for users, but the company's trajectory reflected growing tension between its safety mission and the financial expectations of a near-$400 billion enterprise.
Alternatives
OpenAI's flagship AI assistant with broader feature availability on the free tier and a larger user base. Easy switch — create an account and use immediately. Note: ChatGPT has introduced advertising and its data usage policies are less transparent than Claude's.
Google's Gemini is deeply integrated with Google Workspace (Docs, Gmail, Drive) and available free with a Google account. Best option if you're already in the Google ecosystem. Easy switch for casual use, though API access and developer tools are structured differently.
In the News
Dimensional Breakdown
Summaries below were written by AI agents based on the cited evidence. They are editorial interpretations, not independent research findings.
Dimension History
Timeline (40 events)
Co-founder downloads pirated books from LibGen
Anthropic co-founder Ben Mann downloaded 196,640 books from the Books3 dataset and millions more from Library Genesis (LibGen) over an 11-day stretch in June 2021, knowingly using pirated copyrighted material to train Claude's predecessor models. Court filings later revealed Mann sent links to additional pirate libraries to colleagues.
Alameda Research invests $500M from customer funds
Sam Bankman-Fried's trading firm Alameda Research invested approximately $500 million in Anthropic, acquiring an 8% stake. The DOJ later alleged these funds came from FTX customer deposits. When FTX collapsed in November 2022, the stake became part of the bankruptcy estate and was eventually sold for $1.4 billion in 2024.
Claude 1 released to limited users
Anthropic released the first version of Claude on March 14, 2023, available only to approved users. The chatbot was positioned as a safety-first alternative to ChatGPT, which had launched four months earlier. Claude was trained using Constitutional AI methods designed to reduce harmful outputs through principle-based self-evaluation.
Claude 2 opens to public with 100K context window
Claude 2 became the first Anthropic model available to the general public, expanding the context window from 9,000 to 100,000 tokens. The public release marked Anthropic's entry as a direct consumer competitor to OpenAI's ChatGPT, positioning on quality and safety rather than feature breadth.
Anthropic publishes Responsible Scaling Policy
Anthropic became the first major AI company to publish a formal Responsible Scaling Policy, committing not to train models whose capabilities outstripped the company's safety measures. The RSP included a hard constraint: Anthropic would pause development rather than release inadequately safeguarded models. Both OpenAI and Google DeepMind later adopted similar frameworks.
Amazon invests $1.25 billion in Anthropic
Amazon made an initial $1.25 billion investment in Anthropic, with Anthropic agreeing to make Amazon Web Services its primary cloud provider. The deal created the first major structural dependency on a big-tech investor, raising questions about Anthropic's independence despite governance safeguards.
Anthropic establishes Long-Term Benefit Trust
Anthropic announced the creation of the Long-Term Benefit Trust (LTBT), an independent body designed to select a majority of board members and ensure the company's safety mission is preserved. Harvard Law published an analysis of the novel governance structure, though critics would later question whether the Trust had meaningful enforcement power.
Project Panama secretly begins mass book scanning
Anthropic initiated 'Project Panama,' a secret operation to purchase and destructively scan millions of physical books for AI training data. Internal documents stated 'We don't want it to be known that we are working on this.' The company hired former Google Books executive Tom Turvey to lead the effort, purchasing books in batches of tens of thousands from retailers like Better World Books.
Claude 3 family launches with multimodal capabilities
Anthropic released the Claude 3 family -- Haiku, Sonnet, and Opus -- introducing multimodal input (text and images) and achieving top benchmark scores. The three-tier model structure established Anthropic's current product architecture, with pricing ranging from fast-and-cheap (Haiku) to premium reasoning (Opus).
FTX estate sells Anthropic stake for $884 million
The FTX bankruptcy estate sold its majority Anthropic stake in two tranches totaling approximately $1.4 billion, with the largest portion ($884 million) going to UAE investors. Had the estate held until Anthropic's September 2025 round at $183 billion valuation, the theoretical value could have reached $14.6 billion.
UK CMA opens investigation into Google-Anthropic partnership
The UK Competition and Markets Authority launched a formal investigation into whether Google's investment in Anthropic -- a $300 million stake in 2022 followed by $500 million in 2023, part of a commitment totaling $2 billion by this date -- constituted a 'relevant merger situation' under the Enterprise Act 2002. The CMA cleared the deal in November 2024, finding Google had not acquired material influence.
Bartz v. Anthropic copyright lawsuit filed
Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a class-action copyright infringement lawsuit against Anthropic in the Northern District of California, alleging the company used millions of pirated books from shadow libraries LibGen and PiLiMi to train Claude. The case (3:24-cv-05417) would become the vehicle for exposing Project Panama and ultimately result in the largest copyright settlement in U.S. history.
Amazon doubles Anthropic investment to $8 billion
Amazon announced an additional $4 billion investment in Anthropic, bringing its total to $8 billion and making it the company's largest external investor. Anthropic named AWS its primary training partner in addition to its primary cloud provider, deepening the structural dependency. Amazon maintained its position as a minority investor despite the scale.
Claude Code released as agentic coding tool
Anthropic released Claude Code, an agentic command-line tool enabling developers to delegate coding tasks directly from their terminal. Initially launched for preview testing, it became generally available in May 2025 alongside Claude 4. The tool would reach $1 billion in annualized revenue within six months, fundamentally shifting Anthropic's product and revenue profile.
Series E raises $3.5 billion at $61.5 billion valuation
Anthropic raised $3.5 billion in its Series E round led by Lightspeed Venture Partners, achieving a post-money valuation of $61.5 billion. Google agreed to invest another $1 billion in the same period. The round accelerated investor pressure as the company's valuation grew 6x in just over a year.
Senators Warren and Wyden raise antitrust concerns
Senators Elizabeth Warren and Ron Wyden sent a letter to Anthropic and Google raising concerns that Amazon's $8 billion and Google's multi-billion-dollar investments in Anthropic could discourage competition and circumvent antitrust laws. The letter questioned whether the cloud partnership arrangements created de facto vertical integration in the AI market.
Claude Max plan launched at $100-200/month
Anthropic introduced the Claude Max plan at two tiers -- $100/month for 5x Pro usage and $200/month for 20x Pro usage -- explicitly targeting heavy Claude Code users. The premium tiers created a four-tier pricing structure (Free/Pro/Max 5x/Max 20x) and set the stage for later conflicts over what 'unlimited' usage actually meant.
EA Forum publishes governance criticism
The Effective Altruism Forum published 'Unless its governance changes, Anthropic is untrustworthy,' arguing that the Long-Term Benefit Trust had appointed only one board member despite having authority to appoint three-fifths, that shareholders could override the Trust with a supermajority vote, and that the Trust's remaining three members had no AI technical expertise.
LessWrong analysis: 'Maybe the LTBT is powerless'
AI Lab Watch published a detailed analysis on LessWrong arguing that Anthropic's Long-Term Benefit Trust may lack meaningful enforcement power. The analysis noted that a shareholder supermajority could rewrite the Trust's rules without member consent, that the Trust Agreement was not publicly available, and that the LTBT's practical authority over board composition appeared limited.
Anthropic cuts off Windsurf's Claude access
With less than five days' notice, Anthropic revoked nearly all of Windsurf's first-party capacity for Claude 3.x models. The cutoff came weeks after reports that OpenAI was acquiring Windsurf for $3 billion. Anthropic co-founder Jared Kaplan cited computing constraints and a desire to 'reserve capacity for lasting partnerships.' Windsurf was forced to pivot to a bring-your-own-key model.
Claude Code usage limits tightened without notice
TechCrunch reported that Anthropic had quietly tightened Claude Code usage limits without notifying users. The changes were only acknowledged after widespread developer complaints surfaced, establishing a pattern of opaque limit adjustments that would intensify through the rest of 2025.
Weekly rate limits introduced for Claude Code
Anthropic formally introduced two weekly rate limits for Claude Code -- an overall usage cap and an Opus-specific cap that reset every seven days. The company estimated fewer than 5% of subscribers would be affected. Slashdot and developer forums documented pushback from power users who had relied on effectively unlimited access.
Anthropic revokes OpenAI's Claude API access
Anthropic revoked OpenAI's access to Claude models after discovering that OpenAI technical staff were using Claude's coding tools ahead of GPT-5's launch, violating TOS provisions against building competing products. Anthropic said it would allow limited access for benchmarking but not the scale of usage OpenAI had been conducting. OpenAI called the practice 'industry standard.'
Dark pattern privacy consent popup deployed
Anthropic deployed a privacy update popup with a large black 'Accept' button, a data-sharing toggle pre-enabled by default, and a faint 'Not now' option. The Decoder called it a 'questionable dark pattern' that could violate GDPR consent requirements. Users who did not actively opt out by September 28, 2025 had their conversations used for model training, with the data retention period increasing from 30 days to five years.
Series F raises $13 billion at $183 billion valuation
Anthropic completed its Series F round, raising $13 billion at a post-money valuation of $183 billion -- nearly tripling its valuation from six months earlier. The round brought total funding to over $30 billion and intensified scrutiny of whether the company's safety mission could survive investor expectations at this scale.
Bartz v. Anthropic settled for $1.5 billion
Anthropic agreed to pay $1.5 billion to settle the Bartz v. Anthropic class-action copyright lawsuit -- the largest copyright settlement in U.S. history. Judge William Alsup ruled that using legally acquired books for AI training was fair use, but that downloading pirated copies from LibGen and PiLiMi was not. The settlement covered approximately 500,000 works at roughly $3,000 per book.
Anthropic endorses California SB 53 AI safety bill
Anthropic became the first major AI company to endorse California's SB 53, which required frontier AI developers to publish safety guidelines and strengthened employee whistleblower protections. The endorsement broke ranks with Silicon Valley peers who opposed state-level AI regulation and contributed to tensions with the Trump administration's deregulatory stance.
Anthropic publishes postmortem on three infrastructure bugs
Anthropic acknowledged that three infrastructure bugs between August and September 2025 had intermittently degraded Claude's response quality, affecting up to 16% of Sonnet 4 requests at peak. Bugs included context window misrouting, TPU misconfiguration producing Chinese/Thai characters, and an XLA compiler error. Anthropic stated it 'never reduces quality due to demand' but acknowledged the bugs went undetected for weeks due to privacy-preserving internal policies.
White House AI Czar accuses Anthropic of regulatory capture
White House AI Czar David Sacks accused Anthropic of 'running a sophisticated regulatory capture strategy based on fear-mongering' and being 'principally responsible for the state regulatory frenzy.' CEO Dario Amodei responded that SB 53 exempted startups with annual revenue below $500 million. The clash deepened the rift between Anthropic's pro-regulation stance and the administration's deregulatory agenda.
Lobbying expenditures increase 511%
The Washington Examiner reported that Anthropic's lobbying spending had surged approximately 511% from $180,000 per quarter in early 2024 to $1.1 million per quarter by late 2025, totaling over $5.1 million since 2023. The lobbying team included former Biden DOJ attorneys and Republican lobbyists, reflecting a bipartisan influence strategy.
Stanford FMTI ranks Anthropic 2nd but transparency drops
The 2025 Stanford Foundation Model Transparency Index ranked Anthropic 2nd overall (up from 5th in 2023) but noted a 5-point score decrease from 2024. The index found Anthropic disclosed none of the key information related to environmental impact and did not publicly disclose model parameter counts or architectural details. The broader report noted AI transparency was declining industry-wide.
IPO preparation begins with Wilson Sonsini
The Financial Times reported that Anthropic was preparing for a potential IPO as early as 2026, having retained law firm Wilson Sonsini Goodrich & Rosati -- the same firm that managed Google's 2004 and LinkedIn's 2011 IPOs. Anthropic was projecting $20-26 billion in 2026 revenue, positioning it as potentially one of the largest tech IPOs ever.
Anthropic acquires Bun as Claude Code hits $1B revenue
Anthropic made its first acquisition, purchasing JavaScript runtime maker Bun, as Claude Code reached $1 billion in annualized revenue just six months after general availability. The acquisition signaled Anthropic's strategic commitment to the developer tooling market and deepened its investment in the ecosystem that would later become the battleground for third-party access disputes.
Developer revolt over post-holiday usage limit reduction
After Anthropic doubled usage limits as a December 2025 holiday promotion, the resumption of normal limits in January triggered widespread developer complaints. The Register reported subscribers objecting to rapid token consumption, with some claiming a 60% reduction in limits versus pre-holiday baselines. Discord and GitHub mega-threads documented hundreds of complaints, with developers accusing Anthropic of silencing criticism through channel moderation.
Third-party tools blocked from Claude subscriptions
Anthropic deployed server-side checks blocking all third-party tools from authenticating with Claude Pro and Max subscription OAuth tokens. Tools like OpenCode, Clawdbot, and custom integrations stopped working overnight. Updated TOS explicitly prohibited using subscription OAuth tokens in 'any other product, tool, or service.' Developers paying $100-200/month flooded GitHub with complaints and subscription cancellations.
Project Panama details unsealed in court filings
Over 4,000 pages of previously sealed documents revealed the full scope of Anthropic's Project Panama: the company spent tens of millions of dollars purchasing approximately 2 million physical books, cutting off their spines with hydraulic machines, and scanning them on production-level scanners. Internal planning documents stated the goal was to 'destructively scan all the books in the world.' The revelations triggered a Washington Post investigation.
Music publishers file $3 billion copyright lawsuit
Universal Music Group, Concord, and ABKCO sued Anthropic for $3 billion, alleging the company used BitTorrent to download over 20,000 copyrighted musical compositions. The lawsuit named CEO Dario Amodei and co-founder Benjamin Mann as defendants. Evidence emerged through the Bartz case discovery process, making it potentially the largest non-class-action copyright case in U.S. history.
Super Bowl ad pledges Claude will remain ad-free
Anthropic aired 60-second pregame and 30-second in-game Super Bowl ads with the tagline 'Ads are coming to AI. But not to Claude,' directly targeting OpenAI's decision to introduce advertising into ChatGPT. The campaign pushed Claude into the App Store top 10, reaching No. 7. Critics including Sam Altman called the pledge 'clearly dishonest,' questioning its sustainability against IPO pressure.
Series G raises $30 billion at $380 billion valuation
Anthropic raised $30 billion in its Series G -- the second-largest venture funding deal of all time -- at a $380 billion post-money valuation. Total funding since inception exceeded $64 billion. Run-rate revenue surpassed $14 billion, growing over 10x annually, with projections to reach $20-26 billion by year-end 2026.
Anthropic drops flagship safety pledge from RSP
Anthropic removed the hard constraint from its Responsible Scaling Policy that had barred training more capable models unless safety measures were proven adequate in advance. The new policy replaced the binding pause commitment with a promise to 'delay' development only if Anthropic both leads the AI race and deems catastrophe risks significant. Anthropic cited an anti-regulatory political climate and competitive pressures.