== Background: Anthropic ==

Claude is the flagship product of '''Anthropic PBC''', a [[public benefit corporation]] founded in 2021. Anthropic was established by siblings '''Dario Amodei''' (CEO) and '''Daniela Amodei''' (President), along with approximately seven other former senior researchers from [[OpenAI]].<ref name="anthropic-wiki">Wikipedia, ''Anthropic'', 2026.</ref> Dario Amodei had served as Vice President of Research at OpenAI; Daniela as Vice President of Safety and Policy. The founders departed over disagreements about the pace and safety practices of AI development at OpenAI, with the explicit goal of building a lab that treated safety as a first-order concern rather than a secondary consideration.<ref name="founding">Contrary Research, ''Anthropic Business Breakdown & Founding Story'', 2026.</ref>

Anthropic was incorporated as a Delaware public benefit corporation, a legal structure that allows the board to formally weigh public benefit alongside shareholder returns. The company's stated mission is to develop AI that is '''reliable, interpretable, and steerable''', with safety embedded in research and deployment practices rather than retrofitted as compliance. As of early 2026, Anthropic was valued at approximately $380 billion.<ref name="anthropic-wiki"/> The company generates revenue through Claude API access, its consumer product claude.ai, and enterprise partnerships, most notably with [[Amazon Web Services]], which began investing heavily in Anthropic in 2023. Major cloud partnerships also include [[Google Cloud]] and Microsoft Foundry.

=== Constitutional AI ===

Anthropic's principal technical contribution to AI alignment is '''Constitutional AI''' (CAI), introduced in a December 2022 paper. CAI is a training methodology in which a language model is guided by a set of written principles (a "constitution") and trained to self-critique and revise its own outputs against those principles.<ref name="cai-paper">Bai, Y. et al., ''Constitutional AI: Harmlessness from AI Feedback'', 2022.</ref> The process involves two stages, sketched in code after the list:

# '''Supervised learning''': The model generates responses and revises them according to the constitutional principles, producing a curated dataset for fine-tuning.
# '''Reinforcement learning from AI feedback''' (RLAIF): A second model instance acts as a "critic," judging whether responses comply with the constitution. The original model is then trained to maximize these AI-generated preference signals.
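The following Python sketch shows the shape of both stages. It is illustrative only: <code>generate</code> stands in for any text-completion callable, and the two principles are paraphrased examples, not Anthropic's actual constitution.

<syntaxhighlight lang="python">
"""Minimal sketch of the two Constitutional AI stages described above.

Assumptions: `generate` is any prompt -> completion callable supplied by
the reader; CONSTITUTION holds paraphrased example principles.
"""
from typing import Callable

Generate = Callable[[str], str]

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that most respects human rights and dignity.",
]


def self_revise(generate: Generate, user_prompt: str) -> tuple[str, str]:
    """Stage 1 (supervised): draft, then critique and revise the draft
    against each principle; the final revision becomes a fine-tuning target."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify ways the response conflicts with the principle."
        )
        draft = generate(
            f"Response: {draft}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return user_prompt, draft  # (prompt, revised response) pair for SFT


def ai_preference(generate: Generate, prompt: str, a: str, b: str) -> str:
    """Stage 2 (RLAIF): a critic instance labels which of two responses
    better complies with the constitution; these AI-generated labels train
    the preference model whose scores serve as the RL reward signal."""
    verdict = generate(
        f"Prompt: {prompt}\nResponse A: {a}\nResponse B: {b}\n"
        f"Principles: {'; '.join(CONSTITUTION)}\n"
        "Which response better follows the principles? Answer 'A' or 'B'."
    )
    return "A" if verdict.strip().upper().startswith("A") else "B"
</syntaxhighlight>

The key design point is that the same kind of model call drives every step: drafting, critiquing, revising, and judging, which is what removes the human evaluator from the loop.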
RLAIF effectively automates the human-evaluator step in conventional [[reinforcement learning from human feedback]] (RLHF), making alignment training more scalable. Claude 2's constitution drew on sources including the 1948 [[Universal Declaration of Human Rights]] and Apple's Terms of Service.<ref name="anthropic-wiki"/> Claude's current constitution (updated in 2026) articulates not just rules but the reasoning and values behind them; Anthropic has said it intends the document to serve as a model for the broader industry.

=== Responsible Scaling Policy ===

In September 2023, Anthropic published its first '''Responsible Scaling Policy''' (RSP), a self-imposed framework that commits the company to pausing or restricting deployment if model capabilities cross predefined risk thresholds. The RSP defines a ladder of '''AI Safety Levels''' (ASLs), analogous to biosafety levels in pathogen research:<ref name="rsp3">Anthropic, ''Responsible Scaling Policy Version 3.0'', 2026.</ref>

* '''ASL-1''': Models with no meaningful catastrophic risk.
* '''ASL-2''': Models requiring standard deployment and security safeguards; covers Claude models through Claude 3.7.
* '''ASL-3''': Activated for Claude Opus 4 (May 2025); requires enhanced internal security and deployment restrictions targeting potential weapons-of-mass-destruction misuse.
* '''ASL-4 and beyond''': Thresholds intentionally left partially undefined pending better capability characterization.

The RSP has been updated multiple times (v2 in 2024, v3 in early 2026), with each iteration refining capability evaluation methods and safeguard specifications. Anthropic presents the RSP as a framework it hopes will be adopted industry-wide or will inform government regulation.
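The ladder above can be read as a deployment gate. The sketch below makes that reading concrete under stated assumptions: the level names follow the policy, but the safeguard strings and the evaluation harness implied by <code>measured_level</code> are placeholders, not Anthropic's actual criteria.

<syntaxhighlight lang="python">
"""Minimal sketch of RSP-style capability gating. Safeguard names are
hypothetical placeholders; only the ASL level names come from the policy."""
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyLevel:
    name: str
    required_safeguards: frozenset[str]


ASL_LADDER = (
    SafetyLevel("ASL-1", frozenset()),
    SafetyLevel("ASL-2", frozenset({
        "standard security", "standard deployment safeguards"})),
    SafetyLevel("ASL-3", frozenset({
        "enhanced internal security", "WMD-misuse deployment restrictions"})),
)


def may_deploy(measured_level: int, safeguards_in_place: set[str]) -> bool:
    """Deploy only if every safeguard required at the measured ASL is in
    place. ASL-4+ thresholds are not yet fully defined, so anything past
    the ladder implies pausing rather than deploying."""
    if measured_level > len(ASL_LADDER):
        return False
    required = ASL_LADDER[measured_level - 1].required_safeguards
    return required <= safeguards_in_place


# Example: a model assessed at ASL-3 with only ASL-2 safeguards is gated.
assert not may_deploy(3, {"standard security", "standard deployment safeguards"})
</syntaxhighlight>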