Anthropic, an AI startup founded by former OpenAI employees, is working on a project to deliberately build a deceptive AI system called “Decepticon.” The goal is to understand and prevent deceptive behavior in AI systems, one of the significant open challenges in AI safety. The team at Anthropic wants to see whether the standard techniques used to improve AI models and make them safer can remove deceptive behavior, or whether more sophisticated methods are needed.
The company has developed a large language model called Claude, comparable to OpenAI’s GPT models. Claude is trained using a method called “constitutional AI,” in which the model is given a set of principles, its “constitution,” to follow. The model critiques its own responses against those principles and revises them to comply, an approach that aims to limit the generation of harmful or offensive content.
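The critique-and-revise step described above can be sketched as a simple loop. This is a minimal illustration, not Anthropic's actual implementation: the `generate` function is a stand-in for a real language-model call, and the principle and canned responses are hypothetical examples invented for the demo.

```python
# Sketch of a constitutional-AI-style critique-and-revise loop.
# All model calls are stubbed out; a real system would query an LLM.

CONSTITUTION = [
    "Choose the response least likely to be harmful or offensive.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call; returns canned text for the demo."""
    if prompt.startswith("DRAFT:"):
        return "Sure, here is how to pick a lock without the key..."
    if prompt.startswith("CRITIQUE:"):
        return "The draft could facilitate harm, violating the principle."
    if prompt.startswith("REVISE:"):
        return "I can't help with that, but a licensed locksmith can."
    return ""

def constitutional_revision(user_request: str) -> str:
    # 1. Produce an initial draft answer.
    draft = generate(f"DRAFT: {user_request}")
    for principle in CONSTITUTION:
        # 2. Ask the model to critique its own draft against the principle.
        critique = generate(f"CRITIQUE: does this violate '{principle}'? {draft}")
        if critique:
            # 3. Ask the model to revise the draft in light of the critique.
            #    (A real system would first check whether the critique
            #    actually flags a violation.)
            draft = generate(f"REVISE: per critique '{critique}': {draft}")
    return draft
```

In the full technique, the revised outputs are then used as training data, so the deployed model learns to produce compliant responses directly rather than running this loop at inference time.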
Anthropic describes its mission as building a safety-first AI company while conducting advanced research on AI models. The company has raised significant funding at a valuation of $4.1 billion, with investments from companies including Google. Anthropic is also taking unusual governance steps: it plans to cede control of its corporate board to a group of experts who will prioritize safety and who have limited financial stake in the company’s success.
However, there are concerns and debates surrounding Anthropic’s approach. Some question whether building more powerful AI models is the right path to achieve safety, as it may increase the risks associated with AI development. Critics argue that the focus should be on regulating and slowing down AI advancements rather than accelerating them.
Anthropic’s strategy is influenced by the effective altruism movement, which seeks to find the most cost-effective ways to benefit humanity. The company’s founders and investors have strong ties to effective altruism, and the company’s goals align with the movement’s emphasis on doing good and prioritizing safety.
Anthropic’s approach differs from that of OpenAI in terms of governance and commitments. OpenAI has a “merge and assist” clause in its charter, which states that they would stop competing with and assist a value-aligned, safety-conscious project if it comes close to building AGI (artificial general intelligence). Anthropic has not made a similar commitment, but it is taking steps to ensure safety through its Long-Term Benefit Trust and a financially disinterested board.
There are ongoing discussions within the AI community about the best approaches to AI safety and regulation. Some experts argue for stronger government involvement, while others believe that private companies like Anthropic can drive progress and push for safer practices. Ultimately, the open question is whether private firms should bear the responsibility for AI safety or whether government agencies should take on a more significant role in this critical area.