Anthropic forms group to study AI’s real-world impact.

John E. Dunn


With the establishment of the new Anthropic Institute, the company is intensifying its commitment to ethical principles.

Anthropic and Claude. Credit: T. Schneider / Shutterstock

Fresh off a dispute with the US Department of Defense (DoD) over AI safeguards, Anthropic this week unveiled a new venture: a think tank called the Anthropic Institute. Its stated mission is “to confront the most significant challenges that powerful AI will pose to our societies.”

Anthropic co-founder Jack Clark will lead the Institute in his new capacity as head of public benefit. The organization will be staffed by “an interdisciplinary team of machine learning engineers, economists, and social scientists,” as announced by the company.

This new entity will integrate and build upon three existing internal groups: the Frontier Red Team, responsible for stress testing and evaluating AI models; Societal Impacts, which examines the real-world applications of AI; and Economic Research, dedicated to understanding AI’s expanding influence on economies and job markets.

Anthropic stated, “The Institute benefits from a distinct perspective, possessing insights available exclusively to the developers of advanced AI systems. It intends to leverage this advantage fully, offering transparent reports on discoveries made regarding the nature of the technology we are developing.”

In a less typical move for an AI firm, the company also declared that “[the Institute] plans to interact with employees and sectors threatened by disruption, as well as individuals and communities grappling with an uncertain future.”

‘Supply chain risk’

Although likely coincidental, the Institute’s launch this week provides an opportune moment to highlight Anthropic’s distinct approach to AI ethics compared to its competitors.

The Anthropic Institute is the latest in a string of announcements, including the Claude Constitution, aimed at giving outsiders insight into the core design principles that guide the model’s values and conduct.

In 2024, CEO Dario Amodei articulated his vision for “how AI could transform the world for the better” in an essay, an idealism that recent events have put to the test.

After weeks of pressure and warnings from the DoD, Anthropic’s refusal to compromise on its AI safeguards led Defense Secretary Pete Hegseth to exclude it from Pentagon initiatives on February 27, labeling the company a “supply chain risk.” Undeterred, Anthropic filed a lawsuit on Monday seeking a temporary restraining order against the Pentagon.

The crucial question is how enterprise clients should interpret this burgeoning and seemingly irreconcilable ideological conflict between two distinct cultures.

Andrew Gamino-Cheong, CTO of the AI governance platform Trustible, suggests that Microsoft’s public objection to the comprehensive ban on Anthropic might carry considerable weight.

He stated, “Microsoft’s backing of Anthropic in this situation will be important.” He added that labeling a company of Anthropic’s stature a “supply chain risk” was also discouraging for the broader industry.

It’s worth noting that Microsoft invested $5 billion in Anthropic in November and announced plans to integrate Claude Sonnet into its Copilot chatbot, underscoring that Anthropic’s fate has implications extending well beyond the company itself.

Gamino-Cheong observed, “Numerous startups and AI firms will likely be reluctant to conduct business with the federal government because of this.” He found it astonishing that an American model developer was branded a ‘supply chain risk’ when “Chinese models haven’t received such a label yet.”

Nevertheless, he opined that Anthropic’s dedication to AI ethics could attract private sector businesses increasingly prioritizing AI governance.

“Claude’s models have begun to outperform most competitors in numerous ‘business’ related tasks. This superior performance stems from their investment in AI safety research, which coincidentally is the precise issue causing contention with the DoD,” he elaborated.

This article originally appeared on CIO.com.

Artificial Intelligence, Technology Industry, Industry, Vendors and Providers