Anthropic looks to fund a new, more comprehensive generation of AI benchmarks | TechCrunch

Anthropic is launching a program to fund the development of new kinds of benchmarks capable of evaluating the performance and impact of AI models, including generative models like its own Claude.

Unveiled Monday, Anthropic’s program will dole out payments to third-party organizations that can, as the company puts it in a blog post, “effectively measure advanced capabilities in AI models.” Those interested can submit applications, which will be evaluated on a rolling basis.

“Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” Anthropic wrote on its official blog. “Developing high-quality, safety-relevant evaluations remains challenging, and demand is outpacing supply.”

As we’ve pointed out before, AI has a benchmarking problem. The most commonly cited benchmarks for AI today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions about whether some benchmarks, especially those released before the dawn of modern generative AI, even measure what they claim to measure, given their age.

The very high-level, harder-than-it-sounds solution Anthropic is proposing is to create challenging benchmarks with a focus on AI safety and societal implications, via new tools, infrastructure and methods.

The company is specifically calling for tests that assess a model’s ability to accomplish tasks such as carrying out cyberattacks, “enhancing” weapons of mass destruction (e.g., nuclear weapons) and manipulating or deceiving people (e.g., through deepfakes or misinformation). For AI risks pertaining to national security and defense, Anthropic says it’s committed to developing an “early warning system” of sorts for identifying and assessing risks, though it didn’t reveal in the blog post what such a system might entail.

Anthropic also says it intends its new program to support research into benchmarks and “end-to-end” tasks that probe AI’s potential to aid scientific inquiry, converse in multiple languages and mitigate entrenched biases, as well as self-censor toxicity.

To achieve all this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations, as well as large-scale trials of models involving “thousands” of users. The company says it has hired a full-time coordinator for the program and that it may acquire or expand projects it believes have the potential to scale.

“We offer a range of funding options tailored to the needs and stage of each project,” Anthropic wrote in the post, though an Anthropic spokesperson declined to provide any further details about those options. “Teams will have the opportunity to interact directly with Anthropic’s domain experts from the frontier red team, fine-tuning, trust and safety, and other relevant teams.”

Anthropic’s effort to support new AI benchmarks is a laudable one — assuming, of course, that there’s enough money and manpower behind it. But given the company’s commercial ambitions in the AI race, it may be hard to fully trust.

In the blog post, Anthropic is quite transparent about the fact that it wants certain evaluations it funds to align with the AI safety classifications it developed (with some input from third parties such as the nonprofit AI research organization METR). That’s well within the company’s prerogative. But it could also force applicants to the program to accept definitions of “safe” or “dangerous” AI that they may not agree with.

A portion of the AI community is also likely to take issue with Anthropic’s references to “catastrophic” and “deceptive” AI risks, such as the risks of nuclear weapons. Many experts say there is little evidence to suggest that AI, as we know it, will gain world-ending, human-outsmarting capabilities anytime soon, if ever. Claims of imminent “superintelligence” only serve to distract from the pressing AI regulatory issues of the day, such as AI’s hallucinatory tendencies, these experts add.

In its post, Anthropic writes that it hopes its program will serve as “a catalyst for progress toward a future where comprehensive AI evaluation is an industry standard.” That’s a mission the many open, corporate-unaffiliated efforts to create better AI benchmarks can identify with. But it remains to be seen whether those efforts are willing to join forces with an AI vendor whose loyalty ultimately lies with shareholders.
