Monday, December 23, 2024
Technology

Anthropic looks to fund a new, more comprehensive generation of AI benchmarks

Anthropic is launching a program to fund the development of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models like its own Claude.

Unveiled on Monday, Anthropic’s program will dole out payments to third-party organizations that can, as the company puts it in a blog post, “effectively measure advanced capabilities in AI models.” Those interested can submit applications to be evaluated on a rolling basis.

“Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” Anthropic wrote on its official blog. “Developing high-quality, safety-relevant evaluations remains challenging, and the demand is outpacing the supply.”

As we’ve highlighted before, AI has a benchmarking problem. The most commonly cited benchmarks for AI today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions as to whether some benchmarks, particularly those released before the dawn of modern generative AI, even measure what they purport to measure, given their age.

Anthropic’s proposed solution, a very high-level one that’s harder than it sounds, is to create challenging benchmarks focused on AI security and societal implications, supported by new tools, infrastructure and methods.
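For readers unfamiliar with how such evaluations work mechanically, the sketch below shows one possible shape for a bare-bones benchmark harness: a fixed task set, a model treated as a text-in/text-out function, and a simple scoring rule. It is purely illustrative and assumes nothing about Anthropic’s actual tooling; the task list, the `run_benchmark` function and the stand-in model are all hypothetical.

```python
# Hypothetical sketch of a minimal benchmark harness (not Anthropic's tooling).
# Any text-in/text-out model can be scored against a fixed set of tasks.
from typing import Callable, Dict, List

# Each task pairs a prompt with a reference answer; both are made-up examples.
TASKS: List[Dict[str, str]] = [
    {"prompt": "What is 2 + 2?", "reference": "4"},
    {"prompt": "Name the capital of France.", "reference": "Paris"},
]

def run_benchmark(model: Callable[[str], str], tasks: List[Dict[str, str]]) -> float:
    """Return the fraction of tasks whose output contains the reference answer."""
    correct = 0
    for task in tasks:
        output = model(task["prompt"])
        if task["reference"].lower() in output.lower():
            correct += 1
    return correct / len(tasks)

if __name__ == "__main__":
    # Stand-in "model" that simply echoes the prompt; a real harness would
    # swap in an API call to the system under test.
    echo_model = lambda prompt: prompt
    print(f"Accuracy: {run_benchmark(echo_model, TASKS):.2f}")
```

The hard part, and the gap Anthropic’s program is meant to fill, is not the harness itself but designing task sets and scoring rules that actually capture safety-relevant behavior rather than simple string matching.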

The company calls specifically for tests that assess a model’s ability to accomplish tasks like carrying out cyberattacks, “enhanc[ing]” weapons of mass destruction (e.g. nuclear weapons) and manipulating or deceiving people (e.g. through deepfakes or misinformation). For AI risks pertaining to national security and defense, Anthropic says it’s committed to developing an “early warning system” of sorts for identifying and assessing risks, although it doesn’t reveal in the blog post what such a system might entail.

Anthropic also says it intends its new program to support research into benchmarks and “end-to-end” tasks that probe AI’s potential for aiding in scientific study, conversing in multiple languages and mitigating ingrained biases, as well as self-censoring toxicity.

To achieve all this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations and large-scale trials of models involving “thousands” of users. The company says it’s hired a full-time coordinator for the program and that it might purchase or expand projects it believes have the potential to scale.

“We offer a range of funding options tailored to the needs and stage of each project,” Anthropic writes in the post, though an Anthropic spokesperson declined to provide any further details about those options. “Teams will have the opportunity to interact directly with Anthropic’s domain experts from the frontier red team, fine-tuning, trust and safety and other relevant teams.”

Anthropic’s effort to support new AI benchmarks is a laudable one — assuming, of course, there’s sufficient cash and manpower behind it. But given the company’s commercial ambitions in the AI race, it might be a tough one to completely trust.

In the blog post, Anthropic is rather transparent about the fact that it wants certain evaluations it funds to align with the AI safety classifications it developed (with some input from third parties like the nonprofit AI research org METR). That’s well within the company’s prerogative. But it may also force applicants to the program into accepting definitions of “safe” or “risky” AI that they might not agree with.

A portion of the AI community is also likely to take issue with Anthropic’s references to “catastrophic” and “deceptive” AI risks, like nuclear weapons risks. Many experts say there’s little evidence to suggest AI as we know it will gain world-ending, human-outsmarting capabilities anytime soon, if ever. Claims of imminent “superintelligence” serve only to draw attention away from the pressing AI regulatory issues of the day, like AI’s hallucinatory tendencies, these experts add.

In its post, Anthropic writes that it hopes its program will serve as “a catalyst for progress towards a future where comprehensive AI evaluation is an industry standard.” That’s a mission the many open, corporate-unaffiliated efforts to create better AI benchmarks can identify with. But it remains to be seen whether those efforts are willing to join forces with an AI vendor whose loyalty ultimately lies with shareholders.

