Anthropic, whose new Claude 3.5 Sonnet model recently came to the fore by surpassing GPT-4o in many evaluations, has taken action to develop metrics that reveal the capabilities of artificial intelligence models.
In an announcement shared yesterday, Anthropic said it has launched a program to fund the development of new types of benchmarks that can evaluate the performance and impact of AI models. The company’s new program will pay third-party organizations that can effectively measure advanced capabilities in AI models.
Why are today’s model benchmarks criticized?
If you remember, we have previously reported that AI evaluations have various shortcomings when it comes to providing unbiased assessment. For example, because the datasets used in model training sometimes include the answers to benchmark tests, models can pass those tests easily. Moreover, the age of some benchmarks, especially those published before the dawn of modern generative AI, raises questions about whether they measure what they purport to measure.
What will Anthropic’s new program support?
Apparently, Anthropic aims to eliminate these problems with its new program. In its statement, the company said its investment in these evaluations will provide valuable tools that benefit the entire ecosystem, thereby aiming to elevate the entire field of AI safety. At the same time, the company emphasized that developing high-quality, safety-related evaluations is difficult and that demand outstrips supply.
The company’s ultimate goal is to create challenging benchmarks focused on AI safety and societal impacts, driving innovation through new tools, infrastructure, and methods. Anthropic specifically calls for tests that evaluate a model’s ability to perform tasks such as carrying out cyberattacks, developing weapons of mass destruction, and manipulating or deceiving people. Frankly, the company is seeking to develop a kind of early warning system for AI risks related to national security and defense.
Additionally, the new program will support research into benchmarks and end-to-end tasks exploring various topics, including AI’s potential to assist scientific studies, converse in multiple languages, and reduce ingrained biases. AI self-censorship will also be investigated.
What awaits those who apply to the program?
To put the aforementioned research and benchmarks into practice, Anthropic envisions new platforms that will allow experts to develop their own evaluations and enable large-scale testing of models involving thousands of users. According to the information shared, the company has hired a full-time coordinator for the program. In addition, Anthropic may purchase or expand projects it believes have the potential to scale.
Anthropic stated that it offers a range of funding options tailored to the needs and stage of each project. Teams accepted into the program will also be able to communicate directly with experts from Anthropic’s frontier red team, fine-tuning, and safety teams.
Source link: https://webrazzi.com/2024/07/02/anthropic-yapay-zeka-degerlendirmeleri-icin-yeni-olcut-turlerinin-gelistirilmesini-finanse-edecek/