Chinese artificial intelligence startup DeepSeek has released an open version of DeepSeek-R1, a model the company claims performs as well as OpenAI's o1 on certain AI benchmarks.
DeepSeek-R1, which is offered under the permissive MIT license, reportedly outperformed o1 on the AIME, SWE-bench and MATH-500 benchmarks. SWE-bench focuses on programming tasks, MATH-500 on word problems, and AIME on competition-style math questions that gauge a model's reasoning performance.
DeepSeek-R1 Is Very Good at Solving Problems
R1 is a reasoning-heavy model: it works through problems step by step and checks its own reasoning. As a result, it takes longer to reach a solution than other models, but in areas such as mathematics it tends to give more accurate answers.
Parameter count matters a great deal in artificial intelligence models. DeepSeek-R1, for example, contains 671 billion parameters, which generally means it performs better than models with fewer parameters, though this also depends on other factors.
DeepSeek has also released smaller versions of R1, starting at 1.5 billion parameters. As you would expect, powerful hardware is needed to run the largest model. R1 also beats o1 on cost: accessing it via the API is much cheaper than OpenAI's o1.
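To see why the largest model demands powerful hardware, a rough back-of-envelope calculation helps: just storing the weights of a 671-billion-parameter model at 16-bit precision takes well over a terabyte of memory, while the 1.5-billion-parameter variant fits on a single consumer GPU. This is a minimal sketch of that arithmetic; the byte-per-parameter figures are common conventions, not numbers from the article.

```python
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights, in GB.

    Ignores activations, KV cache, and runtime overhead, so real
    requirements are higher.
    """
    return num_params * bytes_per_param / 1e9

# Full R1: 671 billion parameters (per the article), 16-bit weights
print(weight_memory_gb(671e9, 2))   # roughly 1342 GB of weights alone

# Smallest released variant: 1.5 billion parameters, 16-bit weights
print(weight_memory_gb(1.5e9, 2))   # roughly 3 GB
```

Even with aggressive 8-bit or 4-bit quantization, the full model stays in multi-hundred-gigabyte territory, which is why the distilled small variants exist.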
Source link: https://www.tamindir.com/haber/deepseek-r1-vs-openai-o1_92563/