From dce60f49f792b3543ddae8776278602788112578 Mon Sep 17 00:00:00 2001
From: shawnwoolcock
Date: Fri, 28 Feb 2025 08:11:40 +0800
Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance
 Comparable To OpenAI's O1 Model'

---
 ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++
 1 file changed, 2 insertions(+)
 create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..e4e3040
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+
+DeepSeek-R1 is based on DeepSeek-V3, a mixture of experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
\ No newline at end of file
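
For context on the GRPO step the added article mentions: below is a minimal sketch of the group-relative advantage the method is named for, assuming the standard description from the DeepSeek reports (sample a group of completions per prompt, score each with a reward, and normalize rewards within the group instead of training a separate value model). The function name and reward values are illustrative, not DeepSeek's code.

```python
# Minimal sketch of GRPO's group-relative advantage, assuming the
# description in the DeepSeek reports; not DeepSeek's implementation.
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize a group of per-completion rewards to zero mean, unit std."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 1.0
    sigma = sigma or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# Example: rewards for a group of G = 4 sampled answers to one prompt
# (e.g., 1.0 if the final answer verifies, else 0.0 -- hypothetical values).
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_relative_advantages(rewards))
```

Completions scoring above their group's mean reward receive positive advantage and are reinforced, those below are penalized, which is what lets GRPO dispense with a learned critic.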