This is an automated archive made by the Lemmit Bot.
The original was posted on /r/machinelearning by /u/Ambitious_Anybody855 on 2025-02-22 00:30:49+00:00.
The best way to decensor a DeepSeek model? Don’t try to decensor it.
OpenThinker was produced by fine-tuning on OpenThoughts-114k, a dataset focused on reasoning tasks like math, coding, and graduate-level Q&A, with no political content. Despite starting from censored base models (Qwen), the fine-tuned OpenThinker-7B and OpenThinker-32B came out decensored without any explicit intervention. Unlike Perplexity's approach, no fine-tuning was aimed specifically at removing censorship, yet the resulting models are uncensored.
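
For anyone curious what that setup looks like in code, here's a minimal sketch (not the OpenThinker team's actual recipe) of supervised fine-tuning a Qwen base model on OpenThoughts-114k with Hugging Face TRL. The dataset path, column names (`conversations`, `from`, `value`), and hyperparameters are assumptions for illustration.

```python
# Minimal sketch of an OpenThinker-style SFT run: a censored Qwen base model
# trained only on reasoning data, with nothing targeting censorship removal.
# Dataset schema and hyperparameters below are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_model = "Qwen/Qwen2.5-7B-Instruct"          # censored base model
dataset_id = "open-thoughts/OpenThoughts-114k"   # math / coding / grad-level Q&A only

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

dataset = load_dataset(dataset_id, split="train")

def format_example(example):
    # Assumes a ShareGPT-style "conversations" field; remap roles so the
    # tokenizer's chat template can render each sample as one training string.
    messages = [
        {"role": "user" if turn["from"] == "human" else "assistant",
         "content": turn["value"]}
        for turn in example["conversations"]
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="openthinker-style-sft",
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=1e-5,
        bf16=True,
    ),
)
trainer.train()
```

A real run at 7B or 32B scale would need a multi-GPU setup (DeepSpeed/FSDP) and the actual OpenThinker hyperparameters, but the point is the data: only reasoning problems, nothing aimed at censorship.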
This challenges assumptions about model safety and opens exciting new research directions. The AI game is so on.