This is an automated archive made by the Lemmit Bot.
The original was posted on /r/singularity by /u/Gothsim10 on 2024-10-19 11:03:23+00:00.
Original Title: New paper: Language Models Can Learn About Themselves by Introspection. Each model predicts itself better than other models trained on its outputs, because it has a secret sense of its inner self. Llama is the most difficult for other models to truly see.