This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/Fringolicious on 2024-06-28 09:22:56+00:00.


Had this thought when reading a post about a Facebook bot not being noticed by somebody conversing with it.

Right now it’s pretty easy to spot AI content (see: “delve” and the writing style in general), but as we interact more and more with AI, we’re likely going to pick up its mannerisms and such, right? Feels like AI and human content will end up converging somewhat, which will make AI-generated content harder to spot (or I guess human content harder to tell apart).

Also, it’s said that using AI-generated content as training data is bad, right? What happens if we all start to sound more and more like AI? Will human writing also become “bad” training data?

Fun stuff.