>>13180
My advice? If you're familiar with LLMs and the associated technology, I would build something that compares LLM (and other "AI"-related stuff) outputs against genuine human posts.
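If you want a concrete starting point for that comparison, here's a toy sketch (stdlib only, my own naive approach - not a proven method): feed it a small pile of known LLM output and a pile of known human posts, and it spits out the words overrepresented in the LLM pile via smoothed log-odds. Those become your detection features. Real corpora would be thousands of posts, not a handful.

```python
import math
from collections import Counter

def tokenize(text):
    # crude word tokenizer - strip common punctuation, lowercase
    return [w.strip('.,!?"\'').lower() for w in text.split() if w.strip('.,!?"\'')]

def llm_tell_words(llm_posts, human_posts, top_n=5):
    """Words overrepresented in the LLM corpus vs the human one.
    Uses add-one smoothed log-odds. Toy sketch, not a real detector."""
    llm = Counter(w for p in llm_posts for w in tokenize(p))
    hum = Counter(w for p in human_posts for w in tokenize(p))
    n_llm, n_hum = sum(llm.values()), sum(hum.values())
    vocab = set(llm) | set(hum)
    score = {w: math.log((llm[w] + 1) / (n_llm + len(vocab)))
               - math.log((hum[w] + 1) / (n_hum + len(vocab)))
             for w in vocab}
    return sorted(score, key=score.get, reverse=True)[:top_n]
```

Run it on a few samples and the usual slop vocabulary ("delve", "furthermore", "in conclusion") floats straight to the top.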
Also, I suggest looking up some shitty "AI detection" services, specifically the ones that offer "making AI undetectable" as a service, and analyze their scrambled prompts too - if their services are free, that is. Their actual detection tools are hot-fucking-garbage, but clearly their market isn't revealing AI, it's making it more obfuscated. So if you can make something that sniffs out obfuscated output, you can really put a hole in these "undetectable" bot-posters.
On the surface, I've noticed that chatGPT-prompted posts are often built like a school paper: they start off with what they want, they explain their piece, and then reiterate it at the end. Emojis can be there, but that's more common for Google's AI-bot (whatever it's called nowadays) than chatGPT.
Human posts are unstructured and spontaneous: just questions and answers. Context is there, but it's not laid out like a research paper or a journo's article.
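Those surface tells can be scored mechanically too. Here's a toy sketch (my own naive heuristics, nothing rigorous): count emojis, and check whether the opening and closing sentences restate each other - the school-paper tell. Higher score = more slop-shaped.

```python
import re
from difflib import SequenceMatcher

# rough emoji ranges - covers the common pictographs and symbols
EMOJI_RE = re.compile(r'[\U0001F300-\U0001FAFF\u2600-\u27BF]')

def split_sentences(text):
    # crude sentence split; good enough for a toy heuristic
    return [s.strip() for s in re.split(r'[.!?]+\s+', text) if s.strip()]

def slop_score(post):
    """Higher = more school-paper-shaped. Toy heuristic, not a real detector."""
    sents = split_sentences(post)
    score = 0.0
    # tell #1: emoji count
    score += 0.5 * len(EMOJI_RE.findall(post))
    # tell #2: intro and conclusion restating each other
    if len(sents) >= 3:
        score += SequenceMatcher(None, sents[0].lower(), sents[-1].lower()).ratio()
    return score
```

A "Cats are the best pets. ... In conclusion, cats are the best pets." post scores well above a one-line shitpost. Obviously you'd want dozens of features like this, not two, before calling anything a bot.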
Also: 4chins has been bot-spammed for at least a decade. There's a repost somewhere that showed how the glownogs fucked up and set their bot to post on literally -all- boards when it was meant for either /mu/ or /lit/ that day.