How to always tell if an account is run by a bot.

You can’t. The question “Is this account run by a bot?” is undecidable in general. This video (https://www.youtube.com/watch?v=HeQX2HjkcNo ) is a really great introduction to what the term “undecidable” means, though it covers a different undecidable problem. Basically it means that, while it is often possible to tell that a given account is a bot, and it may even sometimes be possible to automatically prove that an account must be a bot, there will always be accounts out there that may or may not be bots, and you cannot possibly know for sure.

One trick you can use to tell if an account is a bot is to look at the rate at which it makes posts. If it can post faster than any human could type, it’s a bot. However, all the programmers who make these bots have to do is add a small random delay between tweets to hide the bot’s true nature.
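To make the idea concrete, here is a minimal sketch of both sides of that arms race. The function names and the 2-second "faster than a human" threshold are my own assumptions, not anything from a real detection system:

```python
import random
import time

# Hypothetical detector: given a sorted list of post timestamps (in seconds),
# flag the account if any two consecutive posts are closer together than a
# human could plausibly manage. The threshold is an assumed value.
def looks_superhuman(timestamps, min_human_gap=2.0):
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    return any(gap < min_human_gap for gap in gaps)

# ...and the trivial countermeasure: the bot sleeps a random extra amount
# between posts, so its timing no longer looks mechanical. `post` is a
# hypothetical stand-in for whatever API call actually sends the tweet.
def post_with_jitter(post, messages, base_delay=60.0):
    for msg in messages:
        post(msg)
        time.sleep(base_delay + random.uniform(0.0, 120.0))  # randomized gap
```

The detector catches only the laziest bots; the three extra lines of jitter defeat it completely, which is the point of the paragraph above.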

You could try checking whether it always posts at the exact same times, right down to the second, but that check can be thwarted too. You can check whether the account posts at all hours and never takes time to sleep, but once again, the bot can be programmed to only tweet during the daytime for whatever time zone it’s pretending to be in.
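Both of those timing heuristics are a few lines of code each, which is part of why they’re so easy for a bot author to anticipate. A sketch, with hypothetical function names:

```python
from datetime import datetime

# Hypothetical check: does every post land on the exact same second of the
# minute? A naive scheduler (cron-style) often produces this fingerprint.
def same_second_every_time(post_times):
    return len({t.second for t in post_times}) == 1

# Hypothetical check: does the account post in every hour of the day, with
# no "sleep" window? A bot dodges this by only posting during the daytime
# hours of whatever time zone it pretends to live in.
def never_sleeps(post_times):
    return len({t.hour for t in post_times}) == 24
```

Neither check proves anything on its own: a human with a scheduled-tweet tool trips the first, and a bot with an office-hours schedule passes the second.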

As for the text of the messages themselves, there’s no magic that can tell whether a given piece of text was generated by AI or written by a human. Humans can write text that reads like it came from something like ChatGPT (this is usually done by people making fun of it), and you may be surprised by how well an AI bot can pretend to be a person.

Once upon a time it was easier. I found in the 2010s that many chat bots couldn’t answer the question “What was the last thing I said to you?” in the middle of a conversation, but now bots are more than able to give a good, clear answer to that question.

Modern AIs mimic how the human brain works more closely than previous AIs were able to. Ever since the transformer architecture was introduced in 2017, it’s been possible to build bots that can keep track of much longer conversations rather than only remembering the last thing that was said to them.

I’ve seen people on Twitter fall for bots that were far simpler than ChatGPT. The “Dr. James E. Olsson” account comes to mind. That account merely posts randomly selected tweets from a fixed pool, over and over again. It takes a randomly selected message from a file, tweets it, waits a while, and then tweets again, over and over like clockwork. It also deletes its old tweets at regular intervals to avoid suspicion. I’ve seen what seem like actual people fall for this account.
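The entire loop just described fits in a dozen lines. This is a sketch of that kind of bot, not the actual account’s code: the `client` object and its `tweet`/`delete` methods are hypothetical stand-ins for a real Twitter API wrapper, and `rounds` bounds the loop so the example terminates:

```python
import random
import time

# Sketch of a canned-message bot: tweet a random line from a fixed pool,
# wait, repeat; prune old tweets so the repetition is harder to spot.
def run_canned_bot(client, messages, rounds, delay=0.0, keep=50):
    posted = []
    for _ in range(rounds):
        tweet_id = client.tweet(random.choice(messages))
        posted.append(tweet_id)
        if len(posted) > keep:
            client.delete(posted.pop(0))  # delete the oldest surviving tweet
        time.sleep(delay)
    return posted  # IDs of tweets still live
```

No language model, no cleverness: just `random.choice` and a timer, and it still fools people.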

To be clear, there are ways to tell if an account is spewing propaganda, but both humans and AIs can do that. It’s also generally easy to build an AI that detects spammers using machine learning, but spam, like propaganda, can come from both computers and humans.
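As an illustration of the machine-learning approach, here is a toy naive-Bayes spam scorer in pure Python. Real systems use far more features and data; this one only compares word frequencies between labeled spam and non-spam examples, and it says nothing about whether the sender is a bot or a human:

```python
import math
from collections import Counter

# Count word occurrences in labeled spam and ham (non-spam) training texts.
def train(spam_texts, ham_texts):
    spam = Counter(w for t in spam_texts for w in t.lower().split())
    ham = Counter(w for t in ham_texts for w in t.lower().split())
    return spam, ham

# Sum per-word log-likelihood ratios (with add-one smoothing).
# A positive score means the text looks more like the spam examples.
def spam_score(model, text):
    spam, ham = model
    spam_total, ham_total = sum(spam.values()), sum(ham.values())
    score = 0.0
    for w in text.lower().split():
        p_spam = (spam[w] + 1) / (spam_total + 1)
        p_ham = (ham[w] + 1) / (ham_total + 1)
        score += math.log(p_spam / p_ham)
    return score
```

Notice that the classifier judges the *content*, not the *author*, which is exactly the limitation the paragraph above points out.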

Of course, modern bots aren’t very good at coming up with new ideas, but neither are most people. At the end of the day, there’s no foolproof way to tell whether an account is real or not.
