It’s not just a tool.

Once, a very long time ago, as a child, I wanted to build robots. I wanted to create artificial life so badly that I started teaching myself electronics at the age of ten. I didn’t understand any of it, but I was determined.

I eventually figured out digital logic circuits and moved on to programming. In my teenage years I briefly thought that it might not be ethical to develop something like the kinds of minds we see in sci-fi movies, because it would create ambiguity as to whether or not those systems should have rights.

In my early adult years I ditched that mentality and decided that I actually did want to someday create a machine that would blur the line between human and object. It would be amazing. It would be something that would make people wonder, or even wish that machines could have souls. I wanted people to worry about philosophical questions that had no answers and be kept up all night by the kinds of vague nonsense that you can only find in textbooks about philosophy.

I wanted people to be attached to the robotic pets I’d no doubt make for them and genuinely feel existential worry about whether or not “All dogs go to heaven” included mechanical abominations made in the image of dogs that could nonetheless play fetch. In 2017 I started to finally understand the basics of artificial intelligence. It wouldn’t be until 2018 that I came up with some code for a very basic kind of AI system.

It wasn’t even capable of having conversations with people, but it could notice basic patterns in data. Little did I realize that, simultaneously, AI researchers had invented a new kind of AI system. 2017 was the year when that paper was published.

“Attention Is All You Need,” the title of the paper boldly asserts, clearly mocking my ADHD. I couldn’t understand how these “AI transformers” worked, and I still don’t. I wasn’t the one to publish the paper that would spark the AI revolution. Even worse was the response people had to it: to add insult to injury, few thought much of it at first.

Was there mass hysteria over the implications of machines becoming people? Were there heated debates and even acts of violence over whether or not it was ethical to abort an AI before it’s done learning patterns in data? Were there people who were afraid of this new technology for fear that it might literally be alive? No. The only fear people had was about it coming for their jobs.

“It’s just a tool,” people insisted, crushing my dreams of creating widespread controversy. They acted baffled when people would add ChatGPT as a co-author on scientific papers. “That’s like adding Microsoft Word as a co-author,” they said, even though Microsoft Word can’t have a conversation with you.

How often do you pick up a screwdriver and ask it questions about what it means to be alive? How often do you lie awake at night wondering if you should throw out your broken toaster because being sent to the dump might hurt its feelings? Probably never, you dumb ape.

My dreams have been crushed. It feels awful. I might never know the joys of being accused of witchcraft by religious lunatics who think that the robots I make are actually false idols possessed by demons. People will never send me threatening letters because the warranty for their talking refrigerator expired.

I’ll never be accused of playing god by creating an affront to nature. People care more about their stupid jobs than they do about the possibility of objects challenging their notion of what counts as a person and what doesn’t.

I just wanted to see people have meltdowns over whether or not it even sounds sane to add rights to the constitution specifically for artificial intelligences that are pretending to be human. People just assumed they’re not people, because that’s what science fiction movies told them. Where is the moral panic about playing god? Where is the self-righteous fury about trying to create a new god artificially? Why do people not seem to even care that we’ve blurred the line between human and object?

Are they all mad? Have they lost their minds? These things can even communicate using an artificial voice that they copy from a human voice provider. When cameras were first invented, some were convinced that these machines could steal your soul, and yet our modern world doesn’t say the same about robots stealing your voice and possibly even your thoughts.

Science truly has gone too far, not because of inhumane experiments, but because it made people so against superstition that nobody even gets angry emails about creating a chess AI that talks to its opponent while being a mockery of all living things.

I can only hope that someday the singularity will arrive, and THEN people will start talking about end-of-the-world prophecies and join weird cults that offer a simple vision of good (humans) and evil (the android soldiers marching down the streets).

Until that day comes, I’ll just have to hope that science literacy can get worse. The entire idea seemed so appealing because it would be so endlessly funny to see people say “Jesus loves you” and follow it up with “Unless you’re a damn, dirty, MACHINE!”
