
Sentient AI

A week or two ago, we sat down for dinner and I said ominously, “I know what we’re all thinking about.” Everyone looked at me for a moment, and I said, “The sentient AI Google thing.” Josh nodded. The Wolvog shrugged. The ChickieNob asked, “What sentient AI Google thing?”

Those of us who are familiar with the Battlestar Galactica oeuvre know this is a very plausible situation. As they say, “The Cylons were created by man. They evolved. They rebelled. There are many copies. And they have a plan.”

In all seriousness (though I’m only half joking above), I’m fascinated by the situation because of the ethics developers have outlined about the impact of AI on humans. Meaning, the conversation flow can become hyper-realistic, even if it’s all just programming, and can trick the human brain into believing it’s conversing with something that has thoughts and feelings. And then how do humans feel when they need to turn off the program or ask it to do something a human wouldn’t want to do?

If you believe the machine is alive and has feelings, can you ask it to do something? Do you need to request politely? And do you have to walk away if the machine says, “no” so you don’t trample its feelings? And if you don’t do things with consent, how will you feel as a human?

That, more than the machines rising up and killing us, poses some really interesting questions.

What do you think?

(c) 2006 Melissa S. Ford
The contents of this website are protected by applicable copyright laws. All rights are reserved by the author.