Over the last week we have seen two conferences with two distinct tones and approaches to the idea of AI. These differences are not nearly as simple as good and evil, or right and wrong; there is more nuance to consider. Here is a quote from Satya Nadella:

“We need to ask ourselves not only what computers can do, but what computers should do. That time has come.”

It struck me that this message came before the hype and the demonstrations; it was presented as a foundation for everything related to Artificial Intelligence and Machine Learning. Probably the most impressive demo of the week, however, came from Google I/O, where Google demonstrated an AI assistant (Google Duplex) making a phone call. James Vincent of The Verge described it like this:

“Of course, this was a phone call with a difference. It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI. It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.”

After the awe and shock of the demo passed, a growing and vocal subset of viewers voiced their concern on social networks: the assistant presented such an authentic-sounding human voice, with no indication of who, or what, was actually speaking, that the interaction bordered on a kind of deception.

I must admit that, as far as social boundaries go, I would like to know whether I am talking to a human or not. Do I need to be polite to AI? I try to be more deliberate and efficient when I know I am talking to an AI, but more social and engaging when I am talking to a human. I think I deserve to know when I do not need to bother, but that is currently the limit of my concern.


