When Google showed two demos at Google I/O 2018, the world was stunned. Watching it live, I thought it was an April Fools' prank that came a month late. If you haven't watched the video, I linked it below. You will probably need to watch it to understand the rest of the blog post. It's only 4 minutes.
In the first video, the assistant easily conversed with the receptionist to book a haircut at 10 a.m. In the second video, the assistant understood the restaurant receptionist's accent and determined that the restaurant does not take reservations for four people.
Why this is Revolutionary
After the first demo, Sundar Pichai (the CEO of Google) listed machine learning, natural language understanding, deep learning, and text-to-speech as the key technologies behind this type of artificial intelligence. Basically, Google had to incorporate almost all of its breakthroughs to make this happen. Most of us have seen or used the Google Assistant, Apple's Siri, or Amazon's Alexa to get detailed, focused answers to our questions. From that experience, we have come to expect that an assistant doesn't make any human sounds while it talks, sounds like "um" or "mm-hmm." What makes these two demos so outstanding is that Duplex includes those sounds. It also raises its pitch when asking a question, making it sound more human. Not once did the receptionist question the authenticity of the caller. It sounded real. It sounded exactly like the conversation two people would have when booking a hair appointment. Google trained its machine learning and artificial intelligence software to handle all types of people and the many directions a conversation might take.
Despite all of the positive responses it received, Google's latest ambitious project sparked a lot of doubt and controversy, especially where ethics are concerned. Should we know when the Google Assistant is calling? Should we react differently to these types of technologies? Is this taking A.I. too far? (Also, if you haven't read my artificial intelligence post, you can read it here.) All of these are valid questions, and we have the right to ask them. But if we expect these types of technologies to roll out soon, why do we overreact and prevent ourselves from appreciating these advancements, especially when they relate to artificial intelligence? Google Duplex is one of these breakthroughs in the technology ecosystem. We want it in our lives, but we do not know how to accept it and use it for our well-being without it going out of control.
Another question, which I mentioned above, is: "Should we know if the Google Assistant is calling?" My answer is no, for two reasons.
First, when talking to an assistant, we tend to speak in statements rather than questions, especially when we want to know something specific. For example, I say "Weather" to my Google Home rather than asking, "What is the weather?" Similarly, when checking a flight, I tend to say the airline and the flight number rather than asking, "What is the status of this flight?" We assume that being more direct gives the assistant a better chance of understanding us. So if the Google Assistant told the recipient that it is the Google Assistant calling, the conversation would become stilted, because the human on the other end would start speaking in a way they think the assistant can understand. Google does not want that. We don't want to speak like that either.
Second, would you continue listening to a phone call if the first thing the caller said was, "Hi, this is the Google Assistant calling"? Probably not. I would assume it was a robot trying to sell me something rather than trying to reserve a table for the night. So unless Google plans to tell every business that its robot might call, more people will hang up than hear out the rest of the sentence.
We have reached a point where Google can make a phone call for you. That sounds small, but at next year's developer conference, what will the next command be? Will it be, "Hey Google, drive me to work," or, "Hey Google, manage my bank transactions"? The possibilities are endless. At what point in the future will the Google Assistant be talking to itself? If that happens, will the phone call be dead? Most people think of A.I. as a supercomputer hidden away in the basement of a large building. They forget that it can be as simple as a small, puck-sized object that blends into your household.
What are your thoughts about Google’s new feature? Do you like it? Do you find it scary? Will you use it? Comment below!
Thanks once again for reading! Don't worry, I will publish a more creative post soon!