
Yes, things will communicate in plain English. 

We have seen it so often in Hollywood that we have stopped taking the idea seriously. But natural language is the most elevated form of communication, a solution that bridges the gap to the user. It presents several advantages:

  • Universal
  • Standard
  • Familiar
  • Transparent

It can express simple instructions as well as abstract concepts. Everybody understands natural language and, if things communicate this way, we can see what they say, what they think, and how everything around us works. Natural language is the most elevated standard.


Let's travel to the smart home of 2025. It looks the same as a house in 2018, but all its things are invisibly connected. And of course, they all belong to different manufacturers. We are comfortably drinking a Coke and reading a book in our garden. Suddenly, the shades start to fold, the pool covers itself, the heating turns on and our parasol closes, leaving us in the sun. We raise an eyebrow. All we have to do is look at the communication registry to understand what is going on with our things:

        Meteo: it is going to rain heavily in 30 minutes

        Shades: I'll fold if it rains

        Pool: I'll cover myself if it rains

        Sprinkler: I'll stand by today if it rains

        Heater: will the temperature descend?

        Meteo: the temperature will drop 5° in one hour

        Heater: I’m starting to heat the house

“I’m going to the living room in 10 minutes”

        Heater: ok, speeding up heating in the living room

You don't have to know how to code to understand what is going on and react accordingly. It is transparent. It is extremely fast, and everybody can start a conversation; these are not lines of code hidden inside a platform or a hub controlled by a corporation, understood only by a handful of developers.
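To make this concrete, here is a minimal sketch in Python of how such a communication registry could work. All class and device names here are hypothetical, not part of any real platform: devices publish plain-English messages on a shared bus, every message is appended to a human-readable log, and other devices react to what they hear.

    from typing import Callable

    class Bus:
        """A shared conversation: publish plain English, keep a readable log."""

        def __init__(self) -> None:
            self.registry: list[str] = []    # the human-readable log
            self.listeners: list[Callable[[str, str], None]] = []

        def publish(self, sender: str, message: str) -> None:
            self.registry.append(f"{sender}: {message}")
            for listener in self.listeners:
                listener(sender, message)

    bus = Bus()

    def shades(sender: str, message: str) -> None:
        # The shades have one concern: rain. They ignore everything else.
        if sender == "Meteo" and "rain" in message:
            bus.publish("Shades", "I'll fold if it rains")

    bus.listeners.append(shades)
    bus.publish("Meteo", "it is going to rain heavily in 30 minutes")
    print("\n".join(bus.registry))
    # Meteo: it is going to rain heavily in 30 minutes
    # Shades: I'll fold if it rains

The registry is nothing more than the log of everything published, which is exactly what makes the conversation readable by anyone.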

This way, we are not allowing things to think and act without control. If we let them do that, we will be heading straight for the "Internet of Paternalistic Things", where suddenly our refrigerator decides that somebody at home is pregnant and will not allow anybody to grab a beer. Really scary.

The good news is that the technology to build all this already exists. Objects, helped by general-purpose communication modules (such as Particle, Intel Edison or Thinking Things), can send and receive all sorts of messages. To build this conversation scenario with our things, we need:

    • A universal ID: needed to unambiguously identify objects across platforms, systems and applications. For instance, I want to refer to my car from my smartphone, my TV or my mirror, each with a different OS and different communication methods (a rough sketch of such a message follows this list).
    • Multi-radio comms: bandwidth, security, frequency and bidirectionality vary widely between devices, even within a closed environment such as a house. For instance, a surveillance camera connected over 3G is economically non-viable, whereas connecting it over WiFi makes much more sense. In the same way, I will not trust a smart lock connected to my personal WiFi, which can be hacked, but I will trust it connected over the much more secure 3G network.
    • An NLP (Natural Language Processing) system: a system to translate to and from natural language that everybody can understand. This is where devices make sense of the world.
    • A window to the conversation: one or more places to listen to and take part in these conversations. As with the universal ID, I want to follow what is going on from my tablet, my smartphone, my TV and my car. Among all possible systems, some obvious candidates are only a short step away from offering this 'window' to conversations.
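As a rough illustration of the first two requirements, here is what a message envelope might look like; a minimal sketch in Python where every field name is an assumption rather than an existing standard:

    import uuid
    from dataclasses import dataclass

    @dataclass
    class Envelope:
        sender_id: uuid.UUID   # universal ID: stable across platforms, systems and apps
        transport: str         # chosen per device constraints, e.g. "wifi" or "3g"
        text: str              # the natural-language message everybody can read

    # In practice the ID would be assigned once and registered globally;
    # here we just generate one for the example.
    CAR_ID = uuid.uuid4()

    msg = Envelope(sender_id=CAR_ID, transport="3g", text="the doors are locked")
    print(f"{msg.sender_id} over {msg.transport}: {msg.text}")

The key design choice is that the identity and the transport travel with the message, so any platform can attribute it to the right object regardless of which radio carried it.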

Wait a second, can machines really understand those messages? More good news: natural language understanding in specific contexts reached maturity a while ago. When the weather station called 'Meteo' publishes "it is going to rain in 30 minutes", what every machine receives through NLP technology is something like:

intent    = weather forecast
text      = rain
recipient = all
date      = today
time      = present time + 30 minutes

Every object catches what it is programmed to understand from these parameters, and acts accordingly. We are not talking about building Siri, Cortana, Facebook M, Echo or Google Now for every device; those are full virtual assistants. We are talking about building small intelligences inside things, with a very limited vocabulary, in a perfectly defined context, adapted to the small actions that things perform.
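For instance, the small intelligence inside the heater could be as simple as the following sketch, assuming the NLP layer already produced the parameters shown above (the function name and the "presence" intent are hypothetical):

    # What the NLP layer produced from "it is going to rain in 30 minutes":
    parsed = {
        "intent": "weather forecast",
        "text": "rain",
        "recipient": "all",
        "date": "today",
        "time": "present time + 30 minutes",
    }

    def heater_react(msg: dict) -> str:
        # A very limited vocabulary: forecasts, and presence announcements.
        if msg["intent"] == "weather forecast" and msg["text"] == "rain":
            return "will the temperature descend?"   # follow-up question to Meteo
        if msg["intent"] == "presence" and "living room" in msg.get("text", ""):
            return "ok, speeding up heating in the living room"
        return ""                                    # not in my vocabulary: stay silent

    print(heater_react(parsed))   # -> will the temperature descend?

A handful of if-clauses per device is enough, precisely because each thing only needs to understand the few messages that concern it.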

Since the personal computer became mainstream, humans have learned to use computers by adapting to their language. We started with very strange words on the command line (DIR, CHKDSK…). Later, thanks to a certain maneuver by Steve Jobs at Xerox PARC, graphical interfaces arrived and machines moved closer to our world. With the iPhone we could even touch the graphical interface, but we still have to learn what every new app puts in front of us.

The conversational interface opens a new world in user experience: it is clean, fast and efficient. In an increasingly complex world, simple solutions seem like the best way to deliver value.

It is high time we stopped learning their interfaces and improved the way we communicate with devices. It is time for them, the machines, to travel the last mile to us. It is time for them, finally, to talk.

“Never trust anything that can think for itself if you can’t see where it keeps its brain.”

J.K. Rowling