Imagining the (chatty) interaction between humans and machines in the future
Posted: Tue Jan 07, 2025 7:00 am
For some time now, user experience experts have had one word burned into their brains: voice. Helped along by increasingly ubiquitous artificial intelligence, voice is expected to become the most important user interface, to the point of sending today's omnipresent screens into retirement.
However, screens may still have a chance in the (chatty) future that is just around the corner. Isn't it symptomatic that Amazon recently launched the Echo Show, a screen variant of its famous smart speaker?
“I think the Amazon Echo Show is just an intermediate solution,” explains Jörg Heidrich, CEO of User Experience at digital agency UDG, speaking to Horizont. “Supported by artificial intelligence, voice services will gradually become less complex, so much so that no screen will be needed to provide additional explanations to the user,” he says.
Stephan Anders, director of innovation at the Plan.Net agency group, is of a different opinion. “Voice applications will be supported in most cases by screens. There are certain types of information that are much better conveyed with visual components,” argues Anders.
Frank Rief, director of the Rief Mediadesign agency, is also in favour of visual elements as a necessary complement to voice assistants. “Complex information with a lot of narrative elements is difficult to communicate purely through hearing,” he says. And that is why visual elements will inevitably come into play. These visual elements will not be shown on traditional screens, however, but will probably take the form of projections and holograms. “Perhaps there will be screens that the user can roll up and put in his pocket,” Rief predicts.
In a new voice-driven scenario, gesture control will also have a lot to say. As part of its Soli project, for example, Google is working on a sensor that uses radar technology to analyse finger and hand gestures and translate them into specific commands.
With the help of sensors like the one designed by the internet giant, it would not be necessary to physically touch a digital gadget to control it. However, gesture control also faces a number of challenges.
“The biggest challenge with gesture control is developing an industry-wide standard,” says Andreas Renner, Associate Creative Director at digital agency Triplesense Reply. “Users will not be inclined to learn individual gestures for different devices.”
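To make Renner's point a little more concrete, here is a minimal, purely illustrative sketch (hypothetical names, not Google's Soli API or any real standard) of how a shared gesture vocabulary could work: every device binds its own commands to the same standardized gesture names, so users never have to relearn the vocabulary per device.

```python
# Hypothetical sketch only: illustrates a shared gesture vocabulary, not a real
# industry standard or the Soli API.
from typing import Callable, Dict

# A hypothetical industry-wide set of gesture names shared by all devices.
STANDARD_GESTURES = {"swipe_left", "swipe_right", "pinch", "tap", "circle"}


class GestureController:
    """Maps standardized gesture names to device-specific commands."""

    def __init__(self, device_name: str):
        self.device_name = device_name
        self._bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, command: Callable[[], None]) -> None:
        # Devices may only bind gestures from the shared vocabulary.
        if gesture not in STANDARD_GESTURES:
            raise ValueError(f"'{gesture}' is not part of the shared vocabulary")
        self._bindings[gesture] = command

    def handle(self, gesture: str) -> None:
        # In practice a radar or camera sensor would classify a hand movement
        # and emit the gesture name; here we simply dispatch it.
        action = self._bindings.get(gesture)
        if action:
            action()
        else:
            print(f"[{self.device_name}] gesture '{gesture}' not bound")


# Two different devices reuse the same gesture with different meanings,
# but the gesture itself stays the same for the user.
speaker = GestureController("smart_speaker")
speaker.bind("swipe_right", lambda: print("speaker: next track"))

lamp = GestureController("desk_lamp")
lamp.bind("swipe_right", lambda: print("lamp: brightness up"))

speaker.handle("swipe_right")  # speaker: next track
lamp.handle("swipe_right")     # lamp: brightness up
```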
In the future, users will have to deal with multiple interfaces, not all of which will be to their liking, and it will therefore be vitally important to test them with their potential audience beforehand, Renner warns. In this way, “hypotheses and concepts can be validated at the earliest possible stage and thus shape the ideal user experience,” says Renner.
With an eye on the new interfaces that are just around the corner, companies will also have to give their brand identity a new twist, one in which purely visual elements inevitably carry less weight.