[LINK] Google 'Ambient Computing'

Stephen Loosley StephenLoosley at outlook.com
Sat Jun 20 23:25:15 AEST 2020


Google isn't even trying to not be creepy: 'Continuous Match Mode' in Assistant will listen to everything until it's disabled

By Tim Anderson 19 JUN 2020 https://www.theregister.com/2020/06/19/google_assistant_new_dev_tools_features


Ambient computing: The idea that access to Google (and its access to you) is pervasive whatever you are doing

Google has introduced "continuous match mode" for apps on its voice-powered Assistant platform, where it will listen to everything without pausing.

At the same time it has debuted related developer tools, new features, and the ability to display web content on its Smart Display hardware using the AMP component framework.

The Chocolate Factory has big plans for its voice assistant. "We consider voice to be the biggest paradigm shift around us," said director of product Baris Gultekin, speaking at the Voice Global summit, where the new features were introduced.

The goal is "ambient computing", where you can interact with the big G anywhere at any time, so pervasively that you do not notice it.

Voice interaction is a key part of this since it extends the ability to perform searches or run applications to scenarios where tapping a keyboard or touching a display is not possible.

Google Assistant exists in many guises such as on smartphones and watches, TVs, PCs, and also on dedicated hardware, such as the voice-only Google Home and Google Home Mini, or with "smart display" screens on the Google Nest Hub or devices from Lenovo and Harman.

While assistant devices have been popular, Android phones (which nag you to set up the Assistant) must form the largest subset of users. Across all device types, the company claims over 500 million active users.

As with any evolving platform, developer support is critical. At the Voice Global event, the company introduced new tooling, the biggest piece being the Actions Builder, a web-based tool that lets you add and assemble the pieces of an "action", essentially an application for Assistant.

Actions Builder is an easy-to-use tool that integrates natural language understanding (NLU) to parse user input into intents the action can act on, and lets you create the flow of intents and responses that form a complete action.

Payam Shodjai, senior director of product for Google Assistant, explained at the summit that Builder is a response to complaints that the previous approach, where developers switched back and forth between an Actions Console and a DialogFlow editor, was too disjointed.

Actions Builder will "replace DialogFlow as the preferred way to develop actions on the assistant," said Shodjai.

Trying out the new Actions Builder, we discovered that running an action under development is impossible if you have disabled the Web & App Activity setting, which lets Google keep a record of your activity. A dialog appears prompting you to enable it.

It is a reminder of how Google Assistant is entwined with the notion that you give Google your data in return for personalised experiences.

Some developers prefer to work with text-based code rather than a visual builder, and Google is responding to this need with an updated Actions SDK which lets you work in your preferred IDE or programmer's editor. This also makes it easier to use source control and continuous integration tools with action projects.
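
To give a flavour of the code-first route: fulfillment for an action is typically a Node.js webhook built on Google's @assistant/conversation library. The sketch below, in TypeScript, assumes a handler named 'greeting' has been wired up in the project's configuration; it is an illustrative outline under those assumptions, not a copy of Google's samples.

// Minimal Actions SDK fulfillment sketch (TypeScript, Node.js).
// Assumes a handler named 'greeting' is referenced from the action's
// scenes/intents configuration; the deployment target here is Cloud
// Functions for Firebase, but any HTTPS endpoint would do.
import { conversation } from '@assistant/conversation';
import * as functions from 'firebase-functions';

const app = conversation();

app.handle('greeting', (conv) => {
  // conv.add() appends a simple spoken/displayed response.
  conv.add('Welcome! Say a word to start the puzzle.');
});

// Expose the webhook; the exported name must match the fulfillment
// configured for the action.
export const ActionsOnGoogleFulfillment = functions.https.onRequest(app);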

There are also new features in the Assistant platform. Home Storage is a way of storing values across multiple sessions in a single household, as defined by Google's Home Graph, part of its smart home API.

This is significant because it enables multi-user applications. The examples demonstrated were games such as word puzzles.
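
As a rough sketch of how that might look in fulfillment code: the Node.js library exposes home storage as a parameter bag on the conversation object (conv.home.params, to the best of our understanding), alongside the per-user and per-session equivalents. The handler and key names below are invented for illustration.

import { conversation } from '@assistant/conversation';

const app = conversation();

// Hypothetical word-game handler that keeps a running score shared by
// the whole household. Home storage is keyed to the household as
// defined by the Home Graph, not to an individual user, so nothing
// personal should be written here (see the warning below).
app.handle('record_correct_answer', (conv) => {
  const score = (conv.home.params.score as number | undefined) ?? 0;
  conv.home.params.score = score + 1;
  conv.add(`Nice one. Your household's score is now ${score + 1}.`);
});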

Don't say we didn't warn you

The documents reveal a strong warning, though, in red type: "Important: Do not use home storage to store any personal data. Home storage data applies across multiple individuals in the same household who use the same action."

It will be down to developers to heed such warnings.

The Media API for Google Assistant has been updated and now handles play and resume across devices, and the general ability to start playback from a specific point rather than from the beginning.
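
A sketch of what that might look like from the fulfillment side, using the library's Media helper: the field names (mediaType, startOffset, mediaObjects) reflect the media response format as we understand it and should be checked against the current reference, and the handler name and URL are placeholders.

import { conversation, Media } from '@assistant/conversation';

const app = conversation();

// Hypothetical handler that resumes an episode part-way through
// rather than starting from the beginning.
app.handle('resume_episode', (conv) => {
  conv.add('Picking up where you left off.');
  conv.add(new Media({
    mediaType: 'AUDIO',
    // startOffset lets playback begin at a given point; here 2m05s in.
    startOffset: '125s',
    mediaObjects: [{
      name: 'Episode 42',
      description: 'Placeholder episode for illustration',
      url: 'https://example.com/podcast/episode42.mp3',
    }],
  }));
});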

Devices with Smart Displays are gaining the ability to display web content – provided it is delivered using the AMP component framework. Initial support will be for news sites, coming later this summer, but with more categories to follow.

The most intriguing new feature is called Continuous Match Mode. This touches on an area of debate: the extent to which an Assistant device listens in on conversations without specifically being asked to do so. Currently you have to use a wake word to initiate an Assistant action, though devices must be listening at all times to some extent in order to recognise the wake word – "Hey Google" or "OK Google" – which can be followed by a custom invocation for a third-party action.

Once in a conversation – the equivalent of running an application – the system follows a flow based on parsing the user's words into intents that are meaningful in the context of the conversation. The action continues to the completion of the flow, quits if it cannot parse the user's intents, or exits after a system command such as "cancel". The developer can override the first exit command with one last plea to continue, but not the second.
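
For illustration, a developer-defined exit might look something like the sketch below: a custom intent (here called 'quit_game', an invented name) hands control to the built-in end-of-conversation scene. The scene constant is as we understand the platform to name it; treat this as an outline rather than gospel.

import { conversation } from '@assistant/conversation';

const app = conversation();

// Sketch of a developer-defined exit. When the user's words match the
// custom 'quit_game' intent, say goodbye and hand off to the built-in
// end-of-conversation scene so the action stops cleanly.
app.handle('quit_game', (conv) => {
  conv.add('Thanks for playing. Goodbye!');
  conv.scene.next = { name: 'actions.scene.END_CONVERSATION' };
});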

"Sometimes you want to build experiences that enable the mic to remain open, to enable users to speak more naturally with your action, without waiting for a change in mic states," said Shodjai at the summit and in the developer post.

"Today we are announcing an early access program for Continuous Match Mode, which allows the assistant to respond immediately to user's speech enabling more natural and fluid experiences. This is done transparently, so that before the mic opens the assistant will announce, 'the mic will stay open temporarily', so users know they can now speak freely without waiting for additional prompts."

The mode is not yet publicly documented. The demonstrated example was for a game with jolly cartoon pictures; but there may be privacy implications since in effect this setting lets the action continue to listen to everything while the mode is active.

Shodjai did not explain how users will end a Continuous Match Mode session. Presumably this will be either after a developer-defined exit intent, or via a system intent as with existing actions.

Until that happens, the action will be able to keep running.

Just as with personalisation via tracking and data collection, privacy and pervasive computing do not sit comfortably together, and with the new Continuous Match Mode a little more privacy slips away. ®
