The Google Pixel 2 ships with an AI-based feature that relies on activity recognition to automatically activate functionality on the phone. A good example is the Do-Not-Disturb mode, which switches on when the phone detects that the user is driving.
This smart feature has so far been limited to the Pixel 2 and Pixel 2 XL, but Google has now announced that the underlying technology is openly available to all developers through a new Activity Recognition Transition API.
The API comes bundled with extensive training data and pre-configured sensing algorithms that detect changes in a user's activity and location patterns, so an app can respond with a relevant action.
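To give a sense of how an app expresses interest in those patterns, here is a minimal sketch of declaring the transitions it cares about. It assumes the Transition API classes from the Google Play services location library (ActivityTransition, ActivityTransitionRequest, DetectedActivity) and uses the in-vehicle activity for the driving case.

```kotlin
import com.google.android.gms.location.ActivityTransition
import com.google.android.gms.location.ActivityTransitionRequest
import com.google.android.gms.location.DetectedActivity

// Declare the transitions the app cares about: entering and exiting the
// "in vehicle" state, i.e. starting and finishing a drive.
fun buildDrivingTransitionRequest(): ActivityTransitionRequest {
    val transitions = listOf(
        ActivityTransition.Builder()
            .setActivityType(DetectedActivity.IN_VEHICLE)
            .setActivityTransition(ActivityTransition.ACTIVITY_TRANSITION_ENTER)
            .build(),
        ActivityTransition.Builder()
            .setActivityType(DetectedActivity.IN_VEHICLE)
            .setActivityTransition(ActivityTransition.ACTIVITY_TRANSITION_EXIT)
            .build()
    )
    return ActivityTransitionRequest(transitions)
}
```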
As Google put it in its announcement: "We're excited to make the Activity Recognition Transition API available to all Android developers – a simple API that does all the processing for you and just tells you what you actually care about: when a user's activity has changed."
The Transition API has been tested extensively so that it can detect the nature of motion and halts, for example accurately discerning whether a user has parked the car or has merely stopped at a traffic signal. Because the API handles the heavy lifting on location and sensor data itself, developers no longer need to train their own models; the Transition API is ready for implementation as is. Developers can therefore build different kinds of contextual actions by sensing changes in a user's surroundings and activity patterns.
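As a rough illustration of how little work is left on the app side, the sketch below registers the request with the system. It assumes the ActivityRecognition client from Google Play services and a PendingIntent pointing at a receiver in the app, and it leaves out the permission handling a real app would need.

```kotlin
import android.app.PendingIntent
import android.content.Context
import android.util.Log
import com.google.android.gms.location.ActivityRecognition
import com.google.android.gms.location.ActivityTransitionRequest

// Hand the transition request to Play services; from this point on the
// system does the sensing and only wakes the app when a transition occurs.
fun registerForDrivingTransitions(
    context: Context,
    request: ActivityTransitionRequest,
    pendingIntent: PendingIntent   // fires when a requested transition happens
) {
    ActivityRecognition.getClient(context)
        .requestActivityTransitionUpdates(request, pendingIntent)
        .addOnSuccessListener { Log.d("Transitions", "Registered for updates") }
        .addOnFailureListener { e -> Log.e("Transitions", "Registration failed", e) }
}
```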
Google has also announced that, apart from driving, support for new activities will be added to the Transition API over the next few months, expanding the scope of context-aware features that can be built with the AI-based tool. For example, the tool will soon be able to automatically detect the mode of transport. Google has already published a guide on the official Android Developer page that walks developers through setting up their project and registering for activity transition updates.
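For completeness, here is one possible shape of the receiving side, again a sketch rather than Google's reference code: a BroadcastReceiver (the name TransitionReceiver is made up for illustration) that unpacks the ActivityTransitionResult delivered via the PendingIntent and reacts when the user starts driving.

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import com.google.android.gms.location.ActivityTransition
import com.google.android.gms.location.ActivityTransitionResult
import com.google.android.gms.location.DetectedActivity

// Hypothetical receiver wired to the PendingIntent used at registration time.
class TransitionReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (!ActivityTransitionResult.hasResult(intent)) return
        val result = ActivityTransitionResult.extractResult(intent) ?: return
        for (event in result.transitionEvents) {
            val startedDriving =
                event.activityType == DetectedActivity.IN_VEHICLE &&
                event.transitionType == ActivityTransition.ACTIVITY_TRANSITION_ENTER
            if (startedDriving) {
                // e.g. switch on a Do-Not-Disturb style mode while the user drives
            }
        }
    }
}
```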