At I/O 2018, Google introduced Android P and the new features it brings, such as gesture-based navigation, adaptive battery and app actions, alongside new APIs for developers. But Google still had some Android P announcements left, and chose the last day of its annual developers’ conference to reveal a new Sound Amplifier feature and a Dynamic Processing Effect designed to enhance the audio experience for users.
Google has unveiled a new audio framework, arriving with Android P, that brings an all-new Sound Amplifier and a Dynamic Processing Effect. The framework promises better audio output across the board, whether that means noise suppression while using the microphone or music playback through a device’s speakers.
The Android Open Source Project (AOSP) has already been updated with the new Dynamic Processing Effect, and it will be made available to all OEMs and developers. Whether users actually benefit, however, depends on manufacturers, as some companies rely on their own custom audio enhancement techniques, such as Beats Audio on HTC devices. On the developer side, the Dynamic Processing Effect exposes over 100 parameters for fine-tuning audio output through the new framework, though these settings will not be surfaced to end users.
The Dynamic Processing Effect works in four stages for each audio channel, and it can be applied to great effect for background noise suppression by attenuating or boosting different frequency bands to varying degrees.
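For developers, the effect surfaces in AOSP as the `DynamicsProcessing` class (`android.media.audiofx.DynamicsProcessing`, API level 28), whose four per-channel stages are a pre-EQ, a multi-band compressor, a post-EQ and a limiter. A minimal sketch of attaching the effect to an audio session might look like the following; the channel and band counts are illustrative assumptions, and the session ID would come from an `AudioTrack` or `MediaPlayer` on a real device.

```java
import android.media.audiofx.DynamicsProcessing;

// Sketch: enable the DynamicsProcessing effect with its four per-channel
// stages. Runs only on an Android P (API 28+) device.
public class DynamicsProcessingSketch {

    public static DynamicsProcessing attach(int audioSessionId) {
        final int channelCount = 2; // stereo; assumption for this sketch
        final int bandCount = 8;    // EQ/compressor bands; assumption

        DynamicsProcessing.Config config =
                new DynamicsProcessing.Config.Builder(
                        DynamicsProcessing.VARIANT_FAVOR_FREQUENCY_RESOLUTION,
                        channelCount,
                        true, bandCount,   // stage 1: pre-EQ
                        true, bandCount,   // stage 2: multi-band compressor
                        true, bandCount,   // stage 3: post-EQ
                        true)              // stage 4: limiter
                .build();

        DynamicsProcessing dp =
                new DynamicsProcessing(0 /* priority */, audioSessionId, config);
        dp.setEnabled(true);
        return dp;
    }
}
```

Each stage can then be retrieved and tuned per channel (for example via `getPreEqByChannelIndex` or `getMbcByChannelIndex`), which is where the 100-plus parameters mentioned above come into play.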
The Sound Amplifier, meanwhile, gathers all of these audio enhancement settings and presents them to users as just two sliders for ease of use: one to control the volume and another to adjust the strength of the filters that cancel out background frequencies.
As a result, users will be able to easily distinguish a speaker’s voice in a video from background noise. Moreover, an ‘active listening’ tool will also be available that cancels out background noise in real time and adjusts the audio to the ambient environment. Google claims the new audio framework can be used for a wide array of tasks, such as loudness maximization, microphone noise suppression and headphone tuning, to name a few.