A new commit on the Chromium Gerrit suggests that Google Chrome may soon get a ‘Live Captions’ feature that automatically provides real-time captions for audio playing in the browser. Reportedly first spotted by Chrome Unboxed, the feature is apparently part of the Speech On-Device API (SODA) and is expected to land in Chrome Canary before rolling out to the stable channel later.

Google introduced the Live Captions feature on its Pixel phones with the launch of Android 10 last year. As the name suggests, the feature lets a device or media player display real-time captions for any audio content, including music, videos, and podcasts, regardless of whether the content supports captions natively. The real-time transcription happens on-device rather than in the cloud, which not only makes the process faster and more seamless but also preserves user privacy.

Interestingly, Chrome developers appear to be building more than just a live caption system, with a Google employee noting that the ChromeOS team is also looking at “other speech recognition scenarios they may want to build in the future.” Whatever the case, since it will be a native Chrome feature, it may come to all desktop platforms, including Chromebooks, Windows, Linux, and Mac, although the exact details remain unclear for now.

The Live Captions feature on Chrome is expected to work the same way it does on Pixel devices, where it serves as an essential accessibility feature not only for people with hearing impairments but also for anyone trying to watch a video in an exceptionally noisy place. It will be interesting to see when the feature finally makes its way to a public Chrome build, but that is unlikely to happen any time soon, given that it is still in the very early stages of development.