Chrome can now instantly caption audio and video on the web

Google is extending its real-time captioning feature, Live Caption, from Pixel phones to anyone using the Chrome browser, as first spotted by XDA Developers. Live Caption uses machine learning to automatically generate captions for videos or audio where none existed before, making the web far more accessible to anyone who is deaf or hard of hearing.

When enabled, Live Caption automatically appears in a small, movable box at the bottom of your browser whenever you watch or listen to content in which people are talking. Words show up after a slight delay, and if a speaker talks quickly or stutters, you may notice errors. But overall, the feature is just as impressive as when it first appeared on Pixel phones in 2019. Captions will even appear when the sound is muted or your volume is turned down, making it a way to “read” videos or podcasts without disturbing others around you.

Live Caption working on audio from a podcast player

Chrome’s Live Caption worked on YouTube videos, Twitch streams, podcast players, and even music streaming services like SoundCloud in early tests done by a few of us here at The Verge. However, Live Caption in Chrome currently only works in English, which is also the case on mobile devices.

Live Caption can be enabled in the latest version of Chrome by going to Settings, then the “Advanced” section, and then “Accessibility.” (If you don’t see the feature, try manually updating and restarting your browser.) Once you turn it on, Chrome will quickly download some speech recognition files, and captions should then appear the next time your browser plays audio in which people are talking.
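If you’d rather skip the menus, typing chrome://settings/accessibility into the address bar should, in recent versions of Chrome, take you straight to the page with the Live Caption toggle.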

Live Caption was first introduced in the Android Q beta, but to date it has been exclusive to select Pixel and Samsung phones. Now that it’s available in Chrome, Live Caption will reach a much larger audience.
