The internet is often called a great equalizer, but it isn’t so equal for visually impaired users. They rely on accessibility tools that read on-screen text aloud, yet those tools often miss the context of what is on the page. That’s what one visually impaired Facebook employee is hoping to change with a new technology.
Matt King’s idea is to use artificial intelligence to verbalize the content of an image or video, enabling visually impaired users to figuratively see it and judge whether the content is appropriate, both for people and for advertisers.
“More than two billion photos are shared on Facebook every single day. That’s a situation where a machine-based solution adds a lot more value than a human-based solution ever could,” King said. He added, “Anybody who has any kind of disability can benefit from Facebook. They can develop beneficial connections and understand their disability doesn’t have to define them, to limit them.”
One of the things King is working on is “automated alt-text,” a tool that audibly describes the content of a picture. The tool first launched in April 2016 with support for five languages. It now supports more than 29 languages and can be accessed from the Facebook website or Facebook’s apps on iOS and Android.
Talking about this tool, King said:
“The things people post most frequently kind of has a limited vocabulary associated with it. It makes it possible for us to have one of those situations where if you can tackle 20 percent of the solution, it tackles 80 percent of the problem. It’s getting that last 20 percent which is a lot of work, but we’re getting there.”
Last month, Facebook pushed another update to its automated alt-text tool, which can now use facial recognition to help visually impaired people find out who is in a photo. The tool is still a work in progress, however, and it may be years before it fully solves the problem it set out to address.