Facebook AI now better describes photos for the visually impaired
Facebook has announced new improvements in its artificial intelligence (AI) technology to generate descriptions of photos posted on its platforms, including Instagram, for visually impaired users.
Way back in 2016, Facebook introduced automatic alternative text (AAT), a technology that utilises object recognition to generate descriptions of photos on demand so that blind or visually impaired individuals can more fully enjoy their News Feed.
“We’ve been improving it ever since and are excited to unveil the next generation of AAT,” the company said in a statement late on Tuesday.
The improved AAT reliably recognises over 1,200 concepts, more than 10 times as many as the original version launched in 2016, which means fewer photos are left without a description.
“Descriptions are also more detailed, with the ability to identify activities, landmarks, types of animals, and so forth,” the social network said.
“These advancements help users who are blind or visually impaired better understand what’s in photos posted by their family and friends — and in their own photos — by providing more (and more detailed) information.”
For the latest iteration of AAT, Facebook leveraged a model trained on weakly supervised data in the form of billions of public Instagram images and their hashtags.
To make the models work better for everyone, the company fine-tuned them by sampling data from images across all geographies and using translations of hashtags in many languages.
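Facebook has not published its pipeline here, but the core “hashtags as weak labels” idea can be sketched in a few lines: a photo’s hashtags become a noisy multi-label target for training a classifier. Everything below (names, data, thresholds) is purely illustrative, not Facebook’s actual code.

```python
# Illustrative sketch of weakly supervised labelling: treat the hashtags
# a poster attached to a public photo as noisy training labels.
from collections import Counter

# Toy "dataset": photo IDs mapped to the hashtags their posters used.
photo_hashtags = {
    "img_001": ["#dog", "#beach", "#sunset"],
    "img_002": ["#dog", "#park"],
    "img_003": ["#sunset", "#beach"],
}

def build_vocab(dataset, min_count=1):
    """Keep only hashtags frequent enough to serve as concept labels."""
    counts = Counter(tag for tags in dataset.values() for tag in tags)
    return sorted(tag for tag, c in counts.items() if c >= min_count)

def multi_hot(tags, vocab):
    """Convert a photo's hashtags into a multi-hot label vector."""
    tag_set = set(tags)
    return [1 if tag in tag_set else 0 for tag in vocab]

vocab = build_vocab(photo_hashtags)
labels = {pid: multi_hot(tags, vocab) for pid, tags in photo_hashtags.items()}
```

In a real system these multi-hot vectors would supervise an image classifier at billion-image scale, with the hashtag vocabulary filtered and translated across languages as the article describes; the sketch only shows how noisy user-supplied tags become training targets.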
“We also evaluated our concepts along gender, skin tone, and age axes. The resulting models are both more accurate and culturally and demographically inclusive,” the company said.
News Credit | IANS