FaceApp in Hot Water for Developing a Racist AI

Last Updated: April 26, 2017

When Wireless Lab – a Russian company – launched FaceApp, they obviously thought that people were going to love it. Quite frankly, the app does have a lot going for it, with AI-based image processing that can add impressive photo-realistic effects to your selfies.

On its face (no pun intended), FaceApp sounds like an amazing app. Users can simply snap a selfie, or import an image from their gallery, and use the various AI-assisted filters to change their expression, look older or younger, and so on. Heck, there’s even a filter to swap genders (which works rather well, to be quite honest).


However, the app came under fire on Twitter and other social media networks when users started reporting that the “Hotness” filter was lightening their skin. That’s incredibly racist, and it carries troubling connotations about the underlying idea of “hotness” in society… or at least at Wireless Lab, seeing as the company trained the AI on a private data set rather than a public one.

To its credit, Wireless Lab was quick on the uptake and issued an email statement apologising for the algorithm, with founder and CEO Yaroslav Goncharov going as far as to call it “an unquestionably serious issue”, while defending the app by attributing the problem to “training set bias” rather than intentional behavior.

Wireless Lab has renamed the filter in question from “Hotness” to “Spark” as a temporary fix, with a more permanent fix promised soon. What’s weird, though, is that while FaceApp has come under relentless fire on social media, the Play Store is still full of people wondering why the “Hotness” filter was removed, some going as far as to ask the developers to “ignore the haters”. Clearly, humanity needs fixing.