Facial Recognition Firm Working on 'Mask Removal' Tool

How accurate would it be?

While mask wearing in public for such a long period has had its challenges, it has, of course, offered benefits as well. Besides the obvious ones, many people were pleased to learn that covering half of our faces with paper or fabric was enough to confuse facial recognition technology and inhibit its ability to invade our privacy.

Or so we thought, at first, until studies began to reveal that the masking made little difference to a technology that was hell-bent on identifying our faces.

Well, Clearview AI, perhaps the most controversial name in facial recognition, seems intent on making sure that doesn't change. In a recent interview with Wired, CEO Hoan Ton-That said his company has developments in the works that specifically target masks.

He calls the technology a "mask removal" tool, and claims it uses artificial intelligence to draw on statistical patterns from other images and essentially guess what a person might look like beneath the face covering.
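Clearview hasn't published how the tool actually works, but the description matches the general idea of image inpainting: reconstructing occluded pixels from patterns found elsewhere. The sketch below is purely illustrative, not Clearview's method; it uses classical OpenCV inpainting on hypothetical input files, whereas a learned model would draw its patterns from a large training set instead of a single photo.

import cv2

# Load the occluded photo and a binary mask marking the covered pixels
# (both filenames are hypothetical placeholders).
image = cv2.imread("masked_face.jpg")
covered = cv2.imread("mask_region.png", cv2.IMREAD_GRAYSCALE)

# Classical inpainting fills the masked region from surrounding pixel
# statistics; learned approaches extend the same idea using patterns
# gathered from many other images rather than just this one.
restored = cv2.inpaint(image, covered, 3, cv2.INPAINT_TELEA)

cv2.imwrite("restored_guess.jpg", restored)

The result is, by construction, a statistical guess rather than a recovered face, which is exactly the accuracy concern raised later in this piece.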

What could go wrong?

According to Wired, Ton-That went on to claim that the company's database now holds roughly 10 billion images, despite the legal trouble Clearview has encountered over how it obtained them in the first place. That would make the database about three times larger than previously reported.

Wired says that the new technology, which besides mask removal also aims to de-blur fuzzy images, makes Clearview's tool more powerful, but "may make it more dangerous and error-prone as well."

Ton-That isn't deterred, telling Wired that the expanded database of images makes the overall technology more accurate. He adds that it's a "good thing" that people are concerned about the misuse of technology like this, but that, "over time, (Clearview) can address more and more of their concerns."

Aleksander Madry, a professor at MIT who specializes in machine learning, weighed in, telling Wired he "would expect accuracy to be quite bad," and that "without careful control over the data set and training process (he) would expect a plethora of unintended bias to creep in."
