It’s the dawn of the artificial intelligence age.

AI is all around us, keeping tabs on our activities throughout the day. It's especially prevalent across social media. We share our personal information across countless devices and platforms in the hope of a seamless user experience, or so we think.

Every smartphone, tablet, smartwatch, Google Home, and Alexa operates on some form of artificial intelligence algorithm. These algorithms improve through machine learning: they can recognize faces and voices and, from there, predict the decisions we'll make next.

Even creepier, the algorithms can deconstruct an image or video uploaded to a social network and identify personal data connected to the person in it. Sensitive information can thus be linked to an image of you without any verification.

As you can imagine, this makes day-to-day life a lot easier. However, it places unimaginable power into the hands of those fortunate enough to own this kind of high-tech equipment.

But what happens when these algorithms get it wrong?

In April, Apple was hit with a $1 billion lawsuit from a victim of identity theft who had been misidentified by facial recognition software.

According to the complaint, the 18-year-old plaintiff got his driver's permit last March. It displayed his name, address, birthday, sex, height, and eye color, but it did not include a photo. When he later lost the permit, he did not notify police. Because these permits carry no photo, they are not supposed to qualify as identification.

They won't get you into a bar or a bank, so people are wondering how a situation like this could arise.

The Details

Shortly afterward, the teen received a summons from Boston Municipal Court stating he was under investigation for larceny of over $1,200.

This confused the teen: the crime took place at an Apple store in Boston, and he had never been to Boston. He also had an alibi, having attended his senior prom in Manhattan on the date of the alleged incident.

At his arraignment, a loss prevention associate said he had witnessed the suspect steal Apple Pencils (retail price $99 USD) on a security video. He also told police that the defendant had previously been arrested for thefts from another Apple location. When the teen's attorney requested access to the security footage, it "no longer existed."

After returning home, the defendant received notice of additional pending charges: accusations of larceny at multiple Apple locations in New Jersey, as well as in Delaware and New York City.

In November, New York City police arrested him at 4 a.m. The warrant included a photo of someone who did not resemble him, yet the officers proceeded with the arrest.

Where Facial Recognition Software Fails

A detective viewed the surveillance footage and remarked that the suspect “looked nothing like” the teen they’d arrested.

The teen's attorney was able to speak with the district attorney in Boston, who then obtained the surveillance footage that had previously been reported missing. After viewing the footage, the DA dismissed the case against the teen, and the teen filed a lawsuit against Apple and its security firm in US District Court.

The Lawsuit

The complaint states that “any examination of Face ID has presupposed that the iPhone user is not being deceptive about his identity. However, when a name is mismatched to a particular face, the security benefits of the Face ID software become a criminal’s weapon.”

Since the original permit did not show a photo, this proved confusing. However, investigators found that Apple had accepted the identification information from the lost permit (the victim's) along with a photo of someone else (the thief's).

Police stated that they suspect the person who committed the crimes at Apple presented the photoless permit as identification during one of the incidents. Apple's systems then matched that personal data to the thief's face, and as a result, the security software wrongly identified the perpetrator as the victim.

The complaint adds that Apple "failed to consider the possibility of human error in its identification procedures, despite the fact that there was a clear discrepancy between the height described on the permit, and the actual suspect's height."

You can read more about the case: Bah v. Apple Inc., 19-cv-03539, U.S. District Court, Southern District of New York (Manhattan).

The Changing Artificial Intelligence Culture

The University of Toronto recently announced a new filter that could mask private information and restore data security and privacy. Naturally, this seems to be going over well with modern technology users. Here's more on what could prove to be a much-awaited digital antidote to the rising problem of intrusive AI technology:

The Problem

Professor Aarabi, head researcher at U of T, says that facial recognition systems are becoming a menace.

They are getting better with time, learning to decipher images with more accuracy. One side of the argument holds that for technology to advance, there will be mistakes and casualties. The other side says that personal privacy is quickly becoming a thing of the past, and that's not OK.

The Solution

With facial recognition growing more adept by the day, some feel that we must combat the intrusive nature of these algorithms, and that this can only happen with better technology. As a result, the professor, aided by graduate student Avishek Bose, has come up with a countermeasure.

His remedy is simple. Similar to an anti-missile defense system, it gets ahead of the problem before it becomes one, keeping situations like the New York teen's at bay. The two researchers have created anti-facial-recognition software that fools facial recognition systems and throws them off track.

Adversarial Training

The groundwork for what could prove to be revolutionary technology involved pitting two like-for-like AI opposites against each other. One neural network takes up the role of facial recognition, while the other learns from the first and executes countermeasures to stand in the way of its task.

To put it simply, the battle rages on with the two programs fighting for control of pixels.

Pixels are the smallest units of an image display screen, the tiniest pieces of the puzzle that together yield a digital image.

The result is a Snapchat-like filter that you can add to images. It's like a watermark overlaying the entire image, creating a privacy invisibility cloak of sorts.
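To make the adversarial training idea concrete, here is a minimal, illustrative sketch in Python/PyTorch. It is not the researchers' actual code; the network architectures, the perturbation bound, and the training data are all assumptions. It simply shows the basic setup the article describes: a fixed face-detection network on one side, and a second network that learns tiny pixel changes to defeat it on the other.

```python
# Illustrative sketch only (assumed architectures and hyperparameters), not the
# U of T team's implementation. One network stands in for the face detector; a
# second network learns a filter of tiny pixel changes that defeats it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceDetector(nn.Module):
    """Stand-in for a pre-trained face recognition network (kept frozen)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),          # logit: "face detected / identity matched"
        )

    def forward(self, x):
        return self.net(x)

class PrivacyFilter(nn.Module):
    """The opposing network: learns a small perturbation for each image."""
    def __init__(self, eps=0.03):
        super().__init__()
        self.eps = eps                 # bound keeps the change invisible to people
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0, 1)

detector = FaceDetector()
for p in detector.parameters():        # the detector itself is not trained here
    p.requires_grad_(False)

privacy_filter = PrivacyFilter()
optimizer = torch.optim.Adam(privacy_filter.parameters(), lr=1e-4)

for step in range(100):                # toy loop on random data as a placeholder
    images = torch.rand(8, 3, 64, 64)  # stand-in for a real face dataset
    filtered = privacy_filter(images)
    logits = detector(filtered)
    # The filter "fights for the pixels": push the detector's confidence to zero.
    loss = F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real system, the detector would be a pre-trained face recognition model and the filter would be trained on an actual face dataset, but the tug-of-war between the two networks works the same way.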

How it Works

Once the filter kicks into effect, the defending algorithm, if you will, reads the neural net of its opponent and conceals identifying features or misdirects identification elsewhere.

If, for instance, the AI is looking to map out your jawline, the filter introduces subtle disturbances to hide that feature. The image looks the same to the human eye, but the AI detector is thrown off.
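As a rough illustration of that last point, the sketch below (plain Python with NumPy and Pillow; the file names, the random perturbation, and the 3% bound are assumptions, not details from the paper) applies a bounded "privacy filter" to a photo and confirms that no pixel changes by more than a few percent of full brightness, which is why the result looks unchanged to a person.

```python
# Illustrative only: apply a small, bounded perturbation to a photo and verify
# that the change stays below a perceptibility threshold. The perturbation here
# is random noise standing in for a learned filter.
import numpy as np
from PIL import Image

def apply_privacy_filter(image, perturbation, eps=0.03):
    """Add a capped perturbation so the photo looks unchanged to the eye
    but no longer lines up neatly with a detector's learned facial features."""
    perturbation = np.clip(perturbation, -eps, eps)   # cap each pixel change
    return np.clip(image + perturbation, 0.0, 1.0)

# "portrait.jpg" is a hypothetical input file.
photo = np.asarray(Image.open("portrait.jpg").convert("RGB"), dtype=np.float32) / 255.0
noise = np.random.uniform(-0.03, 0.03, photo.shape).astype(np.float32)
protected = apply_privacy_filter(photo, noise)

# The largest per-pixel change is at most ~3% of full brightness.
print("max pixel change:", float(np.abs(protected - photo).max()))
Image.fromarray((protected * 255).astype(np.uint8)).save("portrait_protected.jpg")
```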

Bose and Aarabi put their invention to the test against a benchmark face dataset of more than 600 faces. The mix incorporates different environments, lighting conditions, and ethnicities, mimicking human diversity. And it works.

There are promising results.

At the beginning of the research trials, nearly 100% of the tested samples were recognized by the detection system. With the disruptive filter applied, that figure dropped to an almost negligible 0.5%.

In other words, a face went from instantly recognizable to barely detectable in the eyes of the facial detection system, while human viewers would be none the wiser.

The project's lead author says the key to success was channeling the power of a facial detection system against itself. As the detection network executes its task and grows more capable, the disruption network learns from it in turn, staying a move or two ahead.

Beyond blocking facial recognition, he says there are many more applications for the new technology, including disrupting image-based searches.

The team presented their discovery at a 2018 IEEE international workshop, and there are plans to roll out an application in the near future.

Of course, this new filter also undoes the benefits of facial recognition software. If a photo did capture the real criminal, for example, we would be unable to identify him beyond the old standards of a line-up or visual comparison.