Facial Recognition Isn't The Enemy

By Mike Beevor, Digi.City Expert in Residence - IoT

The mention of “video analytics” in any conversation seems to have become synonymous with “facial recognition” – perhaps unfairly so. While facial recognition is undoubtedly cool from a technical perspective and brings us closer than ever to achieving the dreams of so many Sci-Fi authors, the negative media storm associated with it builds upon some misconceptions. The lack of knowledge about what happens after the image is collected creates some unjustified backlash, in this author’s opinion.

The crux of the matter is a simple one. How much can I really tell from a single image of your face?

Well, I can probably tell what gender you identify as, and I can pick out certain features like eye colour and hair colour and take a really rough guess at an age bracket. But that’s it. Unless I already have your image in a database, associated with your name, I have zero idea who you actually are. I could find out far more about you from your social media than I could from a randomly captured image of you.
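
To make that concrete, here is a minimal sketch of why identification depends on a pre-existing, labelled database rather than on the captured image itself. It assumes the open-source face_recognition Python library; the image files and the two “enrolled” names are purely illustrative.

```python
# A face only becomes an identity when it matches an encoding already stored
# alongside a name. File names and the enrolled "database" are illustrative.
import face_recognition

known_names = ["Alice Example", "Bob Example"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["alice.jpg", "bob.jpg"]  # photos enrolled ahead of time
]

# A randomly captured frame from a camera.
frame = face_recognition.load_image_file("captured_frame.jpg")

for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
    hits = [name for name, match in zip(known_names, matches) if match]
    if hits:
        print("Matched enrolled identity:", hits[0])
    else:
        # No prior enrolment: the system knows a face was seen, and nothing more.
        print("Face detected, identity unknown")
```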

Facial recognition starts to blur privacy boundaries when we begin applying CONTEXT to the captured image as a basis for gaining more information about you.

Let me lay out an example.

You’re shopping at the grocery store and I capture your vehicle entering the parking lot using license plate recognition.  I can probably also use machine learning to tell me what colour, model and make of car you are driving. As you get out of the car, I capture your image using cameras in the parking lot.

As you enter the grocery store, I capture your image again using the cameras monitoring the entrances and exits, taking note of the colour and style of your clothes. I might also use cameras to monitor your route around the store, capturing which aisles you went up and down and how long you spent in certain sections. I might even use the data from a “scan and go” handheld device to tell me what you were putting in your shopping cart.
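
To illustrate, one way a store’s analytics back end might turn camera sightings into “which aisles, and for how long” is a simple dwell-time roll-up. The event structure and timestamps below are invented for the sake of the example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sightings emitted by in-store cameras: (aisle, ISO timestamp).
sightings = [
    ("produce", "2021-03-01T10:02:10"),
    ("produce", "2021-03-01T10:05:40"),
    ("cereal",  "2021-03-01T10:06:15"),
    ("cereal",  "2021-03-01T10:09:05"),
]

# Sum the time between consecutive sightings in the same aisle.
dwell_seconds = defaultdict(float)
for (aisle_a, t_a), (aisle_b, t_b) in zip(sightings, sightings[1:]):
    if aisle_a == aisle_b:
        delta = datetime.fromisoformat(t_b) - datetime.fromisoformat(t_a)
        dwell_seconds[aisle_a] += delta.total_seconds()

print(dict(dwell_seconds))  # {'produce': 210.0, 'cereal': 170.0}
```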

That handheld device, or even the scan of the loyalty card at the checkout to gain your reward points or coupon, may be combined with another image of you captured by the camera at the cash register. That loyalty card is tied to your personal details – name, address, even credit card information.

I capture your vehicle again as you leave the parking lot, noting the time between entry and exit to calculate how long you were in the store.

Now I have detailed context. I can combine that information and gain a pretty interesting picture of you as a person and of your shopping habits. Should I use that information to target you with marketing materials specific to traits I have analysed, then I am overstepping privacy boundaries. And that is not cool. Or at least, this author feels that way.
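
Mechanically, that “combining” step is nothing more exotic than a join across data sources that were each fairly innocuous on their own. A rough sketch, with entirely made-up records and field names:

```python
# Every record and field name here is invented; the point is only that each
# source carries a key (plate, loyalty ID, timestamp) that lets them be joined.
plate_events = {
    "ABC123": {"entered": "10:00", "exited": "10:45", "vehicle": "blue hatchback"},
}
loyalty_db = {
    "L-9917": {"name": "A. Shopper", "plate": "ABC123"},
}
checkout_log = [
    {"loyalty_id": "L-9917", "time": "10:41", "basket": ["cereal", "apples"]},
]

profiles = []
for sale in checkout_log:
    member = loyalty_db[sale["loyalty_id"]]
    visit = plate_events.get(member["plate"], {})
    profiles.append({
        "name": member["name"],
        "vehicle": visit.get("vehicle"),
        "time_in_store": (visit.get("entered"), visit.get("exited")),
        "purchases": sale["basket"],
    })

print(profiles[0])
```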

Now, the flip side of all of this is identifying you from an already-stored image with a public safety/security CONTEXT.  

Let’s run the same scenario, but instead of an innocent grocery shopper, our subject is a known bad actor with a history of aggravated assault and theft.

The license plate of the vehicle they are driving is picked up as they enter the parking lot, and an alert is sent to the in-store security team that a known bad actor is near and to be on alert.

Facial recognition is deployed from the parking lot and entrance cameras and connected to the local law enforcement database of known felons. Our subject is observed in the parking lot, heading towards the entrance to the store. The security team moves into position, ready to intercept. Object detection or motion analysis at this point suggests that the subject is a viable threat, and local law enforcement are notified.
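
Here is a sketch of what the matching-and-alerting half of that pipeline could look like, again using the face_recognition library; the watchlist image, camera feed and notification hook are placeholders rather than any specific vendor’s API.

```python
import face_recognition

# Hypothetical watchlist exported from a law-enforcement database.
watchlist = {
    "subject-114": face_recognition.face_encodings(
        face_recognition.load_image_file("known_offender.jpg"))[0],
}

def notify_security(subject_id: str, camera_id: str) -> None:
    # Stand-in for whatever alerting channel the store and police actually use.
    print(f"ALERT: watchlist match for {subject_id} on camera {camera_id}")

# One frame from the parking lot camera.
frame = face_recognition.load_image_file("parking_lot_cam_07.jpg")
for encoding in face_recognition.face_encodings(frame):
    for subject_id, known_encoding in watchlist.items():
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            notify_security(subject_id, "parking_lot_cam_07")
```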

The subject is intercepted before damage or harm can come to the security team, staff or other innocent bystanders; they are apprehended and removed from the scene. In this context, perhaps everyone involved is grateful that the video analytics and facial recognition technology were in place.

In the U.S., several cities – including Portland, San Francisco and Oakland – have banned government use of facial recognition, and three states – California, Oregon and New Hampshire – have banned the technology in police body cameras. No federal or global guidelines have been crafted yet.

In summary, it is important to state that capturing images or using facial recognition is not the enemy. The context in which it is applied is critically important. Before dismissing facial recognition or video analytics as an invasion of privacy, take a hard look at the context and the issues that you would be able to resolve.

Video analytics may be used for the greater good – which is certainly how the physical security industry designs them. There is a strong case for setting robust data governance practices, applying sound data management and manipulation ethics, and examining your regulatory compliance. Then you will have the CONTEXT that you and your organisation need to make the best decisions.