Bloomberg Opinion — Verifying your identity used to be so simple. You’d show the picture on your driver’s license or passport, two physical objects that lived in your pocket or a drawer at home. Today, you can be identified by an array of digital representations of your face via the likes of Apple Inc., Microsoft Corp. and lesser-known names like ID.me, which will soon scan the faces of U.S. citizens who want to manage their taxes online with the Internal Revenue Service.
On the surface, these services are simple. But the number of companies processing faceprints is growing, raising some hard questions about how we want to be identified — and even classified — in the future.
One way to imagine today’s complex web of facial recognition vendors is to think of the Internet as being like The National Portrait Gallery in London.
The public portraits on free display are a bit like the billions of photos people post on social media, which some facial-recognition vendors scrape up. Clearview AI Inc. is one company that openly does this. U.S. government agencies and police departments use its search tool to scour more than 10 billion public photos in search of matches for suspects. PimEyes is another search engine that both investigators and stalkers have used to scan social media for a facial match.
Then if you walk further into The National Portrait Gallery, you’ll find private exhibitions that you pay to see. It’s similar on the web, with companies such as ID.me, Apple, Microsoft and others hired to privately process and verify faces, essentially acting as gatekeepers of that data. For instance, several U.S. states including Maryland and Georgia recently tapped Apple to store state IDs and driver’s licenses on their citizens’ iPhones. People’s faces are converted into faceprints, a digital representation that looks like a string of numbers.
Finally, the Gallery in London has a gift shop with trinkets to take home and do with as you please. The online equivalent is facial-recognition vendors that merely sell the tools to analyze images of faces. Israel’s AnyVision Interactive Technologies Ltd. sells face-matching software to police departments and leaves them to set up their own databases, for example.
The most popular of the three is probably the “private exhibition” model of companies such as Apple. But this space is where things get a little messy. Different companies have different faceprints for the same people, in the same way your fingerprints remain constant but the inky stamp they make will always be slightly different. And companies have varying degrees of ownership over the data. Apple is hands-off and stores faceprints on customer phones; so is Microsoft, which processes the faces of Uber drivers to verify their identities and confirm they are wearing masks, but then deletes the prints after 24 hours.
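To make the fingerprint analogy concrete, here is a minimal sketch of what a faceprint is under the hood: a vector of numbers produced by a vendor’s face-embedding model, compared with a similarity score. The four-dimensional vectors and thresholds below are invented for illustration; real systems use embeddings with hundreds of dimensions, and each vendor’s model produces vectors that are not interchangeable with another’s.

```python
import math

# Hypothetical toy "faceprints": vectors a vendor's model might produce.
vendor_a_print = [0.12, -0.48, 0.91, 0.05]   # one person, vendor A's model
vendor_a_rescan = [0.14, -0.45, 0.90, 0.07]  # same person, vendor A, new photo
vendor_b_print = [0.73, 0.22, -0.14, 0.60]   # same person, vendor B's model

def cosine_similarity(u, v):
    """Standard cosine similarity: close to 1.0 means near-identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Two scans by the same vendor land close together, like two inky stamps
# of one finger, so a threshold check declares a match...
print(cosine_similarity(vendor_a_print, vendor_a_rescan))  # close to 1.0
# ...but prints from different vendors' models aren't comparable at all,
# which is why each gatekeeper ends up holding its own separate database.
print(cosine_similarity(vendor_a_print, vendor_b_print))
```

The design point: because each vendor’s numbers only mean something inside that vendor’s own system, every new face-verification company amounts to one more independent database of biometric data.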
By contrast, ID.me, a Virginia-based facial-verification company, manages an enormous set of faceprints — 16 million, or more than the population of Pennsylvania — from people who have uploaded a video selfie to create an account. Soon, the IRS will require Americans to ditch their login credentials for its website and verify themselves with an ID.me faceprint to manage their tax records online.
These systems have had glitches, but they generally work. Uber drivers have been scanning their faces with Microsoft’s technology for a few years now, and ID.me has been used by several U.S. state unemployment agencies to verify the identities of claimants.
The big question is what happens as more companies process and store our faces over time.
The number of databases containing faceprints is growing, according to Adam Harvey, a researcher and the director of VFRAME, a non-profit organization that analyzes public datasets, including those containing faces. He points out that it has become easier to set up shop as a face-verification vendor: much of the underlying technology is open-source and getting cheaper to develop, and billions of photos are available to mine. And the private companies processing and storing millions of faceprints don’t have to be audited in the same way as a government agency.
As more companies handle more faceprints, it’s not inconceivable that some of them will start sharing facial data with others to be analyzed, in the same way that ad networks exchange reams of personal data for ad-targeting today. But what happens when your faceprint becomes another way to analyze emotion? Or makes you a target of fraudsters? Facial recognition has the potential to make websites like the IRS’s run more securely, but the growth of these databases raises some of the same risks that came with passwords — of identities being forged or stolen. And unlike passwords, faces are far more personal tokens to hand over to companies, and they cannot be reset if compromised.
Today’s gatekeepers of faceprints are promising stringent security. ID.me’s chief executive officer, Blake Hall, who oversees the large database of faceprints for the IRS and other government agencies, says: “We would never give any outside entity access to our database … Biometric data is only shared when there is apparent identity theft and fraud.”
But Harvey and other privacy advocates have good reason to be concerned. Facial recognition has blundered in the past, and personal data has been mined unscrupulously too. With the facial-recognition market growing in funding and entrants, the array of gatekeepers will get harder to keep track of, let alone understand. That usually doesn’t bode well.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of “We Are Anonymous.”