Wednesday, August 21, 2024

If the AI can't see you, do you exist?

kw: book reviews, nonfiction, artificial intelligence, artificial vision, face recognition systems, activism, memoirs

One could say it all started with a Halloween mask. One could also say it started with a vision system that couldn't see. Graduate student Joy Buolamwini was coding a project she calls Aspire Mirror, using some off-the-shelf face recognition software, but it couldn't detect her face. She had recently bought a white costume mask for a Halloween event; on a whim she put it on. Lo and behold! The system detected the mask as a face, right away.
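For the curious, remarkably little code sits between a camera and a "no face found" result. Here is a minimal sketch in Python using OpenCV's bundled Haar-cascade detector; this is an assumption for illustration (the book doesn't say which library Aspire Mirror was built on), and the input file name is hypothetical.

    # Minimal face-detection sketch with OpenCV (pip install opencv-python).
    # The Haar cascade stands in for whatever off-the-shelf detector
    # Aspire Mirror actually used; the book doesn't name it.
    import cv2

    # Pre-trained frontal-face detector that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("webcam_frame.jpg")  # hypothetical captured frame
    if image is None:
        raise SystemExit("could not read the input image")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # cascades expect grayscale

    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        print("No face detected.")  # the failure mode the book opens with
    else:
        for (x, y, w, h) in faces:
            print(f"Face at ({x}, {y}), size {w}x{h}")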

Dr. Buolamwini's parents are from Ghana, so she isn't just "Black"; she is darker skinned than many African Americans. The software she used for the Aspire Mirror prototype isn't the only one that is blind to her face. Most such software is "Black blind", and many vision systems make many more recognition errors with Black faces than with White ones. Later in her career, when she and colleagues tested major vision systems, they found an almost perfect correlation between accuracy and paleness of skin.

As she describes in her book Unmasking AI: My Mission to Protect What Is Human in a World of Machines, one major company's software achieved 100% accuracy at both recognition (distinguishing persons who look similar) and verification (admitting only the correct person to their phone, for example), but only for White male faces. Other major brands achieved accuracy exceeding 85% or 90%, again for White faces only, and all did better with men's faces than with women's. For all of them, the darker the skin, the lower the accuracy of both kinds, and the lowest accuracy is for Black women.
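The methodological move behind those findings is disaggregation: reporting accuracy per subgroup rather than one flattering overall number. Here is a toy sketch of the idea in Python; the records are invented for illustration, and none of the figures come from the book.

    # Disaggregated accuracy: break results down by subgroup instead of
    # averaging over everyone. Records are (was_correct, skin_tone, gender);
    # all values below are invented.
    from collections import defaultdict

    results = [
        (True,  "lighter", "male"),   (True,  "lighter", "female"),
        (True,  "lighter", "male"),   (True,  "lighter", "female"),
        (True,  "darker",  "male"),   (False, "darker",  "male"),
        (False, "darker",  "female"), (False, "darker",  "female"),
    ]

    tally = defaultdict(lambda: [0, 0])  # (tone, gender) -> [correct, total]
    for correct, tone, gender in results:
        tally[(tone, gender)][0] += int(correct)
        tally[(tone, gender)][1] += 1

    overall = sum(int(c) for c, _, _ in results) / len(results)
    print(f"overall: {overall:.0%}")  # a single number hides the gap below
    for (tone, gender), (correct, total) in sorted(tally.items()):
        print(f"{tone} {gender}: {correct}/{total} = {correct/total:.0%}")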

As Big Data has morphed into Large Language Models and Large Image Models (LLMs and LIMs, respectively), their identification as AI has been cemented in the popular (and press) imagination. Personally, I dislike the term "Artificial Intelligence" when used for such systems. I prefer SI, "Simulated Intelligence". Weaselly terms such as ANI (Artificial Narrow Intelligence) still don't go far enough to distance SI software from natural (i.e., evolved) intelligence. Genuine AGI (Artificial General Intelligence), if it is possible, would deserve the unqualified moniker: AI.

Face recognition is something all animals with vision can accomplish. Of course, each species does best at recognizing members of its own kind. Even insects, with a vision system very different from our own, can recognize one human from another, favoring a familiar face (if it belongs to someone who didn't try to kill them on sight). I saw this when we had a large (basketball-sized) nest of hornets on a corner of our house. Even in late summer, when hornets get more aggressive, they ignored my wife and me, but would threaten unfamiliar persons; the mail carriers, who change from day to day, were cautioned to avoid that corner of the house. We can conclude that it doesn't take a huge brain to recognize faces reliably.

Machine vision systems have to be trained. So far, it takes a huge "neural network", whether hardware- or software-based, trained on at least thousands of images of people's faces, to do the job; with millions of images it does better. The bias problem is with the training data. Dr. Buolamwini found that the standard facial image databases all had more male than female faces, and many more White faces than all others combined. I don't recall much in the way of numbers, but I get the impression that the proportions were "whiter" than the demographics of the American populace. Consider that many of America's major cities have majority or near-majority Black populations. That means that automated surveillance systems (the purpose for which most large machine vision systems are obtained) have been trained on a population that they seldom encounter, and have seen too few of their actual "clientele" to be able to reliably recognize them.
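Auditing a training set for this kind of imbalance can be as simple as counting labels in its metadata. Here is a sketch assuming a hypothetical CSV with "gender" and "skin_tone" columns; real benchmark datasets vary in what annotations they carry.

    # Count demographic labels in a (hypothetical) dataset metadata file.
    # The file name and column names are invented for illustration.
    import csv
    from collections import Counter

    gender_counts = Counter()
    tone_counts = Counter()

    with open("dataset_metadata.csv", newline="") as f:
        for row in csv.DictReader(f):
            gender_counts[row["gender"]] += 1
            tone_counts[row["skin_tone"]] += 1

    total = sum(gender_counts.values())
    print("By gender:")
    for label, n in gender_counts.most_common():
        print(f"  {label}: {n} ({n / total:.1%})")
    print("By skin tone:")
    for label, n in tone_counts.most_common():
        print(f"  {label}: {n} ({n / total:.1%})")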

The ramifications of this are grave indeed. Scenario: You arrive home to find police or FBI agents waiting to arrest you. You have been picked out by an AI system as "strongly resembling" a wanted suspect, and a system monitoring the several cameras you passed on your way alerted the authorities to your presence. This has happened to a number of people, not all of them Black, and a few spent hours or days in custody before their actual identity was verified.

Another scenario: Your company wants to reduce the number of security guards at the gate (I've worked at such secure installations), so they announce that you'll need to look into a camera to get access. If the system being used is no more capable than the one behind Aspire Mirror, and you happen to be Black, it may not even recognize that a face has been presented to it!

Another issue the author raises several times: training images are typically frontal or partial-profile photos with good lighting and a neutral background, nothing like pictures taken "in the wild", such as from a camera at the top of a 20-foot pole on a street corner.

This reminds me of something I'll have to describe, because I can't ethically show the photo here. After a baptism in my back yard, we took a group photo of about 35 of us. We were a good mix of races, including two Black families and a few Chinese families; five Caucasians were present (I am in a very multi-ethnic church). The photo was taken under a tree, so the lighting is dappled. I did brightness tests of several faces. One White boy's face in a shadowy part of the image is darker than the face of a Black boy in the sunshine. A Black man in the shadow is practically invisible. However, shifting the photo's brightness a little makes that particular Black man easy to see, while washing out all the sunlit faces. Interestingly, Google Photos seems to be pretty good at isolating faces and recognizing them; it recognized every face in the photo, even though it had to ask me to verify some of the identifications. Other face recognition software that I've used doesn't do quite so well.
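Anyone can run the same kind of brightness test with a few lines of Pillow and NumPy. The file name and crop boxes below are invented; in practice you would read the boxes off the image by hand or take them from a face detector.

    # Measure the average luminance of face regions in a group photo,
    # then shift overall brightness. File name and boxes are hypothetical.
    from PIL import Image, ImageEnhance
    import numpy as np

    photo = Image.open("group_photo.jpg").convert("L")  # grayscale luminance

    def mean_brightness(img, box):
        """Average pixel value (0-255) inside a (left, top, right, bottom) box."""
        return float(np.asarray(img.crop(box)).mean())

    shaded_face = (120, 340, 180, 400)  # invented coordinates
    sunlit_face = (610, 200, 670, 260)
    print("shaded face:", mean_brightness(photo, shaded_face))
    print("sunlit face:", mean_brightness(photo, sunlit_face))

    # A global brightness shift helps the shadowed face but washes out
    # the sunlit ones: one exposure cannot serve both.
    brighter = ImageEnhance.Brightness(Image.open("group_photo.jpg")).enhance(1.6)
    brighter.save("group_photo_brighter.jpg")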

This example emphasizes that even a posed group photo can have lighting variations that stretch a vision system (including our own!) to its limits. With such matters as the prime example, the author expands the arena to include automated systems of decision support: Résumés scanned and prioritized before the hiring manager even sees them; street surveillance with automated flagging of "suspicious" persons or behavior; neighborhood analyses that set store prices based on demographics, and similarly for real estate appraisals. All these are actually extensions of things that humans were doing already. The machines just do them faster and more cheaply...and some have been banned as a human activity (real estate redlining, for example), but AI systems are being allowed to slip under the radar.

Only a portion of the book is devoted to such technical matters. Much more involves Dr. Buolamwini's increasing involvement in policy. As she relates, it is not enough to point out the deficiencies of the systems that lie behind automated decision making. Their weaknesses reflect and even amplify the weaknesses of the people who create them. Biased people cannot produce unbiased systems. (A side thought: get together several people who each recognize their own bias, and ensure that multiple viewpoints are included, and they have a better chance of reducing the biases of a system they all have a hand in developing and training. This is hard! But it is analogous to the way TCP/IP protocols can transmit data with extraordinary fidelity through a noisy network; the Internet would be impossible without that.)
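To make the analogy concrete: TCP never trusts a single transmission. It detects corruption with a checksum and retransmits until the data arrives intact, so reliable delivery emerges from unreliable parts. Here is a toy model in Python, greatly simplified and not the real protocol.

    # Toy model of checksum-and-retransmit: reliable delivery built on
    # an unreliable channel. A deliberately simplified illustration.
    import random
    import zlib

    def noisy_channel(data: bytes) -> bytes:
        """Corrupt one byte 30% of the time, simulating a noisy network."""
        if random.random() < 0.3:
            i = random.randrange(len(data))
            data = data[:i] + bytes([data[i] ^ 0xFF]) + data[i + 1:]
        return data

    def send_reliably(payload: bytes) -> bytes:
        checksum = zlib.crc32(payload)  # travels with the data in real TCP
        while True:
            received = noisy_channel(payload)
            if zlib.crc32(received) == checksum:
                return received  # verified intact ("ACK")
            # mismatch: the receiver discards it and the sender retransmits

    print(send_reliably(b"good data through a noisy channel"))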

Over a rather short span of time, much of it while she was still in graduate school, the author has become a more and more visible presence to policy makers, in her capacity as founder of the Algorithmic Justice League and as the "Poet of Code". In her epilogue she describes a meeting with the President of the U.S., at which he asked, "What should I do?"

Since this occurred quite recently, it will be left to the next President (and maybe several "nexts") to deal with these matters. Make no mistake: they must be dealt with. This is a bipartisan matter. "Algorithmic harms" know no political party, and oxen all along the political spectrum have been gored. It is abundantly clear to me that, long before AGI arrives to become just "AI" and make all prior systems obsolete, governance systems will be needed to rein in the big money behind the overuse of such tools. We still have a grip on the tail of this tiger, but it is quite capable of spinning 'round for a bite!
