
AI image classification errors could ruin your life. Here's one way to reduce them

Image classification algorithms are notoriously error-prone, but a novel method for spotting errors within incomprehensible AI code could help solve the problem.
Written by Rajiv Rao, Contributing Writer

Think about how quickly the internet became the key channel for a huge share of human activity, from commerce to communication to collaboration, and you start to get some idea of the transformative role artificial intelligence (AI) will play in our lives over the next decade.

Just like the internet, AI and its core sub-types -- machine learning, natural language processing, facial recognition, and deep learning -- are predicted to transform human society by embedding themselves in virtually every facet of everyday life.

Also: 5 ways to prepare for the impact of generative AI on the IT profession

Many IT systems and services already tout AI-enabled solutions to business problems. This ubiquity raises an obvious question: what happens when AI fails?

Concerningly, AI 'hallucinations' -- where the technology confidently fabricates answers to questions it cannot actually answer -- are already commonplace.

We have also seen algorithms exhibit and amplify gender, racial, ethnic, and class biases that are already baked into society.

Also: Want to work in AI? How to pivot your career in 5 steps

These kinds of issues arise because much of the data that powers AI models is scraped from content on the internet, the majority of which has been produced by dominant power structures. 

So, to ensure an AI algorithm performs appropriately and doesn't make embarrassing gaffes, the models behind these systems should be 'trained' on a set of data -- in the case below, images -- that acts as an unbiased benchmark.

Raising AI the right way

For example, California's wildfire early-warning system runs an image recognition algorithm connected to over a thousand cameras across the state. The model is trained to distinguish a puff of smoke from a cloud.

Gleich's Reeb graph translates unrecognizable embedded vectors from an algorithm's data set into colored dots, allowing errors of classification to be observed. Image: Purdue University

From tumors in your lungs to swerving cars on the highway, image recognition software trained to pick key indicators out of the noise is being deployed in mission-critical, life-saving scenarios.

While the potential positive impact of these kinds of systems is significant, so is the risk from errant AIs. So, how do you ensure that AI image recognition systems don't help to destroy society as we know it?

Also: Six skills you need to become an AI prompt engineer

Having a sound 'training set' of images that provides good benchmarks is important. Building that set means ensuring the information on each pixel, and the way each image is tagged and classified under a category, is handled with impeccable precision.

This way, when the algorithm skitters around trying to figure out which category a particular image falls under, it can do so by referring to the information on images in the training set.

However, no matter how well-architected the training set, an algorithm will sometimes confront unfamiliar content and come to a juddering halt.

What's more, trying to find out where the impasse has happened is like trying to locate a needle not just in a haystack, but in an entire barn, given that a data set can contain trillions of units of information.
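There is no single fix for that, but the crudest first pass -- nothing like the Purdue tool described below, just a baseline for illustration -- is to flag the predictions a model is least sure about. Here is a minimal sketch in Python, assuming the model's per-class probabilities are already sitting in a NumPy array; the names and the 0.6 threshold are purely illustrative:

```python
import numpy as np

def flag_uncertain(probabilities, threshold=0.6):
    """Return the row indices of predictions whose top class probability
    falls below the threshold -- a rough proxy for 'the model has hit
    unfamiliar content'."""
    top = probabilities.max(axis=1)      # confidence of the chosen class per image
    return np.where(top < threshold)[0]

# One row per image, one column per category (probabilities sum to 1)
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],   # the model is hedging on this image
                  [0.10, 0.85, 0.05]])

print(flag_uncertain(probs))             # -> [1]
```

A confidence threshold like this only tells you that something went wrong somewhere; it says nothing about why, which is where the work described next comes in.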

The good news is that David Gleich, a professor of computer science at Purdue University, and fellow researchers Tamal Dey and Meng Liu have come up with a novel solution to this intractable problem.

"The tool we've developed helps you find places where the network is saying, 'Hey, I need more information to do what you've asked,'" says Gleich. "I would advise people to use this tool on any high-stakes neural network decision scenarios or image prediction task."

What's under the hood?

When Gleich conducted his research, he kept running into problems with the data sets he was testing: the networks confused X-rays, gene sequences, and apparel with other things.

He says that one neural net had a chronic habit of labeling a car as a cassette player simply because the photos were extracted from online sales listings that contained car stereo equipment.

The problem comes down to how the algorithm slots an image into the right category, which hinges on generating a set of numbers called 'embedded vectors' that are churned out from the information in the image.

The AI compares an image's embedded vector with those of the images in the training set, and the image is slotted into the category with the highest match probability.
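To make that matching step concrete, here is a minimal sketch in Python of one common way such a comparison can work -- a nearest-neighbour vote over embedded vectors. The function name, the toy vectors, and the labels are illustrative assumptions, not code from the Purdue project:

```python
import numpy as np

def classify_by_embedding(query_vec, train_vecs, train_labels, k=3):
    """Label one embedded vector by majority vote among its k nearest
    neighbours in the labelled training set."""
    # Distance from the query vector to every training vector
    dists = np.linalg.norm(train_vecs - query_vec, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest vectors
    votes = [train_labels[i] for i in nearest]
    # The category with the most votes is the 'high match probability' pick
    return max(set(votes), key=votes.count)

# Toy 2-D 'embedded vectors' for two categories
train_vecs = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_labels = ["car", "car", "cassette player", "cassette player"]

print(classify_by_embedding(np.array([0.15, 0.18]), train_vecs, train_labels))
# -> car
```

In this toy example the query lands cleanly in the 'car' cluster; the cassette-player mix-up Gleich describes happens when the training vectors themselves sit in the wrong neighbourhood.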

Also: How do you get employees to embrace AI? (You may find this sneaky)

Unfortunately, the embedded vectors are meaningless to human eyes. So, when there's a mismatch or error, there's no way to dive into the algorithm's unrecognizable layers and spot the offending error.

To overcome this hurdle, Gleich and his team employed an ingenious plan. They took a quick detour into the field of topology, the mathematics of shape -- a family of ideas that also shows up in the terrain and contour mapping behind technologies like Google Maps.

They decided to map the relationships between the vectors on a Reeb graph, a 'compact shape descriptor' that has been used in shape analysis for some 75 years.

The data set was then transformed into color-coded dots representing vectors belonging to one category or another. Closely clustered dots of the same color denoted the same category. 

Dots of different colors that overlapped each other instantly denoted that something was awry and -- most critically -- where the problems could be found. 
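The Reeb graph construction itself is more involved than this, but the intuition of colour-coded dots can be approximated with a much simpler stand-in: squash the embedded vectors down to two dimensions and colour each point by its label. The PCA projection below is an assumption used for illustration, not the authors' method:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_embeddings_by_label(vectors, labels):
    """Crude stand-in for the 'bird's eye view': project the embedded
    vectors onto their two main PCA directions and colour each dot by
    its category. Overlapping colours hint at classes the model mixes up."""
    X = vectors - vectors.mean(axis=0)
    # PCA via SVD: keep the two directions of greatest variance
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    coords = X @ vt[:2].T
    labels = np.asarray(labels)
    for label in sorted(set(labels)):
        mask = labels == label
        plt.scatter(coords[mask, 0], coords[mask, 1], label=label, alpha=0.6)
    plt.legend()
    plt.title("Embedded vectors, coloured by category")
    plt.show()
```

Run on a healthy data set, each category forms its own cluster; dots of one colour sitting inside another colour's cluster are the first places to go looking for labelling or classification errors.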

Also: How renaissance technologists are connecting the dots between AI and business

And just like that, the normally incomprehensible innards of an algorithm along with its problem spots were suddenly as clear as day.

"What we're doing is taking these complicated sets of information coming out of the network and giving people an 'in' into how the network sees the data at a macroscopic level," Gleich said. 

"The Reeb map represents the important things -- the big groups and how they relate to each other -- and that makes it possible to see the errors." 

Gleich and his colleagues have gone one step further and made the tool available to the public. The code is on GitHub, as are use-case demonstrations.

Now, anyone has a shot at being able to see the relationships between images in an AI dataset, which the researchers call a "bird's eye view". 

People can use the tool to dive down and locate the source of a problem -- exactly the kind of scrutiny neural networks need to function properly, avoid bias, and keep us safe.
