Princeton University researchers have developed a tool that flags potential biases in the image collections used to train artificial intelligence (AI) systems. The work is part of a broader effort to detect and remove biases that have crept into AI systems, which now influence everything from credit services to courtroom sentencing programs.
The causes of bias in AI systems are varied, but one major source is the stereotypical imagery found in the vast collections of images scraped from online sources that engineers use to develop computer vision, the branch of AI that allows computers to recognize people, objects and actions. Because these datasets form the foundation of computer vision, images that reflect societal stereotypes and biases can unintentionally influence computer vision models.

Researchers at the Princeton Visual AI Lab have developed an open-source tool that automatically uncovers potential biases in visual datasets, helping to curb the problem at its source. Before an image collection is used to train computer vision models, the tool lets dataset creators and users correct issues of underrepresentation or stereotypical portrayal. In related work, members of the Visual AI Lab published a comparison of existing bias-mitigation approaches applied to computer vision models themselves, and proposed a new, more effective approach.
The first tool, called REVISE (REvealing VIsual biaSEs), uses statistical methods to inspect a dataset for potential biases or issues of underrepresentation along three dimensions: object-based, gender-based and geography-based. A fully automated tool, REVISE builds on earlier work that involved filtering and balancing a dataset's images in a way that required more direction from the user. The research was presented Aug. 24 at the virtual European Conference on Computer Vision.
Using existing image annotations and measures such as object counts, the co-occurrence of objects and people, and images' countries of origin, REVISE takes stock of a dataset's content. Among these measures, the tool reveals patterns that differ from median distributions.
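The article does not spell out how REVISE computes these measures, but the kind of statistic it describes, such as how often each object co-occurs with people of a given perceived gender compared with the dataset-wide rate, can be sketched in a few lines of Python. The annotation format, field names and thresholds below are illustrative assumptions, not part of REVISE.

```python
# Minimal sketch (not REVISE itself) of one dataset statistic such a tool might
# compute: how often each object co-occurs with people of each perceived gender,
# flagging objects whose split deviates strongly from the dataset-wide rate.
# The annotation format is a hypothetical example.
from collections import Counter

def gender_cooccurrence_report(annotations, min_count=20, threshold=0.15):
    """annotations: list of dicts like
       {"objects": ["flower", "table"], "person_gender": "female"}  (or "male"/None)
    """
    per_object = {}           # object -> Counter of perceived-gender labels
    overall = Counter()       # dataset-wide label counts
    for ann in annotations:
        g = ann.get("person_gender")
        if g is None:
            continue
        overall[g] += 1
        for obj in set(ann["objects"]):
            per_object.setdefault(obj, Counter())[g] += 1

    base_rate = overall["female"] / max(sum(overall.values()), 1)
    flagged = []
    for obj, counts in per_object.items():
        total = sum(counts.values())
        if total < min_count:
            continue                      # too few images to be meaningful
        rate = counts["female"] / total
        if abs(rate - base_rate) > threshold:
            flagged.append((obj, rate, total))
    # Largest deviations from the dataset-wide rate first
    return sorted(flagged, key=lambda x: abs(x[1] - base_rate), reverse=True)
```

A report like this only surfaces candidate patterns; as the researchers note below, deciding whether a pattern is harmless or reflects a deeper problem still requires human judgment.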
For example, in one of the tested datasets, REVISE showed that images involving both people and flowers differed between men and women: men more often appeared with flowers in ceremonies or meetings, while women tended to appear with them in staged settings or paintings. (The analysis was limited to annotations reflecting the perceived binary gender of people appearing in images.)
Olga Russakovsky, an assistant professor of computer science and principal investigator of the Visual AI Lab, said that once the tool reveals these kinds of discrepancies, "then there is the question of whether this is a completely harmless fact, or whether something deeper is happening, and that is very hard to automate." Russakovsky co-authored the paper with graduate student Angelina Wang and Arvind Narayanan, an associate professor of computer science.
REVISE also showed that in one of the datasets, objects such as airplanes, beds and pizzas were more likely to appear large in the images containing them, compared with a typical object in the dataset. An issue like this may not reinforce societal stereotypes, but it can be problematic for training computer vision models. As a remedy, the researchers suggest collecting images of airplanes that also carry labels such as mountain, desert or sky.
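For illustration only, here is one way such a size check could be computed from standard bounding-box annotations; the field names and flagging threshold are assumptions, not the metric REVISE actually uses.

```python
# Illustrative only: estimate how large each object category tends to appear,
# using bounding-box areas normalized by image area. Categories whose median
# relative size sits far above the dataset-wide median are flagged, in the
# spirit of the airplane/bed/pizza example above. Field names are assumptions.
from statistics import median

def relative_size_outliers(annotations, factor=2.0):
    """annotations: list of dicts like
       {"category": "airplane", "bbox_area": 90000, "image_area": 307200}
    """
    sizes = {}
    for ann in annotations:
        frac = ann["bbox_area"] / ann["image_area"]
        sizes.setdefault(ann["category"], []).append(frac)

    per_category = {cat: median(vals) for cat, vals in sizes.items()}
    overall = median(per_category.values())
    # Flag categories that typically fill far more of the frame than is usual.
    return {cat: m for cat, m in per_category.items() if m > factor * overall}
```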
Underrepresentation of regions of the globe in computer vision datasets, however, is likely to lead to biases in AI algorithms. Consistent with previous studies, the researchers found that the United States and European countries were heavily overrepresented among the countries of origin of the datasets' images (when normalized by population). Beyond this, REVISE showed that for images from other parts of the world, captions were often not in the local language, suggesting that many were taken by tourists and potentially contributing to a skewed view of a country.
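A rough sketch of that geographic check, assuming per-image country labels and caller-supplied population figures, might look like the following; the exact normalization REVISE applies is not described in the article.

```python
# Sketch of a population-normalized representation check: compare each country's
# share of images against its share of total population. Ratios well above 1
# indicate overrepresentation. The data format and population figures are
# placeholders supplied by the caller, not part of REVISE.
from collections import Counter

def representation_ratios(image_countries, populations):
    """image_countries: iterable of country codes, one per image.
       populations: dict mapping country code -> population."""
    counts = Counter(image_countries)
    total_images = sum(counts.values())
    total_pop = sum(populations.values())
    ratios = {}
    for country, n in counts.items():
        image_share = n / total_images
        pop_share = populations.get(country, 0) / total_pop
        ratios[country] = image_share / pop_share if pop_share else float("inf")
    # Most overrepresented countries first
    return dict(sorted(ratios.items(), key=lambda kv: kv[1], reverse=True))
```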

Researchers in computer vision who focus on object recognition can overlook issues of fairness, Russakovsky said. "This geographic analysis, however, shows that object recognition can still be quite biased and exclusionary, and can have an unfair impact on different regions and people," she said.
"Until recently, data set collection methods in computer science have not been scrutinized that extensively," said co-author Angelina Wang. Images are often "scraped from the internet, and people do not always know that their photographs are being used [in data sets]," she said. "We should gather images from more diverse groups of people, but when we do, we should be careful to get the photos in a respectful way."

Vicente Ordonez-Roman, an assistant professor of computer science at the University of Virginia who was not involved in the studies, said, "Tools and benchmarks are an important step … allowing us to catch these biases earlier in the pipeline and reconsider our problem setup and assumptions as well as data collection practices."

Computer vision poses some particular challenges around representation and the propagation of stereotypes, he added. Work such as that by the Princeton Visual AI Lab helps to illuminate some of these issues for the computer vision community and offers strategies to mitigate them.
A related study from the Visual AI Lab examined ways to prevent computer vision models from learning spurious correlations that may reflect biases, such as over-predicting activities like cooking in images of women or computer programming in images of men. Visual cues, such as the fact that zebras are black and white or that basketball players often wear jerseys, contribute to model accuracy, so developing effective models while avoiding problematic correlations is a significant challenge in the field.
In research presented in June at the virtual Conference on Computer Vision and Pattern Recognition, electrical engineering graduate student Zeyu Wang and colleagues compared four different techniques for mitigating bias in computer vision models.
They found that a common technique known as adversarial training, or "fairness through blindness," harmed the overall performance of image recognition models. In adversarial training, the model is not allowed to use information about the protected variable; in the analysis, the researchers used gender as a test case. A different strategy, known as domain-independent training, or "fairness through awareness," performed much better in the team's study.
"Essentially, this says we're going to have different frequencies of activities for different genders, and yes, this prediction is going to be gender-dependent, so we're just going to embrace that," Russakovsky said.

The technique outlined in the paper mitigates potential biases by considering the protected attribute separately from other visual cues.
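As a hedged illustration of that idea, and not the authors' exact implementation, a model can keep a shared feature extractor with a separate classification head per protected-attribute group, selecting the matching head during training and averaging the heads at inference so no single group-conditioned predictor dominates. The class below is a minimal PyTorch sketch with placeholder names and dimensions.

```python
# Minimal sketch of a domain-independent ("fairness through awareness") classifier:
# a shared backbone plus one classification head per protected-attribute group.
# Not the authors' code; the backbone, dimensions and group encoding are assumptions.
import torch
import torch.nn as nn

class DomainIndependentClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_classes: int, num_groups: int = 2):
        super().__init__()
        self.backbone = backbone                      # shared visual feature extractor
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_groups)]
        )

    def forward(self, images, group_ids=None):
        feats = self.backbone(images)                 # (batch, feat_dim)
        logits = torch.stack([head(feats) for head in self.heads], dim=1)  # (batch, groups, classes)
        if group_ids is not None:
            # Training: use the head that matches each image's protected-attribute group
            idx = group_ids.long().view(-1, 1, 1).expand(-1, 1, logits.size(-1))
            return logits.gather(1, idx).squeeze(1)
        # Inference: average the group-conditioned predictions
        return logits.mean(dim=1)
```

In this setup the protected attribute is handled explicitly through the choice of head, rather than being hidden from the model, which is the contrast with adversarial "fairness through blindness" described above.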

"How we really address the bias issue is a deeper problem, because of course we can see it's in the data itself," said Zeyu Wang. "But in the real world, humans can still make good judgments while being aware of our biases" — and computer vision models can be set up to work in a similar way, he said.

Story source: Materials provided by the Princeton University School of Engineering and Applied Science. Originally written by Molly Sharlach. Note: Content may be edited for style and length.