IBM will no longer offer, develop, or research face recognition systems

IBM CEO Arvind Krishna wrote in a letter to the U.S. Congress that the company will no longer offer or develop general-purpose facial recognition systems. The key reason he cited was the inaccuracy and racial bias of such systems.

Machine-based face recognition systems have advanced greatly in recent years and are now widely used by law enforcement agencies. The best-known examples come from China, but such systems are also used, at least in an ancillary role, in the United States and the United Kingdom, among other countries.

However, because of unrepresentative sets of photographs on which they "learn", these systems are often biased against racial minorities, in the US most often against Black people. In practice, this means that a computer is more likely to falsely match a suspect against a collection of photos of already convicted persons if that suspect is Black.

One of the better-known systems, also used by some U.S. police departments, is Amazon's Rekognition. In 2018, the American Civil Liberties Union found that the system mistakenly matched 28 members of the U.S. Congress against a database of 25,000 arrest photos.

Also under scrutiny is Clearview AI, which trained its system on some three billion photos automatically scraped from social networks. The system was then used by quite a few companies and even law enforcement agencies, and the company is currently drowning in lawsuits. Facebook, too, agreed in January to pay $550 million over its face recognition system.

Some observers believe that IBM has chosen a politically opportune moment, with the U.S. in a storm of racial unrest, to abandon (or perhaps merely shelve) its face recognition technology.

2020-06-11 | BLOG