The Global Implications of Algorithm Bias

Written by Geoffrey Wiggins-Long

The prophetic theme of George Orwell’s novel Nineteen Eighty-Four (1984) is a dystopian society kept under constant surveillance. That vision has become an intrinsic part of our reality as we move toward an exclusive reliance on technology to ease secondary tasks, freeing us to earn a living or attend to other responsibilities. The impact of such reliance is this: operating systems continue to advance, yet bear no accountability for errors or for decisions with negative, unethical impacts on women, immigrants, African Americans, Indigenous Americans, or others who are disenfranchised. Such “advancement” also affords the capability to invade individual privacy. In view of these deficiencies, the punitive costs of technological advancement seem too great a price for the global community to pay, since payment requires the sacrifice of human dignity and self-respect.

According to Statista, as of January 2021 there were 4.66 billion active internet users worldwide, constituting 59.5% of the global population; of that total, 92.6% (4.32 billion) accessed the internet through a mobile device (Johnson, 2021). Ease of access to and interface with the internet have become top priorities not only for program and website developers, but also for the engineers who design mobile phones and voice-activated devices like Alexa. Such developments do make our lives more convenient: anything can be ordered, business transactions can be performed in the blink of an eye, conversations can be held with anyone from home regardless of geographic distance, and multiple information sources sit at one’s fingertips. Yet sinister components also exist. This is particularly true of the algorithms that form the foundation of artificial intelligence and are an integral part of decision-making processes. In the United States, these algorithms particularly contribute to the unethical treatment of people of color, especially African Americans, in health care and criminal justice, and to biased hiring practices affecting women. The impacts are far-reaching; they also extend, for example, to immigration in the United Kingdom and to China’s Social Credit System.

The United States

Health Care:

Hospitals and insurance companies have decided, in the name of efficiency, to use algorithms to determine who receives health care from which particular program, or which program meets complex health care needs. This, too, constitutes systemic discrimination against people of color. According to a study of medical algorithms published on October 24, 2019 by Ziad Obermeyer and his colleagues at the University of California, Berkeley, African Americans were less likely than their Caucasian counterparts to be referred to care-management programs that provide personalized treatment, and less likely to receive the corresponding insurance coverage. The algorithm in question is applied to roughly 200 million people each year. As a result, many African Americans may come to resent the process and put their health at risk by deciding not to visit a medical care facility until it is too late to save their lives (Ledford, 2019).
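
Obermeyer and his colleagues traced the disparity to a design choice: the algorithm used past health-care spending as a proxy for health need, and patients with less access to care spend less regardless of how sick they are. The sketch below is purely illustrative, not the proprietary algorithm from the study; the patient data, thresholds, and function names are invented to show how a cost-based proxy under-refers equally sick patients.

```python
# Illustrative only -- not the algorithm from the study. Patient data,
# thresholds, and field names are invented.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # a direct (if crude) measure of health need
    past_year_cost: float     # historical spending: the biased proxy

def refer_by_cost(patients, cost_threshold):
    """Refer patients whose past spending exceeds a cutoff (the proxy approach)."""
    return [p.name for p in patients if p.past_year_cost > cost_threshold]

def refer_by_need(patients, min_conditions):
    """Refer patients by a direct measure of need instead."""
    return [p.name for p in patients if p.chronic_conditions >= min_conditions]

patients = [
    # Two equally sick patients; the second generated lower costs because of
    # reduced access to care -- the pattern the study documented.
    Patient("patient_a", chronic_conditions=4, past_year_cost=12_000.0),
    Patient("patient_b", chronic_conditions=4, past_year_cost=5_000.0),
]

print(refer_by_cost(patients, cost_threshold=10_000.0))  # ['patient_a'] only
print(refer_by_need(patients, min_conditions=3))         # both patients
```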

Criminal Justice:

Facial recognition technology, such as the Clearview AI system used by law enforcement, is often employed when surveillance footage of a suspect is unclear or grainy; the indistinct image is compared against, and matched within, a database of known offenders. These systems are trained on, and tuned to, Caucasian faces. Consequently, error rates are higher for people of color, whose facial images are misidentified or mislabeled, resulting in numerous wrongful arrests, such as the January 2020 incident involving Detroit resident Robert Julian-Borchak Williams. Williams was unjustly detained for a shoplifting crime that had occurred two years earlier. Even though he had nothing to do with the offense, facial recognition technology used by the Michigan State Police “matched” his face with a grainy image from an in-store surveillance video showing another African American man taking US$3,800 worth of watches (Bailey, Burkell, & Steeves, 2020).

Although Williams was exonerated, his unjust predicament lasted two weeks, all because of the state police’s reliance on a faulty matching system. By then, the emotional and psychological damage had already been done. An innocent man had been arrested and handcuffed in front of his family, forced to provide a mug shot, fingerprinted, interrogated, subjected to DNA sampling, and imprisoned overnight. Although Williams’s story drew headlines, other minorities have experienced similarly horrific encounters. The ongoing controversy over police use of Clearview AI certainly underscores the privacy risks posed by facial recognition technology; however, it is important to realize that not all of “us” are subjected to these risks equally (Bailey et al., 2020).
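
To make the failure mode concrete, the following is a minimal, hypothetical sketch of how a matching pipeline like the one described above reaches a “match”: an embedding of the probe image is compared against a gallery of known faces, and the nearest neighbor above a similarity threshold is accepted. The vectors, names, and threshold are toy values; real systems derive embeddings from neural networks whose error rates depend heavily on who was represented in the training data.

```python
# Hypothetical sketch: gallery vectors and the probe are toy values, not real
# face embeddings. Real systems compute embeddings with a neural network.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, threshold=0.90):
    """Return the highest-scoring gallery identity, or None if below threshold."""
    name, score = max(((n, cosine_similarity(probe, v)) for n, v in gallery.items()),
                      key=lambda pair: pair[1])
    return (name, score) if score >= threshold else (None, score)

gallery = {"person_1": [0.12, 0.80, 0.55], "person_2": [0.70, 0.20, 0.68]}
probe = [0.14, 0.78, 0.57]   # a grainy image yields a noisy embedding

print(best_match(probe, gallery))
# A permissive threshold, or embeddings trained on unrepresentative data,
# turns "nearest" into "wrongly accused" -- the failure mode in the Williams case.
```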

Hiring Practices:

Within the business world, companies have sought to make the employee recruitment process more efficient and fair, and to eliminate subjectivity based on race or gender. To this end, human resources departments have adopted Artificial Intelligence (AI) screening tools. While this software saves time when searching for one prospective employee among thousands of applicants, it has numerous drawbacks that circumvent its intended purpose. The system can easily get things wrong if it screens for keywords without context, surfacing applicants who are not qualified. Applicants may be accepted because they used the right keywords yet are a poor match for the position, or they may actually be trying to deceive the system. More importantly, AI does not always eliminate biases. Systems must be taught which characteristics to look for, and absent such conditioning, biases may still enter the process inadvertently (Miller, 2019). A glaring example occurred when Amazon’s recruitment algorithm returned significantly biased results: it systematically preferred men over women. The system even went so far as to discount degrees from all-women’s colleges and to deprioritize resumes containing items that singled applicants out as female, such as membership in all-female groups. To its credit, Amazon did try to fix the algorithm and remove the biases, but could not; in the end, the company discontinued its AI recruiting process (Miller, 2018).
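
A deliberately naive screener makes the keyword problem easy to see. Everything below is invented for illustration; it is not Amazon’s system, only a sketch of context-free keyword matching and why it is so easy to game.

```python
# Invented keywords and resumes, for illustration only.
REQUIRED_KEYWORDS = {"python", "leadership", "sql"}

def keyword_score(resume_text: str) -> int:
    """Count required keywords present -- no notion of context or proficiency."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words)

resumes = {
    "qualified": "Led a team using Python and SQL to ship analytics tools",
    "keyword_stuffed": "python sql leadership python sql leadership",  # gaming the filter
}

for name, text in resumes.items():
    print(name, keyword_score(text))
# The stuffed resume outscores the genuine one: the screener cannot tell
# substance from stuffing. And any keyword correlated with gender (e.g. a
# women's-college name a model learned to penalize, as in the Amazon case)
# becomes a bias vector.
```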

The United Kingdom and China

Globally, algorithm bias has impacted immigration in the United Kingdom and China’s social credit system.

The UK Home Office’s visa application processing system recently drew criticism because a racially biased algorithm, which used nationality to decide which applications get fast-tracked, produced a system in which “people from rich white countries get ‘Speedy Boarding’ while poorer people of colour get pushed to the back of the queue.” Although the Home Office denied any systemic racial impropriety, as of August 5, 2020 it agreed to drop the algorithm, with plans to relaunch a redesigned version after conducting a full review that will specifically look for unconscious bias (Heaven, 2020).
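
The mechanics that critics objected to can be sketched in a few lines. The country lists, labels, and function below are hypothetical, not the Home Office’s actual code; they simply show how routing on nationality alone sends whole groups of applicants to a slower, more skeptical queue.

```python
# Hypothetical sketch -- placeholder country names, not real Home Office rules.
HIGH_RISK_NATIONALITIES = {"country_x", "country_y"}

def stream_application(nationality: str) -> str:
    """Assign a processing stream from nationality alone."""
    if nationality in HIGH_RISK_NATIONALITIES:
        return "slow_queue"   # extra scrutiny, longer waits
    return "fast_track"       # the 'Speedy Boarding' the critics described

print(stream_application("country_x"))  # slow_queue
print(stream_application("country_z"))  # fast_track
# Critics also warned of a feedback loop: extra scrutiny yields more refusals,
# which are then fed back as evidence that the nationality is 'high risk'.
```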

China’s Social Credit System comprises databases and initiatives that assess trustworthiness by monitoring the daily activities of individuals, companies, and government entities. The databases are managed by China’s economic planner, the National Development and Reform Commission (NDRC), the People’s Bank of China (PBOC), and the country’s court system. A single score is tabulated after data from various sources has been analyzed: financial, criminal, and governmental records; existing data from registry offices; and third-party sources such as online credit platforms and video surveillance. Other real-time data transfers, such as emissions monitoring at factories, are also used, although these are not considered primary sources (Lee, 2020).

Depending on the scores, rewards and sanctions follow. For instance, individuals with high trustworthiness scores receive incentives such as prioritized health care and deposit waivers when renting public housing, while individuals with poor scores face restrictions in areas including loans, air and rail travel, and education (Lee, 2020).
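
Since the system’s internals are not public, the following is only a speculative sketch of the mechanics the article describes: per-source scores collapsed into one weighted number, with thresholds that trigger the rewards and restrictions above. All weights, source names, and cutoffs are invented.

```python
# Speculative sketch only -- invented weights, sources, and thresholds.
SOURCE_WEIGHTS = {
    "financial_records": 0.4,
    "court_records": 0.3,
    "registry_and_third_party": 0.2,
    "realtime_feeds": 0.1,   # e.g. factory emissions monitoring (non-primary)
}

def credit_score(source_scores: dict) -> float:
    """Weighted average of per-source scores, each on a 0-100 scale."""
    return sum(SOURCE_WEIGHTS[s] * v for s, v in source_scores.items())

def outcome(score: float) -> str:
    if score >= 80:
        return "incentives: prioritized services, deposit waivers"
    if score < 50:
        return "restrictions: loans, air and rail travel, education"
    return "neutral"

person = {"financial_records": 90, "court_records": 85,
          "registry_and_third_party": 70, "realtime_feeds": 60}
score = credit_score(person)
print(score, "->", outcome(score))   # 81.5 -> incentives
```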

Critics of China’s social credit system say that Beijing’s commitment to regulating behavior, combined with its practice of mass surveillance, is Orwellian in nature: a state in which the government tries to control every part of people’s lives, much as described in George Orwell’s Nineteen Eighty-Four (Lee, 2020).

Solutions:

Mitigating algorithm bias will be an evolving process as technology continues to advance. Three components will be needed to execute the process effectively:  training, legislation, and auditing. 

Training involves expanding the initial data set to be more inclusive. For instance, the scientists behind ImageNet are working on a project to remove derogatory words and predictive phrases associated with photo entries that could initiate biased decisions.
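
As a rough illustration of that kind of data-set cleaning, the sketch below scrubs (image, label) pairs against a blocklist before training. The labels and blocklist entries are placeholders; the real effort involves expert review of label categories rather than simple string matching.

```python
# Placeholder labels and blocklist -- illustrative of the cleaning step only.
BLOCKLIST = {"offensive_label_1", "offensive_label_2"}

def clean_labels(dataset: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Drop (image, label) pairs whose label appears on the blocklist."""
    return [(img, lbl) for img, lbl in dataset if lbl not in BLOCKLIST]

dataset = [
    ("img_001.jpg", "teacher"),
    ("img_002.jpg", "offensive_label_1"),
    ("img_003.jpg", "engineer"),
]
print(clean_labels(dataset))   # the flagged pair is removed before training
```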

Legislatively, an Algorithmic Accountability Act, which would require tech companies to audit their AI systems for discrimination, has been proposed by Sen. Ron Wyden (D-Ore.), with support from Sen. Cory Booker (D-N.J.) and Rep. Yvette Clarke (D-N.Y.). Wyden intends to update the proposal and reintroduce it in the coming months, and it may well become law in the near future; it is imperative that regulatory standards be in place to identify and help eliminate system biases, regardless of business sector or type (Dille, 2021).

Auditing would utilize a portfolio of technical tools, as well as operational practices, to keep systems unbiased in partnership with tech companies. This may be an effective strategy, since many companies already have monitoring programs in place, such as IBM’s “AI Fairness 360” framework and Google AI’s published recommended practices (Manyika, Silberg, & Presten, 2019).
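
One concrete audit such a portfolio might include is the disparate impact ratio: the selection rate of a disadvantaged group divided by that of the advantaged group. Toolkits such as IBM’s AI Fairness 360 package many metrics of this kind; the standalone sketch below, with invented data, simply shows the arithmetic, along with the “four-fifths rule” heuristic long used in US hiring audits.

```python
# Invented outcome data; 1 = selected/approved, 0 = rejected.
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of group A's selection rate to group B's."""
    return selection_rate(group_a) / selection_rate(group_b)

group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # hypothetical disadvantaged group
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # hypothetical advantaged group

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))   # 0.5 here; values below ~0.8 (the 'four-fifths
                         # rule') flag the system for closer review
```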

Conclusion:

Algorithm bias is not an issue that can easily be removed from the subconscious, nor from the inherent prejudices that some humans persistently attach to race, religion, gender, and other characteristics. Mankind cannot expect technology to self-correct when the designers of the algorithms that power artificial intelligence are complacent or biased, yet are tasked with the initial installation of data sets without extensive direction or supervision, whether in spite of or because of time and budget constraints. Technology is a byproduct of our mental creativity, so it is imperative that we come to terms with our biases and deal with them. That is the only way to ensure that operating systems are not only efficient but also non-discriminatory and far more inclusive. Despite the widespread acceptance of, use of, and need for modern technology, the law of diminishing returns will be a constant and possibly destructive factor if changes are not implemented.

In the words of Albert Einstein, “It has become appallingly obvious that our technology has exceeded our humanity” (Goodreads.com, 2021). The preferable alternative, however, is to listen, to endure, to implement changes that improve the systems, and to cling to Marcus Tullius Cicero’s belief that “while there is life, there is hope” (BrainyQuote.com, 2021).

References

Bailey, J., Burkell, J., & Steeves, V. (2020, August 24). AI technologies — like police facial recognition — discriminate against people of colour. The Conversation.

https://theconversation.com/ai-technologies-like-police-facial-recognition-discriminate-against-people-of-colour-143227

BrainyQuote.com. (2021, May 1). Marcus Tullius Cicero Quotes.

https://www.brainyquote.com/quotes/marcus_tullius_cicero_156324

Dille, G. (2021, February 19). Sen. Wyden to Reintroduce AI Bias Bill in Coming Months. MeriTalk.

Goodreads.com. (2021, May 1). Albert Einstein Quotes.

https://www.goodreads.com/quotes/7091-it-has-become-appallingly-obvious-that-our-technology-has-exceeded

Heaven, W. D. (2020, August 5). The UK is dropping an immigration algorithm that critics say is racist. MIT Technology Review.

https://www.technologyreview.com/2020/08/05/1006034/the-uk-is-dropping-an-immigration-algorithm-that-critics-say-is-racist/

Johnson, J. (2021, April 28). Digital population in the United States as of January 2021. Statista.

https://www.statista.com/statistics/1044012/usa-digital-platform-audience/

Ledford, H. (2019, October 24). Millions of black people affected by racial bias in health-care algorithms. Nature.

https://www.nature.com/articles/d41586-019-03228-6

Lee, A. (2020, August 9). What is China’s social credit system and why is it controversial? South China Morning Post.

https://www.scmp.com/economy/china-economy/article/3096090/what-chinas-social-credit-system-and-why-it-controversial

Manyika, J., Silberg, J., & Presten, B. (2019, October 25). What Do We Do About the Biases in AI? Harvard Business Review.

https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

Miller, B. (2019, June 5). Cons of Using AI in the Recruiting Process. HR Daily Advisor.

Miller, B. (2018, November 19). The Amazon Example: Can AI Discriminate? HR Daily Advisor.