“Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing (NLP), speech recognition and machine vision” (Tech Accelerator, 2020).

The following three features characterize artificial intelligence:

• Learning—the ability to acquire relevant information

• Reasoning—the ability to apply the rules acquired and use them to reach conclusions

• Iteration—the ability to change the process based on newly acquired information (a brief code sketch of this loop follows the list)
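The following minimal Python sketch illustrates the learn-reason-iterate loop described above. Everything in it (the SimpleAgent class, its rule table, and the toy medical examples) is invented for illustration and does not represent any particular AI system.

```python
# A minimal, illustrative sketch of the learning / reasoning / iteration
# loop. All names here (SimpleAgent, learn, reason, iterate) are
# hypothetical; this is not any real system's API.

from dataclasses import dataclass, field


@dataclass
class SimpleAgent:
    """Toy agent that learns rules, reasons with them, and iterates."""
    rules: dict = field(default_factory=dict)

    def learn(self, observation: str, outcome: str) -> None:
        # Learning: acquire relevant information as a rule.
        self.rules[observation] = outcome

    def reason(self, observation: str) -> str:
        # Reasoning: apply the acquired rules to reach a conclusion.
        return self.rules.get(observation, "unknown")

    def iterate(self, observation: str, actual: str) -> None:
        # Iteration: revise the process when new information contradicts it.
        if self.reason(observation) != actual:
            self.learn(observation, actual)


agent = SimpleAgent()
agent.learn("fever", "possible infection")
print(agent.reason("fever"))             # -> possible infection
agent.iterate("fever", "possible flu")   # new evidence updates the rule
print(agent.reason("fever"))             # -> possible flu
```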

The power of AI has been widely applied to COVID-19 efforts, aiding “in detecting, understanding and predicting the spread of disease . . . supporting physicians by automating aspects of diagnosis, prioritizing healthcare resources, and improving vaccine and drug development . . . combating online misinformation about COVID-19” (Tzachor et al., 2020).

Emergence of Artificial Intelligence

AI as a field was formally founded in 1956 at a conference at Dartmouth College in Hanover, New Hampshire, where the term “artificial intelligence” was coined (Lewis, 2014). Over the next two decades, the field flourished and came to be applied across a growing range of industries. Government agencies such as the Defense Advanced Research Projects Agency (DARPA) funded special projects to research AI and its ability to translate and transcribe spoken language. “In the 1980’s, AI was reignited by two sources: an expansion of the algorithmic toolkit, and a boost of funds” (Anyoha, 2017). Researchers John Hopfield and David Rumelhart popularized “deep learning” techniques that allowed computers to learn from experience. At the same time, researcher Edward Feigenbaum introduced expert systems that could mimic the decision-making process of a human expert in a particular field. Programs were designed to ask experts how to respond to a given situation, and once this information was learned, nonexperts could receive advice from that program; a brief sketch of this rule-based pattern appears at the end of this section.

During the 1990s and 2000s, AI thrived, and many of the landmark goals of artificial intelligence were achieved. In 1997, IBM’s Deep Blue, a chess-playing computer program, defeated reigning world chess champion and grand master Garry Kasparov. This was the first time a world chess champion lost to a computer, and it served as a huge step toward an artificially intelligent decision-making program. Also in 1997, voice recognition software developed by Dragon Systems was implemented in Windows. AI breakthroughs have already surpassed human ability in certain activities, such as online search functions for photographs, videos, and audio; translation, transcription, lip reading, and emotion recognition (including detecting lying); and signature and handwriting recognition and document forgery (Hancock, Naaman, and Levy, 2020).
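As a concrete illustration of the expert-system pattern described above, here is a hedged Python sketch: expert knowledge is captured as if-then rules that a nonexpert can then query for advice. The KNOWLEDGE_BASE rules, the advise function, and the medical domain are hypothetical and vastly simplified compared to systems like Feigenbaum’s.

```python
# A minimal sketch of the expert-system pattern: knowledge elicited from
# an expert is stored as if-then rules, and a nonexpert queries those
# rules for advice. The rules below are invented for illustration only.

# Knowledge base: (set of required observations) -> expert's advice.
KNOWLEDGE_BASE = [
    ({"fever", "cough"}, "Recommend a respiratory infection screening."),
    ({"fever"}, "Monitor temperature and rest."),
]


def advise(symptoms: set[str]) -> str:
    """Apply the first rule whose conditions are all present."""
    for conditions, advice in KNOWLEDGE_BASE:
        if conditions <= symptoms:  # all rule conditions satisfied
            return advice
    return "No matching rule; consult a human expert."


print(advise({"fever", "cough", "fatigue"}))
# -> Recommend a respiratory infection screening.
```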

Social Issues

“Many forms of bias underlie data sets and can interfere with data quality and how data is analyzed. These problems predate the advent of AI, but they could become more widely encoded into the fabric of the health care system if they are not corrected before AI becomes widespread” (Howard and Borenstein, 2020). Artificial intelligence was created by humans and as such is subject to the same biases prevalent in society. Although the interweaving of these biases was not intentional, that does not make their threat any less real, especially in times of crisis. The data these machines rely on are filled with biases, and data may exist in poor quality or not at all for some groups, which only deepens these inequalities. “Infection and mortality data released by the CDC [Centers for Disease Control], while still infrequent and incomplete, paints a bleak picture around how COVID-19 disproportionately kills certain racial and ethnic groups. Alarming rates among black Americans are rooted in longstanding economic and health care inequalities, and the ambiguous racial/ethnic categorization of existing data further obscures disparities” (Smith and Rustagi, 2020). The data is also incomplete for immigrants, the LGBTQ+ community, and other marginalized groups because of fear of deportation or lack of resources stemming from their “economic and social vulnerabilities” (Smith and Rustagi, 2020). On top of that, the data is skewed toward affluent, white communities, which are most often able to access the limited number of tests and expensive medical procedures.

To avoid the most drastic and deadly effects of these biases, “developers should employ values-based design methods in order to create systems that can be evaluated in terms of providing benefits for all impacted populations, and not only economic value for organizations,” and should ensure the systems are “inclusive, fully taking into account human gender diversity (e.g., research on the impact of the virus across the nonbinary gender spectrum) and economic condition along with environmental sustainability” (IEEE Global Initiative on Ethics of AIS, 2020).
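One practical, if partial, response to the data gaps described above is to audit a dataset’s demographic coverage before a model is trained. The short Python sketch below illustrates the idea; the group labels, the 10 percent threshold, and the audit_representation helper are assumptions chosen for illustration, not a standard fairness methodology.

```python
# A minimal sketch of one bias check implied above: flagging demographic
# groups that are missing or under-represented in a dataset before
# training. Group names and the threshold are illustrative assumptions.

from collections import Counter

EXPECTED_GROUPS = {"group_a", "group_b", "group_c", "unreported"}
MIN_SHARE = 0.10  # flag groups below 10% of records (arbitrary threshold)


def audit_representation(records: list[dict]) -> list[str]:
    """Return warnings for groups that are missing or under-represented."""
    counts = Counter(r.get("group", "unreported") for r in records)
    total = sum(counts.values())
    warnings = []
    for group in sorted(EXPECTED_GROUPS):
        share = counts.get(group, 0) / total if total else 0.0
        if share < MIN_SHARE:
            warnings.append(f"{group}: {share:.0%} of records")
    return warnings


sample = [{"group": "group_a"}] * 80 + [{"group": "group_b"}] * 20
print(audit_representation(sample))
# -> flags group_c and unreported at 0%, exposing a data gap
```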

Legal Issues

The legal system is the institution tasked with defending civil rights and liberties. To better understand the legal aspects of AI and the applications using this technology, a central question concerns how the law will evolve in response: Will it be through the imposition of new laws and regulations, or through the time-honored tradition of having courts settle disputes, grievances, and harms arising from the consequences of AI technology? Courts have already been involved in a number of U.S. decisions. “In Washington v. Emanuel Fair, the defense in a criminal proceeding sought to exclude the results of a genotyping software program that analyzed complex DNA mixtures based on AI while at the same time asking that its source code be disclosed” (Lexology, 2019). “The Court accepted the use of the software and concluded that a number of other states had validated the use of the program without having access to its source code” (Lexology, 2019). “In State v. Loomis, the Wisconsin Supreme Court held that a trial judge’s use of an algorithmic risk assessment software in sentencing did not violate the accused’s due process rights, even though the methodology used to produce the assessment was neither disclosed to the accused nor to the court” (State v. Loomis, 2017).

Another legal consideration revolves around robots. Current legal frameworks contain no rules under which robots can be held liable for acts or omissions that cause damage to third parties. Robots can be so sophisticated that it is questionable whether ordinary rules on liability are sufficient. This is an important consideration in cases where the cause of harm cannot be traced back to a specific human and where the acts or omissions of robots that caused the harm could have been avoided.


Questions for Discussion

1. What are outstanding ethical concerns and issues with AI technology and applications?

2. Who are the stakeholders and stockholders in developing, designing, distributing, and selling these applications?

3. What are some practical solution suggestions for keeping AI technologies ethical and accountable?

4. Do you believe AI technologies and applications will take over most jobs done by humans today or complement the work people do now? 
