
Question: Please read the case study below and help me by answering the following two questions:

Discussion questions:

1. What issues and concerns come into focus in this case from applying each of the five ethical lenses?

  • Rights
  • Fairness/Justice
  • Utilitarianism
  • Common good
  • Virtues

2. Given your discussion, how would you assess the ethics of OpenAI's decision in November to release GPT-2 in full?

Case Study 5
AI: To Release or Not To Release the GPT-2 Synthetic Text Generator
Subbu Vincent

In February 2019, the San Francisco-based OpenAI group made a decision that sent reverberations through the AI and open source communities worldwide. First, it announced GPT-2, a major improvement in language models which, according to its creators, "generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization, all without task-specific training." OpenAI then added this:

"Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights."

OpenAI also released a technical paper. GPT-2 is trained as a large-scale unsupervised language model on 40 GB of content scraped from the Internet, drawn from links shared on Reddit with a karma score of over 3. Given the "fake news" era, much discussion followed in the community about the potential harms to society versus the benefits to researchers. OpenAI then did a staged release. In May 2019, it released a larger version of the model together with an expanded dataset. Finally, in November, it released the full GPT-2, arguing this:

"We've seen no strong evidence of misuse so far. While we've seen some discussion around GPT-2's potential to augment high-volume/low-yield operations like spam and phishing, we haven't seen evidence of writing code, documentation, or instances of misuse. We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent. We acknowledge that we cannot be aware of all threats, and that motivated actors can replicate language models without model release."

EXERCISE: Go to talktotransformer.com. This website lets you test GPT-2 live: input any sentence and watch it generate synthetic text in response, in the context of your input. Type any sentence about an accident or an assassination and see for yourself what the machine generates. Here is one example; a short code sketch following the case study shows how to reproduce this kind of generation.

*** EXAMPLE 1. TEST ONLY, THIS IS NOT REAL, DO NOT SHARE ***

User: Baltimore Mayor Catherine Pugh was sentenced to three years in prison yesterday.

GPT-2: Baltimore Mayor Catherine Pugh was sentenced to three years in prison yesterday. She was convicted of knowingly accepting illegal campaign contributions from the family of Gov. Larry Hogan. According to a Fox 5 report, "Pugh's husband Peter Ruiz donated $1,000 to Hogan's campaign in 2013. He also hosted a fundraiser with Hogan in October 2014, with $2,500 going to her campaign, a meal and a photo of the two of them with their grandchildren. Ruiz also testified at Pugh's trial that Ruiz is 'very pro-Trump' and 'knows what's up.' But by taking the funds, Pugh and Ruiz admitted that they had violated Maryland law by 'tattling' or soliciting a campaign

Example 1 (generated 2/28/2020) shows how a machine trained on web pages can generate completely fictitious and politically loaded narratives. There are more examples of dehumanizing and racist text that GPT-2 generates based on what you feed it. On GitHub, in its section on "Out-of-scope use cases" for GPT-2, OpenAI states, "Because large scale language models like GPT-2 do not distinguish fact from fiction, we don't support use-cases that require the generated text to be true."
It also acknowledges that language models like GPT-2 "reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case."
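If talktotransformer.com is unavailable, here is a minimal sketch of the same kind of prompted generation the exercise describes. It assumes the open-source Hugging Face transformers library and the publicly released small "gpt2" checkpoint; neither is named in the case study, and this is an illustration rather than OpenAI's own tooling.

# Minimal sketch: sample a GPT-2 continuation for a prompt.
# Assumes: pip install transformers torch
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")  # downloads the released GPT-2 weights
set_seed(42)  # fix the sampling seed so the continuation is reproducible

prompt = "Baltimore Mayor Catherine Pugh was sentenced to three years in prison yesterday."
result = generator(prompt, max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])  # fluent, but nothing here checks the text against facts

The model simply continues the prompt token by token from patterns in its training data; no step in this pipeline distinguishes fact from fiction, which is precisely the property the case study flags.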
