Question: Use the decision procedure described in Chapters 1 and 2 to analyse this case. What further facts would you need to know? What are the ethical issues involved? Who are the stakeholders involved in this case?



Decision Point: Facebook

In October 2017, one year after the 2016 U.S. presidential election, executives from Twitter, Google, and Facebook testified before both the U.S. House and U.S. Senate Intelligence Committees investigating Russian interference in the U.S. elections. The committees were presented with thousands of ads, tweets, and posts that had been placed by Russian agents. Facebook alone presented the committees with over 3,000 such ads. Some of the ads aimed to aggravate social tensions within the U.S. on such topics as immigration, Black Lives Matter, the Confederacy, Muslims, and gun rights. One ad promoted a rally to oppose the "Islamization" of America, while another promoted an opposing rally to defend Islam scheduled at the same time and place. Other ads took specific positions supporting or opposing candidates, most of them critical of Hillary Clinton and supportive of Donald Trump. For example, one ad depicted the devil promising that if Hillary Clinton wins, the devil wins, while Jesus promises that he will help defeat both the devil and Clinton.

Paid advertising was only one way that the Russians used Facebook to promote their agenda. They created false accounts and fictional groups where they could simply post information like any other Facebook user. Similarly, the Russians not only bought ads on Twitter and Google but also created false accounts on all these platforms, used bots to post false tweets, and posted videos on Google's YouTube and photos on Facebook's Instagram. The Russians promoted their ads, posts, and agenda the same way any digital marketer operates on Facebook. They relied on Facebook data to target individuals and groups who would be predisposed to "buy" what they were selling. They created ads that would appeal to these people, and then paid Facebook to place the ads on the targeted audience's pages.
They created accounts and groups that attracted thousands of other users and then stepped back as Facebook users did much of the rest of the work, increasing circulation by "liking" and forwarding ads and posts to "friends." Facebook initially told Congress that some 3,000 ads had reached as many as 10 million Americans. In later testimony, those numbers changed to 80,000 items (ads and posts) that were viewed by 29 million people, who then forwarded them to more than 10 million more. By 2018, Facebook had revised its estimate to as many as 126 million people who were exposed to a Russian-linked ad or post between 2015 and 2017.

Many observers thought that Facebook, and in particular its founder and CEO Mark Zuckerberg and COO Sheryl Sandberg, were slow to respond to the issues raised. Zuckerberg's first public statement described the idea that fake information on Facebook could have influenced the election as "a pretty crazy idea." Initially, Facebook denied any responsibility, claiming that it was more a victim of Russian meddling than an accomplice. It claimed that it did not know about Russian interference and could not have done much about it without becoming a censor of content, a step it was unwilling or unable to take.

Critics argued that Facebook had many options available that would have prevented such widespread Russian meddling. Most traditional media companies, such as television stations, newspapers, and radio stations, have a marketing department that works directly with advertisers and reviews each advertisement individually. Given the nature of the business and the medium, it would be unimaginable for a television station or newspaper to run an ad without knowing its content or who was paying for it. As the Russian political ads demonstrated, within Facebook's business model it was easy for someone to buy ads, post political content, and pay for it without Facebook's review or approval.
Facebook's critics pointed out that if the company in fact did not know that Russians were behind these ads, it was negligent for not knowing. During hearings before the U.S. Senate Intelligence Committee, Senator Al Franken challenged Facebook's general counsel, Colin Stretch: "Mr. Stretch, how did Facebook, which prides itself on being able to process billions of data points and instantly transform them into personal connections for its users, somehow not make the connection that election ads, paid for in rubles, were coming from Russia? Those are two data points: American political ads and Russian money, rubles. How could you not connect those two dots?" Senator Franken also reminded Mr. Stretch that it is illegal for foreign individuals and organizations to contribute to or participate in U.S. elections.

Underlying Facebook's initial response was its view that it was not a media company comparable to cable providers or newspapers, but simply a company that provided a platform people could use to communicate. It claimed that its priority at the time was to guarantee the reliability and security of that communication platform. In reply to Franken, Facebook's attorney admitted that during 2016 Facebook was more focused on protecting the security of accounts and preventing theft of content than on deceptive or compromised content. On that view, Facebook saw itself more like a telephone company than a newspaper. It provided the means for people to communicate but could not be held responsible for what was communicated through its platform. Its primary responsibility was to ensure the quality, reliability, and security of the communication that took place, but not its content. But, of course, in one important way, Facebook is very different from a telephone company.
Unlike a phone company, Facebook makes its money not by charging users but by collecting huge amounts of data about its users and selling access to those users and their data to paying customers. There are obvious tensions built into this business model, above all between the goal of profiting from selling access to data and Facebook's stated goal of protecting the security and privacy of users' accounts. In response to this challenge, Facebook has created policies to limit access to its platform and restrict what developers can do with the information provided about users. However, it is in Facebook's financial interest to ensure that its policies do not overly discourage developers and advertisers. Apps and ads developed by outside vendors that are more successful in drawing users and keeping them active on the platform are more valuable to Facebook, because those users spend more time viewing ads and providing information. The more Facebook opens access to its platform, the more it increases the market it can offer to those willing to pay for access to that audience.

Facebook's failure to navigate the inherent tension between providing security and privacy for users' data and gaining financially from those data underlies most of its ongoing social and political scandals. In 2011, Facebook settled a series of complaints issued by the U.S. Federal Trade Commission (FTC) by agreeing to better protect consumer privacy and to keep users better informed about how their information was used. The FTC had charged Facebook with deceptive and unfair business practices associated with its failure to keep user information private. In 2018, Facebook suffered a massive data breach in which software flaws allowed the private data of more than 50 million users to be taken by hackers. The 2018 data breach followed the 2016 unauthorized use of consumer data by the political consulting firm Cambridge Analytica.
In that scandal, Cambridge Analytica gained legitimate access to Facebook (unlike the 2018 unauthorized data breach) by paying Facebook to place an app that asked users to participate in a survey. Users were asked for their informed consent for what was described as academic research purposes. Cambridge Analytica then violated Facebook's policies by using those data as part of its political work supporting the campaigns of Texas Senator Ted Cruz and President Donald Trump and the 2016 Brexit vote. Facebook ultimately admitted that data from as many as 87 million users were involved in this breach of privacy.

Facebook's troubles increased in late 2018 when The New York Times reported that, beyond these unauthorized releases of user data, Facebook had long provided authorized access to user information to more than 100 partners, including Microsoft, Amazon, Netflix, and Spotify. Facebook defended these practices by pointing out that it did not sell the data to other companies but merely shared them so that users would receive better service from all the companies involved. For example, by knowing your Facebook friends and "likes," Microsoft's Bing search engine could provide more specific search results, Amazon could provide more detailed shopping suggestions, and Netflix and Spotify could provide better recommendations for movies to watch or songs to listen to.

1. Does Facebook have particular responsibilities regarding political ads and apps? Should it be responsible for ensuring that political ads and their sponsors are identified?
2. It is often said that business has a responsibility to its customers. Who are Facebook's customers, and what responsibilities does Facebook have to them?
3. Facebook has also been criticized for allowing hate speech and postings from neo-Nazis, white supremacists, and racists on its platform. What responsibility does Facebook have for policing the content of what is posted?
4. The legal right of freedom of speech is a right against government censorship. Do private companies like Facebook have a duty to respect free speech, or should they have a right to limit what is said on their platform?
5. Like all other social media platforms and software companies, Facebook has a "terms of usage" agreement that users acknowledge when they sign up for a Facebook account. To what degree does this mean that users have consented to Facebook's policies and practices?
6. Use the decision procedure described in Chapters 1 and 2 to analyze this case. What further facts would you need to know? What are the ethical issues involved? Who are the stakeholders involved in this case?
7. Should Facebook censor hate speech? What criteria should Facebook use to censor posts, ads, or apps?
