Please read the following article: https://www.nytimes.com/2020/11/24/science/artificialintelligence-ai-psychology.html.
Answer the following questions, and submit on the LMS:
1. How do researchers usually develop hypotheses to test in experiments? What does this article suggest as a new approach?
2. Briefly describe:
a. How did the researchers determine the hypotheses they wanted to test?
b. What is the design of the experiment: what was the treatment group? What was the control group? What is the level of randomization?
c. What were their findings?
3. Imagine you work on the data science team at Twitter, and you want to design an experiment that uses social norms to persuade people to get vaccinated. The idea is to leverage advertisements on the platform to send different messages to different groups of people (similar to the video from the University of Birmingham that you watched, but on Twitter instead of through email at a workplace).
a. Using the approach described in this article, what process might you follow to generate potential hypotheses?
b. Think about one potential hypothesis that you might want to test, and think about
1) what is the message that the treatment group would see; and
2) what is the message that the control group would see?
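For part 3, one common way to randomize at the user level is stable hash-based assignment, so each account is deterministically placed in treatment or control and always sees the same message. Below is a minimal sketch of that idea; the message texts, salt, and user IDs are hypothetical illustrations, not part of the assignment.

```python
import hashlib

# Hypothetical ad copy for the two arms of the experiment.
TREATMENT_MSG = "Most people in your area are getting vaccinated. Join them!"
CONTROL_MSG = "COVID-19 vaccines are now available near you."

def assign_group(user_id: str, salt: str = "vax-norms-v1") -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing (salt + user_id) gives a stable, roughly 50/50 split at the
    user level: the same account always lands in the same arm, which is
    the level of randomization the experiment needs.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def message_for(user_id: str) -> str:
    """Return the ad message this user should see."""
    return TREATMENT_MSG if assign_group(user_id) == "treatment" else CONTROL_MSG
```

Changing the salt string re-randomizes the population for a new experiment without reusing the previous assignment.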


Need a Hypothesis? This A.I. Has One
By Benedict Carey
1,084 words
28 November 2020
International New York Times (INHT)
English
© 2020 The New York Times Company. All Rights Reserved.

Slowly, machine-learning systems are beginning to generate ideas, not just test them.

Machine-learning algorithms seem to have insinuated their way into every human activity short of toenail clipping and dog washing, although the tech giants may have solutions in the works for both. If Alexa knows anything about such projects, she's not saying.

But one thing that algorithms presumably cannot do, besides feel heartbreak, is formulate theories to explain human behavior or account for the varying blend of motives behind it. They are computer systems; they can't play Sigmund Freud or Carl Jung, at least not convincingly. Social scientists have used the algorithms as tools, to number-crunch and test-drive ideas, and potentially predict behaviors like how people will vote or who is likely to engage in self-harm, secure in the knowledge that ultimately humans are the ones who sit in the big-thinking chair.

Enter a team of psychologists intent on understanding human behavior during the pandemic. Why do some people adhere more closely than others to Covid-19 containment measures such as social distancing and mask wearing? The researchers suspected that people who resisted such orders had some set of values or attitudes in common, regardless of their age or nationality, but had no idea which ones. The team needed an interesting, testable hypothesis: a real idea. For that, they turned to a machine-learning algorithm.

"We decided, let's try to think outside the box and get some actionable ideas from a machine-learning model," said Krishna Savani, a psychologist at Nanyang Technological University's business school, in Singapore, and an author of the resulting study. His co-authors were Abhishek Sheetal, the lead author, who is also at Nanyang; and Zhiyu Feng, at Renmin University of China. "It was Abhishek's idea," Dr. Savani said.

The paper, posted in a recent issue of Psychological Science, may or may not presage a shift in how social science is done. But it provides a good primer, experts said, in using a machine to generate ideas rather than merely test them. "This study highlights that a theory-blind, data-driven search of predictors can help generate novel hypotheses," said Wiebke Bleidorn, a psychologist at the University of California, Davis. "And that theory can then be tested and refined."

The researchers effectively worked backward. They reasoned that people who choose to flout virus containment measures were violating social norms, a kind of ethical lapse. Previous research had not provided clear answers about shared attitudes or beliefs that were associated with ethical standards: for example, a person's willingness to justify cutting corners in various scenarios. So the team had a machine-learning algorithm synthesize data from the World Values Survey, a project initiated by the University of Michigan in which some 350,000 people from nearly 100 countries answer ethics-related questions, as well as more than 900 other items.

The machine-learning program pitted different combinations of attitudes and answers against one another to see which sets were most associated with high or low scores on the ethics questionnaires. They found that the top 10 sets of attitudes linked to having strict ethical beliefs included views on religion, views about crime and confidence in political leadership. Two of those 10 stood out, the authors wrote: the belief that "humanity has a bright future" was associated with a strong ethical code, and the belief that "humanity has a bleak future" was associated with a looser one.

"We wanted something we could manipulate, in a study, and that applied to the situation we're in right now: What does humanity's future look like?" Dr. Savani said.

In a subsequent study of some 300 U.S. residents, conducted online, half of the participants were asked to read a relatively dire but accurate accounting of how the pandemic was proceeding: China had contained it, but not without severe measures and some luck; the northeastern U.S. had also contained it, but a second wave was underway and might be worse, and so on. This group, after its reading assignment, was more likely to justify violations of Covid-19 etiquette, like hoarding groceries or going maskless, than the other participants, who had read an upbeat and equally accurate pandemic tale: China and other nations had contained outbreaks entirely, vaccines are on the way, and lockdowns and other measures have worked well.

"In the context of the Covid-19 pandemic," the authors concluded, "our findings suggest that if we want people to act in an ethical manner, we should give people reasons to be optimistic about the future of the epidemic" through government and mass-media messaging, emphasizing the positives.

That's far easier said than done. No psychology paper is going to drive national policies, at least not without replication and more evidence, outside experts said. But a natural test of the idea may be unfolding: Based on preliminary data, two vaccines now in development are around 95 percent effective, scientists reported this month. Will that optimistic news spur more-responsible behavior? "Our findings would suggest that people are likely to be more ethical in their day-to-day lives, like wearing masks, with the news of all the vaccines," Dr. Savani said in an email.

One common knock against machine-learning programs is that they are "black boxes": They find patterns in large pools of complex data, but no one knows what those patterns mean. The computer cannot stop and explain why, for instance, combat veterans of a certain age, medical history and home ZIP code are at elevated risk for suicide, only that that's what the data reveal. The systems provide predictions, but no real insight. The "deep" learners are shallow indeed.

But by having the machine start with a hypothesis it has helped form, the box is wedged open just a crack. After all, the vast banks of computers already running our lives may have discovered this optimism-ethics connection long ago, but who would know? For that matter, who knows what other implicit, "learned" psychology theories all those machines are using, besides the obvious ad-driven, commercial ones? The machines may already have cracked hidden codes behind many human behaviors, but it will require live brains to help tease those out.

[Photo: Cristina Spano for The New York Times]

© 2022 Factiva, Inc. All rights reserved. Reproduced with permission from the Publisher for use only in "Business Experiments and Data-Driven Organizations - Term T [PGP]" taught by Professor Jul Ramaprasad and Professor Gal Singer at Indian School of Business, March 14 - April 14, 2022.
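The theory-blind search the article describes, screening hundreds of survey items for the ones most predictive of an ethics score, can be sketched as a simple feature-ranking pass. The data below are synthetic stand-ins, not the World Values Survey, and ranking items by correlation is only a minimal proxy for the paper's actual machine-learning search; it illustrates the workflow (predict an outcome, rank the predictors, treat the top ones as candidate hypotheses), not the authors' exact method.

```python
import random
import statistics

# Synthetic stand-in for survey data: each list is one survey item
# across 500 respondents, plus an "ethics" score to predict.
random.seed(0)
N = 500
optimism = [random.random() for _ in range(N)]    # e.g. "humanity has a bright future"
religion = [random.random() for _ in range(N)]    # e.g. views on religion
noise_item = [random.random() for _ in range(N)]  # an unrelated item
# Ethics score built to depend mostly on optimism (mirroring the
# article's reported finding), plus a little noise.
ethics = [0.8 * o + 0.2 * r + 0.1 * random.random()
          for o, r in zip(optimism, religion)]

def corr(xs, ys):
    """Pearson correlation, computed with only the standard library."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rank items by how strongly they track the ethics score; the
# top-ranked items become candidate hypotheses for an experiment.
items = {"optimism": optimism, "religion": religion, "noise": noise_item}
ranked = sorted(items, key=lambda k: abs(corr(items[k], ethics)), reverse=True)
print(ranked)
```

In this toy setup the optimism item ranks first, which is the step that would hand a researcher a manipulable hypothesis ("does optimism about the future change ethical behavior?") to test in a randomized study.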
