Question: Please read the following: Case Study: The Hawthorne Effect - Levitt & List Reanalysis (2009)
The reanalysis of the Hawthorne studies by Levitt and List (2009) underscores several challenges that arise when using historical, secondary data in program evaluation. Despite the iconic status of the Hawthorne effect in organizational theory, their examination reveals significant flaws in the data collection and experimental design that complicate the interpretation of results. Two primary data collection issues stand out in their analysis: incomplete documentation and selection bias.
First, the original Hawthorne data were incomplete and poorly documented. The studies, conducted in the 1920s and 1930s, did not adhere to modern standards for data collection or preservation. Key variables were inconsistently recorded, and important contextual information, such as external factors that may have influenced productivity, was either missing or insufficiently described (Levitt & List, 2009). This lack of comprehensive documentation limits the reliability of the findings and raises serious concerns about internal validity (McDavid, Huse, & Hawthorn, 2019). Without a consistent baseline or control group, it is difficult to attribute observed changes in worker behavior to specific interventions, such as changes in lighting or observation protocols.
Second, the study suffers from selection bias and a lack of randomization. Workers were not randomly assigned to treatment or control conditions, and the working environment was not controlled in a systematic way. This methodological flaw opens the door to alternative explanations for the observed productivity changes, such as workers' pre-existing characteristics, external economic conditions, or managerial influence (Geddes, 1990). As Langbein (2012) emphasizes, the absence of randomized designs and reliable comparison groups undermines the evaluator's ability to draw causal inferences.
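To make concrete why the lack of randomization matters, consider the following minimal Python sketch. It is purely hypothetical: the numbers, effect size, and assignment rule are invented for illustration and are not drawn from the Hawthorne records. When already-productive workers are selected into the treatment room, a naive comparison of group means badly overstates the intervention's effect; random assignment recovers something close to the truth.

```python
import random

random.seed(42)

# Hypothetical setup: 1,000 workers with varying baseline productivity.
workers = [random.gauss(100, 10) for _ in range(1000)]
TRUE_EFFECT = 2.0  # assumed true productivity gain from the intervention

# Non-random assignment (the Hawthorne-style flaw): suppose supervisors
# place above-median workers in the "treatment" (new lighting) room.
cutoff = sorted(workers)[len(workers) // 2]
treated = [w + TRUE_EFFECT for w in workers if w >= cutoff]
control = [w for w in workers if w < cutoff]
naive_estimate = sum(treated) / len(treated) - sum(control) / len(control)

# Random assignment: a coin flip decides who is treated, so pre-existing
# characteristics are balanced across groups in expectation.
flips = [random.random() < 0.5 for _ in workers]
r_treated = [w + TRUE_EFFECT for w, t in zip(workers, flips) if t]
r_control = [w for w, t in zip(workers, flips) if not t]
random_estimate = sum(r_treated) / len(r_treated) - sum(r_control) / len(r_control)

print(f"True effect:         {TRUE_EFFECT:.2f}")
print(f"Non-random estimate: {naive_estimate:.2f}")   # inflated by selection bias
print(f"Randomized estimate: {random_estimate:.2f}")  # close to the true effect
```

The non-random estimate mixes the treatment effect with the workers' pre-existing differences, which is precisely the confounding that Geddes (1990) and Langbein (2012) warn evaluators about.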
Levitt and List's (2009) analysis demonstrates how modern evaluators must be cautious when interpreting archival or secondary data. Even data that have become foundational in public administration discourse may suffer from methodological limitations that obscure the true nature of the phenomena being studied. As evaluators, our responsibility is to critically examine the origins, completeness, and structure of our data sources to ensure credible and valid conclusions.
Then answer the following questions:
Explain whether the problems have to do with design or data collection. What other data collection problems could, in your view, also be important? Explain. In what ways do you agree or disagree with the permissibility of the identified scenarios? Justify your reaction.
References
Geddes, B. (1990). How the cases you choose affect the answers you get: Selection bias in comparative politics. Political Analysis, 2(1), 131-150. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.372.5896&rep=rep1&type=pdf
Langbein, L. (2012). Public program evaluation: A statistical guide (2nd ed.). M.E. Sharpe.
Levitt, S. D., & List, J. A. (2009). Was there really a Hawthorne effect at the Hawthorne plant? An analysis of the original illumination experiments (NBER Working Paper No. 15016). National Bureau of Economic Research. https://www.nber.org/papers/w15016
McDavid, J. C., Huse, I., & Hawthorn, L. R. L. (2019). Program evaluation and performance measurement: An introduction to practice (3rd ed.). SAGE Publications.
