Research and Evaluation in Education and Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods, 6th Edition, Donna M. Mertens - Solutions
Discuss possible strategies that can be used to improve single-case designs (SCDs).
Find a single-case design study and critically analyze it using the questions for critical analysis of single-case research presented at the end of the chapter.
Identify reasons for choosing qualitative methods.
Describe strategies for qualitative inquiry and their methodological implications, including ethnographic research, case studies, phenomenology, grounded theory, and participatory action research.
Summarize the general methodological guidelines for qualitative research.
Examine sample qualitative research studies using criteria for critically analyzing qualitative research.
Describe the rationale for the importance of historical, narrative, and autoethnographic research in education and psychology.
Identify different types of historical research including topical, biographical, autobiographical (autoethnography), oral history, and the narrative study of lives.
Describe the three sources of historical, narrative, and autoethnographic data: documents, artifacts, and interviews.
Recognize the steps in conducting historical, narrative, and autoethnographic research, including determining appropriate research and sources, conducting the literature review, identifying data, and evaluating the quality of sources.
Discuss the special considerations for conducting oral history, including biographical, autoethnographic, and autobiographical research.
Describe the application of the principles of Universal Design for Learning (UDL).
What was the influence of the individual experimenter?
How well and in what manner does it meet internal and external tests of corroboration and explication of contradictions?
Are female participants excluded, even when the research question affects both men and women? Are male subjects excluded, even when the research affects both women and men?
Does the researcher report the sample composition by gender and other background characteristics, such as race or ethnicity and class?
How does the researcher deal with the heterogeneity of the population? Are reified stereotypes avoided and adequate opportunities provided to differentiate effects within race/gender/disability group by other pertinent characteristics (e.g., economic level)?
How did the researcher address ethical issues particular to the characteristics of the participants (e.g., older age groups, children, Indigenous communities, sexual minorities, and people with mental illness)?
Did the researcher objectify the human beings who participated in the research study?
Did the researcher know the community well enough to make recommendations that will be found to be truly useful for community members?
Did the researcher adequately acknowledge the limitations of the research in terms of contextual factors that affect its generalizability or transferability?
Whose voices were represented in the research study? Who spoke for those who do not have access to the researchers? Did the researchers seek out those who are silent? To what extent are alternative voices heard?
If deception was used in the research, did the researcher consider the following issues (adapted from Sieber & Tolich, 2013):
a. Could participant observation, interviews, or a simulation method have been used to produce valid and informative results?
b. Could the people have been told in advance that
Do the items on the measurement instrument appear relevant to the life experiences of persons in a particular cultural context?
Do the measurement items or tools have content relevance?
Have the measures selected been validated against external criteria that are themselves culturally relevant?
Are the constructs that are used developed within an appropriate cultural context?
Have threats to generalization of causal connections been considered in terms of connections across persons and settings, nonidentical treatments, and other measures of effects?
What evidence is provided of the quality of the data collection instruments in terms of the following?
a. Reliability or dependability
b. Validity or credibility
c. Objectivity or confirmability
d. Freedom from bias based on gender, race and ethnicity, or disability
Are the procedures used by the test developers to establish reliability, validity, objectivity, and fairness appropriate for the intended use of the proposed data collection techniques? Was the research instrument developed and validated with representatives of diverse gender, racial/ethnic, and
Is the proposed data collection tool appropriate for the people and conditions of the proposed research?
In qualitative research, what is the effect of using purposive sampling on the transferability to other situations?
In qualitative research, was thick description used to portray the sample?
What is the relationship between information obtained from primary sources (interviews, letters, diaries, etc.) and existing documentation and historiography?
Are the scope, volume, and representativeness of the data used appropriate and sufficient to the purpose? If interviews were used, is there enough testimony to validate the evidence without passing the point of diminishing returns?
In what ways did the interviewing conditions contribute to or distract from the quality of the data? For example, was proper concern given to the narrator’s health, memory, mental alertness, ability to communicate, and so on? How were disruptions, interruptions, equipment problems, and extraneous
Did the interviewer do the following?
a. Thoroughly explore pertinent lines of thought
b. Make an effort to identify sources of information
c. Employ critical challenges when needed
d. Allow biases to interfere with or influence the responses of the interviewee
What are the multiple purposes and questions that justify the use of a mixed methods design?
Has the researcher matched the purposes and questions to appropriate methods?
To what extent has the researcher adhered to the criteria that define quality for the quantitative portion of the study?
To what extent has the researcher adhered to the criteria that define quality for the qualitative portion of the study?
How has the researcher addressed the tension between potentially conflicting demands of paradigms in the design and implementation of the study?
Has the researcher appropriately acknowledged the limitations associated with data that were collected to supplement the main data collection of the study?
How has the researcher integrated the results from the mixed methods? If necessary, how has the researcher explained conflicting findings that resulted from different methods?
What evidence is there that the researcher developed the design to be responsive to the practical and cultural needs of specific subgroups on the basis of such dimensions as disability, culture, language, reading levels, gender, class, and race or ethnicity?
What is the population of interest? How was the sample chosen—probability, purposeful, or convenience sampling? What are the strengths and weaknesses of the sampling strategy?
What are the characteristics of the sample? To whom can you generalize or transfer the results? Is adequate information given about the characteristics of the sample?
How large is the population? How large is the sample? What is the effect of the sample size on the interpretation of the data?
Is the sample selected related to the target population?
Who dropped out during the research? Were they different from those who completed the study?
Given the research questions of the proposed research, when and from whom is it best to collect information?
Does the instrument contain language that is biased based on gender, race and ethnicity, class, or disability?
Was there corroboration between the reported results and people’s perceptions? Was triangulation used? Were differences of opinion made explicit?
Was an audit used to determine the fairness of the research process and the accuracy of the product in terms of internal coherence and support by data?
Was peer debriefing used? Outside referees? Negative case analysis? Member checks?
Is the report long and rambling, thus making the findings unclear to the reader?
Was the correct conclusion missed by premature closure, resulting in superficial or wrong interpretations?
Did the researcher provide sufficient description?
How did the researcher address the limitations of each kind of data?
How did the researcher integrate the results from the mixed methods? If data from different sources yielded conflicting results, how did the researcher address this conflict?
How did the researcher engage stakeholders in culturally responsive ways to interpret the results of the mixed methods analysis?
How do you account for the results? What are the competing explanations, and how did the authors deal with them? What competing explanations can you think of other than those the author discussed?
How would the results be influenced if applied to different types of people (e.g., rural or urban)?
What were the processes that caused the outcomes?
What conclusions and interpretations are made? Are they appropriate to the sample, type of study, duration of the study, and findings? Does the author over- or undergeneralize the results?
Is enough information given so that an independent researcher could replicate the study?
Does the researcher relate the results to the hypotheses, objectives, and other literature?
Does the researcher overconclude? Are the conclusions supported by the results?
What extraneous variables might have affected the outcomes of this study? Does the author mention them? What were the controls? Were they sufficient?
Did regularities emerge from the data such that addition of new information would not change the results?
Are basic assumptions for parametric, inferential statistics met (i.e., normal distribution, level of measurement, and randomization)?
If observers are used, what are the observers’ qualifications? What steps were taken to reduce bias? Was it possible or reasonable to use “blind” observers? Are the data independent of the person doing the observations? Should they be? What is the influence of the nature of the person doing
Were instruments explored for gender bias? For example, were instruments used for both sexes that had only been validated for one sex? Did questions use sexist language? Was consideration given to the sex of the test administrator? Were questions premised on the notion of sex-inappropriate behavior,
In terms of race and ethnicity biases, were the data collection instruments screened so that the test content reflected various racial and ethnic minority groups? Was consideration given to cultural differences in terms of examinee readiness, motivation, and response set? Were racial and ethnic
If the instrument is to be or was used with people with disabilities, was the accommodation made on the basis of a specific disability? How was eligibility determined for the accommodation? What type of modification was or should be allowed? Do the scores achieved under the nonstandard conditions
Did the researcher consider their own prejudices and biases that might affect data collection? If a research team was used, how sensitive were team members to cultural issues? Was training provided to people in dealing with people who are culturally different from themselves?
Were the various cultural groups involved in planning, implementing, and reviewing the data collection instruments? In the results?
Were multicultural issues addressed openly at all stages of the research process?
In observational research, was it possible or reasonable to use multiple observers or teams, diverse in age, gender, or ethnicity? Were observational findings cross-checked with other researchers? Were negative cases sought out to test emergent propositions? Were the research setting and findings
Identify several of the main journals in education and psychology. Examine their instructions to potential authors to see what the journal editors require in the way of evidence of measurement reliability, validity, objectivity, and lack of bias based on gender, ethnicity or race, and disability.
Review the same test and determine to what extent you think it is appropriate for males and females, people of different racial and ethnic backgrounds, people whose primary language is not English, and people with different types of disabilities. What kinds of accommodations would be needed to
Identify several research articles that provide descriptions of their data collection strategies (of course, they all should have such sections). Using the questions for critical analysis provided in this chapter, analyze the strengths and weaknesses of the data collection sections of the research
Using the research proposal that you have been developing as you move through this text (assuming that you are developing one), write out the data collection plan for your study. Be sure to include information about how you will ensure the quality of the data that you propose to collect.
What types of statistical analysis were used? Were they appropriate to the level of measurement, hypotheses, and the design of the study? What alpha level was used to determine statistical significance?
Is there statistical significance? What was the effect size?
Does the researcher interpret significance tests correctly (i.e., avoid saying the results were highly significant or approached significance)?
When the sample size is small and the effect size large, are the results underinterpreted? Or if the sample size is large and the effect size modest, are the results overinterpreted?
Are many univariate tests of significance used when a multivariate test would be more appropriate?
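The preceding questions about sample size, effect size, and significance testing can be made concrete with a small worked example. The sketch below is illustrative only and is not drawn from the Mertens text; it assumes Python with SciPy, and the effect sizes and group sizes are hypothetical. It shows why significance and effect size must be read together: a large effect in a small sample can miss the conventional .05 threshold, while a trivial effect in a very large sample can still come out "statistically significant."

```python
# Illustrative sketch (hypothetical numbers): p-values versus effect size.
from math import sqrt
from scipy import stats

def two_sample_p(d, n_per_group):
    """Two-tailed p-value for an equal-n independent-samples t-test,
    given a standardized mean difference (Cohen's d)."""
    t = d * sqrt(n_per_group / 2)      # t = d / sqrt(1/n1 + 1/n2) with n1 = n2
    df = 2 * n_per_group - 2
    return 2 * stats.t.sf(abs(t), df)

# Small sample, large effect: fails to reach significance despite d = 0.8.
print(two_sample_p(d=0.8, n_per_group=10))     # ~0.09

# Large sample, modest effect: "significant" even though d = 0.1 is small.
print(two_sample_p(d=0.1, n_per_group=2000))   # ~0.002
```

Reading only the p-values here would invert the practical importance of the two findings, which is the misinterpretation the questions above ask the reviewer to watch for.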
Did the author acknowledge the limitations of the study?
Were the participants sensitized by taking a pretest?
Examine the wording of the questions. Could the way questions are worded cause bias because they are leading?
Because surveys are based on self-reports, be aware that bias can result from omissions or distortions. This can occur because of a lack of sufficient information or because the questions are sensitive. Could self-report result in bias in this study?
Were any other response-pattern biases evident, such as question order effects, response order effects, or social desirability, acquiescence, recency, or primacy effects?
What was the response rate? Was a follow-up done with nonrespondents? How did the respondents compare with the nonrespondents?
Who answered the questions? Was it the person who experienced the phenomenon in question? Was it a proxy? How adequate were the proxies?
If interviews were used, were interviewers trained? What method was used to record the answers? Was it possible or desirable to “blind” the interviewers to an “experimental” condition?
How did the surveyors handle differences between themselves and respondents in terms of gender, race or ethnicity, socioeconomic status, or disability? What consideration was given to interviewer effects?
If the survey instrument was translated into another language, what type of translation process was used? What kind of assurance do you have that the two forms were conceptually equivalent and culturally appropriate? How was accommodation made for language differences based on country of origin,
What sampling strategy was used? Was it appropriate for reaching adequate numbers of underrepresented groups (such as ethnic minorities or low-incidence disability groups)?
Was the survey descriptive, cross-sectional, or longitudinal? How did this design feature influence the interpretation of the results?