**heads up: this is one of my denser pieces.** This essay argues for a multi-method approach to the ontology of inquiry, supported by selected readings.
Morse (2006) opens boldly by asserting the claim of the piece's title: politics are concerned with the acquisition or exercise of authority and are necessarily subjective, while evidence is concrete and indisputable (p. 395). In this article, Morse specifically addresses issues in health care research surrounding the question of "who defines evidence?" This emphasis on evidence is entrenched in health care research as well as in other applied disciplines, including education. The agenda, at least in health care research, is driven by (1) a public lobby for cure rather than care and (2) a public and political lobby for reduced costs of care. The standard for health care research is established by the Cochrane criteria:
- Grade A: randomized trials
- Grade B: downgraded randomized trials, or upgraded observational studies
- Grade C: double-downgraded randomized trials, or observational studies

Special notes: downgraded trials suffer from identifiable bias; "observational studies can be upgraded when multiple high-quality studies show consistent results" (Long, 2013, s. 5).
Within this scheme, qualitative inquiry is classified under "observational studies."
Morse (2006) reviews three responses of qualitative researchers to this bias. First, qualitative researchers voiced their discontent. This was unsuccessful because their numbers were small and their objections did not reach health care policy makers. Second, qualitative researchers joined the Cochrane movement, which produced some progress; a mixed methods approach made further strides in demonstrating the exigency of qualitative inquiry and its relation to overall efficacy. Finally, qualitative meta-analyses were developed by Thorne, Jensen, Kearney, Noblit, and Sandelowski (2004). Modeled on quantitative meta-analyses, this involves (1) selecting pertinent studies, (2) critiquing these studies according to identified standards such as those used in quantitative meta-analyses, and (3) developing a model according to theoretical commonalities. Morse (2006) declares that qualitative researchers are "tired of conducting underfunded research … that goes nowhere. Yet, forcing ourselves into a quantitative system does not appear to be the answer" (p. 403). This claim contradicts how qualitative researchers have systematically addressed the quantitative bias in health care research: both the second and third responses incorporate qualitative inquiry into the quantitative system and have demonstrated some progress. Morse further asserts that qualitative researchers know that their research is significant and that the problem is convincing those who control funding. What Morse (2006) neglects in the article is that this problem is relevant to all researchers.
Morse (2006) highlights qualitative inquiry in other disciplines in an attempt to prove its place in health care research. One example is forensic design, such as NASA's analysis of aviation black box data. Morse (2006) observes that these "researchers are working from a theory of causality that states if something almost happened once, it could actually occur" (p. 402). This leads to an ultimate ethic: "learn from near misses and to modify practices and develop policy from them" (p. 402). This reviewer expected the author to attest in greater detail to this "new ethic of inquiry" and its explicit position within health care research. The later examples of additional modes of qualitative inquiry as established types of evidence (i.e., deliberate trial or testing of interventions, and precise, microanalytic observational description, pp. 402-403) only briefly illustrate their benefits within health care research. The argument gathered from this article is that quantitative research is more successful because of its ability to prove the exigency of its findings to policy planners, whereas, as Morse (2006) asserts, the outcomes of most qualitative studies do not extend beyond "provid[ing] insight and understanding into the experience of patients, families, and caregivers" (p. 399).
St. Pierre (2006) echoes the politics of health care research (Morse, 2006) within education. Scientifically based research (SBR), or evidence-based research (EBR), in education is the federal government's response to reengineer schools. St. Pierre (2006) asserts that underlying the rhetoric of reform is the condemnation of colleges of education, which are said to have neglected quality (p. 241). The thought is that quantitative, objective science will make better schools. The definition of SBR was derived from sharing drafts of a definition with numerous university-based researchers, mostly from one field. This further demonstrates the bias in "who defines evidence?" put forth by Morse (2006).
St. Pierre (2006) explains in detail who answers this question in educational research: "Legislators are portrayed as too busy to learn [what constitutes good research] and are therefore off the hook" (p. 241). An extensive section outlines the bureaucratic evolution of defining SBR for legislation; numerous committees appear to beg, borrow, and steal verbiage from abolished educational committees of a similar paradigm. It is worth noting that most of these departments, associations, organizations, and agencies are composed of individuals who have no experience in public schools and/or solicit advice on reforming education from only one discipline.
St. Pierre (2006) asserts that the problem with this process is the assumption that "what works holds across space and time, across schools, across teachers, and across children" (p. 248). It is here that the author argues that dismissing qualitative inquiry also dismisses the epistemologies that can address those variations, and therefore the knowledge that may be produced.
The major argument in SBR is that science needs to be unified. The well-supported rebuttal made by the author is that this assumes multiple observers of reality can agree on what they see (p. 250). St. Pierre comments extensively on the language of federal government documents that refine and solidify the truth of SBR in public law and policy, and argues against a single paradigmatic perception of what constitutes good science (p. 246). To answer the question "How can the National Academies make a science of education?" (p. 249), St. Pierre formulates an extensive argument to discard the pursuit of perfect consensus, come to terms with pluralism, and use research designs and methods appropriate to the research questions posed.
Silk, Bush, and Andrews (2010) criticize the uncritical embrace of evidence-based research (EBR) in the sociology of sport. The claim is that EBR compromises the ontological, axiological, epistemological, methodological, and political approaches that critical intellectuals strive for and believe in (Silk et al., 2010, p. 108). The article defines evidence as overdetermined by the social, historical, and political contexts that lend it its currency and power (Murray et al., 2007, p. 515, as cited in Silk et al., 2010, p. 109). It is the position of these authors that EBR, in a way, privatizes knowledge. This is not a far-fetched claim: accepting EBR as the privileged form of inquiry is to accept only one paradigm of how knowledge is structured. It is to contradict the mission of higher education and to fail to provide students with the information necessary to think critically about the knowledge they gain (Silk et al., 2010, p. 111).
This ideology threatens the sociology of sport as a discipline. A large body of literature within the sociology of sport is concerned "with excavating the way large bodies become organized, represented, and experienced in relation to the operations of social power" (Silk et al., 2010, p. 112). This fuels a need to understand the complexities, experiences, and injustices of the physical cultural context engaged on and through the body. Silk et al. (2010) further note that "this work is targeted towards the politics of the present conjuncture and thus seeks to, in multiple ways, generate knowledge that empowers individuals by confronting injustices and promoting social change" (p. 113). Understanding such complexities and defining physical life requires inquiry from multiple perspectives. Unifying inquiry is an impossible task, and doing so neglects perspectives that contradict our own lines of thought.
Silk et al. (2010) attempt to redefine rigor. However, judging quality and rigor requires addressing the power imbalances in defining which questions are deemed worthy of being addressed (p. 118). Evaluating qualitative work is based largely on moral and ethical concerns, in which "good quality" is a participatory and collaborative project with ongoing oral dialogue. Rigor is assessed through accuracy, nonmaleficence, and the demonstration of "interpretive sufficiency" (p. 119).
These articles position themselves as victims of a unified science that attempts to objectify knowledge, truth, and ultimately, reality. Each makes an argument for the importance of adopting multiple paradigms and modes of inquiry into knowledge, a position that this reviewer accepts as valid. However, none of these articles succeeded in persuading me that qualitative inquiry is of equal stature to quantitative research. Quantitative inquiry is subjective in its own nature, and it is true that variations exist that can only be addressed through qualitative inquiry. However, the necessity and exigency of such inquiry must be proved by those asserting that a subjective perception of existence is important. Mixed methods appear to satisfy the requirements desired by institutional hierarchies (Morse, 2006), although my attachments to a middle paradigm may protect me from thinking certain things (St. Pierre, 2006).
References
Long, L. (2013). Grading of recommendations, assessment, development, and evaluations (GRADE). Exeter, UK: Peninsular Technology Assessment Group, Exeter University Medical School.
Morse, J. (2006). The politics of evidence. Qualitative Health Research, 16(3), 395-404.
St. Pierre, E. A. (2006). Scientifically based research in education: Epistemology and ethics. Adult Education Quarterly, 56(4), 239-266.
Silk, M. L., Bush, A., & Andrews, D. L. (2010). Contingent intellectual amateurism, or, the problem with evidence-based research. Journal of Sport & Social Issues, 34, 105-128.