Part I
Using the benchmarks that you identified in Week 3, create a word-processed diagnostic assessment (pretest, survey, questionnaire, anticipation guide) that will provide you with information about your students' content readiness.
Benchmarks for Part I of this assignment:
LAFS.4.RL.1.2: The student will determine the theme of a story, drama, or poem from details in the text and will be able to summarize the text.
LAFS.4.RL.1.3: The student will describe in depth a character, setting, or event in a story or drama, drawing on specific details in the text (e.g., a character's thoughts, words, or actions).
Part II: Describe how you will make this assessment comprehensible for your ELLs based on their language readiness.
Part III: After reading pages 109 – 117 of The Crosscultural, Language, and Academic Development Handbook, explain how bridging and schema building applies to your students. Provide specific examples in your response.
SAMPLE ANSWER
Part I
Anticipation Guide (Diagnostic Assessment)

Students mark Agree or Disagree for each statement before reading and again after reading, citing evidence from the text.

Before (Agree/Disagree) | Statement and Evidence | After (Agree/Disagree)

1. Identify and establish the aspects of the text that point to the theme of the story.
2. Establish how the characters play significant roles in portraying the theme of the story.
3. Clearly establish the roles of the characters in the story and how they support the story as a whole.
4. Describe the chronology of events in the story and how the narrator arrives at the climax.
5. Draw moral lessons from the story and establish how they can be applied in real life.
6. Show an understanding of the characters in the story and establish how the story fits within the objectives of the study.
Outcomes for the lesson:
The student will become familiar with determining the theme of a story, drama, or poem from details in the text and will be able to summarize the text, as provided in objective LAFS.4.RL.1.2.
The student will gain in-depth knowledge of a character, setting, or event in a story or drama, drawing on specific details in the text (e.g., a character's thoughts, words, or actions), as detailed in objective LAFS.4.RL.1.3.
Content Readiness
In establishing these students' content readiness, it is essential to recognize that they bring a wealth of prior academic experiences and knowledge to the class, which calls for an approach aimed at increasing their engagement and motivation. Response cards will also be incorporated to ensure that the learners are actively engaged in the learning process (Lin & Johnson, 2016). It will be significant to draw on the students' prior efforts in exploring the history of American leaders, an element that will build their exposure and readiness to delve into the unit on the history of Abraham Lincoln.
Part II
Making the Assessment Comprehensible
The history of Abraham Lincoln remains a common subject of literature that depicts the history of America and the struggles of significant individuals in changing the state of the nation. In presenting this story to ELL learners, it is important to understand that the students' content readiness levels may vary with the topic (Paradis, 2016). Spending too much time introducing the story could leave some students bored, since some already have a grasp of this history while others do not.
To avoid wasting time, it is essential to determine the students' level of exposure to the topic by administering a diagnostic assessment before the learning period begins. This may be a simple pretest, questionnaire, anticipation guide, or survey (Paradis, 2016). Once a clear picture is obtained of the students' readiness to identify a character, setting, or event in a story or drama, drawing on specific details in the text, an informed decision can be made on how to proceed with the learning process. The gathered information can be used to develop a grouping configuration, which may include heterogeneous pairs based on the students' content and language readiness.
To establish the ELLs' language readiness, on the other hand, it is important to determine whether the learners understand the content and the instructions through comprehensible input. This requires that the tutor make the content of the study understandable for all of these students. As a tutor, it is essential to consider the choice of words and the manner in which speech is delivered (Paradis, 2016). This means paraphrasing, enunciating, and repeating speech; pointing; using concise language; and incorporating gestures into the learning process.
Part III
Schema building refers to approaches that establish relationships among different concepts, making connections to the learners' experiences. Bridging, on the other hand, is explicit, intentional instruction in which the teacher verbally leads the students to a conclusion (Paradis, 2016).
This can be seen, for example, when a tutor designs lessons that draw students' attention to the relationships across the concepts being developed. In this case, the students develop an understanding of the connections among the experiences and concepts in a lesson.
References
Lin, L., & Johnson, C. J. (2016). Mandarin-English bilingual vocabulary development in an English-immersion preschool: How does it compare with monolingual development? International Journal of Bilingualism, 20(2), 173-189. doi:10.1177/1367006914547662
Paradis, J. (2016). The Development of English as a Second Language With and Without Specific Language Impairment: Clinical Implications. Journal of Speech, Language & Hearing Research, 59(1), 171-182. doi:10.1044/2015_JSLHR-L-15-0008
7 Quality Considerations
Mary S. Stewart, PhD, and John H. Hitchcock, PhD
Introduction
If you want to have an impact in academic, policy, business, and program evaluation settings, you must be able to conduct high-quality research. You must also have the skills to assess the rigor of other published research.
A way to think about quality in research is to consider certain indicators to demonstrate that research findings accurately represent the subject, phenomenon, or process being studied. Failure to meet standards of quality may result in research that is misleading or inaccurate. For example, suppose that a study examined the effectiveness of a reading intervention by analyzing the test results of first graders before and after the implementation of the intervention. However, the study was done in a way that did not allow researchers to be reasonably sure that intervention exposure was the best explanation for any observed improvement. The findings of such a study would have limited use for educators and administrators because they would not know if they should use the intervention in question. Hence, it is important to be able to develop and critique studies that yield findings that can clearly inform decision making. This chapter introduces and offers examples of commonly used quality indicators in the context of different approaches to inquiry.
The most common way for researchers and scholars to address quality is to consider whether findings (and the data and inferences that form the basis of findings) are valid; in other words, they must reflect the actual phenomenon under study rather than reflecting coincidental relationships, the biases of the researcher, or the limitations of the study design. One aspect of validity is reliability, which refers to the consistency of results from a research instrument, strategy, or approach. That is, a reliable research instrument would be one that yields the same findings when administered multiple times on the same subject. In this chapter, we provide a conceptual overview of these two specific quantitative quality considerations (validity and reliability) as well as of their qualitative counterparts (credibility, transferability, and dependability), because these are the primary yardsticks by which research quality is gauged.
There are, however, some challenges to presenting such a broad overview, because even though these are fundamental terms, they have different meanings for different types of researchers, depending on their area of expertise and methodological training. These concepts take on slightly different definitions and are represented by competing terminology across approaches to inquiry. These different definitions can yield some disagreements across various subfields, although such disagreements are not inherently problematic; indeed, scholarly arguments are a necessary ingredient for improving academic disciplines. Instead of focusing on inconsistencies or evolving definitions, the purpose of this chapter is to make you aware of the conceptual bases of quality in social science research and some of the broad debates that shape these concepts. In this chapter, we do not go into great detail on method-specific issues related to validity and reliability, or the philosophical orientations aligned with different epistemologies; rather, our hope is to describe how your increased attention to research design, execution, and analysis can yield higher quality findings.
We provide a broad introduction to validity, specifically using experimental design as an example of how validity can be undermined or enhanced as a function of design choice. We also review the topic of trustworthiness, which is often used in qualitative research. We chose to highlight these two approaches to research because they are reasonably concrete and span a wide set of studies. Although these approaches are quite different, we argue that quality considerations apply to many different types of social science research (e.g., case studies, survey work, test design, phenomenology, single-case designs, and developing models that predict given outcomes). We also introduce several basic method-specific terms and techniques for improving research quality. Finally, although validity and reliability are two of the most central quality indicators, they are certainly not the only important indicators. To that point, this chapter cannot be thought of as a one-stop source for what you need to know. Indeed, quality considerations are involved in every step of the research process (Guest & MacQueen, 2008), and you should always investigate issues of quality that are specific to your chosen research designs.
Validity and Reliability
Validity
The meaning of validity is related to the concept of truth; in research, valid findings accurately describe or reflect the phenomenon under study. The concept of truth is also reflected in the qualitative term trustworthiness, which some scholars treat as the rough counterpart of the quantitative notion of validity. Cook and Campbell (1979), two of the best-known scholars on experimental design methodologies, wrote that validity is “the best available approximation to the truth . . . of propositions” (p. 37).
There are several considerations in the research process that are necessary to promote valid findings, and these all relate to designing a study that is appropriate to the research question. Such considerations include understanding whether (a) the method of data collection (quantitative or qualitative) enables you to answer the specific research question, (b) the type(s) of data collected (interviews, attitude surveys, standardized test results) enable(s) you to answer the question, (c) the sample of data collected enables you to address a target question (i.e., did you question or test the appropriate types of people or other subjects?), (d) you asked the participants questions that were appropriate to the research question, and (e) you included enough participants such that results can be applied beyond the study. These are just a few of the details that you must consider when thinking about the quality of a study.
Although the concept of validity broadly reflects the idea that research findings reflect the true phenomenon, causal mechanism, or attitudes under study, different types of studies and methods necessitate different approaches to ensure validity. Some methodologists, primarily from qualitative and mixed methods traditions, sometimes use different terminology for concepts related to validity, such as credibility, trustworthiness (e.g., Lincoln & Guba, 1985; see also Guest & MacQueen, 2008; Onwuegbuzie & Johnson, 2006), legitimation (e.g., Johnson & Onwuegbuzie, 2004), and inference quality (e.g., O’Cathain, 2010). Some qualitative methodologists even reject the concept of qualitative validity altogether (e.g., Wolcott, 1990). Some of these disagreements are rooted in real and honest differences in philosophy, or how one thinks about the world. For example, some researchers espouse a postmodern framework to question the assumption that there can be one reality to portray a finding, or that even primary data, such as informant interviews, test results, or survey responses, are able to fully describe that reality (cf. Lofland, Snow, Anderson, & Lofland, 2009; Onwuegbuzie & Johnson, 2006). However, for the purposes of this chapter, we avoid this disagreement and operate from the assumption that certain aspects of reality can be observed and/or measured by researchers; the slight variations in the concepts of validity and trustworthiness correspond to their relationship with various approaches to inquiry and methods.
After choosing research questions, you must consider what kind of study design and methods are appropriate to address your questions at hand. Fortunately, there is an existing framework for just about every type of design. Simply put, if you are going to pursue any project that takes several steps and careful thinking, it helps to have a set of guidelines to follow. Methodological guidelines, or frameworks, are available when doing surveys, case studies, psychometric studies (i.e., developing tests and measurement instruments), experiments, ethnographies, phenomenological studies, mixed methods investigations, and so on (see Table 7.1).
A framework is an established structure for the design and execution of a given type of study, including data collection methods, data management, and analytic methods. In addition, all frameworks include components for checking for quality, whether in terms of validity (quantitative) or trustworthiness (qualitative). Frameworks also tend to address validity as both a process, where following certain steps should help yield defensible findings, and as an outcome, where one examines the degree to which a set of findings is defensible (e.g., Kane, 2013; Onwuegbuzie & Johnson, 2006). Validity should not be considered as a one-dimensional goal. It is both a process and an outcome and requires an iterative process that continually helps us get a better understanding of whatever is being studied. On that note, notice that we refer to validity as something that you strive for; it is thus best thought of as a kind of continuum. That is, evidence for validity ranges from being poor to really great, as opposed to something being either valid or not. As Cook and Campbell (1979) have stated, “We should always use the modifier ‘approximately’ when referring to validity, since one can never know what is true. At best, one can know what has not yet been ruled out as false” (p. 37).
TABLE 7.1
List of Different Frameworks and Suggested Readings
Case studies: Used when trying to learn about phenomena in the context of a particular case (e.g., a person, school, etc.). Suggested readings: Yin (2009).

Experimental design: To be used when a central question is causal in nature, such as when obtaining evidence that a new teaching technique might yield better learning outcomes compared to another technique. Suggested readings: Shadish et al. (2002).

Ethnography: This form of inquiry tends to be used when studying a particular cultural group. Suggested readings: LeCompte and Schensul (2010).

General qualitative inquiry: For studies where the aim is to explore and learn about phenomena in natural settings. Suggested readings: Brantlinger, Jimenez, Klingner, Pugach, and Richardson (2005); Denzin and Lincoln (2005); Lincoln and Guba (1985); Nastasi and Schensul (2005); Patton (2014).

General statistical guidance: There is a lot of guidance around application of the general linear model, which is used in most statistical analyses readers of this text are likely to run into. We offer one text because we find it to be accessible and amusing. Suggested readings: Field (2013).

Meta-analysis: For use when synthesizing the results of multiple, existing studies to learn about aggregated levels of evidence. Suggested readings: Hedges and Olkin (1985); Lipsey and Wilson (2001).

Mixed methods: These apply when combining both qualitative and quantitative design elements. Suggested readings: Creswell and Plano Clark (2010); O’Cathain (2010); Tashakkori and Teddlie (2010).

Psychometrics: To be used when the central purpose of a study is to develop or refine a test/measurement instrument. Suggested readings: American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (2014); Crocker and Algina (1986).

Phenomenology: This is a particular variant of qualitative inquiry that focuses on understanding the experiences of research participants. Suggested readings: Moustakas (1994).

Single-case (single-subject) designs: For studies that aim to test intervention effects on small numbers of people (e.g., ABAB and multiple baseline designs). Suggested readings: Horner et al. (2005); Kratochwill et al. (2010); Kratochwill et al. (2013); Kratochwill and Levin (2014).

Surveys: For studies that use surveys, typically when working with a sample and the intent is to learn more about some population of interest. Suggested readings: Dillman, Smyth, and Christian (2009); Fowler (2009); Groves et al. (2009).
Note. This list is not meant to be complete because there is such wide variation in the types of studies and designs within the broad arena of the social sciences. We selected a few on the basis that we think they are commonly used. We also did not attempt to be comprehensive with the citation list. These can be considered as a beginning set of resources to learn more. Later in the chapter, we cover ideas from experimental and qualitative frameworks in more detail. Finally, in some cases, we impose the word framework. Some of the citations we offer use this term and others do not, but we are otherwise confident that the authors of the citations would agree that their intent was to offer guidance on how to carry out the particular form of inquiry.
Quality of Data Sources and Methods
The quality of data sources and data collection methods has implications for validity. In both qualitative and quantitative studies, there may be inconsistencies among data sources. For example, a subject’s actions may not match what the subject says he or she does, or the topic of a given survey question, such as drug use or other illegal activities, may incentivize subjects to answer items inaccurately (Cronbach, 1946; Groves et al., 2009; Lofland et al., 2009). For example, if you wanted to study the prevalence of cheating, how might you gather such information? Would you conduct interviews to ask participants to confess their tendency to cheat? The chances are fairly high that your participants would underreport their behavior. If you chose a different approach, such as allowing them to self-disclose their cheating behavior via an anonymous survey, you might get a more accurate—or valid—portrayal of their behavior. You must therefore examine your data sources and data collection methods for problems that may undermine validity. For this reason, triangulating data sources and using mixed methods are often done to bring to light inconsistencies among qualitative sources (Denzin, 1989; Guest & MacQueen, 2008; LeCompte & Goetz, 1982; Onwuegbuzie & Johnson, 2006), and there are statistical methods for designing and checking the validity of specific survey and assessment questions (e.g., Borgers, Hox, & Sikkel, 2004).
The validity of a research instrument depends in part on its intended purpose and whether it is used for that purpose. In other words, when thinking about data quality, you should consider the evidence for using a specific instrument in a particular situation, rather than thinking of the instrument as valid or not (see Kane, 2013). Consider, for example, standard college or graduate school entrance examinations designed to assess achievement, such as the Scholastic Aptitude Test (SAT), American College Testing (ACT), and the Graduate Record Examinations (GRE). There may be some evidence that these assessments have a valid application in terms of deciding which students are likely to perform well if admitted to given schools (cf. Brewer, Knoeppel, & Lindle, 2014; Hamilton, Stecher, & Klein, 2002; Heubert & Hauser, 1999; Messick, 1994, 1995), but the evidence to support their use for assessing intelligence is far weaker because achievement and intelligence are currently understood to be two different things. Therefore, it is critical to always think of a measurement instrument as a tool, and then consider whether the tool is being used for its intended purpose (Shadish, 1995). Following the tool analogy, you might have a poorly made hammer, such that even after light work it breaks easily. In this case, the tool itself is problematic. But another consideration is the purpose for which the tool is used.
The hammer may be one of the very best ones ever made, but it still would be a poor tool to use when needing a screwdriver. This analogy applies equally well when thinking about what instruments to use when measuring psychological or educational traits. If, for example, we hoped to assess whether some new teaching technique yielded improvements in reading scores, we would not logically choose a mathematics test to use for the outcome measure. But reading is a complex skill with many subcomponents—such as fluency versus comprehension—and we must be able to distinguish the specific skill(s) being tested and which assessments will measure those specific skill(s). Above all, it is important to make sure you are using the right tool for the job.
A related aspect of understanding validity entails thinking about contextual variables of the study, including local cultures, time period, and environment (Onwuegbuzie & Johnson, 2006). Every variable, even in quantitative work, represents an attribute that is situated within a specific time and place, and depending on the focus of a given study, certain aspects of variables may be relevant to the defensibility of findings. It is therefore important to always consider the context in which data were collected and interpreted. For example, if you were to do a study on political values, you would need to note the current political climate in which you are doing the study. Or, if you were to read a study on political values, you would need to note the publication date on that study and make sure to take into consideration the political climate of that time period. Consider the two largest political parties in the United States: Democrats and Republicans. In this example, time is the specific type of context, because there has been a shift in the overarching political beliefs of these two parties over time. Decades ago, Republicans would have been thought of as being the more liberal of the two political parties. This example demonstrates that researchers, and readers of research, must understand the context of studies.
A related issue is the concept of social construction of variables, or the fact that the common understanding of a variable may be defined by the society in which it is situated, as opposed to being defined by scientific differences. Race is a well-known example of a socially constructed variable, in which racial identity has very little to do with biological differences among races. However, the experiences of people from different races tend to be systematically different within certain societies, based on how those societies consider different races. One example of how race can be socially constructed in multiple ways is the construction of White in the United States. Several immigrant groups that are now considered White, including Irish and Eastern Europeans, were once considered racially different from immigrant groups from other Western and Northern European areas (Jacobson, 1998). Racial identities can change as demographic groups assimilate with or differentiate themselves from other groups, and there are often social, economic, and political benefits and/or drawbacks to these changes. As a researcher, you must be aware of the possibility that an attribute has been socially constructed and, if so, whether and how that social construction affects the meaning of the variable. The ways in which variables are constructed will have considerable influence on study validity (cf. Reynolds et al., 2014; Spillane et al., 2010; Wells, Williams, Treweek, Coyle, & Taylor, 2012). Clear, straightforward definitions of each study variable can help to increase validity.
In summary, we reiterate that validity is somewhat synonymous with truth. And just as your definition and understanding of truth can be individually subjective as well as based on cultural and social interpretations, so it goes with the concept of validity. When thinking in terms of research quality, consider how to design studies that can yield defensible evidence that can be used to make a reasonably accurate inference or proposition. Being detailed, specific, and thoughtful in all of your design elements and analysis will help you increase validity. These efforts both help you as a researcher to keep your focus within the scope of the study and help the readers of your study to understand your specific research questions, methods, and variables.
Reliability
In a broad sense, reliability refers to the extent to which findings and results are consistent across researchers using the same methods of data collection and analysis. The heart of the concept is synonymous with the notions of consistency and accuracy (Crocker & Algina, 1986), and it is related to validity. You must consider the importance of data and methodological consistency (reliability) because consistency increases the likelihood that your interpretation of data has validity. On the other hand, findings might also be reliably wrong, and this is a critical difference between reliability and validity. To illustrate reliability, consider a scale used to measure a person’s weight. If the scale yields a close approximation to a person’s actual weight, then one would say the scale’s measure is accurate, a key aspect of validity. But now consider the idea of consistency. Suppose a person is weighed once a day for a week, and the scale indicates values of 130, 150, 170, 110, 190, 145, and 155 pounds. Because the weight of one person will not vary so much in a span of a week, the conclusion to be drawn is that the scale is broken. Because of the lack of consistency, or reliability, there is little reason to trust the validity of any single measurement. In this sense, valid estimation requires consistent, or reliable, scores.
It is also instructive to see that just because measurements are reliable, they are not necessarily valid. A person may weigh 150 pounds, but suppose that the measurements produced by the scale across 1 week are 191, 189, 190, 191, 190, 190, and 189 pounds. These scores consistently indicate that the person weighs about 190 pounds, but the estimates are consistently wrong. This idea reflects an often-repeated phrase in research methods: Reliability is a necessary aspect of validity, but insufficient if used alone as a measure of quality. When using tests and surveys to measure a phenomenon, it is thus critical to understand the properties of the measurement tool and consider whether the measurement tool has been well designed and suited for the job at hand. When engaging in qualitative tasks such as observations and interviews, think of strategies for assessing whether any conclusions to be drawn from these data collection approaches are likely to be consistent (reliable) and accurate (valid).
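To make the weight example concrete, here is a minimal Python sketch (ours, not the chapter's) that computes the mean and spread of the two series of readings described above: the wide spread of the first series signals a reliability problem, while the consistent 40-pound bias of the second signals a validity problem despite high reliability.

```python
from statistics import mean, stdev

true_weight = 150  # the person's actual weight, per the example above

inconsistent = [130, 150, 170, 110, 190, 145, 155]           # broken scale
consistent_but_biased = [191, 189, 190, 191, 190, 190, 189]  # reliable, wrong

for label, readings in [("inconsistent scale", inconsistent),
                        ("consistent but biased scale", consistent_but_biased)]:
    bias = mean(readings) - true_weight  # accuracy: a validity concern
    spread = stdev(readings)             # consistency: a reliability concern
    print(f"{label}: mean={mean(readings):.1f}, sd={spread:.1f}, bias={bias:+.1f}")
```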
Reliability Issues in Data Collection and Analysis. Reliability checks can take place at two stages of research: during data collection and during analysis. To test reliability at the data collection stage, another researcher could collect data using the same sampling strategy as you to see if consistent data are being collected. An example of this strategy would be two researchers, with two different scales, each measuring the same person. If both scales indicate the same result, you have some assurance of reliability. It is also possible to have two groups analyze the same set of data with the same analytical methods to see if the two groups come to the same results.
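The two-scales check can be expressed in a few lines. The sketch below is illustrative only; the paired readings are hypothetical, and a small maximum disagreement between the two scales offers some assurance of reliable data collection.

```python
# Hypothetical paired readings: two researchers, two scales, same subjects.
scale_a = [150, 162, 171, 148, 155]
scale_b = [151, 161, 171, 149, 154]

diffs = [a - b for a, b in zip(scale_a, scale_b)]
max_disagreement = max(abs(d) for d in diffs)

# A small maximum disagreement suggests consistent data collection;
# large disagreements would call the procedure's reliability into question.
print(f"largest disagreement between the scales: {max_disagreement} lb")
```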
The types of data collected have significant influence over the reliability and replicability of a study (Peräkylä, 1997). Quantitative data sets are often easily accessible and transferable among researchers. Furthermore, quantitative data, once collected and recorded, are not usually subject to any detailed interpretation beyond understanding what a number is supposed to represent. For example, consider a five-option response scale, where 5 = strongly disagree, 4 = disagree, 3 = neutral, 2 = agree, and 1 = strongly agree. The number 4 is understood to have one meaning (disagree), and researchers tend not to conjecture further without having special reason to do so. In contrast, qualitative data are often products of the researcher’s filtering and interpreting of information during data collection via observation and interview notes. For example, a researcher creates notes about an observed lesson, and these notes become part of the data set. However, the researcher cannot observe, or record, every detail of the lesson, due to the limited capacity of human observation as well as choices—conscious or unconscious—that the researcher makes about which details to notice and record. In contrast, data that are not filtered by the researcher at the time of collection include documents, tape-recorded interviews, and videos of observations. The researcher does not have to transfer heard or observed data into a tangible record, such as drawings or notes, because the data are already in a tangible format. Qualitative researchers generally agree that a combination of machine-recorded and interpretive data is ideal in order to achieve a full understanding of the phenomenon under study (see, e.g., Lofland et al., 2009). Research design conceptualization, whether qualitative, quantitative, or mixed method, should entail examining the trade-offs of different kinds of data collection and should incorporate plans for increasing the reliability of the research, such as by using multiple researchers, multiple data sources, detailed data audits, or other strategies.
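As a small illustration of such coding, the sketch below transcribes the five-option response scale described above into a Python dictionary. The labels and numeric codes come from the text; the sample answers are hypothetical.

```python
# Coding dictionary transcribed from the five-option response scale above.
LIKERT_CODES = {
    "strongly disagree": 5,
    "disagree": 4,
    "neutral": 3,
    "agree": 2,
    "strongly agree": 1,
}

survey_answers = ["agree", "disagree", "neutral"]  # hypothetical answers
coded = [LIKERT_CODES[answer] for answer in survey_answers]
print(coded)  # [2, 4, 3]; each number carries one agreed-upon meaning
```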
LeCompte and Preissle (1993) discussed the challenges inherent in trying to replicate the data collection phase of a qualitative study, comparing it with quantitative methods: “Unique situations cannot be reconstructed precisely because even the most exact replication of research methods may fail to produce identical results. Qualitative research occurs in natural settings and often is undertaken to record processes of change, so replication is only approximated, never achieved.” Furthermore, they point out that replication may not even be appropriate for qualitative research: “Researchers whose goals are generation, refinement, comparison, and validation of constructs and postulates may not need to replicate situations. Moreover, because human behavior is never static, no study is replicated exactly, regardless of the methods and designs used” (p. 332). One reason for these issues, suggest LeCompte and Preissle, is the longer history of discussions of reliability within quantitative research arenas. These discussions are still fairly new to qualitative methodologists, as demonstrated by the diversity of opinions regarding theory and standardized practices for achieving reliability in qualitative studies.
In summary, you will need to consider whether your data are collected in a consistent manner and whether the type(s) of data collected will help you develop inferences and propositions that approximate the reality of your studied phenomenon. These are not the only important considerations, however. Even when your data are of high quality and are appropriate for the research questions, there are additional considerations to keep in mind when making analytic inferences from these data. That is, you must choose the most appropriate methods to analyze and interpret data in order to reach valid conclusions about your object of study. We use the methodology of experimental design as one example to demonstrate how analytic methods can positively or negatively affect the validity of conclusions.
Validity Considerations in Experimental Design
Shadish and colleagues (2002) provided an overview of an experimental design validity framework that focuses on validity in the context of experimental designs. There are four components to the framework: internal, external, statistical-conclusion, and construct validity. Each type of validity is briefly reviewed in this section, and we focus on internal and external validity in this chapter because these offer relatively concrete examples of how the truth of a proposition can be defended or undermined. Overall, understanding these validity issues is necessary in order to create rigorous study designs as well as to enable critique when reading empirical work done by other researchers. Note that this particular validity framework was developed to help researchers specifically assess causal mechanisms; that is, it is used to determine whether a particular condition or treatment causes better outcomes compared to some alternative. This is one example of a framework, as discussed earlier, which provides researchers with standards by which to judge the validity of conclusions. The following discussion of internal validity draws on the experimental design framework because of its focus on causation.
Internal Validity of Experimental Findings
Consider the following statement: I took some aspirin and my headache went away; therefore, aspirin reduced my pain. This statement contains a causal inference: Taking aspirin caused the reduction in pain. The degree to which this inference is valid reflects the degree of internal validity. In experimental designs, researchers examine whether some variable (the independent variable), rather than others, produces some result or change (dependent variable; Shadish et al., 2002). Consideration of internal validity begs the question: How truthful is the proposition that a change in one variable, rather than changes in other variables, causes a change in outcome?
Causal inference, and thus internal validity, can be surprisingly tricky. For any given proposition about a causal inference, there are rival explanations; these explanations are referred to as threats to a statement’s validity. For our example, we might assume that you usually take aspirin with water; if the headache had been caused by dehydration, then it is possible that the water—not the aspirin—was the actual cause of pain relief. Alternatively, it is possible that the headache eventually subsided on its own, and thus it was the natural recovery processes—not the aspirin—that yielded the improvement. In short, just because pain subsided after aspirin ingestion does not necessarily mean that we can be sure that the drug was the causal agent. The point here is that rival explanations exist, and it becomes important to consider whether the aspirin explanation is better than the others. The same logic applies to making decisions about policies. For example, you might be interested in implementing a new teaching technique, a new type of counseling procedure, or pay-for-performance compensation models. These all represent policy options, and making the best policy necessitates having data that result in reasonably valid inferences about program or policy impacts by ruling out rival explanations.
Once you infer that a given approach resulted in the desired outcome, the quality of this inference can be judged by assessing various threats to internal validity. Each threat is a form of alternative explanation—other than the treatment—for the cause of an observed outcome. The experimental validity framework identifies a number of common threats to internal validity; the following discussion draws on examples of these threats in order to illustrate the process of identifying and eliminating rival explanations (see Shadish et al., 2002, for a complete list and description of threats; see also Table 7.2). As an example, there is the so-called history threat, or the possibility that other events may have occurred during the duration of the experiment that could explain the change in outcome. In the case of aspirin, the fact that common headaches eventually subside on their own is an example of such a threat to the inference that it was aspirin that caused pain reduction. This threat, and many of the other internal validity threats, can be addressed by including a comparison (or control) group that does not receive the treatment being studied. If a treatment effect is observed by comparing performance across both groups (i.e., students who received counseling show better outcomes than those who did not), then it becomes the case that the independent variable—that is, treatment exposure—is the best overall explanation for the difference in scores between study groups. An important point here is that this and many other threats to internal validity can be addressed by adding a control group when the intent is to make a causal inference. Indeed, a basic quality indicator for studies that set out to address a causal question is to look for the presence of a control condition (Shadish et al., 2002).
By adding a control group, we also potentially introduce new threats to internal validity. One such threat is selection, which refers to how groups in a study were formed. There are many ways to form groups. People can volunteer to be treated, students might be picked by a teacher, a researcher may decide who is most in need of treatment, and so on. Of the many options, one approach to selecting who is treated and who is in a control group is to use random assignment to treatment and control groups. Such assignment is essentially based on chance: If you use random procedures (such as coin flips or computer algorithms, which are user-friendly in most statistical packages), you can expect that, on average, there will be no systematic differences between groups.
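A minimal sketch of chance-based assignment using only Python's standard library (the participant labels and group sizes are assumptions for illustration):

```python
import random

# Hypothetical participant labels; assignment is determined by chance alone.
participants = [f"student_{i}" for i in range(1, 21)]

random.seed(42)              # fixed seed so the illustration is reproducible
random.shuffle(participants)

midpoint = len(participants) // 2
treatment = participants[:midpoint]
control = participants[midpoint:]

# On average, random assignment leaves no systematic differences between
# groups on measured or unmeasured characteristics.
print(treatment)
print(control)
```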
When assignment is nonrandom, such as when participants in groups are purposefully selected based on certain characteristics, there may be key differences between treatment and control groups. For example, due to legal restrictions about research, students in a treatment group may have to be volunteered by their parents/guardians; the requirement that participants actively volunteer and have parent permission introduces the possibility that the characteristics of students in the treatment group are systematically different than those in the control group. Thus, the manner by which selection was done can threaten later attempts to make causal inferences. Suppose the question at hand was whether a new counseling technique yielded better outcomes than typical treatment procedures. Furthermore, suppose that the counseled group did in fact show better outcomes than their control counterparts. One might infer that these improved outcomes were caused by the treatment, but the threat to this inference is the fact that treated students came from families who desired and supported the treatment and students in the control group tended to come from families who were indifferent to the treatment. It is therefore possible that students in the treatment group might have done better than those in the control group even if the treatment was not the cause of the students’ improvements. In other words, the way groups were formed may have made it look like the new counseling technique made a difference, even if it did not. Thus, selection processes can threaten the validity of any inference made about treatment. Researchers can prevent selection bias by using random assignment to treatment and control groups, when possible. When random assignment is not an option, researchers may statistically “control” for other variables such as socioeconomic status, gender, race, age, and disability status in statistical models. These types of controls can help tease out the differential impact of these contextual factors.
TABLE 7.2
Overview of Internal Validity Threats
History
Definition: Other events may have occurred during the duration of the study that could explain the improved behavior.
Example: During the course of treatment, the children in the counseling treatment group were also assigned to a new teacher who is excellent at managing behavior concerns. In this scenario, was the improvement because of the treatment, the presence of a new teacher, or both?

Maturation
Definition: The fact that people, including study participants, change over time.
Example: During a 1-year study, the treatment students could have simply outgrown their initial behavior problems; their personal development, unrelated to the counseling, may have contributed to or even been solely responsible for the improved behavior scores.

Testing
Definition: The possibility that repeated exposure to a measurement instrument could, by itself, affect test-taking behavior and test scores.
Example: The children who took the baseline measurement test reflected on what the test was measuring and at posttest offered socially desirable responses that resulted in higher scores; yet, their overall classroom behavior may not have actually improved. Instead, the observed score change was the result of knowing the test questions from baseline and how best to respond.

Instrumentation
Definition (1): A testing instrument may change or may be used in a way that does not correctly measure the treatment effect.
Example (1): This might happen if there are two versions of a test (Form A and Form B) that are incorrectly assumed to be equivalent. In this case, it could be that behavior as measured by Form A looks more problematic as compared to Form B. If Form B was used at posttest (the second testing session), then any apparent improvement cannot be attributed to the treatment; rather, differences in the test drove the change.
Definition (2): There may be unknown contextual factors that can impact testing.
Example (2): Perhaps baseline measurement was done in the morning and posttest measurement was done in the afternoon, and for some reason, the children in the study are more likely to report or demonstrate better behavior after lunch.

Statistical regression to the mean
Definition: The phenomenon that extreme scores tend to not be repeated.
Example: Anyone might score unusually high on a psychological measure because of a string of positive but rare events, such as winning the lottery. Taking the test 6 months later may still result in a high score, but not an extremely high one. Over time, extreme scores—both positive and negative—tend to move closer to the average for that particular measure.

Researcher bias
Definition: Changes in research design or analysis that are a result of the researcher’s subjective views regarding the study topic, participants, theory of change, or other relevant areas of design or execution.
Example: The researchers may be so convinced that the new counseling approach works that they unintentionally modify aspects of the original study design in order to show that the treatment makes a difference.

Selection
Definition: The process of creating participant groups in a study. Nonrandom selection (e.g., participants in groups are purposefully selected based on certain characteristics) might yield two groups that are not equivalent at the beginning of a study. If there are key differences between two groups, one cannot know if any posttest differences are because of a treatment effect or because of such differences.
Example: Due to legal restrictions about research, students in a treatment group may have to be volunteered by their parents/guardians; the requirement that participants actively volunteer and have parent permission introduces the possibility that the characteristics of students in the treatment group are systematically different than those in the control group. For example, parents who push for their children to be exposed to a new treatment might, on average, be more involved in their children’s education than ones who do not. In such an example, if we see that children who were treated appear to perform better on a posttest, is such improvement because treated children have more involved parents (i.e., they would have been better off anyway), or is it because the treatment worked? The selection threat is not a concern if study participants are assigned randomly to study groups because, on average, there should be no differences between participants in treatment and control conditions.

Overall mortality (attrition)
Definition: Loss of members in the study sample.
Example: A study compares pre- and posttest scores on an assessment in order to measure participant change. Some students drop out before completing the posttest. The loss of part of the sample creates the possibility that the students who remained in the study and completed the posttest have systematically different characteristics than the students who left the study. If this is the case, any average positive or negative change from pretest to posttest may simply be a function of the mortality threat, or the loss of students with particular characteristics.

Differential mortality
Definition: Members of sample groups (e.g., treatment and control) drop out at different rates, and nonrandomly, in one group as compared with the other(s).
Example: Some students in the treatment sample drop out because they no longer wish to receive the counseling and miss out on other activities during the school day. These students may be differently motivated or have systematically different behavioral characteristics than the students who are willing or happy to miss other school activities.
The broader point here is that, in general, not all designs are equal in terms of their inherent capacity to address the internal validity of target research questions. If you are conducting an experiment, it will behoove you to design studies to have stronger validity because doing so will result in better quality. Without certain design features put into place, the improvement in behavior may indeed be due to the treatment but may also be due to any of the threats listed. Fortunately, in the context of an experiment, as with many other methodologies, there is guidance that you can consult to assist with recognizing and addressing these threats. Unless these threats can be removed as plausible explanations, the study quality must be considered questionable (see Table 7.2).
External Validity of Experimental Findings
External validity—the extent to which findings hold true across contexts—and its threats are also major considerations in research design quality. Suppose you have produced a study with high internal validity; that is, none of the previously discussed threats are plausible explanations for observed improvements in a treated group. The best explanation for the outcome of the study is that the treatment worked. This high level of internal validity leads other researchers to want to know whether this finding has high external validity as well, or whether it is likely to hold true across other students, in other places, times, contexts, cultures, and so on. As with internal validity, there are several common threats to external validity (Shadish et al., 2002) (see Table 7.3).
To elaborate a little, one such threat is treatment variation; this type of threat addresses the degree to which observed treatment effects reflect variations in the treatment received by the study subject(s). Treatment variation can be a function of the human error of administering a treatment or a function of seemingly innocuous choices around implementing a program. One example would be inconsistency in dosage levels; consider two teachers ostensibly delivering the same intervention, but one teacher has excellent classroom management skills and the second teacher does not. The second teacher’s students receive less of a dosage of the intervention because one-third of class time is spent on classroom management issues. Other examples include the time of day that treatment is delivered or failure to correctly implement some element of the treatment.
Threats to external validity present a number of concerns, and researchers must find ways to address these threats. There are two broad strategies for addressing threats to external validity. The first is to engage in thorough literature reviews and to build on previous, related studies. External validity can be strengthened by limiting the research focus and by comparing new findings to existing studies in the literature. A careful review can highlight gaps in the existing literature; these gaps then justify a specific focus that is situated within an existing framework of studies. For example, a specific counseling technique may have been thoroughly studied in residential treatment settings, and so your focus might be on the first effort to try it in a public school. The design of your study will be strengthened by the evidence available from other related studies, and the threats to external validity will be minimized by limiting the focus to a very specific area.
The second strategy is to think carefully about ways in which your findings may apply, or generalize, to other settings. Shadish (1995) listed a number of principles that can help you think about generalization when doing experiments, ethnographies, or other types of studies. You must consider how applicable the findings from your study might be to another setting, such as similarities in the sample and how it was obtained, measurements used, duration, and other treatment details. Above all, claims of generalizability are most appropriate when there is evidence that a very specific aspect of a treatment yields an exact outcome. Knowing what aspects of a study are likely to generalize and what aspects are likely to be highly context specific is the key to thinking through considerations that might threaten the generalizability of a finding to some new scenario.
TABLE 7.3
Overview of External Validity Threats
Interactions of the observed causal relationship with sample units
Definition: The possibility that whatever was observed with one particular sample may not hold true for different samples.
Example: Simply put, the treatment may work well with one type of student and not another.

Treatment variations
Definition: The effect of a treatment reflects variations in how it was administered, and so on, as opposed to the effect of the treatment itself.
Example: Treatment variation can be a function of the human error of administering a treatment or a function of seemingly innocuous choices around implementing a program (e.g., dosage levels, time of day treatment is delivered, or failure to correctly implement some element of the treatment).

Types of outcome measures used
Definition: Treatment effects may be found with one kind of test but not another.
Example: One might see an effect with a particular type of test but not another. If two tests measure approximately the same thing (e.g., SAT and ACT), this should be less of a concern, although when differences are found across similar but different tests, one has to wonder about the external validity of any observations from a study. Logically, concerns arise when thinking about the degree to which study findings might be externally valid when thinking about clearly different outcome measures.

Settings in which the treatment was delivered
Definition: The possibility that observed effects are due to contextual factors, as opposed to the treatment itself.
Example: A simple example of this threat would be observing treatment effects in a school that is located in a high-income community; the same effects may or may not hold in more impoverished settings.

Context-dependent mediation
Definition: The influence of a mediating factor in one setting versus another setting.
Example: A common mediating factor is treatment dosage; others may be factors such as staff skill or availability. For example, is it possible to fully implement an intended treatment in the form of intense counseling in an overcrowded school setting where there are extensive demands on a counselor’s time?
Note. SAT = Scholastic Aptitude Test; ACT = American College Testing.
To illustrate these issues around generalizability, we use an example from a study by Paino, Renzulli, Boylan, and Bradley (2014) that did not examine the effect of a treatment, but rather charter school closings in North Carolina (this is to show that generalization can and should be pondered not only when dealing with treatment effects but also when dealing with other issues, such as state policy). The authors performed a quantitative analysis of data on charter schools and the nearby public districts. Data included financial information, local market variables, density of charters in the area, school demographics, enrollment, age of school, and academic performance information. This quantitative analysis allowed the researchers to examine the probability of a charter school closing at a given point in time. The findings suggested that charter schools were less likely to close with increases in school enrollment, compliance with federal desegregation orders, and state and federal funding of charters. However, because the location of this study is in one state, its findings may not generalize well to another state that may have different policies. Here, the authors’ inclusion of a qualitative case study analysis could help them to better understand the degree to which these findings might generalize to other states and contexts. Suppose a state has conditions similar to that of North Carolina—contextual conditions that have been rigorously analyzed in relation to the quantitative findings. As a reviewer of the study, you may feel more confident in applying the study findings to that new context. On the other hand, if the case studies in North Carolina show major differences in state charter policies, funding, or enrollment patterns, you may not feel confident in using the findings of this study to understand patterns in the other state.
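As a hedged illustration of the kind of quantitative analysis described, the sketch below fits a logistic regression predicting closure from two school-level variables. This is not the authors' actual model: the predictors, effect sizes, and synthetic data are assumptions chosen only to mirror the reported direction of the findings (closure becomes less likely as enrollment and funding rise).

```python
# NOT the Paino et al. model: predictors, coefficients, and data are
# invented solely to illustrate the modeling approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
enrollment = rng.normal(300, 80, n)  # hypothetical enrollment counts
funding = rng.normal(1.0, 0.3, n)    # hypothetical funding index

# Assume closure grows less likely as enrollment and funding increase.
log_odds = 2.0 - 0.01 * enrollment - 1.5 * funding
closed = rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))

X = np.column_stack([enrollment, funding])
model = LogisticRegression(max_iter=1000).fit(X, closed)
print(model.coef_)  # negative signs mirror the reported direction of effects
```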
Statistical-Conclusion and Construct Validity
There are two remaining types of validity from Cook and Campbell’s (1979) framework. Statistical-conclusion validity refers to the degree to which researchers are correct about the relationship between two variables. This type of validity requires not only that researchers know which kind of statistical models or techniques are appropriate for a given data set and research question but also that they can accurately test those models and apply those techniques. Shadish and colleagues (2002) identified nine distinct threats to this form of validity; if you are doing quantitative research, we highly encourage you to review this resource in depth. Other concepts and techniques that relate to statistical-conclusion validity include statistical power, data cleaning, and outlier analyses. Measurement reliability, or lack thereof, is classified as a threat to this form of validity.
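For instance, a simple z-score screen, one common data-cleaning step, can flag outliers for follow-up before any model is fit; the scores below are hypothetical.

```python
from statistics import mean, stdev

scores = [52, 55, 49, 51, 50, 53, 48, 95]  # hypothetical test scores
m, s = mean(scores), stdev(scores)

# Flag any score more than two standard deviations from the mean for
# follow-up (data-entry error? unusual case?) before fitting models.
outliers = [x for x in scores if abs(x - m) / s > 2]
print(outliers)  # -> [95]
```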
Construct validity refers to the degree to which underlying ideas (e.g., treatments, behaviors, behavior problems, cooperative learning, and socioeconomic status) are properly conceptualized and operationalized in a study. Every study is based on a set of concepts that underlie the theory being tested. In our ongoing example, the theory being tested in the experiment is that a certain type of counseling intervention will improve problematic behavior issues. If the measurement of this improvement is completed through a student pre- and postintervention assessment, we must ensure that (a) the intervention addresses the behaviors under study and (b) the questions on the assessment correctly represent the behaviors under study. An intervention or measurement that does not accurately represent the constructs being studied cannot result in valid findings about the constructs.
Considerations in Qualitative Inquiry
Earlier, we presented aspects of the experimental validity framework to demonstrate the point that your design choices can affect the validity of inferences you make at the end of your study. We also demonstrated this point because causal questions tend to be of wide interest. Moving forward, we focus on another broad arena: qualitative research. Some of the challenges, or threats, to reaching validity and reliability in quantitative and qualitative research are similar, although they must be observed or measured using different techniques (e.g., Onwuegbuzie & Leech, 2007). For example, whereas quantitative researchers attempt to statistically control for variables that may influence the outcome, qualitative researchers attempt to understand the influence of variables through careful observation and recording of phenomena (Cook & Campbell, 1979; LeCompte & Goetz, 1982). In the next section, we provide an introduction to validity and reliability issues regarding qualitative research methods.
Trustworthiness
Trustworthiness is the qualitative term that is often used in place of the quantitative term validity. Trustworthiness is the degree to which you, as a researcher, can have confidence in your sources as well as the methods used to gather your sources. Steps taken in the earliest stages of research—study purpose and design—can help you decide which collection methods will result in the most relevant, trustworthy data for your questions under study. Ethnographic field notes, formal and informal interviews, formal and informal observations, video recordings, photographs, and archival records offer different strengths and weaknesses (LeCompte & Goetz, 1982). For example, Peräkylä (1997) discussed the specific benefits and drawbacks to tape-recorded and transcribed (audio and/or visual) data, as compared to ethnographic field notes. Field notes filter observations at the time of data collection through the researcher’s particular frameworks; in contrast, audio/visual recordings capture all of the data from one particular angle and/or sense (e.g., visual vs. audio). Downsides to audio/visual recordings are the inabilities to see gestures and movements or to see the observation from multiple angles or perspectives, respectively. Ethnographers can take in an entire observation site through all of the senses, but they are limited in what they can record in words or pictures. Using a combination of these data collection methods would allow you to compare two or more data sources; such comparisons can highlight areas of inconsistency that need further inquiry or patterns/themes that have a high degree of consistency (i.e., they surface in multiple types of sources and in ways that do not conflict).
There are a variety of ways in which you as a qualitative researcher can check the trustworthiness of emerging themes in your data (Tracy, 2010). During data collection and analysis, researchers can attend to potential observer effects, employ multiple researchers, and use member checks. Also see Lincoln and Guba (1985) and Nastasi and Schensul (2005) for more in-depth discussion on trustworthiness.
Observer Effects. As a qualitative researcher, you must address the potential influence of observer effects, which is the possibility that collected data have been contaminated, or influenced, by your presence or your research instruments. One example of observer effects is a change in participant behavior during observations due to your presence (LeCompte & Goetz, 1982). Depending on the type of activity and individuals under observation, your demographic characteristics, and the methods by which you are recording data, participants may consciously or unconsciously change their behavior. If participants change their behavior, then you cannot claim that your observations reflect participants’ natural or typical behavior.
We can use the example of a counseling intervention to illustrate this issue. Imagine this scenario: Suppose there is a qualitative observation element to the study, in which you might observe in a group counseling session for student participants. The majority of the students in the session speak English as a second language, and about half of them have parents who are not U.S. citizens. The majority of students in the group are also on free or reduced-price lunch. In comparison, you, the researcher, are White, well dressed, and speak only English. The demographic differences between you and the student participants include social class, first language, age, and, in some cases, citizenship. These differences may lead students to behave differently in front of you than they would behave with only the counselor present; additionally, they may behave differently in a group session with their peers than in a one-on-one session with the counselor. You can take two precautions against observer effects. First, you can note all of the potential effects that your presence may have on the participants or their behavior; getting a second opinion on these potential effects can further strengthen this precautionary strategy. Second, you can follow up with members of the group—in this case, the counselor or one of the participants—to ask whether the observed session was typical or uncommon in any way. This type of context from a regular member of a group can help you put your observations in perspective.
Multiple Researchers. Although not always feasible in qualitative studies, using multiple researchers in data collection has benefits as well as challenges for validity. When multiple researchers collect data, they are able to demonstrate that they are recording data in comparable ways; this is vital to study validity. Similar to interrater reliability (see later discussion), multiresearcher data collection procedures must be uniform in order to collect valid and trustworthy data across an entire study. One example of aligning data collection procedures relates to level of detail; all researchers should know how much detail to include in field notes or observation rubrics. This is true for all methodologies; just as tests producing quantitative data must be administered and recorded consistently, interview and observation data must be recorded using the same techniques.
Member Checks. Similar to one of the strategies involved in minimizing observer effects at the data collection level, member checking involves sharing emergent patterns and findings with members of subject groups to get feedback on the accuracy of those findings. Although the purpose of independent research is to create and implement an unbiased research design that examines the input of all relevant stakeholders or participants, there are also limitations to using outside researchers. Outside researchers rarely have the insider cultural perspective or organizational knowledge that is needed to fully understand the phenomena being observed. Member checking allows the outside researcher to compare his or her ideas with the views of an insider and develop an ongoing, increasingly accurate understanding of the phenomena (LeCompte & Goetz, 1982; Lofland et al., 2009). The dual use of insider and outsider perspectives is crucial to achieving this accuracy because both perspectives tend to have particular types of biases, such as ingrained cultural or social beliefs (Bloor, 1978; Turner & Coen, 2008). Such beliefs can include views on gender/sex, racial or ethnic groups, or age-appropriate behaviors; for example, a study participant who is a member of a diverse urban community may have different views on race and ethnicity than a White researcher working in a predominantly White, elite institution.
Reliability in Qualitative Research
The concept of reliability, sometimes called dependability, is relevant in some ways for qualitative methods and problematic in others. Specifically, the definition of reliability as “replicability” is problematic for qualitative, especially naturalistic, methodologies. As LeCompte and Goetz (1982) explained, “Because human behavior is never static, no study is replicated exactly, regardless of the methods and designs used” (p. 332). However, there are ways in which the larger concept of reliability has been adapted to apply to qualitative, naturalistic fields of study. Areas of focus within the umbrella of qualitative reliability include the replicability of data collection and analysis (e.g., understanding how much of the analysis is specific to an individual researcher’s interpretations) and intercoder reliability or interrater agreement, which refers to the degree to which multiple researchers within the same study agree on how to describe and categorize the observed data in terms of the study’s theoretical framework. These issues of reliability can be found in many qualitative studies, and researcher subjectivity plays an important, if sometimes overlooked, role in these processes. The following sections examine challenges to and strategies for strengthening qualitative reliability.
Researcher Subjectivity. Researcher subjectivity refers to the unique perspective that each researcher brings to a given study; this uniqueness poses a reliability challenge for qualitative studies at the stages of both data analysis and data collection because the interpretations of two or more unique researchers are unlikely to be identical, or replicable (Carey & Gelaude, 2008). For example, in an empirical study of qualitative thematic coding, Armstrong, Gosling, Weinman, and Marteau (1997) found that a sample of trained, experienced experts in qualitative coding, looking at the same data set, did not reach the exact same conclusions about the data. The study demonstrated that when multiple researchers analyzed the same data, the themes that emerged were similar enough to be considered common, but different enough to highlight the role of researcher discipline, training, and cultural background. The findings of this study suggested that the inherent nature of subjective analysis in qualitative methods will result in some degree of agreement and some degree of disagreement. See Glaser and Strauss’s (1967) description of the constant comparison method for a specific example of how to systematically code qualitative data.
Reflexivity. The findings of this study also point to the need for individual researchers to be reflexive, or transparent and forthcoming about their demographics, their discipline, their training, and any other characteristics that may influence their collection or analysis of data. Toward this end, you should reflect on your position in relation to the study and examine the potential for bias based on your cultural or socioeconomic background, nationality, ability status, and other factors (LeCompte & Preissle, 1993; Onwuegbuzie & Johnson, 2006). Your explanation of methodology should also include steps that you take to minimize the impact of your researcher bias on research design, data collection, and analysis (Guest & MacQueen, 2008).
Interrater Reliability. When multiple researchers are used to analyze qualitative data, reliability issues multiply as well. In addition to being reflexive about individual characteristics, the research team must also take steps to ensure that they are using the same criteria to collect or interpret the same data set. Interrater reliability refers to the rate of agreement among multiple research team members applying the same analytic methods to the same data set; these methods typically involve some degree of researcher subjectivity, such as coding text or rating observed behaviors. Additional benefits to determining interrater reliability are twofold: The process allows the research team to examine both the team’s understanding of codes and concepts as well as individual team member accuracy in using the coding or rating system (Carey & Gelaude, 2008). Like many phases of research, interrater agreement is an iterative process. If the independently coded data samples end up with substantially different results, the coding system must be reviewed and clarified or individual coders must be trained further. Interrater reliability testing must continue until the desired level of agreement among researchers has been achieved (MacQueen, McLellan-Lemal, Bartholow, & Milstein, 2008).
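As one concrete illustration of checking interrater agreement, the sketch below computes Cohen’s kappa, a chance-corrected agreement statistic, for two hypothetical coders using scikit-learn; the excerpts and code labels are invented for the example, and kappa is only one of several agreement measures a team might adopt.

```python
# A hedged sketch of interrater agreement using Cohen's kappa.
# The two coders' labels below are hypothetical, invented for illustration.
from sklearn.metrics import cohen_kappa_score

# Each list holds one coder's category for the same ten interview excerpts.
coder_a = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "on"]
coder_b = ["on", "off", "off", "on", "off", "on", "on", "on", "on", "on"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance level
```

If the resulting agreement falls below the level the team has set in advance, the codebook is clarified or coders are retrained and the check is repeated, mirroring the iterative process described above.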
Transferability. A final point, related to reliability in qualitative research, is to consider the concept of transferability. Transferability is the degree to which a set of findings from one study will transfer to another particular situation (Lincoln & Guba, 1985). The idea is largely associated with qualitative inquiry, but the principle can be applied to almost any kind of study. The general challenge in transferability is describing the setting of a study with sufficient clarity and detail so that readers of that study can make their own judgments about what does and does not apply to their particular scenarios.
Designing Quantitative Validity and Reliability Research Conclusion
Research quality is important in all disciplines and fields, including program development and implementation, because all knowledge—understanding human behavior, program designs, and effects of medical treatments—is influenced by the quality of the research on which it is based. If inaccurate research findings are used as the basis for products, program development, or policy improvements, these changes are unlikely to actually work as hoped, potentially wasting time and other valuable resources. Some areas of product or program development have a variety of parties with established financial or political stakes in the direction of development; here, it is especially important that cited research be independent and of high quality. Peer review is generally understood to be a hallmark in the research process, because it entails review by multiple experts in the field; the experts are looking for indicators of research quality that provide confidence in the findings. Even with basic knowledge of indicators of research quality, it is possible for a layperson to review the methodology of a given piece of research and decide for oneself whether the piece contains the necessary quality indicators.
It is also important to note that there are particular aspects of mixed methods research that lend it to increasing validity, such as the ability to take advantage of the strongest tools of each framework and discard the weaker tools (Onwuegbuzie & Johnson, 2006). One of the challenges of using mixed methods is figuring out which tools are strongest for which research questions, and Onwuegbuzie and Johnson (2006) discussed several sets of existing guidelines for making mixed methods research decisions (e.g., Collins, Onwuegbuzie, & Sutton, 2006; Greene, Caracelli, & Graham, 1989; Onwuegbuzie & Johnson, 2004). Some qualitative and quantitative methodologists, without purposefully using a mixed methods framework, have incorporated these tools organically in order to best answer their research questions (e.g., Reynolds et al., 2014; Wells et al., 2012).
In this chapter, we have offered an introduction to the range of quality issues that can arise in research studies. This introduction should help you understand that threats to validity and reliability can surface at any point of the research project: design, data collection, data analysis, or even results reporting. To handle validity and reliability concerns, you first need to be aware of them. At every step, you should be looking out for possible threats to research quality, making sure that your design minimizes these threats as much as possible, and clearly reporting the severity of existing threats. To facilitate this process, you should first have a clear understanding of your research question(s). Then, you should seek out methodological frameworks or guidance that promote thinking through designs and generating the highest quality inferences. Finally, you should identify design choices that have the capacity to answer the question well. Again, Table 7.1 is designed with that purpose in mind.
Given the space allocated for this chapter, our overriding advice for you is to appreciate the idea that design decisions can influence the quality of the data collected, later analyses, and overall inferences drawn from your work. We recommend that you investigate further the wide range of specific strategies and techniques to address the threats to validity and reliability that were briefly introduced here. The following chapter, building on these quality concerns, examines ethical considerations in research projects. Many of the same rationales for research quality support the concern for ethics in research, such as the increasing focus on using research findings to make policy and program decisions.
Designing Quantitative Validity and Reliability Research Key Sources
Brewer, C., Knoeppel, R. C., & Lindle, J. C. (2014). Consequential validity of accountability policy: Public understanding of assessments. Educational Policy, 29, 1–35. doi:10.1177/0895904813518099
LeCompte, M. C., & Preissle, J. (1993). Ethnography and qualitative design in educational research (2nd ed.). San Diego, CA: Academic Press.
O’Cathain, A. (2010). Assessing the quality of mixed methods research: Toward a comprehensive framework. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (2nd ed., pp. 305–338). Thousand Oaks, CA: Sage.
Shadish, W. R., Cook, T., & Campbell, D. (2002). Experimental and quasi- experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.
Designing Quantitative Validity and Reliability Research References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Armstrong, D., Gosling, A., Weinman, J., & Marteau, T. (1997). The place of inter-rater reliability in qualitative research: An empirical study. Sociology, 31(3), 597–606.
Bloor, M. (1978). On the analysis of observational data: A discussion of the worth and uses of inductive techniques and respondent validation. Sociology, 12(3), 545–552.
Borgers, N., Hox, J., & Sikkel, D. (2004). Response effects in surveys on children and adolescents: The effect of number of response options, negative wording, and neutral mid-point. Quality & Quantity, 38(1), 17–33.
Brantlinger, E., Jimenez, R., Klingner, J., Pugach, M., & Richardson, V. (2005). Qualitative studies in special education. Exceptional Children, 71, 195–207. DOI: 10.1177/001440290507100205
Brewer, C., Knoeppel, R. C., & Lindle, J. C. (2014). Consequential validity of accountability policy: Public understanding of assessments. Educational Policy, 29, 1–35. doi:10.1177/0895904813518099
Carey, J. W., & Gelaude, D. (2008). Systematic methods for collecting and analyzing multidisciplinary team-based qualitative data. In G. Guest & K. M. MacQueen (Eds.), Handbook for team-based qualitative research (pp. 227–272). Lanham, MD: AltaMira Press.
Collins, K. M. T., Onwuegbuzie, A. J., & Sutton, I. L. (2006). A model incorporating the rationale and purpose for conducting mixed methods research in special education and beyond. Learning Disabilities: A Contemporary Journal, 4, 67–100.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston, MA: Houghton Mifflin.
Creswell, J. W., & Plano Clark, V. L. (2010). Designing and conducting mixed methods research (2nd ed.). Thousand Oaks, CA: Sage.
Crocker, L. M., & Algina, J. (1986). Introduction to classic and modern test theory. New York, NY: Holt, Rinehart, and Winston.
Cronbach, L. J. (1946). Response sets and test validity. Educational and Psychological Measurement, 6(4), 475–494.
Denzin, N. K. (1989). The research act: A theoretical introduction to sociological methods. Englewood Cliffs, NJ: Prentice Hall.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2005). The discipline and practice of qualitative research. In The Sage handbook of qualitative research (3rd ed., pp. 1–32). Thousand Oaks, CA: Sage.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley.
Field, A. (2013). Discovering statistics using IBM SPSS Statistics (4th ed.). Thousand Oaks, CA: Sage.
Fowler, F. J. (2009). Survey research methods. Thousand Oaks, CA: Sage.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New Brunswick, NJ: Transaction.
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation & Policy Analysis, 11, 255–274.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed.). Hoboken, NJ: Wiley.
Guest, G., & MacQueen, K. M. (Eds.). (2008). Reevaluating guidelines in qualitative research. In Handbook for team-based qualitative research (pp. 205–226). Lanham, MD: AltaMira Press.
Hamilton, L. S., Stecher, B. M., & Klein, S. P. (2002). Making sense of test-based accountability in education. Washington, DC: Rand.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. New York, NY: Academic Press.
Heubert, J. P., & Hauser, R. M. (1999). High stakes: Testing for tracking, promotion and graduation. Washington, DC: National Academy Press.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179.
Jacobson, M. F. (1998). Whiteness of a different color: European immigrants and the alchemy of race. Cambridge, MA: Harvard University Press.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26.
Kane, M. (2013). The argument-based approach to validation. School Psychology Review, 42(4), 448–457.
Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. M. (2010). Single-case design technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. M. (2013). Single-case intervention research design standards. Remedial and Special Education, 34, 26–38. doi:10.1177/0741932512452794
Kratochwill, T. R., & Levin, J. R. (Eds.). (2014). Single-case intervention research: Methodological and statistical advances. Washington, DC: American Psychological Association.
LeCompte, M. D., & Goetz, J. P. (1982). Problems of reliability and validity in ethnographic research. Review of Educational Research, 52(1), 31–60.
LeCompte, M. D., & Preissle, J. (1993). Ethnography and qualitative design in educational research (2nd ed.). San Diego, CA: Academic Press.
LeCompte, M. D., & Schensul, J. J. (2010). Designing and conducting ethnographic research: An introduction (2nd ed.). Plymouth, United Kingdom: AltaMira Press.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Lofland, J., Snow, D. A., Anderson, L., & Lofland, L. H. (2009). Analyzing social settings: A guide to qualitative observation and analysis (4th ed.). Belmont, CA: Wadsworth.
MacQueen, K. M., McLellan-Lemal, E., Bartholow, K., & Milstein, B. (2008). Team-based codebook development: Structure, process, and agreement. In G. Guest & K. M. MacQueen (Eds.), Handbook for team-based qualitative research (pp. 119–135). Lanham, MD: AltaMira Press.
Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23, 13–23. doi:10.2307/1176219
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749. doi:10.1037/0003-066X.50.9.741
Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.
Nastasi, B. K., & Schensul, S. L. (2005). Contributions of qualitative research to the validity of intervention research. Journal of School Psychology, 42, 177–195. doi:10.1016/j.jsp.2005.04.003
O’Cathain, A. (2010). Assessing the quality of mixed methods research: Toward a comprehensive framework. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (2nd ed., pp. 305–338). Thousand Oaks, CA: Sage.
Onwuegbuzie, A. J., & Johnson, R. B. (2004). Mixed method and mixed model research. In B. Johnson & L. Christensen (Eds.), Educational research: Quantitative, qualitative, and mixed approaches (pp. 408–431). Boston, MA: Allyn & Bacon.
Onwuegbuzie, A. J., & Johnson, R. B. (2006). The validity issue in mixed research. Research in the Schools, 13(1), 48–63.
Onwuegbuzie, A. J., & Leech, N. L. (2007). Validity and qualitative research: An oxymoron? Quality & Quantity, 41, 233–249.
Paino, M., Renzulli, L., Boylan, R., & Bradley, C. (2014). For grades or money? Charter school failure in North Carolina. Educational Administration Quarterly, 50(3), 500–536.
Patton, M. Q. (2014). Qualitative research and evaluation methods: Integrating theory and practice (4th ed.). Thousand Oaks, CA: Sage.
Peräkylä, A. (1997). Reliability and validity in research based on tapes and transcripts. In D. Silverman (Ed.), Qualitative research: Theory, method, practice (pp. 201–220). London, United Kingdom: Sage.
Reynolds, J., DiLiberto, D., Mangham-Jefferies, L., Ansah, E. K., Lal, S., Mbakilwa, H., . . . Chandler, C. I. (2014). The practice of “doing” evaluation: Lessons learned from nine complex intervention trials in action. Implementation Science, 9(75), 1–12.
Shadish, W. R. (1995). The logic of generalization: Five principles common to experiments and ethnographies. American Journal of Community Psychology, 23, 419–428.
Shadish, W. R., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Spillane, J. P., Pareja, A. S., Dorner, L., Barnes, C., May, H., Huff, J., & Camburn, E. (2010). Mixing methods in randomized controlled trials (RCTs): Validation, contextualization, triangulation, and control. Educational Assessment, Evaluation and Accountability, 22(1), 5–28.
Tashakkori, A., & Teddlie, C. (Eds.). (2010). Sage handbook of mixed methods in social and behavioral research (2nd ed.). Thousand Oaks, CA: Sage.
Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.
Turner, S., & Coen, S. E. (2008). Member checking in human geography: Interpreting divergent understandings of performativity in a student space. Area, 40(2), 184–193.
Wells, M., Williams, B., Treweek, S., Coyle, J., & Taylor, J. (2012). Intervention description is not enough: Evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials, 13(95), 1–17.
Wolcott, H. F. (1990). On seeking—and rejecting—validity in qualitative research. In E. W. Eisner & A. Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp. 121–152). New York, NY: Teachers College Press.
Yin, R. K. (2009). Case study research: Design and methods (4th ed.). Thousand Oaks, CA: Sage.
1 By framework, we refer to a set of ideas that can help us think through research processes and findings. Shadish, Cook, and Campbell (2002), for example, describe four facets of experimental validity and ways in which validity can be undermined during the course of an experiment. Some have called this the Campbellian validity framework. Lincoln and Guba (1985) offer one of the earlier sets of guidelines for strengthening qualitative inquiry. As another example, O’Cathain (2010) describes a framework for assessing the quality of mixed methods studies.
2 You may have been exposed to this sort of wisdom before. Consider the saying: Believe those who seek the truth; doubt those who say they have found it.
3 A broader notion of internal validity can be conceptualized as the degree to which interpretations of a particular data set are reasonable inferences, causal or otherwise, without getting into the separate question of how well a set of findings applies to new settings outside of the study. But for now, we apply the narrower idea as used in an experimental framework, where one has a research question related to causation.
SAMPLE ANSWER
Designing Quantitative Research
Introduction
The results of a research study are valuable only to the extent that they can be interpreted accurately and with confidence. That accuracy and confidence depend on the validity of the study. Validity here refers to the degree to which sound inferences can be drawn from the results of the study. There are two primary aspects of validity: internal validity and external validity.
Internal Validity
Internal validity is the extent to which the results of a research study are a function of the variables that are systematically manipulated, measured, and observed during the study. Consider a researcher who wants to establish which of two instructional approaches is superior for teaching a mathematical concept in a classroom setting (Haegele & Hodge, 2015). The researcher would have two teachers each use one of the instructional methods and then compare the mean test scores of the two classes (a simple version of such a comparison is sketched after the mitigation discussion below). The validity of this study depends on factors such as each teacher’s efficiency and enthusiasm in using the assigned method and each class’s interest and preparation. Some of the potential threats to internal validity include:
History:
History refers to events, outside of the study itself, that can alter the outcome of a research study. Before conducting a study, it is therefore essential to determine whether relevant prior events are likely to have taken place (Haegele & Hodge, 2015). For instance, a study on the effectiveness of a new approach to teaching a unit on the biology of the nervous system may be compromised by history if many students have already watched a television documentary on the topic.
Maturation:
Maturation refers to changes that occur in the subjects of a study during the research period. These changes are not part of the study itself, but they can affect its results (Haegele & Hodge, 2015). For example, in a study of a lunch or breakfast program, a researcher must consider that weight gain or an increase in height may occur naturally as part of biological growth during the study period.
Mitigating the Potential Threats of Internal Validity
To address the threat of history, a researcher may use a control group selected from the same population as the experimental group (Haegele & Hodge, 2015). Because this group experiences the same history as the experimental group, the effects of history are effectively controlled; shortening the duration of the experiment can also reduce such effects. The effects of maturation are comparable to those of history and can be mitigated in the same ways: selecting the control group from the same population as the experimental group and shortening the study period.
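To show what the comparison of mean test scores in the earlier two-methods example might look like in practice, here is a brief sketch of an independent-samples t-test in Python using SciPy; the scores are invented for illustration, and a real analysis would first verify the test’s assumptions.

```python
# A hedged sketch of comparing mean test scores for two classes taught
# with different instructional methods. All scores are hypothetical.
from scipy import stats

method_a_scores = [72, 85, 78, 90, 66, 81, 74, 88]
method_b_scores = [68, 75, 70, 82, 64, 77, 71, 79]

# Independent-samples t-test on the two classes' scores.
t_stat, p_value = stats.ttest_ind(method_a_scores, method_b_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```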
External Validity
External validity refers to the extent to which a study’s results can be confidently generalized to a larger population beyond the group that participated in the study (Haegele & Hodge, 2015). A researcher therefore needs to ensure that the study’s variables resemble the conditions that exist within the larger population. Some of the potential threats to external validity include:
The Selection-treatment Interaction:
This is the possibility that characteristics of the selected participants interact with some element of the treatment (Haegele & Hodge, 2015). Such characteristics may include the participants’ learning, prior experiences, personality, or any other elements that may interact with the effects of the study.
Effects of an Experimentation Arrangement
This refers to situations in which the participants of a study become aware of their involvement in the study and, as a result, their performance and responses change from what they would otherwise have been.
Mitigating the Potential Threats of External Validity
Possible approaches to mitigating threats to external validity include strengthening the design by adding treatment or control groups and differential waves of measurement (Haegele & Hodge, 2015). A researcher may also consider the use of statistical analysis.
Ethical Issue in Quantitative Research
Ethics can be understood as the development of good research conduct and the making of moral judgments about what constitutes good conduct. In quantitative research, one ethical issue that must be considered is obtaining participants’ consent to take part in a study (Haegele & Hodge, 2015). This can influence design decisions, since researchers must build in efficient procedures for obtaining participants’ consent.
Amenability of Research to Scientific Study Using a Quantitative Approach
This element is vital because it enables a researcher to scientifically isolate the primary causes of his or her observations and thereby provide unambiguous answers to the research study’s questions. Without it, the cause of an effect cannot be established and isolated.
Main Issue Post
The primary issue raised in this post concerns the construction of a social variable that conflates the boundaries of racial identity with the biological differences that exist among races (Haegele & Hodge, 2015). It is essential to consider that an individual’s experience of different races may be systematically different across societies, depending on how those societies recognize racial differences.
An instance of this can be seen in the manner in which race has been socially constructed among Whites in the U.S. Numerous immigrant groups that are now classified as White, including Eastern Europeans and the Irish, were first considered racially different from groups from Northern and Western Europe. Racial identities may thus be viewed as changing as demographic groups, once differentiated from others by political, economic, and social variables, are assimilated.
Response Post
As a researcher, it is important to take cognizance of the processes of social construction and the manner in which such constructions affect the meaning of a study’s variables. This is in consideration of the fact that the way variables are constructed has an immense influence on a study’s validity (Haegele & Hodge, 2015). It therefore requires a clear and concise definition of each variable in a study, with the aim of increasing the validity of the study. It is likewise vital to establish the context in which research data are collected and interpreted.
Conclusion
As determined in this study, internal and external validity play a significant role in research because they determine the confidence and accuracy with which the results of a research design can be interpreted.
Reference
Haegele, J. A., & Hodge, S. R. (2015). Quantitative methodology: A guide for emerging physical education and adapted physical education researchers. Physical Educator, 72, 59–75.
Data Collection Phase of the Research Process Order Instructions: Dear Admin, make sure that all the references be in HARVARD STYLE and find the attached files by email. Thank you.
Data Collection Phase of the Research Process
Once you have established the research question, reviewed the existing literature on your topic and chosen a research methodology, you will be ready to engage in the data collection phase of the research process. Last week, you explored how methodologies help to guide and frame your research. This week, you are looking closely at the methods you will use to collect and analyze the data that will address your research question.
Your assignment for this week is to complete the first draft of the methodology section of your Proposal (Section 3). Please indicate which methodology (such as ethnography, survey research, action research) you have chosen to use for your dissertation research, and discuss how it is consistent with your epistemological stance.
Describe the data collection method(s) you will use (such as questionnaire, interviews, focus groups), and just how these will be sufficient to answer your research question. Include as much detail as you can about your participants and how they will be chosen. Briefly describe how you will analyze the data collected and the tools needed for a proper analysis based on your chosen method.
These methods can, and will, change as you negotiate your access; we are interested at this stage to see how you are connecting your aim and research questions, your understanding of what is written already on the topic (preliminary literature review), and the methods you will use to collect or generate data.
Finally, describe how you will address the political and ethical challenges that may arise as you conduct your research.
Data Collection Phase of the Research Process Sample Answer
Data Collection Phase of the Research Process
Type of Research
This study will use a qualitative research method to determine the underlying reasons why sustainable energy resources in the oil-rich GCC states are affected when oil prices fall. Through an action-oriented approach, the qualitative method will also capture the motivations and opinions of various stakeholders within these states and help frame the problem by developing hypotheses for potential quantitative research (Burrell, & Morgan, 2011, pp. 29). The study will investigate the why and how of the issue being addressed and will also examine the phenomenon through observation of numerical representations and statistical analysis. While a questionnaire will be administered to respondents for the purpose of statistical representation, interviews with experts in the field will also remain essential to the study.
Sampling Method
The primary sampling method for this study is randomized sampling, which will help in obtaining results that represent the oil-rich GCC states as a whole and the manner in which they are affected by a fall in oil prices. A list of GCC states and organizations will be compiled, and three of them will be picked through a randomized sampling method (Easterby-Smith, Thorpe, & Jackson, 2012, pp. 98). These states and organizations will be the primary source of respondents for the research. The organizations and stakeholders in the GCC states will be contacted through emails in which questionnaires will be administered, and a letter of consent will be sent along with the questionnaire, including the protocol of the study.
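As a simple illustration of the randomized selection described above, the sketch below draws three entities at random from a candidate list using Python’s standard library; the names are placeholders, not actual study sites.

```python
# A minimal random-sampling sketch. The candidate list is hypothetical;
# in the study it would hold the actual GCC states and organizations.
import random

candidates = ["State A", "State B", "State C", "Org D", "Org E", "Org F"]

random.seed(42)  # fixing the seed makes the draw reproducible and auditable
selected = random.sample(candidates, k=3)  # simple random sample without replacement
print(selected)
```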
Respondents
The respondents of this research study will be drawn from the oil-rich GCC states: organizations, governments, and individuals. The study dwells on these entities because they are relevant to the research questions and fit the resources and time available for the study (O’Gorman, & Mackintosh, 2015, pp. 103). The randomly sampled respondents will be asked to provide their consent and to answer the questionnaire until the required number of respondents, 50, is reached. In addition, the study will seek the opinions of experts in the field who can shed light on particular areas of the study.
Questionnaire
The questionnaire will gather information about sustainable energy resources in the oil-rich GCC states and how they are affected when oil prices fall. The questionnaire will also provide a backdrop on the benefits of oil to these member states, organizations, governments, and individuals, and the manner in which they are affected by such drops in oil prices (Sang, & Seong-Min, 2013, pp. 93). It will additionally seek the opinions of these states, organizations, governments, and individuals on shifting to other economic activities rather than relying on oil alone for economic development.
Data Analysis
In this process, the study will incorporate analytical procedures designed to draw inductive inferences from the data and thereby produce the results of the study. Although the research is qualitative, it will include statistical procedures within a continual, iterative process in which data are collected and analyzed simultaneously (Toksoz, 2012, pp. 198). The research will also consult statistical specialists to verify the statistical methods used in evaluating the results obtained in the study, so that an appropriate and reliable statistical method is chosen for determining the study’s findings.
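By way of illustration only, the sketch below shows one way the iterative summarizing of questionnaire responses might look in pandas; the column names and values are invented assumptions, not the study’s actual instrument.

```python
# An illustrative sketch of summarizing questionnaire responses as they
# arrive. Column names and values are hypothetical placeholders.
import pandas as pd

responses = pd.DataFrame({
    "respondent_type": ["state", "organization", "state", "individual"],
    "diversify_support": [4, 5, 3, 4],  # 1-5 Likert: support for diversifying
    "price_impact": [5, 4, 4, 3],       # 1-5 Likert: perceived impact of price falls
})

# These summaries can be re-run each time a new batch of responses arrives,
# supporting the simultaneous collect-and-analyze cycle described above.
print(responses[["diversify_support", "price_impact"]].describe())
print(responses["respondent_type"].value_counts())
```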
Political and Ethical Challenges that May Arise in the Study
The study will ensure that appropriate measures are instituted before engaging the respondents. These will include acquiring their consent before they take part in the research study (Toksoz, 2012, pp. 199). It will also be essential to ensure that the confidentiality of the participants is protected and that the study follows all the research protocols required to meet its objectives.
Data Collection Phase of the Research Process References
Burrell, G., & Morgan, G 2011, Sociological paradigms and organizational analysis: Elements of the sociology of corporate life. London: Heinemann, pp. 29-67.
Easterby-Smith, M., Thorpe, R. & Jackson, P 2012, Management Research, 4th ed. London: SAGE Publications, pp. 98-120.
O’Gorman, K. D., & Mackintosh, R 2015, Research Methods for Business and Management, 2nd Edition. Oxford: Goodfellow Publishers Ltd, pp. 103-193.
Sang, K, & Seong-Min, Y 2013, ‘Return and Volatility Transmission Between Oil Prices and Emerging Asian Markets’, Seoul Journal Of Business, 19, 2, pp. 73-93, Business Source Complete, EBSCOhost, viewed 16 June 2016.
Toksoz, M 2012, ‘The Gulf Cooperation Council and the global recession’, Journal Of Balkan & Near Eastern Studies, 12, 2, pp. 195-206, Academic Search Premier, EBSCOhost, viewed 15 June 2016.
Measuring the Impact of Methodological Research Order Instructions: Please reply on this post and make sure all the references are in Harvard style.
Measuring the Impact of Methodological Research
Easterby-Smith, Thorpe and Jackson (2012, p. 18) argued that research methodology refers to the researcher’s way of using techniques and methods to provide consistency to the research and an organized research structure, in terms of the research aim, starting point, design, data type, information analysis and interpretation, and the outcome.
Accordingly, determining the research methodology will help the researcher give a clear account of the reasons for selecting specific techniques and procedures, along with the ways of identifying and selecting information, as well as the theoretical perspective that supports analyzing and interpreting the outcome. This, in turn, supports understanding the research problem and its impacts, and enables the researcher to think through how the research can benefit the economy and society (Bloch et al., 2014).
A stated methodology also allows other researchers to evaluate the validity and the reliability of the research, so that the information and the outcome can be used in other academic research. The research methodology may likewise help practitioners evaluate the research so as to improve practice or solve problems by applying the outcome and information to a similar situation. As Greenhalgh et al. (2016) noted, research impact can be measured through its benefits to health, the economy, and culture, as well as its contribution to the academic and knowledge base of the field, where it can be used both nationally and internationally (Brueton et al., 2014).
Research methodology type
Research methodology can be classified into three types, based on the research problem: quantitative, qualitative, and mixed methodology, which integrates quantitative and qualitative methods.
For instance, in studying the impact of specific factors on employees’ job satisfaction, the researcher may use quantitative methods such as a survey to collect information in statistical form.
In contrast, in research that aims to understand the factors that may affect employees’ job satisfaction, the researcher may use in-depth interviews with managers and employees to identify the job satisfaction factors.
Alternatively, the research may combine both methods, using interviews to determine the factors and open-ended questionnaires to understand the impact of those factors on job satisfaction and to collect more information.
The author believes that in studying any research problem the researcher has to consider the trade-offs between methods and choose the one most beneficial to the research. For instance, if the study aims to examine a problem in a large population in order to generalize the outcome, a survey fits the research better than interviews, although the researcher still has to weigh cost and time.
The choice also depends on the advantages and disadvantages of each method. Selecting a specific method depends on several factors, including the research problem, research time, financial support, management support, cultural constraints, and the researcher’s skills.
The Author’s Research Methodology
The author’s research addresses the challenges facing home-based business (HBB) fashion retailers who use Instagram as a trading platform in Saudi Arabia. The author intends to use both quantitative and qualitative methods to investigate the research questions.
The author will take a constructionist epistemological perspective, using a case study method with semi-structured in-depth interviews of home-based business cases as a primary source, as well as data sources such as financial, market, and product reports as secondary sources.
In order to understand and shape awareness of HBB in the local community, and in consideration of the subject’s powerful impact on the research outcome (Easterby-Smith, Thorpe & Jackson, 2012, p. 76), the research will answer the following questions:
1- What are the characteristics of home-based fashion retail in SA?
2- What are the challenges of home-based fashion retail businesses that use Instagram as the trading platform in Saudi Arabia?
3- What are the strategies used to overcome the challenges?
4- What are the main social media used by home-based fashion retail in SA?
5- What is the competitive advantage of home-based fashion retail in SA?
6- What is the market share of home-based fashion retail in SA?
One of the advantages of investigating the research problem through an in-depth interviewing method is gaining a deeper understanding of the problem. As Easterby-Smith, Thorpe and Jackson (2012, p. 126) noted, it provides insight and information into social and organizational realities by uncovering the views, perceptions, and opinions of individuals through the language they use.
The author will also adopt a positivist epistemological perspective in the second stage of the research, distributing surveys to a systematic random sample drawn from lists provided by agencies that support HBB. The author will use telephone or web-based surveys, depending on each HBB’s familiarity with technology, in order to answer the question of how HBB characteristics relate to the challenges they face. A sketch of drawing such a sample appears below.
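To suggest what drawing such a systematic random sample from an agency-provided list might look like, here is a brief sketch with a random start and a fixed interval; the list of businesses and the sample size are hypothetical.

```python
# A hedged sketch of systematic random sampling: random start, then every
# k-th entry. The HBB list below stands in for the agency-provided list.
import random

hbb_list = [f"HBB_{i:03d}" for i in range(1, 201)]  # hypothetical 200 businesses
sample_size = 20
k = len(hbb_list) // sample_size  # sampling interval

random.seed(7)
start = random.randrange(k)  # random start within the first interval
sample = hbb_list[start::k][:sample_size]
print(sample[:5], "...")
```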
One of the advantages of using the survey, as Easterby-Smith, Thorpe and Jackson (2012, p. 79) noted, is that it will allow the author to gather information from respondents of different backgrounds, interests, and social classes, which may help the author generalize the outcome.
Measuring the Impact of Methodological Research Conclusion
Identifying the research methodology is a key factor in determining the reliability and the consistency of the research from both academic and practitioner perspectives. The researcher may use a qualitative, quantitative, or mixed method, based on several factors such as the research problem, research time, financial support, and management support, as well as the advantages and disadvantages of each method, which the researcher has to trade off to fit the research design.
The author intends to use a qualitative method, through semi-structured in-depth interviews across several case studies, to understand the challenges and characteristics of HBB, as well as a quantitative method, through distributing surveys to a systematic random sample, to investigate the relation between the challenges and the characteristics of HBB.
Measuring the Impact of Methodological Research References
Bloch, C, Sørensen, M, Graversen, E, Schneider, J, Schmidt, E, Aagaard, K, & Mejlgaard, N (2014) ‘Developing a methodology to assess the impact of research grant funding: A mixed methods approach’, Evaluation And Program Planning, 43, pp. 105-117, ScienceDirect, EBSCOhost, [online]. Available from:
http://ejournals.ebsco.com.liverpool.idm.oclc.org/Direct.asp?AccessToken=951I9I18XZJ1DI41E914M5KKJII981D199&Show=Object (Accessed: 17 Jun 2016).
Brueton, V, Vale, C, Choodari-Oskooei, B, Jinks, R, & Tierney, J (2014) ‘Measuring the impact of methodological research: a framework and methods to identify evidence of impact’, Trials, 15(1), pp. 2-17, Academic Search Complete, EBSCOhost [online]. Available from:
http://web.b.ebscohost.com.liverpool.idm.oclc.org/ehost/pdfviewer/pdfviewer?sid=e4c3f91e-bd84-4d4e-9e05-4839ac0ffa66%40sessionmgr101&vid=1&hid=106 (Accessed: 17 Jun 2016).
Easterby-Smith, M., Thorpe, R. & Jackson, P. (2012) Management research. 4th ed. London: SAGE Publications
Greenhalgh, T, Raftery, J, Hanney, S, & Glover, M (2016) ‘Research impact: a narrative review’, BMC Medicine, 14, pp. 1-16, Academic Search Complete, EBSCOhost [online]. Available from:
http://eds.a.ebscohost.com.liverpool.idm.oclc.org/eds/pdfviewer/pdfviewer?vid=4&sid=6fe79b2f-7ac8-4545-aff4-19835e04ca75%40sessionmgr4005&hid=4108 (Accessed: 17 Jun 2016).
Measuring the Impact of Methodological Research Sample Answer
Reply to student’s post
In this post, the student has rightly provided the definition of the term research methodology, citing a reliable source. In his own words, the student has noted that research methodology is understood as the researcher’s way of making use of methods and techniques to provide consistency to the research study as well as an organized research structure. To add to what the student has stated, research methodology generally means more than the techniques the researcher intends to employ to gather data; it is important to include a consideration of the theories as well as the concepts underlying those methods (Easterby-Smith, Thorpe & Jackson 2012).
Williams (2011) reported that establishing a research methodology to employ helps a researcher in obtaining a good understanding of the relevant data collection and analysis methods to use; this is something which the student has correctly pointed out in the post. Furthermore, a research methodology is important since it describes the broad philosophical underpinning to the selected research methods, including whether the researcher is using quantitative, qualitative, or mixed methods (Bloch et al. 2014). The case study method for data collection which the student intends to use is appropriate considering the research methodology and topic.
The student in the post has correctly stated that the main kinds of research methodologies commonly used by researchers are the quantitative methodology, qualitative methodology, and mixed methodology. To add on this, quantitative research usually reports results of the study with the use of graphs, tables, data and statements which address every research hypothesis and/or research questions, and are underpinned by positivist research paradigm (O’Gorman & MacIntosh 2015). The qualitative research studies are usually linked to unstructured interviews, participant observations, and focus groups, and the findings are presented by means of case studies, personal accounts, narratives in addition to other ways which identify abstracts or themes that help in addressing the research question. The mixed research methodology entails combining quantitative and qualitative methodologies (Brueton et al. 2014).
In addition, the student is spot-on in noting that, before carrying out a research study, the researcher should consider the trade-offs between the different research methods and settle on the one that is most beneficial for the study. This is very true, bearing in mind that some methodologies or research studies may fit certain research methods and not others. For example, if the researcher intends to include a large number of participants as study subjects, say 100 of them, the researcher cannot interview all of them because doing so would be time consuming and costly. However, a questionnaire survey would be appropriate since it is quick, easy, and inexpensive to administer.
The student intends to employ a mixed research methodology in his study. This is important since it helps to overcome the shortcomings of both the quantitative and qualitative methodologies when each is used individually and exploits the advantages and strengths of both methodologies (Williams 2011). The student also intends to employ both the positivism and constructivism epistemological perspectives in his study which is aimed at exploring the challenges that home-based business (HBB) fashion retailers who use Instagram social media channel as a trading platform face in Saudi Arabia. The constructivism perspective is a philosophical stance which holds that reality is something that is socially constructed. Conversely, the positivism philosophical approach underscores that knowledge has to be obtained by means of measurable and observable facts, and is thus regarded as a rigid scientific inquiry (Greenhalgh et al. 2016). Using both the positivism and constructivism epistemological stances in the same research study may prove to be somewhat complicated for the student down the road. It would be better to focus on either of them than both.
Measuring the Impact of Methodological Research References
Bloch, C., Sørensen, M., Graversen, E., Schneider, J., Schmidt, E., Aagaard, K., & Mejlgaard, N 2014, Developing a methodology to assess the impact of research grant funding: A mixed methods approach, Evaluation And Program Planning, 43, pp. 105-117, ScienceDirect, EBSCOhost.
Brueton, V., Vale, C., Choodari-Oskooei, B., Jinks, R., & Tierney, J 2014 Measuring the impact of methodological research: a framework and methods to identify evidence of impact, Trials, 15(1), pp. 2-17, Academic Search Complete, EBSCOhost
Easterby-Smith, M., Thorpe, R. & Jackson, P 2012, Management research. 4th ed. London: SAGE Publications
Greenhalgh, T., Raftery, J, Hanney, S, & Glover, M 2016, ‘Research impact: a narrative review’, BMC Medicine, 14, pp. 1-16, Academic Search Complete, EBSCOhost.
O’Gorman, K. D., & MacIntosh, R 2015, Research Methods for Business and Management, 2nd Edition, Goodfellow Publishers Ltd: Oxford.
Williams, C 2011, Research Methods. Boston, MA: The Clute Institute.
Analysis and Synthesis of Prior Research Order Instructions: The writer will read the article and offer constructive feedback, using peer-reviewed articles to support the facts.
Analysis and Synthesis of Prior Research
The writer will use the APA 6th edition while completing this paper. It is important for the writer not to simply indicate agreement, but to offer constructive criticism while giving alternative suggestions where necessary.
Analysis and Synthesis of Prior Research Sample Answer
Analysis and Synthesis of Prior Research
Constructive Criticism
The author of this material clearly notes in the introduction that, to achieve a competitive edge, organizations need to include an efficient continual process for encouraging and improving the performance and innovativeness of their employees (Chronéer & Backlund, 2015). However, the introductory part of an article, like the conclusion, should contain no references. Moreover, the author has not provided a conclusion to sum up the entire article and offer a recommendation on the way forward for the organization in enhancing its performance.
The author of the material makes a clear and precise case for the need for cultural diversity within an organization. It is, however, essential to depict how diversity can be managed and initiated within an organization, and the manner in which an organization can invest in this element to achieve competitive advantage (Odoardi, Montani, Boudrias, & Battistelli, 2015).
It is also appropriate that the roles of the leaders and the management in initiating diversity within the organization are detailed (Menke, 2013). The manner in which this element is viewed within the organization plays a significant role in ensuring inclusion among the different employees of an organization.
In recommending actions to be taken, it is imperative for the author of this material to provide appropriate definitions where relevant, to enable readers of the article to understand its intent and provisions with ease. The inclusion of examples would also add substance to the paper, a factor that needs to be considered.
Analysis and Synthesis of Prior Research References
Chronéer, D., & Backlund, F. (2015). A Holistic View on Learning in Project-Based Organizations. Project Management Journal, 46(3), 61-74. doi:10.1002/pmj.21503
Menke, M. M. (2013). Making R&D portfolio management more effective. Research Technology Management, 56(5), 34-44. Retrieved from http://search.proquest.com/
Odoardi, C., Montani, F., Boudrias, J., & Battistelli, A. (2015). Linking managerial practices and leadership style to innovative work behavior. Leadership & Organization Development Journal, 36, 545-569. doi:10.1108/LODJ-10-2013-0131
Look at the possible answers below and decide which is the correct format for each reference.
1. Book
A. Online learning paradigms in the 21st century. London: Houghton Mifflin Publishers. Marsh, C. J. 2008.
B. Marsh, C.J. (2008) Online learning paradigms in the 21st century. London: Houghton Mifflin Publishers.
C. Online learning paradigms in the 21st century. Marsh, C. J. 2008.
D. Marsh, C. J. Online learning paradigms in the 21st century. London: 2008.
E. None of the above: Write your own reference in correct format
Marsh, C. J. 2008, Online learning paradigms in the 21st century, London: Houghton Mifflin Publishers.
2. Chapter in an edited book
A. Green, S. (2007) ‘Dancing the Tango in tropical settings’. In: Wilcox, R. & Francis, D. (ed). Exotic musical interludes of the European Union. London: Cambridge University Press, pp.385-92.
B. Wilcox, R. & Francis, D. eds. Exotic musical interludes of the European Union. Dancing the Tango in tropical settings. Green, S. 2007, pp. 385-392. London: Cambridge University Press.
C. Green, S. 1980. Dancing the Tango in tropical settings, pp. 385-92. In: Wilcox, R. and Francis, D. eds. Exotic musical interludes of the European Union. London: Cambridge University Press.
D. Wilcox, R. and Francis, D. eds. Exotic musical interludes of the European Union. London: Cambridge University Press, pp. 385-92.
E. None of the above: Write your own reference in correct format
Green, S. 2007, ‘Dancing the Tango in tropical settings’. In: Wilcox, R. & Francis, D. (ed). Exotic musical interludes of the European Union. London: Cambridge University Press, pp.385-92.
3. Journal article
A. Journal of Natural Historical Settings, 2007, pp. 142-157: Stalking the wild animals on the north shore of Alaska and Canada. Unruh, J.M. and Thomas, D.W.
B. Unruh, J.M. and Thomas, D.W. Stalking the wild animals on the north shore of Alaska and Canada. Journal of Natural Historical Settings 84(2).
C. Stalking the wild animals on the north shore of Alaska and Canada. Unruh, J.M. and Thomas, D.W. Journal of Natural Historical Settings 84(2), 2007.
D. Unruh, J.M. & Thomas, D.W. (2007) ‘Stalking the wild animals on the north shore of Alaska and Canada’, Journal of Natural Historical Settings, 84 (2), pp.142-157.
E. None of the above: Write your own reference in correct format
Unruh, J.M. & Thomas, D.W. 2007, ‘Stalking the wild animals on the north shore of Alaska and Canada’, Journal of Natural Historical Settings, 84 (2), pp.142-157.
4. Electronic Internet Journal Article
A. Smith, L.A. et al. A case study on change within a higher educational institutions and its correlation to higher alcoholism rates among instructors. Electronic Journal of Constant Change [Online] 5(57). 2005. Available at: http://www.ejocc.com/articles/smith-356778.shmpt.
B. Smith, L.A. 2005. ‘A case study on change within a higher educational institutions and its correlation to higher alcoholism rates among instructors’ [Online article]. Available at: http://www.ejocc.com/articles/smith-356778.shmpt (Accessed: 25 October 2008).
C. Smith, L.A. (2005) ‘A case study on change within a higher educational institutions and its correlation to higher alcoholism rates among instructors’, Electronic Journal of Constant Change, 5 (57), pp.479-495 [Online]. Available from: http://www.ejocc.com/articles/smith-356778.shmpt (Accessed: 25 October 2008).
D. Electronic Journal of Constant Change [Online], 5(57): ‘A case study on change within a higher educational institutions and its correlation to higher alcoholism rates among instructors’. Smith, L.A. Available at: http://www.ejocc.com/articles/smith-356778.shmpt (Accessed: 25 October 2008).
E. None of the above: Write your own reference in correct format
5. Emails
A. Gruengard, E. 2008. A simple explanation of file storage in Blackboard systems. (a personal communication via email to C. Marsh).
B. Gruengard, E. A simple explanation of file storage in Blackboard systems. [Received: 21 September 2008]. [Accessed: 21st September 2008].
C. Gruengard, E. A simple explanation of file storage in Blackboard systems. [Accessed: 21 September 2008].
E. None of the above: Write your own reference in correct format
6. Web (WWW or WWW2) documents
A. Williams, S. 2008. Banking in America: how America leads the world in stability within financial institutions [Online]. New York: NBC. Available at: http://www.wsj.com/central/isd/cite/index.htm [Accessed: 25th October 2008].
B. NBC. Banking in America: how America leads the world in stability within financial institutions [Online]. Williams, S. 2008. http://www.wsj.com/central/isd/cite/index.htm [Accessed: 25th October 2008].
C. Williams, S. Banking in America: how America leads the world in stability within financial institutions [Online]. New York: Wall Street Journal. Available at: http://www.wsj.com/central/isd/cite/index.htm [Accessed: 25th October 2008].
D. Williams, S. L. (2008) Banking in America: how America leads the world in stability within financial institutions [Online]. New York: Wall Street Journal. Available from: http://www.wsj.com/central/isd/cite/index.htm (Accessed: 25 October 2008).
E. None of the above: Write your own reference in correct format
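As an aside, the Harvard book pattern tested above can be captured in a tiny formatting helper. This is a hypothetical sketch for illustration only; the function name is invented and it comes from no real citation tool.

def harvard_book(author, year, title, place, publisher):
    # Harvard book pattern: Author (Year) Title. Place: Publisher.
    return f"{author} ({year}) {title}. {place}: {publisher}."

print(harvard_book("Marsh, C.J.", 2008,
                   "Online learning paradigms in the 21st century",
                   "London", "Houghton Mifflin Publishers"))
# -> Marsh, C.J. (2008) Online learning paradigms in the 21st century. London: Houghton Mifflin Publishers.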
Please make sure that the referencing is in Harvard style, and find the attached files which have been sent by email.
Management research relies upon a variety of different data collection methods, such as survey, interview and observation. Different methods produce different types of data, each requiring different analytical approaches. Look to your text for explanations of the various methods you can use in your research and the types of analyses that can be used to make sense of the data you collect.
In an 850-word response, post your answers to the following questions:
•How does the adoption of a particular methodology affect the researcher’s choice of methods for data collection and analysis?
•Which methods might you choose, bearing in mind your chosen methodology and epistemological stance?
SAMPLE ANSWER
Data collection methods
There is a wide variety of methods that researchers can use to collect relevant data for answering their research questions, ranging from observation to structured and unstructured interviews, focus groups, questionnaire surveys, and content analysis (Williams 2011). The data collected with each of these methods necessitates the use of different data analysis techniques. This paper provides a detailed description of how adopting a certain research methodology affects the investigator’s choice of data collection and analysis methods. The paper also specifies the data collection method that would be used by the researcher, that is, me.
The use of a particular research methodology generally affects the investigator’s selection of data-gathering and data-analysis methods, since specific methodologies go with certain methods for collecting and analyzing data. In other words, a particular methodology may fit with a particular method of data collection and data analysis but not with a different method of collecting and analysing data (O’Gorman & MacIntosh 2015). For example, the positivist approach/methodology fits well with gathering data through structured interviews or questionnaires that limit the answers respondents can give, such as to a fixed set of options or Yes/No answers, and with analyzing the resulting data using quantitative techniques (Burrell & Morgan 2011).
However, the positivist methodology is not appropriate for gathering data through unstructured, open-ended interview questions where participants can give detailed, in-depth, comprehensive answers (Toksoz 2012). The social constructivism methodology, by contrast, provides understanding from the interaction of expertise, knowledge, and interests between dissimilar parties, treating reality as socially constructed. As such, this methodology fits with open-ended interview questions, participant observation, and focus groups, and with analyzing the data using qualitative techniques that identify themes.
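The fit described above can be summarized in a simple lookup. The groupings below are a simplified illustration of the usual pairings, not an exhaustive taxonomy:

stance_to_methods = {
    "positivism": {
        "collection": ["structured questionnaires", "structured interviews"],
        "analysis": "quantitative (statistics, tables, hypothesis tests)",
    },
    "social constructivism": {
        "collection": ["unstructured interviews", "focus groups", "observation"],
        "analysis": "qualitative (themes, narratives, case studies)",
    },
}
for stance, fit in stance_to_methods.items():
    print(f"{stance}: collect via {', '.join(fit['collection'])}; analyze with {fit['analysis']}")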
Chosen method: Interviews
Bearing in mind the selected methodology and epistemological stance, that is, the constructivist methodology, the method that would be used for data collection is interviews. Interviews can be carried out over the telephone or in person, and can be semi-structured, unstructured, or structured (Williams 2011). Qualitative interviews would be carried out by the researcher so as to obtain a broad representation of views from every stakeholder involved in the gas/oil sector in the Gulf Cooperation Council (GCC) member states (Sang & Seong-Min 2013).
During the interviews, the researcher will ask questions that are clear, focused, and open-ended, thereby encouraging open-ended responses from the interviewees. The researcher will use unstructured interviews with open-ended questions, as this allows participants to provide the more detailed information needed to answer the research questions. Structured closed-ended questions would not be used in the interviews, given that such questions ask a standard set of questions which do not allow the interviewee to give in-depth responses (O’Gorman & MacIntosh 2015). Interviews can be carried out either over the telephone or face-to-face. Telephone interviews are by and large less costly and less time-consuming, and the investigator has ready access to any study participant who has a telephone or mobile phone. Even so, a shortcoming of telephone interviews is that the response rates may not be as high as those obtained with face-to-face interviews, although they are significantly higher in comparison with mailed questionnaire surveys.
On the other hand, face-to-face interviews have the unique advantage of allowing researchers to build rapport with potential study subjects and thus gain their cooperation. Face-to-face interviews, as Williams (2011) pointed out, generate the highest response rates in survey research. In addition, face-to-face interviews enable researchers to clarify unclear answers and, whenever appropriate, seek follow-up information from the participants. Even so, the shortcomings of this interview technique include the fact that it is not practical when large samples are involved, and it may be expensive and time-consuming to carry out. All in all, the researcher in the proposed study will conduct in-depth interviews, considering that this would be a qualitative research study assuming the constructivist methodology in determining the effect of the oil price drop on Gulf Cooperation Council countries.
Conclusion
In conclusion, the use of any given research methodology affects the investigator’s selection of data collection and data analysis methods, since particular methodologies fit with certain data-gathering and data-analysis methods but not with others. Put simply, a particular methodology may fit with one method of data collection and analysis but not with another. Bearing in mind the selected methodology and epistemological stance, the method that will be used for data collection is interviews.
References
Burrell, G., & Morgan, G 2011, Sociological Paradigms and Organisational Analysis: Elements of the Sociology of Corporate Life, London: Heinemann.
Easterby-Smith, M., Thorpe, R. & Jackson, P 2012, Management Research, 4th ed., London: SAGE Publications.
O’Gorman, K. D., & MacIntosh, R 2015, Research Methods for Business and Management, 2nd ed., Oxford: Goodfellow Publishers Ltd.
Sang, K., & Seong-Min, Y 2013, ‘Return and Volatility Transmission Between Oil Prices and Emerging Asian Markets’, Seoul Journal of Business, 19(2), pp. 73-93, Business Source Complete, EBSCOhost, viewed 7 June 2016.
Toksoz, M 2012, ‘The Gulf Cooperation Council and the global recession’, Journal of Balkan & Near Eastern Studies, 12(2), pp. 195-206, Academic Search Premier, EBSCOhost, viewed 7 June 2016.
Williams, C 2011, Research Methods, Boston, MA: The Clute Institute.
1- Mayer, D. K. (2012). Anatomy of a drug shortage. Clinical Journal of Oncology Nursing, 16(2), 107-108.
2- McKeever, A. E., Bloch, J. R., & Bratic, A. (2013). Drug shortages and the burden of access to care: A critical issue affecting patients with cancer. Clinical Journal of Oncology Nursing, 17(5), 490-495. doi:10.1188/13.CJON.490-495
View the following website:
3- Food and Drug Administration website: www.fda.gov/Drugs/DrugSafety/DrugShortages/default.htm
In the middle of a patient’s course of chemotherapy, the medication is no longer available due to manufacturing issues. The chemotherapy was prescribed for this patient using scientific evidence and patient outcomes.
Answer the following Questions:
1-How does the shortage of chemotherapy drugs impact health care?
2-Are workarounds such as substitute medications a safe alternative? Why or why not?
3-Identify an intervention that you can implement as the RN to advocate for safe patient care and positive patient outcomes when drug shortages occur.
Requirements of the paper:
1-APA style
2-275 words minimum
3-Two citations using the articles mentioned above
SAMPLE ANSWER
The shortage of chemotherapy drugs compromises patient care and increases healthcare costs. The delivery of healthcare is also affected, leading to poor adherence to research protocols in the health sector. Patient safety is not guaranteed when drugs are in short supply. There have been many cases of cancelled and delayed treatments, which negatively impact healthcare. As a result of the delays and cancellations, patients suffer fatal side-effect toxicities and chemotherapy complications (Mayer, 2012). Deaths also occur due to the medical errors resulting from the shortage of drugs. Drug shortages have also negatively impacted medical reimbursement, and healthcare is affected by the many reimbursement challenges that result (McKeever et al., 2013).
The shortage of chemotherapy drugs has prompted clinicians and pharmacists to find alternative means of solving the crisis. Pharmacists have become proactive and provided lists of alternative agents for the drugs that are in shortage. The alternative agents are compared to the original drugs with respect to dosing, onset of action, and duration of clinical effect. The main reason for such workarounds is to deal with the daily challenges of drug shortages and the future problems they might bring. Pharmacists and medical practitioners seek to support patient care by offering alternatives that act as substitute medications (Mayer, 2012).
One of the intervention strategies I would adopt is providing education and putting more focus on illness prevention, for example by teaching better habits, nutrition, and exercise methods. Communicating with colleagues about the issue would ensure they have a collective voice when it comes to drug supply shortages (McKeever et al., 2013). Advocating for a system that gives nurses a vital and exclusive perspective on patient care would contribute to more positive outcomes among patients. Starting and providing support groups for patients and their family members is essential when interacting with them during a drug shortage crisis. In a nutshell, implementing an efficient and reliable protocol goes a long way toward ensuring that drug shortages are properly addressed.
References
Mayer, D. K. (2012). Anatomy of a drug shortage. Clinical Journal of Oncology Nursing, 16(2), 107-108.
McKeever, A. E., Bloch, J. R., & Bratic, A. (2013). Drug shortages and the burden of access to care: A critical issue affecting patients with cancer. Clinical Journal of Oncology Nursing, 17(5), 490-495. doi:10.1188/13.CJON.490-495
The objective of research design and methodology is to help us organize our research so that it can achieve our research goals. The selection of an appropriate research methodology will greatly depend on our epistemological perspective (Easterby-Smith, Thorpe & Jackson, 2012). Both positivism and social constructionism have a number of methodologies that we can choose from, depending on our research problem and the broader context in which we are going to carry out our research. For positivism, researchers can choose from experimental and quasi-experimental methods. And as we know, the social constructionism approach is more concerned with the several possible truths and outcomes of a piece of research. Constructionism does not hold that there is one absolute truth, and as such it offers a larger group of methodologies that researchers can choose from, such as action research, cooperative inquiry, archival research, ethnography, and narrative methods.
After reviewing the different available methodologies, I initially prefer the narrative method over the others. This method holds that stories are an important part of an organization and should be given due consideration in research. The argument is that there are insights and information about organizations that can only be obtained by learning the organizational stories, rather than through conventional methods (Easterby-Smith, Thorpe & Jackson, 2012). These stories can be gathered either through participant observation or through conversations and interviews with relevant people in the organization. The advantage of this method is that it provides researchers with a broad view of the research problem and the situation in the target organization(s). Stories are also helpful in understanding the relationship between employees and the organization, so I expect the method to be relevant to my proposed research problem, i.e. the cultural implications of 360-degree performance appraisal in Kabul-based non-profit organizations. My previous employer, FAO, decided to adopt the 360-degree performance feedback approach back in 2011 but faced a lot of problems in the process. I was with FAO at the time and could identify some of the reasons why this initiative was not successful. Recalling those issues, I feel a narrative method can give me greater flexibility to carry out in-depth research by talking to FAO employees and listening to their stories about why the 360-degree initiative did not succeed.
The case method seems to be another approach which could be relevant and appropriate for my research problem. In this method, the researcher focuses on one or a small number of organizations and studies them in depth. Advocates of the positivist approach support multiple case studies, while those in favour of social constructionism prefer a single case study (Easterby-Smith, Thorpe & Jackson, 2012), so it might be prudent for me to consider focusing on one particular organization. I think it would be a particularly appropriate approach considering that most non-profit organizations in Kabul have similar structures and systems. My only concern with the case method is that the collection of data will require a reasonable amount of time. Given that we have around nine months for our dissertation, I hope that this amount of time will suffice for the case method approach.
Using the case method, Swain et al. (2004) carried out their research on the City of Milwaukee Health Department (MHD). MHD had been using the conventional method of performance appraisal, i.e. the supervisor-to-employee approach. The authors investigated whether employee performance would improve if the MHD adopted the multi-rater 360-degree performance appraisal system, conducting interviews with senior managers and other employees in the organization. The use of 360-degree performance appraisal has been successful in private sector organizations, but the question was whether a public sector organization like MHD could also benefit from adopting such a system. This reminds me of the challenges we faced at my previous employer FAO (which is also a non-profit UN organization) while trying to adopt the 360-degree performance appraisal. Initially there was a lot of optimism about the initiative, but the end result was not as satisfying as we expected. MHD’s story is different, however, as they were successful in implementing the 360-degree performance feedback approach. Unlike FAO, MHD decided to involve employees at all levels in the development and implementation of the program. This can be a relevant area for my research, since in the case of FAO our senior management did not consider training or involving employees in the process of deploying the 360-degree performance appraisal. At the time I felt FAO employees did not take ownership of the new system and simply accepted it as an administrative requirement imposed on them by senior management.
On the other hand, Lee and Akhtar (1996) used a mixed method of research to investigate the determinants of employee willingness to accept feedback for improving their performance. They conducted their research in two phases. In the first phase they distributed a questionnaire to 100 employees in both public and private sector organizations; in the second phase, they conducted focus group interviews with employees. Based on the results, they identified the objectives of appraisal, performance planning, and the fairness and accuracy of performance appraisal as vital determinants of whether employees have a positive impression of, and attitude towards, performance feedback. In this case, the use of a mixed method seems a suitable approach, as the research problem is not very complex or broad. The authors were inquiring about the determinants of employees’ willingness to accept performance feedback, for which a questionnaire approach combined with interviews could work well. However, the same approach may not be appropriate for my research on 360-degree performance appraisal and its cultural implications, as a questionnaire may not be able to capture the different perspectives, ideas, and outcomes in the context of a complex phenomenon like societal or organizational culture.
References:
Easterby-Smith, M., Thorpe, R. & Jackson, P. (2012) Management research. 4th ed. London: SAGE Publications.
Lee, J. & Akhtar, S. (1996), “Determinants of employee willingness to use feedback for performance improvement: cultural and organizational interpretation”, International Journal of Human Resource Management, 7 (4), pp. 878-890 [Online]. Available from: https://liverpool.idm.oclc.org/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=5819396&site=eds-live&scope=site (Accessed: 09 June 2016).
Swain, G. et al. (2004), “Three hundred sixty degree feedback: Program implementation in a local health department”, Journal of Public Health Management and Practice, 10 (3), pp. 266-271 [Online]. Available from: http://ovidsp.tx.ovid.com.liverpool.idm.oclc.org/ (Accessed: 09 June 2016).
SAMPLE ANSWER
Introduction
Positivism and social constructionism stem from philosophies of knowledge and disciplines that have shaped various practices and research over the last centuries. Positivist theories are based on the assumption that conclusive decisions about a world view can be made through examination of evidence, our senses, and empirical research. The social constructionist world view argues that the set of social interactions that exists between certain people can be fully understood through examination of the various factors that contribute to, shape, or influence their circumstances. Positivism draws its foundation from quantitative and empirical data that most positivist researchers believe to be reliable (Fletcher and Baldry 2000).
Three-hundred-sixty-degree feedback systems are very popular in some businesses but have limited application in public health entities (Swain 2004). This feedback system is very effective in boosting employee morale, performance, organizational culture, and other organizational target outcomes. The City of Milwaukee Health Department (MHD) has succeeded in implementing the system for several years. MHD succeeded in perfecting its 360-degree performance and feedback appraisal system by involving its entire staff at all levels during the introduction and subsequent development and implementation of the system. Unlike other organizations such as FAO, which failed to implement the system successfully in the first instance due to lack of goodwill from its employees, MHD realized that the contribution of its employees was crucial to the success of the exercise (Swain 2004).
The linkage between performance and feedback can be related to differences in behavior between leaders and their subordinates, especially in an environment where the leaders practice an authoritarian leadership style. These differences affect the effectiveness of feedback systems.
The factors determining employee willingness to embrace feedback systems are many and varied. Utilizing mixed methods of research, various factors were identified, such as a positive management impression of the whole concept of the feedback system, employee involvement, and fairness and accuracy in performance appraisal systems (Fletcher and Baldry 2000). The case method of researching a limited number of companies and studying them thoroughly in depth provides deeper insight, as advocated by the proponents of positivism. Social constructionism favors a single case study that focuses on a specific organization. The major misgiving is that the case method requires more time for gathering additional data (Easterby-Smith, Thorpe & Jackson 2012). According to Ellinger et al. (2003), job performance and successful appraisal systems augur well with managerial coaching. Employees develop self-awareness and gain a clear understanding of their expectations and responsibilities through systematic feedback from their managers. The objective of feedback systems, besides assessing training needs, is to provide a systematic method of uniform communication (Lee and Akhtar 1996).
To conclude, various research methods can be applied when evaluating the effectiveness of feedback systems. Feedback from customers can be utilized to identify problem areas or strengths and to generate ideas for improving services. MHD succeeded in perfecting its 360-degree performance and feedback appraisal system by involving its employees at all levels during the development and implementation of the system, while FAO failed to implement a successful feedback system due to the non-inclusion of its employees in the introductory stages (Lee and Akhtar 1996). Employees develop self-awareness and gain a clear understanding of their expectations and responsibilities through systematic participation in the entire process.
References
Ellinger, A. D., Ellinger, A. E., & Keller, S. B., 2003, ‘Supervisory coaching behavior, employee satisfaction, and warehouse employee performance: A dyadic perspective in the distribution industry’, Human Resource Development Quarterly, 14(4), pp. 435-458.
Fletcher, C. & Baldry, C., 2000, Journal of Occupational and Organizational Psychology, 73(3), pp. 303-319, September.
Easterby-Smith, M., Thorpe, R. & Jackson, P., 2012, Management research, 4th ed., London: SAGE Publications.
Lee, J. & Akhtar, S., 1996, ‘Determinants of employee willingness to use feedback for performance improvement: cultural and organizational interpretation’, International Journal of Human Resource Management, 7(4), pp. 878-890 [Online]. Available from: https://liverpool.idm.oclc.org/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=5819396&site=eds-live&scope=site (Accessed: 09 June 2016).
Swain, G. et al., 2004, ‘Three hundred sixty degree feedback: Program implementation in a local health department’, Journal of Public Health Management and Practice, 10(3), pp. 266-271 [Online]. Available from: http://ovidsp.tx.ovid.com.liverpool.idm.oclc.org/ (Accessed: 09 June 2016).
Research Integrity Approval Form Order Instructions: I am now ready to submit my research question. I read that this is a free offer from you all on the website. Thereafter I will pay from my 20,000 words.
Research Integrity Approval Form
The outlined research is to allow me to present the basis of my proposed research topic.
Select a topic area and provide a sound rationale for the proposed research; awareness of the value of the research to the industry, organization, and/or practitioner should be shown. Links to topics studied in the Masters in Business Management programme should be identified.
My major is in Sales and Marketing.
I was thinking along the lines of increasing sales through proper modern marketing in Caribbean retail businesses such as banks and retail stores.
The proposal requires about 500 words.
Research Integrity Approval Form Sample Answer
Business School
Research Integrity Approval Form
Section 1: Research details
Name:
Matriculation number:
Date:
Title of proposed dissertation
How Caribbean Retail Stores Can Use Digital Marketing to Increase Sales
Aim:
To evaluate the effect of social media on increasing sales in retail stores in the Caribbean
To determine whether or not social media is the best approach for enhancing sales in the retail business in the Caribbean.
Objectives:
To conduct an external environmental scan to determine the competitive situation of retail business in the Caribbean [PESTEL and Porter’s diamond or similar]
To analyze relevant Governmental policies to determine their impact on the retail sector.
To analyze relevant literature on the effect of social media on increasing sales in retail stores in the Caribbean
To conduct research and analyze findings
To make recommendations to help promote the use of social media in the Caribbean’s retail sector.
Section 2: Details of the research methods to be used
How will the data be collected? Please outline all methods that you propose to utilize (e.g. self-completion questionnaires, structured interviews, semi-structured interviews, focus groups, etc.).
Self-completion questionnaires
What data collection tools will be used? A questionnaire.
Where will the data be gathered (e.g. in an organization, on the street, at an event, online, etc.)?
Online
How will the data sample be selected (e.g. random, cluster, sequential, network sampling, etc.)?
Random selection
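A minimal sketch of what the random selection could look like in practice follows; the sampling frame of email addresses below is hypothetical and for illustration only.

import random

# Hypothetical sampling frame: email addresses of invited business owners and employees.
sampling_frame = [f"owner{i}@example.com" for i in range(500)]
sample = random.sample(sampling_frame, k=50)   # simple random sample without replacement
print(len(sample), sample[:3])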
Section 3 – research subject details
Who will be the research subjects in this study?
Business owners, and employees in the Caribbean retail industry
How will research subjects be invited to take part (e.g. letter, email, asked in lecture, etc.)?
Through email
Explain how informed consent will be achieved.
Informed consent will be obtained voluntarily; participants will be notified of the risks as well as the benefits of the study.
Will participants be free NOT to take part if they choose?
Yes.
Will any individual be identifiable in the findings?
No. To protect their privacy, the researcher will use substitute identities instead.
How will the findings be disseminated? For example, do you plan to provide participating organizations with a summary of findings?
Yes; in this case, the participating retail stores will get a summary of the findings via email.
Is there any possibility of any harm (social, psychological, professional, economic etc) to participants who take part or do not take part? Give details.
No. The researcher will take into consideration the ethical issues stipulated by the IRB so as to protect the human subjects involved in the study. Moreover, the researcher will obtain informed consent from respondents while ensuring that they understand the risks and benefits of the study.
How / where will data be stored? Who will have access to it? Will it be secure? How long will the data be kept? What will be done with the data at the end of the project?
Data will be stored on web servers and protected by a password, and it will be accessed only by authorized persons such as the research team. The data will be kept until the project is completed, after which it will be discarded.
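One possible way to implement the password-protected storage described above is to keep responses encrypted at rest. The sketch below uses the cryptography package’s Fernet recipe, with hypothetical file and field names; it is an illustration, not the actual setup.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # stored separately; only the research team holds it
fernet = Fernet(key)

responses = b'{"respondent": "R01", "q1": "Agree"}'   # hypothetical survey record
with open("responses.enc", "wb") as fh:
    fh.write(fernet.encrypt(responses))               # only ciphertext sits on the server

with open("responses.enc", "rb") as fh:
    print(fernet.decrypt(fh.read()))                  # readable only with the key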
Section 4 – Declaration
I confirm that I have discussed ethical issues with my supervisor.