School of Education Quantitative Research Plan

STATEMENT OF ORIGINAL WORK

I understand that Capella University’s Academic Honesty Policy (3.01.01) holds learners accountable for the integrity of work they submit, which includes, but is not limited to, discussion postings, assignments, comprehensive exams, and the dissertation. Learners are expected to understand the Policy and know that it is their responsibility to learn about instructor and general academic expectations with regard to proper citation of sources in written work as specified in the APA Publication Manual, 6th Ed. Serious sanctions can result from violations of any type of the Academic Honesty Policy including dismissal from the university.

I attest that this document represents my own work. Where I have used the ideas of others, I have paraphrased and given credit according to the guidelines of the APA Publication Manual, 6th Ed. Where I have used the words of others (i.e., direct quotes), I have followed the guidelines for using direct quotes prescribed by the APA Publication Manual, 6th Ed.

I have read, understood, and abided by Capella University’s Academic Honesty Policy (3.01.01). I further understand that Capella University takes plagiarism seriously; regardless of intention, the result is the same.

LEARNER NAME: Bridgette Johnson

LEARNER ID: (kindly insert student ID)

Capella email address: BJohnson44@capellauniversity.edu

MENTOR NAME: Dr. Donna Flood

Date: July 15, 201

School of Education

Research Plan: QUANTITATIVE

This Research Plan (RP), version 2.0, must be completed and reviewed before taking steps to collect data and write the dissertation. In the School of Education, satisfactory completion of this plan satisfies dissertation Milestone 5, indicating that the RP proposal has passed the “scientific merit review” portion of the IRB process.

Specialization Chair’s Approval after Section 1

When you have completed Section 1, along with initial references in Section 5, send the RP to your mentor for review. When your mentor considers it ready, he or she sends it to Dissertation Support to forward to your specialization chair. The chair approves the topic as appropriate within your specialization. You then go on to complete the remaining sections of the RP.

Do’s and Don’ts

  • Do use the correct form! This RP is for QUANTITATIVE designs.
  • Do prepare your answers in a separate Word document. Editing and revising will be easier.
    • Set font formatting to Times New Roman, 11 point, regular style.
    • Do set paragraph formatting (“Format” menu) to no indentation, no spacing.
  • Do copy/paste items into the right-hand fields when they are ready.
  • Don’t delete the descriptions in the left column!
  • Don’t lock the form. That will stop you from editing and revising within the form.
  • Do complete the “Learner Information” (A.) of the first table and Section 1 first.
  • Don’t skip items or sections. If an item does not apply to your study, type “NA” in its field.
  • Do read the item descriptions and their respective instructions. Items request very specific information. Be sure you understand what is asked. (Good practice for IRB!)
  • Do use primary sources to the greatest extent possible as references. Textbooks are not acceptable as the only references supporting methodological and design choices.
  • Do submit a revised RP if, after approval, you change your design elements. It may not need a second review, but should be on file before your IRB application is submitted.

Scientific Merit

The following criteria will be used to establish scientific merit. The purpose of the review is to evaluate whether the study:

  • Advances the scientific knowledge base.
  • Makes a contribution to research theory.
  • Demonstrates understanding of theories and approaches related to the selected research methodology.

GENERAL INSTRUCTIONS

Complete the following steps to request research plan approval for your dissertation:

Topic Approval

  1. Develop topic and methodological approach:
  • Talk with your mentor about your ideas for your dissertation topic and a possible methodological approach.
  • Collaborate with your mentor to refine your topic into a specific educational research project that will add to the existing literature on your topic.
  2. Complete Section 1 of the RP
  • Complete Section 1 addressing the topic and basic methodology and e-mail the form to your mentor for approval. Follow the instructions carefully.
  • Collaborate with your mentor until you have mentor approval for the topic. After you have received mentor approval for Section 1, your mentor will submit these sections to your specialization chair for topic approval via dissertation@capella.edu.

The specialization chair will notify you and your mentor of their approval and will send a copy of the approval to dissertation@capella.edu.

Milestones 3 and 4

  1. Complete Remaining RP
  • After your specialization chair approves the topic and basic methodology, continue to collaborate with your mentor to plan the details of your methodological approach.
  • Once you and your mentor have agreed on clear plans for the details of the methodology, complete the remainder of the RP form and submit the completed form to your mentor for approval.
  • Expect that you will go through several revisions. Collaborate with your mentor until you have their approval of your RP plan.
  • After you have a polished version, you and your mentor should both review the Research Plan criteria for each section, to ensure you have provided the requisite information to demonstrate you have met each of the scientific merit criteria.
  2. After your mentor has approved your RP (Milestone 3), s/he will forward your RP to your Committee for their approval (Milestone 4).
  • Mentor and committee approval does not guarantee research plan approval. Each review is independent and serves to ensure your research plan demonstrates research competency.

Milestone 5

  • After you have obtained mentor (Milestone 3) AND committee (Milestone 4) approvals of the completed RP form, your mentor will submit the completed RP via dissertation@capella.edu to have your form reviewed for Scientific Merit.
  • (a). RP form in review: The scientific merit reviewer will review each item to determine whether you have met each of the criteria. You must meet all the criteria to obtain reviewer approval. The reviewer will designate your RP as one of the following:
  • Approved
  • Deferred for minor or major revisions
  • Not approved
  • Not ready for review
  • Other
  • (b). If the RP has been deferred:
  • The SMR reviewer will provide feedback on any criteria that you have not met.
  • You are required to make the necessary revisions and obtain approval for the revisions from your mentor.
  • Once you have mentor approval for your revisions, your mentor will submit your RP for a second review.
  • You will be notified if your RP has been approved, deferred for major or minor revisions, or not approved.
  • Up to three attempts to obtain research plan approval are allowed. Researchers, mentors, and reviewers should make every possible attempt to resolve issues before the RP is failed for the third time. If a researcher does not pass the scientific merit review on the third attempt, then the case will be referred to the research specialists in the School of Education for review, evaluation, and intervention.
  • While you await approval of your RP, you should be working to complete your IRB application and supporting documents.
  • Once you have gained Research Plan approval (Milestone 5), you are ready to submit your IRB application and supporting documents for review by the IRB team.

Milestone 6

  1. Submit the Approved RP to the IRB:
  • Once you obtain research plan approval, write your IRB application and accompanying materials.
  • Consult the Research at Capella area within iGuide for IRB forms and detailed process directions.
  • You are required to obtain research plan approval before you may receive IRB approval. Obtaining research plan approval does not guarantee that IRB approval will follow.

Milestone 7

  1. Complete the Research Plan Conference call:
  • Once you have gained approval from the IRB, you are ready to schedule your Proposed Research Conference Call. You may not proceed to data collection until you have completed this step.
  • Work with your mentor and committee to set a date for the conference call.
  • Upon successful completion of the Proposed Research Conference Call, your mentor will complete the corresponding Milestone Report and you are ready for data collection.

Researchers, please insert your answers directly into the expandable boxes that have been provided!

A.  Learner and Program Information
(to be completed by Researcher)

Researcher Name

Bridgette Massey-Johnson

Researcher Email

Bridgette.johnson@mps.k12.a.us

Researcher ID Number

XXXXXXXXXXXX

Mentor Name

Donna Flood

Mentor Email

Donna.flood@capella.edu

Specialization

Curriculum and Instruction

Spec Chair Email

Melissa McIntyre melissa.mcintyre@capella.edu

Committee Member

Adrienne Gibson

Email

Adrienne.gibson@capella.edu

Committee Member

Patricia Guillory

Email

Patricia.guillory@capella.edu

 

 

Section 1.  Research Problem, Significance, Question(s), Title: Quantitative

 

1.1  Proposed Dissertation Title

 

(Usually a statement based on the research question–short and to the point.)

 

A PREDICTOR OF STUDENT PERFORMANCE IN GRADE 6 READING: THE CORRELATION BETWEEN STAR READING SCORES AND PERFORMANCE ON THE ALABAMA READING AND MATH TEST (ARMT)

1.2 Research Topic

 

Describe the specific topic to be studied in a paragraph. (Be certain that the research question relates to the topic.)

 

In order to identify and meet the learning needs of students with academic deficiencies in both reading and mathematics, the No Child Left Behind Act of 2002 [NCLB, 2002] (Sanger, 2012, p. 43) mandates that each school district implement the Response to Instruction (RtI) model. Mahoney and Hall (2010) reported that RtI is a service-based model designed to meet the learning needs of students prior to diagnosis and placement in special education settings (p. 1). The model also requires that schools make Adequate Yearly Progress (AYP) based on standards outlined in NCLB. Local Education Agencies (LEAs) are granted Title I funds to assist with ensuring that schools teach state standards to mastery (US Department of Education, 2003). In the event that a school fails to meet AYP for three consecutive years, the management of the school becomes subject to penalties such as compulsory restructuring or takeover by the state.

 

In states such as Alabama, districts have utilized Renaissance Incorporated’s Star Reading and Math Computer Adaptive Tests (CAT) to conduct universal screenings, to diagnose the need for interventions, and to monitor students’ progress (Alabama State Department of Education, 2013). To date, the area of computer adaptive assessment has elicited few quantitative studies; nonetheless, summative tests conducted by the state indicate that the assessments have led to an increase in reading achievement. Based on these findings, this study is designed to determine whether there is a statistical relationship between the STAR reading assessments and student performance on the ARMT reading assessment. The population for the study will consist of approximately 500 sixth grade students from three middle schools in the state of Alabama. The subgroups will be disaggregated according to their level of performance on both the STAR and the ARMT assessments. All sixth graders are assessed using the STAR assessment as an initial screening to determine the degree of intervention necessary for each student based on his or her scaled score.

1.3 Research Problem

 

Write a brief statement that fully describes the problem being addressed. Present this in one sentence or no more than one clear concise paragraph.

 

Statement of the Problem

 

One challenge for educators has been locating an assessment system that helps determine which students are on track to meet performance standards. School districts must find systemic and systematic ways to identify students who struggle with certain concepts and must be able to provide individualized, differentiated intervention and instruction appropriate to students’ needs. It is important that the tool being used is an accurate predictor of scores on the end-of-year state summative assessment.

1.4 Research Purpose

 

Write a brief statement that fully describes the intent of the study or the reason for conducting the study. Present this in one sentence or no more than one clear concise paragraph.

 

The purpose of this research study is to determine the statistical relationship between the STAR reading assessments and student performance on the ARMT reading assessment.
1.5 Research Question(s)

 

List the primary research question and any sub questions addressed by the proposed study. The primary research question should flow logically from the problem statement and purpose statement and be very similar in wording although phrased as a question.  Include alternative and null hypotheses as appropriate.

 

Research Question/Hypothesis

RQ1 – What is the extent of the relationship between student performance on STAR reading benchmark tests and performance on the ARMT?

RH1 – Student performance on the STAR is correlated to their performance on the ARMT.

H01 – Student performance on the STAR is not correlated to their performance on the ARMT.

RQ2 – Is there a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT?

RH2- There is a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT.

H02 – There is not a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT.

RQ3 – Is the STAR formative assessment a predictor of student performance on the standardized ARMT?

RH3 – STAR formative assessment is a predictor of student performance on the standardized ARMT.

H03 – STAR formative assessment is not a predictor of student performance on the standardized ARMT.

Variables:

Two variables will be identified for this study:

Dependent (criterion) variable – students’ scaled scores on the Alabama Reading and Math Test in March 2012.

Independent (predictor) variable – students’ scores on the STAR during the first semester.

 

1.6 Literature Review Section

 

Provide a brief overview of the conceptual framework upon which your study is based. Identify the seminal research and theories that inform your study. Discuss the topics and themes that you will use to organize your literature review. Attach the most current list of references with the Research Plan.

 

Theoretical Framework

 

For the purposes of this study, the conceptual framework is founded on constructivist theories, including those advanced by Piaget (1952), Bruner (1966), Dewey (1933), and Vygotsky (1962). Piaget (1952) is known for his learning stages: sensorimotor, preoperational, concrete operational, and formal operational. Bruner (1966) is remembered for the five E’s: engage, explore, explain, elaborate, and evaluate. Bruner’s (1966) theories contend that students construct their own learning. Dewey (1933) theorized that children should engage in real-life applications and collaboration with other students; he contended that knowledge is constructed from previous experiences. Finally, Vygotsky (1962) focused on scaffolding as a teaching strategy used in conjunction with the student’s zone of proximal development, the “distance between the actual developmental level as determined by independent problem solving and the level of potential development under adult guidance, or in collaboration with more capable peers” (Anderson, 1993, p. 134). This theoretical framework is relevant to the current study because Renaissance Star Reading identifies each student’s Zone of Proximal Development (ZPD), defined as the range of reading that a student can engage in without reaching a level of frustration. STAR uses a grade equivalent to estimate the ZPD, which is based on Vygotsky’s (1962) learning theory.

 

On January 8, 2002, President George W. Bush signed the No Child Left Behind Act into law. One element encompassed in NCLB (2002) is its emphasis on the use of standardized, end-of-course tests in math and reading, with sanctions imposed on schools that do not perform according to state standards. ESEA (1994) and NCLB (2002) placed standardized tests on the priority list, and schools are compelled to utilize the results as a basis for measuring student achievement. In order to monitor progress, schools have adopted various initiatives as a way to focus on results through the collection of data. Tilly (2006) notes that some educators believe that RtI is the newest educational fad.

 

Bianco (2010) and Erickson, Gaumer, Pattie, and Jenson (2012) provided specific details about Tier 1, Tier 2, and Tier 3, which are the foundation of RtI. The first level of instruction, Tier 1, is also known as core reading instruction. Generally, Tier 1 takes place in a regular classroom setting, where students participate in a core, scientifically based reading program. Tier 1 is presented in a whole-group format and includes independent student practice. The scientifically based reading program focuses on word study, vocabulary, fluency, and comprehension. Tier 1 is provided for at least 90 minutes each school day. Students are tested on benchmarks covered during Tier 1 at least three times during the academic year, typically at the beginning, the middle, and the end of the school year. The purpose of testing students is so that teachers can determine and address their students’ learning needs. When a student is unsuccessful during Tier 1, the teacher serves as the sole interventionist. Teachers who provide Tier 1 must continually participate in pedagogical professional development, which provides them with the strategies needed to ensure that each student receives high-quality core reading instruction. Students who do not demonstrate sufficient progress during Tier 1 enter Tier 2, which encompasses a different purpose and format (Bianco, 2010; Erickson et al., 2012).

Bianco (2010) and Erickson et al. (2012) reported that Tier 2 is the second level of reading instruction. The purpose of Tier 2 is to provide students who were identified as struggling readers during Tier 1 with strategic interventions that address their specific learning needs. During Tier 2, struggling readers are provided with instruction that supplements the instruction they received during whole-group lessons.

 

The typical amount of time allotted for Tier 2 is 30 to 60 minutes each school day. Unlike Tier 1, Tier 2 takes place within a small-group setting, which usually consists of no more than six students. Also unlike Tier 1, which requires students to be tested three times a year, during Tier 2 students’ progress is monitored every two weeks. During Tier 2, students receive in-depth, scientifically based reading instruction that also emphasizes word study, vocabulary, fluency, and comprehension, and teachers provide additional modeling, scaffolding, practice, and feedback. Unlike Tier 1, which utilizes only teachers as interventionists, additional personnel such as teacher assistants and other trained individuals designated by the school may provide Tier 2 interventions. However, all Tier 2 interventionists must continually participate in professional development. When a student does not demonstrate adequate progress within the Tier 2 intervention, the student enters Tier 3 (Bianco, 2010; Erickson et al., 2012).

 

The third level of instruction reported by Bianco (2010) and Erickson et al. (2012) is Tier 3. The purpose of Tier 3 is to provide intensive reading instruction for students who continue to demonstrate significant reading difficulties after receiving Tier 1 and Tier 2. Tier 3 is typically provided in addition to Tier 1 and Tier 2 for 30 to 60 minutes each day. Tier 3 consists of intensive reading strategies that target extreme reading deficits. Similar to Tier 2, Tier 3 intervention may include an increased amount of time; however, during Tier 3, teachers may also use different materials and smaller group sizes. Tier 3 is also similar to both Tier 1 and Tier 2 in that it focuses on word study, vocabulary, fluency, and comprehension. However, Tier 3 includes intensive, scientifically based reading instruction designed to provide explicit, systematic, and corrective activities for students. As with Tier 2, Tier 3 interventions may be provided by someone other than the teacher, but all Tier 3 interventionists must participate in ongoing professional development. Additionally, Bianco (2010) and Erickson et al. (2012) note that students may participate in Tier 3 outside of their classroom or in any setting designated by teachers and administrators. The progress of students who participate in Tier 3 is monitored at intervals of one to two weeks.

 

Tilly (2006) asserted that the national initiative has two facets: formative assessment and progress monitoring. RtI simply provides research-based curriculum, instruction, and assessment to students on tiers based on their specific needs as identified through data analysis (Tilly, 2006). Schools are able to identify potential dropouts by analyzing assessment data (Balfanz & Legters, 2004). According to Williams (2009), a benchmark assessment is designed to be an interim assessment that can be used for both formative and summative purposes, allowing educators to monitor students’ progress toward standards mastery and to predict their performance on end-of-year exams.

 

As district schools seek resources to assist them in fully implementing RtI, vendors are eager to sell their newest packaged programs and materials; thus, schools are using computerized instruction and assessment programs to help increase their test scores. Gersten (2008) illustrates data that are useful in identifying students who are at risk of failure in math and reading and who need more intensive instruction. Alabama schools are implementing benchmark assessments in middle schools in order to begin early intervention for students at risk of dropping out; schools must begin early to identify and provide intervention to students who do not master the benchmark assessment (Alabama State Department of Education, 2013). Increasing the graduation rate has become a priority in Alabama schools. This study describes one way of identifying students who may be at risk of failing state-mandated standardized tests.

 

For instance, a ninth grader who is perceived as being on the right track has a higher chance of graduating than a ninth grader who is not on track. Balfanz and Legters (2004) inferred that a student who does not show academic growth in the 9th grade is more likely to fail or drop out. According to Balfanz and Legters (2004), seven states have graduation rates lower than Alabama: Florida, Georgia, Louisiana, Mississippi, Nevada, New Mexico, and South Carolina. On the other hand, demographic data show that white students in Alabama graduate at a rate of 75.8% compared to 65.4% for African Americans.

 

Renaissance STAR reading assessments are one way to assist schools with tracking children’s chances of graduating (Allensworth & Easton, 2005). If used correctly, this type of formative assessment can inform teachers and parents of how well students are responding to research-based instruction. Schools across the nation are seeking ways to reduce the dropout rate in public high schools. The research also draws on discussions by various authors of how middle schools try to identify students who are at risk of failing their assessments, thereby reducing their chances of repeating a grade and dropping out of school. STAR Reading is a computer-based software program that is used as a universal screener and progress monitoring tool for students in grades K-8.

 

According to data released by the USDOE (2012), the percentage of students graduating in Alabama has increased tremendously. In spite of the increase, the percentage of students graduating in the state remains considerably low when compared to the national average. These statistics imply that graduation rates in Alabama are still low and, as a result, the state has been lagging behind its neighboring states.

 

In order to gauge or predict the number of students who are likely to pass end-of-year assessments, districts have begun to implement formative assessments as a quick way to gather data about student performance. Yeo (2009) contended that increased emphasis on scores has prompted districts to utilize predictive assessment tools to identify struggling students early in the school year (Allensworth & Easton, 2005). This type of formative evaluation offers the advantage of collecting data during the instructional process and allows instruction to be adjusted based on the data (Deno & Espin, 1991; Shinn & Smolkowski, 2002).

 

This proposed study will address how formative assessment is utilized to predict students’ chances of success on the end-of-year assessment. Popham (2010, p. 138) contends that formative assessment is a planned process in which assessment elicits evidence on the status of students and is thus used by teachers to adjust their ongoing instructional procedures.

 

With regard to the same topic, Herman (2009) asserts that very little research has been conducted relative to the effectiveness of benchmark testing.  Brown and Coughlin (2007) insist that specific guidelines must be established to analyze the reliability of assessment items prior to utilizing them. Universal screening provides benchmark assessments which are aligned with Alabama standards and administered three times each year.  These assessments are used to formatively place students in learning tiers and to allow teachers to monitor progress and predict performance on state end of course tests (Williams, 2009).

 

It is imperative that students demonstrate high levels of performance on their end-of-year assessments (Amrein, Berlin, & Rideaus, 2010). Yeo (2009) writes that schools have responded to achievement scores by using predictive assessments in order to identify students who might be in need of special learning assistance. Bransford, Brown, and Cocking (2000) identify assessment as a core component of effective teaching.

 

State assessments should be aligned with state standards indicating which concepts and skills students need (Wiggins, McTighe, & Tyler, 2011). STAR is administered to determine whether students who perform well on STAR also perform well on the ARMT. Herman (2009) indicated that little research has been done on the effectiveness of benchmark tests, and that the reliability of school-developed benchmark testing is a significant concern.

 

 

1.7 Need for the Study

 

Describe the need for the study. Provide a rationale or need for studying a particular issue or phenomenon. Describe how the study is relevant to your specialization area.

 

Schools must have a way to monitor students who need intervention (Gersten, 2008). Furthermore, they need to know whether the tools they are using are valid and reliable predictors of expected performance on end-of-year assessments. The focus on sixth grade in this study is motivated by a recent decision by the department of curriculum and instruction that STAR reading does not have to be used beyond the elementary grades. Should this study show a relationship between the two assessments, this policy may be changed, and students may be assessed using STAR throughout middle and high school.

1.8 Methodology

 

Describe the basic quantitative approach and methodology you propose to use. State whether the study will be descriptive, experimental, or quasi-experimental, etc. State the name of the specific type of design to be used and describe the method(s) you will use to collect the data.

 

 

This study will utilize quantitative methods of research and analyze the data from sixth graders at three middle schools in Alabama to determine if a significant relationship exists between students’ scores on the STAR and the ARMT. The sixth grade was chosen because it is the first year of middle school and also marks the transition in the curriculum.

 

Leedy and Ormrod (2013) stated that quantitative research presents empirical and statistical evidence that helps explain a given social phenomenon or problem. The use of quantitative research is supported by the fact that it can be used to investigate larger population samples, since it uses statistical analysis to summarize findings and thus facilitates making inferences (Cozby & Bates, 2012). Quantitative research can also employ an experimental design, which includes both independent and dependent variables. According to Creswell (2012), quantitative research designs attempt to maximize objectivity and are easier to replicate, thus ensuring the credibility of the research findings. The results of quantitative studies can also most often be generalized to larger populations. Specifically, because quantitative research designs are not inclusive of the researcher’s bias, perceptions, and experience, the process is more objective. Furthermore, the use of numerical data from instruments such as surveys makes the results more valid.

 

Cohen, Manion, and Morrison (2013) reported that there are several types of quantitative research designs: (1) descriptive, (2) correlational, (3) causal comparative, and (4) experimental. A causal comparative design seeks to establish a cause-and-effect relationship between an independent variable and a dependent variable (Cohen, Manion, & Morrison, 2013); the current study, by contrast, seeks only relationship and prediction, not causation. The current research study will therefore utilize a correlational design in order to determine whether a computer adaptive test is an effective tool for predicting student performance on the Alabama Reading and Math Test.

 

STAR was used to determine students’ reading levels during the first semester. These same students took the ARMT in May. The scores from STAR will be compared to ARMT scores to identify the level each student falls within. The results can be used to provide early intervention for those at risk of failure. T-tests will be utilized along with a correlational analysis using Pearson’s product-moment correlation coefficient (Pearson’s r).
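
As an illustration of the planned correlational analysis, the following is a minimal sketch in Python (used here as a stand-in for SPSS); the file and column names are hypothetical placeholders, not part of the plan:

import pandas as pd
from scipy import stats

# One row per student who has scores on both assessments (hypothetical file).
scores = pd.read_csv("matched_scores.csv")
star = scores["star_scaled_score"]
armt = scores["armt_scaled_score"]

# Pearson product-moment correlation between STAR and ARMT scores.
r, p_value = stats.pearsonr(star, armt)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")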

 

Data Collection

 

Archival data will be collected from the 2012-2013 school year. The students’ names will be coded for confidentiality purposes. The records will be matched to ensure that only students who took both the STAR and the ARMT in the same year are included; this supports the validity and reliability of the comparison. The data will be disaggregated by test year, school, gender, ethnicity, and disability status.

 

Data will be retrieved from district data warehousing systems.
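
A brief sketch of the matching and disaggregation steps described above, assuming hypothetical export files and column names from the district data warehouse:

import pandas as pd

# Hypothetical exports from the district data warehousing system.
star = pd.read_csv("star_2012_2013.csv")   # student_code, school, gender, ethnicity, swd, star_scaled_score
armt = pd.read_csv("armt_2012_2013.csv")   # student_code, armt_scaled_score

# An inner join keeps only students who have scores on BOTH assessments.
matched = star.merge(armt, on="student_code", how="inner")

# Disaggregate by subgroup as planned (school, gender, ethnicity, disability status).
for key in ["school", "gender", "ethnicity", "swd"]:
    print(matched.groupby(key)[["star_scaled_score", "armt_scaled_score"]].mean())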

 

Assumptions/Limitations:

 

Several assumptions were made for this study. First, the population of students is the same for both tests. Also, both tests are assumed to be reliable instruments with correlated or aligned test items. A regression analysis could be used to determine whether the sample is representative of the population; such an analysis assumes that the variables are normally distributed and that the relationship between the independent and dependent variables is linear. It is further assumed that the variance of the errors is constant and that the variables are measured without error.
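
If helpful, these distributional assumptions can be checked with simple diagnostics; a minimal sketch follows (the file and column names are hypothetical):

import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

scores = pd.read_csv("matched_scores.csv")

# Normality of each score distribution (Shapiro-Wilk test).
for col in ["star_scaled_score", "armt_scaled_score"]:
    w, p = stats.shapiro(scores[col])
    print(f"{col}: W = {w:.3f}, p = {p:.4f}")

# Linearity can be inspected visually with a scatterplot.
scores.plot.scatter(x="star_scaled_score", y="armt_scaled_score")
plt.show()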

 

 

Ethical considerations have been addressed: students’ names will not be used, to ensure anonymity. The superintendent gave permission to conduct the study prior to the research plan being submitted.

 

 

 

 DISSERTATION RESEARCHERS:  STOP!!!

 Forward completed Section 1 plus your references gathered so far to your Mentor for review and for the Specialization Chair’s approval. (Work on your full Literature Review while waiting for topic approval.)

 

Section 2.     Advancing Scientific Knowledge

DISSERTATION RESEARCHERS: Do not complete remaining sections until you have received topic approval.

 

Your study should advance the scientific knowledge base in your field by meeting one or more of these four criteria:

 

  1. The study should address something that is not known or has not been studied before.
  2. The study should be new or different from other studies in some way.
  3. The study should extend prior research on the topic in some way.
  4. The study should fill a gap in the existing literature.

 

Specifically describe how your research will advance scientific knowledge on your topic by answering all of these questions.  Include in-text citations as needed.

 

2.1 Advancing Scientific Knowledge

 

Demonstrate how the study (a) will advance the scientific knowledge base; (b) is grounded in the field of education; and (c) addresses something that is not known, something that is new or different from prior research, something that extends prior research, or something that fills a gap in the existing literature. Describe precisely how your study will add to the existing body of literature on your topic. It can be a small step forward in a line of current research but it must add to the body of scientific knowledge in your specialization area and on the topic.

 


To respond to this question you will need to:

 

Provide a paragraph that describes the background for your study and how your research question relates to the background of the study.

 

Background

 

According to the US DOE, the implementation of NCLB (2002) has caused educators to focus on assessment as an integral part of the teaching and learning process. Educators are using formative assessment strategies to gauge how well students are likely to perform on state-mandated summative assessments. Lembke and Stecker (2007) assert that the passage of this law has left district schools with no option but to produce positive outcomes for all students while monitoring student growth over time. In most districts, this has been implemented in the form of formative benchmark assessments. Marshall (2005) contends that through the utilization of ongoing, frequent, formative assessments, school administrators have begun to marvel at the results they are receiving. School personnel are interested in utilizing periodic benchmarks to predict student performance on end-of-year accountability tests (Olson, 2005). This study proposes to add to the existing knowledge base by providing additional formative assessment methods that can predict student outcomes on state-mandated tests.

 

Previous research on the topic has identified that, through current mandated reform standards, public schools and Head Start programs are responsible for implementing high-quality intervention programs to help improve the reading achievement of at-risk students (van Kleeck & Schuele, 2010). Early Reading First, Response to Intervention, Reading First, the NCLB Act of 2001, IDEA, and the National Reading Panel (2000) collectively support the development and implementation of models to endorse early literacy development with at-risk students (Gettinger & Stoiber, 2010). The current shift toward developing and implementing systematic, evidence-based growth models as an intervention helps ensure that a child entering kindergarten will have the prerequisites to become a good reader (Gettinger & Stoiber, 2010). Research has pinpointed five essential elements of scientifically based reading procedures, namely phonics, phonemic awareness, fluency, reading comprehension, and vocabulary, which are important to early literacy development and future success in reading achievement (Podhajski et al., 2009). This study therefore borrows from these scientifically based procedures in guiding the answering of the research question.

 

Through the use of the Star Reading Computer Adaptive Test (CAT), schools claim to be able to predict whether students who score proficient on their benchmark will also show proficiency on the Alabama Reading and Math Test. The purpose of this type of formative assessment is to target students’ areas of weakness so that learning disabilities can be addressed before students fail or get too far behind (McGlinchey & Hixson, 2004). In order to ensure that students are showing academic growth, schools need valid, reliable means by which to measure their progress throughout the year (Stecker, Lembke, & Foegen, 2008). This study has the potential to add to the field of curriculum and instruction by advancing knowledge in the area of formative assessment. This study will build on the research of Yeo (2009), Fuchs (2004), and Lembke and Stecker (2007). If the findings are deemed worthy, this study will provide a means of predicting student outcomes in an effort to examine the effectiveness of classroom instruction, modify instructional practices, and provide empirically based intervention. This study can serve to inform school districts on whether the STAR reading assessment is a predictor of students’ success on the reading portion of the Alabama Reading and Math Test.

 

Researchers across the United States are seeking alternative indicators in order to establish the predictive validity between various reading programs’ benchmark assessments and state assessments. Several similar studies have compared benchmark test results to end-of-year state assessments (Shadish, Cook, & Campbell, 2002). One study addressed the correlation between the Academy of Reading and the Georgia End of Course Test (Brazelton, 2012). The purpose of that study was to determine whether a correlation existed between the benchmark formative assessments and the Georgia Ninth Grade Composition end-of-year summative test. Its limitations were that it was confined to Georgia and to only two test administrations, which constituted a small sample. Another study was conducted using data from three schools in Southwest Virginia to determine whether there was a predictive relationship between fifth grade math scores and benchmark tests. Predicting how well students will perform on a state test is important (Helwig, Anderson, & Tindal, 2002; Heppen & Therriault, 2008).

 

The researcher intends to add to the scientific knowledge base by replicating past studies, based on recommendations for future research found in those studies. Scores of sixth grade students from diverse backgrounds and from three different middle schools will be examined in this study.

 

2.2 Theoretical Implications

 

Describe the theoretical implications you believe your study could have for the field of education and your specialization area.

 

The philosophy of education is founded on a constructivist theoretical framework that includes the beliefs of Piaget (1952), Bruner (1966), Dewey (1933), and Vygotsky (1962). Piaget (1952) is known for his learning stages: sensorimotor, preoperational, concrete operational, and formal operational. Bruner (1966) is remembered for the five E’s: engage, explore, explain, elaborate, and evaluate. Bruner’s (1966) theories contend that students construct their own learning. Dewey (1933) theorized that children should engage in real-life applications and collaboration with other students; he contended that knowledge is constructed from previous experiences. Finally, Vygotsky (1962) focused on scaffolding as a teaching strategy used in conjunction with the student’s zone of proximal development, the “distance between the actual developmental level as determined by independent problem solving and the level of potential development under adult guidance, or in collaboration with more capable peers” (Anderson, 1993, p. 134). This theoretical framework is relevant to the current study because Renaissance Star Reading identifies each student’s Zone of Proximal Development (ZPD), defined as the range of reading that a student can engage in without reaching a level of frustration. STAR uses a grade equivalent to estimate the ZPD, which is based on Vygotsky’s (1962) learning theory.

 

Quantitative research is “an inquiry into a social or human problem based on testing a theory composed of variables, measured with numbers, and analyzed with statistical procedures, in order to determine whether the predictive generalizations of the theory hold true” (Creswell, 1994). If successful, this process can continue to be replicated so that schools can make the right decisions relative to purchasing products claiming to predict student outcomes.

 

2.3 Practical Implications

 

Describe any practical implications that may result from your research.  Specifically, describe any implications the research may have for understanding phenomena for practitioners, the population being studied, or a particular type of work, mental health, educational, community, stakeholders or other setting.

 

There is a paucity of research on what to do with benchmark assessment scores (Herman & Baker, 2005; Shepard, 2010; Protheroe, 2009). Wiggins and McTighe (1998) discussed assessment as an integral component of the learning process. They write that if mastery is not achieved, teachers should find a different method of delivery and apply a different form of assessment. Too often, schools use either teacher-made or store-bought assessment tools to label students; however, many aspects of benchmark assessments have not been well researched (Brown & Coughlin, 2007). These assessments are only relevant if they are applied to the curriculum so that they can be used to predict students’ success on state assessments (Wood, 2006). For instance, if students who did not score proficiently on STAR also exhibited non-mastery on the ARMT, then it can be assumed that the placement of students for intervention was appropriate.

Review of Section 2. Advancing Scientific Knowledge

Does the study advance scientific knowledge in the field and the specialization area by meeting one or more of these four criteria?

Does the study address something that is not known or has not been studied before?

Is this study new or different from other studies in some way?

Does the study extend prior research on the topic in some way?

Does the study fill a gap in the existing literature?

_____YES ____ NO

Reviewer Comments:

 

 

Section 3. Contributions of the Proposed Study to the Field
3.1 Contributions to the Field

 

Briefly describe the primary theoretical basis for the study.  Describe the major theory (or theories) that will serve as the foundation for the research problem and research questions and provide any corresponding citations.

 

According to Linn (2000), as early as 1920 students were assessed using the SAT. In the 1950s, testing was being used for accountability, and by the 1990s standards-based assessments began to be administered (Linn, 2000). These assertions are supported by the constructivist theory, which is also known as a theory of epistemology, or knowledge. The theory, formulated by Jean Piaget, posits that human beings have the innate ability to generate meaning and knowledge through the interaction of their experiences and ideas. The theory is relevant to this study since it concerns the interaction between behavioral patterns, reflexes, and human experiences, which are subtle factors in enhancing the learning process. The theory has been supported by other theorists such as Seymour Papert, whose educational theory borrows heavily from constructivism (Ryan, 2006). Collectively, the two theories emphasize experiential learning. Piaget’s constructivist theory is believed to have greatly impacted learning and teaching processes in the education sector, and this makes the theory relevant to this study as it seeks to inform educational reforms.

In line with the theory, Linn (2000) stated that assessment provides necessary documentation of the state of schools and a method by which educators are able to make informed decisions about student learning. Researchers suggested that there should exist a substantial predictive relationship between scores on benchmarks and scores on end-of-year tests (Herman, Mellard, & Linn, 2009; Helwig, Anderson, & Tindal, 2002; Ryan, 2006; Hintz & Christ, 2004). Black and Wiliam (1998) stated that assessments represent a constructivist view. The researchers also asserted that formative and summative assessments are interconnected, although formative assessment has the greatest impact on student learning (Black & Wiliam, 1998). Annual state tests provide a comprehensive view of how well students are performing; however, benchmark assessments are needed to determine which students are prepared (Popham, 2009). It is difficult to determine whether benchmark assessments are, indeed, valid predictors of end-of-year tests; therefore, the entire process could be flawed if they are not actual predictors. Good predictive assessments provide teachers with timely, relevant data for developing needed interventions for students (McGlinchey & Hixson, 2004).

3.2 Contributions to the Field

 

Your study should contribute to research theory in your field by meeting one or more of these four criteria:

 

A. The study should generate new theory.

B. The study should refine or add to an existing theory.

C. The study should test to confirm or refute a theory.

D. The study should expand theory by telling us something new about application or processes.

Describe how your study will contribute to research theory in your field by meeting one or more of these four criteria.

 

 

A. This study proposes to generate new theory about whether the STAR reading computer adaptive test is a predictor of student performance on the ARMT. Furthermore, the findings from this study are expected to inform teachers and administrators about the relationship between STAR and the ARMT.

 

B.  This study will add to existing theories about the validity or reliability of formative benchmark assessments.

 

C. This test will confirm whether or not STAR is a predictor of success on the ARMT.

 

D. This study will tell us something new about the relationship between the two noted assessments.

 

This study has the potential to advance the knowledge base in the area of benchmark tests used as predictors and build on existing research that posits that STAR is an accurate predictor of state tests.  This study will expand theory by providing new information about the predictive nature of a benchmark test on students’ performance on end of year state tests.

 

 

Review of Section 3. Contributions to the Field

 

Does the Research make a contribution to research theory in one or more of these four ways?

Does the research generate a new theory?

Does the research refine or add to an existing theory?

Does the research test to confirm or refute theory?

Does the research expand theory by telling us something new about application or processes?

_____YES ____ NO

 

Reviewer Comments

 

 

Section 4. Methodology Details

4.1 Purpose of the Study

 

Describe the purpose of the study.

Why are you doing it?  (The answer must be grounded in the literature in what has been done–hasn’t been done or needs to be done.)

 

 

Creswell (2008) wrote that the purpose statement, the most important part of the study, provides the reader a clear direction and focus as to why the research is being conducted. The purpose of this study is to determine whether a correlation exists between the scores students received on the computer adaptive program, STAR Reading, and their scores on the Alabama Reading and Math Test. The data obtained from this study will be used to determine whether STAR is the best tool to use for tiered instruction and intervention.

Assessment is a high priority for districts and should be developed prior to the curriculum (Wiggins & McTighe, 2006). Frequent benchmark assessment and progress monitoring are essential components of RtI. RtI is a fundamental intervention practice which affords students the opportunity to receive modified or adjusted instruction based on how they respond to research-based best practices (Fuchs & Fuchs, 2006).

District schools are seeking benchmark assessments in order to identify students at risk of failing state tests (Silberglitt & Hintze, 2005). A quantitative study will be used to evaluate the utilization of STAR as a predictor of mastery on the Alabama Reading and Math Test. The following questions will be addressed in this study:

Research Question/Hypothesis

 

RQ1 – What is the extent of the relationship, if any, between student performance on STAR reading benchmark tests and performance on the ARMT?

RH1 – Student performance on the STAR is correlated to their performance on the ARMT.

H01 – Student performance on the STAR is not correlated to their performance on the ARMT.

RQ2 – Is there a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT?

RH2- There is a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT.

H02 – There is not a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT.

RQ3 – Is the STAR formative assessment a predictor of student performance on the standardized ARMT?

RH3 – STAR formative assessment is a predictor of student performance on the standardized ARMT.

H03 – STAR formative assessment is not a predictor of student performance on the standardized ARMT.

Variables:

Two variables will be identified for this study:

Dependent (criterion) variable – students’ scaled scores on the Alabama Reading and Math Test in March 2012.

Independent (predictor) variable – students’ scores on the STAR during the first semester.

This study will inform the area of curriculum and instruction by determining the predictive power of a benchmark assessment, and will provide districts with information about whether the purchased assessment is indeed useful for determining which students will be successful. One study was conducted to determine how a particular benchmark assessment was implemented to predict American Indian students’ scores on end-of-course assessments (Stiggins & DuFour, 2009). Little research has been conducted to determine whether STAR reading is a predictor of success on the ARMT.

Review of 4.1 Purpose of the Study

 

Is the purpose of the study clearly stated?

 

_____YES ____ NO

 

Reviewer comments:

4.2 Research Design

 

Describe the research design you will use. Start by specifically stating the type of quantitative approach you will use (descriptive, experimental, and quasi-experimental), include the exact name or type of design to be used and describe the exact method(s) (archival, survey, observations) you will use to collect the data. Briefly describe how the study will be conducted.

 

The research design of this study is quantitative, using descriptive statistics (Salkind, 2008). The design was determined by the research question. A non-experimental correlational design will show the relationship between the two variables. Quantitative data are used to investigate larger population samples and are based on statistical analysis (Cozby & Bates, 2012). Quantitative designs also specify independent and dependent variables. According to Creswell (2012), quantitative research designs attempt to maximize objectivity and are easier to replicate. The results of the study will be summarized using visual representation tools such as scatterplots and graphs (Salkind, 2008).

First, the researcher wishes to determine whether a relationship exists between the STAR benchmark assessment and the reading section of the Alabama Reading and Math Test. Next, the researcher seeks to determine whether STAR is a predictor of performance on the ARMT. A quantitative, correlational research design will be used to determine whether a predictive relationship exists between the two assessments. Pearson’s r will be used to compute the numeric linear relationship between the scores on the two assessments (Pagano, 2004). In a correlational research design, two variables are measured and recorded (Creswell, 2013). Next, the measurements are reviewed in order to determine whether a relationship exists and, if so, the extent of that relationship for making predictions. The predictor variable (scores from the STAR) and the criterion variable (scores from the ARMT) will be analyzed.

Correlational research attempts to determine whether, and to what degree, a relationship exists between two or more quantifiable variables; however, it never establishes a cause-effect relationship. The relationship is expressed by a correlation coefficient, a number between -1.00 and +1.00 (Gay, 1996). Permission from the Capella IRB and the district superintendent will be obtained. Data will be gathered from a data warehousing system. SPSS will be used to aggregate and analyze the data. The results will be analyzed with the help of competent statisticians, who will guide the making of inferences regarding the quantitative data collected. The results will then be reported.
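
To make the predictive step concrete, a minimal sketch of a simple linear regression of ARMT scores on STAR scores follows (Python as a stand-in for SPSS; the file and column names are hypothetical):

import pandas as pd
from scipy import stats

scores = pd.read_csv("matched_scores.csv")
result = stats.linregress(scores["star_scaled_score"], scores["armt_scaled_score"])

print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.1f}")
print(f"r = {result.rvalue:.3f}, r^2 = {result.rvalue ** 2:.3f}, p = {result.pvalue:.4f}")

# Predicted ARMT score for a hypothetical STAR scaled score of 600.
print(result.intercept + result.slope * 600)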

Review of 4.2 Research Design

 

Does the research design proposed seem appropriate for the research question? Is the research design clearly and accurately described?  Can the design answer the research questions or test the hypotheses with the proposed sample, design and analysis?

 

 

 

_____YES ____ NO

 

 

Reviewer Comments

 

 

 

4.3 Population and Sample

 

Describe the characteristics of the larger population from which the sample (study participants) will be drawn. Next describe the sample that will participate in the study and justify the sample size using a power analysis or some other justification supported in the literature.

The variables will be the test scores of both male and female students of all races and ethnicities. The data for the study will include the scores from the 2011 administration of the STAR reading benchmark assessment and the results of the state-mandated ARMT from May 2012.

 

The population consists of randomly selected sixth graders at traditional middle schools. Only the scores of students who took both tests will be included in the study. In order to ensure that the sample size is adequate to obtain a large representation of scores, the sample will consist of approximately 500 students of both genders. Larger samples are needed for heterogeneous populations (Leedy & Ormrod, 2001). The population is approximately 1,600; Gay (1996) recommends sampling 20% when the population is around 1,500, so a sample of 500 (over 30%) exceeds that threshold. Larger samples generally reveal more accurate results (Creswell, 2003).
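
The item also asks for a power analysis. One possible justification is sketched below using the Fisher z approximation for detecting a Pearson correlation; the expected effect size (r = .30), alpha, and power are assumed values for illustration, not figures from the plan:

import math
from scipy.stats import norm

# Required n for detecting Pearson r via the Fisher z approximation:
# n = ((z_alpha + z_beta) / arctanh(r))^2 + 3
r, alpha, power = 0.30, 0.05, 0.80   # assumed values
z_alpha = norm.ppf(1 - alpha / 2)    # two-tailed
z_beta = norm.ppf(power)
n = ((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3
print(math.ceil(n))                  # about 85; a sample of 500 comfortably exceeds this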

 

Review of 4.3 Population and Sample

 

Are the population and the sample fully and accurately described? Is the sample size appropriate?

 

_____YES ____ NO

 

Reviewer Comments

4.4 Sampling Procedures

 

Describe how you plan to select the sample. Be sure to list the name of the specific sampling strategy you will use. Describe each of the steps from recruitment through contact and screening to consenting to participate in the study. Provide enough detail so that someone else would be able to follow this recipe to replicate the study.

 

Random sampling will be used in this study. No actual student names will be used; pseudonyms will represent the scores on both tests. The sample criteria are the essential characteristics necessary for the target population (Burns & Grove, 2001). Scores of all sixth grade students from the three “priority schools” will be utilized. The students at these schools are administered the STAR test, and their scores determine the intervention curriculum into which they are placed.

 

The criteria for this study include:

 

Students must have participated in the fall 2011 administration of the STAR.

Students must have participated in the spring 2012 administration of the ARMT.

Students must be first time sixth graders.

 

Steps from recruitment, contact and screening to consenting and participating:

 

1. Confidentiality must be maintained throughout the study (Burns & Grove, 2001). The researcher will not link students to the data; pseudonyms will represent the scores on the two assessments.

2. The researcher will maintain privacy in all aspects of the study.

3.  The researcher will obtain permission from the superintendent and Institutional Review Board at Capella University.

 

The researcher will gather the scores from the district data warehousing system and enter them into SPSS to be disaggregated and charted.
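
As an illustration of the de-identification step described above, the sketch below shows one way scores exported from a data warehouse could be assigned pseudonyms before analysis. It is a sketch only: Python/pandas stands in for SPSS, and the file and column names (warehouse_export.csv, student_name) are hypothetical.

    # De-identification sketch; the plan itself specifies SPSS.
    # File and column names are hypothetical placeholders.
    import pandas as pd

    raw = pd.read_csv("warehouse_export.csv")

    # Assign a pseudonym to each student, then drop identifying fields.
    raw["pseudonym"] = ["S" + str(i).zfill(4) for i in range(1, len(raw) + 1)]
    deidentified = raw.drop(columns=["student_name"])

    deidentified.to_csv("paired_scores.csv", index=False)

Using sequential pseudonymous IDs keeps each student's pair of scores linked without identifying any student.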

 

 

 

 

Review of 4.4 Sampling Procedures

 

Is participant involvement and participant selection fully described and appropriate for the study?

 

_____YES ____ NO

 

Reviewer Comments

4.5 Instruments

 

Describe in detail all data collection instruments and measures (tests, questionnaires, scales, interview protocols, and so forth). This section should include a description of each instrument or measure, any demographic information you plan to collect, its normative data, validity and reliability statistics. Include (A) citations for published measures, (B) data type(s) generated by each measure, and (C) available psychometric information (including validity & reliability coefficients for each Scale or instrument.

Explain how each variable will be operationally defined and the scale of measurement used for each variable (nominal, ordinal, interval, ratio).

 

Attach a copy of each instrument you plan to use as an appendix to your RESEARCH PLAN. If permission is required to use the instrument, please attach a copy of documentation showing that you have permission to use the instrument.

 

 

The ARMT is a criterion-referenced test. It consists of selected items from the Stanford Achievement Test (Stanford 10) that match the Alabama state content standards in reading and mathematics. Additional test items were developed so that all content standards were fully covered. It is this combination of Stanford 10 items and newly developed items that is known as the ARMT. The ARMT is fully aligned with the Alabama state content standards in reading and mathematics.

Decades of research have shown that computer adaptive tests such as STAR reading can be considerably more efficient than conventional tests that present all students with the same test questions (Lord, 1980; McBride & Martin, 1983). The ARMT is a standards-based assessment used to determine mastery of state curricula.

 

The researcher will gather anonymous data from each administration as the research instruments for this study. The test scores will be drawn from sixth grade students at three middle schools in Alabama. An analysis of 500 scores will determine whether a statistically significant correlation exists between students' performance on the STAR and the ARMT. The performance groups will be all students who took both tests, subdivided into low, middle, and high scores.

 

The criterion variable will be the ARMT scores, and the predictor variable will be the STAR scores.

 

Review of 4.5 Instruments

 

Are any instrument(s), measures, scales, to be used, appropriate for this study?  Do the reliability and validity measures of all measurement instruments or scales justify using the instrument?

 

_____YES ____ NO

 

Reviewer Comments

 

4.6 Data Collection Procedures

 

Describe where and how you will get the data and describe the exact procedure(s) that will be used to collect the data.  This is a step-by-step description of exactly how the research will be conducted. This should read like a recipe for the data collection procedures to be followed in your study. Be sure to include all the necessary details so that someone else will be able to clearly understand how you will obtain your data.

Muijs (2004) described a quantitative model in which the people or things from which data are collected are referred to as units or cases. The data collected are called variables, which by definition means "different data," and the research question is based on the relationship between or among the data sets. Before data can be collected, approval must be obtained from the superintendent and the Institutional Review Board of Capella University; a sample approval letter must be attached. Upon receipt of written approval, test scores from the district data warehousing system will be utilized. Data included will be:

 

Fall 2011 administration of the STAR reading assessment

Spring 2012 administration of the ARMT reading assessment

Students’ scores on the STAR.

Students’ scores on the ARMT.

 

Although ARMT data is public information, STAR data must be obtained through assignment of a password. Once the data have been gathered, descriptive statistics (the mean, median, mode, and standard deviation of the scores) will be computed using SPSS. The relationship will be displayed visually in a scatter plot (Creswell, 2013).
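
For illustration, the sketch below computes the descriptive statistics named above and draws the scatter plot. Python with pandas and matplotlib stands in for the SPSS procedures, and the file and column names are hypothetical.

    # Descriptive statistics and scatter plot, as an illustrative stand-in
    # for the SPSS procedures described above.  Names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    scores = pd.read_csv("paired_scores.csv")

    for col in ["star_score", "armt_score"]:
        print(col,
              "mean:", scores[col].mean(),
              "median:", scores[col].median(),
              "mode:", scores[col].mode().iloc[0],
              "sd:", scores[col].std())

    plt.scatter(scores["star_score"], scores["armt_score"])
    plt.xlabel("STAR scaled score")
    plt.ylabel("ARMT scaled score")
    plt.title("STAR vs. ARMT reading scores")
    plt.show()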

Review of 4.6 Data Collection Procedures

 

Does the mentee describe in detail the procedure to be followed in a step-by-step way so that it is completely clear how the research will be conducted? Is the data collection appropriate for the proposed study?

 

_____YES ____ NO

 

Reviewer Comments

4.7 Proposed Data Analyses

 

List the research question and sub-questions, followed by the null and alternative or research hypotheses (in quantitative studies) for each research question.  Then describe all methods and all procedures for data analysis including: (a) types of data to be analyzed, (b) organizing raw data, (c) managing and processing data, (d) preparation of data for analysis, (e) the actual data analyses to be conducted to answer each of the research questions and/or to test each hypothesis, including descriptive statistics, any hypothesis tests and any post-hoc analyses, and describe (f) storage and protection of data.

 

Note:  Be sure to include the level of measurement you will use for your variables in the analyses.

 

Research Question/Hypothesis

RQ1 – What is the extent of the relationship between student performance on STAR reading benchmark tests and performance on the ARMT?

RH1 – Students' performance on the STAR is significantly correlated with their performance on the ARMT.

H01 – Students' performance on the STAR is not correlated with their performance on the ARMT.

RQ2 – Is there a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT?

RH2- There is a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT.

H02 – There is not a statistically significant correlation between sixth grade students' performance on STAR and on ARMT.

RQ3 – Is the STAR formative assessment a predictor of student performance on the standardized ARMT?

RH3 – STAR formative assessment is a predictor of student performance on the standardized ARMT.

H03 – The STAR formative assessment is not a predictor of student performance on the standardized ARMT.

Variables:

Two variables will be identified for this study:

Dependent variable – students' scaled scores on the Alabama Reading and Mathematics Test from May 2012

Independent (predictor) variable – students' scores on the STAR from the fall 2011 (first semester) administration

 

 

A Pearson r will be computed to determine the strength of the relationship between the STAR and the ARMT. The groups will be divided into low, middle, and high performing based on scaled scores and cut scores. Results will be analyzed in SPSS, and the scatter plot will display STAR scores on the horizontal axis and ARMT scores on the vertical axis.

 

Proficient scores on the ARMT will be compared to proficient scores on the STAR.
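
To make the grouping step concrete, the sketch below divides students into low, middle, and high performance groups and computes the correlation within each group. It is illustrative only: the cut scores used (700 and 800) are hypothetical placeholders, not actual STAR cut scores, and the file and column names are assumed.

    # Grouping by hypothetical cut scores (700, 800) and checking the
    # correlation within each group.  Illustrative only.
    import pandas as pd
    from scipy import stats

    scores = pd.read_csv("paired_scores.csv")
    scores["group"] = pd.cut(scores["star_score"],
                             bins=[-float("inf"), 700, 800, float("inf")],
                             labels=["low", "middle", "high"])

    for group, subset in scores.groupby("group", observed=True):
        r, p = stats.pearsonr(subset["star_score"], subset["armt_score"])
        print(f"{group}: n = {len(subset)}, r = {r:.3f}, p = {p:.4f}")

Within-group correlations are typically attenuated by range restriction, so they should be interpreted alongside the full-sample correlation.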

Review of 4.7 Proposed Data Analyses

 

Is the data analysis that is proposed appropriate? Is there alignment between the research questions, proposed methodology, type or types of data to be collected and proposed data analysis? Is the language used to describe the type of design and data analysis plans consistent throughout?

 

_____YES ____ NO

 

Reviewer Comments

 

4.8 Expected Findings

 

Describe the expected results of the data analysis. Discuss the expected outcome of each of the hypotheses and discuss whether or not your expectations are consistent with the research literature on the topic. Provide in-text citations and references in the reference section.

 

Research Question/Hypothesis

RH1 – Students' performance on the STAR is expected to show a relationship with their performance on the ARMT.

 

RH2- There is a statistically significant correlation between sixth grade students’ performance on STAR and on ARMT.

 

RH3 – STAR formative assessment is a predictor of student performance on the standardized ARMT.

 

The hypotheses will be addressed using the Pearson correlation between the STAR and the ARMT. A statistically significant positive correlation would allow the researcher to reject H01. A p-value less than .05 would indicate a statistically significant correlation between the two assessments (Creswell, 2013).
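
Because RH3 concerns prediction, a simple linear regression of ARMT scores on STAR scores follows naturally from the correlation analysis. The sketch below is illustrative only (hypothetical file and column names) and shows the slope, intercept, and p-value that would inform the decision on H03.

    # Simple linear regression of ARMT scores on STAR scores, the natural
    # follow-up for the prediction question.  Names are hypothetical.
    import pandas as pd
    from scipy import stats

    scores = pd.read_csv("paired_scores.csv")
    result = stats.linregress(scores["star_score"], scores["armt_score"])

    print(f"Predicted ARMT = {result.slope:.2f} * STAR + {result.intercept:.2f}")
    print(f"R^2 = {result.rvalue ** 2:.3f}, p = {result.pvalue:.4f}")
    # A p-value below .05 would support rejecting the null hypothesis that
    # the STAR does not predict ARMT performance (H03).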

 

 

Variables:

Two variables will be identified for this study:

Dependent variable – students' scaled scores on the Alabama Reading and Mathematics Test from May 2012

Independent (predictor) variable – students' scores on the STAR from the fall 2011 (first semester) administration

Review of 4.8 Expected Findings

 

Does the mentee clearly describe the expected findings? Does the mentee discuss the expected findings in the context of the current literature on the topic?

 

 

_____YES ____ NO

 

Reviewer Comments

 

Section 5. References

Provide references for all citations in APA style. Submit your reference list below.

Review of Section 5 References

References

Ainsworth, L. (2007). Common formative assessments: The centerpiece of an integrated standards-based assessment system. In D. Reeves (Ed.), Ahead of the curve: The power of assessment to transform teaching and learning (pp. 79-101). Bloomington, IN: Solution Tree.

 

Alabama State Department of Education. (2012). Alabama’s education report card. Montgomery, AL: Author. Retrieved from http://www.alsde.edu/general/alabamaeducationreportcard.pdf

 

Allensworth, E. M., & Easton, J. (2005). The on-track indicator as a predictor of high school graduation. Retrieved from http://ccsr.uchicago.edu/publications/p78.pdf

 

Amrein, A., Berliner, D., & Rideau, S. (2010). Cheating in the first, second and third degree: Educators’ responses to high-stakes testing. Education Policy Analysis Archives, 18(14), 36.

 

Anderson, R. H., & Pavan, B. N. (1993). Nongradedness: Helping it to happen. Lancaster, PA: Technomic Publishing.

 

Ash, K. (2008). Adjusting to test takers. Education Week, 28(13), 1-4. Retrieved from ies.ed.gov/ncee/edlabs/projects/rct_245.asp?section=AL

 

Baenen, N., Ives, S., Warren, T., Gilewicz, E., & Yaman, K. (2006). Effective practices for at-risk elementary and middle school students.

Balfanz, R. (2008). Early warning and intervention systems: Promise and challenges for policy and practice. National Academy of Education and National Research Council Workshop on Improved Measurement of High School Dropout and Completion Rates. Retrieved from http://www7.nationalacademies.org/BOTA/Paper%20by%20Balfanz.pdf

 

Balfanz, R., & Legters, N. (2004). Locating the dropout crisis: Which high schools produce the nation’s dropouts? Where are they located? Who attends them? Center for Research on the Education of Students Placed At Risk, Report 70. Retrieved from http://www.csos.jhu.edu/crespar/techReports/Report70.pdf

 

Bianco, S. (2010). Improving student outcomes: Data-driven instruction and fidelity of implementation in a response to intervention (RtI) model. Teaching Exceptional Children, 6(5), 2-11.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.

Brown, R. S., & Coughlin, E. (2007). The predictive validity of selected benchmark assessments used in the Mid-Atlantic Region.

Bruner, J. S. (1966). Toward a theory of instruction. Cambridge, MA: Belknap Press of Harvard University Press.

Clarke, B., & Shinn, M. R. (2004). A preliminary investigation into the identification and development of early mathematics curriculum-based measurement. School Psychology Review, 33(2), 234-248.

Cozby, P., & Bates, S. (2011). Methods in behavioral research. Utah State University Faculty Monographs.

Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage.

Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. (2nd ed.). Thousand Oaks, CA: Sage Publications.

Deno, S. L., & Espin, C. A. (1991). Evaluation strategies for preventing and remediating basic skill deficits. In G. Stoner, M. R. Shinn, & H. M. Walker (Eds.), Interventions for achievement and behavior problems (pp. 79-97). Silver Spring, MD: National Association of School Psychologists.

Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. Boston, MA: D.C. Heath.

Erickson, A., Gaumer, N., Pattie, M., & Jenson, R. (2012). The school implementation scale: Measuring implementation in response to intervention models. Learning Disabilities, 10(2), 33-52.

Fuchs, L. S., Deno, S. L., & Mirkin, P. (1984). The effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement and student awareness of learning. American Educational Research Journal, 22(2), 449-460.

Fuchs, L. S. (2004). The past, present and future of curriculum-based measurement research. School Psychology Review, 33(1), 188-192. Retrieved from http://www.nasponline.org/publications/spr/index.aspx?vol=33&issue=1

Gay, L. R. (1996). Educational research: Competencies for analysis and application. Upper Saddle River, NJ: Merrill.

Gersten, R. (2008). Assisting students struggling with reading: Response to intervention and multi-tier intervention for reading in the primary grades: A practice guide (NCEE 2009-4045). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/practiceguides/RTI_reading_pg_021809.pdf

Helwig, R., Anderson, L., & Tindal, G. (2002). Using a concept-grounded, curriculum-based measure in mathematics to predict state-wide test scores for middle school students with learning disabilities. The Journal of Special Education, 36(2), 102-112.

 

Heppen, J., & Therriault, S. (2008, July). Developing early warning systems to identify potential high school dropouts. Washington, DC: National High School Center. Retrieved from http://www.betterhighschools.org/docs/IssueBrief_EarlyWarningSystemsGuide_081408.pdf

 

Herman, J. L. (2009). Moving to the next generation of standards for science: Building on recent practices. (CRESST Report 762). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.

 

Leedy, P. D., & Ormrod, J. E. (2013). Practical research: Planning and design (9th ed.). Upper Saddle River, NJ: Prentice Hall.

 

Mahoney, J., & Hall, C. (2013). Response to intervention: Research and practice.

 

Marzano, R. (2003). What works in schools: Translating research into action. Alexandria, VA: ASCD.

 

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 20 U.S.C. § 6301 (2002).

 

Pagano, R. R. (2001). Understanding statistics in the behavioral sciences (6th ed.). Wadsworth.

 

Piaget, J., & Cook, M. T. (1952). The origins of intelligence in children.

 

Popham, J. (2008). Transformative assessment. Danvers, MA: Association for Supervision and Curriculum Development.

Salkind, N. (2008). Statistics for people who hate statistics. Los Angeles, CA: Sage.

Stecker, P. M., Lembke, E., & Foegen, A. (2008). Using progress-monitoring data to improve instructional decision making. Preventing School Failure, 52(2), 48-58.

Stiggins, R., & DuFour, R. (2009). Maximizing the power of formative assessments. Phi Delta Kappan, 5, 641-644.

Tilly, D. W., III. (2006). Response to intervention: An overview. What is it? Why do it? Is it worth it? The Special Edge, 19(1), 4-5.

U.S. Department of Education. (2008, October). A uniform, comparable graduation rate. Retrieved October 1, 2014, from http://www2.ed.gov/political/nclb/accountability

U.S. Department of Education. (2011). ESEA flexibility. Retrieved August 17, 2013, from http://www.ed.gov/esea/flexibility

Vygotsky, L. S. (1962). The development of scientific concepts in childhood.

Wiggins, G., & McTighe, J. (2011). Understanding by design guide to advanced concepts in creating and reviewing units. Alexandria, VA: ASCD.

Wiggins, G., & McTighe, J. (2013). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.

Wiliam, D. (2008). Changing classroom practice. Educational Leadership, 65(4), 36-42.

Williams, L. (2009). Benchmark testing and success on the Texas Assessment of Knowledge and Skills: A correlational analysis (Doctoral dissertation, University of Phoenix). Retrieved from http://gradworks.umi.com/55/53/3353754.html

Yeo, S. (2009). Predicting performance on state achievement tests using curriculum-based measurement in reading: A multilevel meta-analysis. Remedial and Special Education, 31(6), 412-422. http://dx.doi.org/

 

 

 

Has the Researcher presented appropriate citations and references in APA style?

 

 

_____YES  ____ NO

 

Reviewer Comments

 

Review of Scholarly Writing

 

Does the Researcher communicate in a scholarly, professional manner that is consistent with the expectations of academia and of the field of education?

 

 

_____YES ____ NO

 

Reviewer Comments:

 

 

Learner:  Stop here and submit to your Mentor for final approval. Continue working on your final literature review while you wait for Research Plan approval.

Mentor: This form must be approved by all committee members prior to submission for Research Plan review.  Please send completed and approved RP to dissertation@capella.edu for Research Plan review.

Directions for Reviewers

Please indicate your decision for this review in the correct place (First Review, Second Review, or Third Review) and insert your electronic signature and the date below. If the Research Plan has a final status of "Approved," "Not Approved," or other, please be sure to indicate this Research Plan review status below as well. Return your completed form with substantive comments to dissertation@capella.edu.

 

Research Plan Information (to be completed by Reviewer only)
Reviewer Name:

 

 

Date Decision

 

First Review

 

 Date Approved ________________

Date Deferred  ________________

 

Rationale for Deferment (see comments on form)

Minor Revisions                 Major Revisions

Not ready for review

Conference call needed with mentor and mentee

 

Second Review

(if needed)

 Date Approved ________________

Date Deferred _________________

 

Rationale for Deferment (see comments on form)

Minor Revisions                 Major Revisions

Not ready for review

Conference call needed with mentor and mentee

 

Third Review

(if needed)

 Date Approved ________________

Date Deferred  ________________

 

 

Rationale for Deferment (see comments on form)

Minor Revisions                 Major Revisions

Not ready for review

Conference call needed with mentor and mentee

 

Sent to Research Chair for Review and Consultation (if needed)

Date: Research Chair Process Review Outcome (see attachments if needed)

 

Conference Call Notes

(if applicable):

 

 Date Approved ________________

Date Deferred  ________________

Rationale for Deferment (see comments on form):

Minor Revisions                 Major Revisions

FINAL RESEARCH PLAN STATUS

Approved

Not Approved

Date Approved:___________________________

Further Reviewer Comments

This section is not part of determining Research Plan approval.  This is an optional space for the Research Plan Reviewer to make note of any practical or ethical concerns. Reviewers are not expected to comment on these issues but they can make comments or recommendations if they believe these may be helpful.  It is recommended that mentors and researchers carefully consider any comments made here as it may help flag issues or problems that need to be addressed before the researcher moves forward or before the study is submitted for ethical review which will be conducted by the IRB.

Optional Reviewer Comments:

This has been a Scientific Merit Review.  Obtaining Scientific Merit approval does not mean you will obtain IRB approval.

Once you have obtained scientific merit approval move forward to write your dissertation proposal. It should be easy because the methodology section of the Research Plan corresponds directly to the sections included in the School of Education’s Dissertation Chapter 3 Guide.

If a mentee does not pass the scientific merit review on the 3rd attempt, then the case will be referred to the Research Specialist in the School of Education and/or the Research Chair for review, evaluation and intervention. Mentees, mentors and reviewers should make every attempt possible to resolve issues before the SMR is failed on the third attempt.
