Reliability and validity of HoNOS

  • Wing J
  • Lelliott P

Abstract

The skills needed by today's students are far greater than those required of students just a decade ago. A greater emphasis is being given to college readiness and college outcomes because employers now, more than ever, expect college graduates to possess writing, critical thinking, and problem-solving skills (Hart Research Associates, 2006) in response to the changing demands of available jobs (Autor, Levy, & Murnane, 2003). Students can no longer rely solely on the accumulation of disciplinary knowledge and skills. The educational community has begun to emphasize so-called "21st century" skills (PARCC, 2012; SBAC, 2012) in addition to knowledge in specific content domains (Arum & Roksa, 2011; Porter, McMaken, Hwang, & Yang, 2011; Silva, 2008; Wagner, 2008) in hopes of fostering the development of critical thinking, problem solving, communication, collaboration, creativity, and innovation skills (Porter et al., 2011). In fact, nearly 80% of Association of American Colleges and Universities member institutions have a list of general learning outcomes intended for all students regardless of their academic programs, and skills such as critical thinking and writing are among the most commonly included (Hart Research Associates, 2009). Since 2002, the Council for Aid to Education (CAE) has pioneered the use of performance-based assessments for determining whether students can successfully analyze a body of information and communicate that information in an open-ended response. To date, over 700 institutions, both in the United States and internationally, have participated in our performance assessments, either through our flagship product, the Collegiate Learning Assessment (CLA), or its sister assessment, the College and Work Readiness Assessment (CWRA).
Performance assessments are open-ended instruments that require students to demonstrate their knowledge, skills, and abilities by generating their own solutions and responses to a given problem rather than selecting the correct answer from a given list. This type of assessment is directly aligned with current national reform efforts in the K-12 arena, which are aimed at improving teaching and learning (NGA & CCSSO, 2010a, 2010b; Partnership for 21st Century Skills, 2012). The CLA was originally designed to measure an institution's contribution, or value-added, to the development of the higher-order thinking skills of its students (Klein & Benjamin, 2008; Klein, Benjamin, Shavelson, & Bolus, 2007). Therefore, the institution, not the individual student, was the primary unit of analysis. This approach allowed institutions to compare their improvements on the CLA with those at similarly selective institutions and use that information to improve teaching and learning. Ten years later, CAE is launching a new and improved version of the CLA, the CLA+. The CLA+ retains aspects of the CLA that have made it novel and indispensable for educational improvement. Chief among these is the Performance Task (PT). The original CLA PTs assessed four components of higher-order skills: Analytic Reasoning and Evaluation, Problem Solving, Writing Effectiveness, and Writing Mechanics. The CLA+ PTs are improved versions of this original concept and remain the anchor of the assessment. The new PTs measure similar constructs: Analysis and Problem Solving, Writing Effectiveness, and Writing Mechanics. However, new Selected Response Items (SRIs), which measure analysis and problem-solving skills, are now being introduced. These selected-response items, like the PTs, are all anchored to documents that emulate real-world scenarios or problems. They are far from the typical recall and recognition multiple-choice items seen in many other standardized assessments.
CAE decided to include selected-response items in the CLA+ in order to improve the precision of student-level results. This report provides an overview and results of the pilot study for the CLA+. The assessment approach and the structure of the test distinguish the CLA+ from other assessments of critical thinking. During the first 60 minutes of the test, students complete an integrated performance task that mirrors a real-world challenge that could be encountered in a work or academic environment. Students are provided with three or more documents, such as a data table or graph, a newspaper article, a research report, or other critical information sources they typically encounter in real-world settings. Students are asked to critically read and analyze the information presented in the documents and then generate a written response. Typically, students are asked to make a decision about the scenario presented in the PT, provide supporting evidence from the documents, and refute the opposing argument. The final 30 minutes of the CLA+ consist of a set of 25 SRIs. These selected-response items are aligned to the same critical thinking skills assessed by the PTs. This section of the examination assesses three different constructs: scientific and quantitative reasoning (10 items), critical reading and evaluation (10 items), and critiquing an argument (5 items). Like the Performance Tasks, each problem set is anchored to authentic source documents that require careful analysis and evaluation of information.

Reliability and Validity of CLA+

The reliability and validity data reported in this paper pertain to a pilot study of the CLA+ conducted during the spring of 2012 with four participating higher-education institutions. The version of the CLA+ used for the pilot study was longer than the current CLA+. It consisted of two 50-minute PTs and a set of 30 SRIs.
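The operational test structure described above (one 60-minute PT plus a 30-minute SRI section of 25 items split across three constructs) can be sketched as a simple blueprint. The field names below are illustrative only, not CAE's actual data model:

```python
# Hypothetical blueprint of the operational CLA+ as described in this report;
# structure and field names are illustrative, not CAE's internal schema.
CLA_PLUS_BLUEPRINT = {
    "performance_task": {
        "minutes": 60,
        "constructs": [
            "Analysis and Problem Solving",
            "Writing Effectiveness",
            "Writing Mechanics",
        ],
    },
    "selected_response": {
        "minutes": 30,
        "sections": {
            "scientific and quantitative reasoning": 10,
            "critical reading and evaluation": 10,
            "critiquing an argument": 5,
        },
    },
}

# The three SRI sections together account for the 25-item total.
total_sri_items = sum(CLA_PLUS_BLUEPRINT["selected_response"]["sections"].values())
print(total_sri_items)  # → 25
```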
Participants in the pilot study had a total of 150 minutes to complete the assessment. Based on feedback from the participating institutions and current CLA clients, CAE shortened the CLA+ from 150 minutes to 90 minutes to be consistent with the current CLA test time. The operational CLA+ will consist of one 60-minute PT and 25 SRIs to be completed in 30 minutes. While the results presented in this paper pertain specifically to data from the CLA+ pilot study and the traditional CLA, the operational CLA+ uses versions of the CLA PTs and of the SRIs from the pilot study. We are confident that the reliability and validity of the operational CLA+ will be similar to those of the longer pilot study version. Since there are fewer items, reliability may be slightly lower than that reported here, but sufficient individual student-level reliability will still be achievable.
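The expectation that the shorter operational form will lose some reliability can be quantified with the Spearman-Brown prophecy formula, the standard psychometric tool for predicting the reliability of a shortened or lengthened test. The pilot reliability of 0.85 below is an illustrative assumption, not a figure from the study:

```python
def spearman_brown(reliability: float, length_ratio: float) -> float:
    """Predict the reliability of a test whose length is scaled by
    length_ratio, via the Spearman-Brown prophecy formula:
        rho_new = k * rho / (1 + (k - 1) * rho)
    """
    return (length_ratio * reliability) / (1 + (length_ratio - 1) * reliability)

# Illustrative assumption: suppose the 150-minute pilot form had reliability 0.85.
pilot_reliability = 0.85
# The operational form (90 min) is 60% the length of the pilot form (150 min).
length_ratio = 90 / 150

predicted = spearman_brown(pilot_reliability, length_ratio)
print(round(predicted, 3))  # → 0.773
```

Under this assumed starting point, shortening the test by 40% drops the predicted reliability only modestly (from 0.85 to about 0.77), which is consistent with the report's claim that sufficient student-level reliability remains achievable.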

Citation (APA)
Wing, J., & Lelliott, P. (1999). Reliability and validity of HoNOS. Psychiatric Bulletin, 23(6), 375–375. https://doi.org/10.1192/pb.23.6.375
