This page originally authored by Marc Kampschuur and Phillip Chatterton (2007)
The term "assessment" has been used in many contexts to describe many different activities. To define assessment, one must know who or what is being assessed and to what end. In defining assessment one cannot overlook the work of Peter Schwartz and Graham Webb. In their book "Assessment: Case Studies, Experience and Practice from Higher Education", they credit Freeman and Lewis with having essentially defined assessment holistically during the 1980s: "Freeman and Lewis identify a number of purposes of assessment including selection; certification/accreditation; maintenance of standards; description; motivation; improving learning; and improving teaching" (Schwartz & Webb, 2002).
Even these definitions, however, do not fully capture the broad nature of assessment. Assessment can mean everything from grading a single student on a single quiz in a single course to measuring reams of data, gathered over many years, on every student who attended an entire institution. For example, at one end of the scale you might have a teaching assistant assessing whether or not a student has learned the parts of a cell in a microbiology quiz question, and at the other end you might have a CIO poring over data to assess the IT literacy of all incoming students. These broad categories of scale (or realms) are made more complex still by the many different approaches and tools that are available.
Assessment is widely recognized as the main "driver" for learning (Schwartz & Webb, 2002), and as a result it becomes very important to examine the technologies being used to assist these important practices. The main purpose of this wiki entry is to examine assessment in general and then to focus on how technology fits into the current picture.
History of Assessment
According to Jeanne Pfeifer, a professor at California State University at Sacramento, modern assessment can be described in three historic stages: norm-referenced assessment, criterion-referenced assessment, and finally authentic assessment.
Norm-Referenced Assessment (1910s - 1970s)
Norm-referenced assessment began in the early 1910s, when the military used IQ tests on draftees entering WWI. Norm-referenced tests, according to Pfeifer, were based on a "bell curve": they used standardized tests and compared students to one another. This means that assessment tasks must be nearly identical in order to compare results from one student to another. (UTAS Site, 2007)
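The comparison-to-the-cohort idea behind norm referencing can be sketched in a few lines of Python. Everything here (the raw scores, the function name) is invented for illustration; real standardized tests use far more careful norming samples:

```python
from statistics import NormalDist, mean, stdev

def norm_referenced_report(scores):
    """Rank each raw score against the cohort itself: a z-score
    (distance from the cohort mean in standard deviations) and an
    approximate percentile rank under a fitted bell curve."""
    mu, sigma = mean(scores), stdev(scores)
    curve = NormalDist(mu, sigma)
    return [(s, round((s - mu) / sigma, 2), round(curve.cdf(s) * 100))
            for s in scores]

# Students are compared to one another, not to a fixed standard.
for raw, z, pct in norm_referenced_report([55, 62, 70, 78, 85]):
    print(f"raw={raw}  z={z:+.2f}  percentile~{pct}")
```

Note that a score's meaning here depends entirely on who else sat the test, which is exactly the property criterion-referenced assessment later moved away from.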
Criterion-Referenced Assessment (1970s - 1980s)
Criterion-referenced assessment began in the 1970s. It was based on specific standards being established, and on the premise that certain information/learning is necessary before continuing to the next steps of learning. Students were compared to the criterion, not to one another, though they were still largely assessed using banks of testing items.
The mastery learning approach stipulates that a student must meet a level of competence in order to gain a given level of mastery. The results are simply calculated in terms of pass/no pass, go/no go, etc.
The mastery plus approach allows students to gain a level of competency and then reach for the next level. This allows for a better system of selection and ensures that the institution can gauge the maximum level of mastery a student possesses. The idea is that a student can try to jump as high as possible instead of being held back from proving that maximum level of mastery.
The criterion-based approach stipulates that student work will be assessed against public criteria. This is designed to be more transparent and to measure student results and performance against a known set of criteria. It does still require some judgment on the part of the instructor to determine whether or not the student has met the stipulated public criteria.
(UTAS Site, 2007)
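The pass/no-pass and "mastery plus" ideas above can be sketched as a simple cutoff check. The level names and thresholds below are hypothetical, chosen only to show the difference between stopping at the first cutoff and climbing to the highest one met:

```python
def mastery_result(score, cutoffs):
    """Mastery-learning style check: compare a score against a list of
    (level, cutoff) pairs in ascending order and report the highest
    level reached, or 'no pass'. Plain mastery learning cares only
    whether the first cutoff is met; 'mastery plus' keeps climbing."""
    passed = [level for level, cut in cutoffs if score >= cut]
    return passed[-1] if passed else "no pass"

# Hypothetical competency levels and thresholds.
levels = [("pass", 60), ("merit", 75), ("distinction", 90)]
print(mastery_result(58, levels))   # below every cutoff
print(mastery_result(82, levels))   # clears pass and merit
```

Unlike the norm-referenced case, the result depends only on the fixed criteria, not on how classmates performed.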
Authentic Assessment (1980s - Present)
Authentic assessment began in the 1980s. It was based on the realization that not all of what is taught can be assessed by paper-and-pencil exams or multiple-choice tests. Authentic assessment requires that students demonstrate what they have learned and is performance based (drawing on constructivist learning theory). Assessment becomes distinct from testing and grading, and there are many different ways of assessing students.
(Pfeifer, J. CSUS, PPT Presentation, date unknown)
Realms of Assessment
Individual assessment revolves around assessing a single student. There are many methods that can be used to assess the individual. This realm of assessment is designed to compare an individual's results against various metrics: how they compare to fellow students, how they compare to established standards, or, more authentically, what they actually know and how they have progressed toward becoming an expert.
Group assessment revolves around assessing groups of students. There are many methods that can be used to assess groups. This realm of assessment is designed to compare the results of a group of individuals against various metrics: how they compare to other groups, how they compare to established standards, or, more authentically, what they actually know and how they have progressed toward becoming experts as a collective.
Program assessment works at a higher level than the individual. It is designed to gather data on what is happening within a program, measure that data against defined success criteria, and, ideally, alter processes in order to improve future results.
Institution-wide assessment is focused on goals that support the entire institution. Usually supported by a group dedicated to assessment across the enterprise, institution-wide assessment is designed to look at how effective the entire institution is at meeting its mission statement. It can be a complex and laborious process involving many individuals. Most campuses in higher education, for example, have an Office of Institutional Assessment. This office is tasked with setting assessment goals and responsibilities and then measuring success toward those goals. This can be a difficult process, since many individuals within an institution must take part even if they have no formal education or background in assessment practices. There is an ongoing struggle in higher education to create a "culture of assessment" so that the right data can be measured and the right steps taken to improve processes. Setting standards and measures can be an important step in this direction.
An example of this is California State University at Sacramento's ongoing policy of assessing how CSUS is meeting individual and collective goals.
If you do not have a standard way of measuring various assessment tools against a common set of goals, it can be very difficult to know whether you have been successful.
Types of Assessment
Depending on what stage in the program the assessment is taking place, three types of assessment are typically used: diagnostic, formative and summative.
Diagnostic assessments are used to determine what students already know and can help to identify any gaps in knowledge that exist before instruction in a topic or unit begins. It would be inappropriate for a diagnostic assessment to be reported in student grades, since diagnostics are used to identify content and inform the instructor of areas of emphasis or areas for review.
Formative assessments are used in the midst of instruction to help inform the teacher about student learning. They help to identify difficulties that students may be having and provide the teacher and student with feedback on what has been successful and what may need to be reinforced. Formative assessment includes more fluid types of assessment such as descriptive feedback, criteria setting, goal setting, questioning strategies, self and peer evaluation, observation and student record keeping. According to Thomas Guskey (2007), formative assessments should not be used for grading purposes. Formative assessment is designed to gauge the formation of knowledge, not cumulative student knowledge. (NMSA Web Site, 2007)
Summative assessment covers key stages in the learning process and is used to determine what students know and what they do not know. It includes items like province-wide exams, benchmark assessments, end-of-unit exams, end-of-term exams, assignments and "pop quizzes". Summative assessments are designed to gauge cumulative student knowledge, not the ongoing knowledge-building process. (NMSA Web Site, 2007)
Assessment and Technology
Technology has always played a role in education. Its role in assessment is an area that continues to evolve at a rapid rate. As more digital technologies, Web 2.0 tools, and the like become available to students, so grows the student's ability to demonstrate his or her learning in a variety of ways. How we assess student learning has also benefited from technology.
Computer Adaptive Assessment
Computer Adaptive Assessment (CAA) creates assessments that tailor themselves to the student's knowledge of the outcomes. This relatively new tool uses previous correct or incorrect responses to find out precisely which outcomes a student has mastered and which they have not. In a traditional exam, the teacher designs a test based on the outcomes in the class, and if the student has not mastered the content they will simply answer incorrectly. CAA instead chooses progressively easier or more challenging questions based on the student's responses, providing very clear and specific feedback about what the student actually knows. This is particularly useful when assessing high-achieving students, since they will not be forced to answer multiple questions about outcomes they have clearly mastered. Instead, the program will continue to find more challenging questions and help us to truly assess what the student knows.
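A minimal sketch of the CAA loop described above, assuming a hypothetical question bank keyed by difficulty and a simulated student. A real CAA system would typically use item-response theory to estimate ability, rather than this simple step-up/step-down rule:

```python
import random

def adaptive_quiz(bank, answer_fn, num_items=5):
    """Minimal computer-adaptive loop: 'bank' maps a difficulty level
    (1 = easiest) to a list of questions. A correct answer raises the
    difficulty of the next question; an incorrect answer lowers it.
    answer_fn simulates (or collects) the student's response."""
    difficulty, hardest = 3, max(bank)
    asked = []
    for _ in range(num_items):
        question = random.choice(bank[difficulty])
        correct = answer_fn(question, difficulty)
        asked.append((question, difficulty, correct))
        # Step up after a correct answer, down after an incorrect one.
        difficulty = min(hardest, difficulty + 1) if correct else max(1, difficulty - 1)
    return asked, difficulty  # final difficulty oscillates near the student's ceiling

# Hypothetical bank of 3 questions per level, and a simulated student
# who reliably answers correctly up to difficulty 4.
bank = {d: [f"q{d}.{i}" for i in range(3)] for d in range(1, 6)}
student = lambda question, d: d <= 4
history, level = adaptive_quiz(bank, student, num_items=6)
```

After a few items the loop settles around the hardest level the simulated student can handle, which is the "very clear and specific feedback" the paragraph above describes.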
References
AACSB International (2007). Assessment Resource Center. http://www.aacsb.edu/resource_centers/assessment/ov-process-define.asp
Assessment in Higher Education. http://ahe.cqu.edu.au/
Guskey, T. R. (2009). Practical Solutions for Serious Problems in Standards-Based Grading. California: Corwin Press.
Pfeifer, J. (date unknown). PPT presentation, California State University at Sacramento.
Schwartz, P. & Webb, G. (2002). Assessment: Case Studies, Experience and Practice from Higher Education. New York: Routledge.
UTAS (2007). Centre for the Advancement of Learning and Teaching (CALT), University of Tasmania. http://www.utas.edu.au/tl/supporting/assessment/judgement.html