MET:Assessing Design

From UBC Wiki

Assessing the design of learning objects remains a complicated process, due to diverging theoretical perspectives, social and cultural contexts, learning objectives, technological design and other intervening factors. Consequently, good assessment must be multidimensional, drawing on a number of criteria that evaluate both the design itself and how well that design supports students in attaining learning outcomes.


Issues Guiding Selection of Assessment Criteria

Before assessing design, a number of issues should be considered to guide the assessment process and the selection of assessment criteria.

According to Diane Goldsmith (2007), one of the first issues to consider when shaping the assessment process is the role of stakeholders in the learning object. Educators should consider who the various stakeholders in the design and implementation of the learning object are, who is driving the design, and what purpose lies behind it. Of equal importance: who is the learning object designed for, and is the design suitable for that audience? When stakeholders are accounted for, the criteria for evaluating learning object design become clearer.

The nature of the criteria used for assessment is also important. Educators need to select assessment criteria that can convince stakeholders a learning object is successful in helping students achieve learning objectives. Questions to ask include: What are the expectations surrounding the performance of the learning object? What expectations surround the ability of students to use the learning object to achieve learning objectives? What kinds of data need to be collected and considered? Do the criteria for assessment meet the demands of the stakeholders? (Goldsmith, 2007) For example, if an institution is investing in the design of an online simulation, it might be prudent to perform a cost-benefit analysis: does the money spent on creating the simulation enhance student learning, reduce classroom time, free up educational and computing resources, and decrease the time the instructor spends with students? Conversely, a cost-benefit analysis would likely be unnecessary for a classroom blog.

The choice of assessment method should also be settled before evaluating learning object design. Educators should check whether any learning or assessment tools are built into the learning object; if so, there may be little need to create a separate assessment. If not, they should decide on the form of the assessment tool (rubrics, scales, surveys or questionnaires) and whether the data collected should be numerical or qualitative, drawing on comments and opinions. If the learning object is still in the design stages, will it be assessed throughout the design process, or once the design is complete? If the object is in use, will formative or summative assessments determine the learning object's value in helping students achieve learning objectives? If formative assessments are used, how often will they be done? Answering these questions gives the educator a clearer picture of how to select criteria for evaluating learning object design.

Once educators have answered the questions that guide the formation of an assessment, they must decide on criteria. Criteria to assess learning object design fall into two main categories, and any comprehensive assessment incorporates criteria from both: 1) technical aspects and 2) pedagogical perspective.

Technical Aspects

Technical criteria to assess learning object design fall into three areas: affordances, metadata and design heuristics.

Design Affordances

A tremendously useful (if not fundamental) tool for instructional designers is the concept of an 'affordance', as first described by J. J. Gibson. Affordances fall into two categories: real and perceived. Real affordances are those aspects of a design that allow users to perform actions. Perceived affordances are the elements of a design that lead the user to perceive what actions are possible, whether or not they actually are. Affordances help designers articulate what is made possible (afforded) by their products or solutions (Norman, 1999).

When creating a learning object, instructors will start by defining their needs and then select the product or solution whose design affordances meet those needs. For example, a technology-assisted learning environment centered around collaboration will likely employ networked hardware and software, as these afford peer-to-peer collaboration and interaction.

Limitations of Affordances

Affordances help educators make quick assessments of the suitability of products and solutions for instructional purposes. However, a significant drawback of design affordances is that they provide no information about the degree to which a particular feature is afforded by an interface. It is therefore difficult to judge the suitability of one product or solution over another if both claim to afford the same usages and behaviours. Put another way, if educators were to go by affordances alone, they would certainly select appropriate tools, but not necessarily the best tools for their purposes. For example, typewriters and digital word processors both afford writing, but one is generally a better choice than the other.

Metadata

The IEEE Learning Technology Standards Committee created a standard definition of learning object metadata. Metadata is the information that accompanies the content of a learning object: object type, author, owner, terms of distribution, unique identifier, explanation, size (granularity), data type and example (IEEE, 2005; Chrysostomou and Papadopoulos, 2008). When assessing learning object design, evaluating metadata clarifies the extent to which an object is compatible with other software and can actually be used. For example, a small learning object can easily be uploaded and downloaded, whereas large learning objects, such as simulations, may present difficulties. Data type refers to the file type, which can affect compatibility with other learning objects, operating systems and learning management systems. Educators need to be able to evaluate the compatibility of learning objects with the technology they intend to use to deliver them.
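The kind of metadata check described above can be sketched in code. The following is a minimal, illustrative sketch only: the field names are taken from the list above, but the dictionary layout, the helper function and its thresholds are assumptions, not part of the IEEE standard.

```python
# Illustrative sketch: learning-object metadata as a plain dict, using the
# fields listed above (IEEE, 2005). Field names and values are examples only.
lo_metadata = {
    "object_type": "simulation",
    "author": "J. Smith",
    "owner": "Example University",
    "terms_of_distribution": "CC BY-NC",
    "identifier": "lo-00142",
    "explanation": "Interactive simulation of projectile motion",
    "size_mb": 250.0,  # size (granularity)
    "datatype": "application/x-shockwave-flash",
}

def flag_compatibility_issues(meta, max_upload_mb=50, supported_types=None):
    """Return a list of potential delivery problems for this object.

    The size limit and supported-type set are hypothetical institutional
    constraints, standing in for a real LMS's actual limits.
    """
    supported_types = supported_types or {"text/html", "video/mp4"}
    issues = []
    if meta["size_mb"] > max_upload_mb:
        issues.append("object may be too large to upload easily")
    if meta["datatype"] not in supported_types:
        issues.append("datatype may not be supported by the target LMS")
    return issues

print(flag_compatibility_issues(lo_metadata))
```

Run against the example object, the check flags both its size and its file type, mirroring the two compatibility concerns (upload size and data type) raised in the paragraph above.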

Limitations of Metadata

Relying solely on metadata to assess learning objects is problematic because metadata describes only the technical aspects of a learning object. It does not assess the object's potential uses for instruction, nor does it indicate the types of learners it suits, its subject matter or its learning objectives. Evaluation is restricted to a technical analysis, and educators must rely on their own technical knowledge to perform assessments, which can be challenging for less technologically aware educators.

Design Heuristics

Design heuristics refer to the usability of a learning object. Usability is an important consideration in instructional design and in the evaluation of LOs. Criteria to assess design heuristics include reusability, user interface, interactivity and ease of use.

Stephen Downes argues that useful LOs must be reusable, either as individual self-contained objects or as parts of a group. The implication for educators is that they need to evaluate the degree to which any LO can be reused. The difficulty is that no single object suits every learning situation; it would be like saying one type of car is best suited for every driver and every form of terrain. Educators need to evaluate whether pre-designed learning objects will meet their teaching and technical requirements.

Usability also pertains to the physical design elements of a learning object. Educators should evaluate user interfaces against aesthetic criteria such as professional appearance and clarity of fonts and graphics. When contemplating usability, educators need to assess whether the screen layout and user interface are easy to read and allow students to navigate the object easily (Kay and Knaack, 2008; Bates and Poole, 2003). For example, one can ask, "Can students read instructions and make choices easily?"

Fundamental to assessing LO design is determining whether the type of interaction the LO provides matches the educator's needs. For example, if an educator wishes to teach new vocabulary, a blog that allows only passive reading might not be nearly as effective as a learning object that uses flashcards and fill-in-the-blanks and requires students to memorize and apply new vocabulary. In this case, passive interaction would not suit the educator's needs.

Limitations of Design Heuristics

As with metadata and affordances, evaluating design primarily from a heuristic perspective limits the educator to a technical decision. It does not consider issues of compatibility, ownership, or the ability of the LO design to accommodate pedagogical principles.

Pedagogical Perspective

Learning object design shapes instruction and how learning is achieved. Therefore, when evaluating LOs, criteria from a pedagogical perspective should be considered. The pedagogical perspective pertains to the degree to which a LO adheres to and incorporates elements of learning and instruction methodologies. These criteria are derived from several design evaluation instruments and from research on LO design by a number of researchers, including Alonso et al., John Nesbit, David Wiley, Lucia Baruque and Rubens Melo.

These criteria fall into four categories: theoretical application, compliance with learning objectives, general pedagogical design and student response. The table below highlights the specific criteria, which are elaborated on in the sections that follow.

Pedagogical Design
  • Content accuracy and depth
  • Promotes comprehension of underlying concepts
  • Opportunities for problem solving
  • Real-world context and application

Learning Objective Compliance
  • LO design centered around learning objectives
  • Clarity and articulation of learning objectives within the design
  • LO design breaks objectives down into manageable steps
  • Steps are arranged in a logical sequence to promote learning

Theoretical Application
  • LO based on a learning theory
  • LO design congruent with the learning theories used
  • Theory behind the LO design appropriate for the students

Student Response
  • Easy to use
  • Learning achieved

Pedagogical Design

Alonso et al. (2008), Nesbit et al. (2002) and Wiley (2000) discuss elements of pedagogical design that help establish criteria to assess LO design. The first criterion is the content inherent in the LO. LO content should be accurate and geared toward the student's skill level in the relevant subject (Nesbit et al., 2002), so content accuracy and appropriateness should be measured as part of the design. The educator should also evaluate how effectively the LO gives students opportunities to engage in problem solving and to develop an understanding of the concepts being covered; formative and summative assessments measure the degree to which students have mastered those concepts. Finally, the degree to which students engage in real-world application of knowledge through the LO should be considered, as students desire relevant learning opportunities (Wiley, 2000). The more students apply problem-solving skills and concepts to real-world scenarios within an object, the greater its educational significance (Alonso et al., 2008).

Learning Objective Compliance

Another criterion on which to evaluate the educational significance of LO design is the degree to which the design complies with learning objectives. Alonso et al. (2008) define learning objectives as "specific knowledge that a student has to acquire about a concept or skill and the tasks to be performed". A well-designed LO will be built around a specific objective. Assessment criteria could evaluate the clarity and prominence of learning objectives within the design: educators and students should clearly understand the goal of the lesson contained in the LO. Educators should also reflect on the nature of the learning objective and evaluate how the LO design breaks objectives down into manageable steps for students and how it moves students through those steps. A strong design sequences these steps in a logical order to promote learning (Wiley, 2000).

Theoretical Applications

An important criterion on which to assess LO design is the degree to which the design is congruent with the learning theories used by the educator and/or institution (Baruque and Melo, 2004). If educators adhere to constructivist theories, then creating or using a LO built on behaviourist drill-and-practice, point-and-click methods would not create the learning opportunities for exploration and reflection that constructivist theory advocates. Placing design assessment within a theoretical framework helps educators narrow the criteria for assessing learning achievement based on design, but it also limits the comprehensiveness of the evaluation. There is no single way to predict whether learning will occur, and learning may take place outside the chosen theoretical framework, so any evaluation focused on a single framework will not measure all types of learning.

Student Response

Educators need to include students in evaluating design. Before using a LO, educators should evaluate whether the design suits their students' learning styles and preferences. Students who are self-motivated and like to innovate will require objects designed to let them explore, while students who are performance focused will need objects that emphasize task accomplishment (Baruque and Melo, 2004). Once LOs have been used, educators should involve students in assessing the design. Student reactions often provide insight into design features and into how well objects help them learn. Students can evaluate the design based on ease of use, interaction capabilities, visual design and navigability; other criteria can include the type of learning achieved and content readability (Baruque and Melo, 2004).

Limitations of Pedagogical Perspective

Assessing from a pedagogical perspective helps educators measure the learning outcomes made possible by LO design. While some criteria are easy to measure, others are by nature nebulous, which limits evaluation from this perspective. Learning is an individual and independent process, and it is hard to predict how it will happen. Even though assessments will measure learning outcomes, they cannot accommodate all the learning possibilities arising from the design of a LO. The individualized nature of learning also makes it difficult to establish criteria and standards that measure the value and meaningfulness students attach to learning. Thus, when educators wish to measure the ability of a design to afford learning, it will be difficult to assess the precise value and nature of the learning that has occurred. The implication is that educators should keep an open mind when assessing design based on learning outcomes.

Assessment Tools

According to Nesbit et al. (2002), there are several evaluation models and tools to assess LO design, all of which use both technical and pedagogical criteria. These models include:

  • The Learning Object Review Instrument (LORI) was developed for the e-Learning Research and Assessment Network (e-Lera). Reviewers evaluate LOs across nine criteria using a five-point scale: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, accessibility, reusability and standards compliance. Reviewers are also able to write comments about the LO design. (Richards and Nesbit, 2004)

  • The Convergent Participation Evaluation Model was created by John Nesbit, Karen Belfer and J. Vargo for use in conjunction with LORI. It involves a collaborative evaluation panel that reviews LOs through two cycles. In the first cycle, a moderator brings together a panel of reviewers, who evaluate the LO independently using LORI. In the second cycle, the panel comes together and the moderator leads a discussion to integrate the individual reviews, which are published in the final phase. (Nesbit et al., 2002)
    [Image: Convergent Participation Model, Nesbit et al., 2002]
  • The Learning Object Evaluation Scale for Students (LOES-S) was designed by Robin H. Kay and Liesel Knaack. It measures learning from the student's perspective: students assess how much they learned from the object and how involved they were in using it. The scale also measures elements of object design, including animation, graphics, ease of use and interactivity. (Kay and Knaack, 2008)
  • MERLOT (Multimedia Educational Resource for Learning and Online Teaching) is not an assessment scale but a LO repository. However, it frequently appears in lists of LO evaluation instruments because it uses a built-in peer review system: editors lead a board of peer reviewers (educators) who assess LOs in the repository against three general criteria (quality of content, potential effectiveness as a teaching-learning tool, and ease of use), each with a series of questions reviewers must answer. A detailed list of questions is available on the MERLOT website. (MERLOT.org, 2011)
  • The Learning Object Analysis Sheet was created by Alive-tek, a company that specializes in designing and building online learning environments. It is a form for evaluating design from aesthetic, usability and pedagogical perspectives. (alivetek.com, 2007)
  • Bates and Poole (2003) created the SECTIONS model, a framework for evaluating the use of technology in education that can also be applied to assessing LO design. The model consists of the following criteria:

S – Students: can students access the technology easily, and does it suit them?

E – Ease of use and reliability.

C – Cost to create or use, per learner.

T – Teaching and learning: what kind of teaching and learning does the technology promote?

I – Interactivity: what kind of interaction does it provide?

O – Organizational issues: does the institution or organization present any barriers to implementing the technology?

N – Novelty: how new is the technology? Will students focus more on learning the technology than on the concept?

S – Speed: how quickly can the technology be designed, adapted or created?
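The SECTIONS criteria above can be expressed as a simple checklist. The sketch below is a hypothetical illustration only: the yes/no question phrasing and the pass/fail scoring are assumptions for demonstration, not part of Bates and Poole's model, which calls for judgement rather than binary answers.

```python
# Hypothetical sketch of the SECTIONS framework (Bates and Poole, 2003) as a
# checklist. Question wording and the yes/no scoring are illustrative only.
SECTIONS_QUESTIONS = {
    "Students": "Can students access the technology easily?",
    "Ease of use": "Is the technology easy to use and reliable?",
    "Cost": "Is the cost to create or use, per learner, acceptable?",
    "Teaching": "Does it promote the desired kind of teaching and learning?",
    "Interactivity": "Does it provide the kind of interaction needed?",
    "Organizational issues": "Is the institution free of barriers to it?",
    "Novelty": "Will students focus on the concept, not the technology?",
    "Speed": "Can it be designed, adapted or created quickly enough?",
}

def sections_review(answers):
    """Return the SECTIONS criteria that were answered 'no' (i.e. concerns)."""
    return [criterion for criterion, ok in answers.items() if not ok]

# Example: a technology that passes every criterion except cost.
answers = dict.fromkeys(SECTIONS_QUESTIONS, True)
answers["Cost"] = False
print(sections_review(answers))  # -> ['Cost']
```

A checklist like this surfaces which criteria need further discussion; the model itself leaves the weighing of those concerns to the educator or institution.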

An example rubric, adapted from one designed for an ETEC 565 assignment at the University of British Columbia (September 2010), can be found here.
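As a worked example of how an instrument like LORI aggregates ratings, the sketch below averages one reviewer's scores across the nine criteria listed earlier. The criteria names come from Richards and Nesbit (2004); the scoring convention (a simple mean, with "not applicable" items excluded) is an assumption for illustration, not e-Lera's published procedure.

```python
# Illustrative sketch: aggregating one reviewer's LORI ratings.
# The nine criteria are those listed in the text; the averaging rule
# is an assumed convention, not part of the instrument itself.
LORI_CRITERIA = [
    "content quality", "learning goal alignment", "feedback and adaptation",
    "motivation", "presentation design", "interaction usability",
    "accessibility", "reusability", "standards compliance",
]

def lori_score(ratings):
    """Mean of the 1-5 ratings, ignoring criteria marked None (not rated)."""
    scored = [r for r in ratings.values() if r is not None]
    for r in scored:
        if not 1 <= r <= 5:
            raise ValueError("LORI ratings are on a five-point scale")
    return sum(scored) / len(scored)

# Example review: 4 on every criterion, with accessibility left unrated.
review = dict.fromkeys(LORI_CRITERIA, 4)
review["accessibility"] = None
print(round(lori_score(review), 2))  # -> 4.0
```

In the Convergent Participation model described above, individual scores like this would be computed independently in the first cycle and then reconciled through the moderated panel discussion in the second.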

References

  • AliveTek, Inc. (2007). Affordable e-Learning Solutions. Retrieved February 28, 2011, from http://www.alivetek.com/
  • Alonso, F., et al. (2008). Learning objects, learning objectives and learning design. Innovations in Education and Teaching International, 45(4), 389–400.
  • Baruque, L. B., & Melo, R. N. (2004). Learning theory and instructional design using learning objects. Journal of Educational Multimedia and Hypermedia, 13(4), 343–370.
  • Bates, A. W., & Poole, G. (2003). A framework for selecting and using technology. In Effective Teaching with Technology (pp. 75–105). San Francisco: Jossey-Bass.
  • Chrysostomou, C., & Papadopoulos, G. (2008). Towards an object-oriented model for the design and development of learning objects. International Journal on E-Learning, 7(2), 219–243.
  • Kay, R. H., & Knaack, L. (2008). Assessing learning, quality and engagement in learning objects: The Learning Object Evaluation Scale for Students (LOES-S). Educational Technology Research and Development, 57(2), 147–168.
  • Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6(3), 38–43.
  • Richards, G., & Nesbit, J. (2004). The teaching of quality: Convergent participation for the professional development of learning object designers. Revue internationale des technologies en pédagogie universitaire, 1(3), 56–63.
