Assessment of Experiential Learning

Assessment is an integral part of the experiential learning process. It provides a basis for “participants and instructors alike to confirm and reflect on the learning and growth that has and is occurring.” Further, proper assessment methods engender a “reflective process that ensures continued growth long after specific learning opportunities have been completed” (Bassett & Jackson, 1994, p. 73). Without the “appropriate assessment tool, such as a self-assessment, the educator might not ever realize that significant learning occurred. Therefore, classroom educators should search for assessment techniques that measure more than just the ability to remember information” (Wurdinger, 2005, p. 69).

The assessment of experiential activities presents a unique problem to instructors. Because in experiential activities the means are as important as the ends, “it is important to look at assessment as more than outcome measurement. While outcomes are important to measure, they reflect the end product of assessment, not a complete assessment cycle” (Qualters, 2010, p. 56). It is therefore necessary to devise unique assessment methods to measure success in both the process and the product—each area requires separate learning outcomes and criteria (Moon, 2004, p. 155).

Another difficulty when developing assessments has to do with the variability of experiential activities. Because students are working on different projects, or participating in different external activities, they can’t all be expected to learn the exact same things, and each student may take away something different from the experience. Beyond the variability of activities, there is also the variability amongst the different students.

In experiential learning, these two types of variables are often uncontrollable, and thus have to be accounted for when developing assessment methods. Ewert and Sibthorp have broken these “confounding variables” down into three areas based on which part of the experiential learning cycle they affect: precursor, concomitant, and post-experience variables (2009).

Precursor variables “exert their influence prior to the beginning of an experiential education experience.” They are “the antecedent that an individual ‘brings into’ the experience.” These variables include:

  • Prior knowledge and experience: “Participants with more or less past background and knowledge have both the ability to learn and benefit from (or not benefit from) different lessons from the experience.”
  • Demographics: The age, sex, and socio-economic status of students have an impact on what students learn.
  • Pre-experience anxiety, motivations, and expectations: These three items can “influence a participant’s readiness to learn, engage in, and benefit from the experience.”
  • Self-selection into a specific program or experience: The various reasons for why each student has chosen to participate in an experiential learning activity can create fundamentally different cohorts every time the program is run. The inherent differences between groups or individuals are often difficult to isolate from the “variance between experiential education experiences” (Ewert & Sibthorp, 2009, p. 378).

Concomitant variables “often arise during an experiential education experience and influence the outcomes during, or immediately after, that experience” (Ewert & Sibthorp, 2009, p. 380). These variables include:

  • Course specifics: This refers to the structure of the program, including the length, the specific activities, and the influence of the instructors.
  • Group characteristics: The attributes and characteristics of the individual students make each group different. This impacts both their individual experiences as well as the experience of the cohort.
  • Situational impacts: These “specific, non-structured, or unanticipated events” can have either a positive or a negative effect on learning.
  • Frontloading for evaluation: This is a type of experimental bias in which the instructors or students “consciously or unconsciously influence the student results because of the evaluation process.” For instance, instructors might alter the experience to match the findings they hoped to see, or students “might, through a pretest, be predisposed to learning certain course outcomes” (Ewert & Sibthorp, 2009, p. 381).

Post-experience variables exert their influence after the completion of an experiential education activity. These variables include:

  • Social desirability or self-deception positivity, in which students respond to an evaluation survey with what they think instructors want to hear, rather than what they really feel.
  • Post-experience euphoria, in which a short-term feeling of excitement and accomplishment obscures the true feelings of a participant.
  • Post-experience adjustment or re-entry issues, in which students need time to adjust back to “normal” life after completing their experiential activity. Data collected during this period may not reflect how students will feel once they have some distance from the program.
  • Response shift bias can occur when “the testing or measurement of a self-perception variable occurs at different times, and the participant’s understanding of the variable changes over this time period.” For instance, a student may, through the learning they experience over the course of their program, change their view of what constitutes “productive teamwork skills,” and thus their self-assessment at the beginning of the program cannot be accurately compared to their self-assessment after the program, as these assessments would be measuring different things (Ewert & Sibthorp, 2009, p. 382).

Effective assessment methods must be able to take these variables into account, and be able to both “separate perceived learning from genuine learning” as well as capture accurate levels of growth and change in students (Qualters, 2010, p. 59). To accomplish this, Qualters provides this list of criteria for good assessment:

“ongoing, aimed at improving and understanding learning, had public and explicit expectations, set appropriate standards, and was used to document, explain, and improve performance. But it also seemed reasonable, doable, and logical to the faculty, as it drew on methods and models of the discipline as well as educational methodologies” (Qualters, 2010, p. 60).

To set about creating effective assessment methods, Qualters suggests asking the following “essential questions”:

  1. Why are we doing assessment?
  2. What are we assessing?
  3. How do we want to assess in the broadest terms?
  4. How will the results be used? (Qualters, 2010, p. 56)

Having produced answers to the essential questions, Qualters suggests that the next step is to move from the general to the more specific by answering “burning questions.”

“These are the questions that all parties involved in the experiential experience are really concerned about answering. For example, faculty may be concerned with capturing whether or not students are using classroom theory in practice; students may wonder how the experience enhances their discipline knowledge; administrators may be concerned with how accreditation will view these activities; staff may be apprehensive about the processes involved in setting up the activities; and the site personnel may be anxious about how student involvement affects their clients. By eliciting burning questions, you can develop and prioritize assessment mechanisms to provide useful answers, not just accumulate data” (Qualters, 2010, p. 57).

With the answers to these questions in hand, instructors can then go about developing their assessment strategy. Qualters recommends the use of Alexander Astin’s I-E-O (Input-Environment-Output) model:

  • Input: Assess students’ knowledge, skills, and attitudes prior to a learning experience
  • Environment: Assess students during the experience
  • Output: Assess student success after the experience (Qualters, 2010, p. 58)

To demonstrate the use of this model in the process of developing an effective assessment method, Qualters provides the example of a health education course in which students worked with the homeless:

  • Input: Students were surveyed for their attitudes and assumptions about the homeless, their conceptions of the homeless community, their concerns, and what they hoped to gain. Their current skill level was assessed through a “mini observed structured clinical experience.”
  • Environment: During the experience, students were required to keep structured reflective journals as well as participate in collective reflection. They were also given periodic structured observations to assess any increase in their knowledge and skill.
  • Output: After the experience, students were given the same attitudinal survey, they were asked to identify any insights or thoughts they had about working with the homeless, and they were given another “mini observed structured clinical experience” to assess any gains in skill level.

Qualters believes this method was successful for the following reasons:

  • Because students conducted their necessary tasks only as part of the experiential portion of the course (i.e., practicing taking blood pressure with the homeless community on site, never partly in class and partly on site), skill development could be measured absent any of Ewert and Sibthorp’s confounding variables.
  • The observations, journals, and collective reflections “allowed the faculty to understand student learning processes as skills improved and attitudes evolved.”
  • The pre- and post-experience surveys were “able to surface student attitudes and misconceptions prior to going into the community, an important step in addressing and structuring the experience to prove or disprove their beliefs… faculty could understand how students were thinking, direct their reflection to make connections with prior knowledge and theory, and help them identify new insights as they reflected through writing and in groups.” The results from these surveys not only improved the current course, but also gave instructors the data needed to improve future iterations of the course (Qualters, 2010, p. 60).

When developing assessments for experiential learning, it is also important to keep the assessment method student-centered. Much in the same way that students are given power over their learning in the experiential classroom, they should also be given a role in assessing their own learning. Wurdinger reports on three ways in which students can conduct self-assessment in experiential learning:

  1. Student involved assessment allows students to define how their work will be judged. They choose what criteria will be used to assess their work, or help create a grading rubric.
  2. Student involved record keeping allows students to keep track of their work. This could be done through the creation of a portfolio that documents student progress over time.
  3. Student involved communication allows students to present their learning to an audience, such as with an exhibit or conference (2005, p. 70).

Another important point to remember when designing assessments is that although in many cases what is being assessed in the experiential classroom is reflective work, assessment shouldn’t be aimed directly at the actual reflective writing of learners. The reflective writing should be seen as an aid to learners in working through a process, not as a final product. Rather than assess such raw material, require students to re-process their reflection in the form of a more finished report or project. Students should be required to use their primary reflective material “either to support an argument or to respond to a question.” It may even be “useful to ask students to hand in their reflective writing as evidence that it has been completed in an appropriate manner” or require them to “quote material from their reflective writing” in their finished product. Requiring students to “reflect on their primary reflections is likely to yield deeper levels of reflection with improved learning” (Moon, 2004, p. 156).

Extract from:
Prepared by Michelle Schwartz, Research Associate, for the Vice Provost, Academic, Ryerson University, 2012
