Green schoolyards promote academic achievement through hands-on, experiential learning and by enhancing the cognitive and emotional processes important for learning. They provide experiential learning across many subjects, and can help students focus attention and regulate behavior, enhance attitudes and engagement with school, and support creativity, critical thinking, and problem solving. Even seeing nature and greenery from school buildings can foster positive academic outcomes.

Methods for Assessing Experiential Activities

There are many potential ways to assess experiential activities, both external and internal. These methods are tied to reflection, helping learners to focus their learning while also producing a product for assessment purposes. Moon lists several examples:

  • “Maintenance of a learning journal or a portfolio
  • Reflection on critical incidents
  • Presentation on what has been learnt
  • Analysis of strengths and weaknesses and related action planning
  • Essay or report on what has been learnt (preferably with references to excerpts from reflective writing)
  • Self-awareness tools and exercises (e.g. questionnaires about learning patterns)
  • A review of a book that relates the work experience to own discipline
  • Short answer questions of a ‘why’ or ‘explain’ nature
  • A project that develops ideas further (group or individual)
  • Self-evaluation of a task performed
  • An article (e.g. for a newspaper) explaining something in the workplace
  • Recommendation for improvement of some practice (a sensitive matter)
  • An interview of the learner as a potential worker in the workplace
  • A story that involves thinking about learning in the placement
  • A request that students take a given theory and observe its application in the workplace
  • An oral exam
  • Management of an informed discussion
  • A report on an event in the work situation (ethical issues)
  • Account of how discipline (i.e. subject) issues apply to the workplace
  • An identification of and rationale for projects that could be done in the workplace” (2004, p. 166)

Of these methods, Qualters singles out the learning portfolio as one of the most comprehensive methods of assessing experiential learning. Learning portfolios are distinguished from standard professional portfolios by their inclusion of a reflection component. The portfolio thereby becomes more than just “a showcase of student materials,” and instead becomes a “purposefully designed collection connected by carefully thought out structured student reflections.” Beyond assessing student learning, well-constructed portfolios can be used for accreditation, university-wide outcome assessment, and to document and understand the learning process at both the course and program level (Qualters, 2010, p. 60).

John Zubizarreta proposes a simple model for a learning portfolio with three fundamental and interrelated components:

  1. Reflection
  2. Documentation
  3. Collaboration (2008, p. 1).

This conception of a learning portfolio mirrors that of a teaching portfolio, pairing a concise, reflective narrative with a series of appendices containing appropriate evidence for each area of reflection. Zubizarreta believes that “the value of portfolios in improving student learning resides in engaging students not just in collecting representative samples of their work for assessment, evaluation, or career preparation, but in addressing vital reflective questions that invite systematic inquiry” (2008, p. 2). Portfolios engage students in “intellectually challenging, creative, rigorous work,” and serve as both a process and an end product. This recalls the definition of experiential learning as being as much about the means as about the ends, and the necessity of devising assessment methods that measure success in both the process and the product.

Keeping Zubizarreta’s three fundamental components in mind, it is important to remember that there is no single right way of constructing a portfolio; each portfolio will differ depending on the program of study or experiential learning activity. Zubizarreta provides the following generic table of contents, suggesting potential portfolio contents and a logical order that can be used to drive learning:

  1. Philosophy of learning: What, how, when, and why did I learn? A reflective narrative on the learning process, learning style, value of learning
  2. Achievements in Learning: What have I accomplished with my learning? Records—transcripts, course descriptions, resumes, honors, awards, internships, tutoring
  3. Evidence of Learning: What products, outcomes do I have to demonstrate learning? Outcomes—research papers, critical essays, field experience logs, creative displays/performances, data/spreadsheet analysis, lab results
  4. Assessment of Learning: What measures and accounting do I have of my learning? Instructor feedback, course test scores, exit/board exams, lab/data reviews, research project appraisals, practicum reports
  5. Relevance of Learning: What difference has learning made in my life? Practical applications, leadership, relation of learning to personal and professional domains, ethical/moral growth, affiliations, hobbies, volunteer work, affective value of learning
  6. Learning Goals: What plans do I have to continue learning? Response to feedback; plans to enhance, connect, and apply learning, career ambitions
  7. Appendices: How coherently have I integrated evidence with reflections and self-assessments in the portfolio? Selected documentation for areas 1 through 6 (Zubizarreta, 2008, p. 4).

To plan a learning portfolio project, Zubizarreta provides a short rubric that asks instructors to first identify the purpose of the portfolio, and then answer the following questions:

  1. What kind of reflective questions should students address?
  2. What kinds of evidence or learning outcomes would be most useful?
  3. How will students engage in collaboration and mentoring during the process? (Zubizarreta, 2008, p. 4)

The purpose of a learning portfolio “strongly determines the themes of the reflective narrative, as well as the types of documentation or evidence selected in the appendices.” A planning rubric representing this can be a table with three columns—purpose, theme, and evidence—and the content of these columns can be quite broad. For example, if the purpose of the portfolio is “improvement,” then the themes could be “development, reflective inquiry, focus on goals, philosophy of learning,” and the evidence for that could be “drafts, journals, online threaded discussions, emails, statements of goals, classroom assessments, research notes.” If the purpose of the portfolio is “problem solving,” then the themes could be “critical thinking, creativity, application of knowledge, flexibility, curiosity,” and the evidence for that could be “problem-solving log, lab reports, computer programs, spreadsheet data analyses” (Zubizarreta, 2008, p. 5).

No matter what the contents of the learning portfolio, a well-designed project will keep students active, engaged, and reflective, helping them to “own their own learning as more independent, self-directed, and lifelong learners.” To that end, Zubizarreta cites a recent trend amongst universities to supply alumni with perpetual server space, enabling students to maintain their learning portfolios electronically long after their time in university, “a nod toward a true conception of portfolio development as a lifelong commitment to learning” (Zubizarreta, 2008, p. 6).

Extract from:
Prepared by Michelle Schwartz, Research Associate, for the Vice Provost, Academic, Ryerson University, 2012 

Assessment of Experiential Learning

Assessment is an integral part of the experiential learning process. It provides a basis for “participants and instructors alike to confirm and reflect on the learning and growth that has and is occurring.” Further, proper assessment methods engender a “reflective process that ensures continued growth long after specific learning opportunities have been completed” (Bassett & Jackson, 1994, p. 73). Without the “appropriate assessment tool, such as a self-assessment, the educator might not ever realize that significant learning occurred. Therefore, classroom educators should search for assessment techniques that measure more than just the ability to remember information” (Wurdinger, 2005, p. 69).

The assessment of experiential activities presents a unique problem to instructors. Because in experiential activities the means are as important as the ends, “it is important to look at assessment as more than outcome measurement. While outcomes are important to measure, they reflect the end product of assessment, not a complete assessment cycle” (Qualters, 2010, p. 56). It is therefore necessary to devise unique assessment methods to measure success in both the process and the product—each area requires separate learning outcomes and criteria (Moon, 2004, p. 155).

Another difficulty when developing assessments has to do with the variability of experiential activities. Because students are working on different projects, or participating in different external activities, they can’t all be expected to learn the exact same things, and each student may take away something different from the experience. Beyond the variability of activities, there is also the variability amongst the different students.

In experiential learning, these two types of variables are often uncontrollable, and thus have to be accounted for when developing assessment methods. Ewert and Sibthorp have broken these “confounding variables” down into three areas based on which part of the experiential learning cycle they affect: precursor, concomitant, and post-experience variables (2009).

Precursor variables “exert their influence prior to the beginning of an experiential education experience.” They are “the antecedent that an individual ‘brings into’ the experience.” These variables include:

  • Prior knowledge and experience: “Participants with more or less past background and knowledge have both the ability to learn and benefit from (or not benefit from) different lessons from the experience.”
  • Demographics: The age, sex, and socio-economic status of students have an impact on what students learn.
  • Pre-experience anxiety, motivations, and expectations: These three items can “influence a participant’s readiness to learn, engage in, and benefit from the experience.”
  • Self-selection into a specific program or experience: The various reasons why each student has chosen to participate in an experiential learning activity can create fundamentally different cohorts every time the program is run. The inherent differences between groups or individuals are often difficult to isolate from the “variance between experiential education experiences” (Ewert & Sibthorp, 2009, p. 378).

Concomitant variables “often arise during an experiential education experience and influence the outcomes during, or immediately after, that experience” (Ewert & Sibthorp, 2009, p. 380). These variables include:

  • Course specifics: This refers to the structure of the program, including the length, the specific activities, and the influence of the instructors.
  • Group characteristics: The attributes and characteristics of the individual students make each group different. This impacts both their individual experiences as well as the experience of the cohort.
  • Situational impacts: These “specific, non-structured, or unanticipated events” can have a positive or negative effect on learning.
  • Frontloading for evaluation: This is a type of experimental bias in which the instructors or students “consciously or unconsciously influence the student results because of the evaluation process.” For instance, instructors might alter the experience to match the findings they hoped to see, or students “might, through a pretest, be predisposed to learning certain course outcomes” (Ewert & Sibthorp, 2009, p. 381).

Post-experience variables exert their influence after the completion of an experiential education activity. These variables include:

  • Social desirability or self-deception positivity, in which students respond to an evaluation survey with what they think instructors want to hear, rather than what they really feel.
  • Post-experience euphoria, in which a short-term feeling of excitement and accomplishment obscures the true feelings of a participant.
  • Post-experience adjustment or re-entry issues refers to the time that students need to adjust back to “normal” life after they complete their experiential activity. Collecting data during this period may not reflect how the student will feel after they get some distance from the program.
  • Response shift bias can occur when “the testing or measurement of a self-perception variable occurs at different times, and the participant’s understanding of the variable changes over this time period.” For instance, a student may, through the learning they experience over the course of their program, change their view of what constitutes “productive teamwork skills,” and thus their self-assessment at the beginning of the program cannot be accurately compared to their self-assessment after the program, as these assessments would be measuring different things (Ewert & Sibthorp, 2009, p. 382).

Effective assessment methods must take these variables into account, and must be able both to “separate perceived learning from genuine learning” and to capture accurate levels of growth and change in students (Qualters, 2010, p. 59). To accomplish this, Qualters provides this list of criteria for good assessment:

“ongoing, aimed at improving and understanding learning, had public and explicit expectations, set appropriate standards, and was used to document, explain, and improve performance. But it also seemed reasonable, doable, and logical to the faculty, as it drew on methods and models of the discipline as well as educational methodologies” (Qualters, 2010, p. 60)

To set about creating effective assessment methods, Qualters suggests asking the following “essential questions”:

  1. Why are we doing assessment?
  2. What are we assessing?
  3. How do we want to assess in the broadest terms?
  4. How will the results be used? (Qualters, 2010, p. 56)

Having produced answers to the essential questions, Qualters then suggests that the next step be to move from the general to the more specific, answering “burning questions.”

“These are the questions that all parties involved in the experiential experience are really concerned about answering. For example, faculty may be concerned with capturing whether or not students are using classroom theory in practice; students may wonder how the experience enhances their discipline knowledge; administrators may be concerned with how accreditation will view these activities; staff may be apprehensive about the processes involved in setting up the activities; and the site personnel may be anxious about how student involvement affects their clients. By eliciting burning questions, you can develop and prioritize assessment mechanisms to provide useful answers, not just accumulate data” (Qualters, 2010, p. 57).

With the answers to these questions in hand, instructors can then go about developing their assessment strategy. Qualters recommends the use of Alexander Astin’s I-E-O (Input-Environment-Output) model:

  • Input: Assess students’ knowledge, skills, and attitudes prior to a learning experience
  • Environment: Assess students during the experience
  • Output: Assess success after the experience (Qualters, 2010, p. 58)

To demonstrate the use of this model in the process of developing an effective assessment method, Qualters provides the example of a health education course in which students worked with the homeless:

  • Input: Students were surveyed for their attitudes and assumptions about the homeless, their conceptions of the homeless community, their concerns, and what they hoped to gain. Their current skill level was assessed through a “mini observed structured clinical experience.”
  • Environment: During the experience, students were required to keep structured reflective journals as well as participate in collective reflection. They were also given periodic structured observations to assess any increase in their knowledge and skill.
  • Output: After the experience, students were given the same attitudinal survey, they were asked to identify any insights or thoughts they had about working with the homeless, and they were given another “mini observed structured clinical experience” to assess any gains in skill level.

Qualters believes this method was successful for the following reasons:

  • Because students conducted their necessary tasks only as part of the experiential portion of the course (i.e. practicing taking blood pressure with the homeless community, not sometimes in class and sometimes on site), skill development could be measured without the interference of Ewert and Sibthorp’s confounding variables.
  • The observations, journals, and collective reflections “allowed the faculty to understand student learning processes as skills improved and attitudes evolved.”
  • The pre- and post- experience surveys were “able to surface student attitudes and misconceptions prior to going into the community, an important step in addressing and structuring the experience to prove or disprove their beliefs… faculty could understand how students were thinking, direct their reflection to make connections with prior knowledge and theory, and help them identify new insights as they reflected through writing and in groups.” The results from these surveys not only improved the current course, but allowed instructors to gather the necessary data with which to improve future course iterations (Qualters, 2010, p. 60).

When developing assessments for experiential learning, it is also important to keep the assessment method student-centered. In much the same way that students are given power over their learning in the experiential classroom, they should also be given a role in assessing their own learning. Wurdinger reports on three ways in which students can conduct self-assessment in experiential learning:

  1. Student-involved assessment allows students to define how their work will be judged. They choose what criteria will be used to assess their work, or help create a grading rubric.
  2. Student-involved record keeping allows students to keep track of their work. This could be done through the creation of a portfolio that documents student progress over time.
  3. Student-involved communication allows students to present their learning to an audience, such as with an exhibit or conference (2005, p. 70).

Another important point to remember when designing assessments is that although in many cases what is being assessed in the experiential classroom is reflective work, assessment shouldn’t be aimed directly at the actual reflective writing of learners. The reflective writing should be seen as an aid to learners in working through a process, not as a final product. Rather than assess such raw material, require students to re-process their reflection in the form of a more finished report or project. Students should be required to use their primary reflective material “either to support an argument or to respond to a question.” It may even be “useful to ask students to hand in their reflective writing as evidence that it has been completed in an appropriate manner” or require them to “quote material from their reflective writing” in their finished product. Requiring students to “reflect on their primary reflections is likely to yield deeper levels of reflection with improved learning” (Moon, 2004, p. 156).

Teaching Experiential Learning to Teachers

Not surprisingly, the most effective method of training instructors to use experiential learning in the classroom is to train them experientially. Warren presents a model for teaching experiential education that is project-based and student-directed.

In Warren’s model, the experiential component of a course in experiential education theory is “the students’ active creation of the class itself. Students determine the syllabus, prioritize topic areas, regulate class members’ commitment, facilitate actual class sessions, undertake individual or group-inspired projects, and engage in ongoing evaluation” (Warren, 1995, p. 250).

In this model, the students (and future experiential educators) are given the opportunity to facilitate every aspect of the class, providing them the necessary skills to run their own experiential classrooms. The model includes:

  1. Group work, providing students with direct experience of group dynamics and the management of group work.
  2. Group, class, and/or individual projects that “support an in-depth look at a particular aspect of experiential education theory.” The class can decide whether the project will be collective or not, adding another opportunity for group decision-making.
  3. Constant reassessment as the class learns from its experience. The students rework the components of the class, refining the syllabus and resetting the ground rules. “Collectively, [the class] determined what, specifically, being prepared for the class meant, agreed they wanted to start and end class on time, and verbally announced to their peers what their level of commitment was.”
  4. A student co-teacher from a previous class assists with the course. “Having participated in the struggles of self-direction firsthand in the previous year, the student co-teacher brings an invaluable voice of experience to the new group.” This student co-teacher brings perspective and credibility, and is “yet another way to redistribute power” from instructor to learner.
  5. Evaluation in this model takes three forms.
    1. Facilitation feedback where students are “critiqued on how they ran a particular class… It allows class members immediate access to ideas on how to structure future teaching attempts.”
    2. Mid-course assessment helps keep learning on track, even when a mid-semester slump or class conflicts may have brought about feelings of disengagement or lethargy. This assessment is directed by the instructor and is meant to “gauge satisfaction and frustrations with the class… Because we do the repair work at mid-semester instead of waiting until the end, students feel as if they have the power to change their immediate educational experience.”
    3. Peer evaluation in which class members reflect on the growth and learning of their peers and write constructive evaluations of their classmates (Warren, 1995, p. 256).

Teaching Reflection

Since reflection is such a crucial component of a successful experiential learning process, it is imperative that students understand exactly what reflection is and how to use the process to deepen their learning. To do so, Moon has articulated a two-stage process for training students in reflection. The first stage is called “presenting reflection.” In this stage, students are provided with examples of reflective writing, and are led through a discussion and some small exercises that get them accustomed to the concept and methodology of reflection. The second stage works to deepen the students’ understanding of reflection, moving from basic to more complex forms (Moon, 2004, p. 134). Here is a skeleton of this model, as laid out by Moon:

Stage 1: Presenting reflection

  1. “Discuss how reflective writing differs from more familiar forms of writing
  2. Consider the issues around the use of the first person
  3. Give examples
  4. Generate discussion of learners’ conception of reflection
  5. Enable practice and opportunities for feedback
  6. Give a starting exercise that does away with the blank page
  7. Support the further development of reflective writing with exercises/activities
  8. Set up situations in which learners can share their ideas
  9. Be prepared to support some learners more than others
  10. Be open about your need to learn about this form of learning and how to manage it
  11. Consider what reflection, reflective writing, reflective learning are
  12. Consider why reflection is being used to facilitate the current area of learning”

Stage 2: Facilitating deeper reflection

  1. “Introduce a framework that describes levels of reflection. Use example to demonstrate deeper reflection activity
  2. Introduce an exercise that involves ‘standing back from oneself’
  3. Introduce exercises that involve reflection on the same subject matter from different viewpoints (people, social institutions, etc.)
  4. Introduce exercises that involve reflection on the same subject matter from the viewpoints of different disciplines
  5. Introduce exercises that involve reflection that is obviously influenced by emotional reaction
  6. Introduce methods of deepening reflection by working with others (eg critical friends, collaborative activities)
  7. Use second-order reflection” (Moon, 2004, p. 143).