III. Assessment Approach
Key to choosing an assessment approach is first asking: What is the
motivation for collecting the data? As discussed earlier, there are a
number of reasons and ways one might assess a UFE, including determining
whether students are meeting specific learning goals, collecting
publishable data on students’ sustained interest in a topic, or
identifying whether the UFE is meeting programmatic goals in order to
report back to a funding agency or university. Regardless of
stakeholders’ motivations, using backward design to clarify and align
program goals, activities, and assessments provides a solid platform for
improvement and evaluation.
We recommend that practitioners consider both formative and summative
assessments. A formative assessment might be a UFE student completing a
written reflection or keeping a “reflective diary” (Maskall and
Stokes 2008; Scott et al. 2019) about an aspect of their learning
experience. This strategy would give students a chance to reflect, in
their own words, on their learning process and their changing
experiences and competencies. Further, such a formative assessment would
help instructors and stakeholders better understand how programming, or
a particular aspect of it, affects student perceptions, and suggest how
the learning experience might be adjusted. A summative assessment
strategy could be employed if practitioners want to know whether
students gained a greater appreciation for the natural world as a result
of a UFE; this could be measured, for example, with a pre/post survey
designed to assess this specific construct (e.g., Table 1, Primary Aim:
Connection to Place; Assessment Tool: Place Attachment Inventory (PAI);
Williams and Vaske 2003). Figure 1 is meant
to be useful in planning assessment strategies but could also serve as a
helpful communication tool when engaging with funders and stakeholders.
It may also be appropriate to hire an external evaluator. An advantage
of external evaluation is that it provides a presumably unbiased view of
the program: the evaluator assesses the impacts of programming on
participants and reports findings objectively. From the
evaluator’s perspective, is the program meeting its intended goals? For
whom does the UFE appear to be “working,” and are there student groups
that are not being affected in the way the designers of the experience
intended? An external evaluator will often work with the
team to identify goals, and then conduct a holistic programmatic
evaluation that includes all stakeholders. The caveat regarding external
evaluation is cost. If the UFE is grant-funded, external evaluation may
be encouraged or even required; if not, funds would need to be secured
to hire an evaluator or evaluation team.