Suggested Strategy for Assessing UFEs
We advocate that stakeholders work to understand and evaluate their UFEs
or UFE programs in clear alignment with the unique goals of each
individual field experience. Reflecting best practices in designing
learning environments that support student gains, we draw from the
process described as ‘backwards design’ (Wiggins et al. 1998).
Importantly, this method emphasizes the alignment of UFE design to the
outcomes being measured. We build a ‘how to’ strategy designed for
guidance on assessing course-based undergraduate research experiences
(CUREs) presented by Shortlidge and Brownell (2016) and have expanded
and tailored the model to be specific to UFEs. Figure 1 is to be used
both as a guide and as a mechanism for reflection, allowing
practitioners to refine a UFE to better serve the students, meet the
intended outcomes, and/or change and build upon data collection methods
already in place.
To avoid potential misunderstandings, we clarify the language that we use
regarding assessment, evaluation, and research. We aim to provide a guide
that is inclusive of those who intend to assess, evaluate, and/or conduct
education research on UFEs, and we therefore describe how these are
separate but interrelated, and likely overlapping, activities.
We use the word assessment to refer to measuring student learning
outcomes from UFEs. An assessment could be
formative or summative. The goal of formative assessment is to ‘educate
and improve student performance’ (Wiggins 1998). Here students and
instructors can use the information gained as feedback for improvement.
A summative assessment is often cumulative, capturing what a student has
learned or how they have changed over the course of the entire experience.
Assessment tools refer to the instruments that are used to collect the
outcome data (e.g. a survey, rubric, or essay). Assessments can use
qualitative (e.g. interviews), quantitative (e.g. surveys), or mixed
methods approaches (Creswell 2013).
To evaluate something is to determine its merit, value, or significance
(Patton 2008), and program evaluation has been described as “the
systematic assessment of the operation and/or outcomes of a program or
policy, compared to a set of explicit or implicit standards as a means
of contributing to the improvement of the program or policy” (Shackman
2008). A programmatic evaluation might aim to holistically understand
the experiences of some or all stakeholders in a UFE; the evaluation
could include students, instructors, program directors, community
partners, and others. The evaluation would determine appropriate
assessment methodology and identify whether goals are being met. Such
information can be used to improve the UFE. Evaluation is often
conducted by an external evaluator who may work with the UFE leadership
team to develop a plan, often through the creation and use of a
site-specific logic model (Taylor-Powell and Henert 2008). An evaluation
can target a range of UFEs, from a single disciplinary program to an
entire field station’s season of hosted UFEs.
Empirical evidence about a UFE, which can be gathered through assessment
and evaluation and adds new knowledge, could potentially be used for
education research. Towne and Shavelson state that “…education research serves
two related purposes: to add to fundamental understanding of
education-related phenomena and events, and to inform practical decision
making… both require researchers to have a keen understanding of
educational practice and policy, and both can ultimately lead to
improvements in practice” (Towne and Shavelson 2002, p. 83). Further,
if the aim is to publish research outcomes from a UFE, practitioners
will likely need to submit a proposal to an Institutional Review Board
(IRB). The IRB can then determine whether the research qualifies for a
human subjects research exemption or requires expedited review. If an IRB
protocol is needed, approval should be obtained before data collection
(intended for publication) begins. Gaining IRB approval is contingent on
researchers being certified in human subjects research and on having a
robust and detailed research plan that follows human subjects research
guidelines. Thus,
conducting education research on UFEs requires advance planning, and
ideally would be conducted in partnership with education researchers.
Participants in the UFEs will also need to consent to their information
being used for research purposes.
Although publishing outcomes may be desirable, not all data will
necessarily be collected in a way that yields publishable results.
Designing effective formative assessments to understand and modify a UFE
might be the most appropriate workflow before engaging in intentional
research studies on the outcomes of a UFE. Importantly, we do not
advocate that any one approach is better or more appropriate than
another; the choice should depend on the aims and intentions of the
stakeholders and the resources available.