Student Contextual and Moderating Factors
As with any learning environment, it is critical for instructors and
staff to have a clear picture of who the participating students are and
to anticipate what information may be pertinent to their experiences as
practitioners plan to understand the outcomes of a UFE (Pender et al.
2010, Fakayode et al. 2014, Ireland et al. 2018, Stokes et al. 2019). In
this way, student factors may influence the selection of appropriate
assessment approaches and tools. A number of factors can be considered
when designing an assessment and interpreting its outcomes; here we
provide several examples for consideration.
One such factor is incoming student knowledge and
skills. Imagine two UFEs: in the first UFE, students are upper-division
physiology majors studying endemic amphibians’ responses to changes in
stream water quality; the second UFE is designed for non-science majors
to broadly survey the biodiversity of local flora and fauna. If a
practitioner decides they want to identify if/how students’ attitudes
change regarding the local environment as a result of the UFEs they
might select a survey designed to collect data on environmental
attitudes (e.g., Table 1, Primary Aim: Connection to Place; Assessment
Tool: Environmental Attitudes Inventory (EAI), Milfont and Duckitt
2010). The physiology students from the first example may begin the UFE
with largely positive environmental attitudes already. Thus,
administering a survey at the beginning and end of the UFE (pre-post) to
measure this construct may not reveal any gains. Yet, in the second UFE
example, the students are introductory, non-science majors, and they may
demonstrate significant, quantifiable gains in environmental attitudes.
Therefore, in the physiology student example, this specific outcome may
not be detectable due to a measurement limitation called the ceiling
effect. This effect can occur when a large proportion of subjects begin
a study with very high scores on the measured variable(s), such that
participation in an educational experience yields no significant gains
among these learners (Austin and Brunner 2003, Judson 2012). In this
case, instead of the survey, the practitioner might learn more by
crafting an essay assignment that probes the physiology students’
environmental values. This would be a more appropriate option and
demonstrates consideration of the student population in the assessment
strategy.
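To make the ceiling effect concrete, the brief sketch below simulates pre-post scores on a bounded 1-5 attitude scale. It is a hypothetical illustration only: the sample size, score distributions, and assumed underlying gain are invented for demonstration and are not drawn from the cited studies or from the EAI instrument.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30                       # hypothetical number of students per UFE
scale_min, scale_max = 1, 5  # bounded Likert-style response scale
true_gain = 0.6              # assumed underlying attitude improvement

def simulate(pre_mean):
    # Draw pre-scores, apply the same underlying gain, then clip to the scale bounds
    pre = np.clip(rng.normal(pre_mean, 0.5, n), scale_min, scale_max)
    post = np.clip(pre + true_gain + rng.normal(0, 0.3, n), scale_min, scale_max)
    return pre.mean(), post.mean()

# Group starting near the scale maximum (analogous to the physiology majors)
pre_hi, post_hi = simulate(4.7)
# Group starting near the scale midpoint (analogous to the non-science majors)
pre_mid, post_mid = simulate(3.0)

print(f"gain near ceiling:  {post_hi - pre_hi:.2f}")    # compressed by the ceiling
print(f"gain near midpoint: {post_mid - pre_mid:.2f}")  # close to the true gain
```

Under these assumptions, both groups experience the same underlying change, but the group that begins near the top of the scale shows a much smaller measured gain, which is the pattern a pre-post survey would return for the physiology majors.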
Other factors to consider include student motivation and the alignment
of student expectations. An assessment of students in a pair of geoscience UFEs in
New Zealand showed that study abroad students were more intrinsically
motivated, pro-environmental, and had a stronger sense of place than
local students in a similar field experience, held in the same place
(Jolley et al. 2018a). This assessment highlighted the need to adapt the
design of the study abroad field experience to be more applied,
environmentally focused, and place-based, rather than simply applying
local curricula unchanged to a different student context (Jolley et al.
2018a). Here, future assessments could be targeted towards investigating
whether the revised UFE design for study abroad students effectively
captured their motivation and interest. Additionally or alternatively, a deeper qualitative
investigation could be conducted to characterize their field experiences
in relation to the environmental and place-based content.
Prior experiences and identity considerations are also critical (Scott
et al. 2019, Morales et al. 2020). Have the students experienced
fieldwork already? Practitioners might want to know what proportion of
the students are first-generation college students, or if students have
prior conceptions of fieldwork. Such knowledge could guide an assessment
approach aimed at understanding how first-generation students experience
the UFE compared to continuing-generation students; or, in the latter
case, whether students hold accurate or inaccurate conceptions (or any
conception at all) about fieldwork.
Also important is awareness of safety and well-being, especially for
students with identities such as BIPOC (Black, Indigenous, and People of
Color) students and LGBTQ+ students (John and Khan 2018, Anadu et al.
2020, Giles et al. 2020, Marín-Spiotta et al. 2020, Demery and Pipkin
2021). These considerations can influence the implementation of an
assessment strategy, as participants will experience different levels of
comfort and risk based on the questions being asked. Students may be
less comfortable sharing if they already have concerns about safety in
the field environment and culture of UFEs. Even on an anonymous survey,
students may be worried about being personally identifiable if they are
one of few students of a particular identity or combination of
identities. Ensure that students are provided complete
information about what will be done with their data, have the
opportunity to ask questions, and are free from coercion. In some cases,
this may mean having someone who is not the course instructor conduct
the assessment. Although questions like these would be addressed if the
study requires approval through an Institutional Review Board (IRB) or
similar body, we encourage their consideration regardless, as they have
a bearing on student comfort and safety.
There are also artifacts of student factors to consider, such as the
intentional or unintentional recruitment and selection processes of the
program (e.g., Zavaleta et al. 2020). Are all students in a given class
participating, is the UFE only for those who sign up, or are students
chosen to participate based on certain criteria? It is important to keep
in mind that any outcomes from a given UFE are only representative of
the students who actually participated, and thus not broadly
representative of any student who might participate. In summary, one
must consider: Are the UFE outcomes reasonable to achieve and measure
given the specific student population? Student factors must be
considered in UFE design and will likely moderate or even become the
subject of assessment efforts.
Example Vignettes. In the vignettes, we identify various
factors that may inform the design of UFEs and their programs. The
background and future goals of students may inform program design and
the related assessment strategy. For example, some programs specifically
engage students with a background or interest in STEM (e.g., Fig. 2A,
2B), while others are open to all majors (e.g., Fig. 2C). The vignettes
provide diverse examples in which the assessment approaches are designed
to align with the student population.