Data Collection and Analysis
Deciding what type of data to collect requires a reasonable idea of the program’s goals and anticipated outcomes, as well as an awareness of the time it will take to collect and then analyze that data. Practitioners may consider quantitative measures such as surveys, or qualitative methods such as interviews or open-ended questions. A mixed methods approach can employ both qualitative and quantitative methodology, allowing for a more nuanced understanding (Creswell and Clark 2007). Identifying whether the intention is to publish the data (requiring IRB review) or to use it internally to better understand an aspect of programming should play a key role in determining the approach.
Using best practices in research will help avoid conflicts of interest and better ensure that valid and reliable data are collected (Ryan et al. 2009). If, for example, a program recruits students for interviews after they participate in a UFE, someone outside of the UFE leadership or instructional team should conduct the interviews. This practice helps minimize the power differential between participant and researcher, so that UFE interview participants feel they can be honest about their experiences without worrying about pleasing or offending those involved in the program (Kvale and Brinkmann 2009). Further, the interview questions should be vetted with individuals similar to the target audience before the interviews begin, to ensure that participants interpret the questions as intended.
As one makes choices, it is key to use sound research methodology in planning data collection and analysis, as this will allow for appropriate interpretation of the results (Clift and Brady 2005). For instance, if one does not have the resources or time to analyze the collected data, or to hire researchers to do so, then conducting semi-structured interviews with numerous students and staff would not be advisable, as analyzing interviews can be highly time consuming and requires specific coding expertise. As illustrated in the vignettes (Fig. 2D), deeply understanding the lived experiences of participants may call for qualitative methods and analysis. Qualitative research typically involves iterative, rigorous coding protocols. Coding may be done
using either deductive or inductive methods, or a combination of
approaches (Saldaña 2015). Analysis often includes multiple trained
researchers iteratively developing and revising codebooks and then
applying those codes to the transcribed text, as well as regularly
checking for coding reliability among researchers (Saldaña 2011, Belotto
2018, O’Connor and Joffe 2020).
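To make the reliability check concrete, the short sketch below computes Cohen’s kappa, a chance-corrected agreement statistic, for two coders who have applied the same codebook to a set of interview excerpts. The coder labels and codes are hypothetical, and kappa is only one of several agreement measures a team might report (percent agreement and Krippendorff’s alpha are common alternatives).

```python
# A minimal sketch of an inter-rater agreement check (Cohen's kappa), using
# only the Python standard library. The codes and excerpts are hypothetical.
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders on the same excerpts."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied independently by two researchers to ten excerpts
coder_1 = ["belonging", "skills", "skills", "identity", "belonging",
           "skills", "identity", "belonging", "skills", "identity"]
coder_2 = ["belonging", "skills", "identity", "identity", "belonging",
           "skills", "identity", "skills", "skills", "identity"]

print(f"Cohen's kappa: {cohen_kappa(coder_1, coder_2):.2f}")  # ~0.70 here
```

In practice, teams typically compute agreement on a subset of transcripts, discuss and resolve discrepancies, refine the codebook, and repeat until agreement is acceptable.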
As with qualitative data, quantitative data collection and analysis requires planning and expertise. Researchers will want to ensure that the research aims are well aligned with the data collection methods or tools and, in turn, allow for appropriate interpretation of the data. Comparing pre-post survey responses would be one seemingly straightforward way to measure change over time in participant learning (e.g., Fig. 2C). Yet we caution against simply pulling a tool from Table 1 and assuming that by using it, it ‘worked’. We
recommend collaborating with experts who are familiar with reliability
and validity testing. Using a survey tool may yield quickly quantifiable
results, but if the survey has not been vetted with individuals similar to the population of study, or has not previously been shown to collect valid data in very similar populations, one cannot assume that the data collected are valid or reliable (Fink and Litwin 1995, Barbera and
VandenPlas 2011). Just as we do not use micropipettes to measure large
volumes of lake water, we would not use a tool developed for measuring
academic motivation in suburban elementary school students to measure
motivation of college students participating in a residential UFE and
expect to trust the survey results outright. If a tool seems appropriate
for a given UFE and the student population, we encourage first testing the tool in that population and working to interpret the results using best
practices (for a comprehensive resource on these practices, see American
Educational Research Association (AERA) 2014).
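As an illustration of how such a pre-post comparison might be analyzed once a suitable, vetted instrument is in hand, the sketch below applies a paired t-test and a paired-samples effect size to hypothetical summed survey scores; the data are invented and scipy is assumed to be available. These statistics describe change over time but do not, on their own, establish that the instrument measured the intended construct.

```python
# A minimal sketch of a pre/post comparison, assuming matched responses from
# the same participants on a summed survey score; the scores are invented
# for illustration and scipy/numpy are assumed to be available.
import numpy as np
from scipy import stats

pre = np.array([12, 15, 14, 10, 18, 13, 16, 11, 14, 15])
post = np.array([15, 17, 15, 14, 20, 15, 18, 13, 16, 17])

diff = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test on matched scores
d = diff.mean() / diff.std(ddof=1)             # paired-samples effect size

print(f"mean gain = {diff.mean():.1f}, t = {t_stat:.2f}, "
      f"p = {p_value:.4f}, d = {d:.2f}")
# If normality of the differences is doubtful or the sample is small,
# stats.wilcoxon(post, pre) offers a nonparametric alternative.
```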
It is also possible that one would want to measure an outcome for which
a tool has not yet been developed. In this case, it may be appropriate to develop an assessment strategy that is refined through iterative adaptation and lessons learned (Adams and Wieman 2011). There are many steps
involved with designing and testing a new assessment tool that is
capable of collecting valid and reliable data. Therefore, if
stakeholders deem it necessary to create a new tool to measure a
particular outcome, or to develop or modify theory based on a UFE, we
recommend working with psychometricians or education researchers.
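For teams that do undertake instrument development with such collaborators, one small, early step in pilot testing is an internal-consistency check such as Cronbach’s alpha. The sketch below illustrates the calculation on hypothetical pilot responses to a five-item scale; the item scores are invented and numpy is assumed to be available. Internal consistency is only one piece of the reliability and validity evidence described above.

```python
# A minimal sketch of an internal-consistency check (Cronbach's alpha) on
# pilot responses to a hypothetical five-item Likert scale.
import numpy as np

# rows = pilot participants, columns = items (1-5 Likert responses)
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 3, 4, 4],
    [3, 2, 3, 2, 3],
])

k = responses.shape[1]                          # number of items
item_vars = responses.var(axis=0, ddof=1)       # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```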