It’s the end of the camp season, which means you are likely looking ahead at a mountain of loose ends while at the same time looking back and seeing a long list of things you wish you had done, or wish you had done differently. This is part of the beauty of camp, that we work in cycles that have clear starting points and ending points (or at least pausing points) and a lot that goes on in between. Think back to the start of summer—are there things you know now that you wish you knew then?

Campers are much the same way. They arrive at camp not fully aware of what lies ahead, of what they know, and of what they do not know. You might ask a kid how much they know about archery before camp and, because they’ve done it once or twice before, they say, “Sure! I know a lot about archery!” That is, until they spend two weeks in archery. Then they get to the end of camp and think, “Now I really know something about archery.” This is a simple fact of human nature: we don’t know what we don’t know.

This presents an interesting dilemma when we try to evaluate how much a camper learns at camp, or, more specifically, what they learn as a result of their time at camp. The most traditional way to do this is through a pre-camp and a post-camp evaluation: ask a kid at the start of camp how much they know about archery, ask again at the end of camp, and compare the results; simple as that.

But traditional pre/post tests are far from simple. First, they are twice the work to design, administer, and analyze, and if no one likes taking one survey, no one likes taking two, especially at camp, where surveys feel far too much like school. The bigger reason pre- and post-camp surveys are challenging, though, is that they do not account for what I’ve described above, the “we don’t know what we don’t know” phenomenon, something researchers call response shift bias. A camper’s scores on a post-camp evaluation might actually be lower than their pre-camp scores if they learned during their time at camp how much they didn’t know about archery, or friendship skills, or independence.

Enter the retrospective pre-test, a term we use to describe a pre- and post-test baked into one. There are lots of different ways to do this, all with the goal of asking a person about their status on a given topic (“my archery skills right now”) and the extent to which that status changed because of their time at camp. You can get at this change either by asking a person to rate their level of that skill before camp (“my archery skills before camp”) or by asking them how much their skills changed because of their time at camp (“my archery skills changed a lot”).

Example #1: Based on the example provided by Lang and Savageau (2017)

Example #2: Based on the ACA’s Youth Outcomes Battery, Detailed Version

You can read more about the nuances and science of retrospective pre-tests here, but for the sake of this blog, I’ll focus on some tips for using this approach and making sense of the data.

  • This format is best for older kids (12+). It takes some mental gymnastics to think about the present day and the past, which means that this can be complicated for younger campers or campers with cognitive disabilities.
  • Even older kids will likely need some help, so spend time with your staff practicing the instructions and how they will support campers without leading their answers. Do not make the survey too long, and leave plenty of time and quiet space for campers to do their best thinking.
  • Researchers typically use statistical tests such as t-tests to compare the average scores of campers’ “before camp” and “after camp” responses (a short sketch of this comparison follows this list). Stats nerds: go for it! But for the rest of us, a simple eyeball test of the averages of these two sets of responses will usually give us the information we need.
  • Analyzing data for a retrospective pre-test like Example #2 is a bit trickier because the two sets of questions differ, so you cannot chart them side by side (as you can with an approach like Example #1). Instead, you get two sets of numbers that you can use to say something like “65 percent of campers feel they have a basic knowledge of archery after camp, and 78 percent of those campers felt their knowledge increased a lot because of camp” (the second sketch after this list shows this kind of tally). This is a really powerful way to attribute changes in knowledge, skills, attitudes, or behavior to the camp experience, which is great information for marketing and for grant funding.
  • Retrospective pre-tests, and all surveys, for that matter, are not as easy to design as you might think. There is quite a bit of science behind a good survey, and without that science, you are likely to get bad or useless information. So, leave the science to the scientists—try to use a retrospective pre-test that has already been used and tested elsewhere.
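
If you want to see what the “compare the averages” step looks like in practice, here is a minimal sketch in Python for an Example #1-style survey. The camper responses, the 1–5 scale, and the variable names are all hypothetical, and the scipy import is only needed if you want the formal t-test:

```python
# A sketch of comparing "before camp" and "after camp" averages.
# The responses below are hypothetical, scored on a 1-5 scale,
# with both answers coming from the same campers on one survey.
from statistics import mean
from scipy import stats  # third-party; only needed for the t-test

before_camp = [2, 3, 2, 4, 3, 2, 3, 2]  # "my archery skills before camp"
after_camp  = [4, 4, 3, 5, 4, 3, 4, 4]  # "my archery skills right now"

# The eyeball test: compare the two averages.
print(f"Before camp average: {mean(before_camp):.2f}")
print(f"After camp average:  {mean(after_camp):.2f}")

# For the stats nerds: a paired t-test, because each camper
# answered both questions.
t_stat, p_value = stats.ttest_rel(after_camp, before_camp)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```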
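
And here is a similar sketch for an Example #2-style survey, where you tally the two sets of numbers separately. Again, the campers, question wording, and response coding are assumptions for illustration, not the actual YOB items:

```python
# A sketch of summarizing Example #2-style data. Each record is one
# hypothetical camper's answers to the two question sets: their
# knowledge now, and how much camp changed it.
campers = [
    {"knows_basics": True,  "change": "increased a lot"},
    {"knows_basics": True,  "change": "increased a little"},
    {"knows_basics": True,  "change": "increased a lot"},
    {"knows_basics": False, "change": "did not change"},
    {"knows_basics": True,  "change": "increased a lot"},
]

total = len(campers)
knowers = [c for c in campers if c["knows_basics"]]
increased = [c for c in knowers if c["change"] == "increased a lot"]

# Two numbers for a sentence like the one in the bullet above.
print(f"{100 * len(knowers) / total:.0f}% report basic knowledge after camp")
print(f"{100 * len(increased) / len(knowers):.0f}% of those say it "
      "increased a lot because of camp")
```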

You may have noticed that Example #2 was based on ACA’s Youth Outcomes Battery, Detailed Version. This is not by accident. This is a tool that was built and tested specifically for camps. The science behind it is strong, which is exactly what funders require when you report program evaluation results in grant applications.

The best part? The YOB Detailed Version is ready for you to use right now. So, if you still have campers on site, you can play around with one of these surveys (or more, though I would recommend starting small) right away. If not, consider adding the YOB Detailed Version to your evaluation toolbox for whatever your next camp cycle might be.

Happy evaluating!

Photo courtesy of Rolling River Day Camp in East Rockaway, New York

Laurie Browne, PhD, is the director of research at ACA. She specializes in ACA's Youth Outcomes Battery and supporting camps in their research and evaluation efforts. Prior to joining ACA, Laurie was an assistant professor in the Department of Recreation, Hospitality, and Parks Management at California State University-Chico. Laurie received her PhD from the University of Utah, where she studied youth development and research methods.

Thanks to our research partner, Redwoods.

Additional thanks go to our research supporter, Chaco.