Greetings Camp Evaluators! I’m glad you are here — the topic for today’s blog (and the one following this one) is likely something you have thought about extensively or will soon be thinking about as campers start arriving in the coming weeks. We are talking about enrollment, a camp professional’s most valuable statistic.

Enrollment represents the number of campers we serve, but, as straightforward as that sounds, it's not. Are we talking about new or returning campers? Youth campers or adult campers? Campers who attend part of the day, the whole day, or as part of a rental group? Clearly there are numerous ways to measure enrollment, and no single measure gives a complete answer to the question: Are we serving the number of campers we should be serving?

But first, a word on where enrollment sits in the overall universe of information you might gather to address this question. Consider the image below, which represents a very basic hierarchy of data available to most camp professionals.

At the foundation we see inputs, or the stuff necessary to make a program run: staff, equipment, marketing, and policies, to name just a few. We placed inputs at the base of our hierarchy because they are relatively easy and important to measure; we routinely track our budgets, monitor staff, and evaluate our policies, so most camp folks have plenty of data available to assess the quantity and quality of their inputs. But inputs alone give us only one small window into the overall functioning of our program. How do we know whether these ingredients are translating into something meaningful for campers?

To see if our inputs are working, we have to move up a level. Here we find outputs, a term used generally to refer to the things we can count at the end of the program (What activities took place? Who attended?). The most obvious measures of enrollment reside here: total number of campers served; return rate; campers by age, gender, or race/ethnicity; and enrollment across programs or sessions. We will review several common metrics for these in the second part of this blog, but, as we will see, attendance alone does not tell us the whole story.
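If your registration software can export a roster, even a small script can pull together counts like these. Below is a minimal sketch in Python; the record layout, the field names, and the particular definition of return rate (the share of this year's campers who have attended before) are all illustrative assumptions, not the output of any specific registration system.

```python
# Minimal sketch: basic enrollment outputs from a hypothetical roster
# export. All field names and sample values are illustrative assumptions.
from collections import Counter

# Each record: (camper_id, session, first_year_at_camp)
roster = [
    ("C001", "Session A", 2023),  # returning camper
    ("C002", "Session A", 2025),  # new camper this year
    ("C003", "Session B", 2024),  # returning camper
    ("C004", "Session B", 2025),  # new camper this year
]

CURRENT_YEAR = 2025

# Count each camper once, even if they registered for multiple sessions.
first_years = {camper_id: first_year for camper_id, _, first_year in roster}
total_served = len(first_years)

# One possible definition of return rate: the share of this year's
# campers who first attended in an earlier year.
returning = sum(1 for year in first_years.values() if year < CURRENT_YEAR)
return_rate = returning / total_served

# Enrollment broken out by session (counted per registration).
by_session = Counter(session for _, session, _ in roster)

print(f"Total campers served: {total_served}")
print(f"Return rate: {return_rate:.0%}")
print(f"Enrollment by session: {dict(by_session)}")
```

Even in a toy example like this, notice how many decisions hide inside a single number like "return rate" (Returning since when? Counted per camper or per registration?), which is exactly why no one statistic tells the whole story.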

Measures of outcomes and impact will round out that picture; unfortunately, both are more difficult to measure than outputs, which is why they sit at or near the top of the evaluation hierarchy: difficult to measure, but a gold mine if we want to tell the whole enrollment story. Outcomes refer to the skills, attitudes, or behaviors campers gain as a result of the program (presumably positive), and impact refers to the long-term results of those new and beneficial skills, attitudes, or behaviors; academic, community, and health-related impacts are all good examples.

All this is to say that there are many ways to measure enrollment, and they vary in terms of ease, availability, and depth. In the second part of this blog we will explore some specific strategies for measuring camper enrollment, but first we need to tackle the evaluation item that always wins the popularity contest yet often fails to give us what we really need.

What I’m talking about here is satisfaction. Most camps measure satisfaction in some way, typically by asking parents (Were you satisfied with our registration process? Staff? Check-in/check-out?) or campers (Did you like our activities? Counselors? Food? Field trips?) in some form of online or paper survey. Satisfaction is a good thing to evaluate because it is assumed that if a parent or kid likes camp, then they will come back for more (and tell their friends!). Yes, satisfaction is a decent enough indicator of future behavior, but it is not without some big limitations.

First: What is satisfaction? And whose satisfaction are we measuring? When you think about it, satisfaction is the over-filled water balloon of evaluation: hard to determine its shape and capacity, and even harder to catch. In other words, a person's standards for satisfaction vary radically, so when we measure satisfaction, we are measuring it against a highly individual and shape-shifting baseline, and, for some, a baseline that might not even exist (How would I know what a good camp experience looks like?).

Second: When we ask a person if they are satisfied with something, we imply that we are prepared and willing to do something to increase their satisfaction should it not be very high. I once read a camp survey that asked campers whether they liked their counselor. Seems to make sense, but are you prepared to hire or fire a staff person based on an eight-year-old's dislike for that person? Are you willing to serve sushi for lunch if a parent is dissatisfied that you did not? The point here is that survey questions send powerful implicit messages about what we can and are willing to do to keep campers and their families satisfied, so we should not ask questions about things we cannot realistically address.

In the hierarchy of evaluation data, satisfaction doesn't fit squarely anywhere in the model. You can argue that satisfaction is a result of the program, and therefore an output; but so long as you are asking parents or campers about their satisfaction with particular aspects of your program, satisfaction is really a measure of inputs rather than outputs or outcomes.

Which brings us full circle, fully prepared to tackle the question: How do we measure enrollment? We now know that no single measure, especially one that simply counts campers, gives us a complete picture of enrollment. We need information from all levels of the evaluation hierarchy to tell us who attended our program and what they gained from their attendance. In the next blog in this series we will discuss some specific metrics for measuring enrollment. Spoiler alert: I will recommend you use multiple measures so that you have not only a statistic but a story.

Photo courtesy of Camp Woodhaven in West Boylston, Massachusetts

Laurie Browne, PhD, is the director of research at ACA. She specializes in ACA's Youth Outcomes Battery and in supporting camps in their research and evaluation efforts. Prior to joining ACA, Laurie was an assistant professor in the Department of Recreation, Hospitality, and Parks Management at California State University-Chico. Laurie received her PhD from the University of Utah, where she studied youth development and research methods.

Thanks to our research partner, Redwoods.


Additional thanks go to our research supporter, Chaco.