What Does Camp Do for Kids? Chapter 1 - Introduction

The study investigated the influence of the organized camping experience on the developmental needs of youth, as reflected through change in constructs of self. A random effects model of meta-analysis was used, with The Handbook of Research Synthesis (Cooper & Hedges, 1994) serving as the guiding work for the methodology of the analysis. This chapter examines the steps used to perform the meta-analysis: the selection, review, and summary of applicable studies; the research design; the use of meta-analysis as a research instrument; the method by which the identified data were analyzed; and a summary.

Selection of Studies
The selection of studies was based on a search of primary and informal channels (Cooper & Hedges, 1994; Hunt, 1997; Rosenthal, 1984). Primary channels included database searches, bibliographic review, calls for studies, and retrieval of private research reports. Informal channels included collaboration with other camping researchers, personal interviews, and contact with professional organizations. The intent was to locate as many studies as could reasonably be expected from an exhaustive search (Hedges & Olkin, 1985; Hunt, 1997; Rosenthal, 1984; Wachter & Straf, 1990). The basic locating criterion was to include studies that could be considered to reflect, in some way, the influence of the organized camping experience on the self constructs of youth. Once a list was generated, each study was evaluated for relevance to the research question (Cooper & Hedges, 1994; Hunter, Schmidt & Jackson, 1982; Light & Pillemer, 1984; Wolfe, 1986). Refer to the section on Data Analysis in Chapter 4 for more discussion of the criteria for a study's inclusion in the final analysis.

In general, every attempt was made to locate and include all studies of scientific merit, published or unpublished. Studies estimated to contribute only marginally to the overall analysis, given the time and effort required to locate and secure them, were not included (Light & Pillemer, 1984).

The Research Design
This study used the research design of a random effects meta-analysis of primary experimental, quasi-experimental, and pre-experimental studies. The purpose of the meta-analysis was to identify the significance and direction of the relationship between the organized camping experience and the constructs of self in youth, as identified by the referenced population of primary studies on the question (Appendix B). Significance refers to statistical significance, which indicates that a finding differs from zero at some stated level of confidence (McMillan & Schumacher, 1997). In this study, statistical significance is achieved at the 95 percent confidence level, p < .05. At this level there is less than one chance in twenty that an effect size identified as significant would have arisen by chance if the true effect were zero. The magnitude of the effect is another matter and is interpreted as part of the research findings.
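
To make the significance criterion concrete, the sketch below shows how an effect size estimate can be tested against zero with a two-tailed z-test. It is offered for illustration only, not as the study's own code, and the numerical values are hypothetical.

```python
import math

def effect_significance(g, se_g):
    """Two-tailed z-test that an effect size differs from zero.

    g    : effect size estimate (e.g., Hedges' g)
    se_g : its standard error
    Returns (z, p); the effect is significant at the 95 percent level when p < .05.
    """
    z = g / se_g
    # Two-tailed p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical values, for illustration only
z, p = effect_significance(g=0.31, se_g=0.12)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant if p < .05
```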

Rigor, established by using the Research Checklist in Appendix C and by following the Meta-Analysis Flow Chart in Appendix D, was employed with the aim of controlling for bias in this study's results. The guiding statistical handbook used to address the technical questions of the meta-analysis was The Handbook of Research Synthesis (Cooper & Hedges, 1994). In accord with the procedures for meta-analysis, aspects of the various treatment methods, or moderators, and their potential impacts on effect size were recorded as part of the coding process (Cooper & Hedges, 1994; Hines, Hungerford, & Tomera, 1986/87; Rosenthal, 1984; Wolfe, 1986).

In order to assure reliable and generalizable results, the researcher took a conservative approach to the study, defined here as rigorous attention to the methodology (Appendices C and D) in order to control for researcher bias. There were two reasons for this approach. First, controversy surrounds the potential for bias that can result from a narrowly focused meta-analytic methodology (Cooper & Hedges, 1994; Electric..., 1994; Hunt, 1997; Sacks et al., 1987; Wachter & Straf, 1990). Second, the organizations funding this research plan to use the results to begin to articulate outcomes and as a foundation for further research in this area. Bias in the research process could therefore negate the potential value of this meta-analysis.

The components of the conservative approach were the use of a panel of coders to verify coding, a panel of experts to help establish and then confirm coding protocols, and the use of a variety of statistical methods to establish a spectrum of meta-analytic outcomes. The details of these components are discussed later in this chapter.

The meta-analysis included experimental, pre-experimental, and quasi-experimental studies in both an aggregated mean comparison and a combination of effect sizes. The calculations were made using both types of analysis measures: the d-index, for data involving a dichotomous (two-group) comparison, and the r-index, for correlational data (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunt, 1997; Wachter & Straf, 1990). See the later section entitled Data Analysis and Interpretation for discussion of this issue.

Instrumentation
The technique of meta-analysis has evolved to a level that allows it to be used to synthesize findings from a broad population of studies. These studies can include different methodologies and treatments, which are equated through the calculation of an effect size for each study (Cooper & Hedges, 1994; Glass, 1976; Hedges & Olkin, 1985; Hunt, 1997; Wachter & Straf, 1990).

A comparison of the fixed and random effects models is presented in the section on Data Analysis in Chapter 4. The random effects model of meta-analysis has the advantage of generalizability from the sample of studies to the population, and recognizes the impact of variability in the treatment on the effect size (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunt, 1997; Wachter & Straf, 1990). Isolating the moderators that contribute to this variance across treatments provides insight into the explanatory components of the identified random effect (Cooper & Hedges, 1994). The random effects model expanded the scope of eligible studies for the meta-analysis to include pre-experimental and quasi-experimental studies. This expanded scope increased the sample of studies in the meta-analysis, strengthening the argument for generalizability through greater external validity.
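
To illustrate how a random effects combination recognizes between-study variability, the sketch below uses the DerSimonian-Laird method-of-moments estimator of the between-study variance. This is one common estimator consistent with the sources cited above, not necessarily the exact computation used in this study, and the function name and example values are illustrative.

```python
def random_effects_summary(effects, variances):
    """Minimal DerSimonian-Laird random-effects combination (illustrative sketch).

    effects   : per-study effect sizes
    variances : their within-study (sampling) variances
    Returns the random-effects summary estimate and the between-study variance tau^2.
    """
    k = len(effects)
    # Fixed-effect (inverse-variance) quantities needed for the Q statistic
    w = [1.0 / v for v in variances]
    sum_w = sum(w)
    mean_fixed = sum(wi * g for wi, g in zip(w, effects)) / sum_w
    q = sum(wi * (g - mean_fixed) ** 2 for wi, g in zip(w, effects))
    # Method-of-moments estimate of the between-study variance
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights add tau^2 to each study's sampling variance
    w_star = [1.0 / (v + tau2) for v in variances]
    mean_random = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    return mean_random, tau2

# Hypothetical per-study effects and variances, for illustration only
summary, tau2 = random_effects_summary([0.42, 0.10, 0.55, 0.28], [0.05, 0.03, 0.08, 0.04])
```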

According to Cooper and Hedges (1994), the guiding methodological work for this meta-analysis, pre-experimental and quasi-experimental studies conducted without a control group were included through the calculation of effect size based on a pre- and post-treatment comparison. Precedent for this approach was established through the work of Andrews, Guitar, and Howie (1980). The use of the pre-treatment measurement in place of a control group measurement, or the pre-as-control approach to determining an effect size, assumes that no effect on self constructs would have occurred if the treatment had not taken place (Rosenthal, 1984). Thus, in these cases, the effect size is calculated by subtracting the pre-treatment mean from the post-treatment mean and dividing by the pre-treatment standard deviation.
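
A minimal sketch of the pre-as-control calculation just described; the scale scores shown are hypothetical and serve only to illustrate the formula.

```python
def pre_as_control_effect(mean_pre, mean_post, sd_pre):
    """Pre-as-control effect size: the pre-treatment mean stands in for the
    control-group mean, and the pre-treatment standard deviation is the
    standardizing unit (illustrative sketch of the formula described above)."""
    return (mean_post - mean_pre) / sd_pre

# Hypothetical self-construct scale scores, for illustration only
g = pre_as_control_effect(mean_pre=34.2, mean_post=36.8, sd_pre=5.1)
print(round(g, 2))  # 0.51
```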

In the case of the organized summer camp experience, the pre-as-control technique was considered a reasonable treatment of the data, based on the established practice of using pre-as-control measurements to determine effect size (Andrews, Guitar, & Howie, 1980; Cooper & Hedges, 1994). The pre-treatment measurement was assumed to be equivalent to a control group measurement. Restated, all other things being equal, the subject's self was assumed to remain unaffected without exposure to the organized camping experience. Therefore, the pre-treatment measurement was used to represent a control group that did not have exposure to the summer camp experience. The validity of the pre-as-control assumption was tested by determining whether there was any relationship between effect size and research design. No relationship was identified; therefore the assumption was deemed valid.

Quasi-experimental studies generally do not meet research design requirements for either statistical equivalence or the use of a control group, thus differing from pre-experimental and experimental research (Cooper & Hedges, 1994). A comparative effect-size analysis was performed on these quasi-experiments, and the resulting summary was compared and contrasted with the synthesis generated from the experimental and pre-experimental study populations. Again, no relationship was identified, and the assumption was deemed valid.

The nature of meta-analysis is such that bias can occur at any stage of the analysis, with implications for subsequent steps in the procedure. These threats exist both in the application of the meta-analysis methodology and through the primary studies analyzed, including the question of how missing data are treated. Rigorous attention to research method and proper statistical analysis procedures served as the most effective means of controlling for bias (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunt, 1997; Hunter, Schmidt & Jackson, 1982; Light & Pillemer, 1984; Wachter & Straf, 1990).

Reliability and validity were addressed using the following approaches. Reliability in locating and evaluating research was addressed through comprehensive identification and collection techniques (Appendix E), steps taken to enhance inter-coder reliability (Appendix F), and attention to consistency in calculating and recording effect sizes and significance levels (Cooper & Hedges, 1994; Wolfe, 1984). External and construct validity were addressed through evaluation of inter-coder reliability and testing for homogeneity. A significant homogeneity test indicates that the variability among effect sizes is greater than would be expected by chance if the corresponding effect size parameters, or treatments, were identical (Cooper & Hedges, 1994).

Internal validity was addressed by examining each study for "degree of experimenter blindness, randomization, sample size, controls for recording of errors or cheating, type of dependent variable (e.g., self-reporting versus observed), and publication bias" (Green and Hall, 1984, in Wolfe, 1986, p. 49). Publication bias is the propensity of a journal to reject a research article if the main hypothesis of that research was rejected. Experimenter blindness, or researcher blinding, is the blinding of the researcher to the outcomes of individual studies in order to control for the potential bias that would result from selecting only those studies with outcomes that best fit the hypothesis of the meta-analysis. The probability of Type I errors was also extracted as part of the coding process (Cooper & Hedges, 1994; Sacks et al., 1994).

The implications of using meta-analysis (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunt, 1997; Light & Pillemer, 1984; Wachter & Straf, 1990) are that the method provides a rigorous and complete synthesis of the problem being studied, producing a clearer overall picture of the present state of knowledge on the research question. Additional avenues of research were identified through the comprehensive overview inherent in the methodology, with a focus on identifying other outcomes of the organized camping experience.

Scarcity of information needed in the meta-analytic process was also used to identify areas where future research is needed. Additionally, positive influences of a camping experience that were identified as sharing a common treatment methodology were cited for further exploration and replication of methodology.

Data Analysis and Interpretation
Coding of Data
The process of coding is the extraction of data from each study based on a coding sheet that specifies what data to extract and a key that interprets the various aspects of the coding sheet (Appendix G). Coding for all studies was completed by the researcher using protocols for blind coding, as described in the Research Checklist presented in Appendix C (Cooper & Hedges, 1994; Electric..., 1994; Sacks et al., 1987). The coding was then verified by a panel of coders (Appendix H) using the following procedure: the coders were trained and then tasked with coding a statistically significant random sample of the population of primary studies. The coding process was reviewed by a panel of experts (Appendix I) at two phases of the study. The initial review was conducted prior to the primary coding; the second review was conducted after the coding had been validated through the coder panel verification process just described. The use of both a panel of coders and an expert review panel represents the conservative approach to assuring reliability of the results discussed earlier.

The panel of coders was retained to code a sample of studies in order to statistically verify the primary coding and to establish a measure of inter-coder reliability (Cooper & Hedges, 1994; Electric..., 1987; Sacks et al., 1994). The panel of coders was trained and then coded the data based on the developed coding sheet and key (Appendix G). Coders were chosen for their impartiality to the research. In addition to verification, the panel of experts reviewed and gave input on the final coding protocol (Cooper & Hedges, 1994; Rosenthal, 1984; Wolfe, 1984).

Estimation of quality was quantified using a 9-point Likert scale based on the five criteria for internal validity presented earlier. The coder scores from each of the five criteria were averaged to establish a mean weighting (Cooper & Hedges, 1994; Wolfe, 1984). Discussion of coder reliability parameters can be found in Appendix F. The coding process attempted to record as much data as could be extracted from the source analyzed: journal article, dissertation, or original study.

Effective reliability of the coders' efforts was calculated using Rosenthal's (1984) table for Effective Reliability of the Mean of Judges' Ratings. Control for bias was managed using a two-step process: each study was first coded for quality, and then the relevant data were extracted (Cooper & Hedges, 1994; Wolfe, 1984).
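
Rosenthal's (1984) effective reliability values are computed with the Spearman-Brown formula for the mean of several judges' ratings; the sketch below illustrates the calculation with hypothetical values.

```python
def effective_reliability(mean_r, n_judges):
    """Spearman-Brown effective reliability of the mean of n judges' ratings,
    the formula underlying Rosenthal's (1984) table (illustrative sketch)."""
    return (n_judges * mean_r) / (1 + (n_judges - 1) * mean_r)

# Hypothetical mean inter-coder correlation of .60 across three coders
print(round(effective_reliability(mean_r=0.60, n_judges=3), 2))  # 0.82
```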

Statistical Analysis and Interpretation
Studies selected for inclusion in the meta-analysis were analyzed to generate an effect size, a measure of the direction and magnitude of the score change between the pre- and post-treatment measurements, or between the control and experimental groups. In this study, the effect size represents the nature, positive or negative, and the magnitude of the influence of the organized camping experience on the self construct measured. The collection of effect sizes was then tested for heterogeneity of results. The influences of heterogeneity, the variability of the collection of effect sizes (Cooper & Hedges, 1994), were identified through statistical and graphical tests for homogeneity, an indication that the studies tested similar hypotheses (Rosenthal, 1987; Wolfe, 1986).
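
The statistical test for homogeneity referenced above can be sketched with the Q statistic, which compares each effect size to the inverse-variance weighted mean; this is a generic illustration, not the study's own code.

```python
def homogeneity_q(effects, variances):
    """Q statistic for homogeneity of effect sizes (illustrative sketch).

    If all studies share a single effect size parameter, Q follows an
    approximate chi-square distribution with k - 1 degrees of freedom;
    a Q value well above k - 1 signals heterogeneity.
    """
    w = [1.0 / v for v in variances]
    mean_w = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - mean_w) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    return q, df
```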

Those studies deemed to have a heterogeneous influence were evaluated in order to identify mediating effects, the moderator variables that would account for the differences identified (Cooper & Hedges, 1994; Rosenthal, 1984; Wolfe, 1986). The effect size findings were then subjected to comparison and combination and evaluated for their homogeneity of results and generalizability; strengths and weaknesses; and similarities and trends (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunt, 1997; Hunter, Schmidt & Jackson, 1982; Wachter & Straf, 1990). Both the comparison and combination sensitivity analyses were based on research method, construct evaluated, research instrument utilized, and the data available on population characteristics and treatment settings. The sensitivity analysis was designed to explore the influence of these variables on the random effect generated in the meta-analysis. The moderators identified as having a significant influence on the effect are discussed in Chapter 4.

Calculations for comparison and combination were performed using several methods. First, calculations were made using the unadjusted effect size from each study, that is, equal weighting regardless of study quality or sample size. Next, calculations were performed with each study's effect size weighted by the inverse of that study's estimated effect size variance. Finally, the calculations were repeated using each study's coded mean quality score in combination with the inverse of the variance as the weighting of that study's effect size. Weighting by the inverse of the variance effectively gives more weight to those studies that were conducted with greater precision (Cooper & Hedges, 1994). The use of these weighted methods to compare and contrast effect sizes provided a range of results for interpretation, as opposed to a single-method approach that might be subject to criticism for selection bias.
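
The three weighting schemes just described can be sketched as follows; the effect sizes, variances, and quality scores are hypothetical.

```python
def combined_effect(effects, variances, quality=None):
    """Weighted mean effect size under the weighting schemes described above
    (illustrative sketch; quality scores are the coded mean quality ratings)."""
    if quality is None:
        weights = [1.0 / v for v in variances]                 # inverse-variance weights
    else:
        weights = [q / v for q, v in zip(quality, variances)]  # quality times inverse variance
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Hypothetical values, for illustration only
effects, variances, quality = [0.40, 0.15, 0.55], [0.04, 0.02, 0.09], [7.0, 5.5, 8.0]
unweighted = sum(effects) / len(effects)                 # equal weighting
inverse_variance = combined_effect(effects, variances)   # precision weighting
quality_weighted = combined_effect(effects, variances, quality)
```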

The foundation of the statistical analysis was the use of Hedges' g, an effect size representative of the d-index for the analysis of dichotomous (two-group) data. While the d-index is the preferred measure for a mean comparison, inherent in the statistical aspects of a meta-analysis are the different formats in which primary study data are reported (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunt, 1997; Wachter & Straf, 1990). Data must frequently be converted from one type of measure to another using prescribed formulas. Because of this limitation, measures of the r-index, for correlational data, were also employed in the effect size analysis (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunt, 1997). Use of the r-index measures, Pearson's r and Fisher's Zr transformation, in place of the d-index is preferred by some meta-analysts (Rosenthal, 1987; Wolfe, 1984). Fisher's Zr is used to correct for error in Pearson's r as the number of subjects, N, increases (Cooper & Hedges, 1994). Multiple methods of calculating effect size were employed to provide a spectrum of results, thus defending against the potential for bias that could result from the use of a single method. Chapter 4, Analysis and Discussion of Data, addresses the handling of the range of effect size calculations.
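
For illustration, the standard conversion and transformation formulas referred to here can be sketched as below; the group-size parameters are assumptions used only to show the form of the calculation.

```python
import math

def d_to_r(d, n1, n2):
    """Convert a standardized mean difference (d or g) to r using the group sizes."""
    a = (n1 + n2) ** 2 / (n1 * n2)  # correction term; equals 4 when group sizes are equal
    return d / math.sqrt(d ** 2 + a)

def fisher_z(r):
    """Fisher's Zr transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fisher_z_variance(n):
    """Sampling variance of Zr, which depends only on the number of subjects."""
    return 1.0 / (n - 3)
```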

Ultimately, the data available were the deciding factor in the calculation indices employed. For example, most studies provided the data needed to calculate g from a mean comparison, while some supplied only a Student's t score or an F value, which was used to calculate an r value as an effect estimate. Utilizing the available options for calculating results provided a sensitivity analysis across a range of results that were then evaluated for consistency. Based on the sensitivity analysis and meta-analytic theory, the range of effect size metrics was reduced from g, r, and Zr to Pearson's r; the details of this decision are discussed in Chapter 4.
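
A sketch of the standard conversions from reported test statistics to r, as described above; the t value and degrees of freedom shown are hypothetical.

```python
import math

def r_from_t(t, df):
    """Effect size r from a Student's t statistic and its degrees of freedom."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_from_f(f, df_error):
    """Effect size r from a one-degree-of-freedom F statistic (where F = t squared)."""
    return math.sqrt(f / (f + df_error))

# Hypothetical example: t(58) = 2.10 reported for a pre/post comparison
print(round(r_from_t(2.10, 58), 2))  # 0.27
```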

Summary
An extensive search of primary and informal channels was used to identify the population of studies that report the influence of the organized summer camp experience on the self constructs of youth. A sample of the studies meeting the criteria for inclusion in the meta-analysis was coded by a panel of verification coders in order to confirm the reliability of the primary coding. An effect size was generated from each study, and these were then compared to evaluate the moderator variables' influence on the identified effect. Finally, the results were combined to establish the research findings and to create an overview of the existing knowledge on the research question. Chapter 4 addresses the Analysis and Discussion of Data. Chapter 5 presents the conclusions from this analysis.

 
