The reporter couldn't, of course, which was Relic's point. Ranking schools is a hollow, misleading exercise that can do more harm than good. Yet, while we believe passionately in the ineffable and unquantifiable elements of independent schools, we also believe that we cannot reflexively dismiss all efforts to apply some elements of quantitative analysis to our work.
Having addressed variations of the central question — How can we measure a school's success? — with mixed results across a collective five decades of service to boards, we endeavored to develop a more helpful answer. To this end, we queried 200-plus leaders of California Association of Independent Schools (CAIS) member schools: "A trustee persistently asks how to measure and communicate student outcomes as promised in our mission statement. Please share how you manage this question when it is posed by your board members."
The replies deluged our email inbox. Of the 70 responses:
- 10 offered suggestions or resources for further study;
- three offered a version of "great question, good luck";
- one suggested, "Get rid of the trustee!"; and
- 56 heads, or 80 percent of respondents, simply replied to ask that we share our findings with the group.
Assuming this sampling of heads is representative of CAIS schools, and of independent schools nationwide, the level of interest affirmed for us the urgency of our project.
Traditional Measures of Successful Schools
Although schools have been fairly quiet about it until recently, some have been using traditional measures of value for years. Increasingly, to varying degrees, independent schools employ these practices today.
Parent Satisfaction
Some quantitative metrics are quite telling and can be considered valid in gauging a school's health and success. Depending on variables such as local and national economic factors, proximity to competitor schools, relative tuition, and the quality of surrounding public schools, an independent school's level of demand, as seen through admission statistics over time, is one indicator of its success, or perceived value.
For heads and trustees, re-enrollment is a more reliable and telling indicator of a school's appeal. Parents are likely to re-enroll if they believe the school is delivering on its promise. Independent schools have an annual contract with families, so voluntary attrition is a meaningful measure synthesizing many individual family decisions. For example, independent schools with average annual voluntary attrition rates over 15 percent across a number of years could be viewed as less successful than nearby schools with attrition rates under 8 percent during the same time period.
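The arithmetic behind such a comparison is straightforward. A minimal sketch, using entirely hypothetical enrollment figures, of how a school might compute its average annual voluntary attrition rate over several years:

```python
# Illustrative sketch with hypothetical figures: average annual voluntary
# attrition, computed as departures by choice divided by students who were
# eligible to re-enroll.

def voluntary_attrition_rate(eligible_to_reenroll, voluntarily_departed):
    """Voluntary attrition rate for a single year."""
    return voluntarily_departed / eligible_to_reenroll

# Hypothetical five-year history: (students eligible to re-enroll,
# voluntary departures) for each year.
history = [(400, 28), (410, 33), (405, 45), (415, 30), (420, 25)]

rates = [voluntary_attrition_rate(eligible, departed)
         for eligible, departed in history]
average_rate = sum(rates) / len(rates)
print(f"Average annual voluntary attrition: {average_rate:.1%}")
```

Tracking the multiyear average, rather than any single year, smooths out one-time events (a relocation wave, a leadership transition) that can distort a snapshot.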
Some schools employ parent satisfaction surveys to useful effect as a way of providing educators and trustees with a more textured constituent narrative. As with any survey data, the benefit provided by parent surveys depends heavily on the quality of the actual instrument. Karen Eshoo, head at Vistamar School (California), points out that parent surveys are often not well designed, with poorly framed, leading questions resulting in dubious conclusions. Survey validity can also be limited by sample size and response rates.
Sometimes, surveys can be venting mechanisms, particularly if they are anonymous and include qualitative, open-ended responses — which could be appropriate, if encouraging community venting is one of the aims of the exercise. However, Eshoo observes that heads and trustees too often allow one or two anecdotal narrative comments to disproportionately affect their interpretation of the results. This tyranny of a vocal minority opinion is one of the reasons most schools don't get involved in qualitative analysis. It's hard to do well, and easy to be misled.
Ultimately, the value of any parent survey instrument used to gauge a school's (or program's, or faculty's, or school leader's) effectiveness depends on our ability to translate the results from a valid survey into meaningful action. As Douglas Hubbard, a specialist in applied information economics, asserts, "If a measurement matters at all, it is because it must have some conceivable effect on decisions and behavior. If we can't identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value."1
In other words, we must resist the assumption that, because we can measure something, we should.
Standardized Testing
Student achievement on various standardized measures may well result from the school's own highly selective admission operation, but that doesn't mean that a longitudinal look at such numbers won't convey important information, especially if there is a statistically significant drift in one direction or another.
We find substantial variation even within the typical repertoire of standardized tests. Many of our colleagues see the SAT II (subject) tests as being more informative than their SAT I counterparts. The AP exams, with their "table grading" practices to establish reliability, and the IB exams that eschew multiple-choice formats ("American questions"), are improvements and point to the evolution of standardized, normative tests in a world increasingly willing to work with open-ended and creative-response questions.
As William Sedlacek explains in Beyond the Big Test: Noncognitive Assessment in Higher Education, standardized tests such as the SAT are popular because they are simple and convenient to use as a sorting and ranking mechanism, provide a common assessment for candidates from varying backgrounds, and are relatively inexpensive to administer. The public (and some educators) have come to view student standardized test scores as an indicator of a school's rigor or quality, although, as Sedlacek observes, this was never an intended purpose of standardized tests, nor is it always a fair or accurate measure.
Among the lessons emerging from more than a decade of experience with the No Child Left Behind Act in the public schools are the realizations that an overemphasis on the value of standardized test outcomes forces teachers to narrow instruction to the tested subjects and materials, and that standardized assessments only measure one type of learning — memorization and recall — which is among the lowest-order cognitive skills — what John Austin, head of King's Academy (Jordan), calls "the educational equivalent of factory work."2 The overemphasis on standardized test scores in public schools has, in effect, homogenized and narrowed the curriculum to the detriment of nontested subjects such as art, music, science, history, and physical education.
While independent schools are unlikely to follow this path, it is still prudent for school heads to help trustees understand the limitations posed by focusing too much on standardized test results because they measure an increasingly obsolete model of learning. For 150 years, American public education has conceived of students mostly as passive consumers of information, in accordance with the needs of the Industrial Age — an era that faded decades ago — and has clung to a model of teaching and learning whose value has been eviscerated by the Internet. Students today have ready access to the same expertise and content that teachers once hoarded and dispensed in their lessons. Students of yesteryear came to school to acquire knowledge; they now come to school to make meaning.
Accreditation
Among the traditional measures of the quality of an independent school's work, accreditation by a regional or state organization of independent schools provides the most comprehensive assessment of a school's effectiveness, as measured against its mission and aspirations and in the context of its resources and capacities. The work of accrediting teams constitutes the educational equivalent of an audit, merged with consulting (and sometimes, therapeutic) services as well.
While the actual benefit of an accreditation can vary wildly based on the quality of both the school's self-study and its visiting team, the reflective, evaluative, and generative processes embodied by accreditation offer profound opportunities for trustees and school leaders to assess and improve a school's "value added."
In recent years, regional and state accrediting agencies have encouraged the use of data to both substantiate and illustrate dimensions of student learning. This shift is underscored by the recent adoption of Criterion 13 in the Standards issued by the NAIS Commission on Accreditation:
The standards require a school to provide evidence of a thoughtful process, respectful of its mission, for the collection and use in school decision-making of data (both internal and external) about student learning.3
Some schools may not yet fully appreciate this new requirement.
Recent Developments in Measuring School Success
Multiple approaches reaching beyond traditional means of measuring school success are emerging across the country, and reflect use of both quantitative and qualitative evidence.
Value-Added Assessment: The CWRA+
There are a number of new, interesting, and increasingly popular efforts under way to establish a normative baseline for individual students upon their entry into independent schools, and then to determine their relative growth over time as a value-added (or subtracted) proxy. One example is the College and Work Readiness Assessment (CWRA+).
The CWRA+, produced by the Council for Aid to Education, is designed to measure a school's contribution to developing a student's critical thinking, analytic reasoning, problem solving, and written communication ability — collectively, what are often referred to as 21st-century skills. School leaders can use the CWRA+ to identify areas of strength or areas needing growth in their instructional programs, particularly emphasizing these higher-order skills, and can, as John Austin puts it, "initiate a process [in which] individual teachers — and the faculty as a whole — study the effectiveness of their teaching."4
The CWRA+ presents realistic problems that require students to evaluate complex support materials and propose actions or solutions. Students' written responses, not multiple-choice answers, are graded to assess their abilities to think critically, reason analytically, solve problems, and write clearly and persuasively.
According to Chris Jackson, the director of business development for the CWRA+, this instrument evolved from the council's Collegiate Learning Assessment (CLA) several years ago when The Lawrenceville School (New Jersey) proposed administering the CLA in its upper school. The results were deemed both reliable and insightful for assessing 21st-century skills among high school students. The Council for Aid to Education has now adapted the CLA's complexity and content for use in independent and public high schools — and renamed it the CWRA+. The level of interest from those working with middle school students has been so great that CWRA+ has recently added an eighth-grade version. What is especially exciting is the opportunity these measurements provide schools to conduct helpful institutional research and not just generate numbers used for validation or self-congratulation.
While schools employ the CWRA+ for various diagnostic and evaluative purposes, value added is determined by administering the CWRA+ to entering high school freshmen and graduating seniors, and analyzing their individual growth. Given the relative importance of writing, critical and analytical thinking, and problem solving — compared with the more limited range of skills measured by standardized tests like the SAT I — the CWRA+ offers a compelling alternative for school leaders who seek to gauge and improve their students' capacity for these essential skills.
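The value-added logic described above can be sketched in a few lines. This is an illustration only, with hypothetical scores, and is not the Council for Aid to Education's actual scoring methodology:

```python
# Illustrative sketch (hypothetical scores, not CAE's methodology):
# a value-added reading compares a cohort's entering (freshman) results
# with its graduating (senior) results on the same instrument.

from statistics import mean

# Hypothetical matched scores for the same five students, four years apart.
freshman_scores = [1050, 980, 1120, 1010, 1095]
senior_scores   = [1190, 1105, 1240, 1150, 1230]

# Per-student growth, then the cohort average.
growth = [senior - freshman
          for freshman, senior in zip(freshman_scores, senior_scores)]

print(f"Mean entering score:       {mean(freshman_scores):.0f}")
print(f"Mean graduating score:     {mean(senior_scores):.0f}")
print(f"Mean growth (value added): {mean(growth):.0f} points")
```

The essential design choice is pairing each student's entering and graduating scores, so the growth figure reflects what happened during the students' years at the school rather than who the school admitted.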
High School Survey of Student Engagement (HSSSE)
Engagement correlates highly with academic achievement. Studies indicate that student engagement is multifaceted and complex, influenced by variables including purpose, relevance, challenge, relationships, support, rigor, connectedness, and boredom.5
Emerging from Indiana University's National Survey of Student Engagement (NSSE), which focused on postsecondary students, HSSSE has been available to high schools since 2004. The survey informs a school's work based on the ways in which students engage in the educational program and the broader life of a school. It analyzes three domains of engagement:
cognitive/intellectual/academic engagement, social/behavioral/participatory engagement, and emotional engagement. HSSSE test items measure a range of variables that schools control or influence. Do students feel supported by school staff and faculty? Are rules applied fairly and consistently? Do students feel safe at school? Are there opportunities for students to be creative in their schoolwork?
Potentially far more institutionally useful than data from standardized achievement testing, evidence provided by the HSSSE can help pinpoint specific educational practices that contribute to desired outcomes. A school's programming can be significantly strengthened depending on the extent to which its faculty and coursework engage students and the degree to which the school's leadership strives to improve engagement when this assessment tool exposes negative trends.
It is worth noting that both HSSSE and CWRA+ now provide independent school norms for the results of their measurements. The growing popularity of these instruments now makes it possible to generate such a comparison group, which increases the usefulness of these tools as comparative measures.
Benchmarking
Trustees with private sector backgrounds tend to be eager proponents of gathering comparative data from competitors. As schools delve further into data-based decision making, benchmarking against peers offers a rich source of information to help school leaders gauge relative performance.
NAIS's statistics are among the most widely used of the benchmarking data available to NAIS-member schools. The National Business Officers Association (NBOA) also offers demographic data and extensive survey services. Numerous regional associations of business officers prepare comprehensive annual surveys of member schools, and many regional independent school accrediting associations also provide extensive data annually to members as well.
On a national level, the Independent School Data Exchange (INDEX) provides benchmarking and other services to about 100 member schools nationwide. INDEX helps school leaders share data, analysis, research, and information to assist them in collaborative decision making, policy development, and strategic planning.
Lisa Pullman, INDEX's executive director, observes that in any benchmarking effort, data quality is essential for meaningful analyses and conclusions — i.e., comparing apples to apples. INDEX controls its data with this in mind, making adjustments for regional cost of living, examining trends over time (longitudinal measurement) rather than snapshot data, providing sophisticated ratio analyses developed by the schools themselves over the last 20-plus years, and enabling its member schools to measure against peers of similar sizes and populations. INDEX's key performance indicators lay out comparative data for academics; administrator and faculty salaries and benefits, experience, and workload; admissions and development; athletics; financial operations and endowment; governance; marketing, public relations, and communications; personnel; physical plant and campus; student and teacher diversity; technology; and tuition and fees.
Benchmarking requires and fosters cooperation among schools in a data pool. Some school leaders express concerns that it is a small step from peer benchmarking to school rankings, but we suggest this is quite preventable. Healthy benchmarking collectives, including INDEX, are characterized by member school commitment to collegiality and use of data to benefit the greater good. Further, local cohorts of schools utilizing benchmarking practices can set ground rules that guard against the use of such data for invidious comparisons.
Data Dashboards
In another example of educators adapting corporate tools to aid in leadership and governance, independent schools often employ "data dashboards" to present metrics that gauge aspects of school performance in an easy-to-understand format for busy boards: enrollment and re-enrollment history, development, budget compliance, financial aid, standardized testing, diversity initiatives, high school and college placement, and more. The new accreditation process developed jointly by the state associations in California and Hawai'i emphasizes data-driven decision making, and requires the associations' 300-plus member schools to use such a dashboard.
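In spirit, a board dashboard is nothing more than a handful of vital signs shown with their prior-year values and direction of change. A minimal sketch, using entirely hypothetical metrics and figures:

```python
# Hypothetical sketch of a minimal board "data dashboard": each vital sign
# is shown with its current value, prior-year value, and trend direction.

metrics = {
    "Enrollment":           (412, 405),
    "Re-enrollment rate":   (0.93, 0.91),
    "Annual fund ($000)":   (780, 740),
    "Financial aid ($000)": (1150, 1080),
}

def trend(current, prior):
    """Direction of change between two periods."""
    return "up" if current > prior else "down" if current < prior else "flat"

# Render a simple fixed-width table for a board packet.
print(f"{'Metric':<22}{'Current':>10}{'Prior':>10}  Trend")
for name, (current, prior) in metrics.items():
    print(f"{name:<22}{current:>10}{prior:>10}  {trend(current, prior)}")
```

The value is less in the table itself than in the discipline of choosing which few measures the board actually needs to see each meeting.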
Ethan Williamson, head of Barnhart School (California), has found that providing dashboard reports with this information at board meetings lets his trustees know that his administration is monitoring the school's vital signs. The process invites and inspires strategic conversations around the data, while helping Williamson stay a step ahead of the sort of trustee anxiety that may arise in the absence of such practices.
Some ambitious dashboards track measures of rigor in the academic pursuits of current students and graduates by examining trends in course selection, performance, and credit earned in the secondary schools and colleges their alumni attend. A few extend further, tracking aggregate data on outcomes such as graduates' satisfaction with adult life and professional accomplishment. Trustees from the corporate world, accustomed to using such in-depth information in their decision process, would likely welcome these extensive dashboard measures.
Still, much of what we do in schools cannot be easily measured and is not always subject to quantitative analysis. Data-based decision making is a central responsibility of heads and trustees, but our decisions and actions cannot always be informed by data to the degree typical in the corporate world.
As Alison Cumming, director of publications and research at Wickenden Associates, notes, performance-based dashboards are much easier to design and interpret than "measures of growth (improvement over time) and student engagement (the willingness to persevere, try new things, set high personal aspirations)."6 Whether trustees are provided only static performance measures because they're readily available, or because quantitative measures are familiar to boards characterized by a corporate outlook, the tendency to rely too heavily on performance indicators such as test scores and college placements can eclipse other dimensions of a school's mission and thereby set priorities that run counter to the school's purposes.
Moreover, as we learn from research such as that of the University of Pennsylvania's Angela Duckworth on "grit," featured with other research and vignettes in journalist Paul Tough's How Children Succeed: Grit, Curiosity, and the Hidden Power of Character,7 social scientists are now substantiating what some educators and parents have long believed: Variables including curiosity, resilience, self-control, and determination — what we sometimes refer to as personality traits or simply "character" — are even more critical than measures of IQ or academic achievement in determining how and why children succeed. These elusive but essential attributes of character that form the core of most mission statements remain outside the grasp of the dashboards that we've examined to date.
From a practical standpoint, engaging in exhaustive data collection and analysis requires a substantial commitment of resources and time that falls beyond the capacity of many independent schools. For this reason, we need to be judicious in assessing the need for and purpose of data dashboards. As Cumming also notes, it is crucial that boards identify what additional data they need to fulfill their governance duties, while honoring the administration's practical capacity to generate, compile, evaluate, and interpret the data for the trustees.8
Longitudinal Alumni Surveys
Michael Chun, past president of Kamehameha School, Kapalama (Hawai'i), once astutely remarked, "Great schools are measured not by the accomplishments of their students, but by the lives led by their alumni." Consider the graduates of the school where you serve. "Are they active and involved in their communities?" asks John Austin. "Have they put their own educations to work in the service of others? Are they doing what Howard Gardner and his team at Harvard call 'good work' — work that is excellent in quality, socially responsible, and meaningful to its practitioners?"9
Beyond these sorts of reflections, how can our graduates inform efforts to improve our schools? For almost 20 years, Kevin Graham of Lookout Management has been providing customized, statistically sound surveys of alumni. With an orientation toward outcomes, these surveys aim to provide better understanding of the effectiveness of the independent school experience by surveying those with firsthand knowledge. Graham's analysis illuminates a school's ability to deliver on the promise of its mission. What do our alumni tell us we're doing well? What do they tell us we're not doing so well? What do we think we're doing well that stands at odds with what our alumni report? The survey exposes the key drivers of overall satisfaction with the independent school experience, from the perspective of those who have applied it to the next stage of their lives.
Graham's alumni survey also educates school leaders and trustees about how well a school prepares its graduates, not just in the core academic skills, but also in the relational, leadership, self-advocacy, coping, and collaborative skills, among other things.
The Harvard Assessment Seminars
Some years ago, a group of schools began to collaborate with Richard Light at Harvard University, attempting to adapt his work with the very successful Harvard Assessment Seminars to the world of secondary independent schools. More than two decades ago, Light developed a systematic approach to interviewing students at Harvard and using the results to guide the debate about how to improve the advising program, the interpersonal connections students make with their peers and their teachers, and other elements of the undergraduate experience.
Now, in conjunction with Peter Aitken of Benchmark Research, Light's survey methodology captures impressions of graduating seniors in 28 independent schools, attempting to, as Victoria Groves of Harvard's Kennedy School puts it, "harness what's working well and integrate it into the cultures of the schools."10 We believe that this project, still in its developmental stages, holds great promise as a valuable measure of the work we do with students.
The Mission Skills Assessment (MSA)
Educators recognize that success at school and in later life depends on skills beyond academics, such as creativity, curiosity, teamwork, and resilience — the sorts of attributes described in countless independent school mission statements and that we seek to instill in our graduates. In this regard, we are particularly excited by the work recently begun involving a cohort of schools that are now a part of INDEX.
Four years ago, a set of 20 INDEX-member schools partnered with the Educational Testing Service (ETS) to develop the Mission Skills Assessment (MSA), an instrument to measure in middle school students mission-related skills that are rarely assessed, yet critical to student success and central to the purpose of independent schools: curiosity, teamwork, creativity, resilience, ethics, and time management. Over time, as results aggregate, participating schools will use the MSA to benchmark themselves against each other and to track student performance in these skills relative to their academic progression. In doing so, the MSA schools seek to improve their ability to teach these attributes so essential to shaping confident, capable students, and to demonstrate the value our schools provide.
The MSA promises powerful new insights in addressing how we might measure success and value added by independent schools. The assessment, determined to be reliable and valid according to ETS scientific research standards, is in its third round of data collection, examining the correlation between mission skills and academic performance in a longitudinal context.
While the MSA data remain very preliminary, they are promising. Since the mission-related skills correlate closely with such whole-child priorities as health and adult happiness, a school's ability to measure (and improve) its capacity to deliver these competencies to students is a powerful companion to the metrics provided by standardized achievement tests.
Finding Our Balance
Rigorously constructed, double-blind, control group studies are rare in the world of pre-K-12 education for obvious reasons. There is a fundamentally sacred (though nonsectarian) partnership between parents and schools and, try as we might, not all parents want for their children what the schools want for their students. Teachers rarely agree universally about what is most important, and neither do trustees. In this context, the pursuit of quantifiable outcomes tends to ascribe value to whatever can be measured, which we believe can narrow and distort a school's or board's priorities and impair the effectiveness of its staff. As Cumming asserts, "Diligent boards are always information-hungry, and independent schools are well served by trustees who seek evidence that the school is performing well in all areas. At the same time, however, the thirst for more and more data can be insatiable and ultimately unproductive if the information presented does not directly support the board's governance function. Furthermore, the process of data collection and analysis can be very time-consuming for the school administration and staff. Time spent on this task is time not spent on other tasks that are vital to the school's success."
As we are called to support our actions and decisions with data and to quantify the value added by our schools, more substantive approaches like those outlined here offer promise and can expand the conversation in important ways. As school leaders, we must be courageous, resisting the significant pressures to commodify education as a neatly defined, measured set of metrics and programs and outcomes.
At the same time, we do our students and ourselves no favors if we reject all efforts to measure the value we know we deliver.
Notes
1. Douglas Hubbard. How to Measure Anything: The Value of "Intangibles" in Business. Hoboken, NJ: John Wiley & Sons, 2010.
2. John Austin. "Creating an Academy of Learning: Authentic Assessment, Peer Review, and the College and Work Readiness Assessment." Independent School (Spring 2010): 66-73.
3. NAIS Commission on Accreditation. "Criteria for Effective Independent School Accreditation Practices: Criteria for Regional and State Associations." National Association of Independent Schools (2011).
4. Austin, 66-73.
5. See, for example, Jennifer A. Fredricks, Phyllis Blumenfeld, & Allison H. Paris. "School Engagement: Potential of the Concept, State of the Evidence." Review of Educational Research 74, no. 1 (Spring 2004): 59-109.
6. Alison Cumming. "Data Dashboards Demystified: Why So Many Trustees Are Asking for Them, and How They Can Benefit the Board and the School." (2010). Retrieved 2/19/13 at: www.wickenden.com/Documents/Dashboards%20Demystified.pdf.
7. Angela Duckworth, et al. "Grit: Perseverance and Passion for Long-Term Goals." (2007). Retrieved 3/10/13 at: www.sas.upenn.edu/~duckwort/images/Grit%20JPSP.pdf; Paul Tough. How Children Succeed: Grit, Curiosity, and the Hidden Power of Character. Boston: Houghton Mifflin Harcourt, 2012.
8. Cumming, "Data Dashboards Demystified." www.wickenden.com/Documents/Dashboards%20Demystified.pdf.
9. Austin, 66-73.
10. Victoria Groves. "Helping Schools Shed Light on Better Learning Practices." Harvard Kennedy School (2012). Retrieved 3/17/13 at: www.hks.harvard.edu/news-events/news/articles/helping-schools.