Learning-Centered Leadership Practices for Effective High Schools Serving At-Risk Students

by Jason Huff, Courtney Preston, Ellen B. Goldring & J. Edward Guthrie

Background/Context: Modest gains in NAEP scores by American high schools over the past twenty years highlight the need to identify factors associated with gains in student achievement. Among those potential factors is school leadership; the limited research on leaders’ work in secondary schools points to the need to understand how high school leaders structure their schools to promote student learning.

Purpose/Objective/Research Question: We ask the question, What distinguishes leaders’ practices in more effective high schools from those in less effective high schools that serve large proportions of at-risk youth?

Research Design: We first identify more and less effective high schools using value-added scores, and we analyze interview, observational, and survey data collected in these schools to compare and contrast how leaders support key practices and organizational routines by their staff. Our analyses include work by traditional leaders (principals and assistant principals) as well as other leaders’ (e.g., department chairs, teacher leaders) practices within the schools.

Conclusions/Recommendations: We found differences between higher and lower value-added schools in leaders’ conceptions of intended routines (the ideal policies that faculty are to carry out) and in their attention to implementation, through closer examination of faculty members’ actual actions or directed support for faculty members’ practices. Two primary themes characterize the differences in their practices. First, leaders in higher value-added high schools are more involved in, intentional about, and attentive to how their ideal/intended routines are implemented, thus ensuring that teachers’ actual practices change. They focus on how these routines provide ongoing monitoring and feedback for their faculty to build and improve teachers’ quality of instruction, alignment of curriculum, and systems of support for students. Second, leaders in higher value-added schools provide more targeted, systemic efforts to support personalized learning for students.

The National Assessment of Educational Progress has found only moderate gains in high school students’ learning over the past two decades, and international assessments indicate that the gaps between American students and their peers in other nations are widest at the high school level, compared to the elementary and middle school levels (Grigg, Donahue, & Dion, 2007; Provasnik, Gonzales, & Miller, 2009). This work suggests a need to investigate factors associated with gains in secondary student achievement specifically, because high schools differ significantly from elementary schools in their larger size, multiple subject-area departments, heterogeneous student bodies, and role in providing students with an entry into the larger society and workforce (Fuhrman & Elmore, 2004; Jacobs & Kritsonis, 2006).

The majority of such research has focused on teachers, who make the biggest in-school contributions to student achievement, often to the neglect of principals, who make the second largest contribution (Hallinger & Heck, 1996; Rockoff, 2004). In particular, there is a need to understand how principals in high-achieving high schools structure their schools in ways that promote student learning.

Prior research across all school levels suggests that schools whose leaders articulate an explicit school vision, generate high expectations and goals for all students, monitor their schools’ performance through regular use of data and frequent classroom observations, and focus on the organizational management of their schools demonstrate increases in student learning (Horng, Klasik, & Loeb, 2010; Klar & Brewer, 2013; Leithwood & Riehl, 2005; Sun & Leithwood, 2015). Research also suggests that principals play important roles in implementing reforms: in schools where principals actively work to secure curricular materials and act as resources for instructional reforms, their teachers more frequently use new instructional strategies (Nettles & Herrington, 2007; Quinn, 2002).

While earlier analyses point to the influences that principals can have through practices like selecting high-quality teachers and setting high academic goals (Brewer, 1993), more work is needed to examine leaders’ influences in high schools (Halverson & Clifford, 2013; Portin, Russell, Samuelson, & Knapp, 2013). Recent studies also point to differences not only in how high school leaders actually use their time, but also in how their practices influence student achievement. For example, high school principals spend less time on instructional activities such as classroom walkthroughs than their elementary peers, and the time they spend on walkthroughs is negatively associated with achievement growth in math, whereas time spent on coaching and evaluating teachers is positively related to achievement growth in mathematics (Grissom, Kalogrides, & Loeb, 2013; Grissom, Loeb, & Master, 2013). Very few studies, however, have focused on how high school principals organize and implement leadership practices around student learning (Crum & Sherman, 2010; Halverson & Clifford, 2013). These gaps in the literature are crucial to address because of high schools’ limited success in raising student achievement for all students.

The purpose of this paper is to analyze how high school leaders implement and support different practices and organizational routines that target improved instruction and learning. We ask the question: What distinguishes leaders’ practices in more effective high schools from those in less effective high schools that serve large proportions of at-risk youth? We identify more and less effective high schools using value-added scores and analyze data collected in these schools to compare and contrast how leaders support key practices and organizational routines by their staff. Our analyses include work by traditional leaders (principals and assistant principals [APs]) as well as other leaders’ (e.g., department chairs, teacher leaders) practices within the schools.

In the remainder of this paper we first summarize the literature on effective schools and learning-centered leadership to illustrate the need to understand leaders’ roles in implementing or supporting those pervasive practices that characterize effective high schools. We also explain the concepts of “practice” and “organizational routine” that informed our analyses, to identify how school leaders influence their staffs’ work to improve instruction and learning. We then detail our methodology for choosing four case study schools and describe each school before explaining our analyses. Finally, we present our findings and discuss their significance for researchers and practitioners.

THE CHALLENGE TO UNDERSTANDING EFFECTIVE HIGH SCHOOL LEADERSHIP

Many recent high school improvement efforts focus on the implementation of specific programs such as career academies and Advancement Via Individual Determination (AVID), or instructional changes such as Peer Enabled Restructured Classroom (PERC) (see, for example, Kemple, Herlihy, & Smith, 2005; Quint, Bloom, Black, & Stephens, 2005; Thomas, Bonner, Everson, & Somers, 2015). However, reviews of research on high schools suggest that three decades of urban high school reform aimed at improving the academic performance of disadvantaged students have not resulted in substantially narrowing achievement gaps (Becker & Luthar, 2002; Cook & Evans, 2000; Davison, Seo, Davenport, Butterbaugh, & Davison, 2004; Clifford & Halverson, 2013). More recent studies of secondary schools highlight the impact that high-quality teacher-student interactions can have on achievement and provide evidence that interventions targeting teacher-student interactions may improve achievement (Allen et al., 2013; Allen, Hafen, Gregory, Mikami, & Pianta, 2015). There is little evidence, however, that any single program or practice will close more than a fraction of the achievement gap and reduce high school dropout (Berends, 2000; Miller, 1995). Instead, substantially improving learning opportunities for students from traditionally low-performing subgroups may require comprehensive, integrated, and coherent designs that simultaneously influence multiple components in schools (Chatterji, 2005; Shannon & Bylsma, 2002; Thompson & O’Quinn, 2001).

Principals often play central roles in implementing such comprehensive designs. As previously mentioned, school leaders who are linked to increases in student learning are those who focus on school organization, use data and classroom observations to monitor school performance, articulate an explicit vision, create a strong learning climate, and set high expectations for all students (Horng et al., 2010; Leithwood & Riehl, 2005; Murphy, Goldring, Cravens, Elliott, & Porter, 2007; Sebastian & Allensworth, 2012). Principals’ effects on student learning are also likely mediated by their efforts to improve teacher motivation and working conditions (Louis, Leithwood, Wahlstrom, & Anderson, 2010), as well as to hire high-quality personnel (Grissom & Loeb, 2011; Horng et al., 2010). This evidence thus points to the need for principals to address the broader conditions in schools that target improved learning.

However, the body of empirical research on effective leadership practices in schools is limited both conceptually and in terms of its applicability to high schools. Conceptually, much of the research on leadership in schools starts with a predetermined dimension of leadership—such as instructional leadership—with a list of behaviors and activities, and offers assessments or comparisons of principals’ adherence to specific, discrete practices that fit within the specified dimension (see, for example, Branch, Hanushek, & Rivkin, 2013; Goldring, Huff, May, & Camburn, 2008; Grissom et al., 2013; Horng et al., 2010; Supovitz, Sirinides, & May, 2010). While research has developed multiple lists of effective characteristics of school leadership, work remains to identify how leaders cultivate the conditions such that their faculty pursue improved teaching and learning over a long period of time. In this regard, work in Chicago high schools finds that principals influence classroom practices through setting high expectations for college-going, creating program coherence, and securing high-quality professional development (Sebastian & Allensworth, 2012). Similarly, Wiley (2001), in her study of high schools and mathematics achievement, found that

learning in mathematics is increased when school administrators facilitate development of shared values and beliefs about the school’s mission, support actions focused on instructional development, communicate respect and value of teachers, and when there is a minimal degree of professional community among department teachers. That is, the effect of transformational leadership is enhanced by ongoing teacher learning, teacher collaboration, and cooperative focus by teachers on improving teaching and learning—professional community. (p. 25)

In the remainder of this section, we summarize the research and concepts that emerged from a review of the literature on effective high schools (Goldring, Porter, Murphy, Elliott, & Cravens, 2009; Preston, Goldring, Berends, & Cannata, 2012) and guided our examination of principals’ work. As we discuss in our methods section, these areas of learning-centered leadership guided our data collection and initial coding of the data for these domains.

Personalized learning connections refers to opportunities and strategies for staff to develop stronger relationships with students such that staff can provide more individual attention to them and foster their sense of belonging to the school (Lee & Smith, 1999; McLaughlin, 1994; Walker & Greene, 2009). Personalized learning connections can exist in high schools on a continuum from strong and robust, leading to belonging and connectedness, to weak and nonexistent, leading to alienation and, ultimately, dropout (Crosnoe, Johnson, & Elder, 2004; Hallinan, 2008; Nasir, Jones, & McLaughlin, 2011; Rumberger, 2001).

Systemic use of data refers to data-based analyses and/or decision-making that are critical practices for school improvement efforts. Access to data alone cannot guarantee more effective practice (Ingram, Seashore Louis, & Schroeder, 2004; Schildkamp & Visscher, 2010; Spillane, 2012). Despite limited examinations of data use in high schools, existing evidence suggests a number of elements essential to its effectiveness. First is the diffusion of both the availability of data and a faculty’s ability to analyze and act on the data (Copland, 2003; Schildkamp & Visscher, 2010; Spillane, 2012). When data access is centralized in the hands of a principal, data use can be limited by the principal’s personal beliefs and skills related to data use (Luo, 2008). Second, research suggests that collaborative data-based inquiry affects intermediate outcomes, increasing teachers’ investment in schoolwide issues, strengthening instructional efficacy (Huffman & Kalnin, 2003), and characterizing both mature and successful school improvement efforts (Copland, 2003; Tedford, 2008; Wilcox & Angelis, 2011). Finally, once data are available and discussed collaboratively, their use must permeate organizational routines in order to be effective (Ingram et al., 2004; Schildkamp & Visscher, 2010; Spillane, 2012). Research shows that even when data are widely available within the school and collaborative structures exist for teachers, data analysis and use must still become standard operating procedures before they can have an impact on practice (Schildkamp, Poortman, & Handelzalts, 2016). Findings suggest that data use is one mechanism to develop educators’ shared commitment to school goals and students, and it is a mechanism for helping adults and students collaborate and receive feedback to continue engaging in the “work” of schooling.

Rigorous and aligned curriculum focuses on the roles leaders play to ensure that schools provide rigorous content in core academic subjects (Gamoran, Porter, Smithson, & White, 1997). On the whole, high school curricula are driven by state standards, as required under the No Child Left Behind Act of 2001 (2002) and the Every Student Succeeds Act (2015). Research on curriculum at the high school level centers around differences between vocational/technical curriculum or remedial courses and college preparatory curriculum, the effects of increasing curricular requirements for graduation, and access to curriculum, specifically advanced courses, for different groups of students.

Effective schools work to compress preexisting variability by promoting equal and equitable access to school resources and promoting the inclusion of all students in all aspects of the schooling experience; in other words, there is a focus on opportunities to learn. Lee and Burkham (2003) find that students in schools with a more constrained curriculum have lower odds of dropping out, and the literature suggests that effective schools should work to compress variability in course selection by race and class and ensure that all students have access to advanced courses (Muller, Riegle-Crumb, Schiller, Wilkinson, & Frank, 2010). Further, effective schools also create variable and differentiated experiences to meet the needs of diverse learners by offering transition classes (Gamoran et al., 1997), schools within schools (Lee & Ready, 2007), career academies (Maxwell & Rubin, 2002), college outreach programs (Domina, 2009), and other differentiated programs to meet student needs. While these programs are targeted at subgroups within a school to meet a specific need, such as informing at-risk students about the college application process, research findings on their effectiveness are mixed. This suggests that the structures, programs, or practices intended to create variable experiences for certain subgroups depend on other domains of effective high schools, such as personalized learning connections or quality instruction.

Our final domain of focus, quality instruction, encompasses the teaching strategies that teachers employ to achieve high standards for all students. Trends in this research literature cluster around common practices and specific classroom foci. Common practices include collaborative group work (Staples, 2007), formative assessments (Brown, 2008), inquiry-based learning (Cohen & Ball, 2001), scaffolding, and introducing new concepts concretely (Alper, Fendel, Fraser, & Resek, 1997). The foci include creating structures and classroom climates where students are allowed to try and fail without negative consequences (Alper et al., 1997), making content not only relevant to real life but also important, and setting high expectations for all students (Boaler, 2008). The vast majority of more recent work on quality of instruction has focused on developing frameworks and corresponding classroom observation rubrics to define, monitor, and evaluate the quality of instruction in schools, such as the CLASS-S (Pianta, Hamre, & Mintz, 2011) and the Framework for Teaching (Danielson, 2013). These frameworks, as well as others, suggest that high-quality instruction is rooted in a notion of engaged learning (instructional dialogue, feedback, responsiveness), whereas low-quality instruction consistently allows students to be passive and disengaged as learners (seatwork, receivers of information, and limited accountability for learning).

TARGETING EVIDENCE OF LEADERS’ IMPACT ON PRACTICE

In our examination of leadership in higher and lower value-added schools, we studied faculty members’ direct statements about their leaders’ actions to support practices and routines that focused on improving teaching and learning. Our analyses did not rely on a set list of behaviors in order to examine how leadership influences the enactment of our four essential domains of effective schooling. Rather, we used the concept of “practice” introduced by Spillane et al. (2011, 2012, and 2013), which they describe as “more or less coordinated, patterned, and meaningful interactions of people at work” (2011, p. 114). A key aspect of this notion of practice is Feldman and Pentland’s (2003) concept of the organizational routine, or “a repetitive, recognizable pattern of interdependent actions, carried out by multiple actors” (p. 105). According to Spillane (2011), such routines “structure day-to-day practice in schools by more or less framing and focusing interactions among school staff” (p. 116). Analyzing for repetitive practices enacted by multiple stakeholders, rather than looking for evidence of discrete behaviors, helps us to identify regular, patterned activity within schools rather than unique or random occurrences that have little broader impact on a faculty or its students. Such a focus also helps to identify what ongoing, sustained practices by staff may distinguish more versus less effective schools. Spillane et al. (2011) emphasize that such routines focus on interactions between individuals, not just their actions, and our analyses therefore targeted evidence of ongoing work by groups of individuals. With this framing of routines to include practice by multiple actors, we examined evidence of actions by traditional leaders such as administrators as well as others such as department chairs or other teacher leaders.

One final distinction is central to this part of our analyses: the intended (or “ostensive”) versus practiced (or “performative”) aspects of organizational routines. Feldman and Pentland (2003) explain, “the ostensive aspect is the ideal or schematic form of a routine. It is the abstract, generalized idea of the routine, or the routine in principle. The performative aspect of the routine consists of specific actions, by specific people, in specific places and times. It is the routine in practice” (p. 101). Spillane et al. (2011) contrast them in this way: “The ostensive aspect of organizational routines is part of the formal structure (i.e., the designed organization), whereas the performative aspect refers to administrative practice (i.e., the lived organization)” (p. 591). These authors all argue that studies of organizational routines must include examinations of the intended, ideal forms of practices, such as recommendations or formal expectations for what a group should do to examine school data, along with evidence that focuses on what different individuals actually do within the context of these expectations and their group. Only when researchers pay attention to both can they capture organizational routines in both their intent and their actual implementation. We thus used the notions of intended versus practiced aspects of routines to analyze and discuss how leadership in higher and lower value-added schools creates or supports pervasive, shared, and structured routines that guide faculty members’ practices to successfully implement the essential components.

With these four domains in mind—personalized learning connections, systemic use of data, rigorous and aligned curriculum, and quality instruction—we ask: How does leaders’ work vary in implementing and supporting key practices and organizational routines for effective schools? In this paper, we compare and contrast how the leadership in two lower and two higher value-added high schools creates routines, both intended and practiced, in these four domains.

SELECTION OF CASE STUDY SCHOOLS

We identified Broward County, Florida, as an urban district with a large population of students from traditionally low-performing groups. Broward County was chosen because of the diversity of its high schools, both in terms of performance and demographics, and for the availability of data linking students and teachers over time. We then identified four case study high schools that varied in terms of their effectiveness at improving student achievement among low-income and minority students and English language learners (ELLs). Using student scores from Florida’s Comprehensive Assessment Test (FCAT), we estimated school-level value-added models using 3 years of data, as well as separate estimates for students in different low-performing subgroups (low income, black, Hispanic, and ELL) to distinguish effective and less effective high schools for these groups.1 This value-added model takes the form

ΔA_it = βX_it + φ_m + Γ_it + ν_it  (1)

where ΔA_it represents the achievement gain for student i in year t relative to the student’s prior-year score in year t−1; X_it is a vector of characteristics for student i in year t, including gender, race/ethnicity, limited English proficiency (LEP) program participation, free lunch status, reduced-price lunch status, gifted program participation, a set of disability categories for students in special education, student mobility (within-year and between-year school change), and pre-high-school (Grade 8) attendance and normed math and reading test scores; φ_m is a school-specific fixed effect; Γ_it is a set of grade-by-year indicators that account for any unmeasured grade and year influences, such as variation in the difficulty of the test; and ν_it is a random disturbance term.

Such school-level value-added estimates are correlated with other measures of school performance and are better indicators of school performance than school-level average test scores (Grissom, Kalogrides, & Loeb, 2014; Meyer, 1997). Studies such as Hill, Kapitula, and Umland’s (2011) offer evidence that value-added scores relate positively to measures of teachers’ content knowledge and their quality of instruction. Further, Meyer (1997) and Ballou, Sanders, and Wright (2004) contend that school-level value-added measurement is a significant advance in comparing schools, particularly over average and median school test scores. Average or median test scores can be strongly influenced by students’ previous achievement, student mobility, and other nonschool factors. Value-added measures use statistical modeling that controls, to the extent possible, for nonschool factors that influence student achievement, which enables us to isolate schools’ contributions to their students’ performance. Furthermore, while tracking in high schools has the potential to bias teacher value-added estimates, the aggregate efficacy of internal tracking mechanisms may be a desirable component of school-level value-added measures, as the organization of the schedule and the allocation of teachers to courses should influence school performance measures (Harris, 2011).
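Equation (1) can be illustrated with a small simulation. The sketch below is not the authors’ estimation code (they used FCAT records and the full covariate vector described above); the school labels, effect sizes, and single free-lunch covariate are invented for illustration, and the grade-by-year indicators are omitted. It estimates the school fixed effects by ordinary least squares with one dummy variable per school and then ranks schools by those estimates, mirroring how value-added rankings distinguish higher from lower value-added schools:

```python
# Illustrative only: simulate student gain scores under a simplified
# version of Eq. (1) and recover the school fixed effects (phi) by OLS.
# School names, effect sizes, and the single covariate are invented.
import numpy as np

rng = np.random.default_rng(0)
n_per_school = 200
true_phi = {"A": 0.30, "B": 0.00, "C": -0.20}  # school fixed effects phi_m
beta_frl = -0.10                               # covariate effect (free lunch)

rows = []
for school, phi in true_phi.items():
    frl = rng.integers(0, 2, n_per_school)     # free/reduced lunch indicator
    gain = phi + beta_frl * frl + rng.normal(0.0, 0.5, n_per_school)
    rows.extend((school, int(f), float(g)) for f, g in zip(frl, gain))

# Design matrix: one dummy per school (no intercept, so each coefficient
# is that school's fixed effect directly) plus the student covariate.
schools = sorted(true_phi)
X = np.array([[1.0 if s == m else 0.0 for m in schools] + [f]
              for s, f, _ in rows])
y = np.array([g for _, _, g in rows])

coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
phi_hat = dict(zip(schools, coef[:len(schools)]))

# Rank schools by estimated value-added, highest first.
ranking = sorted(phi_hat, key=phi_hat.get, reverse=True)
print(ranking)
```

Dropping the intercept and including a dummy for every school makes each estimated coefficient that school’s fixed effect; with an intercept, one school would instead serve as the reference category and the other effects would be read as contrasts against it.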

While NCLB required states to test only once in the high school grades, Florida has for over a decade tested English/language arts and mathematics in more than one high school grade, improving our ability to identify more and less effective schools. Because we used 4 years of test score data in both math and reading to estimate school-level value-added for all high schools in the district, our estimated school effects represent the average contribution of a high school to student learning gains in either math or reading from 2005–06 to 2008–09, controlling for observed student characteristics.

We then identified four case study high schools that were a) relatively high-performing for all student groups or b) relatively low-performing for each student group, considering average value-added rankings for reading and math. Charter and magnet schools were not considered for selection as case study schools because their choice component may have influenced school-level value-added scores. Finally, we checked that the schools we identified had graduation rates consistent with our value-added results. Thus, our case study schools (2 higher value-added, 2 lower value-added) served large proportions of students in traditionally low-performing subgroups and were higher performing or lower performing relative to their district and the state as a whole. Below, we profile these four schools before discussing our analyses of each.

CASE STUDY DATA COLLECTION

We collected data from the four case study schools during three weeklong visits in the fall, winter, and spring of the 2010–2011 school year. Data collection included observations of full faculty meetings and professional learning community teams, and semistructured interviews with principals; APs; guidance counselors; department heads of English/language arts, mathematics, and science; and 18 tenth-grade teachers who taught those three subjects in regular and upper-level classes at each school. Principals were interviewed twice, during our fall and spring visits. We conducted classroom observations during our fall and winter visits in one class of each of the 18 teachers who were interviewed. Each teacher was observed four times (two observations per week) teaching the same class. Researchers used the Classroom Assessment Scoring System-Secondary (CLASS-S) to live code instruction and the classroom environment during these observations (Pianta, Hamre, Hayes, Mintz, & La Paro, 2007). In addition, we conducted focus group interviews with students and with teachers who had been identified as coaches or leaders of student activity groups. Students were selected for focus groups by their schools in order to include students from classes at all levels, from all grades, and with different levels of involvement. Finally, on our spring visit, we shadowed 6 tenth-grade students in each school for a day (three students each from the “higher” [accelerated/AP] and “lower” [regular] assignment tracks, who together represented the demographics of the student body) and interviewed them at the end of the school day.

We designed our data collection process to allow both the form and function of our schools’ key programs, practices, and routines to emerge from inductive analyses of fieldwork data. By collecting data from actors in multiple positions within each case study school, we were able to incorporate multiple perspectives, triangulating findings for increased credibility (Lincoln & Guba, 1985). This study draws on our interview data with both leaders and teachers in the different schools. Our interviews also probed beyond the mere existence of formal programs and routines, which often communicate only the intended practices in which faculty should engage, to understand the depth and specificity of leadership’s expectations and actions to ensure that faculty actually carried out such activities (Spillane, 2012). We asked principals and teachers questions on topics ranging from principals’ goals and visions for their schools, to how principals and other leaders provided feedback to their teachers, to specific actions that individual faculty took to get to know their students. Our questions focused not only on the formal, intended structures and policies in place but also on principals’ and teachers’ actual practices. Sample questions included the following:

Example Principal Questions:

1. What is your vision of student learning and instruction for this school?

2. How often do you observe teachers’ instruction, either formally or informally?

a. What type of feedback do you give after the observations?

b. What do you see as the purposes of these observations?

3. How often do you discuss professional needs and goals with your lead teachers?

4. What opportunities are there for teachers to grow and learn as a teacher?

5. What do you do to facilitate teachers getting to know their students as individuals?

6. Can you tell me how you interact and connect with your students? How do you get to know your students as individuals?

Example Teacher Questions:

1. How would you describe what the principal’s goals for this school are?

2. To what extent do you think teachers in this school have common ideas about what students should be learning?

3. How often are you observed, either formally or informally?

a. What type of feedback do you get from the observations?

b. To what extent do you find these observations helpful to you?

4. How often do you talk with school administrators about your professional needs or goals?

5. What types of opportunities does the school provide for you to grow as a teacher?

6. What are you doing to get to know your students as individuals?

By asking participants about what actually happened in different programs or meetings, we were able to identify how closely leaders’ and faculty members’ intended and actual practices matched. Our interviews with principals and teachers enabled us to corroborate evidence across participants to determine just how broadly certain practices or policies were followed and how engaged faculty members were in different programs in the school.

DATA ANALYSES

To anchor our work and inquiry around leadership practices and routines, we focused our study of high school leadership on the domains we summarized earlier, namely: a) creating personalized learning connections, b) providing rigorous and aligned curriculum, c) developing high-quality instruction, and d) implementing systemic data use. Existing research indicates that such broader conditions and practices in schools are key to improving student learning (Goldring et al., 2009; Preston et al., 2012).

Our analyses focused first on how leaders work to develop positive personalized learning connections; that is, schools where personalization targets both academic and social learning, where students feel strong connections to the school through classroom engagement and opportunities for involvement, and where these connections exist on a schoolwide level, with specific social and academic structures in place to support their development.

We also examined leadership routines and practices of effective data use in terms of teachers’ access to data, their capacity to use these data and act on what they learn (e.g., re-teach lessons, modify lessons based on student interim assessments), and whether there is a culture of data use in the school. We collected evidence of teachers’ use of a range of data, from in-class assessments to formative tests to end-of-course exams and grades. Our analyses did not focus on specific types of data but rather asked teachers to discuss what different data they analyzed and in what groups or processes during the school year.

Finally, in this study we examined schools’ work both to align their curricula with state standards and to provide rigorous curricula across different tracks for their students.

Interview transcripts were coded using pattern coding to identify instances of leadership across our four domains (personalized learning connections, data use, rigorous curriculum, and quality instruction), with a focus on practices and routines (Fetterman, 1989; Miles & Huberman, 1994; Yin, 1989). Our analyses used a three-phase approach with multiple coders working together. Coding in Phase 1 was used to construct and refine our conceptualization of learning-centered leadership, identify qualitative dimensions of learning-centered leadership, and develop rubrics that helped coders determine the intensity, depth, or quality of the different components or subcomponents in each school. In Phase 2, we used the refined definitions and newly identified dimensions of learning-centered leadership to recode the transcripts originally coded during Phase 1 in order to build reliability between coders. The team of researchers met weekly to reconcile their coding and come to consensus. In Phase 3, after achieving a satisfactory interrater reliability with a kappa value of .74, the triad of coders analyzed additional transcripts and observation notes, meeting weekly to share findings and discuss emerging themes. The researchers wrote memos throughout the coding process to elaborate their findings regarding the components and other themes that emerged (Corbin & Strauss, 2008) and to triangulate findings across different sources. These memos form the basis of this study and addressed the following questions:

1. How and to what degree are learning-centered leadership practices and routines around the four domains manifest (or absent) at each case study school?

2. What makes these schools unique as compared to the other schools?

3. What are the similarities and differences in learning-centered leadership practices and routines among the schools?
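The interrater reliability reported in the coding procedure above (kappa = .74) is presumably Cohen’s kappa, the standard chance-corrected agreement statistic for paired coders; as a brief sketch of the standard formulation (not detailed in the original):

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

Here $p_o$ is the observed proportion of coding decisions on which the coders agree, and $p_e$ is the proportion of agreement expected by chance given each coder’s marginal code frequencies. By the widely used Landis and Koch benchmarks, values between .61 and .80 indicate substantial agreement, consistent with the characterization of .74 as satisfactory.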

DISTRICT CONTEXT AND CASE STUDY HIGH SCHOOLS

In this section, we briefly profile the four case study schools. Table 1 provides demographic information and performance indicators for these four schools. We first discuss the district context and its influence on all the schools’ programs and policies before describing conditions in each of the schools.

Table 1. Demographic Characteristics and Performance Indicators of Case Study High Schools

| School characteristics | Boulder Star (low value-added) | Coral Reef (low value-added) | Key Lime (high value-added) | Loggerhead (high value-added) |
|---|---|---|---|---|
| Enrollment | 1,600–2,000 | 1,900–2,300 | 2,600–3,000 | 2,000–2,400 |
| Percent minority | 55–65% | <20% | 50–60% | 65–75% |
| Percent economically disadvantaged | 60–70% | 45–55% | 30–40% | 45–55% |
| Percent limited English proficient | 10–15% | 5–10% | 5–10% | 5–10% |
| 2010 graduation rate | <80% | <80% | >85% | >85% |
| 2011 school grade2 | C | A3 | B | A |

Note: The state accountability rating and graduation rate were the most recent data available at the time of school selection. Demographics represent the composition of the schools at the time of our visits (2010–2011). The value-added ranks are derived from 3 years of school-level value-added data in math, science, and reading, with 2009–2010 the most recent year.

DISTRICT CONTEXT

Broward County has been engaged in high school reform for the past 9 years and has received national recognition for its efforts to improve its chronically low-performing schools. Its high school reform goals include integrating an academic system with high standards, common curriculum and assessments across schools, and instructional supports for teachers. Specific strategies aimed toward achieving those goals include credit recovery programs, intensive skills classes, dual enrollment options for students, and weekend classes. In interviews in all four schools, faculty referenced a number of district policies or initiatives that influenced their work: the district’s common curriculum calendar that drove both content and timing of curriculum delivery, the centralized program to assign students to classes based on previous performance and test scores, an emphasis on the use of professional learning communities, an emphasis on more frequent classroom observations (brief “walkthroughs” or longer ones), and a focus during observations on classroom conditions such as common blackboard configurations (listing class goals and objectives), word walls, and the use of “do-now” activities to start lessons. While faculty in all four schools referenced these district policies, we found that school leaders implemented and supported these in different ways, and we focus on these differences. We next describe each of the schools (all referenced by pseudonyms) by offering brief summaries of their leadership structures, strategies for monitoring instruction, use of observation and student data, and students’ focus on learning.

BOULDER STAR HIGH SCHOOL: LOWER VALUE-ADDED

Over the last decade, Boulder Star’s grades in the Florida grading system have bounced from Cs to As, and during the 2010–2011 school year its grade had dipped to a C. This placed Boulder Star under “Correct II” status,4 which meant that it had been labeled a school in need of improvement for 4 or more years, had met less than 80% of adequate yearly progress (AYP) criteria in the previous year, and faced state-directed measures to improve student performance. With this status, it faced increased district and state oversight through closer monitoring of progress and support: if it did not make future progress and improve its school grade, it could also face state- and district-mandated schoolwide interventions.

The Boulder Star High School administrative team consists of the principal and four APs who meet once a week to plan for upcoming events, coordinate specific responsibilities, schedule classroom observations, and review prior data. APs are assigned to supervise 2–3 academic departments and individual grade levels (e.g., one AP focused on history, English/language arts, and ninth-grade students each year). A second leadership team, consisting of the principal, APs, department chairs, media specialist, Exceptional Student Education (ESE) specialist, and reading coach, meets every 2 weeks.

The principal commented that APs conduct most of the observations, which faculty reports corroborated, although the principal sometimes participates as well. The principal described these walkthroughs as the “backbone” of the school’s accountability efforts. For teachers at Boulder Star, accountability encompasses discussing both observation data and student performance data in regular “3D Data Chats,” where administrators work with teachers to understand, interpret, and act on student data.

Faculty described a mixed culture of learning among students: a high level of academic focus among higher performing students (such as those in honors classes) but less academic focus among lower performing students in regular classes, marked by lack of engagement or unwillingness to do homework.

CORAL REEF HIGH SCHOOL: LOWER VALUE-ADDED

Over the last several years Coral Reef High School has bounced between C and D school grades, and during the 2010–2011 school year, it too was in a Correct II status under the state accountability system.

Coral Reef’s administrative team includes the principal and four APs, who meet once a week; its leadership team consists of the principal, APs, department chairs, and four academic coaches from reading and math, who meet every 2 weeks and, among other activities, monitor walkthroughs. APs were assigned 2–3 subjects, and they supervised students based on last names, except for the ninth-graders, who were all supervised by one AP.

As at Boulder Star, Coral Reef faculty reported that administrators conduct brief, though sometimes irregular, classroom walkthroughs. Both of these schools’ teachers reported that walkthrough data were not presented individually but were used to discuss trends across multiple teachers that administrators observed. Coral Reef’s faculty reported that observation data and student performance data are used for a variety of purposes, but that there is a heavy emphasis on using this data for evaluation of teacher performance and accountability. Multiple teachers criticized administrators for offering little feedback after observations, and little support. Administrators also reported examining teachers’ grade books and test scores to hold them accountable, and the principal reported publicly posting student test scores for each teacher in an effort to motivate them to improve.

Faculty reported a weak sense of academic focus among students: many students in Honors and AP classes are unprepared for the level of rigor, and higher performing students in those classes are marginalized. They described students as having problematic behavior and poor attendance, and they tended to attribute poor student performance to students’ backgrounds, poor prior performance, or lack of effort rather than to their own instructional activities and strategies. Teachers were also highly critical of the principal’s offer of financial and field trip incentives to students to improve their performance on the FCAT because these offers were never honored for the students who did improve.

KEY LIME HIGH SCHOOL: HIGHER VALUE-ADDED

This school has received an A over the past several years, and during the 2010–2011 school year it was in Correct I status,5 which means that it had been labeled a school in need of improvement for 4 or more years but had met 80% of AYP criteria and faced district-directed (rather than state-directed) intervention.

Its administrative team is similar to those at Boulder Star and Coral Reef, consisting of the principal and four APs who meet once a week. Its leadership team includes the principal, APs, department chairs, ESE coordinator, and team leaders from its small learning communities in science, social studies, and English/language arts. The team meets once every 2 weeks, and we saw evidence of input from informal leadership beyond departmental heads, including teacher leaders and curriculum leaders. APs and counselors are both assigned to “loop” with students, working with the same cohort of students as they progress through high school, instead of working with the same grade level every year as at Boulder Star and Coral Reef.

At Key Lime, teachers report receiving both formal and informal feedback on their performance from administrators and department chairs through annual reviews, classroom walkthroughs, and data chats with administrators and other faculty. While faculty have mixed feelings about the value of classroom walkthroughs, most teachers reported receiving useful feedback from performance reviews. In general, faculty report a high frequency of data use that is central to their practice.

Teachers reported that students in the AP/Honors track were extremely motivated both in and outside the classroom, while students in regular tracks had more problems with attendance, motivation, and behavior. Multiple teachers at Key Lime followed their comments about low behavioral engagement with descriptions of how certain school structures such as looping, small learning communities, or academic advising promote personalization and allow students to receive more individual attention from faculty over an extended period of time. We return to a discussion of leaders’ work with these structures below.

LOGGERHEAD HIGH SCHOOL: HIGHER VALUE-ADDED

Loggerhead’s school grade has bounced between an A and B over the past several years, and it was in Correct II status during the 2010–2011 school year.

The school’s leadership team consists of the principal and three APs, department chairs, team leaders, and instructional coaches, and this team meets once a week, but we did not see evidence of an administrative team as in the other schools. Similar to the other schools, APs are assigned to supervise both departments and individual grades, but as in the lower value-added schools, they do not loop with their students.

Faculty reported that administrators conduct regular classroom observations and hold quarterly discussions with teachers about their observations. Unlike faculty at the other three schools, faculty at Loggerhead characterized accountability as including test scores but also emphasizing factors such as professional conduct, punctuality, specific instructional practices, and demonstrable concern for students; teachers and the principal both referenced these additional criteria. In addition to classroom observations, the principal reported observing teacher meetings as part of the accountability system as well.

Participants reported high expectations for faculty and adult actors in the school, but mixed expectations for students, specifically lower expectations for low-performing and/or low-SES students. Some faculty also reported their concerns about their own ability to meet the social and academic needs of the lowest performing students. In describing these concerns, however, many Loggerhead faculty identified student performance as a reflection of their own performance as instructors, while also expressing a need for parents and students to accept a greater share of the responsibility.

From these descriptions of leadership structures and processes, we turn to describing the differences between the intended and practiced routines of leadership in higher and lower value-added high schools.

RESULTS

We identified not only evidence of specific routines but also how well their intended purposes matched their actual implementations. We found differences between these higher and lower value-added schools both in terms of leaders’ conceptions of the intended routines (those ideal policies that faculty are to carry out) and in their attention to implementation, through closer examination of faculty members’ actual actions or their directed support for faculty members’ practices. Our findings focus on two primary themes that characterize differences in the practices between lower and higher value-added high schools. First, leaders in higher value-added high schools for at-risk students are more involved in, intentional about, and attentive to how their ideal/intended routines are implemented, thus ensuring that teachers’ actual practices are changed. They focus on how these routines provide ongoing monitoring and feedback for their faculty to build and improve teachers’ quality instruction, alignment of curriculum, and systems of support for students. Second, higher value-added school leaders provided more targeted, systemic efforts to support personalized learning for students. We provide a series of contrasting cases, starting with lower and then moving to higher value-added schools, to illustrate and discuss how differences in principals’ practices cut across multiple programs to influence the extent and quality of their implementation. While these cases illustrate leadership differences in the two sets of schools we studied, we do not argue that these differences fully explain the range in student achievement and outcomes between the groups. As we discuss in the findings, our results highlight the need to further examine specific qualities in leaders’ work that may create school conditions that are more conducive to student success.

GREATER ATTENTION TO THE INTENDED AND PRACTICED ROUTINES THAT SUPPORT HIGH-QUALITY INSTRUCTION AND RIGOROUS, ALIGNED CURRICULUM

In higher value-added schools, we find evidence that leaders were more attentive to and involved in both the intended and practiced routines they used to support teachers’ instruction. This greater attention and involvement are evident through faculty discussions of school leaders’ more detailed conceptions of instruction and curriculum, intentions to support higher quality curriculum and instruction, and the higher priority leaders give to teacher observations by providing specific observational data and conferences to review those data.

Varying Conceptions of Leaders’ Intended Routines

In lower value-added schools (Boulder Star and Coral Reef), we found evidence for leaders’ more superficial understandings of what activities they needed to engage in to support teachers’ instruction and alignment of curriculum. When describing the content and focus of observations by administrators, Boulder Star’s principal reported that they focus on district-recommended strategies such as word walls and common blackboard configurations (such as listing class objectives), but he provided little beyond these descriptions of what was important to identify or analyze during observations. Teacher comments suggested they were unsure about leadership’s goals for instructional improvement and that leaders were looking for rote requirements in observations, rather than more substantive elements of strong instructional practice. One teacher expressed confusion regarding the specifics of the principal’s vision of learning and communication of priorities to staff.

I am not exactly sure what his particular goals are, like when it comes to figures and statistics. I know he wants us to start to really get the kids to pass the FCAT, more of the kids to pass the FCAT, because I think we were a little bit below last year. We weren’t making the standard. I don’t know what the standard is, how many kids are supposed to pass it within a school, but I think I was told that we weren’t making the standard. We need to raise the bar with our instruction on the FCAT.

As with Boulder Star, Coral Reef’s principal offered limited evidence of a more complex conceptualization of the ideal role that leaders needed to play in observing curriculum and instruction. The principal described administrators’ need to look for Marzano’s high-yield instructional strategies in their observations but provided few, if any, further details or longer discussion regarding how recent district recommendations for instruction, such as bell-to-bell instruction and “do-now” or “bell-ringer” activities to start classes, are emphasized. Thus, while the principal offered key current catchphrases regarding the content of his observations (e.g., Marzano’s strategies), further discussion provided little, if any, evidence of a deeper understanding of strong instructional practice, suggesting instead that leadership relied heavily on district-mandated observation priorities, such as displayed objectives, that are not always closely related to instruction. Multiple teachers questioned the value of the observations and feedback; the strongest evidence of a disconnect between leaders’ intended actions and their actual practices to support curriculum and instruction came from one teacher who commented that “lip service is paid to higher order thinking and high levels of thinking…(but) I’m not sure it’s supported… I think the attempt of what we want to do is there. I don’t think we are in sync with everyone doing what we should be doing.”

In contrast to Boulder Star and Coral Reef, multiple sources at one of the higher value-added schools, Loggerhead High School, described administrators’ and department chairs’ roles in ways that evidenced more complex conceptions of their intended practices to support quality instruction and rigorous curriculum. The more elaborate roles that department chairs play at Loggerhead are a key distinction from the lower value-added schools and one avenue where administrators’ attention to practical steps and implementation is evident: department heads share in the responsibility for supporting instructional improvement. Their comments illustrate a more detailed understanding of the chairs’ role in supporting teachers, rather than merely completing administrative work such as course scheduling or distributing curricular materials to their departments. One Loggerhead department chair reported that she focuses on teachers who need help, such as new teachers who “don’t know how to teach,” and that she often helps teachers with “techniques to engage students.” She determines which teachers need help by reviewing their lesson plans once a semester or by talking to APs. A second department chair described herself as “the first line of defense” to provide help if she saw a struggling teacher. Loggerhead’s principal offered additional evidence of a more complex conception of teacher observations by elaborating how he looked for a “high level of rigor” composed of “ambitious content, high cognitive demand that students are carrying” in their classes. Evidence from the Loggerhead High School principal and department chair interviews thus illustrates leaders’ more complex understanding of their formal, intended roles and routines to support instruction and curriculum.

Variations in Leaders’ Practiced Routines: Providing Specific Feedback to Guide Teachers’ Higher Quality Instruction

In regard to administrators’ observations and conferences with teachers, leaders in all four schools self-reported higher frequencies of observations than did their teachers, but teachers in higher value-added schools describe key differences in leaders’ practiced routines of following up on the observations, such as conferences/discussions and specific steps to provide support to teachers. First, leaders in higher value-added schools followed up more consistently with teachers to discuss the content of observations through conferences or brief meetings after their observations. Second, while participants in all four schools discussed having access to multiple forms of data (such as student achievement, attendance, and observation results), leaders in higher value-added schools provided more specific, actionable feedback for teachers to use to inform their own practice and improve student performance. In these, leaders use a wider range of data to give teachers more detailed evaluations and appraisals of their work, and they did so while encouraging faculty to engage in ongoing discussions of data around how to improve students’ performance and school conditions.

For example, at Boulder Star High School, multiple teachers testified to having numerous classroom walkthroughs and were broadly positive about these visits. However, when asked about provision of feedback or the use of data, leaders and teachers alike more frequently referred to the “3D Data” chats that APs led with groups of teachers every 3 weeks, not to feedback from observations. They reported that feedback was primarily offered by leadership as various random issues arose or as part of the annual evaluation; this feedback is “minimal at best” according to one teacher. When pressed for more details on the content of the feedback and support that leaders provided, one Boulder Star teacher remained vague: “They support you. They give you a format. They give you the tools, and they are there for you. You have the knowledge, and they give you the—they give you, how do you say, the supplies that you need.” This view contrasts with that of administrators, who report conducting frequent walkthroughs and giving “constant” feedback to their teachers about instruction, indicating that these intended routines may not have been as thoroughly implemented or practiced at Boulder Star. Taken together, these comments provide evidence of a number of conditions in low value-added schools: infrequent discussions with teachers, feedback that is nonspecific, references to few sources of data from observations, and a disparity between leadership’s and teachers’ views on the utility of walkthroughs and feedback.

Similarly, at Coral Reef High School, administrators described a process of classroom observations and feedback; however, teachers criticized them for providing little or no feedback after observations and little instructional support and instead being “concerned only with [test] scores.” One teacher focus group described receiving limited feedback or follow-through by administrators, such as providing professional development they had recommended, again suggesting that instructional leadership routines remained at a formal or intended level in lower value-added schools. After one teacher commented that there was little follow-through on principal-recommended training to use more technology in the classroom, a second teacher replied, “at least you got feedback. I have never gotten feedback.” Only a few teachers discussed reviewing any data other than test scores with their administrators. Further, teachers reported that classroom walkthroughs were conducted only intermittently and suggested that observations may be occurring only because they are “forced by the district.” Administrators described providing informal feedback to teachers if they felt it was necessary, with no elaboration of goals for more consistent reviews or discussions.

In contrast, the leadership in higher value-added schools provided more consistent feedback and focused on data beyond just standardized test scores. At Key Lime High School, teachers reported receiving both formal and informal feedback on their performance from administration and department heads through annual reviews, more frequent classroom walkthroughs, data chats, and memos. While some teachers at Key Lime offered mixed accounts of the value and frequency of the shorter classroom walkthroughs, teachers primarily reported receiving useful feedback for their instruction from their performance reviews. Multiple teachers credited such feedback as informative for specific changes in their instruction during the year, and many described being engaged in meaningful, ongoing discussions of their student data throughout the year, in one-on-one meetings as well as in professional learning communities. At Loggerhead High School, administrators each year scheduled quarterly “one-on-one” data chats with teachers where they reviewed teachers’ student performance data, what they had seen in walkthroughs and longer observations, and their lesson plans, focusing more on factors such as professional conduct, punctuality, specific instructional practices, and demonstrable concern for students than on test scores. For leadership, multiple data sources were more useful in advising teachers’ practices and were key to making “a school click when it comes to performance outside.” Multiple department chairs also corroborated the timing and content of these quarterly one-on-one meetings, offering stronger evidence of a higher frequency of follow-up meetings at Loggerhead. One administrator detailed using both the data and follow-up conversations with teachers as guides for directing their department chairs and/or coaches to provide specific content or instructional support.

On the whole, we see evidence that leadership in higher value-added schools has more detailed conceptualizations of the observation and feedback cycle for their teachers and of the importance of multiple forms of data that they can use in providing feedback to teachers. Furthermore, these intended routines translated into observations that have been implemented more widely and consistently. In lower value-added schools, leadership’s conceptualization of data use for instructional improvement is less developed, and the disconnects we found between leadership’s and teachers’ descriptions of the frequency of observation and feedback indicate that leadership may struggle to implement these routines in actuality. From here, we turn to the fourth domain, the role of leadership in promoting personalized learning connections.

TARGETED EFFORTS TO BUILD PERSONALIZED LEARNING CONNECTIONS WITH AND FOR STUDENTS

As previously summarized, the domain of personalized learning connections focuses on opportunities for teachers to provide more individual attention to students and discuss their unique experiences both in and out of school. Ranging from sports to extracurricular clubs and programs to in-class programs and lunchtime conversations, such activities allow adults to know their students more closely (Lee, Bryk, & Smith, 1993; Lee & Smith, 1999; McLaughlin, 1994) and to foster students’ sense of connection to the school (Walker & Greene, 2009).

Our analyses focused on evidence of leaders’ involvement in programs and practices aimed at developing these personalized learning connections, in an effort to identify how leaders’ guidance of or support for personalized connections differed between higher and lower value-added schools. The differences that we found centered on leaders’ attention to the broader routines that promote a larger number of adult–student connections: while leadership in lower value-added high schools more often emphasized their own or others’ individualized efforts to connect with students, such as lunchroom discussions, leaders in higher value-added schools more often described broader policies or programs they had implemented and maintained that helped to more systematically connect adults with students.

Of the lower value-added schools, Boulder Star’s leadership offered more extensive evidence of leaders’ individualized strategies to promote connections. Boulder Star’s principal and one AP spent more time elaborating on how they made individual efforts to get “out and about” and talk with students in the halls and to participate in events such as dress-up days to help students see them in a different light. The AP commented on the importance of “being out there so students see me, knowing that we are just not people that sit in our office.” Other faculty members corroborated these accounts. One teacher described how the assistant principals had staged a “paint the AP” event during lunchtime to connect more with students. One of the APs described how the most important thing for her to do to ensure students’ success was

being their mother or father here on campus…It’s being an extension of what they may be getting here on campus, but a lot of times aren’t…students have to see you are human. They must understand you are a human being…You have to build that connection with your kids. I don’t know if it’s school-wide, I just think that would be more on an individual basis.

Coral Reef’s principal discussed efforts that also emphasized individual efforts to build relationships: he described his own work to be “visible” to students through conversations, along with his directions to APs to be in the hallways frequently. The principal described having started a mentoring program to target ninth- and tenth-graders in the lowest percentiles of performance and “personalize the experience” that different students have in school, but he offered few specifics for the program or any evidence that he or other leaders devoted much time to it, indicating that intended routines had not been translated into actual practice. An AP later reported that the program had been eliminated due to budget cuts. When asked what administrators were doing to support better student connections, one department chair’s response suggested that some faculty saw little evidence that administrators were engaged, due to their focus on accountability pressures:

Nothing. Nothing. Administration is so overwhelmed with this FCAT, and the school grade, and we got to up our scores with the AP kids, they don’t have time to make sure there is a connection. They are not doing it intentionally. They just don’t have the time. They don’t. I would say nothing. Then they wonder why attendance is going down. I tell them, why should a kid come to school every day if there is nothing else but preparing them for FCAT. That’s all we are talking about. We are not talking about pep rallies. We are not talking about having any activities, besides what’s in the textbook. We don’t have any guest speakers come out. We don’t have student assemblies. We don’t celebrate Women’s History month, Black History month, Jewish history…

Thus in these accounts from lower value-added schools, we see not only that school leaders focused on individualized strategies to build personalized connections with students, but also that these leaders espoused certain intentions for building personal learning connections without enacting or implementing the broader routines, such as mentoring programs or school celebrations, that could connect students more closely to their schools.

This evidence that administrators in lower value-added schools more often relied on individual practices to connect with students contrasts with evidence from the two higher value-added schools, where leaders and faculty indicated that school leaders focused on more systemic routines and programs to build student-adult connections. This reliance on individuals developing relationships with students in lower value-added schools, rather than the systemic efforts we see in higher value-added schools, parallels differences we see in the implementation of observational routines: in the lower value-added schools, principals and APs bear the primary responsibility for observation, while in the higher value-added schools that responsibility is more distributed, extending to department heads as well.

Discussions with Key Lime’s principal (higher value-added) offer the strongest example of this. He first noted the centrality of these connections to the school’s success:

And, the reason we have made the A’s is because of the sense of personalization…They loop. 9th and 10th loop…An administrator, guidance counselor, and two academic teachers, an English and social studies teacher, are looping with these kids…So this whole idea– I keep coming back to personalization, knowing the kids, knowing their background, and creating a sense of family I think goes a long way.

He then detailed specific changes he had made for ninth and tenth grades so that students and teachers stay together for more than one year, including modifications to the master class schedule and the co-location of administrative and counselor offices and classrooms for each grade level in the same area of the school: under his direction, formal, intended routines had actually been implemented. Other leaders were aware of the importance of these looping structures as well: one AP spoke of refining looping so that staff connect with both students and their parents: “all of us rotate and stay with a cohort of kids until they graduate, this is to increase the level of personalization not only with the students, but the parents as well.” Faculty also valued these looping structures and described their impacts, together with the resource modifications necessary for their implementation. One teacher echoed how these small learning communities were central to the school’s success:

I find them critical to our success here…Speaking about the strength (of the school) question, I would be remiss if I didn’t mention that the way that we personalize education here I think is amazing. There is the sense of community here that is palpable. You can feel it.

Comments by other faculty highlighted the importance of different programs to connecting with students: “On campus, we have a lot of clubs and that’s important to students because they have that teacher—they have asked that teacher, that they have a relationship with, to be their club sponsor, so they get exposed to being with a teacher other than teaching. So they see the interaction, normal interaction.”

Loggerhead High School also used looping, and a reading program established with its feeder middle schools provided especially strong evidence of the faculty’s systematic efforts to connect with their students. The principal detailed how faculty visited the feeder middle schools to meet incoming freshmen as eighth graders, to introduce the school, and to invite them to participate in a reading program in which they would meet in smaller groups during their freshman year to discuss a book. Efforts such as these helped faculty make early connections with incoming students. The principal and one AP also described a program in which administrators and teachers worked together throughout the year to identify particular student groups (e.g., those in the lowest 30% of achievement or those with excessive absences) and meet with these groups to discuss their academic work as well as their personal experiences at the school and life issues they might be confronting. When asked about the motivation behind the program, the AP attributed it to “Personalization. Day in and day out personalization.” Finally, administrators at Loggerhead elaborated on how a lead content teacher had engaged students in this more systemic approach to personalization and school connections by working closely with the student government to brainstorm and provide opportunities, such as guest speakers and pep rallies during lunch and after school, for other students and faculty to come together to build school spirit. These efforts served both to engage students in developing the programs and to provide other students with activities that helped them feel more a part of the school.

DISCUSSION

This paper has examined the notion that leadership in effective high schools is defined by engaging in and supporting articulated routines and practices that are pervasive and permeate all aspects of the school, rather than the implementation of any particular set of programs (such as ninth-grade academies). Our findings highlight leadership work by multiple actors to improve instruction and learning. Our analyses of learning-centered leadership highlighted data relating to traditional leaders’ (principals and APs) as well as other leaders’ (e.g., department chairs, teacher leaders) practices within their schools. By considering leadership in terms of intended actions and policies as well as others’ discussions of actual or performative actions and policies, we were able to look beyond leaders’ own descriptions of their roles and practices to examine others’ accounts of what strategies and/or practices actually helped distinguish leadership in more and less effective schools. We present evidence that in higher value-added high schools, leaders’ practices and routines were better matched to their intentions to support their faculty’s practices in quality instruction, systematic use of data, rigorous and aligned curriculum, and building personalized learning connections.

As we have discussed, leaders in higher value-added high schools differed from their counterparts in lower value-added high schools in three key ways. First, they described more complex conceptions of their own intended roles of observation and feedback to support teachers’ quality instruction. These leaders provided more detailed summaries of what they looked for in their observations and what information they provided in their feedback. Teachers’ comments in both sets of schools corroborated these accounts: those in higher value-added schools reported how leaders’ input had led to changes or improvements in their instruction, while some in lower value-added schools questioned the value of leaders’ feedback.

Second, leaders in higher value-added schools exhibited systemic data use practices that involved a wide variety of data and had become ingrained routines (Ingram et al., 2004; Schildkamp & Visscher, 2010; Spillane, 2012). They used multiple forms of data to provide more frequent, specific feedback and to engage teachers in ongoing reviews and discussions of their students’ progress. In this domain we found evidence of how leaders in higher value-added schools combined the data analyses with data from observations to create more coherent, ongoing discussions about instruction that included both group or team reviews of data along with their individual feedback to teachers.

Finally, leaders in higher value-added high schools focused on establishing more systemic routines in the form of broader programs and systems (such as looping, a freshman reading program, and club activities) that provided greater, more widespread opportunities for faculty to build personalized learning connections. Their focus on these routines included attention to various resources (such as the location of classrooms or administrator offices) that helped to ensure that the programs had an impact on teachers’ interactions with one another and with students, all around the shared goal of connecting with their students. In effect, higher value-added school leaders’ careful attention to the implementation and details of the formal, intended routines of the personalized learning connections initiatives helped to change teachers’ routines such that they actually did connect more frequently with students on topics that included both academics and broader interests (such as club activities), increasing students’ likelihood of graduation (Crosnoe et al., 2004; Peck, Roeser, Zarrett, & Eccles, 2008). This evidence came in the form of both leaders’ discussions of the programs and teachers’ descriptions of more frequent engagement with individual students through those programs.

Across the four domains of schooling that were the focus of this paper (quality instruction, rigorous and aligned curriculum, systemic use of data, and personalized learning connections), we find evidence that leaders’ careful attention to planned routines in higher value-added schools often helped to ensure that those routines were carried out and enacted with greater fidelity by faculty: staff’s actual changes in practice more closely matched the intended changes. Our findings illustrate that a deeper understanding of leaders’ work to support key routines in their schools, when analyzed to compare intended and actual implementation rather than viewed as discrete practices (Feldman & Pentland, 2003; Spillane et al., 2011; Spillane, 2012), can help to identify how leaders’ more detailed conceptions of their responsibilities and roles inform both their own actions and the routines and programs that they implement. Instead of referring broadly to district mandates, leaders in the higher value-added schools emphasized the value of program and policy details for success, and they made these details more explicit for staff when setting expectations and following up on implementation. This paper also demonstrates the importance of examining both the intentions behind and the actual implementation of organizational routines when studying school improvement. Analyses such as these not only help to determine how new programs or policies affect practice, such as day-to-day instruction or time spent connecting with students more personally, but also reveal that school leaders’ deliberate attention to routines, such as the use of data, can influence multiple areas of school effectiveness.

Our findings also inform the larger field of high school improvement in a number of ways. Just as the results emphasize the need to pay attention to the formal or intended dimensions of routines, they also point to the need to examine the actual implementation of different programs such as career academies, ninth-grade academies, or AVID. In this study’s higher value-added high schools, leaders’ practices both ensured greater alignment of staff resources (time and materials) around specific goals, such as building personalized learning connections, and more closely supported improvements in teachers’ practices (through actions such as giving more detailed feedback on instruction). Such practices by effective leaders could certainly apply not only to specific routines but also to the implementation of broader programs and/or comprehensive school reform models. These findings point to the key roles that principals can play not only in aligning and connecting different resources in their schools but also in providing guided support for teachers’ changes to or improvements of their practices.

These results also point to the need for refining our understanding of just what specific leadership practices matter most in improving student achievement in high schools. Grissom, Loeb, and Master (2013) have found that specific practices such as teacher coaching, evaluation, and developing a school’s educational program positively predict achievement gains, while principals’ time spent on brief, informal classroom walkthroughs may actually be negatively associated with achievement gains. They call for more study and definition of what specifically comprises effective instructional leadership in different contexts. Similarly, teachers in our higher value-added schools described their principals’ more detailed provision of useful feedback to inform and guide their improved practices; these results offer support for Grissom et al.’s findings that principals must connect their observations and walkthroughs to longer discussions and a coherent vision for improved instruction.

Finally, Horng et al. (2010) present evidence that other organizational management work (e.g., hiring personnel, managing budgets and resources) is key to raising student achievement in high schools. These activities fall outside of more recent conceptualizations of instructional leadership and point to the broader organizational roles that principals play in their schools. Similarly, we found that principals’ careful allocation and alignment of resources were key to the success of programs (for such priorities as improving personalized learning connections), and such work falls outside the scope of recent calls for principals to focus heavily on the teaching and learning dimensions of their schools. As the field deepens its understanding of effective school leadership, there is a need to develop both more specific conceptions of instructional leadership practices as well as broader views of the systemic impacts that principals can have on their schools.

Notes

1. Identifying reference provides additional technical detail on the specific value-added model and the selection criteria for these high schools.

2. 2010–2011 Florida School grades for high schools are derived from a combination of FCAT scores, learning gains, graduation rates, accelerated coursework, and SAT/ACT scores. Yearly guides to calculating school grades are available from http://schoolgrades.fldoe.org/reports/.

3. Because school grades include a wide variety of metrics and are largely driven by achievement levels and graduation rates rather than growth, school grades and value-added measures are not strongly correlated. Additionally, to meet the criteria for an “A” grade, a school must only achieve 62% of the available points.

4. Correct II status indicates that the school has a grade of at least C, has not made adequate yearly progress (AYP) for more than 4 preceding years in a row, and met less than 80% of its AYP criteria in the previous school year.

5. Correct I status indicates that the school has a grade of at least C, has not made AYP for more than 4 preceding years in a row, but met at least 80% of its AYP criteria in the previous school year.

References

Adams, J. E., & Kirst, M. (1999). New demands for educational accountability: Striving for results in an era of excellence. In J. Murphy & K. S. Louis (Eds.), Handbook of research in educational administration (2nd ed., pp. 463–489). San Francisco, CA: Jossey-Bass.

Allen, J., Gregory, A., Mikami, A., Lun, J., Hamre, B., & Pianta, R. (2013). Observations of effective teacher-student interactions in secondary school classrooms: Predicting student achievement with the Classroom Assessment Scoring System—Secondary. School Psychology Review, 42(1), 76–98.

Allen, J., Hafen, C., Gregory, A., Mikami, A., & Pianta, R. (2015). Enhancing secondary school instruction and student achievement: Replication and extension of the My Teaching Partner—Secondary intervention. Journal of Research on Educational Effectiveness, 8(4), 475–489.

Alper, L., Fendel, D., Fraser, S., & Resek, D. (1997). Designing a high school mathematics curriculum for all students. American Journal of Education, 148–178.

Anderman, E. M. (2002). School effects on psychological outcomes during adolescence. Journal of Educational Psychology, 94(4), 795–808.

Ascher, C. (1988). Urban school-community alliances. New York, NY: ERIC Clearinghouse on Urban Education.

Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 37–65.

Becker, B. E., & Luthar, S. S. (2002). Social-emotional factors affecting achievement outcomes among disadvantaged students: Closing the achievement gap. Educational Psychologist, 37(4), 197–214.

Berends, M. (2000). Teacher-reported effects of new American school designs: Exploring relationships to teacher background and school context. Educational Evaluation and Policy Analysis, 22(1), 65–82.

Boaler, J. (2008). Promoting “relational equity” and high mathematics achievement through an innovative mixed‐ability approach. British Educational Research Journal, 34(2), 167–194.

Branch, G., Hanushek, E., & Rivkin, S. (2013). School leaders matter. Education Next, 13(1), 63–69.

Brewer, D. (1993). Principals and student outcomes: Evidence from U.S. high schools. Economics of Education Review, 12(4), 281–292.

Brown, G. T. (2008). Conceptions of assessment: Understanding what assessment means to teachers and students. Nova Science Publishers.

Brown, K. M., Benkovitz, J., Muttillo, A. J., & Urban, T. (2011). Leading schools of excellence and equity: Documenting effective strategies in closing achievement gaps. Teachers College Record, 113(1), 57–96.

Chatterji, M. (2005). Achievement gaps and correlates of early mathematics achievement: Evidence from the ECLS-K-first grade sample. Education Policy Analysis Archives, 13(46).

Cohen, D. K., & Ball, D. L. (2001). Making change: Instruction and its improvement. Phi Delta Kappan, 83(1), 73–77.

Cook, M., & Evans, W. N. (2000). Families or schools? Explaining the convergence in white and black academic performance. Journal of Labor Economics, 18, 729–754.

Copland, M. A. (2003). Leadership of inquiry: Building and sustaining capacity for school improvement. Educational Evaluation and Policy Analysis, 25(4), 375–395.

Corbin, J. M., & Strauss, A. L. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory. Los Angeles, CA: Sage Publications, Inc.

Crosnoe, R., Johnson, M. K., & Elder, G. H. (2004). Intergenerational bonding in school: The behavioral and contextual correlates of student-teacher relationships. Sociology of Education, 77(1), 60–81.

Crum, K., & Sherman, W. (2010). Best practices of successful elementary school leaders. Journal of Educational Administration, 48(1), 48–63.

Danielson, C. (2013). The framework for teaching evaluation instrument, 2013 edition. Retrieved from http://www.salemschools.com/uploads/file/Forms/2013-framework-for-teaching-evaluation-instrument.pdf

Davison, M. L., Seo, Y. S., Davenport, E. C., Butterbaugh, D., & Davison, L. J. (2004). When do children fall behind? What can be done? Phi Delta Kappan, 85(10), 752–761.

Domina, T. (2009). What works in college outreach: Assessing targeted and schoolwide interventions for disadvantaged students. Educational Evaluation and Policy Analysis, 31(2), 127–152.

Every Student Succeeds Act, Pub. L. 114-95, 129 Stat. 1802 (2015).

Feldman, M. S., & Pentland, B. T. (2003). Reconceptualizing organizational routines as a source of flexibility and change. Administrative Science Quarterly, 48(1), 94–118.

Fernandez, K. E. (2011). Evaluating school improvement plans and their effect on academic performance. Educational Policy, 25(2), 338.

Fetterman, D. M. (1989). Ethnography: Step by step. Newbury Park, CA: Sage Publications.

Fuhrman, S., & Elmore, R. (2004). Redesigning accountability systems for education. New York, NY: Teachers College Press.

Gamoran, A., Porter, A. C., Smithson, J., & White, P. A. (1997). Upgrading high school mathematics instruction: Improving learning opportunities for low-achieving, low-income youth. Educational Evaluation and Policy Analysis, 19(4), 325–338.

Goldring, E. B., Huff, J. T., May, H., & Camburn, E. (2008). School context and individual characteristics: What influences principal practice? Journal of Educational Administration, 46(3), 332–352.

Goldring, E. B., Porter, A. C., Murphy, J., Elliott, S., & Cravens, X. (2009). Assessing learning-centered leadership: Connections to research, professional standards, and current practices. Leadership and Policy in Schools, 8(1), 1–36.

Grigg, W., Donahue, P., & Dion, G. (2007). The nation’s report card: 12th-grade reading and mathematics 2005 (NCES 2007-468). Washington, DC: National Center for Education Statistics.

Grissom, J., Kalogrides, D., & Loeb, S. (2013). Principal time management skills: Explaining patterns in principals’ time use and effectiveness (Working Paper).

Grissom, J., & Loeb, S. (2011). Triangulating principal effectiveness: How perspectives of parents, teachers, and assistant principals identify the central importance of managerial skills. American Education Research Journal, 48(5), 1091–1123.

Grissom, J., Loeb, S., & Master, B. (2013). Effective instructional time use for school leaders: Longitudinal evidence from observations of principals. Educational Researcher, 42(8), 433–444.

Hallinan, M. (2008). Teacher influences on students’ attachment to school. Sociology of Education, 81(3), 271–283.

Hallinger, P., & Heck, R. H. (1996). Reassessing the principal’s role in school effectiveness: A review of empirical research, 1980–1995. Educational Administration Quarterly, 32(1), 5–44.

Hallinger, P., & Heck, R. H. (2011). Conceptual and methodological issues in studying school leadership effects as a reciprocal process. School Effectiveness and School Improvement, 22(2), 149–173.

Halverson, R., & Clifford, M. (2013). Distributed instructional leadership in high schools. Journal of School Leadership, 23(2), 389–418.

Harris, D. N. (2011). Value-added measures in education. Cambridge, MA: Harvard Education Press.

Hill, H. C., Kapitula, L., & Umland, K. (2011). A validity argument approach to evaluating teacher value-added scores. American Educational Research Journal, 48(3), 794–831.

Horng, E. L., Klasik, D., & Loeb, S. (2010). Principal time-use and school effectiveness. American Journal of Education, 116(4), 492–523.

Huffman, D., & Kalnin, J. (2003). Collaborative inquiry to make data-based decisions in schools. Teaching and Teacher Education, 19(6), 569–580.

Ingram, D., Seashore Louis, K., & Schroeder, R. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106(6), 1258–1287.

Jacobs, K., & Kritsonis, W. (2006). An assessment of secondary principals’ leadership behaviors and skills in retaining and renewing science educators in urban schools. National Journal for Publishing and Mentoring Doctoral Student Research, 3(1).

Kemple, J., Herlihy, C., & Smith, T. (2005). Making progress toward graduation: Evidence from the talent development high school model. New York, NY: MDRC.

Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvement: Actions, outcomes, and lessons from three urban districts. American Journal of Education, 112, 496–520.

Klar, H., & Brewer, C. (2013). Successful leadership in high-needs schools: An examination of core leadership practices enacted in challenging contexts. Educational Administration Quarterly, 49(5), 768–808.

Lee, V. E., Bryk, A. S., & Smith, J. B. (1993). The organization of effective secondary schools. Review of Research in Education, 19, 171–267.

Lee, V. E., & Burkam, D. T. (2003). Dropping out of high school: The role of school organization and structure. American Educational Research Journal, 40(2), 353–393.

Lee, V. E., & Ready, D. D. (2007). Schools within schools: Possibilities and pitfalls of high school reform. New York, NY: Teachers College Press.

Lee, V. E., & Smith, J. (1995). Effects of high school restructuring and size on early gains in achievement and engagement for early secondary school students. Sociology of Education, 68(4), 241–270.

Lee, V. E., & Smith, J. B. (1999). Social support and achievement for young adolescents in Chicago: The role of school academic press. American Educational Research Journal, 36(4), 907–945.

Leithwood, K., & Riehl, C. (2005). What we know about successful school leadership. In W. Firestone & C. Riehl (Eds.), A new agenda: Directions for research on educational leadership. New York, NY: Teachers College Press.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications, Inc.

Little, J. W. (1982). Norms of collegiality and experimentation: Workplace conditions of school success. American Educational Research Journal, 19(1), 325–340.

Louis, K. S., Leithwood, K., Wahlstrom, K., & Anderson, S. (2010). Investigating the links to improved student learning: Final report of research findings. Retrieved from http://www.wallacefoundation.org/KnowledgeCenter/KnowledgeTopics/CurrentAreasofFocus/EducationLeadership/Documents/Learning-from-Leadership-Investigating-Links-Final-Report.pdf

Luo, M. (2008). Structural equation modeling for high school principals’ data-driven decision making: An analysis of information use environments. Educational Administration Quarterly, 44(5), 603–634.

Maxwell, N. L., & Rubin, V. (2002). High school career academies and post-secondary outcomes. Economics of Education Review, 21(2), 137–152.

McLaughlin, M. W. (1994). Somebody knows my name. In Issues in restructuring schools (Issue Report No. 7, pp. 9–12). Madison, WI: University of Wisconsin-Madison, School of Education, Center on Organization and Restructuring of Schools. (ERIC Document Reproduction Service No. ED 376 565).

McLaughlin, M., & Talbert, J. E. (1993). Contexts that matter for teaching and learning: Strategic opportunities for meeting the nation’s educational goals. Stanford, CA: Stanford University, Center for Research on the Context of Secondary School Teaching.

Mediratta, K., & Fruchter, N. (2001). Mapping the field of organizing for school improvement: A report on education organizing in Baltimore, Chicago, Los Angeles, The Mississippi Delta, New York City, Philadelphia, San Francisco, and Washington, DC. New York, NY: New York University, Institute for Education and Social Policy.

Meyer, R. H. (1997). Value-added indicators of school performance: A primer. Economics of Education Review, 16(3), 283–301.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage.

Miller, L. S. (1995). An American imperative: Accelerating minority educational advancement. New Haven, CT: Yale University Press.

Mintrop, H., & Trujillo, T. (2004). Corrective action in low-performing schools: Lessons for NCLB implementation from state and district strategies in first-generation accountability systems. Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing.

Muller, C., Riegle-Crumb, C., Schiller, K. S., Wilkinson, L., & Frank, K. A. (2010). Race and academic achievement in racially diverse high schools: Opportunity and stratification. Teachers College Record, 112(4), 1038–1063.

Murphy, J. (1988). Methodological, measurement, and conceptual problems in the study of instructional leadership. Educational Evaluation and Policy Analysis, 10(2), 117–139.

Murphy, J., Beck, L. G., Crawford, M., Hodges, A., & McCaughy, C. L. (2001). The productive high school: Creating personalized academic communities. Thousand Oaks, CA: Corwin Press.

Murphy, J. F., Goldring, E. B., Cravens, X. C., Elliott, S. N., & Porter, A. C. (2007). The Vanderbilt assessment of leadership in education: Measuring learning-centered leadership. East China Normal University Journal, 29(1), 1–10.

Murphy, J., Hallinger, P., & Mesa, R. P. (1985). School effectiveness: Checking progress and assumptions and developing a role for state and federal government. Teachers College Record, 86(4), 615–641.

Nasir, N. S., Jones, A., & McLaughlin, M. (2011). School connectedness for students in low-income urban high schools. Teachers College Record, 113(8), 1755–1793.

Nettles, S. M., & Herrington, C. (2007). Revisiting the importance of the direct effects of school leadership on student achievement: The implications for school improvement policy. Peabody Journal of Education, 82(4), 724–736.

No Child Left Behind Act of 2001, Pub. L. 107-110, 115 Stat. 1425 (2002).

Peck, S., Roeser, R., Zarrett, N., & Eccles, J. (2008). Exploring the roles of extracurricular activity quantity and quality in the educational resilience of vulnerable adolescents: Variable- and pattern-centered approaches. Journal of Social Issues, 64(1), 135–156. doi:10.1111/josi.2008.64.issue-1

Pianta, R. C., Hamre, B. K., Haynes, N. J., Mintz, S. L., & La Paro, K. M. (2007). Classroom assessment scoring system: CLASS-secondary manual.

Pianta, R. C., Hamre, B. K., & Mintz, S. (2011). Classroom assessment scoring system: Secondary manual.

Portin, B., Russell, F. A., Samuelson, C., & Knapp, M. (2013). Leading learning-focused teacher leadership in urban high schools. Journal of School Leadership, 23(2), 220–235.

Preston, C., Goldring, E., Berends, M., & Cannata, M. (2012). School innovation in district context: Comparing traditional public schools and charter schools. Economics of Education Review, 31(2), 318–330.

Provasnik, S., Gonzales, P., & Miller, D. (2009). U.S. performance across international assessments of student achievement: Special supplement to the Condition of Education 2009 (NCES 2009-083). Washington, DC: National Center for Education Statistics.

Purkey, S. C., & Smith, M. S. (1983). Effective schools: A review. Elementary School Journal, 83, 427–452.

Quinn, D. M. (2002). The impact of principal leadership behaviors on instructional practice and student engagement. Journal of Educational Administration, 40(5), 447–467.

Quint, J., Bloom, H. S., Black, A. R., & Stephens, L. (2005). The challenge of scaling up educational reform: Findings and lessons from First Things First. New York, NY: MDRC.

Robinson, V. M. J., Lloyd, C. A., & Rowe, K. J. (2008). The impact of leadership on student outcomes: An analysis of the differential effects of leadership types. Educational Administration Quarterly, 44(5), 635–674.

Rockoff, J. E. (2004). The impact of individual teachers on student achievement: Evidence from panel data. American Economic Review, 94(2), 247–252.

Rowan, B., Bossert, S., & Dwyer, D. (1983). Research on effective schools: A cautionary note. Educational Researcher, 12, 24–31.

Rumberger, R. W. (2001). Why students drop out of school and what can be done.

Sass, T. (2012). Selecting high and low-performing high schools in Broward County Florida for analysis and treatment (Working Paper). Nashville, TN: Center for Scaling Up Effective High Schools.

Schildkamp, K., Poortman, C. L., & Handelzalts, A. (2016). Data teams for school improvement. School Effectiveness and School Improvement, 27(2), 228–254.

Schildkamp, K., & Visscher, A. (2010). The utilisation of a school self‐evaluation instrument. Educational Studies, 36(4), 371–389.

Sebastian, J., & Allensworth, E. (2012). The influence of principal leadership on classroom instruction and student learning: A study of mediated pathways to learning. Educational Administration Quarterly, 48(4), 626–663.

Shannon, G. S., & Bylsma, P. (2002, November). Addressing the achievement gap: A challenge for Washington state educators. Olympia, WA: Office of the Superintendent of Public Instruction.

Shaver, A. V., & Walls, R. T. (1998). Effect of Title I parent involvement on student reading and mathematics achievement. Journal of Research and Development in Education, 31(2), 90–97.

Spillane, J. (2012). Data in practice: Conceptualizing the data-based decision-making phenomena. American Journal of Education, 118(2), 113–141.

Spillane, J. (2013). The practice of leading and managing teaching in educational organizations. In Organisation for Economic Co-operation and Development, Leadership for 21st century learning (pp. 59–82). Washington, DC: OECD Publishing.

Spillane, J., Halverson, R., & Diamond, J. (2001). Investigating school leadership practice: A distributed perspective. Educational Researcher, 30(3), 23–28.

Spillane, J., Parise, L. M., & Sherer, J. Z. (2011). Organizational routines as coupling mechanisms: Policy, school administration, and the technical core. American Educational Research Journal, 48(3), 586–619.

Staples, M. (2007). Supporting whole-class collaborative inquiry in a secondary mathematics classroom. Cognition and Instruction, 25(2–3), 161–217.

Sun, J., & Leithwood, K. (2015). Direction-setting school leadership practices: A meta-analytical review of evidence about their influence. School Effectiveness and School Improvement, 26(4), 499–523.

Supovitz, J., Sirinides, P., & May, H. (2010). How principals and peers influence teaching and learning. Educational Administration Quarterly, 46(1), 31–56.

Teddlie, C., Reynolds, D., & Sammons, P. (2000). The methodology and scientific properties of school effectiveness research. In C. Teddlie & D. Reynolds (Eds.), The international handbook of school effectiveness research (pp. xx–xx). London, England: Falmer Press.

Tedford, J. (2008). When remedial means what it says: How teachers use data to reform instructional interventions. High School Journal, 92(2), 28–36.

Thomas, A., Bonner, S., Everson, H., & Somers, A. (2015). Leveraging the power of peer-led learning: Investigating effects on STEM performance in urban high schools. Educational Research and Evaluation, 21(7–8), 537–557.

Thompson, C. L., & O’Quinn, S. D. (2001). First in America special report: Eliminating the Black-White achievement gap. Chapel Hill, NC: North Carolina Education Research Council.

Walker, C., & Greene, B. (2009). The relations between student motivational beliefs and cognitive engagement in high school. Journal of Educational Research, 102(6), 463–472.

Wenglinsky, H. (2002). The link between teacher classroom practices and student academic performance. Education Policy Analysis Archives, 10(12). http://dx.doi.org/10.14507/epaa.v10n12.2002

Wilcox, K. C., & Angelis, J. I. (2011). High school best practices: Results from cross-case comparisons. High School Journal, 94(4), 138–153.

Wiley, S. D. (2001). Contextual effects on student achievement: School leadership and professional community. Journal of Educational Change, 2(1), 1–33.

Yin, R. K. (1989). Case study research: Design and methods. Los Angeles, CA: Sage.

Cite This Article as: Teachers College Record Volume 120 Number 9, 2018, p. 1-38
https://www.tcrecord.org ID Number: 22343