This is the fifth post in a series, which together make up a literature review submitted for my MEd in Educational Research at the University of Cambridge. You can also read part one, part two, part three, part four and part six. I hope that this is of interest to anyone considering undertaking an MEd, or to those who wish to dig a little into the academic literature around Educational Effectiveness Research and School Improvement. I would be most grateful to anyone who can provide any critique of what is written. References can be found at the end of part six.
Key Criticisms of Educational Effectiveness Research
Throughout its lifetime, EER has faced criticisms levelled at almost every aspect of the field, from the research designs employed to its methodologies and sampling techniques. The concepts used by the field, too, were until recently imprecisely defined, and so large, detailed studies like Mortimore et al.’s School Matters (1988) were required to set out at length the terms of enquiry.
Many of the methodological, statistical and theoretical difficulties (as well as others) have now been addressed in the dynamic model of EER and the related application to school improvement. However, strong criticisms of the field persist, and it is worth addressing these before continuing with the review of the dynamic model.
A useful taxonomy for the more pressing critiques of the field is provided by Goldstein & Woodhouse (2000), who are particularly concerned about the inclusion of EER (or School Effectiveness (SE), as they term it in their paper) within government policy-making. This is, of course, a double-edged sword. A key aim of the EER project is to help more schools to become effective, and to help policy makers base their decisions on evidence of what works. However, if this relationship becomes too close, the independence of the researchers can be called into question. This brings us to the first group of criticisms:
a) Abuse by Government
When Ofsted commissioned a review of the Key Characteristics of Effective Schools (1997), drawing on the work of educational researchers, Hamilton raised two concerns, which together illustrate the difficulty in the relationship between educational research and policy. Firstly, the notion that schools must be as effective as they can be is easy to read as an attack on those working within the system. As Goldstein and Woodhouse put it, the Key Characteristics of Effective Schools, in particular, ‘pathologises schools by accepting that economic and other problems of society can be ascribed to failings of education and of those who work in the system, especially teachers’ (2000, p.355).
The second concern is that those working within SE have an incentive to present research in a manner that reflects and reaffirms policies already favoured by government. Such an accusation is set out explicitly by Pring (1995), who argues that much school improvement research in the 1990s was completed ‘under the watchful eye of…political mentors’.
b) Oversimplification of the complex ‘causalities’ associated with schooling and sidetracking into focussing on ‘league tables’.
Those working within EER face another conundrum. In identifying what makes schools effective, they are compelled to present their findings in a manner that shows what broad factors correlate with effective schools. Thus, Edmonds (1979) grouped correlates under the headings of: strong principal leadership; an emphasis on basic skill acquisition; an orderly climate that facilitates learning; high expectations of what students can achieve; and frequent monitoring of students’ progress.
If these factors were communicated to school leaders and teachers today, the response would hardly be one of surprise. In fact, Creemers and Kyriakides (2012) concede that there is ‘something highly tautological’ in such assertions. The distilling of the complex factors at play in effective schools into ‘lists’ is, for Brown & Riddel (1995), a gross oversimplification designed to endear policy makers rather than to explain to practitioners how, in practice, improvement might be possible. This criticism strikes me as unfair, since presenting the findings of school effectiveness studies without some sort of grouping would make the research impenetrable to all, not only policy makers. If a reading lacks nuance, it could be argued that the fault lies with the reader, and not the author.
However, the nature of the role of schools is also brought into focus here, as a narrow focus on academic attainment as a measure of student outcomes mirrors the government preoccupation with ranking schools in ‘league tables’, in which attainment in terminal exams is the only relevant measure. Whether consciously or not, school effectiveness research has ensured its relevance by focussing on such measures when assessing the ‘effectiveness’ of a school.
c) That ‘theory’ in SE work is little more than reification of empirical relationships
Related to this, though, is the historic absence of a theoretical basis for the field. Without a core theory of its own, the discipline was left with little choice but the presentation of correlates and lists of factors. Scholars such as Lauder (1998) have used this absence as reason to question whether EER can be taken as a ‘field of study’ in the usual academic use of the term.
Without theoretical underpinnings, there is a danger of presenting empirical findings without doing the work of understanding why the correlates exist and how the relationships between effectiveness factors and student outcomes operate. Lost, then, is the contextual nature of schools, as well as key relationships such as parental involvement, individual teacher and student behaviours, and social backgrounds. The interaction between all of these factors can only be fully understood with a unifying and predictive theory; without such a theory, a ‘top-down’ approach to school improvement is instead promoted, with little evidence that this will work in vastly different contexts.
Given this criticism, it is perhaps surprising that the global factors related to effectiveness (simplistic though they are) have remained more or less stable in reports since Edmonds (1979), and in what is probably the most comprehensive review of effectiveness factors, by Teddlie and Reynolds (2000), there exist strong echoes of Edmonds’ findings. Even when a different approach to a review of effectiveness was taken by Marzano (2003), the findings were ‘remarkably similar’. This suggests that, although process is important, the factors that make schools effective may be more transferable than some suggest.
d) Too much SE research is simply poor quality
The most sobering criticism levelled against research into effective schools is grouped under the heading ‘research quality’. Such questions continue today. At a national conference in 2014, Dylan Wiliam, Emeritus Professor of Educational Assessment at the Institute of Education, gave a damning verdict on whether educational research could be used by teachers to help improve their impact on students, stating: ‘The trouble is, when teachers go to the educational research cupboard, it is generally bare.’ (Wiliam, 2014)
Questions arise, too, over the reliability and validity of the conclusions of the school effectiveness research conducted in its third and fourth phases. Amongst the concerns raised by Goldstein and Woodhouse (2000) are small sample sizes, non-randomly selected samples, and the lack of proper adjustment for intake achievement (p.358). The use of Ofsted judgements of effectiveness and a superficial understanding of effectiveness factors are also noted as a ‘deficiency’ in quality that plagues the field (ibid., p.358).