Month: August 2015


This is the sixth post in a series, which together make up a literature review submitted for my MEd in Educational Research at the University of Cambridge. You can also read part one, part two, part three, part four and part five. I hope that this is of interest to anyone considering undertaking an MEd, or to those who wish to dig a little into the academic literature around Educational Effectiveness Research and School Improvement. I would be most grateful to anyone who can provide any critique of what is written. References can be found at the end of part six.

  6. Responses to these criticisms and the emergence of the Dynamic Approach

Although not all responses to the criticisms above have been helpful – Sammons et al. responded to Hamilton’s criticisms with the curious assertion that their findings were deliberately oversimplified so as to be ‘accessible to non-researchers’ (1998) – in other areas the field has addressed its weaknesses, and more consensus now exists concerning which methods are appropriate for the field.

The model that has managed to most successfully address such criticisms, which will now be considered, is probably the dynamic model of educational effectiveness as set out by Creemers and Kyriakides in their book Improving Quality in Education: Dynamic Approaches to School Improvement (2012). The model claims to be both evidence-based and theory-driven, and has been ‘systematically tested’ for its validity. The complex nature of educational effectiveness is embraced rather than ignored, with a multi-level understanding of the influences on student outcomes. Moreover, Structural Equation Modelling techniques have established five dimensions through which the factors at each of the four levels can be referenced, defined and measured: quality, focus, stage, frequency, and differentiation.

The model also addresses earlier concerns that EER lacks ‘rich, thick’ descriptions and is preoccupied with quantitative data, by incorporating ‘qualitative characteristics of factors’, enabling the model to ‘provide more precise feedback’ for improvement (Creemers & Kyriakides, 2012, p. 24). So rational decision making lies at the heart of the model, and the practitioner’s role in self-evaluating the factors associated with their students’ outcomes is a key part of the dynamic approach to school improvement. As such, EER is used to ‘identify needs/priorities for improvement’, but those decisions are taken at the school level, giving the approach a more ‘bottom-up’ application as opposed to the top-down approach towards which earlier improvement efforts lent themselves.

  7. Emerging Themes and Research Questions Arising

As well as addressing many of the historical and more recent criticisms, then, the dynamic model of school improvement seems to coalesce with the contemporary educational landscape described at the beginning of this review.

This leads to questions as to whether a dynamic approach to school improvement (DASI), focussed at the classroom level (that is to say, teachers’ practice), could enhance student outcomes in a widespread and sustainable manner. Could teachers who are interested in accessing research evidence for their own improvement purposes utilise the knowledge base of EER and improve their own practice using the self-evaluation and action-plan approach established in the DASI? Perhaps most importantly, however: is there evidence that the DASI will improve student outcomes?

These themes are suggested by Reynolds et al. as future directions for the field of EER, with a ‘further concentration on teaching and teachers…their attitudes and values…the ‘levers’ for changing their practices and behaviours’ (p. 218) identified as a particular need for the field to focus upon. The authors also correctly assess EER’s role as a ‘bolt-on not bloodstream for the practitioner communities’ and point to the neglect of practitioners’ ‘core concern with teaching’. Although the DASI does look at the ‘whole school’ rather than individual classrooms, it also appreciates and allows for variation between classrooms and the relationship between classroom-level and school-level factors, which may help to increase the uptake of the approach by practitioners.

What has become clear from this review is that educational effectiveness research is at the same time beautifully simple and incredibly complex. At its heart, the field simply aims to work out what makes a good school and to use this knowledge to help more schools improve. However, as noted, there are deep-rooted and technical difficulties with this initial project, and the application to ‘school improvement’ only further complicates things.

I have suggested that we should be optimistic due to signs in the contemporary educational landscape which indicate a sector that has become receptive to educational research and its role in improving practice and student outcomes.

What is now needed, then, is experimental work to test the most mature and least problematic approaches to using EER to improve schools. The DASI has emerged as an approach that takes seriously the criticisms the discipline has faced, and has taken positive action to adapt, develop, or mitigate in response to those concerns. As such, the research question that arises out of this review is as follows:

Can the Dynamic Approach to School Improvement (DASI) improve outcomes for students?

By outcomes, here, I mean the social and affective outcomes valued by practitioners, alongside the more traditional ‘academic’ outcomes.

Such an approach, if successful, may answer the question of how research evidence can be used to improve our schools, by harnessing the desire of practitioners to help their students achieve the best possible outcomes. For this reason, difficulties notwithstanding, both educational researchers and practitioners should be compelled to pursue the best possible approach for securing this goal.



  8. References

Bardon, I., (1980), Review of Fifteen Thousand Hours. American Journal of Orthopsychiatry, Vol 50(3), Jul, 1980. pp. 555-558

Brown, S. & Riddel, S. (1995) School effectiveness research: the policymakers’ tool for school improvement? European Educational Research Journal, pp. 6-15.

Coe, R., (1999) Manifesto for Evidence Based Education, available online: [accessed Feb 2015]

Creemers, B., Kyriakides, L. (2012) Improving Quality in Education, Dynamic Approaches to School Improvement. London: Routledge

Central Advisory Council for Education (England). (1967), Children and their primary schools [Plowden Report]. London: HMSO

Goldstein, H., (1995) Multilevel Models in Educational and Social Research. A Revised Edition. London: Edward Arnold

EEF, (2014). Annual Report 2013/14, available online at [accessed February 2014]

Edmonds, R. (1979). Effective schools for the urban poor. Educational Leadership, 37, 15–27.

Goldstein, H., & Woodhouse, G. (2000) School Effectiveness Research and Educational Policy. Oxford Review of Education, Vol. 26, No. 3/4, pp. 353-363

Hamilton, D. (1996) Peddling feel good fictions, Forum, 38, pp. 54-56.

HM Government, (2013), What Works: evidence centres for social policy, available online at [accessed Feb 2015]

ICSEI, (2011). Constitution of the “International Congress for School Effectiveness and School Improvement” (ICSEI), Available online at [accessed Feb 2015]

Lauder, H., Jamieson, I., & Wikeley, F. (1998) Models of effective schools: limits and capabilities, in: R. Slee, G. Weiner and S. Tomlinson (Eds) School Effectiveness for Whom? London: Falmer

Marzano, R. J. (2003). What works in schools: Translating research into action. Alexandria, VA: Association for Supervision and Curriculum Development.

Morgan, N. Laws, D., (2014). At long last, teachers are set to become high-status professionals. Available online: [accessed Feb 2015]

Mortimore, P., Sammons, P., Stoll, L., Lewis, D., Ecob, R. (1988) School Matters: the junior years. Somerset, UK, Open Books

Moss, P., Phillips, D., Erickson, F., Floden, R., Lather, P., & Schneider, B. (2009). Learning from our differences: A dialogue across perspectives on quality in educational research. Educational Researcher, 38 (7), 501-517

Muijs, D., (2006). New Directions for School Effectiveness Research: Towards School Effectiveness without Schools. Journal of Educational Change, Vol. 7, pp. 141-160

Reynolds, D., Sammons, P., De Fraine, B., Van Damme, J., Townsend, T., Teddlie, C., Stringfield S., (2014) Educational effectiveness research (EER): a state-of-the-art review. School Effectiveness and School Improvement. Vol. 25, Iss. 2 pp. 197-230

Rutter, M., Maughan, B., Mortimore, P., & Ouston, J. (with Smith, A.). (1979). Fifteen thousand hours: Secondary schools and their effects on children. London: Open Books

Strong, M., Gargani, J., & Hacifazlioglu, O., (2011). Do we know a successful teacher when we see one? Experiments in the identification of effective teachers. Journal of Teacher Education, 62

Teddlie, C., & Reynolds, D. (2000). The international handbook of school effectiveness research. London: Falmer Press.

Townsend, T. (ed) (2001) The background to this set of Papers on Two Decades of School Effectiveness Research, Special Issue: Critique and Response to Twenty Years of SER. School Effectiveness and School Improvement, Vol. 12, No. 1, pp. 3-5.

Wiliam, D., (2014) Why education will never be a research-based profession. Presentation at ResearchEd, London: UK. Slides available online at [accessed Feb 2015]; and video available online at [accessed Feb 2015]


Educational Research Literature Review: Part V

This is the fifth post in a series, which together make up a literature review submitted for my MEd in Educational Research at the University of Cambridge. You can also read part one, part two, part three, part four and part six. I hope that this is of interest to anyone considering undertaking an MEd, or to those who wish to dig a little into the academic literature around Educational Effectiveness Research and School Improvement. I would be most grateful to anyone who can provide any critique of what is written. References can be found at the end of part six.

  5. Key Criticisms of Educational Effectiveness Research

Throughout its lifetime, EER has faced criticisms levelled at almost every aspect of its field, from the research design employed, to methodologies and sampling techniques. The concepts used by the field, too, have until recently been imprecisely defined and so large, detailed studies like Mortimore et al.’s School Matters (1988) were required to set out at length the terms of enquiry.

Many of the methodological, statistical and theoretical difficulties (as well as others) have now been addressed in the dynamic model of EER and the related application to school improvement. However, strong criticisms of the field persist, and it is worth addressing these before continuing with the review of the dynamic model.

A useful taxonomy for the more pressing critiques of the field is provided by Goldstein & Woodhouse (2000), who are particularly concerned about the inclusion of EER (or School Effectiveness (SE) as they term it in their paper) within government policy-making. This is, of course, a double-edged sword. A key aim of the EER project is to help more schools become effective, and to help policy makers base their decisions on evidence of what works. However, if this relationship becomes too close, the independence of the researchers can be called into question. This brings us to the first group of criticisms:

a) Abuse by Government

When Ofsted commissioned a review of the Key Characteristics of Effective Schools (1997), drawing on the work of educational researchers, Hamilton raised two concerns, which together illustrate the difficulty in the relationship between educational research and policy. Firstly, the notion that schools must be as effective as they can is easy to read as an attack on those working within the system. As Goldstein and Woodhouse put it, the Key Characteristics of Effective Schools, in particular, ‘pathologises schools by accepting that economic and other problems of society can be ascribed to failings of education and of those who work in the system, especially teachers’ (2000, p.355).

The second concern is that those working within SE have an incentive to present research in a manner that reflects and reaffirms policies already favoured by government. Such an accusation is set out explicitly by Pring (1995), who argues that much school improvement research in the 1990s was completed ‘under the watchful eye of…political mentors’.


b) Oversimplification of the complex ‘causalities’ associated with schooling and sidetracking into focussing on league tables.


Those working within EER face another conundrum. In identifying what makes schools effective, they are compelled to present their findings in a manner that shows what broad factors correlate with effective schools. Thus, Edmonds (1979) grouped correlates under the headings of: strong principal leadership; an emphasis on basic skill acquisition; an orderly climate that facilitates learning; high expectations of what students can achieve; and frequent monitoring of students’ progress.

If these factors were communicated to school leaders and teachers today, the response would hardly be one of surprise. In fact, Creemers and Kyriakides (2012) concede that there is ‘something highly tautological’ in such assertions. The distilling of the complex factors at play in effective schools into ‘lists’ is, for Brown & Riddel (1995), a gross oversimplification designed to appeal to policy makers as opposed to explaining to practitioners how in practice improvement might be possible. This criticism strikes me as unfair, since presenting the findings of school effectiveness studies without some sort of grouping would make the research impenetrable to all, not only policy makers. If a reading lacks nuance, it could be argued that the fault lies with the reader, and not the author.

However, the nature of the role of schools is also brought into focus here, as a narrow focus on academic attainment as a measure of student outcomes mirrors government preoccupation with listing schools in ‘league tables’, in which attainment in terminal exams is the only relevant measure. Whether consciously or not, school effectiveness research has ensured its relevance by focussing on such measures when assessing the ‘effectiveness’ of a school.

c) That theory in SE work is little more than reification of empirical relationships


Related to this, though, is the historic absence of a theoretical basis for the field. Without a core theory of its own, the discipline was left with little choice but the presentation of correlates and lists of factors. Scholars such as Lauder have used this absence as a reason to question whether EER can be taken as a ‘field of study’ in the usual academic use of the term (1998).

Without theoretical underpinnings, there is a danger of presenting empirical findings without doing the work of understanding why the correlates exist and how the relationships between effectiveness factors and student outcomes operate. Lost, then, is the contextual nature of schools, as well as key relationships such as parental involvement, individual teacher and student behaviours, and social backgrounds. The interaction between all of these factors can only be fully understood with a unifying and predictive theory, and without such a theory a ‘top-down’ approach to school improvement is instead promoted, with little evidence that this will work in vastly different contexts.

Given this criticism, it is perhaps surprising that the global factors related to effectiveness (simplistic though they are) have remained more or less stable in reports since Edmonds (1979), and in what is probably the most comprehensive view of effective factors, by Teddlie and Reynolds (2000), there exist strong echoes of Edmonds’ findings. Even when a different approach to a review of effectiveness was taken by Marzano (2003), the findings were ‘remarkably similar’. This suggests that, although process is important, the factors of what makes schools effective may be more transferable than some suggest.


d) Too much SE research is simply poor quality

The most sobering criticism levelled against research into effective schools is grouped under the heading ‘research quality’. Such questions continue today. At a national conference in 2014, Dylan Wiliam, Emeritus Professor of Educational Assessment at the Institute of Education, gave a damning verdict on whether educational research could be used by teachers to help improve their impact on students, stating, ‘The trouble is, when teachers go to the educational research cupboard, it is generally bare.’ (Wiliam, 2014)

Questions arise, too, over the reliability and validity of the conclusions of the school effectiveness research conducted in its third and fourth phases. Amongst these concerns, raised by Goldstein and Woodhouse (2000), are small sample sizes, non-randomly selected samples, and no proper adjustment for intake achievement (p. 358). The use of Ofsted judgments of effectiveness and the superficial understanding of effectiveness factors is also noted as a ‘deficiency’ in quality that plagues the field (ibid., p. 358).

Educational Research Literature Review: Part IV

This is the fourth post in a series, which together make up a literature review submitted for my MEd in Educational Research at the University of Cambridge. You can also read part one, part two, part three, part five and part six. I hope that this is of interest to anyone considering undertaking an MEd, or to those who wish to dig a little into the academic literature around Educational Effectiveness Research and School Improvement. I would be most grateful to anyone who can provide any critique of what is written. References can be found at the end of part six.

  4. History of Educational Effectiveness Research

It is difficult to precisely define EER, for two principal reasons. Firstly, EER describes not just one field of research, but rather a ‘conglomerate of research in different areas’ (Creemers & Kyriakides, 2012, p.3). Secondly, the discipline has undergone fairly major shifts over the last three or four decades, largely in response to criticisms from the research community. The more prominent and transfigurative criticisms will be explored in §5, but a good overview is provided by Townsend (2001).

It is also worth noting that I am referring to the field as educational effectiveness research throughout this review, but this name is a fairly recent development. For a long time the discipline was known as School Effectiveness Research, before the new label EER was adopted to reflect the need for a broader base, partly as a result of a rapidly changing educational landscape (Muijs, 2006). During this review I shall refer to the overarching discipline as EER, but may speak of School Effectiveness Research if the context compels such terminology. This relabelling is a good example of what could be termed a ‘retrospective reactive, not purposive’ (Reynolds et al., 2014, p.201) shift in the content of the field. It would not be entirely unfair to suggest that much of the discipline’s content has arisen as a reaction to its critics, although a more recent commitment to a theoretical underpinning to the field is reversing this (Creemers and Kyriakides, 2012).

As a result of the reactive nature of EER, the field is best understood by tracing its history. It is customary to begin with the aforementioned studies by Coleman and Jencks, since it is usually agreed that EER was born as a reaction to the conclusions of these papers, which raised serious questions about whether schools made any difference to pupil outcomes in comparison to background factors such as socio-economic status and ability. As we will see, the particular circumstances of the field’s origin continue to influence its identity. In particular, the discipline’s initial purpose of demonstrating that schools did make a difference – contra Coleman and Jencks – has resulted in a preoccupation with a quantitative study of effectiveness, focusing at the school level, which may contribute to the limited uptake of findings by practitioners.

Reynolds et al. (2011) suggest that the history of EER has five distinctive phases, and this is a useful framework from which to understand the field. Greater focus will be given to the more recent developments in EER, which will allow us to arrive at the dynamic model of educational effectiveness and appreciate how it addresses key concerns from the literature. This will then guide future directions, allowing me to isolate the links between EER and school improvement, in particular practice at the classroom level.

Phase One – Demonstrating Variation in School Effects


The notion that schools were impotent to alter predetermined student outcomes was set in the 1960s, and the view remained entrenched until the 1980s. Alongside the aforementioned works of Coleman and Jencks, the Plowden Report was published in England in 1967, adding weight to the idea that school influences paled in significance compared to home factors. In response, educational researchers undertook empirical work to demonstrate that schools did vary in their effects. The case was set out most emphatically by Rutter et al. (1979) in their book-length report of a nine-year longitudinal study of secondary schools in England. Many of the findings still echo in what we now know to be effective in securing good educational outcomes – the idea that children ‘live up (or down) to what was expected of them’ was considered a ‘provocative finding’ in an early review of the book (Bardon, 1980, p. 557).


Phase Two – Methodological Development, Including Multi-Level Methodologies


Once it had been established that schools do have an effect on student outcomes, the field was compelled to demonstrate those effects more scientifically. This meant that methodological advances were required, in particular the adoption of a multi-level understanding of schooling. Such an understanding took into account that educational effectiveness is complex, with factors at the national, school, classroom and student levels all interacting.
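The multi-level idea can be made concrete with a toy example (simulated data, not drawn from any of the studies cited here): if students are nested within schools, the share of outcome variance sitting at the school level – the intraclass correlation – indicates how much ‘school’ matters relative to individual variation. A minimal sketch, using a simple one-way ANOVA estimator of the variance components:

```python
import random
import statistics

random.seed(42)

# Simulate a two-level structure: students nested within schools.
# Each school has its own random effect; each student adds individual noise.
n_schools, n_students = 50, 30
school_sd, student_sd = 2.0, 4.0  # true school-level share = 4/(4+16) = 0.2

schools = []
for _ in range(n_schools):
    school_effect = random.gauss(0, school_sd)
    schools.append([50 + school_effect + random.gauss(0, student_sd)
                    for _ in range(n_students)])

# One-way ANOVA estimates of the within- and between-school variances.
school_means = [statistics.mean(s) for s in schools]
within_var = statistics.mean(statistics.variance(s) for s in schools)
between_var = statistics.variance(school_means) - within_var / n_students

# Intraclass correlation: proportion of variance attributable to schools.
icc = between_var / (between_var + within_var)
print(f"ICC = {icc:.2f}")
```

Real multilevel analyses of the kind discussed by Goldstein (1995) use maximum-likelihood estimation across more levels (national, school, classroom, student) and adjust for intake covariates, but the decomposition of variance across levels is the core idea.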

Phase Three – Investigating Why Schools Had Different Effects


By the mid 1990s, the educational research community had successfully rebutted the idea that schools did not have a meaningful effect on student outcomes. The second phase of EER had begun to explore the relationships between the different levels of factors involved in outcomes, and had also examined the outcomes (and their stability) with increased nuance (Goldstein, 1995). What followed were attempts to outline why schools differed in their effects (as opposed to the earlier efforts of establishing that they differed in their effects). Thus, the contextual factors at the school level were further probed, and studies moved away from an ‘input/output’ understanding towards an ‘input/process/output’ understanding (Teddlie & Reynolds, 2000).


Phase Four – Internationalisation and Links to School Improvement

The quantitative methodological approach up until this point was necessitated by the field’s initial goals, but by the mid 1990s there was greater acceptance of the qualitative methods preferred by research into school improvement. As such, the ‘rich, thick descriptions’ which could help ‘explain the relationships’ (Reynolds et al., 2014, p. 201) between effective factors and student outcomes began to be addressed, although Creemers and Kyriakides concede that EER has continued to focus too strongly on school characteristics ‘without looking at the processes that are needed to change the situation’ (2012, pp. 7-8).

During this period, however, the field was successful in collaborating internationally and grew rapidly as a result. The establishment in 1988 of the International Congress for School Effectiveness and Improvement helped to spearhead this development and unify disciplines under the umbrellas of EER and school improvement. This internationalisation has allowed joint projects, helping to identify global factors associated with school improvement.

Phase Five – Development in Statistical Analysis and the Emergence of the Dynamic approach

The final phase of EER’s journey brings us up to date with the field as it stands, and is characterised by the development of increasingly sophisticated statistical analysis along with a reconceptualisation of EER as a dynamic set of relationships. This includes not only the interaction between student and teacher, but between the different levels of influence on student outcomes: national level, school level, classroom level and student level. Each of these is related to broader outcomes that take into account what Creemers and Kyriakides term the ‘new goals of education’, including affective and psychomotor outcomes.

This dynamic model will be explored in greater depth in §6, but for now we will turn to the more compelling criticisms that EER faces, which will result in implications for the research questions emerging in the final section (§7).

Educational Research Literature Review: Part III

This is the third post in a series, which together make up a literature review submitted for my MEd in Educational Research at the University of Cambridge. You can also read part one, part two, part four, part five and part six. I hope that this is of interest to anyone considering undertaking an MEd, or to those who wish to dig a little into the academic literature around Educational Effectiveness Research and School Improvement. I would be most grateful to anyone who can provide any critique of what is written. References can be found at the end of part six.

  3. The Role of Research Evidence in England’s Contemporary Educational Landscape

When, in 2013, the Department for Education invited Dr Ben Goldacre to examine the role of evidence in the education sector, it was claimed that “a great prize” was waiting to be claimed by teachers (Goldacre, 2013, p.4). The resulting report, Building Evidence into Education, notes that in some areas of education there is great enthusiasm for research trials and the use of evidence, but that “much of this enthusiasm dies out before it gets to do good, because the basic structures needed to support evidence based practice are lacking” (Goldacre, 2013, p.15).

For those within the educational research community, all of this was well trodden ground. As mentioned above, fourteen years earlier Professor Rob Coe had set out a Manifesto for Evidence Based Education (Coe, 1999) which is remarkably similar to Goldacre’s report in its tentative optimism for the education sector making its metamorphosis into a scientific enterprise. Although Goldacre and Coe differ slightly in their approach to the sorts of evidence (Goldacre advocates for randomised controlled trials to become common practice) they agree on embedding a ‘culture of evidence’ in every level of the educational sector. This narrative seems to have captured the zeitgeist amongst both practitioners and politicians. To best illustrate this, it is worth setting out some of the developments over the last few years at a policy and practitioner level.

Education Endowment Foundation


In 2011, a government grant of £125m was used to establish the Education Endowment Foundation (EEF), an organisation independent of government tasked with sharing evidence and working out ‘what works’ in education. Two years later, the government designated the EEF as an independent ‘What Works Centre’, recognising that within education, ‘an evidence base exists but there is limited authoritative synthesis and communication of this evidence base’ (HM Government, 2013). As well as funding projects aimed at establishing effective school- and classroom-level practices (to date, £45m has been approved for initiatives involving 4100 schools), the EEF has published an online ‘teaching and learning toolkit’. The toolkit aims to communicate the findings of educational research by grouping studies under broad strategies (such as ‘meta-cognition’, ‘feedback’, or ‘repeating a year’) and presenting the outcomes enjoyed by students as ‘additional months of progress’. In a 2014 poll commissioned by the Sutton Trust and undertaken by the National Foundation for Educational Research, 45% of school leaders in the UK reported having used the toolkit (EEF, 2014).
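The toolkit’s exact conversion from study results to ‘months of progress’ is not reproduced here, but the quantity underlying it is a standardised effect size: the difference in mean outcomes between an intervention group and a comparison group, divided by the pooled standard deviation. A minimal sketch with hypothetical test scores (the numbers are illustrative, not from any cited study):

```python
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) using the pooled SD."""
    nt, nc = len(treatment), len(control)
    pooled_var = (((nt - 1) * statistics.variance(treatment)
                   + (nc - 1) * statistics.variance(control))
                  / (nt + nc - 2))
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_var ** 0.5

# Hypothetical scores for an intervention group and a comparison group.
intervention = [68, 72, 75, 70, 74, 71, 69, 73]
comparison   = [65, 70, 68, 66, 69, 67, 64, 71]

print(f"d = {cohens_d(intervention, comparison):.2f}")  # prints "d = 1.63"
```

Expressing such standardised differences as additional months of progress is the EEF’s way of making effect sizes legible to practitioners; an effect this large would be implausible for a real whole-school intervention, where reported effects are typically well under 1.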

Government Support

The coalition government has reaffirmed its explicit support for teaching to become ‘evidence-led’. In an article for the Guardian in December 2014, the Secretary of State for Education and the Minister for Schools wrote that, ‘In recent years, we’ve seen the start of a culture change, transforming teaching into a more evidence-led profession – something we wholeheartedly support’ (Morgan & Laws, 2014). For many teachers and school leaders, this has meant engaging with the Education Endowment Foundation’s ‘Teaching and Learning Toolkit’, which summarises educational research into school structures, strategies and resources, attributing an ‘effect size’ in terms of additional months of progress.



ResearchEd

If the establishment of the EEF is viewed as a top-down approach to building evidence into policy and school-level decision making, the teacher-led organisation ‘ResearchEd’ represents the reverse. Established in 2013, the charity now runs conferences in the USA and Australia. ResearchEd aims to increase the research literacy of teachers and school leaders, whilst bringing together academics and practitioners for collaboration and dissemination. With dozens of ResearchEd conferences taking place across the country, thousands of teachers are voluntarily giving up their weekends to listen to academics and practitioners. Following an EEF grant, this method of dissemination is being tested to establish whether it is an effective method of improving research literacy, evidence-informed practices in schools, and student outcomes in those schools.


Ofsted

Ofsted is the regulation and inspection body for educational providers in England. In 2009, the body altered its practice of grading teachers’ lessons based on short observations, following a study that suggested such an approach was an unreliable and invalid method of assessing teaching quality and learning outcomes (Strong et al, 2011).

Barriers to the Effective Use of Evidence

Taken together, these factors indicate that the education sector in England is ready to embrace and embed evidence collected by researchers. Why, then, does there seem to be little acknowledgement of two traditions within educational research – educational effectiveness and school improvement – which have for several decades sought to address the very issues under discussion?

The former arose as a reaction against the belief that ‘schools make no difference’ to student outcomes compared to background factors such as ability and socio-economic status (Coleman et al, 1966; Jencks et al, 1972), and so empirical work began to challenge the view that ‘education cannot compensate for society’ (Bernstein, 1968). As evidence grew of specific school effectiveness, the related field of school improvement sought to apply these findings to specific schools to drive improvement.

More recently, different reasons have arisen for those promoting an evidence-based approach to education overlooking the body of knowledge collected by educational research. The fragmented nature of the field is an underlying concern, and the problem is candidly set out by Moss et al. (2009, p. 501):

The state of discourse in the field of education research has been likened to the cacophony in the Tower of Babel (Phillips, 2006a). Not only is there a breakdown in communication due to…the multitude of theoretical and methodological approaches…but, in addition, proponents of different perspectives often hold strikingly different assumptions about the nature of the enterprise in which they are engaged. These include different assumptions about the ends of education research, about its epistemology, and relatedly, about whether it is or should be (or even could be) value free… As a result, researchers working in different frameworks often “talk past” each other, if they try to talk at all. (My emphasis).

Moss et al.’s analysis is easy to illustrate by returning to Bernstein’s assertion in the earlier paragraph. The view that ‘education cannot compensate for society’ seems prima facie relatively simple to falsify or affirm: we establish which shortcomings education needs to compensate for, establish how those shortcomings might be said to have been ‘compensated for’ (or ‘overcome’), then set about investigating whether those shortcomings have been overcome. Immediately, however, we run into serious epistemological difficulties, exacerbated further by implicit and unsettled normative and value judgements. What does it mean to say that education has compensated for society? Which student outcomes would satisfy the compensation? Exam results? Social and affective outcomes? Supposing that we could confidently say that education can compensate for society, should it? Are governments deferring their responsibility for social equality to schools, instead of addressing the root problems associated with differential outcomes? And so on.

Put simply, education is unavoidably value-laden, and educational research will always be conceived, undertaken and consumed through a lens which makes certain assumptions about the ‘ends of research’. Or, as Rob Coe puts it, ‘One person’s “effective practice” is another’s “neo-liberal hegemony”’ (1999, p. 3). We should be wary of ignoring such concerns, but, as Daniel Muijs explains, “it is of key importance that effectiveness research does not fall into the trap of so much educational research, in allowing ideology and rhetoric to come before empirical research and findings” (2006, p. 156). With these initial difficulties in mind, we can turn to how the field has developed over time and how it has responded to some of the concerns raised.

Educational Research Literature Review: Part II

This is the second post in a series, which together make up a literature review submitted for my MEd in Educational Research at the University of Cambridge. You can also read part one, part three, part four, part five and part six. I hope that this is of interest to anyone considering undertaking an MEd, or for those who wish to dig a little into the academic literature around Educational Effectiveness Research and School Improvement. I would be most grateful to anyone who can provide any critique of what is written. References can be found at the end of part six.

  1. Rationale

The two fields under discussion in this review, educational effectiveness and school improvement, have a long and complex history. It is, in fact, difficult to pin down precisely what constitutes EER, other than as a ‘conglomerate of research in different areas: research on teacher behaviour, curriculum, grouping procedures, school organisation and educational policy’ (Creemers & Kyriakides, 2012, p. 3). To add further complication, key aims, terminology and methodologies within the field have changed over time, although the core belief (that schools have a measurable effect on student outcomes) and foundational questions (‘What makes a good school?’ and ‘How can we make more schools good?’ (Reynolds et al., 2014)) have remained consistent.

The core belief of the field provides a hypothesis against which any research under consideration can be empirically tested. Moreover, the foundational questions link empirical findings to school improvement, which is what gives such an endeavour its true utility. The aim of my own ultimate contribution to the field is to test the more mature models of EER within a UK context, in the hope of demonstrating whether or not such an approach might help schools improve. The current political landscape in the UK (and particularly in England) is of great relevance here: there is large and growing momentum for teaching to become ‘evidence-based’.

Consequently, this review will begin with a brief commentary on the contemporary educational landscape from a practitioner perspective. As will be further explored in §4, the blurring of lines between educational researcher and policy maker has in the past brought into question the independence of what should be an impartial intellectual enterprise (Goldstein & Woodhouse, 2000), and we should be wary of allowing rhetoric or ideology to precede effectiveness claims (Muijs, 2006, p. 156). However, the emergence and proliferation of ‘research-leads’ (practitioners hoping to drive improvement within their schools by engaging with academic research), along with a soft-touch approach from policy-makers (requiring school-level decisions to be informed by research, without mandating which research), sets, I believe, optimal conditions for EER to take hold in schools and have an impact on student outcomes.

With the contemporary backdrop set, then, I will undertake a brief chronological review of EER which will address some of the philosophical, theoretical, methodological and practical issues that the field has faced. Each of these difficulties has helped shape the field, and has contributed to the emergence of a theory-driven and evidence-based approach to school improvement that takes into account the multi-levelled and dynamic nature of education. It is this dynamic approach, I will argue, that provides the best vehicle for research evidence to inform decisions made by practitioners working in schools.

The role of Educational Effectiveness Research in the contemporary educational landscape

This is the first post in a series, which together make up a literature review submitted for my MEd in Educational Research at the University of Cambridge. You can also read: part two, part three, part four, part five and part six. I hope that this is of interest to anyone considering undertaking an MEd, or for those who wish to dig a little into the academic literature around Educational Effectiveness Research and School Improvement. I would be most grateful to anyone who can provide any critique of what is written. References can be found at the end of part six.

“Education may not be an exact science, but it is too important to allow it to be determined by unfounded opinion, whether of politicians, teachers, researchers or anyone else.”

Professor Rob Coe,

Manifesto for Evidence-Based Practice, 1999


  1. Introduction

Almost 15 years after Professor Rob Coe set out his Manifesto for Evidence-Based Practice, the Department for Education commissioned Dr Ben Goldacre to write a report on the role of evidence in education. The resultant report, ‘Building Evidence in Education’, lamented the lack of valid, reliable evidence of ‘what works’ in the sector. Entering the profession at around the same time the report was published, I was surprised that so many of the decisions taken at national, local, school and classroom level were not underpinned by evidence from research. This was one of the principal reasons for my enrolling on this course. More specifically, I am interested in examining: a) the evidence that has been produced by the educational research community; and b) how that evidence is best disseminated to and used by practitioners and policy makers to improve outcomes for children.

Accordingly, this essay will examine the body of knowledge known as ‘educational effectiveness research’ (EER) and the links that such a body has with the enterprise of ‘school improvement’. The historical difficulties in the relationship between these two fields will be set out, and I will argue that the dynamic model of EER, as presented by Creemers and Kyriakides (2012), provides the least problematic basis for establishing ‘an evidence based and theory-driven approach to school improvement’ (p. xiv).

Due to the size of the fields under examination, however, this enquiry will be restricted; a full analysis would need to be of book length. After briefly setting out the broad field of EER, it will therefore be necessary to focus in this essay on a narrower aspect of the field. I have chosen to concentrate on factors at the classroom level, for three reasons. Firstly, as a current primary school teacher I have a particular interest in what can be done by practitioners ‘at the chalkface’ to improve outcomes for children. Secondly, evidence presented by Scheerens and Bosker (1997) indicates that teacher effects are greater than school effects in terms of progress over time. Thirdly, I believe that the contemporary landscape of the education sector means that greater emphasis will be placed on a ‘bottom-up’ approach to engagement with research. This will be explored further in §3, but the notion of responsibility for student outcomes is one that requires further attention. Accordingly, the remainder of the essay will be framed in terms of the contribution of the research community towards an effective knowledge base and the associated field of school improvement.

Although improving schools is not the responsibility of academics in education faculties, the enterprise of educational research would surely stand on shaky moral ground if it did not contribute to securing better educational outcomes for learners in schools. It is for this reason that the International Congress for School Effectiveness and Improvement (ICSEI) begins its constitution by stating that its “purpose… is to enhance the quality and equity of education for all students in schools in all countries” (ICSEI, 2011, p. 5). ICSEI aims to do this by bringing together academics, policy makers and practitioners, promoting a coordinated flow of information between these groups. It is the relationship between these three groups that is the focus of this review, and as a primary school teacher (practitioner) who is also engaged in academic research through the pursuit of this course, I am particularly interested in how the body of knowledge accumulated by educational researchers can make a positive impact on students in a meaningful manner.

We should not, however, take for granted that educational researchers have always agreed that this is something that is possible. In fact, the notion that schools make no real difference to student outcomes, or at most contribute very little, was comprehensively set out in two seminal studies around half a century ago (Coleman et al., 1966; Jencks et al., 1972). As a reaction to this, the field of educational effectiveness research has grown to provide empirical evidence that schools differ in their impact on students’ outcomes. In applying the findings of EER to classrooms, schools and school systems, there have been attempts over the last few decades to contribute to school improvement. Although the relationship between EER and school improvement has historically been – and, to a lesser extent, remains – problematic, there has been considerable success in applying EER to schools using a more dynamic approach to school improvement (Creemers & Kyriakides, 2012).

In what follows, I will provide a review of relevant literature concerning the field of educational effectiveness research, the field of school improvement, and the relationship between the two. The essay will begin with a brief panorama of the contemporary educational landscape concerning research and evidence (§3), before tracing EER’s history (§4), the more prominent criticisms the field has faced (§5), and how they have led to the emergence of a more dynamic understanding of EER (§6). Finally, in §7, I will return to the contemporary landscape, placing the dynamic approach to school improvement (DASI) within this context. Out of this discussion, I will suggest a number of research questions and weigh each in terms of its potential contribution to the academic fields of EER and school improvement. More specifically, I will argue that there exists a well-established, theoretically grounded model for school improvement, and that efforts should be made to test this model empirically. All of this will lead, ultimately, to my settling on a proposed experimental study testing the efficacy of DASI within a UK context.

This brief plan of the essay calls for further justification and illumination, and so I will begin in §2 by setting out more comprehensively the rationale for the strategy I have chosen to adopt.