On Tuesday I attended the launch of the Learning About Culture Programme at the RSA. This new initiative is described in its prospectus as a ‘two-and-a-half year investigation into the role that cultural learning plays in improving educational outcomes for children.’ The Programme has two aims: to build a stronger evidence base for cultural learning, and to improve the use of evidence in cultural learning. The launch event was, from my observation, extremely well attended by representatives from cultural organisations, funders, academic institutions and government departments. We heard from the project partners (the RSA and the Education Endowment Foundation (EEF)); from London Bubble, who run Speech Bubbles, one of the cultural learning projects taking part; and from Project Oracle, who are involved in the evaluation. We also had the opportunity to discuss and respond to the Programme in round-table discussions.
Since Tuesday I have been wondering why, even though there is a great deal about this Programme that is constructive and will make a valuable contribution to the sector, it makes me uneasy. In part to understand my own response, I have put some thoughts down here.
As I say, there are many positives to Learning About Culture. The Programme acknowledges the decline in hours spent learning arts subjects in schools and makes a valid and persuasive argument for more evaluation of arts interventions. Much is rightly made of the importance of cultural organisations having a theory of change in place, setting out how their activities might lead to change, and a strong case is put forward for these organisations recognising that evaluation and reflective practice can and should focus on improving practice rather than justifying what has taken place for an external funder. The Programme is mindful of the need for more training for cultural practitioners in using evaluation, and will be conducting research on the use of evidence in cultural learning. As such it is allied to existing studies, including the recent AHRC Cultural Value Project, that argue for further research to account for the human experience of art and culture, as well as restating thorny issues that have been around for some time. François Matarasso, for example, outlined a clear case for robust evaluation of arts programmes in the 1990s, using language and arguments that in some respects are similar to those found in the Learning About Culture document.
However, where Matarasso and the Learning About Culture Programme differ is on the key issue of value, for as he says in the 1996 report Defining Values: Evaluating Arts Programmes: ‘The important, and essentially political, question about evaluation is which value system is used to provide benchmarks against which work will be measured – in other words, who defines value.’ It is in relation to the value system underpinning the Learning About Culture Programme that my uneasiness, as someone working and researching in the arts, surfaces. The Programme makes it very clear that, in order to make the case for the arts in schools, what is needed is ‘evidence of the additional progress that cultural learning enables children to make.’ This progress is to be assessed primarily in terms of academic achievement and secondarily in terms of ‘non-cognitive skills – a set of attitudes, behaviours, and strategies thought to underpin success in school and at work, such as motivation, perseverance and self-control’. The main methodology adopted to provide evidence of the impact of cultural interventions in schools is large-scale randomised control trials, although there will also be ‘deep-dive’ and follow-up research in schools using a range of methods.
The Programme argues that the focus on providing evidence of impact on attainment is necessary first because this will help persuade schools of the value of the arts, and second because too often cultural organisations assert that their work makes a positive contribution to attainment without sufficiently ‘robust evidence for the impact on attainment in literacy and numeracy and limited rigorous research into impact on ‘non-cognitive’ skill development or attainment within specialist subject study.’ This is definitely where my uneasiness starts to build into full-scale worry. There is not space here to restate the arguments made elsewhere on the importance of valuing arts activities on their own terms (although I find that the observation in the Cultural Value Project report – that we are interested in studying whether music improves ability in maths, but not whether studying maths improves ability in music – sums up the issue around subject hierarchies pretty neatly).
Instead I want to reflect on what constitutes robust evidence, comparing the Learning About Culture Programme’s understanding with mine as a practitioner-researcher, and considering what this tells us about values. The Programme has developed a typology of evidence-gathering methods, with each level providing progressively more reliable evidence of impact. Level 1 includes anecdotal quotes and personal observations, rising up through three further levels to end with Level 5, which includes comparison groups or control trials – ‘the highest standard’. In Learning About Culture, therefore, rigour and robustness are intrinsically linked to notions of objectivity and the scientific method. Underlying this is the belief that it is possible to control the conditions and variables of an arts intervention sufficiently to be able to assert causation with confidence – i.e. that nothing apart from the intervention under scrutiny is responsible for the measurable change. The knowledge that has value here is that of the independent and objective evaluator who, having conducted the tests to determine whether progress has been made, defines success.
For those working in collaborative, qualitative, arts-based and practice-based research scenarios, different value-systems prevail. Rigour is linked more closely to ideas of research authenticity, applicability and replicability: do the methods and findings ring true to those who have direct experience of the intervention, and are they helpful in demonstrating how work might improve in contexts other than the specific one in which the research took place? ‘Objectivity’ is questioned and instead, as Keri Facer and Kate Pahl state, ‘expert knowledge’ and ‘everyday perceptions’ inform the findings and build the evidence base. Within the arts specifically there is recognition that the practice is complex, not always linear and at times contradictory. Concepts such as joy or irreverence, frustration and being oneself are not measurable, yet they can potentially be communicated through a narrative, enacted through a performance or made explicit in a film. These research outputs constitute powerful evidence of change that practitioners can use to support reflection on their practice and improve their work.
So I am keen to see how the Learning About Culture Programme develops. I welcome its acknowledgement that ‘cultural learning practitioners may also have distinctive strengths when it comes to monitoring, evaluation and learning’ and support its ambition to work with the sector ‘using a broad range of evaluation approaches’. With this in mind, I hope that the Programme recognises that RCTs are but one method for assessing and evidencing the changes that cultural programmes can bring about. To my mind, the Programme will achieve something truly extraordinary if it can communicate not only the value of cultural learning, but also the importance of evidencing it in rich and complex ways that resonate with practitioners, educators and policy makers.