Assessing ‘quality’ in practice-based research

Recently I have been in conversation about practice-based research with a foundation that is a significant funder of art-historical and museum-based research in the UK. The foundation is receiving a growing number of applications for funding for what applicants describe as practice-based research projects, and it wants to establish clearer and more transparent guidelines for assessing them. This has prompted me to look again at the different definitions – ‘practice-based’, ‘practice-as’ and ‘practice-led’, for example – and to consider what differentiates ‘practice’ from ‘practice-based research’. I’ve also been digging deeper into the assessments of quality that should be applied to practice-based research, and I want to share some of what I’ve found here.

The first place I looked for guidance was UK Research and Innovation (UKRI), not least because they issue guidelines on how research is assessed for the Research Excellence Framework (REF). It is instructive to note that UKRI score all research against three criteria: ‘originality’, ‘significance’ and ‘rigour’. Their definitions of each term are reproduced below:

Originality will be understood as the extent to which the output makes an important and innovative contribution to understanding and knowledge in the field. Research outputs that demonstrate originality may do one or more of the following: produce and interpret new empirical findings or new material; engage with new and/or complex problems; develop innovative research methods, methodologies and analytical techniques; show imaginative and creative scope; provide new arguments and/or new forms of expression, formal innovations, interpretations and/or insights; collect and engage with novel types of data; and/or advance theory or the analysis of doctrine, policy or practice, and new forms of expression.

Significance will be understood as the extent to which the work has influenced, or has the capacity to influence, knowledge and scholarly thought, or the development and understanding of policy and/or practice.

Rigour will be understood as the extent to which the work demonstrates intellectual coherence and integrity, and adopts robust and appropriate concepts, analyses, sources, theories and/or methodologies.

In addition to these three criteria, UKRI also assess:

  • The research process: the question and/or issues being explored; the process of discovery; the methods and/or methodologies; and the creative and/or intellectual context or literature review upon which the work draws, or which it challenges or critiques.
  • Research insights: the findings, discoveries, or creative outcomes of that process.
  • The time and manner of dissemination: how and where the insights or discoveries were ‘effectively shared’.

UKRI’s assessment criteria are helpful, not least because they provide clear and detailed definitions of terms such as ‘rigour’ that are otherwise open to varied interpretation depending on the discipline and the epistemological position being taken. Notable too is their acknowledgement that ‘originality’ and ‘significance’ can be judged in terms of the impact on ‘policy or practice’ as well as on ‘knowledge and scholarly thought’. In other words, good practice-based research can and should make a positive difference to practice (and policy) as well as inform academic discourse.

The UKRI criteria also highlight that the quality of any research depends as much on the integrity of the process, and on the extent to which its findings are effectively shared, as on the originality of the discoveries. We therefore need to consider how the research will be (or was) undertaken, as well as what will be (or was) examined and discovered. While looking into this I came across the work of Charles Glassick and his colleagues, who surveyed the criteria and standards used to evaluate scholarship in American universities. They constructed a useful set of standards that form a sequence of unfolding stages for assessing the quality of any research:

  1. Clear Goals – are the questions being asked important? Are the ambitions realistic and achievable? Are the purposes of the work clear?
  2. Adequate Preparation – does the researcher have the necessary skills? Do they show an understanding of existing work in the field, both practical and theoretical? Do they have adequate resources to realise the work?
  3. Appropriate Methods – is the researcher using appropriate methods? Are they modifying processes and methods appropriately as the work progresses?
  4. Significant Results – does the researcher achieve their aims? Does their work add to the field (of practice and/or theory)? Does it open up additional areas for further exploration?
  5. Effective Presentation – does the researcher present their work effectively? Do they communicate it to the intended audiences? Do they communicate with clarity and integrity?
  6. Reflective Critique – does the researcher critically evaluate their own work? Do they bring an appropriate breadth of evidence to that critique? Do they use evaluation to improve the quality of their work?

The focus within these criteria on reflective critique, and on the potential for ongoing modification of processes and methods, seems highly relevant to practice-based research, as does the emphasis on clear and ethical communication.

The need for transparency and ethical research methods is among the determinants of research quality identified by the academics Norman Denzin and Yvonna Lincoln. They identify alternative measures that are potentially relevant to practice-based research, particularly where a study seeks to effect positive change and/or involves collaboration with others. They argue for emotionality, personal accountability and responsibility to be considered in judging a research study’s trustworthiness, alongside an ethic of caring and political praxis. Going still further, they maintain that any social-justice-oriented approach to research should be evaluated according to its emancipatory potential, with quality judged in terms of caring, personal accountability and the representation of the experience of oppressed people.

I found that a similar set of principles and guidelines for ethical collaborative research has been produced by researchers at Durham University. Their criteria include democratic participation, personal integrity, equality and inclusion, and active learning. To my mind these offer compelling alternatives to more ‘conventional’ assessment criteria.

So what can we take from these different assessment criteria? My first thought is that it helps to have clear definitions of all the key terms, as these make possible transparent assessments of quality in practice-based research. Secondly, there is much to be learnt from the more ‘alternative’ criteria, which acknowledge that assessments must take account of the ethics of any research process. Finally, given the importance of researcher reflexivity within practice-based research, any quality-assessment framework would benefit from including reference to it among its criteria.
