Regular readers of this blog may have noticed a downturn in the frequency of my posts over the past few months. The main reason for this is that I have been utterly consumed by writing an application to a large funding scheme. I asked you for support with that proposal in this recent post and was overwhelmed by the generosity of your responses. I will be providing a detailed reply to the comments I received from the autism community in a future post.
In the meantime, I wanted to talk more generally about a theme which I think ties a lot of my research together, and which certainly forms the supporting spine of the big grant I applied for last month. This theme is the idea of evidence-based practice, as applied to early years support, education and, specifically, technology.
Medicine has long been firmly wedded to the idea of evidence-based practice. Medical undergraduates are trained to evaluate evidence and to base the care they provide on the latest scientific findings. To give them the skills to do this, their degree will involve a number of opportunities to evaluate research – drawing on original journal articles, not just textbooks which summarise findings and sometimes introduce a little spin. In addition, students will normally carry out a number of small research projects, including both audits of existing services to evaluate ongoing practices and the collection of new data. This latter component is currently being expanded at the University of Edinburgh with the introduction of a new six-year degree programme incorporating a full year of research activity.
As a result of this emphasis on evaluation and collection of evidence, UK medical graduates develop, as standard, sophisticated skills in research. This allows them to stay abreast of new developments, interpreting the latest findings in terms of their application to a specific patient group or clinical setting.
How does this relate to my own work on technology and autism? Well, increasingly I find myself being asked to train practitioners who work with children with autism in the best uses of technology. I will be doing this at an upcoming National Autistic Society conference, for example. At these talks and workshops, I try hard to give people the skills to evaluate available technologies and apply them to their particular clients (or pupils, or family members) and setting. By focusing on these basic skills, I aim to get away from simply recommending this or that app or website, and instead give people a toolkit for making their own judgements. If successful, I hope people will go away with a new confidence which they can then apply to the myriad available technological resources, matching these to detailed knowledge of a specific user’s needs and strengths.
However, in the Q&A session at the end of the event, I frequently find myself answering questions at a much more specific level – what kind of app would you recommend for a selectively mute boy of 14? Are there any good apps with jigsaws? Which social media outlet is best for a young woman on the autism spectrum who wants to make friends? These questions are impossible to answer. I don’t know these people well enough to make a recommendation which will definitely be right for them. And in any case, technology moves so fast that anything I recommend may quickly be superseded by a newer, better (or simply different) version.
I feel that what people are looking for is something like a technology doctor: an individual trained in the skills of evaluating the evidence; someone with a broad but sophisticated knowledge of a huge variety of different needs and the technological ‘medicine’ to go with them; someone who will confidently prescribe a specific ‘dose’ and ‘course of treatment’. The problem is, this person does not exist. Moreover, without the methodological tools to gather scientifically robust evidence at a pace and scale commensurate with the range of commercial technologies available (even limiting ourselves to those which make educational or therapeutic claims, this runs well into the thousands), this person can never exist. There simply isn’t an evidence base to draw on.
Instead, what I suppose I am trying to do is pass the skills needed to evaluate technologies on to the individuals who need them. One obstacle is the aforementioned lack of good-quality evidence, and I’ve written before about the tools we might employ to get around this, at least for the time being.
However, another obstacle comes down to the basic training and knowledge people already have. Sadly, teachers do not benefit from the six-year degree programme provided to student doctors. The time given over to evaluating evidence and auditing current practices, let alone to gathering new data, is woefully brief. Likewise, the resources to do this once teachers are qualified are minimal, to say the least. Other practitioners may fare a little better – educational psychologists will normally have decent research training, for example – but support workers in adult services, pre-school nursery nurses and so on are poorly served in this regard.
This is not to suggest that practitioner training programmes are entirely to blame, nor even that they are wholly inadequate. Now that my children are in school, I am consistently amazed by the dedication, imagination and skill of their teachers. However, from the other side of the fence, as someone called upon to teach student teachers and to offer professional development training to qualified practitioners from a range of backgrounds, I am also acutely aware of the flaws in the system.
If we are ever going to get truly evidence-based practice into our schools (and nurseries, and residential centres…), we have to give our practitioners the training and resources they need to gather evidence, and to evaluate new research when it comes out. Unfortunately, that kind of expertise can’t be imparted in a three-hour workshop or a two-week undergraduate project.