I was recently delighted to have the report of our randomised controlled trial of an iPad app published in a fantastic journal, Autism. The paper came out in October 2015 and, since the trial itself ended in June 2013, you’d be forgiven for wondering what on earth took us so long.
Sorting out the data
Well, of course, after the last family visited us for their final assessment, we had to spend a bit of time getting the data ready. In particular, using coded video of children playing with their parents as one of the main ways of collecting data meant that it took a while before we had all the variables ready for analysis. Coding even ten minutes of video can take a long time, and we had to get a percentage of the videos coded again by a second person, to check that the main coder was interpreting what she saw in a sensible way.
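Just to give a flavour of what that check involves: agreement between two coders is often quantified with a statistic such as Cohen’s kappa, which corrects raw agreement for chance. Here’s a minimal sketch, assuming each coder assigns one categorical code per video interval (the codes and values are invented purely for illustration, and this isn’t necessarily the exact statistic we used):

```python
# Minimal sketch of a double-coding agreement check.
# Assumes each coder assigned one categorical code per video interval;
# the codes below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

main_coder   = ["play", "look", "play", "other", "look", "play"]
second_coder = ["play", "look", "other", "other", "look", "play"]

# Cohen's kappa corrects raw percentage agreement for the level of
# agreement the two coders would reach by chance alone.
kappa = cohen_kappa_score(main_coder, second_coder)
print(f"Cohen's kappa: {kappa:.2f}")  # 1 = perfect agreement, 0 = chance-level
```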
All this takes a good bit of time, and so it was September 2013 before I had the first draft ready to share with my co-authors. Then there was quite a lot of to-ing and fro-ing with them. Among other things, I had a couple of meetings with a statistician, who gave me expert advice on the best way to analyse and present the data. I had to make some nice pictures to represent the trial pathway (i.e. what actually happened to the families involved at each step) and to illustrate the app. My co-authors had plenty of comments on the best way to present previous work in the field and the appropriate conclusions to draw from our study. And we also had some conversations with the funder about whether we could afford open access fees for the article (so that everyone can download a copy, not just people with a subscription to that particular journal).
Getting published
After all this, we finally submitted our article to Journal A in March 2014. And that was just the beginning. The timeline below gives you an idea of what was still to come…
I expect if you’re an academic this timeline won’t look too unfamiliar. Plenty of papers don’t go through the wringer quite this intensively, but I get the feeling most academics will have had this kind of agonising experience a few times. They will recognise the exhaustion of re-writing the same work over and over, and the emotional burden of repeated rejection letters. But those of you from outside academia may have a couple of questions, so I’m going to try to answer them here.
1. If Autism is such a great journal, why not just submit it to them first time around?
In a nutshell, the answer to this question is “impact factor”. The impact factor of a journal is considered a measure of its quality or importance. It is based on a calculation of the average number of citations which articles in that journal receive. ‘Citations’ means all of the times that someone else mentions your paper in their own paper. So if my article has a lot of citations, that means a lot of other people have mentioned it in their research. That probably means my paper is quite important in one way or another (of course, it might mean my paper is awful and everyone mentions it as an example of what NOT to do…). The impact factor system means that, if my paper gets lots of citations, the journal it is published in will get a higher impact factor. A journal with a high impact factor is therefore an important journal which publishes lots of important (i.e. highly cited) research.
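To make that concrete, the standard two-year impact factor (the version most journal rankings use) is calculated roughly as:

$$\mathrm{IF}_{2015} = \frac{\text{citations received in 2015 by articles published in 2013 and 2014}}{\text{number of citable articles published in 2013 and 2014}}$$

So a journal whose 2013–2014 articles were cited 500 times during 2015, spread across 250 articles, would have a 2015 impact factor of 500 ÷ 250 = 2.0. (The years and numbers here are purely illustrative.)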
For an individual academic, it is beneficial to publish in journals with the highest possible impact factor. It helps with career progression and things like the REF (the UK’s Research Excellence Framework). Because everyone wants to publish in the best journals, those journals get loads of submissions and they can be really picky. So, we tried to publish our article in some journals with really high impact factors. It was tough trying to impress them and compete with all the other amazing submissions they get, so we got rejected a lot. Autism is a great journal – I respect their editorial board and they have a great public engagement strategy. But we would have been stupid not to have a go at an even higher-impact journal when we first started trying to get our article published.
2. Why did you keep getting rejected – is your paper not very good?
I think the paper is a good paper – I am obviously biased, because I wrote it, but I think it is fair to say that well-designed randomised controlled trials of supports for autism are still few and far between, and so this trial adds a considerable amount to the sum of knowledge. But it wasn’t good enough for the high-ranking journals we sent it to, obviously. And what did they think was wrong with it? Well, over the 18 months of repeated submission we received 16 reviews from experts in the field, and I’d say a few themes emerged.
The first theme was that people didn’t like the fact that we had a null result. In other words, the two groups in our study weren’t different from each other. The intervention didn’t “work”, or at least not in the way we measured it. One reviewer illustrated this very clearly when they commented: “Now comes the hard part. I am afraid that I am recommending that the paper be rejected. The reason for this recommendation is that, despite the innovative intervention and the above mentioned design strengths, the results showed no intervention effect.” Another reviewer noted: “given the lack of meaningful results, I do not recommend publication of this manuscript at this time, as it will not advance the literature on the use of technology in autism treatment.”
This is really disappointing, because null results ought to be shared as widely as positive ones. They are still informative and add to understanding. As another reviewer said: “Some may note that the intervention does not have a significant effect, but this is useful information. If iPad interventions do not survive the rigour of RCTs, the area really needs to be aware of this, and to reflect upon what to do about this. If future RCT publications do find an effect, this study will be a useful comparison.” Another reviewer noted: “Even without treatment effects I think that this paper teaches us something new about intervention approaches.” Sadly, in this case we got repeated evidence that the reviewers or editors simply felt that a positive result would have been more informative.
The second theme was that people were uncomfortable with our decision to deliver the intervention content naturalistically, in a game format. For example: “The intervention in this study appears to just be the handing over of an iPad with an application on it. There are no apparent teaching details for parents.” And also: “the FindMe app was not implemented uniformly across participants and meaningful conclusions from the results cannot be made.” These comments were disappointing, as one of the points we attempted to stress in our paper (but clearly not convincingly!) was that we wanted our app-based intervention to be firmly grounded in a realistic context. To me, one of the great strengths of learning from an app is that the child can be self-motivated, directing their own learning independently. This meant we had to make an app which was fun and which children would want to play, rather than an app which had to be administered by parents to a strict schedule. We were proud that we managed to do that – children played the app consistently and parents reported high levels of enjoyment. But because it meant that not every child had precisely the same experience, some reviewers identified this as a weakness in the design rather than a strength.
The third theme is about measurement, and I agree with the many reviewers who raised it. It was stated most generously here: “One reason for not finding an effect, I imagine, is the outcome measure. This was an ambitious outcome measure, and again should be applauded. Much literature shows improvement upon the computer-based task, though this is only the first step. Attempting to identify generalization is essential.” So what does this reviewer mean? As I noted at the top of this post, we gave children an app but then measured whether they showed any changes in their behaviour when playing with a parent. This was definitely ambitious, but we felt strongly that for any intervention, app-based or not, to be really worth its salt, it should be able to influence what people actually do in day-to-day situations. Getting better at the game itself is all well and good, but we wanted to see changes which went beyond just that. Nevertheless, in retrospect I do wish we had tried to measure not just that ambitious outcome of behaviour during play, but also attempted to capture something a bit closer to the learning within the game itself. As another very sensible reviewer noted: “I find it a pity that the ambitious aims of the study possibly stood in the way of finding an effect of the intervention. Expecting generalized and sustained change in a real world situation may not be very realistic in the context of playing an app less than 15 minutes a day, and it may have been interesting to include a more proximal measure of success as well.”
3. So you’re pretty cross about the whole review process then?
No, not at all. First of all, I firmly support the principle of peer review. Yes, it isn’t perfect, and sometimes you could cry. But ultimately I am glad to have the opportunity to take on board the comments of expert peers. In this case, there is no doubt that the published article is considerably stronger than the first draft we submitted back in March 2014.
Second, as the lead author on the paper, I need to take responsibility for my writing. If a reviewer is unconvinced by my argument, then I need to construct a stronger argument. If the reviewer thinks I have missed out an important detail, I need to make it more explicit. If the reviewer has concerns about my methods or theory, I need to address these. I am not writing for myself; I am writing for someone who doesn’t know my study, what I did, or why, and the review process forces me to explain these things clearly and in full.
Third, almost without fail, the 16 reviewers of this paper went out of their way to acknowledge its strengths and phrased their comments constructively and helpfully. While I didn’t always agree with them, they respected the work we had put into the paper and, in turn, I respect them.
4. Really? So you’re just fine with the fact that it took 18 months and five journals to get your paper published?
Well, I’d be lying if I said there weren’t times when I felt like tearing my hair out. Here’s one final reviewer comment: “With increased interest in tablet devices and their widespread use, it is critically important that researchers provide information regarding their appropriate use and role in instruction.” Maybe it’s not immediately obvious why this caused me so much distress? Well, they also recommended that our paper should not be published.
5. Are we done now?
Yes. Thanks. It’s been very therapeutic writing this post – publishing our trial report was a gruelling experience and it’s good to share. But I did also have a slightly loftier ambition, which was to expose some more of the realities of life as a researcher. If we seem obsessed with publication, then perhaps this goes some way to explaining why…