For today’s June Blog I thought I would share a few reflections on the research process, based on my attempts to indoctrinate / introduce my children to scientific research.
We’ve done a few scientific projects over the past few weeks but the one I thought I would talk about is my nine-year-old’s most recent research question:
Which Cat Is The Best?

ELIZA
This project deployed a wide range of research methodologies and hence revealed a wide variety of research challenges. To answer her question, my daughter – let’s call her Kitty – decided she wanted to measure a number of elements that make up a good or a bad cat. For each one, we had a slightly different kind of measure. Each cat was awarded a point for every metric on which they came out best. So you can see that the first fundamental problem here is false equivalence. Eliza could win a point for being most colourful, but Orlando might win a point for effectively defending the garden from other cats. Do we think these things are equivalent? Can we even attempt to capture their relative importance? I don’t think so, but in Kitty’s research, each one scored a point and that was that.
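For the programmatically inclined, the scoring scheme above amounts to a simple equal-weight tally. Here's a minimal sketch: the metric names and winners come from the study below, but the code itself (and its structure) is purely illustrative.

```python
from collections import Counter

# Winner of each metric, one point apiece. This is the false-equivalence
# problem in miniature: "cutest" and "vet costs" count exactly the same.
metric_winners = {
    "how often we see them": "Eliza",
    "vet costs": "Orlando",
    "cutest": "Eliza",
    "stinkiest": "Orlando",
    "fluffiest": "Eliza",
    "most behaved": "Eliza",
}

# Tally one point per metric won.
scores = Counter(metric_winners.values())
best_cat, best_score = scores.most_common(1)[0]
print(scores)
print(f"Best Cat: {best_cat} with {best_score} points")
```

Note that any attempt to weight the metrics differently would require exactly the stakeholder judgements we never made.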
1. How often we see them

ORLANDO
It wasn’t immediately clear to me, but it turns out that Kitty felt that seeing a cat more often is a good thing – a controversial value judgement, at least in the view of one key stakeholder (Daddy). We interviewed each member of the household to see how often they had seen each cat that day, and then summed the reported frequencies. Kitty identified her potential conflict of interest and so did not act as informant as well as researcher. However, having dodged this methodological bullet we ran into another – the unreliability of retrospectively captured data. That is, none of us really remembered which cats we had seen, even just that morning.
Winner: Eliza, by ten (probable) sightings to seven
2. How often they go to the vet
Here we adopted a routine data analysis approach, and our main methodological challenges were missing and poorly-coded data. We don’t have all our vet bills (or at least, not in the places we looked), and those we had didn’t necessarily include the name of the cat. While looking at the bills we also decided that maybe frequency of visit wasn’t so important as cost, so we changed our variable part way through the data collection – NOT best practice.
Winner: Orlando, total recorded bills of £237.80 to Eliza’s £742.57
3. Cutest
We went for a public consultation on this one, via Twitter. Kitty proclaimed that the photos on my phone were not sufficiently adorable and took her own – a clear opportunity for researcher bias to creep in. The Twitter poll returned a resounding vote, but it is unclear how much differences in the photos influenced this judgement – as opposed to differences in the actual cats.
Winner: Eliza, with about 70% of the vote.
4. Stinkiest
We deployed a qualitative methodology for this one – both Kitty and I sniffed each cat. We agreed immediately that they smelled identical but Orlando smelled stronger. Now we had an operationalisation problem – is a stronger cat smell more or less stinky? A third party (Kitty’s sibling) was called in to arbitrate, giving this element of the research a slightly iterative, Delphi-study flavour. They decreed that stronger smells were better smells when it comes to cat smells.
Winner: Orlando, proudly smelling strongly of… cat.
5. Fluffiest
I thought this one was going to be hard to conceptualise, but Kitty simply gathered a sample hair from each cat and measured its length. Longer hair = fluffier cat. We had a brief altercation about the ethics of the sampling method, but I was reassured that the hairs were relinquished during treatment-as-usual (i.e. stroking). Another clear methodological flaw is the reliance on a single sample from each cat, but the face validity of the result (it’s really obvious who is fluffiest) suggested the method was sufficiently robust.
Winner: Eliza, by 12mm
6. Most behaved
This excellently-named category involved construction of a rating scale. Items on the scale were: a) scratches furniture, b) brings in dead animals, c) fights other cats and d) throws up. A higher score indicates “less behaved” – an undesirable outcome for a cat, apparently, despite the fact that these normative and socialised metrics bear no resemblance to what might be desirable from the cat’s own point of view. Two informants (Daddy, sibling) offered their scores, which was insufficient to evaluate scale validity. But the results were unanimous.
Winner: Eliza, who scored zero on every item.
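The rating scale above can be sketched as a small data structure. The four items and the fact that Eliza scored zero on everything come from the study; Orlando's item-level scores are invented here purely for illustration, as are the variable names.

```python
# The four items of the "most behaved" scale (from the study).
ITEMS = ["scratches furniture", "brings in dead animals",
         "fights other cats", "throws up"]

# Each informant rates each item 0 (no) or 1 (yes) per cat.
# Eliza's zeros are as reported; Orlando's scores are hypothetical.
# With only two informants, scale validity cannot be evaluated.
ratings = {
    "Daddy":   {"Eliza": [0, 0, 0, 0], "Orlando": [1, 1, 0, 1]},
    "Sibling": {"Eliza": [0, 0, 0, 0], "Orlando": [1, 1, 1, 1]},
}

def total_score(cat):
    """Sum a cat's item scores across informants. Higher = less behaved."""
    return sum(sum(informant[cat]) for informant in ratings.values())

print({cat: total_score(cat) for cat in ("Eliza", "Orlando")})
```

Summing across informants like this implicitly treats both raters as equally reliable – yet another weighting decision made by default rather than by design.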
Conclusions
The study offers seemingly robust evidence that Eliza is the Best Cat. However, I personally know that Orlando is the best, and that opinion remains unchanged despite empirical evidence to the contrary. In fact I would go so far as to question the original premise of the study. Perhaps next time we’ll do some stakeholder consultation beforehand to settle on a more appropriate research priority.