So here’s the second in a series of blog posts which I’m aiming to write over a fairly condensed period of time, all drawing on recent discussions which were particularly crystallised by a pair of workshops as part of Innovative Learning Week 2016. All the posts are aiming to provide an evidence-based response to current concerns around technology use by children, which are often exacerbated in relation to children with autism spectrum disorder. To get the overall intro check out this previous post.
Big question this time around: Are we addicted to technology?
There have been a few headlines recently, like this one, raising the possibility that we may be addicted to technology. Common Sense Media recently published a report on this which extrapolates wildly from selected publications (the report does not appear to use a systematic evidence-gathering strategy), turning straightforward statements of scientific evidence into anxiety-promoting judgements. For example, on page 5 of the report they state that: A 2010 study of 8- to 18-year-olds found that young people were engaging in media multitasking for 29 percent of their overall media use, fitting over 10 hours of media use into 7.5 hours of their days… But this statement is filed under the heading: Our digital lifestyles, which include frequent multitasking, may be harming our ability to remain focused. There is no evidence that I can see to support this interpretation. And don’t even get me started on their endorsement of the proposal that technology may be causing a reduction in empathy – I have covered this elsewhere.
In fact, I’ve found little direct empirical evidence testing the assumptions that underpin a range of statements pertaining to “technology addiction” – especially evidence relating to the latest generation of mobile touchscreen technologies, like iPhones. To really test these claims we would need:
- information about how interacting with novel technologies affects the activity of neurochemicals such as dopamine, in the reward centres of the brain. For technology to be even potentially addictive, it would have to have a neurochemical effect similar to other addictive substances, like nicotine or alcohol.
- information linking rates of technology use, in a causal and dose-dependent way, to negative outcomes. The American Society of Addiction Medicine defines addiction as follows:
Addiction is a primary, chronic disease of brain reward, motivation, memory and related circuitry. Dysfunction in these circuits leads to characteristic biological, psychological, social and spiritual manifestations. This is reflected in an individual pathologically pursuing reward and/or relief by substance use and other behaviors.
Addiction is characterized by inability to consistently abstain, impairment in behavioral control, craving, diminished recognition of significant problems with one’s behaviors and interpersonal relationships, and a dysfunctional emotional response. Like other chronic diseases, addiction often involves cycles of relapse and remission. Without treatment or engagement in recovery activities, addiction is progressive and can result in disability or premature death.
By this definition, a technology addiction would lead to pathological pursuit of more access to technology, with features such as a requirement for ever-increasing doses over time to satisfy cravings. Even in the selected cases presented in reports of supposed technology addiction, this kind of pattern is not evident.
So, if we don’t have the evidence we need, is there anything we can say about “technology addiction”?
Yes, I think so. There are a number of ways we can explore the arguments around technology addiction made in this example article, to see if they hold water. I’ll take a few assumptions which seem to be central to the “technology is addictive” argument and test each against the evidence, using illustrative quotes.
1. Too much technology causes distractibility
One of the main accusations made by technology addiction proponents is that technology is related to distractibility, with a consequent decrease in ability to get things done. For example, in this quote from the aforementioned BBC article, an author & psychologist says:
“We see a decrease in memory, a decline in grades, they’re not developing the part of their brain that’s a muscle that needs to be developed for singular focus,” she told the BBC.
First problem with this – no evidence. There’s no empirical basis that I am aware of for saying that use of technology causes a decline in grades or a decrease in memory. On the contrary, there are attempts (though with their own limitations) to harness the power of technology to improve memory. For more details, see the work of this research group. The second issue is the characterisation of the brain (let’s just overlook the metaphor of the brain as a muscle for now…) as designed for singular focus. The brain is in fact a massive multi-tasking machine, managing a series of simultaneous basic physiological functions (heart beat, breathing), processing incoming sensory data (sounds, sights, smells, etc.), controlling movements and responses, and interpreting all of this information using high level processes – attention, inhibition, application of prior knowledge and so on – not to mention encoding some of this experience in long term memory. If we describe technology as a distraction, because it is multi-sensory, fast-moving and full of useful and interesting content, we might as well describe the entire world as a distraction.
2. Technology late at night prevents sleep
Again, a quote from the BBC piece: “They go to bed but can’t sleep, or fall asleep exhausted and wake up tired. People started telling me they couldn’t switch their brains off.”
Just to prove to you that I am not an uncritical fan of technology, I include this one as a rare example of a statement I do agree with. For once there is some decent evidence, such as this large USA survey, that using technology in the hours immediately before going to bed can be disruptive to sleep, with knock-on consequences for well-being and academic / professional efficacy. This may be because the blue light from modern phone screens is disruptive to the body’s physiological expectation of reduced light when it is time to sleep. It could also plausibly be because certain kinds of technological activity (for example, checking your work emails) may cause anxiety or stress before bed, which then leads to trouble falling asleep.
So here’s a bit of a no-brainer: if you’re having trouble falling asleep, try turning off the TV an hour before bed, leaving your phone downstairs, and creating a routine which allows for a little space to relax before bedtime. This is good ‘sleep hygiene’, on a par with not having a cup of coffee at 11pm or avoiding arguments with your partner at bedtime, and it is a stretch to argue that this legitimate phenomenon provides evidence for technology addiction.
3. Technology overuse is associated with other negative personality traits
“Even if they are watching TV they have multi screens. It’s a level of hyperactivity driven by a fear of not being in control.” – BBC article again
One of the major tools used to bad-mouth technology is the drawing of spurious associations between technology use and undesirable-sounding characteristics. My, admittedly brief, delve into the literature on personality traits and technology use suggests a lack of high-quality, systematic research. The findings that do exist are generally not robust, and often hard to interpret. For example, this paper reports in the abstract:
More disagreeable individuals spent increased time on calls, whereas extraverted and neurotic individuals reported increased time spent text messaging. More disagreeable individuals and those with lower self-esteem spent increased time using instant messaging
In terms of arguing for the existence of technology addiction, and its association with a specific kind of person, I find the existing research evidence equivocal at best.
4. Technology overuse is particularly bad for children
This is a really interesting one, as normally I find that people are particularly anxious about technology use in children because of a concern that the developing brain is vulnerable to the supposed negative effects. However, in this BBC piece for the first time I have come across a perspective which contradicts that point of view, and which I find both refreshing and (relatively) convincing.
“Up-and-coming digital natives will be more discerning than us,” she explains. “We’re still in the ‘Ooh, isn’t it wonderful?’ phase of technology, we are still excited by it. Our generation hasn’t got the hang of how to respond to it so we respond very reactively.”
Now just to be clear, there is no more empirical evidence for this than for any other statement. But I think an important counterpoint to the usual fears is presented here, suggesting that children growing up with mobile, connected and accessible technology in their lives will be more competent users of that technology. This makes sense to me, though it is hard to see how it might be empirically tested as a hypothesis.
More broadly, I think the logic of this argument, if you agree with it, can provide a framework for how to approach technology with children and young people – as a tool or resource, from which goodness can be extracted more effectively if the appropriate skills are developed. Such skills might include mastery of interfaces (from swiping to Googling); competence in online security and personal safety; and a degree of self-regulation of activities, both between different uses of technology and between digital and non-digital activities. This describes my personal approach to technology for myself, and for my children, for sure.
Tackling problems in the digital world head on, by giving people skills and experiences, seems to me to be at odds with the way in which many parents are encouraged to reserve technology as a reward for when other ‘better’ activities have been completed, as encapsulated in this guide for parents. I would draw a parallel with the practice of tightly controlling children’s intake of sweet foods, which is at odds with development of an internalised healthy eating model. As this classic paper states:
… child-feeding practices have the potential to affect children’s energy balance via altering patterns of intake. Initial evidence indicates that imposition of stringent parental controls can potentiate preferences for high-fat, energy-dense foods, limit children’s acceptance of a variety of foods, and disrupt children’s regulation of energy intake by altering children’s responsiveness to internal cues of hunger and satiety. This can occur when well-intended but concerned parents assume that children need help in determining what, when, and how much to eat and when parents impose child-feeding practices that provide children with few opportunities for self-control.
Taking a comparable approach to a child’s ‘digital diet’ seems destined to put technology firmly in the ‘guilty pleasure’ category rather than position it as a functional tool in modern life.
5. Technology addiction requires specialist treatment
This is the one which gives me the greatest pause when I read of ‘experts’ talking up the prevalence of technology addiction. In nearly every case, I find that these individuals are also offering private diagnostic and treatment services, whether they are a clinical professional or simply a parent who has developed a spin-out “Mom tips” type business. It seems inappropriate to call out individuals here without having any personal experience of their services or knowledge of their various professional backgrounds. But I would note that this conflict of interest is a factor flagged as a common feature of bad science, and would urge caution when those arguing for the existence of technology addiction are also offering treatment for this condition.
Thanks for bearing with me. More blog posts using evidence to tackle modern concerns around technology to follow…