January is “Debating Digital Detox” month on the Business and Society blog. In the New Year, many of us commit to healthy changes, set good habits and try to reset our bodies after a period of overindulgence. For some this means cutting out alcohol and fatty foods, or exercising more. But an increasing number of people are choosing to give up their electronic devices. At a time when so much of the population is still working online, we must ask if this is even possible. More importantly, we should ask why we see our relationship with digital technology so negatively. Are we so addicted that we need a period to ‘cleanse’ ourselves?
David Ellis has written extensively about our relationship with technology. We sat down with him to discuss the negative bias towards technology, his goals for his research and the importance of creating space to nurture ideas.
Can you tell us about your current role and the path that took you there?
I’m an Associate Professor in Information Systems in the School of Management. But I completed my undergraduate and PhD degrees in Psychology at the University of Glasgow, so prior to that I was always in Psychology departments. I started my career doing what most psychologists do - running very traditional experiments in laboratories. Then things like smartphones and wearable devices came along, which I suppose is what got me interested in leaving the laboratory and finding out what sorts of things you can learn and measure about people away from the lab. Before Bath, I was at Lancaster University for nearly five years. Prior to that I was at the University of Lincoln for a few years, and before that I was at Lancaster again. As many academics do, I flitted around working at various places – I visited the University of Essex, and also worked at the Scottish Government briefly as part of my PhD.
Can you explain more about your work in Psychology, and how that brought you to Management research?
My PhD was mainly about social constructions of time. A lot of it considered how people represent days of the week - one of the earliest studies I did looked at the experience where you can’t remember what day of the week it is. Psychologically, there are reasons for that, and it’s a lot to do with the way that we represent Monday and Friday in our heads - we have very strong associations with Monday and Friday and very weak, overlapping associations with the middle of the week. It was probably one of the shortest experiments I ever ran - possibly one of the shortest in the history of Psychology. Basically, I asked people to tell me what day of the week it was, as quickly as possible, on various days of the week. And what you find is that people take three times as long to do that on a Wednesday or a Thursday as they do on a Monday or Friday. In fact, people often panic and don’t know the answer in the middle of the week.
But I was more interested in finding out what the practical applications of that were. I started to wonder what other behaviours changed across the week, which is what led me to look at things like missed GP appointments. I found that people are more likely to miss doctor’s appointments at the start of the week than at the end, and that the effect is stronger in younger patients than older ones. So, if you give the older patients appointments towards the beginning of the week and move the younger people towards the end, that can help to maximise attendance and improve the system by saving money and so on. I suppose, without realising it, this led me down the path towards management research. Even while I was doing purely psychological work, I was always interested in the applied angle. Often, trying to solve a problem is a really good standpoint from which to approach research.
Tell us about your recent work - what are you working on at the moment?
Many of my projects at the moment are looking at the use of digital traces. Digital traces are like ‘footprints’ that you leave online - from the intentional, like pictures posted or comments on YouTube videos, to the unintentional, like the records of our Google searches and the logs of our movements from our map apps. Every day we interact with all these devices, and they record lots of things about us. There are tremendous implications for privacy, security, health and general well-being, which means that there are huge opportunities for psychology research and social science more generally. We’re trying to find out: is this technology good? Is it helpful? How is it changing how people interact with society? So, a lot of the projects I’m involved with sit around that broad theme. Specific projects range from research looking at how digital traces can be used to understand patient behaviour in the healthcare system and improve patient outcomes, through to work that asks whether using your smartphone every day is bad for you.
I also recently completed a book about smartphones within psychological science. It explores the research around this issue, and the tension between two specific schools of thought. On one hand, you’ve got all this work about how to use this new technology in ways that are beneficial; on the other hand, psychology has a tendency to over-egg the potential harms of technology - harms that, in many cases, turn out to lie elsewhere. If you look at the internet or video games, huge swathes of research (some of it very poor) tried to link these new technologies to basically every problem under the sun: they make teenagers more depressed, they increase suicide rates, and so on. But once we’ve built up a larger body of knowledge, including work conducted to a higher standard that integrates new technology into research designs, many early findings collapse. By that point, though, society has moved on and there’s a new piece of technology to demonise, so the stigma remains. The views about smartphones really epitomise this pattern.
So, the book is partly motivated by general academic curiosity and wanting to look at things I’m interested in, but also partly by a frustration with the school of thought within Psychology that constantly misidentifies the problems caused by mass-adopted technology. And this is damaging, because it moves the debate away from things that really matter - yes, maybe a smartphone has some effect on concentration at school, but the things that have the biggest impact are adverse childhood experiences, poverty, violence at home and so on. So, the smartphone (and new technology more generally) often appears as a lazy answer to complex problems. The very same technology can, of course, also provide new tools and solutions to mitigate online and offline harms. It’s complicated!
Do you have any ideas as to why there’s such a negative bias towards these things, within the discipline?
Interestingly, I’ve got colleagues in the School of Management who are really interested in that question, exploring why Psychology as a field has become somewhat lost in its attempts to understand the impact of new technology. I suspect part of it comes from a lack of understanding of, or an unwillingness to engage with, the technology - lots of people doing research into the effects of smartphones, for example, don’t appear to know very much about the technology or its capabilities! Another big part is the lack of integration between different schools of thought. You’ve got a whole group of psychologists who have developed apps for smartphones, or video games, that can assist with cognitive testing, with tremendous applications for health, including the early diagnosis of Parkinson’s. But you’ve also got a group looking at how smartphones might damage kids’ cognition, who could really benefit from some of the techniques and tools used by the first group. But they don’t talk to each other. At the same time, others are rightly focusing on what I would view as genuine challenges arising from new technology, including lack of access, misinformation and criminal activity.
And this is why interdisciplinary research is so important. We wouldn’t try to understand the health service without involving healthcare workers, so why would you try to understand something as complex as a smartphone without involving computer scientists? I think that’s something that Management is better at, by and large. As a discipline, it tends to bring together everyone who might be relevant to solving a problem.
What impact do you hope your work has?
As I mentioned earlier, I’m a big believer in doing research that has some sort of practical application at the end. It’s often easier (for me, at least) to get passionate about something when working towards some sort of change. There’s sometimes a view in academia that everything stops once the paper has been published, but I think the real test - the real peer review - happens when it goes out into the world. Does anyone care? Does anyone use it? Some of my work, I hope, has had tangible impact - from helping to change appointment systems in the NHS through to contributing to recent screen time debates within government.
But I think it’s still really important that we make space for research where we can just follow an interesting thought and see where it leads, and not worry about the big picture. One tremendous privilege of being an academic is that you can more or less just work on the things that excite you.
Universities have targets, and they are obviously important for preparing students for the workplace, but we shouldn’t forget that they are places to explore ideas. My worry is that early career researchers in particular have so many things they’re supposed to do that they struggle to find time to really sit with their ideas and figure out what they’re passionate about. I was really lucky when I was at Lancaster - for its 50th anniversary the university hired 50 early career academics and gave us two years to just do research. I had no teaching responsibilities, so while I was expected to hit certain targets, I had the freedom to explore my interests. I’ve been really lucky in my career that people have given me space, and trusted me to just get on with stuff.
What are you most proud of in your career so far?
My first thought is that I’m quite proud of the fact I managed to write a whole book! That was hard work and took a lot of time. I also get a lot of satisfaction from seeing the successes of people I’ve worked with or mentored. I like seeing how ideas get under their skin, and then witnessing them drive the conversation forward, make contributions to knowledge and become leaders in the field. I think that’s really great. I’m proud of the people I’ve worked with who continue to do excellent research.
Who’s been your biggest influence in your career?
I don’t think I could name one person, but there’s definitely a type of person who’s inspired me. It’s the sort of person who - as I mentioned before - is just all about the ideas. People who recognise that, yes, science is about progress, but it isn’t all going to happen at once. It’s supposed to be slow, considered and thoughtful. So, someone who will have conversations with you, and is interested in what you’re doing, but then generally gives you the freedom and space to figure it out. Mentors like that have been invaluable to me.
Also, most of my collaborators have inspired me. There’s sometimes a focus on specific individuals in academia, but every individual has groups of people around them making stuff happen. Of all my work to date, only a handful of papers are authored by me alone, so it’s important to acknowledge the network of people who are the joint architects. That goes for the people I’ve worked with only once, as much as for those I’ve worked with dozens of times. Everyone’s only as good as the team around them!
Thank you, David!