I am now well underway with the first set of interviews for my research project. Although I did spend time planning the interviews, what I did not do was properly familiarise myself with qualitative data collection and analysis.
That seems like quite a large oversight, but for whatever reason it did not occur to me to do this. I was never taught any qualitative research methods in my undergraduate degree or during the taught element of my doctorate, but that is no excuse. If anything, it is even more of a reason to learn about them!
This past week I have been doing just that, and have been enjoying it too. I started by reading through some chapters of Research Methods in Human-Computer Interaction (Lazar, Feng, & Hochheiser, 2017) to get a feel for the methods used in my general research field. I try to take a systematic approach to my work, and I have been glad to read about some of the established methodologies for analysing qualitative data.
The method that I have read the most about is called grounded theory analysis (GTA). GTA can be performed on any qualitative data, from interview transcripts to films. From my understanding, GTA aims to produce a theory from the data, which helps to understand the data in the context of the question being explored. This is achieved through coding, which involves attaching labels, or 'codes', to any part of the data that stands out as interesting or important.
In some cases, the codes could be based on prior knowledge of established theory. For example, you may already know that the difficulties someone with dementia experiences tend to fall into one of three categories: physical, cognitive, or perceptual. In other cases, what is known as emergent coding could be used, where the codes are defined by the person performing the analysis as they read through the data.
In either case, the codes are then grouped into broader categories to try to understand the relationships between them. This results in a theory that can then be tested against new data to see whether it still holds.
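To make the coding and grouping steps concrete for myself, I put together a tiny Python sketch of how coded excerpts and categories could be represented. Everything in it, the excerpts, codes, and categories alike, is invented for illustration; it is not taken from any real data set or GTA tool.

```python
# A toy illustration of how coded interview excerpts and their grouping
# into broader categories might be represented. All excerpts, codes, and
# categories below are made up for this sketch.

from collections import defaultdict

# Each excerpt from a transcript is tagged with one or more codes.
coded_excerpts = [
    ("I keep forgetting where I put the remote.", ["memory loss"]),
    ("The buttons are too small for my fingers.", ["fine motor difficulty"]),
    ("I can't tell the icons apart on the screen.", ["visual confusion"]),
    ("I get lost halfway through the menu.", ["memory loss", "navigation difficulty"]),
]

# Codes are then grouped into broader categories (chosen a priori here,
# but in emergent coding they would develop as the analysis progresses).
categories = {
    "cognitive": {"memory loss", "navigation difficulty"},
    "physical": {"fine motor difficulty"},
    "perceptual": {"visual confusion"},
}

# Count how often each category appears, to get a feel for the relationships.
category_counts = defaultdict(int)
for _, codes in coded_excerpts:
    for code in codes:
        for category, members in categories.items():
            if code in members:
                category_counts[category] += 1

for category, count in sorted(category_counts.items()):
    print(f"{category}: {count} coded excerpt(s)")
```

Even this toy version helps me see how the spread of codes across categories could hint at where a theory might start to form.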
The process is iterative: you are encouraged to revisit and adjust the original data and codes as you learn more about the topic area. It is also important to be open-minded and to ask questions of the data that can help uncover new information.
As with anything, there are caveats to consider, such as the validity and reliability of the codes. This is why it is important to follow an established method and to describe exactly what was done, so that the results can be critically reviewed by others. One example is that the person who collects the raw data often also analyses it, so they may be biased towards a certain result. A second person could also code the data, and the two sets of codes could be compared to see how closely they match. If both coders give the same or similar codes to the same data, then the coding can be considered more reliable.
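To see how such a comparison might work in practice, here is a small Python sketch that computes simple percentage agreement between two coders and Cohen's kappa, one commonly reported agreement statistic that corrects for chance agreement. The code labels are made up for illustration.

```python
# A rough sketch of comparing two coders' labels for the same excerpts.
# The labels below are invented; in a real study each entry would be the
# code a coder assigned to a particular excerpt.

from collections import Counter

coder_a = ["physical", "cognitive", "cognitive", "perceptual", "physical", "cognitive"]
coder_b = ["physical", "cognitive", "perceptual", "perceptual", "physical", "physical"]

n = len(coder_a)

# Observed agreement: proportion of excerpts given the same code.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement by chance, from each coder's marginal code frequencies.
freq_a = Counter(coder_a)
freq_b = Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

# Cohen's kappa: how much better than chance the observed agreement is.
kappa = (observed - expected) / (1 - expected)

print(f"Observed agreement: {observed:.2f}")
print(f"Cohen's kappa:      {kappa:.2f}")
```

In practice, the two coders would usually also discuss any disagreements and refine the code definitions together before coding further data.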
As a next step in developing my qualitative research methods skills, I plan to perform a GTA on some example data that I have found online, which has been developed specifically for this purpose. I want to explore different tools for doing this, as well as the process itself. I might need to do this a few times with different data sets to get plenty of practice.
If you regularly work with qualitative data, what methods do you use for analysing them? Have I understood the core elements of GTA correctly? Let me know!
References
Lazar, J., Feng, J. H., & Hochheiser, H. (2017). Research Methods in Human-Computer Interaction. Retrieved from https://learning.oreilly.com/library/view/research-methods-in/9780128093436/