This was a project I put together as a master’s student in Human Centered Design and Engineering at the University of Washington. HCDE 502, Empirical Traditions of Human Centered Design and Engineering, is a required course for students; it takes a deep dive into both qualitative and quantitative research and how we understand and evaluate each. At the end of the course, we were asked to put together a final project highlighting our learning. As I’ve continued to explore research in the program, I find myself returning to this project to help define my research methodology.
When I first began exploring the world of research, I quickly came face-to-face with a hidden tension between qualitative and quantitative research in my own approach to data. The tension began in my years as a teacher, where I struggled with assigning a quantitative measure to student work. I saw teachers around me entrenched in what I referred to as the “bucket of points.” Every assignment was tied to a value, and every student received a percentage of that value. All these numbers added up, sometimes weighted, sometimes not, into a single value that, by the end, was meant to represent the sum total of a student’s learning in that class.
I was not convinced that value had any meaning. And my colleagues’ poor practices in statistics and data analysis did not help. I saw teachers try to fit a class of thirty-some students to a bell curve, as if thirty high school students were a large enough sample to exhibit that statistical trend. I saw grades based too heavily on earlier work when the student had clearly reached understanding by the end of the course.
Yet even with accurate and well-planned quantitative measurements, there are still aspects of a student’s learning that mere numbers cannot capture. A student’s pathway toward self-discovery, forming an identity, finding a calling, and eventually contributing to society is more complex and does not correlate directly with grades.
Similarly, we are seeing an ironic shift in educational philosophy on the international stage. China, India, and Japan have historically been driven by structure and quantitative measurement in their educational systems. While this has led them to success in producing a plethora of technical workers, it has missed the mark in developing innovators, entrepreneurs, and leaders. In the past few decades, they have been shifting educational policy and practice to more closely follow the once fractured but diverse education of the United States. The United States, on the other hand, has made the opposite shift: seeing a dearth of candidates for technology companies, we have moved toward a more unified, structured, and quantitatively driven education system.
The danger comes from our blindness to the implications for the soft skills our nation has been so successful in developing. If we swing too far toward quantitatively driven education, we risk failing to teach our youth to innovate, lead, and invent. And this cuts to the core of my research methodology.
What is needed is an appropriate balance between the quantitative and the qualitative. People are complex. We do not fit into nice little boxes divided up by numbers and calculations, so we must embrace the methods that do not quantify along with those that do. My affinity both for the numerical and concrete and for the descriptive and abstract strengthens my ability to understand the world around me. In fact, it allows me to bridge the gap between developers and teachers, between the hard sciences and the soft sciences, between the technical and the social.
Misconceptions of Quantitative and Qualitative Research
This infographic is a boundary object that takes two sarcastic accounts of data being used to reach faulty conclusions and reveals the flaw in putting too much faith in any one methodology. (What is a boundary object? Susan Leigh Star coined the term for an artifact that crosses social, economic, political, and cultural boundaries. Although its interpretation changes in different environments, a boundary object is recognized across those boundaries.) All methods can be used well or used poorly. Just as teachers can use valid statistical concepts (the bell curve) in invalid ways (to curve scores in a small sample population), so too can valid qualitative or quantitative methods be used in invalid ways.
A Spectrum of Methods
As I dug deeper into the nuances of qualitative and quantitative research, I discovered strong similarities and connections. Where quantitative research had internal validity, qualitative had credibility. Where quantitative had external validity, qualitative had transferability. These aspects of the two methodologies were two sides of very similar coins. Rather than two opposing fields, I discovered a spectrum between two different approaches to the same end: understanding our world. The quote is from a prompt I wrote in a WordPress blog during this discovery process. The image below is one I came across in my studies that visually represents the spectrum of methods.
Pieces of the Puzzle
“Pieces of the Puzzle” is a research paper I wrote exploring the grounded theory approach of Dr. Ruth E. Brown’s (2001) “The Process Of Community-Building In Distance Learning Classes.” In my analysis of Brown’s (2001) work, I explore how a single method, even a single study, is just a small piece of the complex puzzle of understanding our world. Each method and each study builds upon the others in a complex network of interconnected research. Rather than being overly critical of the shortcomings of any single method, we should instead seek to understand each method’s strengths and flaws. Just as Scrum Masters follow the way of the Samurai, not limited to “favorite” tools but using whatever tool will help them accomplish their ends, so should researchers be open to matching methods to the piece of the greater puzzle they are tackling at that moment.