Inclusive research validity gives us the whole truth about the systems we study and their diverse users

A guest post by Dr. Tonya Smith-Jackson, professor and chair, Department of Industrial and Systems Engineering

Photo: Dr. Tonya Smith-Jackson

All systems are human systems, and human-systems engineering cannot advance effectively without knowledge of multicultural factors that influence system design and evaluation.

Many of us have used research designs that do not account for the contributing factors associated with multicultural users, multicultural contexts, or multicultural ecosystems. Inclusive research validity rests on the premise that the researcher conducts research activities, analyzes data, and translates results in a way that, when generalized, speaks truth for diverse users and ecosystems.

This is discussed extensively in many publications, one of which is a well-intentioned book by my co-authors (Resnick and Johnson) and me, titled “Cultural Ergonomics: Theory, Methods, and Applications.” If the target users are diverse (gender, age, generation, ethnicity, nationality, race, religion, geographic region, etc.), then it is our responsibility as scientists and engineers to exercise due diligence with respect to inclusive research validity.

This applies to almost every system we design. I cannot think of a single system in which all users are exactly the same on all human attributes; it is impossible. Yet we continue to conduct analyses that do not tell the whole truth, and we continue to publish peer-reviewed work that is accepted without reporting sample demographics or testing for individual or important group differences.

We also rely on old aggregation methods rather than distribution-free methods and big data methods that can identify patterns at the individual level. Aggregating values to examine central tendencies makes little sense in many instances because there is no “average user.” If a sample consists of 90 percent males, then the mean or any other measure of central tendency will more closely approximate the males than the 10 percent who are female.
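To make that arithmetic concrete, here is a minimal sketch in Python using invented task-completion times (the numbers, the 90/10 split, and the measure are hypothetical, not drawn from any study):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical task-completion times in seconds: 90 male and 10 female participants.
# The values are invented solely to illustrate the aggregation problem.
male_times = rng.normal(loc=40, scale=5, size=90)
female_times = rng.normal(loc=55, scale=5, size=10)

all_times = np.concatenate([male_times, female_times])

print(f"Overall mean: {all_times.mean():.1f} s")   # pulled toward the 90% majority
print(f"Male mean:    {male_times.mean():.1f} s")
print(f"Female mean:  {female_times.mean():.1f} s")  # poorly described by the overall mean
```

With these made-up numbers, the overall mean lands within a couple of seconds of the male mean and more than ten seconds from the female mean, so any design decision based on the aggregate would serve the female subgroup poorly.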

We use traditional statistical aggregations in a manner that will continue to yield products and systems that are, at best, a minor annoyance and, at worst, a full-fledged safety hazard to those who are not represented in the sample. We drop outliers without examining the pattern in them: are all of the outliers women, older people, Latino participants, white men? We fail to check distributions and fail to analyze individual-level data for patterns before we aggregate.
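As a sketch of what that pre-aggregation check might look like, assuming a hypothetical pandas DataFrame with a response measure and a demographic column (both invented for illustration, not prescribed here):

```python
import pandas as pd

def outlier_pattern(df: pd.DataFrame, value_col: str, group_col: str) -> pd.DataFrame:
    """Flag outliers with the 1.5 * IQR rule, then cross-tabulate them by group.

    An illustrative pre-aggregation check, not a prescribed method:
    before dropping any outlier, look at who the outliers are.
    """
    q1, q3 = df[value_col].quantile([0.25, 0.75])
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    flagged = df.assign(is_outlier=~df[value_col].between(low, high))
    return pd.crosstab(flagged[group_col], flagged["is_outlier"])

# Invented example data: if the cross-tab shows that every outlier belongs to one
# demographic group, dropping them silently erases that group from the analysis.
df = pd.DataFrame({
    "score": [42, 45, 44, 41, 43, 40, 46, 44, 71, 69],
    "group": ["M", "M", "M", "M", "M", "M", "M", "M", "F", "F"],
})
print(outlier_pattern(df, "score", "group"))
```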

Of course, we will not know whether cultural factors play a role unless we test for them. We cannot shy away from this by arguing that there is no effect unless we have tested for one or found research literature indicating that a particular cultural attribute or demographic does not contribute to the phenomenon under study.
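A distribution-free comparison is one way to run such a test. The sketch below uses a Mann-Whitney U test on invented usability scores for two subgroups (the test choice, variable names, and numbers are hypothetical illustrations, not something this post prescribes):

```python
from scipy.stats import mannwhitneyu

# Invented usability scores for two demographic subgroups.
group_a = [68, 72, 75, 70, 74, 69, 73]
group_b = [55, 60, 58, 62, 57, 59, 61]

# Two-sided, distribution-free comparison: no assumption of normality or equal variances.
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

Reporting a test like this, or its null result, alongside the sample demographics gives readers the evidence they need to judge how far the findings generalize.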

It is time to transform what we do and address the elephant in the room: too much research over-generalizes results derived from nearly homogeneous samples to the rest of the world. We cannot continue to over-generalize from majority-group samples, university student samples, and the like, and expect to make a real difference for everyone. It is high time that funding agencies and journal editors insisted on inclusive validity reporting.
