THE CASE FOR ETHICAL TECHNOLOGY:
DIVERSITY, EQUITY, AND INCLUSION

Background:

In presenting a vision for rethinking the relationship between technology, the humanities, and diversity, equity, and inclusion, we might begin with the parable of the pilot seat.

A few years ago, at UCLA, a colleague wanted to better understand the profession of piloting, which was, and remains, male-dominated. There are many theories that explain why. According to one, a main career trajectory for pilots runs through the air force: former air force trainees enter the profession after their service. Since the air force, and the military more broadly, is disproportionately male, that imbalance carries over into the pilot profession, skewing it male. Another theory posits that the nature of the work itself, with its long periods of travel away from home, has (at least historically) been unattractive to women, who disproportionately prefer, or need, to stay home with families, and that this accounts for the profession’s gender imbalance.

This colleague, however, found one factor that, more than any other, determined who would become a pilot: the design of the pilot seat.

A pilot seat is not a random design. It is a highly specified, deeply considered element of the pilot’s interactive workspace. The seat is designed so that a body sitting in it can reach the controls with maximal agility, see the panels with maximal clarity, and remain seated for long periods without becoming uncomfortable or distressed.
It is designed to fit, in other words, a human body.

But whose human body does it actually fit? The seat is designed to fit a “standard” body, which, in the metrics of that design, is a male body. Women, who tend to be smaller in stature, would sit in that seat and, on average, feel uncomfortable. They would be less agile than their male peers; they could not use the controls or see the panels with the same ease and dexterity. And most of them would never know it was the seat, and not their own ability, that compromised their success. They might look around at their male peers and see that those peers seemed more agile, more able, just overall “better” at the job. They would conclude that they were simply not as competent and would be better off in another profession. And those women would self-select out.

I offer this study, first, as a real portrayal of how technological design, even design that appears neutral, may contain bias that manifests as sociological, cultural, and economic inequality. The seat is a powerful metaphor: we know that it matters who sits in the seat.

But I offer this study also because the problem does not lie simply in who sits in the seat. It is not only a sociological, cultural, and economic problem. It is a technological problem. Who designs the seat?

Engineers design the seat. And engineers are disproportionately male. They disproportionately have male bodies, with the proportions of a male body. And they will design for themselves. We all have a limited ability to design for others unlike us; we build what we build first and foremost with ourselves in mind. There is nothing insidious in this: it is automatic and morally neutral. But if it is true that we design for ourselves, then the selves involved in that design matter. It matters to have many selves, from a diversity of contexts, backgrounds, and bodies, designing the seat.
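The arithmetic behind the parable is worth making concrete. Below is a minimal back-of-the-envelope sketch in Python. The stature figures are assumptions, loosely based on published averages for U.S. adults, and the seat’s “design range” is taken, purely for the sake of the sketch, to be the male 5th through 95th percentile:

    from math import erf, sqrt

    def norm_cdf(x, mean, sd):
        """Cumulative distribution function of a normal distribution."""
        return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

    # Assumed stature statistics (cm), loosely based on U.S. adult averages.
    MALE_MEAN, MALE_SD = 175.3, 7.1
    FEMALE_MEAN, FEMALE_SD = 161.5, 6.8

    # A seat "designed for the standard body": the male 5th-95th percentile.
    lo = MALE_MEAN - 1.645 * MALE_SD   # ~5th percentile of male stature
    hi = MALE_MEAN + 1.645 * MALE_SD   # ~95th percentile of male stature

    male_fit = norm_cdf(hi, MALE_MEAN, MALE_SD) - norm_cdf(lo, MALE_MEAN, MALE_SD)
    female_fit = norm_cdf(hi, FEMALE_MEAN, FEMALE_SD) - norm_cdf(lo, FEMALE_MEAN, FEMALE_SD)

    print(f"Design range: {lo:.1f}-{hi:.1f} cm")
    print(f"Fraction of men it fits:   {male_fit:.0%}")    # ~90%
    print(f"Fraction of women it fits: {female_fit:.0%}")  # ~38%

Under these assumptions, a seat that fits roughly ninety percent of men fits well under half of women. That is the quiet mechanism by which a “neutral” design self-selects its users.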

The basic and often unconscious assumption of many engineers is that technology is neutral. Technology is not neutral, not ever. All technologies have built into them the biases, the blind spots, and the passions of the people who build them. If the group in charge of building them is homogeneous, those blind spots and those biases aggregate and ossify. That is why we cannot think about changing the composition of the group that flies the plane without also thinking about changing the composition of the group that designs the plane.

Consider the following examples. Women are more likely than men to die of a heart attack, in part because the medical industry has based its heart attack diagnostics on symptoms commonly experienced by men; women present the symptoms of a heart attack differently, and are therefore less likely to be recognized as in danger of one. People of South Asian descent are more likely to die of heart disease and to suffer complications from diabetes because the diagnostic metrics have been calibrated primarily on Caucasian subjects: assessing the risk of heart disease involves BMI evaluation, and the threshold for what counts as an unhealthy BMI, established largely on Caucasian subjects, does not apply equally to Asian and South Asian populations. As a result, doctors can overlook up to a third of South Asian patients at risk of type 2 diabetes or heart disease. Only 3-5% of all test subjects for skin treatments are Black; products designed to treat skin infections or ailments like acne, rosacea, and inflammation have almost no data through which to analyze or develop treatments for Black skin. Indeed, many skin products contain whitening agents, the outcome of a testing and research environment controlled by, and prismed through, a beauty industry that privileges and normalizes white skin as the status quo. And self-driving cars are equipped with AI that may not adequately or consistently identify people on the road with dark skin, making autonomous vehicles more likely to hit dark-skinned pedestrians, since the sensors and camera technology the AI relies on are calibrated to detect light skin.
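The BMI example is easy to make concrete. In the sketch below, the direction of the cutoffs reflects WHO guidance, which suggests lower BMI risk thresholds for South Asian populations (roughly 23 rather than 25), but the patient and the exact numbers are assumptions for the sake of illustration:

    def bmi(weight_kg, height_m):
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    # Illustrative cutoffs: WHO guidance suggests a lower "elevated risk"
    # threshold for South Asian populations; exact values simplified here.
    GENERIC_CUTOFF = 25.0
    SOUTH_ASIAN_CUTOFF = 23.0

    def flagged_for_risk(weight_kg, height_m, cutoff=GENERIC_CUTOFF):
        """Screen a patient for weight-related cardiometabolic risk."""
        return bmi(weight_kg, height_m) >= cutoff

    patient = {"weight_kg": 68.0, "height_m": 1.68}  # BMI ~24.1

    # Screened against the generic cutoff, this patient is missed...
    print(flagged_for_risk(**patient))                             # False
    # ...but flagged once the population-appropriate cutoff is used.
    print(flagged_for_risk(**patient, cutoff=SOUTH_ASIAN_CUTOFF))  # True

A screening tool hard-coded with the generic cutoff silently misses exactly the patients it was never calibrated for.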

To return to, and continue along, the metaphor of flying objects and their platforms, consider Twitter. The platform’s most significant unintended consequence has been the relentless and alarming rate of cyberbullying that persists on it. Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams, who founded Twitter in 2006, may be brilliant technologists; objectively, the social media platform they created has had a profound impact on a global culture of communication. Dorsey and his cohort built the platform on an ethos of uninterrupted, uncompromised free speech, an undoubtedly noble premise for a communication platform. The platform has, however, been plagued by horrendous cyberbullying; cyberbullies consistently and aggressively target women and people of color with threats of sexual and physical violence. Women are three times more likely than their male counterparts to receive sexually abusive comments, including threats of rape, beating, death, or abduction. People of color are three times more likely than white users to experience such threats.

Dorsey and his cohort are brilliant technologists, perhaps. They are also uniformly white men. It is entirely plausible to me that had there been women and people of color in the room when the decisions about unmitigated free speech on the platform were made, with no safeguards against violent threats of rape, beating, death, or abduction, some thought might have been given to the dangers of free speech as the governing ethos of the platform. They might have brought into the decision-making room their own experiences with bullying on the internet, and their knowledge of how certain communities are particularly endangered by a platform that offers no protection against certain forms of speech. We might have had a safer, better platform, where people did not fear threats of bodily harm from anonymous others.

The State of the Industry:

We design the technologies that accommodate the bodies and perspectives that we know, and the experiences that we have.

Most technologists tend to understand their work as the output of skills in engineering, in coding, in computer science. But the questions about technology I am raising here, questions about how technology is not only used but built, are humanistic questions. They are questions that we can only begin to ask and to answer with an understanding of ethics, implicit bias, race, class, gender, and culture. To answer them, one needs training in humanistic thought: a methodology grounded in that thought, and a critical lens culled from fields of study such as philosophy, aesthetics, and cultural studies.

I am making the case, here, for the critical importance of a humanistic approach to technology and, simultaneously, for humanists themselves to contribute to the technological sphere. Within the tech sphere, there is an increasing need for humanists with technological training and technologists with humanistic training. The major questions of the next few decades will be fundamentally humanistic ones: How do we build ethical AI? How do we stop assaults on democracy carried out by bots hacking the massive social media platforms? How do we navigate an economy of information in which fictions intermingle, indistinguishably, with facts? These are problems caused by technological innovation, but they cannot be solved by technological innovation alone. They need humanistic approaches. And they will not be solved through the same approach, the same viewpoint, the same perspective that created them; to change the approach, we need to change the composition of the viewpoint. We need a diverse and inclusive perspective. More than 65% of VC-backed technologies are designed by male technologists; more than 72% by white technologists; and close to 84% by technologists located in the West: on the West Coast of the United States and the western contours of China. That these technological products emerge with biases is an obvious and predictable outcome of their means of production.

We have already seen some of the consequences of a technological workforce that lacks humanistic training and diversity. Technologists have seen it too, and they know that their livelihoods are endangered by what their products have wrought. To that end, they are creating new jobs to self-regulate: Salesforce just hired its first Chief Ethical and Humane Use Officer, and a growing number of positions advertised at major tech companies include descriptions with language such as “ethics,” “compliance,” “humanities,” and “cultural studies.”

At a panel on “Ethics of AI,” held by the organization “The Unintended Consequences of Technology,” I was surprised to discover that in a two-hour session, AI was mentioned twice; the rest of the session was devoted to “deontological vs. consequentialist ethics,” “cultural relativity,” and E.M. Forster’s The Machine Stops, a 1909 short story that offers a dystopian prototype of the internet.

I am pleased that those in tech fields, and those in positions to consider the ethics of technology, are now discussing such things, but dismayed that many of these conversations seem to take place in the absence of any actual humanists. Humanists spend eight to ten years becoming experts in these fields, consuming and responding to entire bodies of thought developed across centuries, and training to develop the technical knowledge and expertise to speak of, consider, and build on such ideas, the same way that technologists consume vast amounts of field-relevant content and train themselves to innovate through it. Humanists need to be in the rooms where ethical conversations about technology are taking place. And humanists need to know basic technological values, skills, and methods in order to be conversant and credible in those rooms.

Aims:

This is what I propose: a study aimed at devising a curriculum for a field of study I want to call “Ethical Technology.” It would include, among others, courses in the following areas:

  • The history of ideas

  • Science fiction: utopia, dystopia

  • Implicit bias in the context of diversity, equity, and inclusion

  • Basic, very introductory-level coding, with a focus on social issues and code

  • The ethics of “the good”

  • Cultural studies

Rationale:

This course of study would provide an intersection, and a shared methodological understanding, through which humanists and technologists can think symbiotically about the culture of technological production and the governing ethos of its function.

It would also address three critical areas fundamentally tied to inequity and structural inequality.

First, financial inequality: The tech sphere currently offers some of the most lucrative careers available. Technologists out-earn humanists; they are offered salaries and bonuses that can be leveraged into wealth. In some cities, particularly those where the tech sphere is dominant, non-tech workers are priced out. If the tech sphere is homogeneous, those cities will be homogeneous too, ossifying spatial segregation and perpetuating economic stratification along gender and racial lines.

As a field, the humanities remain an attractive path of study for women, people of color, and other minorities, for good reason. For one thing, the humanities consistently offer opportunities to explore issues of identity, cultural critique, and critiques of power that many students who have been affected by those very issues would not otherwise have an opportunity to interrogate. To that end, I suspect that many people go into the humanities because the humanities offer people who have otherwise been made to feel invisible an opportunity to be addressed and recognized by institutions of power.

Second, STEM systems often have systematic, built-in deterrents for women, people of color, and historically disadvantaged communities. Even introductory coding and computer science classes often assume a baseline of knowledge. Many men, and many students from more privileged environments, have gained some technological fluency before entering that first introductory class. They are enabled both by access to technological products and by the marketing of those products, which targets them with narratives and forms of engagement they are likely to identify with and find compelling. Experience with video games, for example, is one way children begin to gain agility in coding and fluency in the language of tech. While more tech products are being created for and marketed to female users, the majority are still narratively aligned with the historical interests of male users. That means male users are far more likely to arrive with some fluency in, and interest in, technological products, building an intuition that manifests as skill at the college level. Economic and racial stratification takes place similarly, along the tectonics of access to tech products and K-12 education. When students get to that first introductory course, instructors tend to assume the baseline knowledge; students who lack it perceive that they are already behind their peers, opt out, and turn to the humanities, where the knowledge and insight they do possess is frequently rewarded.

Third, and finally, most tech classes are taught as if technology were a neutral area of study. This is frequently not compelling to people whose basic daily lives are structured by crucial social, cultural, and political strictures. It could easily be addressed by organizing tech classes around a topic or issue: coding and feminism, for example; UX design for environmental justice, for another; UI product management for Latinx entrepreneurship, for a third.

Intervention on the institutional level:

As a humanist, I welcome as many students as want to join humanistic inquiry. But as an educator, and as a humanist concerned with structural inequality perpetuated along the tectonic plates of financial hierarchies, I maintain that equity cannot be achieved without access to economic mobility. Below, I outline how I think the humanities can intervene in this discourse and state of affairs by creating change at the institutional level.

The form of humanistic inquiry offered by educational institutions, as currently designed and codified, has long maintained that the search for knowledge is its own end; that critical inquiry, critique, and principled philosophical interrogation of knowledge, hermeneutics, identity, and their manifestation as culture, is its own end. I passionately agree. But my endorsement of this view is limited by the economic realities of college. Students, particularly students of color, come to an institution of higher learning often at great economic cost to themselves and their families. They go into debt, often debilitating debt, in order to earn degrees, with the aim and aspiration of upward mobility, financial stability, and access to power. Since financial inequity is historically constellated through race, class, and gender, and since the humanities are particularly hospitable to those who wish to explore and gain expertise in issues that bear on race, class, and gender, the group most in need of an education that provides immediate and sufficient financial relief may be the group most at risk of graduating without the means to that end, if the humanities remain an area of study without a clear professional trajectory.

When humanists staked out the position that the aim and purpose of college was knowledge for its own end, college was not a financially debilitating investment; college debt could be paid off in a reasonable amount of time. For many students of privilege, it still can: parents help with tuition or pay it in full, enabling this sector of students to pursue a course of study without deep concern about the economic consequences. Students without this support cannot, in general, afford to take a course of study simply because it is “interesting.” They do not, as a rule, have basements where they can live with parental support once they graduate; they cannot take the two-year post-college internship required in many sectors to get a good (meaningful and financially rewarding) job. They will graduate with up to $100,000 of student loans to pay back. In that economy, education cannot be education simply for its own end; it must be conceived and designed with the aim of providing students with skills and a trajectory toward financial stability. To that end, a curriculum with a major in ethical technology offers a vulnerable population of students an opportunity to engage in humanistic thinking with an end in view, particularly in the context of a tech environment that is now specifically seeking out ethical humanists conversant in technological language.
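To put the scale of that burden in concrete terms, here is a small sketch of the standard loan amortization formula; the six percent rate and ten-year term are assumptions for illustration, not figures from any particular loan program:

    def monthly_payment(principal, annual_rate, years):
        """Fixed monthly payment on a fully amortizing loan."""
        r = annual_rate / 12          # monthly interest rate
        n = years * 12                # total number of monthly payments
        return principal * r / (1 - (1 + r) ** -n)

    # Assumed figures: $100,000 in loans at 6% APR on a 10-year term.
    payment = monthly_payment(100_000, 0.06, 10)
    print(f"Monthly payment: ${payment:,.2f}")        # roughly $1,110
    print(f"Total repaid:    ${payment * 120:,.2f}")  # roughly $133,000

A graduate facing a four-figure monthly payment for a decade cannot treat a degree as an end in itself; the curriculum must also carry them toward income.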

Intervention on the industrial level:

By developing a track in which humanists and technologists train together and become conversant in a shared methodology, ethic, and culture, I envision an intervention into technological production with two predictable outcomes. Below, I outline how I think the humanities can intervene in this discourse and state of affairs by creating change at the industrial level.

Changes to the means and output of production. We design and we imagine for the bodies we know and the experiences we have. By creating educational conduits that foster diversity, equity, and inclusion in the tech sphere, we will enable better technological products: better because their design and conception will be informed by, and in conversation with, the diversity of people who will use them. Informed by humanistic training, and with insights and feedback from a diverse and inclusive community of trained technologists, we will have better answers to questions such as:

  • Who will be using this product and how?

  • How well does this product translate across different cultures, communities, and subgroups, and how well does the product understand, consider, and respond to the values of these different cultures, communities, and subgroups in its structure and utility?

  • What cultural sensitivities must this product navigate?

  • What harm could this product potentially cause?

  • What historical wrongs must this product address?

  • Which populations might be made vulnerable through the use, distribution, and possible misuse of this product?

  • What good does this product enable?

Change to the perceived value of a humanities degree in the tech sphere. By creating a curriculum aimed at a growing job category, and by actively training undergraduates in the demonstrable professional skillsets that category demands, we will increase the value of both a Cal Poly humanities degree and a Cal Poly tech degree. Working with industry leaders to learn what skills undergraduates need in order to succeed as ethical technologists in this new market specialty will give Cal Poly students a distinct advantage: the degree will uniquely position them as relevant, equipped candidates, indeed as industry leaders, in this new specialization.

Interventions into problematic design: The homogeneity of the tech industry causes predictable problems when designs are modeled on and envisioned by that homogeneous group and then exported as global, universal products. Below, I outline how I think the humanities can intervene in this discourse and state of affairs by creating change at the level of technological design and distribution.

I recently spoke with an executive at a leading FinTech company, who described the following problem. His company, a FinTech startup, had launched with the aim of correcting a long-term inequity in banking. The banking industry has historically denied loans and accounts to people of color and to communities in developing countries, following and perpetuating a long history of structural discrimination against people who have been denied economic opportunity. His company sought to serve those communities, aiming to democratize banking and securities. To the company’s surprise, it discovered that its own AI disproportionately rejected applicants of color, despite the company’s explicit initiatives to counteract exclusionary policies.

Why?

It turns out that in order to submit an application for an account, an applicant needs to submit photographic identification. The technology at the threshold of a successful application, therefore, was the technology of film and photography. Historically, film was designed to capture white skin; photographs do not capture images directly. They capture the way light falls on shapes, the contrast between light and shadow. In fact, Eastman Kodak calibrated its photographic imaging through one particular model, Shirley, a woman with pale white skin and dark hair, whom it posed against a background to measure how her skin contrasted with the lighting. That human face was used to calibrate the printed color stock, and “Shirley cards” became the rubric for establishing what a properly rendered color image should look like. This was not a deliberately exclusionary practice; rather, it was a technological innovation in which no one thought to consider that the light might be cast differently, and the technology be less effective, if you did not look like Shirley.

When the FinTech company’s facial recognition algorithm scanned the applications and came across a person of color, it frequently would not recognize a face. And it would, on that basis, reject the application.
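The failure mode is mechanically simple, which is part of what makes it insidious. The sketch below is a hypothetical reconstruction, not the company’s actual pipeline: the detector and its confidence scores are invented, and the point is only that any fixed detection threshold inherits whatever calibration bias the imaging stack carries:

    DETECTION_THRESHOLD = 0.8  # a fixed cutoff inherits the detector's calibration bias

    # Invented, illustrative scores: a detector calibrated on light skin
    # returns systematically lower confidence on darker-skinned faces.
    applications = [
        {"applicant": "A", "face_confidence": 0.95},
        {"applicant": "B", "face_confidence": 0.62},  # a face is present, but under-detected
    ]

    def screen(application):
        """Reject any application whose ID photo fails the face check."""
        if application["face_confidence"] < DETECTION_THRESHOLD:
            # Rejected not on financial merit, but because the imaging and
            # detection stack was never calibrated for this applicant's skin.
            return "rejected: identification could not be verified"
        return "passed: identification verified"

    for app in applications:
        print(app["applicant"], "->", screen(app))

Nothing in the screening logic mentions race; the bias arrives entirely through the upstream detector, which is exactly why it went unnoticed.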

I will end with this example, as a powerful metaphor. I think it is true to suggest that the current state of technological innovation and production is that it does not recognize many faces, and that the unintended harm that frequently comes out of the technological products we use and rely on daily is both the cause and the consequence of the fact that so many of us are neither imaged by those who engineer the process of innovation nor involved in the imagining.