Conversation: Ethics & Computers

On March 19, 2011 at 9:28 p.m., I posted the first draft of what will eventually become the Preface in the book “Realizing Empathy: An Inquiry Into the Meaning of Making.” While much has changed since then, I wanted to share with you this edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which talks about computers and acting.

 

joonkoo: I’m wondering if you should make a clearer definition of the user here. For example, is the user a computer programmer using the computer or just an ordinary John or Jane using the computer? I understand that knowing the exact mechanics or physiology of the computing system may tremendously expand the user’s perspectives, but I also imagine that there would be considerable costs to learning those mechanisms. Would my mother, a middle-aged lady with few digital friends, ever want to know exactly how the processor and memory work just to be less frustrated the next time she opens an Internet browser to receive and view photos that I send?

david: Yes, but what either extremist position about users (ordinary John or Jane vs. super programmer) tends to ignore is the bell curve nature of the problem, which is very similar to my indictment of mainstreaming in u.s. public schools. That is, these need to be seen as somewhat unique user groups requiring distinct, differentiated approaches.

But even if you draw three divisions in the bell curve, which would split say 10/80/10, it is still an enormous design problem. People who use Photoshop are still in a discourse community with considerable depth beyond the average person. It’s even worse at the other end of the spectrum. And this is where I think Peter Lucas,9 founder of MAYA Design, and resident genius, absolutely nails it, and my guess is that this is what Slim is getting at with his reference to “physics.”

What Peter says is that you must design for the “lizard brain” first, because it’s the only thing that is consistent across that entire bell curve. (Keep in mind, this is my perception of Pete’s message.) If you learn to do this well, the rest may take care of itself. But fail to get that right, and you either have very little chance, or you’ll be dragging a 200-ton freight train behind you the entire way. That is why our experience with modern technology, even the best of it, falls short.

It’s ironic because we’ve had the technology for it to be a solved problem for at least a decade, but very little work has directed all the physics and graphics innovation at solving the problem of making data into manipulable objects with “thingness,” much the way Bill Gates describes in “information at your fingertips.” It’s also very similar to the way the osi model10 falls out — meaning that designing for the lizard brain is like the physical layer, while designing for higher-order brain functions moves up the brain stem and can be accounted for in a layered semantics kind-of-way.

But I think there’s an element missing here, which is that what you describe about a user’s experience with the computer crashing or slowing down is an entirely qualitative judgment. I don’t like computers that crash or slow down, either, but the experience is arguably the same or worse if I’m driving my car or bicycle. I ran over a piece of metal on my bicycle commute yesterday and was left with a huge gash in my tire, a blowout, and a subsequent wheel lock when the metal piece hit the brake, which could easily have caused my clipped-to-the-pedals self to go reeling into the river. But this is the experience of an unplanned and unforeseen mechanical failure. Could the bicycle be made to fail more gracefully? Certainly. But at what cost, with what trade-offs, and what marginal utility? Similarly, I had almost the same thing happen with my little Kia a few months ago in almost the same place, and I’d raise exactly the same questions. Kevlar tires, tpms, run flat, oh sure, again at what cost and what compromise?

The design problem that the computer presents is no different, though I think what tends to happen here is that because computer science is taught from a very narrow perspective, focused on very quantitative problems, we tend to ignore the qualitative ones, and we do that at our users’ peril. There’s also a tendency, unlike in other branches of engineering, not to have much rigor in seeing the trade-offs and compromises in a holistic, systems thinking kind-of-way.

I also want computers and software that fail gracefully, and are friendly and usable, but the path there is very long and very hard and is still beholden to the laws of physics, no matter how much we think we exist in a software world where none of the rules still apply and we can acquire all of these things at no cost to us (the designers) or them (the users).

slim: I’m not saying that the trouble with computers is worse than what we feel elsewhere. What I’m saying is that it’s time we consider the design of computers from the point of view of ethics, not just usability, functionality, or desirability. Why shouldn’t computer programmers and designers adopt the same kind of ethical stance that architects do, for example?

From what I have gathered taking classes in architecture, there’s a tremendous sense of ethics (not morals) and philosophy of life that goes into educating an architect. I never got any of that as a computer scientist — although, truth be told, whether it would have sunk in at the ripe age of 18 is questionable. But that’s a whole other discussion.

Even in human-centered design, while we talk about designing for human users, we never get deep enough to the heart of what it means to be human. How can we be human-centered, when we don’t even know what it means to be a human? I’m less interested in the computer affording user-friendliness, usability, or graceful failures. That’s a very object-oriented way of looking at this issue. I’m less interested in objects and more interested in relationships. More specifically, I’m interested in finding out how our relationship to the computer can afford the quality of being immersed in an empathic conversation. The kind of quality that, as far as I can tell, makes us become aware of who we are as human beings.

I have nothing against the laws of physics. As a matter of fact, I think the computer should be designed to accept physics as it is. When designers pretend that the laws of physics don’t apply to computers, weird things are bound to happen.

I don’t think physical materials are there to make our lives more convenient or inconvenient. They just are. Yet because of our evolutionary history, there’s something embodied within us — and something we come to embody as we mature — that allows us to have an empathic conversation with them. I want the same qualities to be afforded in our interaction with computation.

david: Now we’re getting somewhere! So there are several interesting points I’ll make here. As to your first question regarding architects and computer designers, these comparisons usually fall down because of the chasm between consumer electronics and buildings, structures, etc. There are major differences attributable to factors such as rate of change and stability. Also, classic failures exist in that world, too, though not in the numbers we see with computers, but that’s probably a problem of sample size more than anything. To me, Frank Lloyd Wright’s cantilevers at Fallingwater are beautiful, but they’re not robust from an engineering standpoint. Hmm, where have I seen that before?

The problem with education that you describe is exactly what I was alluding to earlier with computer science’s focus on the quantitative, but I think this is a maturity issue. What I mean is that architecture is a very old discipline. Designing computers and software, not so much. That evolution would, in theory, happen in time, but this will take a long time. Imagine a world in which there are bachelor’s degrees in human factors and human-computer interaction (HCI). Oh sure, there might be one or two now, but imagine a world where they are on the same plane as computer science (CS) degrees.

But in order for such large-scale changes to happen, there need to be economic incentives. That’s the biggest problem in the entire puzzle here, because organizations have no economic incentive to make a radically “better” computer. They’re still making tons of money with “good enough.” I’m hopeful that the rise of mobile computing will give rise to better design, as the competitive forces there are much stronger than in the pc business, just as was true for pcs over older mainframes and minis.

But what you seem to be getting at here is a philosophy of computing, just as you describe a philosophy of architecture. That is, not one architect, but an entire movement. This is like Sarah Susanka and the “not so big” movement.11 The conditions for such a movement are not quite as clear to me in computing as they are in architecture or lifestyle design. It’s possible, but again, the experience has to be so overwhelmingly great as to cause a parallel economic revolution.

I’d question whether the empathic feeling that you describe between two individuals is even possible with machines. I can’t remember whether this was touched on by Ray Kurzweil in The Age of Spiritual Machines12 or Don Norman in Emotional Design.13 I don’t know where empathy or compassion originates in the brain, but I’m pretty sure these are very high-order functions, and vary individually (i.e., the continuum from sociopath to the Dalai Lama). Indeed, many would say that empathy and compassion are something we must cultivate within ourselves.

Which brings me to another theme: dogs. Could it be that what you describe is what humans seek in dogs? Dogs are selfless, unconditionally loving, warm, whimsical, carefree — exactly the opposite of the “weight of the world” that most adults must grapple with on a daily basis. If the computer could provide a dog-like antidote to adulthood, that would be great. Crazy hard. Which fits the saying, “Anything worth doing…” pretty well.

I suspect that Cynthia Breazeal’s work14 at mit may have some links. Also, David Creswell15 at cmu. He has a publication about transcending self-interest. I think the research questions du jour are these:

What are the determinants of a disposition for empathy in humans? Where is empathy encoded in the brain? Is parity an important part of empathy, or can empathy exist effectively without parity?

The latter would be a requirement for an empathic architectural style to succeed in computing, since imposing an empathic requirement on the user would be tantamount to slavery. Until you know the answers to those questions, any attempt to get computers to behave as part of an empathic conversation would be difficult, if not impossible, because there is no other model for empathy but humans. Either that, or I’m horribly confused about the animal kingdom.

Keep up the good work. This is likely to turn into a hard slog if it hasn’t already.

——

9 Peter Lucas has shaped MAYA as the premier venue for human- and information-centric product design and research. He co-founded MAYA in 1989 to remove disciplinary boundaries that cause technology to be poorly suited to the needs of individuals and society. His research interests lie at the intersection of advanced technology and human capabilities. He is currently developing a distributed device architecture that is designed to scale to nearly unlimited size, depending primarily on market forces to maintain tractability and global coherence. (MAYA Design, “MAYA Design: Peter Lucas”)

10 Different communication requirements necessitate different network solutions, and these different network protocols can create significant problems of compatibility when networks are interconnected with one another. In order to overcome some of these interconnection problems, the open systems interconnection (OSI) was approved in 1983 as an international standard for communications architecture by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). The OSI model, as shown in the figure, consists of seven layers, each of which is selected to perform a well-defined function at a different level of abstraction. The bottom three layers provide for the timely and correct transfer of data, and the top four ensure that arriving data are recognizable and useful. While all seven layers are usually necessary at each user location, only the bottom three are normally employed at a network node, since nodes are concerned only with timely and correct data transfer from point to point. (Encyclopædia Britannica Online)

11 Through her Not So Big House presentations and book series, Sarah Susanka argues that the sense of “home” people seek has almost nothing to do with quantity and everything to do with quality. She points out that we feel “at home” in our houses when where we live reflects who we are in our hearts. In her book and presentations about The Not So Big Life, she uses this same set of notions to explain that we can feel “at home” in our lives only when what we do reflects who we truly are. Susanka unveils a process for changing the way we live by fully inhabiting each moment of our lives, and by showing up completely in whatever it is we are doing. (Susanka Studios, 2013, “About Sarah”)

12 Ray Kurzweil is a renowned inventor and an international authority on artificial intelligence. In his book The Age of Spiritual Machines, he offers a framework for envisioning the twenty-first century — an age in which the marriage of human sensitivity and artificial intelligence fundamentally alters and improves the way we live. Kurzweil argues for a future where computers exceed the memory capacity and computational ability of the human brain by the year 2020 (with human-level capabilities not far behind), where we will be in relationships with automated personalities who will be our teachers, companions, and lovers, and where information is fed straight into our brains along direct neural pathways. (Amazon, 2000)

13 In Emotional Design, Don Norman articulates the profound influence of the feelings that objects evoke, from our willingness to spend thousands of dollars on Gucci bags and Rolex watches, to the impact of emotion on the everyday objects of tomorrow. (Amazon, 2005)

14 Cynthia Breazeal is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology where she founded and directs the Personal Robots Group at the Media Lab. She is a pioneer of social robotics and human robot interaction. (Dr. Cynthia Breazeal, “Biography”)

15 Dr. David Creswell’s research focuses broadly on how the mind and brain influence our physical health and performance. Much of his work examines basic questions about stress and coping, and in understanding how these factors can be modulated through stress reduction interventions. (CMU Psychology Department, “J. David Creswell: CMU Psychology Department”)