Conversation: Empathy & Mastery

On April 3, 2011 at 4:23 p.m., I posted the first draft of what will eventually become the second story of the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making,” about my experience in the woodshop. While much has changed since then, I wanted to share with you this edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which talks about computers and ethics.

 

joonkoo: This story reminds me of my recent attempts to master bread baking, namely baguettes. I’ve been baking a batch pretty much every other weekend, and one of the most delightful things that happens after you retrieve a freshly baked baguette from the oven is to hear it singing, which is the sound of the crust cracking, and perhaps some moisture interaction going on. I’m nowhere near the level of mastery, but I’m sure there are different sounds that you can distinguish once you become a master baker.

slim: Did you notice the singing from the get-go or did someone point it out to you? If the former, was it highly noticeable or did you actively have to pay attention to it? I don’t think I’ve heard that sound. I’m very curious what it is like.

joonkoo: It’s very noticeable. I noticed it from the beginning. But then I also watched this French guy making a baguette on YouTube, and he was the one who mentioned this singing sound. It’s really the sound of crust cracking, but it makes the bread sound so delicious.

slim: Did you notice the sound after you heard the French guy on YouTube, or before?

joonkoo: I noticed it before, but I didn’t care that much. Afterward, I came to like the sound. But to be honest, there has been no deep understanding of the sound.

slim: See… A question I have about this is how we come to understand, and become sensitive to, these subtle nuances. There seem to be certain things that we can proactively notice, then there are things that other people have to raise our awareness to.

Is this simply a matter of time? If I spent enough time paying attention, would I eventually become sensitive to everything there is to be sensitive about — (smiles) and become miserable? Or are there always going to be things that other people have to raise our awareness to, because there is an infinite number of things, and simply not enough time?

joonkoo: The question you are raising is an excellent one! I haven’t thought about it much, but intuitively, there seems to be a need for both internal enlightenment and external stimulation to learn such nuances.

slim: Indeed.

By the way, last semester I interviewed a child psychologist, who told me that in the beginning, babies learn how to be attached to their mother, and come to understand what it means to love their mother. Then they may feel comfortable with other people who have attributes similar to mother, which allows them to feel safe and comfortable with these other people. Then as they interact with them more, they mature, and start to appreciate the nuances that make these other people different from mother, but love them despite the differences. I found that to be a rather fascinating way to think about maturity. Don’t you think?

joonkoo: The child psychologist was perhaps referring to Piaget’s idea of assimilation and accommodation.16 I have little knowledge in developmental psychology, but you may find it relevant.

Also, when I took cognitive development, I was fascinated not only by Piaget, but also by Vygotsky.17 You might want to check out his theory. My knowledge about these is too shallow to be shared here. (Smiles)

Now, returning to the idea of non-living things telling us something, I experience very similar things when analyzing data — they tell me how they should be analyzed.

slim: Yeah, isn’t that peculiar? There’s a feeling associated with it.

I’ve also heard a firefighter say the house told him to get out, and immediately after he ran out, it crumbled. Perhaps there is a combination of pattern recognition, as well as some genetic reflex that triggers a certain physiological change in our body that results in us feeling as if we’re being told?

joonkoo: I think, though, that this is a very literary way of describing the gaining of expertise.

slim: What do you mean that it is a very “literary” way of describing the gaining of expertise?

joonkoo: I think it’s just one possibility of expressing how we get to know things better. I say literary because unlike other people or other creatures, it can’t be that a piece of wood is telling you something. It’s that you think that the wood is telling you something. For example, a baseball player might claim that the ball that left the pitcher’s hand told him to hit it, and it resulted in a home run.

This kind of expertise, often described as intuition, wisdom, mastery, is something that humans — and other animals — can acquire at an incredible level, as the human brain has an amazing ability to parse statistical and stochastic patterns in the environment.

However, it’s an open question, I think, to ask what it takes to gain expertise in this variety of domains such as understanding other people’s mind (e.g., theory of mind), furniture making, and computer programming. And also whether they are different, and if so, why.

slim: When you say it’s an open question, do you mean that there is no good insight into how one gains expertise as studied by neuroscientists? That because it’s such uncharted territory that it’s hard to start a discussion on it?

joonkoo: My question was whether it takes a similar amount of time and effort — if not the same amount — to master things across domains.

I remember reading in some cognitive psychology paper that it takes 10,000 hours of practice to reach the highest end of expertise. This may be an overgeneralization, but it means that it takes a huge amount of time and effort to become an expert.

For example, most of us are experts at looking at faces, extracting facial expressions and emotions — although we know that people with autism lack this ability to some extent. On the other extreme, there are expert computer game players (e.g., Starcraft18). When you look at how they play, it’s simply incredible to see how fast they make decisions and click the mouse buttons. This is not something that everyone can easily achieve, but some people are experts in this field.

How do the two domains that I raised as examples (face perception vs. Starcraft) differ in terms of their acquisition of expertise? What about wood cutting? What about computer use/programming? Is becoming an expert wood cutter very different from becoming an expert computer user? What are the common mechanisms and what are the different mechanisms? These were the questions that I had in mind when reading your post.

slim: The question of testing expertise across domains sounds like it would be a challenge in defining the boundaries of each domain, not to mention the standards against which to measure expertise, no?

For example, isn’t facial recognition something that we are hard-wired for? Is it fair to compare that to Starcraft? What would it mean for one to be an expert in facial recognition? Being able to tell the difference between twins you’ve never seen before within a certain amount of time?

joonkoo: Some things are definitely hard-wired and some things are not. Some things presumably use a combination of more hard-wired and less hard-wired systems to achieve expertise.

In facial recognition, there are ways to quantify it experimentally using behavioral measures of the inversion effect, the composite effect, and such. And recent research has shown that these abilities are pretty heritable.

A few years ago, we also found that the neural basis of facial recognition may be more genetically shaped than neural substrates for processing other visual categories. Now it’s true that I don’t think it is fair to compare facial recognition to Starcraft — one of the reasons being that some things are more hard-wired than others. But I would like to raise different facets of expertise, which might be related to your question about empathizing with objects and what it means to do that in different areas.

an-lon: Ok, I’m jumping in on the subject of the 10,000 hours because it’s become simultaneously trendy and misunderstood. The gist of the research is that what makes Mozart or Tiger Woods or any virtuoso great isn’t necessarily inborn talent, but the ability to hone that talent.

The 10,000 hours translates to about a decade, but here’s the key: it is not just any 10,000 hours that makes a person great, it’s 10,000 hours always at the edge of your comfort zone, constantly pushing your boundaries. Most of us simply do not have the capacity to operate at that level. Instead, we spend most of those 10,000 hours simply repeating our old habits. We practice the same thing over and over again. Phenoms19 are those extremely rare individuals who are able to push their boundaries in an extremely focused and deliberate way.

I think Geoff Colvin’s Talent Is Overrated actually covers this better than Gladwell’s Outliers. He calls it “deliberate practice,” and gives many examples, from Jerry Rice to Ben Franklin, of how those so-called geniuses balanced on that knife’s edge over the course of an entire 10,000 hours. One useful model is three concentric circles: comfort zone, learning zone, and panic zone. Only in the learning zone can we make progress. The comfort zone is too easy and the panic zone is too hard.

Most of us, when we practice, think we’re in the learning zone, when in reality we’re simply performing extra iterations within the comfort zone. Those iterations, no matter how many, do not count towards the 10,000 hours, and do not bring us any closer to a Mozart-level accomplishment. 10,000 hours in a true learning zone is incredibly difficult, which is why there are so few geniuses out there.

I think there are excellent connections to be made between your dialogue with materials and that learning zone. The key here is to leave your comfort zone, but to not venture so far from it that the result is chaos. Inevitably, finding that knife edge requires dialogue, feedback, interaction, and discomfort.

slim: Ah . . . That’s a great way to think about it! 10,000 hours of discomfort.

joonkoo: Yes, as An-Lon described — thanks, An-Lon — it’s not merely the 10,000 hours of work. But still, what is true is that effort and time are necessities for gaining expertise.

Sorry if my comments steered the discussion too much toward the idea of expertise. But I thought this was exactly what you were referring to, once I got a better understanding of what you meant by being able to empathize with things.

slim: Don’t worry about steering the conversation in whatever direction. The purpose of this conversation is to understand what it means to have an empathic conversation, which would naturally require a lot of empathic conversations. (Smiles) I thank you for your patience. I really could not ask for more!

And yes, An-Lon, I do see a correlation between expertise and empathizing across time and memory. The more you empathize with an other across time and memory, the more trust, discipline, and skill you are able to build in relation to them. Whether this is with physical objects, or another human being, the model seems to work equally well.

Here’s a thought: Having a conversation with someone or something who/that has a sense of integrity, or a world view, different from your own — or simply unexpected or unpredictable — is highly uncomfortable. Perhaps the capacity to handle this gap in knowledge or this discomfort — one of the abilities I would think is necessary to stay in the learning zone — is directly related to humility.

joonkoo: Here’s also another thought, which is my current research topic. We are all experts at processing words visually — or simply reading, which is to say that we can quickly parse fine squiggly lines in our mother language. There are, in fact, many experimental tricks that you can do to show your expertise in reading letters and words. However, when you think about it, it is hard to believe that our brain is hard-wired to read words.

Script was invented only very recently on an evolutionary time-scale. Most humans were not educated to read and write until much more recently. But literate adults are very good at reading. This must be due to the extensive training with letters and symbols during development.

While I’m not sure if learning to read during childhood really pushes the boundary and enters the discomfort zone, this may be illustrating another type of expertise that we go through. It’s different from others because, unlike face recognition, it’s not hard-wired, and unlike becoming an expert in Starcraft, this kind of expertise seems to be something relatively easily achieved by the masses.

slim: I want to understand better what you say about our ability to become expert readers. You are saying that, for some reason, we can learn how to read starting at a young age, although it is not something we are hard-wired for. This is an assumption, but a fairly safe one. I think you’re also saying that it is unclear if this necessarily implies that we are in the discomfort zone when we learn to do this, which leads to the question of whether this is a different kind of learning or not. Is that the question?

joonkoo: Well, I don’t want to get into a discussion around the idea of a discomfort zone too much. That was just a side note. What I was focusing on was that learning to read — visual processing of orthographic stimuli, to be precise — and becoming an expert at reading is something that is quite different from becoming an expert in some other domain, because it is an expertise that is, presumably, not based on a hard-wired system, yet acquired by pretty much all of us — except people with dyslexia.20 When you think about it, there are not many things that are like this. This is, in fact, what makes reading very interesting.

slim: Ohhhhhhh! So you’re distinguishing between learning through the use of hard-wired facilities (i.e., facial recognition) vs. learning through the use of non-hard-wired facilities (i.e., reading). Then you’re asking how much of the learning that happens in a given domain is facilitated by hard-wired capabilities vs. non-hard-wired capabilities, and how their proportion affects the experience of learning. And you’re saying that reading is special, because almost all of it — possibly an overstatement — is not facilitated by hard-wired capabilities. Am I understanding you?

joonkoo: Yes, that would be a straightforward way of saying what I was trying to say. (Smiles) Thank you!

slim: What is an orthographic stimulus? I just tried looking it up, but couldn’t make much sense of the stuff I found.

joonkoo: Oh, “orthographic stimulus” might be a term that I made up. (Smiles) Just think of letters and words.

slim: Oh, then by “read” do you simply mean recognizing the letter forms that one sees or do you mean making meaning from their composition into words?

joonkoo: What I mean by “reading” is the visual processing of letters. Reading is a special case because not much of it is hard-wired. In fact, one of the recent claims is that it goes against some hard-wired neural structure that is designed to carry out other activities more efficiently. That other stuff being the mirror invariant perception of visual features. For example, it takes very little effort to view some image, then view the left-to-right flipped version of the image and know that those two images are identical. It is argued that this is a kind of basic visual mechanism that is more hard-wired. However, when learning to read, b is not the same as d even though it is a left-to-right flipped image of b. So to learn that these are different, the mirror invariant perception needs to be unlearned to a certain extent before you can learn to read.

slim: Wait, wait, wait . . . mirror invariant perception? You mean we’re hard-wired to be able to tell something is the same regardless of whether it is mirrored or not? Where did that come from? Is it because things in nature are symmetrical?

an-lon: Seriously! Symmetry and mirror invariant perception? That’s fascinating! What about Asian languages where there isn’t the b and d problem? I’ve often heard that there’s no such thing as dyslexia among Chinese readers because of that. Is that really true? I don’t suppose there’s a good layman’s book on this subject?

joonkoo: My understanding is that the critical ability in visual processing of written words is not necessarily restricted to the b vs. d problem, but more related to discriminating the subtle nuances among various visual features. Mirror invariance is just one of the examples. There are many such examples in other languages for sure.

I don’t know much about dyslexia in the Chinese population. Dyslexia is something that’s a little different from pure impairment in visual processing of words.

Most current theories and findings are putting emphasis on the phonological processing of print. Stanislas Dehaene21 is a big name in this kind of research. I’m sure he has written books for the general public on these matters.

High-level vision is a fascinating field for research. Reading, in particular, is intriguing for all the reasons that we discussed so far.

anson: What a lively discussion! Slim, let me just say that your descriptive writing helped me imagine myself going back to a wood workshop, with all the sensations that come with it. I took woodworking classes from seventh to ninth grade, way back when.

I also think you have touched on a very important topic about truth or what is true in this world. Truth is honest. Truth is simply what is. Truth neither budges nor needs to budge. To go against the truth is like kicking against the goads.

Truth is beautiful and simple. It just remains there patiently waiting for us to recognize it and embrace it. Truth sets us free. It always teaches us an easier and simpler way. It helps us to be in harmony with this world. A lot of times when we think of truth, we think of moral categories of right and wrong, but it need not be so. Rather, I think using the categories of in harmony or out-of-tune is a better way of looking at it. Finding truth is simply finding the way of how to be in harmony with everything. Although there is indeed a lot of incredulity toward truth in our postmodern sensibilities, your story is reminding us of something so basic and simple — whatever is true is honest and it is what it is. There’s a video on YouTube called “Rhythm” featuring a pastor named Rob Bell22 on this very topic from the Christian perspective. Perhaps you will find it relevant.

slim: I recently came across a book called The Empathic Civilization by an economist named Jeremy Rifkin. In the book, he writes that “when we say that we seek the ultimate truth, we are really saying that we seek to know the full extent of how all of our relationships fit together in the grand scheme.” Your comment reminded me of that sentiment, and it resonates.

In the way that he describes it, I believe both truth and subjectivity can coexist. If there’s a classic pattern I recognize throughout history, it is that every time someone claims the existence of a dichotomy, it is not either/or, but both, in some relationship constantly shifting through time. Just as the idea of balance is not some static equilibrium, but rather an ongoing process that fluctuates, I imagine this is the same.

And although I’m not Christian, I have to say that I enjoyed the video very much. The first thought that came to mind was how different it was from what I had expected a Christian video to be like. But then I realized: what does it even mean to label something a “Christian video”? It’s nothing but a projection of my biased assumptions.

It almost seems like the words “God” and “religion” play a large part in confusing and dividing people. I can tell from first-hand experience how profound the change in one’s own world view can be when words that you once thought you knew get redefined. Perhaps a relevant quote is one from philosopher Emmanuel Levinas,23 who said, “Faith is not a question of the existence or nonexistence of God. It is believing that love without reward is valuable.”

——

16 Swiss psychologist Jean Piaget defined assimilation as the integration of external elements into evolving or completed structures, and accommodation as any modification of an assimilatory scheme or structure by the elements it assimilates. He said that assimilation is necessary in that it assures the continuity of structures and the integration of new elements to these structures, whereas accommodation is necessary to permit structural change, the transformation of structures as a function of the new elements encountered. An example of assimilation would be the child sucking on anything they can get their hands on. As they learn to accommodate, they discern what to suck on and what not to. (Encyclopædia Britannica Online)

17 L. S. Vygotsky (November 5, 1896 – June 11, 1934) was a Soviet psychologist who, while working at Moscow’s Institute of Psychology from 1924–34, became a major figure in post-revolutionary Soviet psychology. His theory of signs and their relationship to the development of speech influenced psychologist Jean Piaget. (Encyclopædia Britannica Online)

18 Starcraft is a real-time strategy game for the personal computer. It is produced by Blizzard Entertainment. According to Scientific American, it has been labeled the chess of the 21st century, due to the demands for the pursuit of numerous simultaneous goals, any of which can change in the blink of an eye. (“How a Computer Game is Reinventing the Science of Expertise”)

19 An unusually gifted person (frequently a young sportsperson), a prodigy. (OED Online)

20 Dyslexia is an inability or pronounced difficulty to learn to read or spell, despite otherwise normal intellectual functions. Dyslexia is a chronic neurological disorder that inhibits a person’s ability to recognize and process graphic symbols, particularly those pertaining to language. Primary symptoms include extremely poor reading skills owing to no apparent cause, a tendency to read and write words and letters in reversed sequences, similar reversals of words and letters in the person’s speech, and illegible handwriting. (Encyclopædia Britannica Online)

21 Stanislas Dehaene (born May 12, 1965, in Roubaix, France) is a professor at the Collège de France, who directs the Cognitive Neuroimaging unit of the French National Institute of Health and Medical Research. In his book The Number Sense, he argues that our sense of number is as basic as our perception of color, and that it is hard-wired into the brain. (“Stanislas Dehaene”)

22 Rob Bell is the founding pastor and pastor emeritus of Mars Hill Bible Church. He graduated from Wheaton College in Wheaton, Illinois, and Fuller Theological Seminary in Pasadena, California. He is the author of Love Wins, Velvet Elvis, and Sex God, and is a coauthor of Jesus Wants to Save Christians. He is also featured in the first series of spiritual short films called NOOMA. (“Rob Bell”)

23 Emmanuel Lévinas (December 30, 1905 – December 25, 1995) was a Lithuanian-born French philosopher renowned for his powerful critique of the preeminence of ontology — the philosophical study of being — in the history of Western philosophy, particularly in the work of the German philosopher Martin Heidegger. (Encyclopædia Britannica Online)

Conversation: Ethics & Computers

On March 19, 2011 at 9:28 p.m., I posted the first draft of what will eventually become the Preface in the book “Realizing Empathy: An Inquiry Into the Meaning of Making.” While much has changed since then, I wanted to share with you this edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which talks about computers and acting.

 

joonkoo: I’m wondering if you should make a clearer definition of the user here. For example, is the user a computer programmer using the computer or just an ordinary John or Jane using the computer? I understand that knowing the exact mechanics or physiology of the computing system may tremendously expand the user’s perspectives, but I also imagine that there would be some considerable costs to learning those mechanisms. Would my mother, a middle-aged lady with few digital friends, ever want to know exactly how the processor and memory work for her to get less frustrated the next time she accesses an Internet browser to receive and view photos that I send?

david: Yes, but what either extremist position about users (ordinary John or Jane vs. super programmer) tends to ignore is the bell curve nature of the problem, which is very similar to my indictment of mainstreaming in u.s. public schools. That is, these need to be seen as somewhat unique user groups requiring distinct, differentiated approaches.

But even if you draw three divisions in the bell curve, which would split, say, 10/80/10, it is still an enormous design problem. People who use Photoshop are still in a discourse community with considerable depth beyond the average person. It’s even worse at the other end of the spectrum. And this is where I think Peter Lucas,9 founder of MAYA design, and resident genius, absolutely nails it, and my guess is that this is what Slim is getting at with his reference to “physics.”

What Peter says is that you must design for the “lizard brain” first, because it’s the only thing that is consistent across that entire bell curve. (Keep in mind, this is my perception of Pete’s message.) If you learn to do this well, the rest may take care of itself. But fail to get that right, and you either have very little chance, or you’ll be dragging a 200-ton freight train behind you the entire way. That is why our experience with modern technology, even the best of it, falls short.

It’s ironic because we’ve had the technology for it to be a solved problem for at least a decade, but very little work has directed all the physics and graphics innovation at solving the problem of making data into manipulable objects with “thingness,” much the way Bill Gates describes in “information at your fingertips.” It’s also very similar to the way the osi model10 falls out — meaning that designing for the lizard brain is like the physical layer, while designing for higher order brain functions moves up the brain stem and can be accounted for in a layered semantics kind-of-way.

But I think there’s an element missing here, which is that what you describe about a user’s experience with the computer crashing or slowing down is an entirely qualitative judgment. I don’t like computers that crash or slow down, either, but the experience is arguably the same or worse if I’m driving my car or bicycle. I ran over a piece of metal on my bicycle commute yesterday and was left with a huge gash in my tire, a blowout, and subsequent wheel lock when the metal piece hit the brake that could have easily caused my clipped-to-the-pedals self to go reeling into the river, but this is the experience of an unplanned and unforeseen mechanical failure. Could the bicycle be made to fail more gracefully? Certainly. But at what cost, with what trade-offs, and what marginal utility? Similarly, I had almost the same thing happen with my little Kia a few months ago in almost the same place, and I’d raise exactly the same questions. Kevlar tires, tpms, run flat, oh sure, again at what cost and what compromise?

The design problem that the computer presents is no different, though I think what tends to happen here is that because computer science is taught from a very narrow perspective, focused on very quantitative problems, we tend to ignore the qualitative ones, and we do that at our users’ peril. There’s also a tendency, unlike other branches of engineering, to not have much rigor in terms of seeing the trade-offs and compromises in a holistic, systems thinking kind-of-way.

I also want computers and software that fail gracefully, and are friendly and usable, but the path there is very long and very hard and is still beholden to the laws of physics, no matter how much we think we exist in a software world where none of the rules still apply and we can acquire all of these things at no cost to us (the designers) or them (the users).

slim: I’m not saying that the trouble with computers is worse than what we feel elsewhere. What I’m saying is that it’s time we consider the design of computers from the point of view of ethics, not just usability, functionality, or desirability. Why shouldn’t computer programmers and designers adopt the same kind of ethical stance that architects do, for example?

From what I have gathered taking classes in architecture, there’s a tremendous sense of ethics (not morals) and philosophy of life that goes into educating an architect. I never got any of that as a computer scientist — although, truth be told, whether it would have sunk into me at the ripe age of 18 is questionable. But that’s a whole other discussion.

Even in human-centered design, while we talk about designing for human users, we never get deep enough to the heart of what it means to be human. How can we be human-centered, when we don’t even know what it means to be a human? I’m less interested in the computer affording user-friendliness, usability, or graceful failures. That’s a very object-oriented way of looking at this issue. I’m less interested in objects and more interested in relationships. More specifically, I’m interested in finding out how our relationship to the computer can afford the quality of being immersed in an empathic conversation. The kind of quality that, as far as I can tell, makes us become aware of who we are as human beings.

I have nothing against the laws of physics. As a matter of fact, I think the computer should be designed to accept physics as it is. When designers pretend that the laws of physics don’t apply to computers, weird things are bound to happen.

I don’t think physical materials are there to make our lives more convenient or inconvenient. They just are. Yet because of our evolutionary history, there’s something embodied within us — and something we come to embody as we mature — that allows us to have an empathic conversation with them. I want the same qualities to be afforded in our interaction with computation.

david: Now we’re getting somewhere! So there are several interesting points I’ll make here. As to your first question regarding architects and computer designers, these comparisons usually fall down because of the chasm between consumer electronics and buildings, structures, etc. There are major differences attributable to elements such as rate of change and stability. Also, classic failures exist in that world, too, though not in the numbers of computers failing, but that’s probably a problem of sample size more than anything. To me, Frank Lloyd Wright’s cantilevers at Fallingwater are beautiful, but they’re not robust from an engineering standpoint. Hmm, where have I seen that before?

The problem with education that you describe is exactly what I was alluding to earlier with computer science’s focus on the quantitative, but I think this is a maturity issue. What I mean is that architecture is a very old discipline. Designing computers and software, not so much. That evolution would, in theory, happen over time, but it will take a long while. Imagine a world in which there are bachelor’s degrees in human factors and human-computer interaction (HCI). Oh sure, there might be one or two now, but imagine a world where they are on the same plane as computer science (CS) degrees.

But in order for such large-scale changes to happen, there need to be economic incentives. That’s the biggest problem in the entire puzzle here because organizations have no economic incentive to make a radically “better” computer. They’re still making tons of money with “good enough.” I’m hopeful that the rise of mobile computing will give way to better design, as the competitive forces there are much stronger than in the pc business, just as was true for pcs over older mainframes and minis.

But what you seem to be getting at here is a philosophy of computing, just as you describe a philosophy of architecture. That is, not one architect, but an entire movement. This is like Sarah Susanka and the “not so big” movement.11 The conditions for that to exist in computing are not quite as clear to me as in architecture or lifestyle design. It’s also possible with computing, but again, the experience has to be so overwhelmingly great as to cause a parallel economic revolution.

I’d question whether the empathic feeling that you describe between two individuals is even possible with machines. I can’t remember whether this was touched on by Ray Kurzweil in The Age of Spiritual Machines 12 or Don Norman in Emotional Design.13 I don’t know where empathy or compassion originates in the brain, but I’m pretty sure these are very high order functions, and vary individually (i.e., the continuum from sociopath to the Dalai Lama). Indeed, many would say that empathy and compassion are something we must cultivate within ourselves.

Which brings me to another theme: dogs. Could it be that what you describe is what humans seek in dogs? Dogs are selfless, unconditionally loving, warm, whimsical, carefree — exactly the opposite of “weight of the world” that most adults must grapple with on a daily basis. If the computer could provide a dog-like antidote to adulthood, that would be great. Crazy hard. Which describes the saying, “Anything worth doing…” pretty well.

I suspect that Cynthia Breazeal’s work14 at mit may have some links. Also, David Creswell15 at cmu. He has a publication about transcending self-interest. I think the research questions du jour are these:

What are the determinants of a disposition for empathy in humans? Where is empathy encoded in the brain? Is parity an important part of empathy, or can empathy exist effectively without parity?

The latter would be a requirement for an empathic architectural style to succeed in computing since visiting an empathic requirement on the user would be tantamount to slavery. Until you know the answers to those questions, any attempt to get computers to behave as part of an empathic conversation would be difficult, if not impossible, because there is no other model for empathy but humans. Either that, or I’m horribly confused about the animal kingdom.

Keep up the good work. This is likely to turn into a hard slog if it hasn’t already.

——

9 Peter Lucas has shaped MAYA as the premier venue for human- and information-centric product design and research. He co-founded MAYA in 1989 to remove disciplinary boundaries that cause technology to be poorly suited to the needs of individuals and society. His research interests lie at the intersection of advanced technology and human capabilities. He is currently developing a distributed device architecture that is designed to scale to nearly unlimited size, depending primarily on market forces to maintain tractability and global coherence. (MAYA Design, “MAYA Design: Peter Lucas”)

10 Different communication requirements necessitate different network solutions, and these different network protocols can create significant problems of compatibility when networks are interconnected with one another. In order to overcome some of these interconnection problems, the open systems interconnection (OSI) was approved in 1983 as an international standard for communications architecture by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). The OSI model, as shown in the figure, consists of seven layers, each of which is selected to perform a well-defined function at a different level of abstraction. The bottom three layers provide for the timely and correct transfer of data, and the top four ensure that arriving data are recognizable and useful. While all seven layers are usually necessary at each user location, only the bottom three are normally employed at a network node, since nodes are concerned only with timely and correct data transfer from point to point. (Encyclopædia Britannica Online)

11 Through her Not So Big House presentations and book series, Sarah Susanka argues that the sense of “home” people seek has almost nothing to do with quantity and everything to do with quality. She points out that we feel “at home” in our houses when where we live reflects who we are in our hearts. In her book and presentations about The Not So Big Life, she uses this same set of notions to explain that we can feel “at home” in our lives only when what we do reflects who we truly are. Susanka unveils a process for changing the way we live by fully inhabiting each moment of our lives, and by showing up completely in whatever it is we are doing. (Susanka Studios, 2013, “About Sarah”)

12 Ray Kurzweil is a renowned inventor and an international authority on artificial intelligence. In his book Age of Spiritual Machines, he offers a framework for envisioning the twenty-first century—an age in which the marriage of human sensitivity and artificial intelligence fundamentally alters and improves the way we live. Kurzweil argues for a future where computers exceed the memory capacity and computational ability of the human brain by the year 2020 (with human-level capabilities not far behind), where we will be in relationships with automated personalities who will be our teachers, companions, and lovers; and where information is fed straight into our brains along direct neural pathways. (Amazon, 2000)

13 In Emotional Design, Don Norman articulates the profound influence of the feelings that objects evoke, from our willingness to spend thousands of dollars on Gucci bags and Rolex watches, to the impact of emotion on the everyday objects of tomorrow. (Amazon, 2005)

14 Cynthia Breazeal is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology where she founded and directs the Personal Robots Group at the Media Lab. She is a pioneer of social robotics and human robot interaction. (Dr. Cynthia Breazeal, “Biography”)

15 Dr. David Creswell’s research focuses broadly on how the mind and brain influence our physical health and performance. Much of his work examines basic questions about stress and coping, and in understanding how these factors can be modulated through stress reduction interventions. (CMU Psychology Department, “J. David Creswell: CMU Psychology Department”)

Conversation: Acting & Computers

On March 1, 2011 at 10:14 p.m., I posted the first draft of what will eventually be split into the Prologue and the fourth story in the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making.” The story surrounds my experience observing a friend act the role of Blanche in the play A Streetcar Named Desire. While much has changed since then, I wanted to share with you an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which also includes the introduction of the interdisciplinary participants of the conversation.

 

david: I think of it this way: great actors are not really actors, they are “be-ers.” They don’t play the role, they manifest the person encoded in the role, almost to the brink of no return. It’s very dangerous territory and quite a few of them have wound up in mental institutions.

Role-playing implies expectations on reality. What’s great about great acting? The notion that our expectations are up-ended. If all the actor does is establish believability, they haven’t really succeeded, because at some point, they’ve got to go over the edge, else it would be a very boring presentation.

slim: Yes, your critique on believability not being the goal is significant. I am curious if that at all relates back to programming. We write code and expect it to produce the same results every time we run it. Not only that, but we also want others who read the code or install it on their computer to believe this to be true as well.

But the reality is that the circumstances in which the program runs change. For example, the hardware running the code may have a different memory capacity, the memory may be filled in different ways, the hard drive and the power supply may have different capacities, the processor may differ, and there may be other software running at the same time. The reality is much messier.

Yet programming language designers just keep abstracting all that physical reality away. Trying so hard to make it believable that the virtual machine is the real machine (e.g., Java).

david: In my opinion, nothing has done more to destroy computer science education in this country than Java.

I’d like to point out further, that what lies at the center of actors and musicians, generally, and great artists more broadly, is an ability to be present without expectations of the future or nostalgia for the past. What’s weird — and this gets into the metaphysics of quality à la Pirsig — is that, in my opinion, you can feel this presence, but there is no metric for it. That’s what makes us human.

The Eastern concept of duality rears its ugly head in this story on several occasions, and I would suggest that you might as well label it, and dive into it a little, though it’s a book unto itself. This concept resonates through a lot of what you are saying, meaning that it is another perspective on empathy. The perspective you are presenting is inherently dual, as opposed to moving toward a concept of singularity. Again, metaphysical.

slim: I hesitate to frame this as the Eastern duality. Maybe it’s a choice of words or my misinterpretation of what you mean, but I think of it as circularity as opposed to duality. Imagine a constant movement along a continuous domain. When you stop along the path and look from any given vantage point, you consider what you see to be the other — something outside of yourself. But as you move along that path, as you try to empathize, you eventually feel as if you are that other.

This is what actors do, but all they might have is a piece of script — which is just a bunch of words and some simple directions. So we have to figure out what the script actually means, from our own experiences. We have to first translate it, then interpret it. The same goes for playing from sheet music or learning how to dance. We can’t learn how to dance just by watching how the choreographer’s limbs move. We also have to find out where the invisible force is acting inside the body of the choreographer.

A friend of mine did a beautiful performance piece that speaks to this idea of meaning vs. form. She first filmed herself drawing a circle. Then she projected the film on a surface, and filmed herself again, but this time tracing her movement in the film. She would repeat this over and over, each time tracing the movements of the previous recording of herself tracing the previous recording. Each time, the shape of the circle gets more and more distorted. Eventually what gets drawn doesn’t resemble a circle at all.

In essence, the “why” of the movement gets lost, and all that is left is the superficial. To have been able to draw a circle, you would have had to understand why the first drawing was manifested the way it was. And once you have that understanding, you might not even draw a circle, but paint one instead. That is the kind of understanding that can only result from having tried to empathize.

jeff: Practically speaking, though, I think there are limits to empathy. For some things, you really either need a sliver of experience or some non-obvious knowledge that helps you imagine the other perspective.

slim: I think so, too. What experiences did you have in mind?

jeff: Like parenting. If you have a dog, you probably have developed a different level of patience than someone who has never tried to train a pet — or anything for that matter. Someone who has never been in a serious accident, catastrophe, combat, or other very dangerous situation probably has no idea what it means for “time to slow down” or “it happened so fast.”

A similar problem arises when people talk about spiritual or religious experiences. Some people may regard spirituality as “we are all connected somehow” or “there is some higher order in the universe.” My concept of this is “all things are connected (sort of),” but there’s nothing mystical about it, because we already know we live in the same universe. To feel it, though, to really empathize with it and think about everything that is happening all the time, is a different concept.

When dealing with Christianity, I have always been puzzled by the idea of “God speaks to us all.” Is that what is actually happening in experience or is that metaphorically true — as in God is the universe? To even get a grasp on that, I’ve found it never makes any sense to consider Christianity from my perspective but as a box unto itself. And I may also have to consider that it is impossible for me to understand simply because I am me and not having those experiences. Perhaps God only speaks to Christians, which would make a ton of sense. And then there is the complication that faith is belief in precisely what does not make sense.

slim: I think a significant part of what you’re talking about has to do with language. Depending on what words you use to describe your experience, it could conjure up different experiences in different people, and unless people are willing to establish a shared language in the context of the conversation, more often than not people are not having a meaningful conversation.

So when you say you don’t have the experience to know what Christian phrases mean, there’s also a chance that you do have the experience but you don’t use the same words used by the Christians to refer to it, which causes miscommunication, and misunderstanding.

jeff: Maybe. Although, I recently finished an excellent audio-book, Amusing Ourselves to Death by Neil Postman. He argued that the form of the media affects how we conceptualize the world so deeply that we are often not aware of how it changes us or how we are different from people in times past. Can we understand what it might be like to live in a society with no print? Or pre-television America, where people would pay money to hear authors read their books from lecterns, and people would debate in a language that resembled printed prose rather than the plain-speak we use today?

Similarly, Facebook and Twitter afford relatively short updates and lend themselves to trivialities because it’s become so easy and considered non-imposing to spit out snarky one-liners to friends without considering their context (because it is unavailable). And on a blog, when someone is writing a really long reply, they can’t tell whether they’ve jumped too many topics and have lost their readers completely, because the others won’t see the post until after it’s been posted. So perhaps the rise of writing in society due to the Internet can lend itself to an egotistical style of communication, by the very nature of what the medium is.

slim: Well, I don’t find writing to be a particularly egotistical style of communication, but that of course depends on what you mean by “egotistical” and what you mean by “writing.”

I think it’s the space — I don’t just mean physical or even virtual space, I mean the feeling of space, or the relationship between and among participants of an interaction — in which the writing is presented that can make it egotistical or not.

Do you really think the nature of the medium affords an egotistical style of communication? The reason I ask is because I’ve had in-depth, thought-provoking discussions about a variety of topics that stem from just a status update on Facebook. So I’m not yet convinced that the nature of the medium somehow absolutely dictates an egotistical style of communication.

Or maybe what you mean is that it isn’t designed with the goal of facilitating a non-egotistical style of communication, and so it’s likely that many people default to something that takes less effort, which is the “egotistical style”? Am I understanding you?

an-lon: A quick note about Internet communities. The type of negative behavior Jeff described — picking fights and baiting and snark — reminds me a lot of people in their cars on the freeway. It’s as if you’re in a bubble and the usual rules don’t apply. I don’t doubt that some of the asshole drivers out there would be perfectly civil to each other in real life, where feedback is instantaneous and actions don’t go without consequences. Such is the power of anonymity.

That said, the Internet doesn’t have to be that way. In a different thread, I described how one very early Internet community evolved from the fan site of an author who was way ahead of her time: Torey Hayden.1 One thing she had to police in her bulletin boards was language. People were absolutely not allowed to write like Internet chimps.

The reason was that it was an international board, and she insisted that native-English speakers use proper grammar, punctuation, and capitalization in order to make it easier for the non-native English speakers to understand. Obviously, the non-native English speakers were just asked to do their best. The point wasn’t to punish people for poor English grammar per se, it was to punish lazy and avoidable misuse of the English language.

I really think the language rule made a huge difference not just in what people said, but in how they thought. It reminded them that they were holding a conversation, and that there were people on the other end who might carry with them a vastly different set of cultural assumptions and values.

The other notable feature of the message board was that it was predominantly female in an era when that was still fairly rare. The result was an extremely active and close-knit community that debated and joked about everything under the sun.

People did use avatars and screen names, and were anonymous in that sense, but in general, there wasn’t the kind of mindless hit-and-run you see in, say, the comments section of a New York Times article about politics. I only ever lurked, but for regulars, it had a level of addictiveness decades before Facebook. Rather sadly, that’s where the author recently migrated her site.

Her message board had been a significant time commitment for her to maintain, as it was pretty much the force of her personality and the ground rules she established that kept the board civilized. Eventually, she decided that the technology that had been cutting-edge when she created the board was hitting obsolescence, and that Facebook was an easier way to interact with her fans and keep the same conversations going.

I think what I’m saying is that there’s a bit of a founder effect2 to Internet communities. If the pioneers are assholes, everyone thinks they have a right to be an asshole. If there’s a precedent for civility, newcomers can learn to be civil too.

And there’s also no inherent reason for Facebook to be as shallow as it often is. The only reason I’m even here is that when Slim started posting substantive status updates to Facebook, I started writing substantive replies.

slim: Jeff, I think you’re saying that when left to our default vices, the way in which Twitter, Facebook, and other social media sites have been designed can direct us toward a certain kind of communication. Some of the reasons why include the fact that it makes the content seem context-free, which leads to misunderstandings, people making assumptions, passing judgments, or being downright malicious for the fun of it, as opposed to contemplating the meaning behind the content or asking questions in order to further understand and empathize. Please correct me if I am misunderstanding.

jeff: All I’m saying is that the medium does affect how we think. Compared to speech, writing allows for reflection and revision, which makes it easier to achieve coherence. I refer to writing as most people encounter it, through online arguments in basically public forums where people don’t know each other. A person needs to make some assumptions about the person they are trying to convince — or more likely, put down. You also can’t confirm your assumptions as you can in person. Most people who argue online don’t follow the argumentative Principle of Charity.3 It’s much harder to be careful and empathic instead of being abusive. Being abusive can also be fun.

And by written communication, I was also referring to short updates like Twitter and Facebook. Those are almost inherently egotistical, not necessarily bad or harmful, but in the sense that the communication has to start with a motive within. Things appear context-free and then you get inappropriate snarkiness.

an-lon: To get back to your story in the acting class, though, isn’t this the human condition in a nutshell? When listening, I seek to be transparent. When projecting, I seek to be saturated. But the “I” remains.

slim: I resonate with those pairings. They directly map to the pairing I have in mind, which is humility and courage. Can you say more? I would love to hear what you have to say about them.

an-lon: Well, exasperatingly, this was always a visual image first, words second, and an analytical dissection last. The poem below4 is what planted the image in my head.

If thou couldst empty all thyself of self,
Like to a shell dishabited,
Then might He find thee on the Ocean shelf,
And say—“This is not dead,”—
And fill thee with Himself instead.

But thou art all replete with very thou,
And hast such shrewd activity,
That, when He comes, He says—“This is enow
Unto itself—’Twere better let it be:
It is so small and full, there is no room for Me.”

I am not a religious person, and perhaps not spiritual so much as simply omnivorous, but I had an odd sense from the minute Anson introduced himself that the theologian’s viewpoint was important. Perhaps because there are concepts here that can be expressed no other way, except in the language of the sacred and divine? Certainly, the theme of humility comes into play with this poem.

Anyway, the words “replete with very thou” have been part of my internal monologue since forever — whenever I realize I’m getting in my own way of understanding someone else’s viewpoint.

As for being saturated in order to project, exaggeration is the lifeblood of animation. The illusion of life is precisely that — an illusion. Whether the action in question is a walk cycle or a line of dialogue, you can’t just copy what happens in real life. You have to find the essence of what it is, amplify that, and filter out the rest.

Same with drawing caricatures. It’s not enough to simply give a guy a big nose; you really have to find the essence of someone’s facial features and amp that up.

The image in my head was going into Photoshop and cranking up the color saturation of an image, but the metaphor it represents is the exaggeration that is one of the pillars of character animation.

slim: I’m intrigued by what you said about exaggeration and animation.

Is there a degree of exaggeration that is appropriate? In other words, could it be over-done? Where is this need for “amplification” coming from and where is it going? Is the kind of exaggeration you’re talking about related to generating interest in the eye of the viewer? Or is it functional (i.e., if you don’t exaggerate it doesn’t look real)? Or all of the above? Is this really about saturation or contrast?

I don’t know enough about animation to have any insight into this.

an-lon: Exaggeration is one of the 12 Principles of Animation,5 as developed by Disney during their golden age. If you’re curious, I’d highly recommend a look at the first chapter of The Illusion of Life by Frank Thomas and Ollie Johnston. This is pretty much the Bible for anyone studying animation today, but it’s gorgeously illustrated and extremely readable for a general audience as well.

Here’s the intro paragraph to the “Exaggeration” section:

There was some confusion among the animators when Walt first asked for more realism and then criticized the result because it was not exaggerated enough. In Walt’s mind, there was probably no difference. 

He believed in going to the heart of anything and developing the essence of what he found. If a character was to be sad, make him sadder; bright, make him brighter; worried, more worried; wild, make him wilder. 

Some of the artists had felt that “exaggeration” meant a more distorted drawing, or an action so violent it was disturbing. They found they had missed the point. When Walt asked for realism, he wanted a caricature of realism.

In answer to your specific question of whether it can be overdone: it’s surprisingly difficult to overdo the exaggeration within a drawing, if it’s going in the right direction. If the exaggeration is just going in a random direction, it looks gross and distorted almost immediately, but if it’s going towards rather than away from the heart of the action, you can get away with a really surprising amount of distortion before it falls apart.

A lot of times, we’re given the advice to “push” the pose till it breaks and then back off, rather than inching incrementally toward that imaginary breaking point.

And “is it functional (i.e., if you don’t exaggerate it doesn’t look real)?” Yes, absolutely. Rotoscoping (tracing) live action reference frame by frame almost always comes out looking strangely dead. It takes a human eye to amplify the important parts and tone down the unimportant parts, even when the goal is to be completely unobtrusive about it.

Exaggeration is in pretty much every frame of any animated movie, 2D or 3D. The 12 principles are all so fundamental, they’re in every shot. Sometimes it’s subtle, as it has to be with the handsome prince or the beautiful princess, and sometimes it’s wildly exaggerated, as with the crazy animal sidekicks, but it really is the lifeblood of animation.

When I was first talking about saturation and contrast, it was just at the level of metaphor. What I’m talking about now, you can see in the roughest of pencil tests without any color.

——

1 Torey is the author of three novels, eight non-fiction books about her experiences working with troubled children and two children’s books. In a writing career that has spanned more than three decades, her books have been worldwide best-sellers, translated into more than 35 languages and appearing as films, stage productions, an opera, and even Kabuki theatre. (Hayden, “The Official Torey Hayden Website”)

“The Official Torey Hayden Website,” Torey Hayden, accessed January 19, 2013, http://www.torey-hayden.com.

2 In genetics, the Founder Principle is a principle whereby a daughter population or migrant population may differ in genetic composition from its parent population because the founders of the daughter population were not a representative sample of the parent population. For example, if only blue-eyed inhabitants of a town whose residents included brown-eyed people decided to found a new town, their descendants would all be blue-eyed. (Encyclopædia Britannica Online)

Encyclopædia Britannica Online, s. v. “Founder Principle,” accessed December 29, 2012, http://www.britannica.com/EBchecked/topic/214776/founder-principle.

3 The Principle of Charity is a methodological presumption made in seeking to understand a point of view whereby we seek to understand that view in its strongest, most persuasive form before subjecting the view to evaluation. While suspending our own beliefs, we seek a sympathetic understanding of the new idea or ideas. We assume for the moment the new ideas are true even though our initial reaction is to disagree; we seek to tolerate ambiguity for the larger aim of understanding ideas which might prove useful and helpful. Emphasis is placed on seeking to understand rather than on seeking contradictions or difficulties. We seek to understand the ideas in their most persuasive form and actively attempt to resolve contradictions. If more than one view is presented, we choose the one that appears the most cogent. (Oriental Philosophy, “The Principle of Charity”)

“The Principle of Charity,” accessed January 19, 2013, http://philosophy.lander.edu/oriental/charity.html.

4 The poem is called “Indwelling” by T. E. Brown. (Brown, “Indwelling”)

“Indwelling,” accessed December 28, 2012, http://www.isle-of-man.com/manxnotebook/people/writers/teb/p082b.htm.

5 The 12 basic principles of animation are: squash and stretch, anticipation, staging, straight ahead action and pose to pose, follow through and overlapping action, slow in and slow out, arcs, secondary action, timing, exaggeration, solid drawing, and appeal. (Thomas, 1981, 47–69)

Conversation: Empathy & Computers

On February 22, 2011 at 4:48 p.m., I set up a private blog, where I could regularly engage in conversation with a group of friends from across disciplines. The process was to work as follows. I would post a piece of writing on the blog, they would comment on it, then based on their comment, I would not only revise the writing, but also feel encouraged and inspired to keep writing.

The outcome of the conversations was the book “Realizing Empathy: An Inquiry Into the Meaning of Making,” which was successfully kickstarted on March 12, 2012. While much has changed since then, with their permission, I would like to share with you an edited version of several of those conversations, regrouped and rearranged for clarity and relevance. Here is the first installment.

 

an-lon: Chan1, can you start with a round of introductions? Who are they, the people reading this blog? How do they know you?

me: Oh, of course! Yes, let’s do that. Perhaps we can say what we do—not what our titles are—what our interests are, and where we are coming from. I think this will do wonders in enriching the conversation.

And before I forget, I just wanted to sincerely thank you all for participating in this journey of book writing. The past two-and-a-half years have been a time of divergence. It was time I desperately needed to get away from my previous environment, to find new ways of thinking.

In retrospect, the question I was ultimately after was the question of what makes us human. Much like the pioneers of computer science, I started wanting to understand better how “thinking” works, what “consciousness” is, how we “learn,” how we “understand” something, and what intelligence means. But unlike some of these pioneers, I was not interested in asking how we can abstract the “humanness” from ourselves, to disembody it, so as to put it in some other body, and to debate whether that other being is also human. Honestly, I can’t see why this line of questioning is valuable. What this kind of disembodied attempt at manifesting humanness can do, at best, is superficially mimic or simulate what one may mistakenly believe a human being to be, without any real understanding of what it actually means to be human.

At the same time, I had a deep attachment to the computer. In my professional life, I have spent a significant portion of my career programming the computer. In this process, I have often found myself totally immersed in thinking from its perspective and not mine. I would dare say that I empathize with it. Yes, I know that most of us think we only empathize with living beings. But from my experience at the Rhode Island School of Design (RISD), I’m starting to question this assumption. Because I’ve discovered many similarities between the process of trying to empathize with human beings and that of trying to make physical things. And that’s precisely the vantage point from which I will start to write.

As I wrote in my e-mail invitation to you, my goal is to write in the company of a handful of people I feel comfortable sharing my ideas with. I then hope to get feedback, revise, and eventually integrate everything into my thesis book, which I will produce for graduation.

I would absolutely love it if you all could be candid in giving me feedback. I will inevitably make some strong statements that may seem controversial. I expect counterarguments and tangential references. I firmly believe that it is from the experience of contrasts — of seemingly unrelated or different experiences — that new ways of thinking can arise. With that, I now pass the mic to you all. Thanks!

joonkoo: Thank you for inviting me to this very interesting forum of discussion. I don’t know how many of us are here, but let me start with the mic. I’m Joonkoo Park. I know Seung Chan hyung2 from high school; I went to the International School of Beijing (ISB) from 1994 to 1997. I’ve always been proud of Seung Chan for his free-minded spirit. He does what he wants to do, and he does it well. So I was really glad to hear that he had decided to study fine arts, and it looks like that was a real success. I was glad to join this discussion because I want to do whatever I can to help him organize his thoughts and formulate ideas.

I study the cognitive neuroscience of high-level vision. I am in the final stage of my Ph.D. at the University of Michigan. Most of my work centers on the neural organization and mechanisms of object recognition — of faces, letters, and numbers. However, I’m getting more interested in numerical cognition, and I am planning to study the neural basis of number sense during my postdoc.

That said, I’m trained as an experimentalist, and my interests are pretty focused — as many Ph.D. students are either forced, or trained, to be. But any question related to how the mind works triggers my interest, and I hope to be of help by bringing some neuroscientific and psychological ideas into Seung Chan’s thesis and the discussion.

david: My name is David Watson. I like to have “an attitude of gratitude,” though I think it gets lost in a lot of what I do or say. Slim knows this from working with me for a couple of years at MAYA Design in Pittsburgh. I have a deep need to understand reality in its purest form, to seek the highest levels of production quality even when they don’t matter to anyone but me, and to achieve symmetry in literally everything.

They say that at the root of engineering is this “truth seeking” and you’ll see elements of that here from me. I apologize in advance for my forthrightness. I have a way of speaking that’ll make you think I think I’ve known you for 20 years.

I like the way Slim has defined the introduction, because while I work in software, I don’t like to define anyone singularly, certainly not myself, and I like to think of this more as “creative mediums of expression.” I’m a musician, a photographer, a skier, a cyclist, a runner, a thinker, a reader, and a writer. And I’m glad we’re not all the same. Nice to meet you all. Cheers.

jeff: Hi, my name is Jeff Wong. Slim and I are intellectual buddies from Pittsburgh. He was a working man on the South Side of Pittsburgh at MAYA Design, and I was a Ph.D. student in human-computer interaction (HCI) at Carnegie Mellon University (CMU).

I will be bringing my background in computer science and cognitive science. Slim says I can bring some perspective from theoretical computer science. I also know some of the conceptual history of artificial intelligence (AI), psychology, and cognitive science; I have some familiarity with psychiatry and phenomenology, tidbits of religion and spirituality, and some dabbling experience in philosophical thinking. I don’t know how deep my knowledge is in these areas, but I think I can at least point to relevant ideas and prior explorations that have happened in them.

anson: Hi everyone. My name is Anson Ann. I really thank Slim for inviting me to this group. I feel so honored to be able to take part in this conversation. Reading the background stories, expertise, and interests of you folks really humbles me. It does start to feel a bit like a mini TED here!

Well, I first met Slim at CMU back in 1995. Although we were not in the same department — he was in computer science, and I was in electrical and computer engineering — our dorms were close to each other, and we had a common interest: music and guitars. We used to take a cab together to a local musical-instrument store and drool over the guitars and music gadgets. We first looked at guitars together; then he gradually moved on to DJing, and I moved on to synthesizers. Anyway, no matter what we do now, I think both of us will always have a musician inside us.

After CMU, I worked at BBN Technologies — now Raytheon — in Boston as a speech software specialist/scientist. We customized speech software solutions for the U.S. government intelligence community, enabling them to do speech recognition, machine translation, and information extraction on Arabic, Chinese, Spanish, and English broadcast news from all over the world. Much of my work involved language-model training, pattern recognition, signal processing, and some human-computer interaction.

Then about six years ago, a big turning point happened in my life. I sensed a calling from God to become a pastor. Just like Slim, who took on the challenge of switching from science to fine arts, I quit my software job and enrolled in a theological school. It was quite a big stretch for me, for studying the humanities requires a very different temperament. I realized that my mind, trained for engineering, preciseness, and comprehensiveness, wasn’t ready to deal with the ambiguity, complexity, tension, and paradoxes that you often find in history, literature, religion, and philosophy.

I have just finished my studies, and now I’m an Anglican priest pastoring at the Anglican Network Church of the Good Shepherd in Vancouver, Canada. As a pastor, what I hope to contribute to this conversation is my anthropological understanding (i.e., what it means to be human).

First, I am going to be upfront about my faith and convictions. I will speak from a Christian perspective and understanding of what it means to be human, for I believe anthropology stems from theology. A core doctrine in my faith tradition is that of the Holy Trinity: that God, who revealed himself to us in history, is known to be a three-in-one relationship, a unity-in-diversity, a dynamic-yet-unchanging entity, a harmonious dance in reciprocal love that overflows with creativity and creational power.

Since the entire universe is created and sustained by a relational being, the very core of our being and reality is supposed to be relational. And as we human beings are made in the image of this relational God, so we are also ontologically relational.

We are made to relate, to empathize, to love and be loved. There is something intrinsic about human beings that we want to understand and be understood. I believe the torture of imprisonment is not just lacking freedom, but more about losing the ability to relate to others and the outside world. Relational beings unable to relate are just like fish out of water.

Anyway, I know not all of you are religious or spiritual, so you may or may not share my perspective, but I hope I can, in some way, help inspire Chan to continue exploring this topic of empathy. I’m still very much a geek at heart, so I also hope I can contribute to the other aspects of this conversation: computers, programming, and human-computer interaction.

I look forward to seeing Chan write more. Because empathy is about listening first, isn’t it? I hope we won’t flood his comments area with too many of our own ideas, but let him express what he wants to say first, then respond to him accordingly.

an-lon: Nice to meet you, I’m An-Lon Chen. I, too, went to the International School of Beijing (ISB) with Chan, for a semester of high school in 1994: I was a senior, he was a junior, and we worked on the yearbook together. That would be the end of the story right there, except ISBers tend to be a close-knit bunch, and it seems like for the past decade and a half we’ve always had each other’s contact information via some mailing list or another without ever actually interacting personally.

Let’s just say that my relevance to this project is that over the years I’ve gone from comparative literature to computer science to user interface design to computer graphics to character animation. I am now a full-time student at AnimationMentor. Prior to that I was at DreamWorks working on Shrek 4, and before that on Mummy 3 at Digital Domain.

Perhaps more than anyone else I know, I’ve had to approach computers and computer science as much from an anthropological perspective as from a technical one — deciphering a subculture and a jargon in order to pass as a native.

I’m a geek at heart, with my fair share of the stereotypical social inadequacies. I’m pretty sure I was born socially tone-deaf, and only as an adult did I begin to figure out the nuances of interacting with others. That said, pretty much every big break in my unlikely computer science career has come from possessing some unusual degree of empathy. First, the amateur exercise in anthropology that drew me to geek culture; second, the turn toward user interface development, which brought me to LA and the film/VFX industry; third, the current foray into character animation, which is all about convincing audiences that dead pixels can walk, talk, laugh, and cry.

Point being, I care personally about Chan’s topic: computers and empathy. I have no earthly idea where this blog is going or how it’s going to become a thesis, but I’m following it because at least some aspects of it touch on things that I, too, have been wondering all my life.

But now, can we start from the top? Empathy? With computers?

me: Yes, with computers.

an-lon: The best developer is inevitably quite the computer whisperer, of course, but I would have never actually thought of that rapport as empathy.

me: What would you have thought of it as then?

an-lon: Two synonyms came to mind. One was grokking, the other acculturation. Grokking because, well, I’m a sci-fi geek and I like having a word that expresses deep understanding — truly “getting it.” Both my parents are scientists, and while they are both fairly technically savvy — my dad has written Fortran and assembly-language code, and my mom uses Photoshop and Illustrator for graphics and charts in scientific journals — I’m not really sure either of them has ever grokked computers; their mindsets are a little too unyielding to ever completely get on the computer’s wavelength, so they often end up fighting the computer in unnecessary ways.

For example, if my mom gets a PowerPoint concept stuck in her head, she invariably has trouble figuring out the Photoshop equivalent because she’s speaking a different language without even knowing it. Both my parents are extremely good within limited contexts, but don’t have the particular empathy required to troubleshoot — learning a new domain comes slowly.

Wait…

Ha ha ha ha ha ha!

I just looked up “grok” on Wikipedia. It says, to grok is “to share the same reality or line of thinking with another physical or conceptual entity. Author Robert A. Heinlein coined the term in his best-selling 1961 book Stranger in a Strange Land. In Heinlein’s view, grokking is the intermingling of intelligence that necessarily affects both the observer and the observed.”

But, it gets even better.

It also says that the Oxford English Dictionary defines grok as “to understand intuitively or by empathy; to establish rapport with” and “to empathize or communicate sympathetically (with); also, to experience enjoyment.”

Drum roll, loud banging of cymbals. (Smiles) I actually wasn’t as far away from your wavelength as I thought. I had turned to Wikipedia as a joke, but it did hit a nerve. What I took away from it is that it’s never a purely intellectual exercise to really truly understand something, be it an immediate piece of code or an underlying computer science concept.

me: Exactly! There is an inextricable link between empathy and the act of learning that is non-obvious to most people. And I think it’s non-obvious because we’re used to separating the cognitive from the emotional, or the mind from the body. For example, I’ve heard many people say that empathizing is not the same as understanding. On the surface, there’s nothing significant about that statement. Of course they’re not the same; if they were, why would we have two separate words? But it becomes significant once you realize that what people mean is that understanding is inferior to, or shallower than, empathizing. Well, that depends on how you define the two words. It is only so if you count inaccurate understandings as a legitimate form of understanding. I would argue that an accurate understanding of an other cannot be had without having tried to empathize with them.

anson: That reminds me of people telling me how I have an extraordinary amount of patience in front of a computer. I don’t know if it’s because I know what the computer is doing inside, but I can be patient even when it’s slow or stalling. Also, whenever my dad encounters a computer problem, he always asks, “How stupid is this computer! Why can’t it do this and that?” And I always feel like I’m defending the computer, saying, “It just can’t… this is what it can and cannot do. Don’t be too hard on it. Be patient. It’s still crunching numbers. There’s nothing you can do except reboot the machine. And here’s a way to work around its limitations.” Yada yada yada.

Isn’t that also related to empathy?

me: Yes, I would definitely say so. I don’t think you can have patience with the computer if you cannot empathize with it. We’re much more likely to lose patience with the computer when we cannot empathize with it. Just think of a time when you got the hourglass or the beach ball for no apparent reason. That’s very difficult to empathize with. It’s like interacting with someone who is too pissed off to tell you what is going on.

I think the best example of people trying to empathize with the computer is when they’re debugging. When we debug a software program, we are trying to figure out why the program is behaving the way it is, and our heads fill up with nothing but our understanding of the state and configuration of the program, not to mention the various hardware mechanisms like memory, the processor, and external storage. What we’re doing is trying to think as if we were the computer.
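To make that concrete, here is a minimal sketch — the function and values are hypothetical, just for illustration — of what that mental simulation looks like. The comments trace the program’s state the way I would in my head:

```python
def average(numbers):
    # Intended behavior: return the arithmetic mean of a list of numbers.
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)

# To debug a surprising crash, I "become" the computer and trace its state:
#   average([1, 2, 3]) -> total goes 0, 1, 3, 6 -> 6 / 3 -> 2.0  (fine)
#   average([])        -> total = 0, len(numbers) = 0 -> division by zero
# The machine isn't being stupid; it is doing exactly what I asked of it.
```

The bug only becomes obvious once you stop asking what the code was meant to do and start tracing what the machine actually does, one state change at a time.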

an-lon: I couldn’t agree more about the debugging. Perhaps we use our empathetic faculties for debugging because our brain cells weren’t equipped to access that deeper cloud of intuition any other way. Our brains have been wired for millennia to interact with fellow humans and, as far as I’m concerned, it’s an extremely useful act of hijacking to tap that empathic cloud in order to outsmart a machine.

And just to be clear, I’m not talking about anthropomorphizing the machine, I’m talking about accessing our own preexisting, well-developed resource of empathic faculties to interact with it. Oddly, I anthropomorphize just about everything under the sun: teddy bears, disappearing keys, food that’s been in the fridge for too long. But I’ve never been tempted to anthropomorphize computers.

me: Didn’t you say there were two words that came to mind? What was the other one besides “grokking”?

an-lon: Oh, acculturation.

me: Why did that come to mind?

an-lon: Because computers and programming languages are created by humans. The magic that bridges the abstraction of 0’s and 1’s with human neurons is language. Windows and mice are metaphors—picture a window in a house, picture a mouse running around on its four little paws, now marvel at the metaphor that got us where we are today! Zipping, unzipping, bugs, these are all metaphors. Computer concepts are all abstract until we give them names and map them to something we can understand. Even the act of stepping through code using a debugger is a concession to a human need for a linear story line.

Learning how to program is similar to language acquisition — not because computer languages are anything like natural languages (allowing C++ to fulfill a language requirement like French or Spanish would make no sense) but because learning how to write code is very much a process of acculturation. Just as it’s pretty much impossible, or at the very least pointless, to learn a natural language without a cultural context, it’s impossible to write code well without absorbing its many subcultures. Best practices, conventions, idioms, and design patterns are all cultural constructs within a human community, not semantic ones within the machine.

Put in that light, the idea of empathy with computers is staggeringly mundane — we’re talking about forming a rapport with the community of their very human creators, not with a sentient and malevolent HAL. And yet my rapport with, say, Linus Torvalds goes through multiple layers of translation, not the least of which is through the machine and back. And if I were to go out and write a Linux patch, I’d damn well better have empathy with the Linux operating system so I can design something appropriate… and yet it doesn’t feel like real empathy. It’s not real the way a spoken word is real, a heartbeat is real, the touch of a hand is real. And yet, to anyone who’s ever gone into a programming trance and been absorbed to the point of forgetting to eat, sleep, or shower, it’s profoundly real — perhaps more real than reality. It’s an emotional state as much as a physical one.

david: Slim, how are you going to treat this subject in a secular fashion? It’s going to be very difficult, because so much of it leans toward feeling and emotion as opposed to logic, science, neurons, etc. It barks up the quality/quantity tree that is split down the center and very divisive.

me: Hmmm… I never thought of this as non-secular. Metaphysical, yes. Are you equating the two?

david: Well, just as Zen and the Art of Motorcycle Maintenance is not about Zen, I don’t think what you’re after is about empathy. It’s deeper than that. To empathize with the computer is to anthropomorphize. To anthropomorphize is to visit our expectations on reality. Computers aren’t humans and they never will be. Can man make a better human? Probably. Will that human have better distinguishing human characteristics? No. In the very same sense that James Howard Kunstler argued that architecture was moving toward a loss of a sense of place — which I agree with — robotics goes down the same boring path, most likely because it has no other choice. If it didn’t, we’d be defining an engine of individuation, and I’m pretty sure nobody is doing that. And that’s the miracle of humanness.

jeff: Speaking of humanness: historically, artificial intelligence referred to abilities that we thought computers were incapable of. However, as solutions to problems began to appear, these abilities — like chess playing and language recognition — were no longer considered intelligence. How computer programs tackle intelligent tasks is always different from how humans actually do them: sometimes better or more thorough, but at other times seemingly stupid.

me: What do you mean?

jeff: Why does AI trip up on “special cases”? Because the way we program intelligence is by making problems formal (i.e., accessible to the computer). When problems are formalized, they can be solved by rules. Where rules don’t quite work, we have rules for selecting rules, or rules for creating the rules to select rules with (i.e., machine learning). I think we approach problems this way because our way of accessing how we think, and of communicating that to other people, is the framework of rationality. I think rationality is primarily a structure for thinking about thinking.

How we think isn’t quite rational. It’s more like rational++. When people appear rational, we can empathize with them. Irrational people, too, but not as easily. Part of anger is not knowing why. If you know how the machine works, you can be angry at the situation, but it’s not the machine’s fault; it’s whatever is broken or not working inside. Does anger require a thing to be angry at? If you’re angry at a thing and you don’t know how it works, it might as well have a mind of its own — it makes no difference to you. Consider the University of Texas clock tower shooter. You can imagine being angry at him for shooting someone you know, but then you find out he had a tumor and had requested an autopsy in a note he left at his house with his dead family. Somehow that kind of situation provokes a bit less anger, because you know why.

Understanding the mechanism changes how we think about the thing. For example, I remember being excited about taking an AI class and learning the magic. But it turned out to be a whole bunch of hacks — or so it appeared. It’s no longer magic when you know how it works. Now, I don’t quite understand what you mean by empathizing with computers. What you’re doing when programming is simulating your program on your model of the programming-language runtime. Yes, it’s sort of like empathy, but I thought empathy was being able to feel what another feels. We empathize with real people based on our concepts and experiences of other people. This is the theory of mind,3 which autistic people lack. For that, they are alone in the universe, because other people simply don’t exist. So you need models of other people to empathize with them. Still, empathy is a feeling about feeling. I still don’t get empathizing with computers. My current idea of what you might be thinking seems wrong.

joonkoo: I second Jeff on this point. I don’t quite understand what it means to empathize with computers, either. I don’t necessarily think that you need to have answers to all these questions now. Some of them are certainly empirical questions, and worth investigating more. But, I would like to have a better grasp of your idea of empathizing with computers, and I still can’t quite get it. Perhaps it will get explained in your future writings?

me: Yes, it will. Although, I am starting to realize that I’ll be up against a lot of criticism, because some people may be equating the theory of mind with empathy.

Allow me to first write about my experience with physical materials, and I hope that will better explain why I think empathy is not exclusive to human relationships.

(Smiles) But most importantly—my God!—I can’t tell you all how much I love this tightly-knit discussion environment!

——

1 Some of my old friends call me Chan. It is the latter half of my full first name, Seung Chan. I had originally adopted the name to accommodate those who could not pronounce the first half of my name. But I have since abandoned this name because I feel that it robs me of my full identity. Those who cannot pronounce my name call me Slim—made from concatenating the first letter of my first name to my last name — which is a new identity I have constructed since my arrival in the U.S. How did the name come about? It was my e-mail username in college.

2 Hyung means older brother in the Korean language.

3 Theory of mind is the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge—to oneself and others and to understand that others have beliefs, desires, and intentions that are different from one’s own. It is typically assumed that others have minds by analogy with one’s own, and based on the reciprocal nature of social interaction, as observed in joint attention, the functional use of language, and understanding of others’ emotions and actions. (Premack and Woodruff, 1978, 515–526) (Baron-Cohen, 1991, 233–251) (Bruner, 1981, 41–56)

Premack, D. G.; Woodruff, G. (1978). “Does the Chimpanzee Have a Theory of Mind?” Behavioral and Brain Sciences 1 (4): 515–526.

Baron-Cohen, S. (1991). “Precursors to a Theory of Mind: Understanding Attention in Others.” In A. Whiten (Ed.), Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mindreading (pp. 233–251). Oxford: Basil Blackwell.

Bruner, J. S. (1981). “Intention in the Structure of Action and Interaction.” In L. P. Lipsitt & C. K. Rovee-Collier (Eds.), Advances in Infancy Research, Vol. 1 (pp. 41–56). Norwood, NJ: Ablex Publishing Corporation.

Gordon, R. M. (1996). “‘Radical’ Simulationism.” In P. Carruthers & P. K. Smith (Eds.), Theories of Theories of Mind (pp. 59–74). Cambridge: Cambridge University Press.