Subjective Model of Self and Other

When we empathize, we feel as if we are connected or at one. To capture this subjective quality of the event, the traditional model of a static self in relation to an other is inadequate. A more useful model will be one that accommodates a dynamic way of thinking about the relationship.

One way to model this is as follows.

Say we represented our conscious and subconscious processing of stimuli as the center of an arbitrary plane. We can then arrange the various sources of stimuli as dots surrounding this center, and place them near or far depending on how much we can empathize with them at any particular moment in time. In other words, the closer a dot is to the center, the more we perceive its source as being connected with the self. The farther a dot is from the center, the less we perceive its source as being connected with the self.

In this model, if we’re experiencing flow while playing a musical instrument, the instrument would be placed close to the center. The same would happen if we were up on a mountain, immersed in nature, feeling at one with it.

On the other hand, if we cannot understand the thoughts we’re having, those thoughts will be placed far away from the center.

Thus, an implication of this model is that much of what we traditionally consider to be intrinsically connected with the “self” can, at times, be an “other” with which we cannot empathize. Moreover, what we traditionally consider to be an “other” can, at times, be connected with the “self.”

In other words, what constitutes the self and the other can change from moment to moment, as our relationship to the various sources of stimuli changes.
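
To make the geometry of this model concrete, here is a toy sketch in Python. The sources and coordinates are invented purely for illustration; nothing in the model depends on the specific numbers, only on the idea that a dot’s distance from the center stands for how disconnected its source feels at a given moment.

```python
import math

# Each source of stimuli is a dot on a plane centered on the self.
# Its distance from the origin stands for how disconnected from the
# self it feels at this particular moment; positions shift over time.
stimuli = {
    "musical instrument (during flow)": (0.2, 0.1),  # near: felt as self
    "mountain we feel at one with":     (0.4, 0.3),  # near: felt as self
    "thought we cannot understand":     (4.0, 3.0),  # far: felt as other
}

def distance_from_self(point):
    """Smaller distance = more 'connected or at one' right now."""
    x, y = point
    return math.hypot(x, y)

# A moment later the same source may sit elsewhere on the plane, which
# is the sense in which self and other can trade places across time.
for source, point in sorted(stimuli.items(), key=lambda kv: distance_from_self(kv[1])):
    print(f"{distance_from_self(point):4.1f}  {source}")
```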

In light of this, I will now modify the definition of empathizing as follows:

Empathizing is a state of feeling as if we are connected or at one. Not empathizing is a state of feeling as if we are disconnected or at odds with an “other.” These feelings may last a brief moment or a prolonged duration of time and the “other” may be anything we can perceive as an object, be it a human being, an art object, or an idea.

Two Ways We Realize Empathy

We now have a definition of empathy and empathizing as follows:

Empathizing is a state of feeling as if we are connected or at one. Not empathizing is a state of feeling as if we are disconnected or at odds with an “other.” These feelings may last a brief moment or a prolonged duration of time and the “other” may be anything we can perceive as an object, be it a human being, an art object, or an idea.

Empathy is a word invented to explain what makes it possible for us to move from not empathizing to empathizing.

Realizing empathy is a moment when we have a realization that moves us from not empathizing to empathizing. We know when we experience this, because there is a resonance we feel that moves us, even if only a tiny bit. With the experience, we may also find ourselves nodding our head or making one of three exclamations: Ah ha! Ah… or Ha ha ha!1 To be clear, this is not to say that these behaviors are the experience. It’s simply to say that the experience often inspires them.

There are two ways in which we can realize empathy. One is for us to realize empathy instantly, without effort. The other is for us to make an effort that makes it more likely we will realize empathy.

Think of a friend you’ve known for a long time. Think of a time when without her saying a single word, you were able to tell precisely what she was thinking, feeling, wanting, or needing. Maybe you finished her sentences or said exactly the thing that she needed to hear when she needed to hear it. You “just knew.” Those are all examples of moments when you realized empathy instantly.

Now imagine encountering someone you were unfamiliar with. Let’s also say that she was difficult to understand. How would you feel? Awkward? Confused? Frustrated? Uncomfortable? If so, it is unlikely that your empathy would realize in relation to her, even if you had the will. Why? Because there exists a kind of conflict in the relationship that provides resistance to the process.

You see, the basic feeling that precedes feelings like awkwardness, confusion, frustration, and discomfort is that of dissonance: a feeling you get when you’re faced with two or more seemingly conflicting ideas, viewpoints, beliefs, values, or emotions.2 “What kind of conflict are you talking about?” you may ask. I’m talking about the conflict between your expectations of the other, and the other as they are. If you expect the other to be social, and they are not, you may feel awkward. If you expect the other to explain things a certain way, and they don’t, you may feel confused. If you expect the other to respond a certain way to your actions, and they don’t, you may feel frustrated. All of these are examples of conflicts. It’s just that we rarely think of them as such. Why? Because we want the world to work the way we expect it to.

Some seem to think this is because we’re intrinsically self-centered.3 I have a slightly different take, which is that the necessary and sufficient conditions were not fulfilled at the moment of interface to facilitate an empathic conversation between us and that other. It’s unrealistic to expect (irony intended) anyone to be able to realize empathy in relation to an unfamiliar other at a moment’s notice without such facilitation.

——

1 Koestler, Arthur. The Act of Creation. Pan Books, 1969.

2 Festinger, Leon. A Theory of Cognitive Dissonance. Stanford, Calif.: Stanford University Press, 1985.

3 Lorenz, Konrad. On Aggression. Hoboken: Routledge, 2002.

Understanding is Never Perfect

Some seem to think that empathizing requires that we understand an “other” 100%.

First of all, as I mentioned previously, sometimes we can empathize without any understanding whatsoever.

Second of all, I do not know of any way we can objectively quantify and measure understanding. Until such means become available, we cannot claim 100% accuracy and precision.

Finally, while accuracy and precision are important, I’m not sure such absolute achievement is necessary or even desirable. A far more useful measure would be to consider whether our understanding is sufficient for a particular context.

Let us revisit the definition of empathizing I put forth previously.

Empathizing is an experience, where we feel as if we are connected or at one instead of as if we are disconnected or at odds.

Now, the keyword here is “as if,”1 because what we are dealing with is a relational yet subjective experience. The experience alone does not empower us to objectively claim anything about the other. Interestingly enough, neither can they. All we have are two related yet subjective experiences.

Take the story of me in conversation with my bipolar friend. In that situation, it was important that I try to understand my friend before I could empathize with her. So I did my best to verify my understanding of her to achieve greater accuracy. Did I understand her 100%? I don’t know.

All I did was understand her enough.

Why was that enough? Because she felt understood. How do I know that? Because she said “thank you for understanding me.” I’d say that was sufficient for that particular context.2

Is there more I could understand that would improve the accuracy and precision with which I understand her? Sure. There will always be more.3

Humility is a virtue when it comes to understanding anything or anyone. The history of science is marked by significant paradigm shifts showing that previous understanding was either plain wrong or incomplete. Understanding is best framed as an ongoing pursuit.

——

1 The more we think we “know” an other, or that we have “fully” understood or embodied them, the more likely it is for us to stop wanting or trying to learn about them further. This means that our empathy in relation to them will be lowered. If we value the continued improvement of accuracy and precision with which we empathize with an other, it is far more desirable to frame the act of realizing empathy as an ongoing pursuit rather than a finite goal to be reached.

Renowned psychologist Carl Rogers also mentions the “as if” condition in his work, in order to caution therapists not to get enveloped in or overwhelmed by the other’s emotions, which would not be helpful to either party.

2 This is called intersubjective verifiability.

3 Take the example I gave in my last post about parent-child relationships. Let’s say we tweak the example so that the child thinks she does understand her parents. There is still a good chance that after a decade or so, she will realize that in fact she did not. At least not as accurately and as precisely as she imagined. Without the experience her parents had, she had no choice but to miss some of the more nuanced and subtle meanings behind their words.

Cannot Empathize? Doesn’t Mean You Lack Empathy.

Previously, I defined empathizing and not empathizing as follows:

Empathizing is to be in a state of feeling as if we are connected or at one. Not empathizing is to be in a state of feeling as if we are disconnected or at odds with an “other.” These feelings may last a brief moment or a prolonged duration of time and the “other” may be anything we can perceive as an object, be it a human being, an art object, or an idea.

Let us now dive into the part about the “moment” or the significance of “duration” in this definition.

Simply put, in a span of, say, five minutes, we may continuously move back and forth between these two states: empathizing and not empathizing. There’s no saying how long we stay in which state. Maybe we empathize for four minutes, then not for one. Maybe we empathize for two minutes, not empathize for the next 30 seconds, then empathize for the next minute, and so on. We cannot predict.

We can also stay stuck in one state for a long time.

Have you ever had an experience, where you, as a teenager, could not empathize with your parents, because you could not understand the advice they were giving you?

I have.

But have you also had an experience where a decade or so passed by and you could empathize with them, because you could finally understand why they were giving you the advice?1

This has happened to me many times over.2

If this is something you have also experienced, it shouldn’t be a surprise when I say that depending on which “other” you’re trying to empathize with (e.g., your parents), through what medium (e.g., the advice they gave you in spoken words), in what context (e.g., yourself at the particular moment3 of hearing the advice), it may be more or less difficult to empathize.

You see, contrary to popular belief, empathy is not something we either have lots of or lack.4 Even if we had empathy and wanted to empathize, there are times we simply cannot.

Given our definitions for not empathizing and empathizing, let us now remember the definition I put forth for empathy.

Empathy is a word invented to explain what makes it possible for us to move from not empathizing to empathizing.

As you can see, I model empathy as a possibility. In light of what we’ve talked about in this article, it is a possibility that gets realized if and only if a set of conditions is fulfilled at the particular moment of interface between self and other.

In other words, if you find it easy to empathize with someone, it’s not merely because you have empathy, but because the necessary and sufficient conditions have been fulfilled in that moment of interface with that other, through the medium used. On the other hand, if you do not find it easy, it’s not necessarily because you lack empathy, but because the required conditions have not been fulfilled.

What I began articulating in my book is my first attempt at answering the question of “What are these conditions?”

Let us remind ourselves that for each and every one of us, there will always be moments when we will be unable to empathize with a certain other, through a certain medium, in a certain context. This does not necessarily mean we lack empathy. It may simply mean that our empathy cannot always realize instantly, as if it were an involuntary reflex. Sometimes steps need to be taken before we can realize empathy.

——

1 The classic example is advice about parenting, but I don’t yet have kids, so I don’t feel qualified to use that as an example.

2 Usually in the form of an “a-ha moment.”

3 This is not only about the limited knowledge and experience I had as a teenager, but also about being in the mindset of not wanting to hear what my parents had to say, or being distracted at that particular moment, thinking about other things while my parents were speaking to me.

4 To this day, there is no objective, accurate, and universal way to quantify empathy, so as to be able to definitively claim that someone has lots of, or is lacking in, empathy.

Conversation: Language & Vision

On May 14, 2011 at 9:46 p.m., I posted the first draft of what would eventually become the third story of the “Making and Empathy” chapter in the book Realizing Empathy: An Inquiry Into the Meaning of Making, surrounding my experience with poster design. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: I have always pondered whether it is possible for those born blind, deaf, and mute to think or dream of abstract concepts that they have never encountered.

Whenever I have to process complex thoughts, I hear a voice inside my head, speaking a language with grammar that helps me understand and sort things out. How about babies? Having yet to acquire a language, how do they think properly? Do they just act on their instincts and feelings? What about grown-ups who do not have the ability to put thoughts together into sentences with proper grammar?

Some say that language is the key to our ability to process abstract thought and hence develop intelligence. I think there are many who are mentally and physically disabled, but can still think and understand things like other people. Language seems to be able to boost our ability to organize thoughts and abstract ideas, but it seems like we humans have a much more basic way of perceiving, feeling, and understanding the world around us: a fundamental layer of communication beneath our language that everyone has the innate ability to access. I am obviously speaking of what I do not understand, but maybe someone who does can shed light on these issues.

slim: I don’t know, either. But it occurs to me that there may be a set of perceptual triggers that encapsulate the fundamental and primitive qualities of perception, probably pre-language with the potential to be widely shared. Why couldn’t we imagine an interaction paradigm based exclusively on those triggers? After that is established, one could layer the symbolic and gestural semantics on top of it as needed.

joonkoo: These questions are very much related to the origin of knowledge, and the nature vs. nurture debate. I’m a blank slate when it comes to language, but I can point you to a few studies in the domain of vision and number processing. Just be aware that I may be over-generalizing.

The human visual cortex29 is organized in a category-selective manner. For example, the lateral part of the occipital cortex is activated when a person is viewing living things in general. On the other hand, the medial part of the cortex is activated when viewing non-living things. This category-specific organization can be driven by experience over development, but it can also be somewhat hard-wired. One study looked at the patterns of neural activity in congenitally30 blind subjects, and they showed the same kind of neural activation patterns in response to these categories of objects even when the objects were presented auditorily. This study suggests that our visual experience is not necessarily the only critical factor that gives rise to the functional organization of our brain — at least in that context.

slim: When you say living vs. non-living, is a plant living or non-living? Is this related to how autistic people behave differently in relation to non-living vs. living things?

joonkoo: I don’t recall exactly how they categorized living vs. non-living in their study, but one thing I do think is true is that living vs. non-living is probably just one of many ways that things in nature naturally divide, probably confounded with many other ways of categorizing things. For example, it may well be natural vs. man-made things that the brain really cares about. To me, the precise categorization of these things isn’t really important. What’s more interesting is that the visual cortex does not necessarily require visual input for its functional organization.

slim: If the visual cortex doesn’t require visual input for its function, it sounds like that would be a rather remarkable statement when it comes to our categorization of cortices into visual vs. others, no? Am I understanding this correctly?

joonkoo: Not exactly. Here’s another way to think about it. In normal development, the visual cortex is designed to process visual sensory information — that much is anatomical fact. But it’s used differently when it lacks visual input for any unexpected reason. What’s interesting is that even if the visual cortex is putatively31 doing something different in these congenitally blind people, there seems to be a set of universal principles that govern its functional organization.

When these participants hear a living thing, for example, they have to bring up some mental image of that thing, which is probably not visual imagery, yet their visual cortex works the same way as it does in a sighted participant.

slim: Oh, whoa.

So what you’re saying is that when blind people hear something, it triggers a mental image in their head, which uses the visual cortex, although the imagery they bring up is not visual?

joonkoo: Yes, my guess is that it’s probably a mixture of auditory and other multimodal imagery. But yes, their visual cortex works similarly to that of other subjects considered to be normal.

I guess this can be described as a form of plasticity. But I think this is much more profound than plasticity within a domain or modality (e.g., after losing a finger, the part of the motor cortex that had been associated with that finger is now used for other fingers).

slim: When you say plasticity, I’m guessing it is a situation where a certain part of your body takes on a different role when what it was originally associated with is no longer available?

joonkoo: Yes. Evidence for brain plasticity is very cool.

To Anson’s point, however, this isn’t to say that the experience of abstract or symbolic thought is unimportant. Perhaps a more relevant story comes from a study that investigates number sense in native Amazonians,32 who lack words for numbers. Through the use of numeric symbols, we have little problem expressing arbitrary quantities. Amazonians, on the other hand, have only “one,” “two,” and “many.” Even so, they are pretty good at approximate arithmetic, even with numbers far beyond their naming range, but their performance on exact arithmetic tasks is poor. In fact, they fail to understand that n + 1 is the immediate successor of n.

anson: Would a relevant topic be why the Golden Ratio33 is universally pleasing to the eyes? It seems to indicate that there’s something common to human perception.

joonkoo: Yes, the Golden Ratio is interesting! In fact, there seem to be a lot of links between the biological system and math. One thing that I am more familiar with is the Power Law34 and γ, the Euler constant.35

Many of the psychophysical models are based on this constant and the natural log, and I would love to understand this more as well.

The definition of γ seems to be quite similar to neuronal firing patterns (e.g., long-term potentiation), and I speculate that all these fancy mathematics, such as γ, π, and the Golden Ratio, may be driven by some of our intrinsic biological properties. I’m talking too much about things that I don’t fully understand. This should be a question for a computational biologist.

——

29 The back area of the brain concerned with vision makes up the entire occipital lobe and the posterior parts of the temporal and parietal lobes. The visual cortex, also called the striate cortex, is on the medial side of the occipital lobe and is surrounded by the secondary visual area. This area is sensitive to the position and orientation of edges, the direction and speed of movement of objects in the visual field, and stereoscopic depth, brightness, and color; these aspects combine to produce visual perception. It is at this level that the impulses from the separate eyes meet at common cortical neurons, or nerve cells, so that when the discharges in single cortical neurons are recorded, it is usual to find that they respond to light falling in one or the other eye. It is probable that when the retinal messages have reached this level of the central nervous system, and not before, the human subject becomes aware of the visual stimulus, since destruction of the area causes absolute blindness in man. (Encyclopædia Britannica Online)

30 Existing or dating from one’s birth, belonging to one from birth, born with one. (OED Online)

31 That is commonly believed to be such; reputed, supposed; imagined; postulated, hypothetical. (OED Online)

32 CNRS and INSERM researchers (Pierre Pica, Cathy Lemer, Véronique Izard and Stanislas Dehaene) studied the example of the Mundurucus Indians from Brazilian Amazonia, whose vocabulary includes number words only up to four or five. Tests performed over several months among this population show that the Mundurucus cannot readily perform “simple” mathematical operations with exact quantities, but their ability to use approximate numbers is comparable to our own.

This research, published in the October 15, 2004, issue of the journal Science, suggests that the human species’ capacity for approximate arithmetic is independent of language, whereas precise computation seems to be part of the technological inventions that vary largely from one population to the next. (“Cognition and Arithmetic Capability”)

33 Also known as the golden section, golden mean, or divine proportion, in mathematics, the irrational number (1 + √5)/2, often denoted by the Greek letters τ or ϕ, and approximately equal to 1.618. (Encyclopædia Britannica Online) See also Van Mersbergen, Audrey M., “Rhetorical Prototypes in Architecture: Measuring the Acropolis with a Philosophical Polemic”, Communication Quarterly, Vol. 46 No. 2, 1998, pp. 194–213.

34 A relationship between two quantities such that the magnitude of one is proportional to a fixed power of the magnitude of the other. (OED Online)

35 The constant that is the limit of the sum 1 + ½ + … + 1/ n − log n as n tends to infinity, approximately equal to 0.577215665 (it is not yet known whether this number is rational or irrational). (OED Online)

Conversation: Respect & Integrity

On April 17, 2011 at 5:38 p.m., I posted the first draft of what would eventually become the first story in the “Making and Empathy” chapter in the book Realizing Empathy: An Inquiry Into the Meaning of Making, surrounding my experience with glass. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: When I was studying hermeneutics,28 I remember my professor saying, “Every question presupposes you know something about the answer.”

For example, you ask, “What can I do to tear a piece of glass?” The question pre-supposes that you need to do something to achieve that effect. I don’t know much about glass-blowing, but as far as I know, you take advantage of gravity, right? Sometimes you don’t have to do anything, but just let gravity and the natural decline in temperature take care of matters.

The kind of question we bring to the table often shapes the kind of answer we expect to hear. Everyone sees through a pair of tinted glasses. It is inevitable, but it is important for us to be aware of that influence and bias and try to compensate for it. That is something people in the field of hermeneutics and epistemology have helped us to understand.

Does this make sense to you?

slim: Yes it does.

And that’s such a great point about the use of gravity in tearing glass. You’re absolutely right. I did think that I had to do something to tear glass. It is truly mind-boggling to realize that there’s no end to how many biases we may be operating under at any given moment.

You mentioning gravity also reminds me of an experience I had in my modern dance class.

One day, we were asked to roll down a small hill. The first time I did, I was somewhat apprehensive. I had never rolled down a hill before — at least not as an adult — and I was afraid that I might get hurt. So in an attempt to prevent that from happening, I tried to become very conscious of how I rolled, so I could slow down and control where I was going. I wasn’t very successful, though.

I remember the roll being rather rough.

But the second time I did it, I was abruptly dragged away by a friend of mine who showed up out of nowhere and said “Let’s go!” Before I knew it, I was back up the hill throwing myself down again. What is interesting about this second time is that I distinctly remember how free my body felt. Maybe it’s because I didn’t have any time to think, but it felt as if I were gliding down the hill. It felt very smooth.

It was just me, the ground, and gravity working together in collaboration. In retrospect, I was biased toward assuming that to not get hurt I had to become conscious of the roll, so as to try and control every aspect of it, when in fact it was better to relax.

an-lon: Funny story. I was at a going-away party for one of my DreamWorks friends, and another coworker brought some homebrew and a beer bong. At the height of everyone’s drunkenness, Josh, the bringer of beer, tore into Moiz, the guy who was leaving, over something involving semicolons. It took me a while to piece together the story, accompanied as it was by much shouting and laughter, but from what I gather, Moiz had managed to put a semicolon at the end of every single line of his Python code, and Josh just couldn’t believe it. He said, “We never put it in the best practices manual because we never imagined anyone would do something so goddamn stupid!”

Point being, in computer languages, people often write code in one language as if it were another — importing irrelevant habits, conventions, and design patterns. The semicolons thing was funny because the vehemence of the rant far outweighed the magnitude of the infraction, but I’ve seen many examples of this over the course of my programming lifetime, and I’m sure it has cost companies millions of dollars’ worth of programmer time just because the code ends up being incomprehensible.
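
For readers who don’t program: Moiz’s habit is legal Python, since the interpreter quietly accepts a trailing semicolon as a statement separator; it is merely unidiomatic. Here is a minimal, invented illustration of the kind of habit-importing an-lon describes (it is not Moiz’s actual code):

```python
# A habit imported from C: terminating every statement with a semicolon.
# Python accepts the semicolons silently, so the habit runs without
# complaint and survives review by the interpreter, if not by Josh.
total = 0;
for n in range(10):
    total += n;
print(total);

# The idiomatic version: in Python, the end of the line does the job.
total = sum(range(10))
print(total)
```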

slim: Yeah, I remember it taking me quite a bit of effort to go from programming in C to programming in Prolog. Even now I haven’t done much functional programming, so I bet the way I write functional programs is not as respectful of the functional principles as it could be. As a matter of fact, it may not be that much better than my disrespect for the material integrity of glass.

an-lon: By the way, your comment about respecting the integrity of physical materials reminds me of this old joke of a fictional radio conversation between a U.S. Navy aircraft carrier and the Canadian authorities off the coast.

U.S. Ship: Please divert your course 0.5 degrees to the south to avoid a collision.

Canadian Coast Guard: Recommend you divert your course 15 degrees to the south to avoid a collision.

Ship: This is the captain of a U.S. Navy ship. I say again, divert your course.

Coast Guard: No. I say again: you divert your course!

Ship: This is the aircraft carrier USS Coral Sea. We are a large warship of the U.S. Navy. Divert your course now!

Coast Guard: This is a lighthouse. Your call.

slim: Ha ha ha ha ha ha! Respect the lighthouse, dammit!

an-lon: Also, here’s a quote that expresses my view of integrity, written by Mary MacCracken, a teacher of emotionally disturbed children. She’s explaining why she tries to teach reading to children who are so lacking in other life skills, it might be argued that learning to read is beside the point.

“The other teachers thought I was somewhat ambitious. They were kind and encouraging, but it did not have the same importance for them as it did for me. And yet, and yet, if what I loved and wished to teach was reading, I had as much right to teach that as potato-printing. In the children’s world of violent emotion, where everything continually changes, I thought it would be satisfying for them to know that some things remain constant. A C is a C both today and tomorrow — and C-A-T remains “cat” through tears and violence.”

For some reason, that quote has stayed with me for a long time. To me, that’s integrity: that C-A-T spells cat today, tomorrow, and yesterday.

And incidentally, that’s what Microsoft’s never figured out — that users hate having things change from under their nose for no good reason. Remember those stupid menus whose contents shift depending on how frequently you access the menu item? Whose brilliant idea was that? Are there any users out there who actually like this feature, instead of merely tolerating it because they don’t know how to turn it off? Features like that create a vicious cycle where users become afraid of the computer, Microsoft assumes they’re idiots and dumbs down things even further — making the computer even more unpredictable and irrational. Now there’s no rhyme or reason whatsoever behind what it deigns to display. Say what you will about Mac fans, Windows and OS X are still light years apart in terms of actually respecting the user.

And here we cycle back to the initial conundrum: how to reconcile that austere landscape of programming abstractions with our emotional, embodied, messy selves; selves so much in need of human connection that we perhaps see everything through that lens.

Here’s a slightly loony-bins example that I have tried and failed many times to write down. Around the time I was learning object-oriented programming, sometime in my early twenties, my cousin went through a love-life crisis.

The guy she was dating had a photo of an ex-girlfriend on his refrigerator, but no photo of my cousin, only her business card. They somehow got into a fight over this. She went home, and, partly out of pique — but mostly to amuse herself — she got out a photo of every single one of her ex-boyfriends, put those photos on the fridge, and added the business card of the current guy. Then she forgot about it and went about her daily business. Of course, you can predict the rest of the story. The new guy somehow came over unexpectedly and saw the photos, they had another fight, and finally broke it off.

My cousin tried to explain to me later that the problem wasn’t so much the photos and business cards and exes. It was that her boyfriend just didn’t get that she does quirky things like that for her own amusement. What she did wasn’t intended as a message and wasn’t intended to be seen, it was just an expression of her own personal loopiness. The fact that he couldn’t relate to her silliness was as much the deal-breaker as the original photo of his ex.

At the time, we were both fresh out of college and lamenting the closeness of college friendships. The guy in question was older, maybe in his thirties, and he really just didn’t seem to get it.

And here is where I went into the spiel I have never been able to replicate since. Because I had just been reading about object-oriented programming, the thought in my head was that in college, we gave out pointers left and right to each other’s internal data because we just didn’t know better. All the joy and sorrow and drama was there for any close friend to read, and write, and modify. As we got older, we learned that this is a rather dangerous way to live, and developed more sophisticated class interfaces — getters and setters for that internal data, if you will. The guy in my cousin’s story seemed to live by those getters and setters, and was appalled when my cousin inadvertently handed him a pointer.

Here’s the part of the story I have never been able to replicate: I told my cousin all that without mentioning object-oriented programming once. I used a fair bit of object-oriented terminology, but only the words whose meanings were either immediately clear from the context or already in common usage — handle and interface, for example. She immediately understood what I was trying to say, and added that the word “handle” was a particularly poignant metaphor. When we’re young, we freely give loved ones a handle to our inner-selves, but in adulthood, we set up barriers and only let people in at predetermined checkpoints according to predetermined conventions. As adults, we give out handles to only a very few, and those already in possession of a handle can always come back from a previous life to haunt us. We interact with the rest of humanity via an increasingly intricate set of interfaces. By now, I possess a much deeper and richer set of interfaces and protocols than I did in my early twenties, so I can share a great deal more of myself without fear of being scribbled on. But I still don’t hand out raw pointers very often — the vulnerability is too much for me, and the responsibility too great for the other person.
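
an-lon’s metaphor maps quite directly onto code. Here is a hypothetical sketch in Python (the class and method names are invented for illustration): the first class hands callers a reference to its internal state, a raw pointer in spirit, while the second lets them in only at predetermined checkpoints.

```python
class OpenBook:
    """The college-era self: hands out a handle to its inner state."""

    def __init__(self):
        self._feelings = {"joy": 5, "sorrow": 2}

    def feelings(self):
        # Returns the mutable dict itself. Whoever holds it can read,
        # write, and modify at will, like a raw pointer to internal data.
        return self._feelings


class Guarded:
    """The adult self: access only through getters and setters."""

    def __init__(self):
        self._feelings = {"joy": 5, "sorrow": 2}

    def get_feeling(self, name):
        # Hands back a single value, not a handle to the underlying data.
        return self._feelings.get(name)

    def set_feeling(self, name, value):
        # The predetermined checkpoint: writes are validated before
        # they touch the inner state.
        if name in self._feelings and 0 <= value <= 10:
            self._feelings[name] = value


young = OpenBook()
young.feelings()["joy"] = -999   # scribbled on, no questions asked

older = Guarded()
older.set_feeling("joy", -999)   # quietly rejected at the checkpoint
print(young.feelings()["joy"], older.get_feeling("joy"))  # -999 5
```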

Back to computers and HCI. I am surprised sometimes by how often I use computer terminology in daily life among non-programmers and get away with it. You don’t have to be a programmer to understand me when I complain that an instruction manual is spaghetti, or that my memory of a particular song got scribbled on by someone else’s more recent cover of it. The reason these metaphors work, of course, is that spaghetti and scribble are essentially round-tripping as metaphors — from daily life to computer science and then back to daily life. First, the English words were co-opted to convey a specific computer science concept — spaghetti code is code that is unreadable because it tangles in a million different directions, and to scribble on a memory location is to overwrite data you’re not supposed to overwrite — and then I re-co-opted them back into English — to express frustration at the unreadability of the instruction manual or lament that my memory of the original song has been tarnished.

My point here is that computer science is rich in human meaning precisely because we choose human metaphors to express otherwise abstract concepts. My analogy between object-oriented programming and human relations is surprisingly salient because object-oriented programming, at some level, had to come from human experience first. What is architecture? It was the Sistine Chapel before it was the Darwin operating system. Have you seen the TED talk by Brené Brown on the power of vulnerability? It’s what got me thinking about our longing for human connection.

slim: I’m really taken by your use of pointers and getters/setters in the context of relationships. I’ve never thought of it that way, and it’s a rather interesting way of thinking about it. There’s so much in there that I’m having trouble responding in a coherent way.

And yes, I’ve watched that Brené Brown talk numerous times in the past. It’s a very good one, and it is consistent with my experience making physical things.

——

28 The art or science of interpretation, especially of Scripture. Commonly distinguished from exegesis or practical exposition. (OED Online)

Conversation: Trust & Not Expecting

On April 8, 2011 at 2:16 p.m., I posted the first draft of what would eventually become the last story in the “Making and Empathy” chapter in the book Realizing Empathy: An Inquiry Into the Meaning of Making, surrounding my experience in the foundation studio. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: For me, painting requires this exact kind of courage you are talking about. I find painting very difficult, because I always need to get things right the first time around. I always need to know what to do precisely to get to the end result I want. I would use very fine brushes to get all the details of the eyes and the hair from the get-go. I would pick the exact color of paint that matches the photo. I need to get everything right with painting just one layer.

But when I saw videos24 of skilled painters painting, they didn’t seem to care if their paintings looked awful in the beginning. They began with a very rough outline and used very broad strokes. They kept painting over it again and again, refining and adjusting constantly, adding more and more detail layer by layer. It is this constant refinement that makes their paintings possible, and also realistic.

To be courageous in the midst of uncertainty, trusting the process — or the journey — will work itself out, is something that I don’t think I learned from our computer science education.

slim: Having gone through a portion of the RISD foundation program, I’ve come to realize that one of the most important skills of an educator is to know how to challenge the students. It’s like Randy’s story about his first Building Virtual Worlds (BVW) class, where he realized that the quality of work his students displayed on their first project was so high that all he could do was tell them to do better. It seems to me that in the right environment, we human beings can grow in almost magical ways.

anson: I was lucky to be in that very class with Randy. But you know what? At that time, we all thought Randy was a mean and ruthless teacher. We worked so hard to get our first virtual world out in two weeks, and then he said he expected better work. We were like “What?!” After having watched his last lecture, we, of course, now empathize with why he did this, and that he is one of the best educators in the world. He saw the potential in us and he helped us to draw it out.

I think both you and I learned something precious in the past few years by jumping into a field foreign to us. And you’re right, it is education itself. We had our minds and paradigms stretched, challenged, stimulated, and inspired. I am so glad I have gone through this education process while I am still teachable. You know, some people stop being reflective after a certain age and become unwilling to change the paradigms of how they look at things.

an-lon: I’ve been through this — making forwards and backwards progress at different times in my life — learning to be prolific instead of perfectionistic, and learning that it’s the playful, throw-away variations that eventually lead to the finished work.

In one chapter of the book Art and Fear, there’s an apocryphal story about how half the students in a pottery class are told they will be graded on the quantity of work they produce, the other half that they will be graded on the quality of their work. At the end of the assigned period, the students in the quantity group have produced higher quality work than the students in the quality group because they were given the freedom to experiment and iterate, plus the mandate to work quickly.

That’s my art story. My computer science (CS) story is no less profound. Here’s the thing: I doubt that I could have survived majoring in CS back in college. I didn’t have the maturity or the study habits, and I was far too easily intimidated. I was also terrified that I wasn’t smart enough. I don’t want to go into a long song and dance about this, and fortunately, I don’t have to, because Po Bronson has already written an article25 about it.

The gist of the article is that parents who overpraise their kids for being smart are setting them up to never leave their comfort zone, because the minute they encounter difficulty, the kids panic and assume “it’s tough, therefore I must not be so smart after all.”

I found CS to be tough. I assumed everyone else was smarter than me. I walked away, for a time. What brought me back? Above all, a change in mindset. It happened over the course of several years, and I can trace much of it back to a couple of college friends.

Doug was this kid from Alabama who lived two doors down from me in my dorm freshman year; Jeff was his Jewish roommate from New Jersey. One of my very clearest college memories — the one that’s always struck me as the quintessence of dorm hall diversity — was when we somehow got into an argument about when World War II actually began. For Doug, World War II began with Pearl Harbor, because that’s what we were taught in American history classrooms. For Jeff, it began with the Holocaust and the pogroms, because that’s what was in his cultural memory. For me? Japan had invaded China years before either of those events ever happened. We came from such different backgrounds, yet ended up as such good friends. Those were good times.

Anyway, Doug and Jeff were different from any of the guys I’d known in high school. Smart, yes, but this was Princeton and everyone was smart — or desperately trying to prove they were. I think, in hindsight, that those guys were among the first I’d met who were playfully smart — who tried new things because it was fun, and who ended up in computers because it was a new, fun thing to be tried.

Back then, I didn’t understand the concept of doing things for fun. My physicist father had none of that playfulness about him when it came to academic studies. For example, he could probably be a chess grandmaster if he wanted, but he never bothered to learn because it was just a game and therefore pointless.

I was never as good at math and physics as my dad. That was a losing battle from the start. And since physicists tend to see computer science as being several rungs below them on the intellectual pecking order — the equivalent of doing manual labor — I was never exactly encouraged to pursue computer science. So I went my own way and studied comparative literature — and my parents, to their everlasting credit, let me.

But I threw the baby out with the bath water. I was never meant to be a physicist — though, ironically, computer graphics has actually brought me back to physics full circle — but computer science wasn’t physics. Honestly, computer science is mostly just dicking around. You futz with it till it works. I’m not saying the theoretical underpinnings are unimportant, but honestly, the guys who are good are the ones who spent a lot of time dicking around because it was fun. They weren’t intimidated by the difficulty factor because, unlike me, they didn’t see the difficulty as an IQ test. For them, an obstacle was like a video game obstacle: a legitimate challenge to be bested, not a measuring instrument assessing whether or not they stacked up.

At first, I really couldn’t wrap my head around the fact that these guys who seemed to spend as much time playing Nethack as they did writing code were also really cool and well-rounded people. Jeff was into theater and Doug knew a ton about contemporary art. It didn’t seem fair, somehow, that the reward for goofing off was to become smarter.

I didn’t have any sort of instant epiphany, but over the course of college and my early twenties, I did rewrite my entire value system. I came to understand from observation that intelligence wasn’t about being born smart — it was about being born smart enough, and from there, being playful and willing to explore. It was about leaping in without a clue and getting your hands dirty, rather than hovering nervously on the sidelines.

After years of being told by my parents how smart I was and living with the secret fear that I really wasn’t, I finally came to value honesty, courage, and playfulness over being smart. I also came to see the excuse “well, I could have done it if I’d tried harder” as the coward’s way out. Because if you get a B on a test without studying, you can comfortably assume you might have gotten an A if you had studied. But if you study your ass off and still get a B, well, there go all your illusions. So it’s easier never to try.

When I returned to computer science in my early twenties, I was beginning to develop some semblance of maturity. I made a conscious choice about my value system: I would quit worrying about whether I was smart enough, and instead put all my effort into making an effort. What I discovered was that playfulness (i.e., willingness to explore seemingly irrelevant side paths) and work ethic (i.e., setting goals and not making excuses) led, over time, to all the analytical smarts I ever needed for my career.

This spirals back to Art and Fear because of the simple, sad observation the authors make in their opening pages, which is that many students stop doing creative work after they graduate. Without the community and structure and feedback cycle, they’re lost.

So I think the spirit of play becomes all the more important after graduation — because the girl folding paper and producing a thousand variations just because it’s interesting will keep doing it, whereas the guy who was doing it for a grade won’t. What you’ve produced as a student will most likely be forgotten, but what you’ve become won’t.

david: Slim, there’s a certain raw, honest quality to your writing that I’m just incapable of, but it feels so good reading it, because like the finest song lyric, it expresses what I felt palpably.

The overarching theme here of whimsy is spot-on. I think the greatest indictment of modern U.S. culture is the lack of whimsy and its replacement with what the writer David Foster Wallace referred to as “the entertainment” or “an orgy of spectation.”26

If there is one thing I seek in my mostly boring middle-aged adult life, it is that whimsy and childlike sense of adventure. It strikes me that it’s the same thing that makes children so hilarious, as in this conversation between a friend (the mom) and her child (the son), which appeared in my e-mail today:

Son: When you’re three, sometimes they will let you out of a cage.

Mom: What? What cage are you in when you’re three?

Son: I don’t know… I think it’s the rule, though. You can get out when you’re three.

Mom: How do you know?

Son: Well, when people are let out of a cage they always say, “I’m three! I’m three!”

This is precisely the kind of thing that, when observed in adults, would be labeled a dissociative disorder and medicated out of existence. Adulthood is so overrated. At least the politically correct version of it that most of us practice.

slim: Both of your stories resonate with me. I feel as though I have spent too much time in my twenties worrying about when I would finally be an “adult,” or at the very least a “professional,” much to my own detriment. At first, I thought there was something wrong with me for being so child-like, but once I got sufficiently close to those who I considered to be the epitome of adulthood or professionalism, I learned that they were simply hiding their child-like tendencies, because they didn’t want other people to see them as a sign of immaturity or weakness.

I also learned that the elders could see right through people who were trying to look like an “adult” or a “professional.” Those who have lived long enough know that none of us actually knows anything for certain. So it’s mostly a matter of whether you trust someone or not, instead of whether that person really knows something.

david: There is no more chilling effect, as far as I’m concerned, on American culture than the one you describe here, which is to say that half the country exists in a world where everyone is pretending to be professional, instead of being authentically themselves and leaning toward self-actualization. Some form of this was the original hypothesis of the Cluetrain Manifesto,27 which seems to have had little effect outside very small circles of young people.

Of course, the individual’s self-actualization is rarely in the best interest of the corporation, at least as management sees it. This homogenization is about as disturbing a trend as we can possibly endure and, in fact, should be seen as an affront to the principles that we stand for, namely freedom.

I’m consistently amazed by the influence of “dress for success” on the American corporate psyche. People actually care how I cut my hair, shave, or whether I’m tattooed or pierced as if my capabilities or brain power or effectiveness change with the scenery. I’m also consistently amazed by how the basic marks of individuation aren’t seen as intrinsic or extrinsic. I started writing an essay on a philosophy of hiring recently and a lot of these kinds of themes come up there. Pittsburgh is certainly a bastion of the old school in this regard. While I understand the point in marketing and sales, the extent to which I’ve seen all manner of bizarre corporate policy developed on the altar of dress codes is mind-boggling.

I’ve seen pictures of James Watson delivering the original papers on DNA just days after their publication, standing on stage in front of his peers in shorts. And then there’s Paul Erdős, who pretty much defined the picture of obsession and minimalism. I’m also told that none other than Herb Simon, when asked to choose a place to live on his arrival at CMU, drew a half-mile radius around the university and said, “Anywhere in that circle,” owing to his particular obsession with being able to eat and breathe the work, other concerns be damned.

And of course, I’m not sure we have much in the way of counterculture outside of absurdist examples like Mike Judge’s Idiocracy.28 I must go watch that movie again soon.

Welcome to Costco; I love you!

They tell me Costco is now in downtown Chicago. I may have to move to a hill in Montana next.

an-lon: The theme of balancing grown-up responsibilities (e.g., taxes, housing, earning a living) with a childlike sense of adventure is definitely a big one for me, as well. I think the theme of rebirth is a salient one, too. For better or worse, I can’t re-live my twenties. I need to find what works for me now, in making my second big career change — or third, I guess, if you count comparative literature to CS as one arc and then think tank to VFX as another. I can’t just repeat what I did the first two times — I need to find what works now, at a different life stage with different priorities. I’m not out to reject adulthood here, but I do intend to redefine it.

anson: I think we have to question whether professionalization is doing good to the education of our current and next generations. Professionalization makes us feel good about ourselves and helps us land a job more easily, but it doesn’t help produce people who are more well-rounded and more capable of continued learning, especially in contexts that are out of their comfort zones.

I am fortunate to have received both a technical and a liberal arts education. When I raise my kids, I won’t let them become lopsided techies. I also want them to be equally exposed to a liberal arts education, including history, art, literature, and philosophy. I think that will help them to see the world through a different pair of lenses and be more embracing of diversity and creative ideas.

——

24 A good example of such a video is a Belgian documentary film from 1949 directed by Paul Haesaerts called Visit to Picasso that captures Picasso’s creative process as he paints in real time. (“Bezoek aan Picasso”)

25 American journalist Po Bronson once wrote about how a large percentage of all gifted students severely underestimate their own abilities. (“How Not to Talk to your Kids”)

26 The late David Foster Wallace, an award-winning American writer, is quoted as saying, “The great thing about not owning a TV, is that when you do have access to one, you can kind of plunge in. An orgy of spectation. Last night I watched the Golf Channel. Arnold Palmer, Jack Nicklaus. Old footage, rigid haircuts.” (Lipsky, 2010, 118)

Lipsky, David. Although Of Course You End Up Becoming Yourself: A Road Trip with David Foster Wallace. (New York: Broadway Books, 2010), 118.

27 The Cluetrain Manifesto both signals and argues that, through the Internet, people are discovering new ways to share relevant knowledge with blinding speed. As a result, markets are getting smarter than most companies. Whether management understands it or not, networked employees are an integral part of these borderless conversations. Today, customers and employees are communicating with each other in language that is natural, open, direct and often funny. Companies that aren’t engaging in them are missing an unprecedented opportunity. (“The Cluetrain Manifesto”, 2000)

28 An American film where Private Joe Bauers, the definition of “average American,” is selected by the Pentagon to be the guinea pig for a top-secret hibernation program. Forgotten, he awakes 500 years in the future. He discovers a society so incredibly dumbed-down that he’s easily the most intelligent person alive. (IMDB, 2006)

Conversation: Choice & Feeling

On April 6, 2011 at 12:31 a.m., I posted the first draft of what would eventually become the fifth story in the “Making and Empathy” chapter in the book Realizing Empathy: An Inquiry Into the Meaning of Making, surrounding my experience in the metal shop. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

an-lon: (Smiles) Happens to me all the time when drawing and editing, squinting at it and wondering what’s wrong, and 90% of the time, whatever’s wrong is completely orthogonal to all the directions I was previously searching.

slim: The feeling of hindsight obviousness intrigues me quite a bit. I remember being dumbfounded when my friend shared her story of how she overcame her bipolar disorder. She said she finally realized that she had the power to choose not to be depressed. She told me that it was so obvious in hindsight, that she couldn’t understand why she didn’t realize it before. But the reason I was dumbfounded was that I wasn’t depressed, yet I had never realized that, either. I can choose how to feel? That was a completely novel thought.

Since then, I’ve heard many people say things like “we always have a choice.” But I think it’s imprecise to say that we always “have” a choice. I’m sure it took them a lot of struggle to come to that realization. So what they mean is that we have to become aware of the choice. Or more precisely, we have to “develop” and “make” a choice that wasn’t available to us previously. That can take quite a bit of effort. It’s not just a matter of “snapping out of it.” Once you’re able to just snap out of it, you’ve already learned it.

an-lon: Ironically, Slim, you knew me during a period when I was genuinely depressed. When I attended the International School of Beijing (ISB), I was really alone and struggling. Beijing was my first time living in a big city, and I experienced culture shock and extreme loneliness.

I was functional — for where I was at the time, I was pretty convinced I’d just get yelled at if I admitted I needed help — but I remember sleeping 10 hours a day because I just didn’t want to wake up, and making a deal with myself that I’d allow myself to contemplate suicide if college wasn’t better. Don’t get me wrong, I wasn’t actively suicidal. It was just my way of mentally kicking the can down the street. I truly have no idea what it was like for your friend.

I think there are links between ADD and depression, but I don’t think I was ever truly chemically predisposed to depression in the way a bipolar person is. In my case, I was depressed first because I was trapped in a small town — before Beijing — then thrown into a big city — Beijing — with no coping skills.

College and D.C. introduced me to the world, and I was fine after that. But I do know from those high school years exactly what depression is. I had plenty of roller coaster ups and downs in my twenties, but nothing like depression. Nothing like that soul-sucking lethargy of my teens.

Unfortunately, I can’t say the same of the past few years. The allergies are a long story, but basically, a year into my stay in L.A., I started experiencing mysterious symptoms: a sore throat that wouldn’t go away for two months and just an overall lack of energy. It took many trips to various doctors to figure out what was going on. I’d do something that would help for a while, then get flattened by some new mystery ailment.

The infuriating thing was, it was never anything huge — I’d just be sick and tired all the time, because when you’re not breathing well, you’re not sleeping well, and when you’re not sleeping well, you’re not living well. After a while, this changed my identity, from an energetic, enthusiastic person to one who carefully rationed her energy.

This also made me realize that perhaps that enormous physical energy was all that had held depression at bay through those 18 years between high school and l.a. I kept the demons at bay by constantly chasing after new pursuits, which was great, but what I didn’t know was that if you take away the physical energy, the scaffolding that remains is a house of cards.

Thing is, during the healthy decade of my twenties, I’d taught myself to push through fatigue, frustration, and fear. Athletics are a good example of this; you learn to recognize when to push through pain and when to rest. You know the Nike slogan “Just do it”? Well… yeah. Just do it. And with computers, I’m sure I don’t need to explain how stubbornness pays off. Damn. I pushed hard in my twenties, but I scored a lot of victories, too.

The allergies-and-depression cycle of recent years is a bit hard to explain because I really can’t just blame the allergies. There was a breakup, job angst, and moving to a new apartment. But I’ve coped with all of the above before, and there were good things going on in my life, too. It was all incredibly frustrating because while I definitely recognized the symptoms of depression from that extended period in high school, I could not figure out why it was happening again and why I couldn’t just snap out of it.

As with that period in high school, I never stopped fighting. I never stopped going out and doing what I wanted to do. But I did cut back. There was always this triage of what I had energy for and what my priorities were. In my twenties, I just did it all. These past few years, I hit a point where I couldn’t — I had to make choices.

I’m still convinced that the only reason I snapped out of that depressive period — I can’t truly call it depression, but I felt like I was always close to the edge and could never quite get any distance from it — was that I finally got the allergies under control. Exercise and nutrition were a big part of it, but so were allergy shots and an immune system booster vaccine.

No silver bullets, but basically I feel like myself again after having had to walk through sludge the past three years. I’ve kind of forgotten how to run, but at least I know it’s possible again. (Smiles) I spent three years trying to choose not to be depressed, but the fog refused to lift until I finally got my physical health back.

Did I do it all wrong? Would therapy or medication have gotten me over it sooner? I just don’t know. And I perhaps never will. I’ve been playing these past six months entirely by ear. I do feel safe in the assumption that as long as I have my physical health, my mental health is also safe. But I no longer take it for granted. And I also realize that the madcap coping mechanism of my twenties — constantly sprinting, literally, when it came to ultimate frisbee — probably wouldn’t have lasted forever anyway.

One thing that tends not to work is trying to will yourself into being more organized, disciplined, or attentive. That tends to be a recipe for failure, with all the voices in your head yelling at you for being such a lazy slob and a waste of space. What does work is finding clever ways to set things up such that it’s a downhill slide instead of an uphill battle — in essence, coming up with a system that makes the good behavior easy instead of difficult. It’s like the judo trick of using the other person’s momentum for a throw, rather than trying to absorb the force of their blow directly.

slim: Indeed. I also think the kind of support structure or environment you’re talking about is essential. Although I would rather use words like “encouraged,” “supported,” or “amplified” to describe the qualities afforded by such an environment over “easy.” I think there is a significant difference between something being easy vs. feeling at ease when you’re in relation to something.

Conversation: Empathy & Mastery

On April 3, 2011 at 4:23 p.m., I posted the first draft of what would eventually become the second story of the “Making and Empathy” chapter of the book “Realizing Empathy: An Inquiry Into the Meaning of Making,” surrounding my experience in the woodshop. While much has changed since then, I wanted to share with you this edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which talks about computers and ethics.

 

joonkoo: This story reminds me of my recent attempts to master bread baking, namely baguettes. I’ve been baking a batch pretty much every other weekend, and one of the most delightful things that happens after you retrieve a freshly baked baguette from the oven is to hear it sing, which is the sound of the crust cracking, and perhaps some moisture interaction going on. I’m nowhere near the level of mastery, but I’m sure there are different sounds that you can distinguish once you become a master baker.

slim: Did you notice the singing from the get-go or did someone point it out to you? If the former, was it highly noticeable or did you actively have to pay attention to it? I don’t think I’ve heard that sound. I’m very curious what it is like.

joonkoo: It’s very noticeable. I noticed it from the beginning. But then I also watched this French guy making a baguette on YouTube, and he was the one who mentioned this singing sound. It’s really the sound of crust cracking, but it makes the bread sound so delicious.

slim: Did you notice the sound after you heard the French guy on YouTube, or before?

joonkoo: I noticed it before, but I didn’t care that much. Afterward, I came to like the sound. But to be honest, I haven’t developed any deep understanding of it.

slim: See… A question I have about this is: how do we come to understand, and become sensitive to, these subtle nuances? There seem to be certain things that we can proactively notice, then there are things that other people have to raise our awareness to.

Is this simply a matter of time? If I spent enough time paying attention, would I eventually become sensitive to everything there is to be sensitive about — (smiles) and become miserable? Or are there always going to be things that other people have to raise our awareness to, because there is an infinite number of things, and simply not enough time?

joonkoo: The question you are raising is an excellent one! I haven’t thought about it much, but intuitively, there seems to be a need for both internal enlightenment and external stimulation to learn such nuances.

slim: Indeed.

By the way, last semester I interviewed a child psychologist, who told me that in the beginning, babies learn how to be attached to their mother, and come to understand what it means to love her. Then they may feel comfortable with other people who have attributes similar to their mother’s, which allows them to feel safe with these other people. Then as they interact with them more, they mature, and start to appreciate the nuances that make these other people different from their mother, but love them despite the differences. I found that to be a rather fascinating way to think about maturity. Don’t you think?

joonkoo: The child psychologist was perhaps referring to Piaget’s idea of assimilation and accommodation.16 I have little knowledge of developmental psychology, but you may find it relevant.

Also, when I took cognitive development, I was fascinated not only by Piaget, but also by Vygotsky.17 You might want to check out his theory. My knowledge about these is too shallow to be shared here. (Smiles)

Now, returning to the idea of non-living things telling us something, I experience something very similar when analyzing data — the data tell me how they should be analyzed.

slim: Yeah, isn’t that peculiar? There’s a feeling associated with it.

I’ve also heard a firefighter say the house told him to get out, and immediately after he ran out, it collapsed. Perhaps there is a combination of pattern recognition and some genetic reflex that triggers a certain physiological change in our body, which results in us feeling as if we’re being told?

joonkoo: Although, I think this is a very literary way of describing the gaining of expertise.

slim: What do you mean that it is a very “literary” way of describing the gaining of expertise?

joonkoo: I think it’s just one way of expressing how we get to know things better. I say literary because, unlike other people or other creatures, it can’t be that a piece of wood is telling you something. It’s that you think the wood is telling you something. For example, a baseball player might claim that the ball that left the pitcher’s hand told him to hit it, and it resulted in a home run.

This kind of expertise, often described as intuition, wisdom, or mastery, is something that humans — and other animals — can acquire at an incredible level, as the human brain has an amazing ability to parse statistical and stochastic patterns in the environment.

However, it’s an open question, I think, what it takes to gain expertise in such a variety of domains as understanding other people’s minds (e.g., theory of mind), furniture making, and computer programming. And also whether they are different, and if so, why.

slim: When you say it’s an open question, do you mean that there is no good insight into how one gains expertise, as studied by neuroscientists? That it’s such uncharted territory that it’s hard to start a discussion on it?

joonkoo: My question was whether it takes a similar amount of time and effort — if not the same amount — to master things across domains.

I remember reading in some cognitive psychology paper that it takes 10,000 hours of practice to reach the highest end of expertise. This may be an overgeneralization, but it means that it takes a huge amount of time and effort to become an expert.

For example, most of us are experts at looking at faces and extracting facial expressions and emotions — although we know that people with autism lack this ability to some extent. At the other extreme, there are expert computer game players (e.g., in Starcraft18). When you look at how they play, it’s simply incredible how fast they make decisions and click the mouse buttons. This is not something that everyone can easily achieve, but some people are experts in this field.

How do the two domains that I raised as examples (face perception vs. Starcraft) differ in terms of their acquisition of expertise? What about wood cutting? What about computer use/programming? Is becoming an expert wood cutter very different from becoming an expert computer user? Which mechanisms are common and which are different? These were the questions that I had in mind when reading your post.

slim: The question of testing expertise across domains sounds like it would pose a challenge in defining the boundaries of each domain, not to mention the standards against which to measure expertise, no?

For example, isn’t facial recognition something that we are hard-wired for? Is it fair to compare that to Starcraft? What would it mean for one to be an expert in facial recognition? Being able to tell the difference between twins you’ve never seen before within a certain amount of time?

joonkoo: Some things are definitely hard-wired and some things are not. Some things presumably use a combination of more hard-wired and less hard-wired systems to achieve expertise.

In facial recognition, there are ways to quantify it experimentally using behavioral measures such as the inversion effect and the composite effect. And recent research has shown that these abilities are fairly heritable.

A few years ago, we also found that the neural basis of facial recognition may be more genetically shaped than the neural substrates for processing other visual categories. Now, it’s true that I don’t think it is fair to compare facial recognition to Starcraft — one of the reasons being that some things are more hard-wired than others. But I would like to raise different facets of expertise, which might be related to your question about empathizing with objects and what it means to do that in different areas.

an-lon: OK, I’m jumping in on the subject of 10,000 hours because it’s become simultaneously trendy and misunderstood. The gist of the research is that what makes Mozart or Tiger Woods or any virtuoso great isn’t necessarily inborn talent, but the ability to hone that talent.

The 10,000 hours translates to about a decade (roughly three hours of practice a day), but here’s the key: it is not just any 10,000 hours that makes a person great, it’s 10,000 hours always at the edge of your comfort zone, constantly pushing your boundaries. Most of us simply do not have the capacity to operate at that level. Instead, we spend most of those 10,000 hours simply repeating our old habits. We practice the same thing over and over again. Phenoms19 are those extremely rare individuals who are able to push their boundaries in an extremely focused and deliberate way.

I think Geoff Colvin’s Talent Is Overrated actually covers this better than Gladwell’s Outliers. He calls it “deliberate practice,” and gives many examples, from Jerry Rice to Ben Franklin, of how those so-called geniuses balanced on that knife’s edge over the course of an entire 10,000 hours. One useful model is three concentric circles: comfort zone, learning zone, and panic zone. Only in the learning zone can we make progress. The comfort zone is too easy and the panic zone is too hard.

Most of us, when we practice, think we’re in the learning zone, when in reality we’re simply performing extra iterations within the comfort zone. Those iterations, no matter how many, do not count toward the 10,000 hours, and do not bring us any closer to a Mozart-level accomplishment. Spending 10,000 hours in a true learning zone is incredibly difficult, which is why there are so few geniuses out there.

I think there are excellent connections to be made between your dialogue with materials and that learning zone. The key here is to leave your comfort zone, but to not venture so far from it that the result is chaos. Inevitably, finding that knife edge requires dialogue, feedback, interaction, and discomfort.

slim: Ah . . . That’s a great way to think about it! 10,000 hours of discomfort.

joonkoo: Yes, as An-Lon described — thanks, An-Lon — it’s not merely the 10,000 hours of work. But still, what is true is that effort and time are necessary for gaining expertise.

Sorry if my comments steered the discussion too much toward the idea of expertise. But once I got a better understanding of what you meant by being able to empathize with things, I thought this was exactly what you were referring to.

slim: Don’t worry about steering the conversation in whatever direction. The purpose of this conversation is to understand what it means to have an empathic conversation, which would naturally require a lot of empathic conversations. (Smiles) I thank you for your patience. I really could not ask for more!

And yes, An-Lon, I do see a correlation between expertise and empathizing across time and memory. The more you empathize with an other across time and memory, the more trust, discipline, and skill you are able to build in relation to them. Whether this is with physical objects or another human being, the model seems to work equally well.

Here’s a thought: Having a conversation with someone or something who/that has a sense of integrity, or a world view, different from your own — or simply unexpected or unpredictable — is highly uncomfortable. Perhaps the capacity to handle this gap in knowledge or this discomfort — one of the abilities I would think is necessary to stay in the learning zone — is directly related to humility.

joonkoo: Here’s also another thought, which is my current research topic. We are all experts at processing words visually — or simply reading — which is to say that we can quickly parse the fine squiggly lines of our mother tongue. There are, in fact, many experimental tricks that you can do to show your expertise in reading letters and words. However, when you think about it, it is hard to believe that our brain is hard-wired to read words.

Script was invented only very recently on an evolutionary time-scale. Most humans were not educated to read and write until much more recently. But literate adults are very good at reading. This must be due to the extensive training with letters and symbols during development.

While I’m not sure if learning to read during childhood really pushes the boundary and enters the discomfort zone, this may illustrate another type of expertise that we go through. It’s different from others because, unlike face recognition, it’s not hard-wired, and unlike becoming an expert in Starcraft, this kind of expertise seems to be something relatively easily achieved by the masses.

slim: I want to understand better what you say about our ability to become expert readers. You are saying that, for some reason, we can learn how to read starting at a young age, although it is not something we are hard-wired for. This is an assumption, but a fairly safe one. I think you’re also saying that it is unclear if this necessarily implies that we are in the discomfort zone when we learn to do this, which leads to the question of whether this is a different kind of learning or not. Is that the question?

joonkoo: Well, I don’t want to get into a discussion around the idea of a discomfort zone too much. That was just a side note. What I was focusing on was that learning to read — visual processing of orthographic stimuli, to be precise — and becoming an expert at reading is something that is quite different from becoming an expert in some other domain, because it is an expertise that is, presumably, not based on a hard-wired system, yet acquired by pretty much all of us — except people with dyslexia.20 When you think about it, there are not many things that are like this. This is, in fact, what makes reading very interesting.

slim: Ohhhhhhh! So you’re distinguishing between learning through the use of hard-wired faculties (e.g., facial recognition) vs. learning through the use of non-hard-wired faculties (e.g., reading). Then you’re asking how much of the learning that happens in a given domain is facilitated by hard-wired capabilities vs. non-hard-wired capabilities, and how their proportion affects the experience of learning. And you’re saying that reading is special, because almost all of it — possibly an overstatement — is not facilitated by hard-wired capabilities. Am I understanding you?

joonkoo: Yes, that would be a straightforward way of saying what I was trying to say. (Smiles) Thank you!

slim: What is an orthographic stimulus? I just tried looking it up, but couldn’t make much sense of the stuff I found.

joonkoo: Oh, “orthographic stimulus” might be a term that I made up. (Smiles) Just think of letters and words.

slim: Oh, then by “read” do you simply mean recognizing the letter forms that one sees or do you mean making meaning from their composition into words?

joonkoo: What I mean by “reading” is the visual processing of letters. Reading is a special case because not much of it is hard-wired. In fact, one of the recent claims is that it goes against a hard-wired neural structure that is designed to carry out other activities more efficiently: the mirror-invariant perception of visual features. For example, it takes very little effort to view some image, then view the left-to-right flipped version of the image, and know that the two show the same thing. It is argued that this is a kind of basic visual mechanism that is more hard-wired. However, when learning to read, b is not the same as d, even though it is a left-to-right flipped image of b. So to learn that these are different, the mirror-invariant perception needs to be unlearned to a certain extent before you can learn to read.
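To make the contrast concrete, here is a minimal sketch in Python of what a mirror-invariant comparison would treat as the same. It is purely illustrative: the three-row “letter” is made up and stands in for real visual input, not anything from the actual research.

    # A minimal sketch of mirror-invariant comparison (illustrative only).
    # An "image" is a list of strings; a left-to-right flip reverses each row.

    def mirror_flip(image):
        """Return the left-to-right mirror of an image."""
        return [row[::-1] for row in image]

    def same_to_mirror_invariant_observer(a, b):
        """A mirror-invariant visual system treats an image and its flip as one and the same."""
        return a == b or a == mirror_flip(b)

    # A crude three-row "b"; its mirror image is a crude "d".
    b = ["x..",
         "x..",
         "xx."]
    d = mirror_flip(b)

    print(same_to_mirror_invariant_observer(b, d))  # True: the hard-wired system says "same"
    print(b == d)                                   # False: reading demands telling b from d

In other words, the comparison that comes for free is exactly the one a reader has to suppress.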

slim: Wait, wait, wait . . . mirror-invariant perception? You mean we’re hard-wired to be able to tell something is the same regardless of whether it is mirrored or not? Where did that come from? Is it because things in nature are symmetrical?

an-lon: Seriously! Symmetry and mirror-invariant perception? That’s fascinating! What about Asian languages, where there isn’t the b and d problem? I’ve often heard that there’s no such thing as dyslexia among Chinese readers because of that. Is that really true? I don’t suppose there’s a good layman’s book on this subject?

joonkoo: My understanding is that the critical ability in the visual processing of written words is not necessarily restricted to the b vs. d problem, but is more related to discriminating the subtle nuances among various visual features. Mirror invariance is just one example. There are many such examples in other languages, for sure.

I don’t know much about dyslexia in the Chinese population. Dyslexia is something a little different from a pure impairment in the visual processing of words.

Most current theories and findings put the emphasis on the phonological processing of print. Stanislas Dehaene21 is a big name in this kind of research. I’m sure he has written books for the general public on these matters.

High-level vision is a fascinating field for research. Reading, in particular, is intriguing for all the reasons that we discussed so far.

anson: What a lively discussion! Slim, let me just say that your descriptive writing helped me imagine myself going back to a wood workshop, with all the sensations that come with it. I took woodworking classes from seventh to ninth grade, way back when.

I also think you have touched on a very important topic about truth or what is true in this world. Truth is honest. Truth is simply what is. Truth neither budges nor needs to budge. To go against the truth is like kicking against the goads.

Truth is beautiful and simple. It just remains there patiently, waiting for us to recognize it and embrace it. Truth sets us free. It always teaches us an easier and simpler way. It helps us to be in harmony with this world. A lot of times when we think of truth, we think of the moral categories of right and wrong, but it need not be so. Rather, I think the categories of in-harmony and out-of-tune are a better way of looking at it. Finding truth is simply finding how to be in harmony with everything. Although there is indeed a lot of incredulity toward truth in our postmodern sensibilities, your story reminds us of something so basic and simple — whatever is true is honest, and it is what it is. There’s a video on YouTube called “Rhythm,” featuring a pastor named Rob Bell,22 on this very topic from the Christian perspective. Perhaps you will find it relevant.

slim: I recently came across a book called The Empathic Civilization by an economist named Jeremy Rifkin. In the book, he writes that “when we say that we seek the ultimate truth, we are really saying that we seek to know the full extent of how all of our relationships fit together in the grand scheme.” Your comment reminded me of that sentiment, and it resonates.

In the way that he describes it, I believe truth and subjectivity can coexist. If there’s a classic pattern I recognize throughout history, it is that every time someone claims the existence of a dichotomy, it is not either/or, but both, in some relationship constantly shifting through time. Just as the idea of balance is not some static equilibrium, but rather an ongoing process that fluctuates, I imagine this is the same.

And although I’m not Christian, I have to say that I enjoyed the video very much. The first thought that came to mind was how different it was from what I had expected a Christian video to be like. But then I realized: what does it even mean to label something a “Christian video”? It’s nothing but a projection of my biased assumptions.

It almost seems like the words “God” and “religion” play a large part in confusing and dividing people. I can tell from first-hand experience how profound the change in one’s own world view can be when words that you once thought you knew get redefined. Perhaps a relevant quote is one from the philosopher Emmanuel Levinas,23 who said, “Faith is not a question of the existence or nonexistence of God. It is believing that love without reward is valuable.”

——

16 Swiss psychologist Jean Piaget defined assimilation as the integration of external elements into evolving or completed structures, and accommodation as any modification of an assimilatory scheme or structure by the elements it assimilates. He said that assimilation is necessary in that it assures the continuity of structures and the integration of new elements to these structures, whereas accommodation is necessary to permit structural change, the transformation of structures as a function of the new elements encountered. An example of assimilation would be the child sucking on anything they can get their hands on. As they learn to accommodate, they discern what to suck on and what not to. (Encyclopædia Britannica Online)

17 L. S. Vygotsky (Nov. 5, 1896 – Jun. 11, 1934) was a Soviet psychologist who, while working at Moscow’s Institute of Psychology from 1924–34, became a major figure in post-revolutionary Soviet psychology. His theory of signs and their relationship to the development of speech influenced psychologist Jean Piaget. (Encyclopædia Britannica Online)

18 Starcraft is a real-time strategy game for the personal computer, produced by Blizzard Entertainment. According to Scientific American, it has been labeled the chess of the 21st century, due to its demand for the pursuit of numerous simultaneous goals, any of which can change in the blink of an eye. (“How a Computer Game is Reinventing the Science of Expertise”)

19 An unusually gifted person (frequently a young sportsperson), a prodigy. (OED Online)

20 Dyslexia is an inability, or a pronounced difficulty in learning, to read or spell, despite otherwise normal intellectual functions. Dyslexia is a chronic neurological disorder that inhibits a person’s ability to recognize and process graphic symbols, particularly those pertaining to language. Primary symptoms include extremely poor reading skills owing to no apparent cause, a tendency to read and write words and letters in reversed sequences, similar reversals of words and letters in the person’s speech, and illegible handwriting. (Encyclopædia Britannica Online)

21 Stanislas Dehaene (born May 12, 1965, in Roubaix, France) is a professor at the Collège de France, who directs the Cognitive Neuroimaging unit of the French National Institute of Health and Medical Research. In his book The Number Sense, he argues that our sense of number is as basic as our perception of color, and that it is hard-wired into the brain. (“Stanislas Dehaene”)

22 Rob Bell is the founding pastor and pastor emeritus of Mars Hill Bible Church. He graduated from Wheaton College in Wheaton, Illinois, and Fuller Theological Seminary in Pasadena, California. He is the author of Love Wins, Velvet Elvis, and Sex God, and is a coauthor of Jesus Wants to Save Christians. He is also featured in the first series of spiritual short films called NOOMA. (“Rob Bell”)

23 Emmanuel Lévinas (December 30, 1905 – December 25, 1995) was a Lithuanian-born French philosopher renowned for his powerful critique of the preeminence of ontology — the philosophical study of being — in the history of Western philosophy, particularly in the work of the German philosopher Martin Heidegger. (Encyclopædia Britannica Online)

Conversation: Ethics & Computers

On March 19, 2011 at 9:28 p.m., I posted the first draft of what would eventually become the Preface of the book “Realizing Empathy: An Inquiry Into the Meaning of Making.” While much has changed since then, I wanted to share with you this edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which talks about computers and acting.

 

joonkoo: I’m wondering if you should define the user more clearly here. For example, is the user a computer programmer using the computer, or just an ordinary John or Jane using the computer? I understand that knowing the exact mechanics or physiology of the computing system may tremendously expand the user’s perspective, but I also imagine that there would be some considerable cost to learning those mechanisms. Would my mother, a middle-aged lady with few digital friends, ever want to know exactly how the processor and memory work in order to feel less frustrated the next time she opens an Internet browser to receive and view photos that I send?

david: Yes, but what either extremist position about users (ordinary John or Jane vs. super programmer) tends to ignore is the bell curve nature of the problem, which is very similar to my indictment of mainstreaming in u.s. public schools. That is, these need to be seen as somewhat unique user groups requiring distinct, differentiated approaches.

But even if you draw three divisions in the bell curve, which would split, say, 10/80/10, it is still an enormous design problem. People who use Photoshop are still in a discourse community with considerable depth beyond the average person. It’s even worse at the other end of the spectrum. And this is where I think Peter Lucas,9 founder of MAYA Design and resident genius, absolutely nails it, and my guess is that this is what Slim is getting at with his reference to “physics.”

What Peter says is that you must design for the “lizard brain” first, because it’s the only thing that is consistent across that entire bell curve. (Keep in mind, this is my perception of Pete’s message.) If you learn to do this well, the rest may take care of itself. But fail to get that right, and you either have very little chance, or you’ll be dragging a 200-ton freight train behind you the entire way. That is why our experience with modern technology, even the best of it, falls short.

It’s ironic because we’ve had the technology for it to be a solved problem for at least a decade, but very little work has directed all the physics and graphics innovation at the problem of making data into manipulable objects with “thingness,” much the way Bill Gates describes in “information at your fingertips.” It’s also very similar to the way the osi model10 falls out — meaning that designing for the lizard brain is like the physical layer, while designing for higher-order brain functions moves up the brain stem and can be accounted for in a layered-semantics kind of way.

But I think there’s an element missing here, which is that what you describe about a user’s experience with the computer crashing or slowing down is an entirely qualitative judgment. I don’t like computers that crash or slow down, either, but the experience is arguably the same or worse if I’m driving my car or riding my bicycle. I ran over a piece of metal on my bicycle commute yesterday and was left with a huge gash in my tire, a blowout, and a subsequent wheel lock when the metal piece hit the brake, which could have easily caused my clipped-to-the-pedals self to go reeling into the river. But this is the experience of an unplanned and unforeseen mechanical failure. Could the bicycle be made to fail more gracefully? Certainly. But at what cost, with what trade-offs, and what marginal utility? Similarly, I had almost the same thing happen with my little Kia a few months ago in almost the same place, and I’d raise exactly the same questions. Kevlar tires, tpms, run-flats? Oh sure, but again, at what cost and what compromise?

The design problem that the computer presents is no different, though I think what tends to happen here is that because computer science is taught from a very narrow perspective, focused on very quantitative problems, we tend to ignore the qualitative ones, and we do that at our users’ peril. There’s also a tendency, unlike in other branches of engineering, to not have much rigor in terms of seeing the trade-offs and compromises in a holistic, systems-thinking kind of way.

I also want computers and software that fail gracefully, and are friendly and usable, but the path there is very long and very hard and is still beholden to the laws of physics, no matter how much we think we exist in a software world where none of the rules still apply and we can acquire all of these things at no cost to us (the designers) or them (the users).

slim: I’m not saying that the trouble with computers is worse than what we feel elsewhere. What I’m saying is that it’s time we consider the design of computers from the point of view of ethics, not just usability, functionality, or desirability. Why shouldn’t computer programmers and designers adopt the same kind of ethical stance that architects do, for example?

From what I have gathered taking classes in architecture, there’s a tremendous sense of ethics (not morals) and philosophy of life that goes into educating an architect. I never got any of that as a computer scientist — although, truth be told, whether it would have sunk in at the ripe age of 18 is questionable. But that’s a whole other discussion.

Even in human-centered design, while we talk about designing for human users, we never get deep enough to the heart of what it means to be human. How can we be human-centered, when we don’t even know what it means to be a human? I’m less interested in the computer affording user-friendliness, usability, or graceful failures. That’s a very object-oriented way of looking at this issue. I’m less interested in objects and more interested in relationships. More specifically, I’m interested in finding out how our relationship to the computer can afford the quality of being immersed in an empathic conversation. The kind of quality that, as far as I can tell, makes us become aware of who we are as human beings.

I have nothing against the laws of physics. As a matter of fact, I think the computer should be designed to accept physics as it is. When designers pretend that the laws of physics don’t apply to computers, weird things are bound to happen.

I don’t think physical materials are there to make our lives more convenient or inconvenient. They just are. Yet because of our evolutionary history, there’s something embodied within us — and something we come to embody as we mature — that allows us to have an empathic conversation with them. I want the same qualities to be afforded in our interaction with computation.

david: Now we’re getting somewhere! So there are several interesting points I’ll make here. As to your first question regarding architects and computer designers, these comparisons usually fall down because of the chasm between consumer electronics and buildings, structures, etc. There are major differences attributable to elements such as rate of change and stability. Also, classic failures exist in that world, too, though not in the numbers of computers failing, but that’s probably a problem of sample size more than anything. To me, Frank Lloyd Wright’s cantilevers at Fallingwater are beautiful, but they’re not robust from an engineering standpoint. Hmm, where have I seen that before?

The problem with education that you describe is exactly what I was alluding to earlier with computer science’s focus on the quantitative, but I think this is a maturity issue. What I mean is that architecture is a very old discipline. Designing computers and software, not so much. That evolution would, in theory, happen in time, but it will take a long time. Imagine a world in which there are bachelor’s degrees in human factors and human-computer interaction (hci). Oh sure, there might be one or two now, but imagine a world where they are on the same plane as computer science (cs) degrees.

But in order for such large-scale changes to happen, there need to be economic incentives. That’s the biggest problem in the entire puzzle here, because organizations have no economic incentive to make a radically “better” computer. They’re still making tons of money with “good enough.” I’m hopeful that the rise of mobile computing will give rise to better design, as the competitive forces there are much stronger than in the pc business, just as was true for pcs over older mainframes and minis.

But what you seem to be getting at here is a philosophy of computing, just as you describe a philosophy of architecture. That is, not one architect, but an entire movement. This is like Sarah Susanka and the “not so big” movement.11 The conditions for that to exist in computing are not quite as clear to me as in architecture or lifestyle design. That’s possible also with computing, but again, the experience has to be so overwhelmingly great as to cause a parallel economic revolution.

I’d question whether the empathic feeling that you describe between two individuals is even possible with machines. I can’t remember whether this was touched on by Ray Kurzweil in The Age of Spiritual Machines12 or Don Norman in Emotional Design.13 I don’t know where empathy or compassion originates in the brain, but I’m pretty sure these are very high-order functions, and they vary individually (i.e., the continuum from sociopath to the Dalai Lama). Indeed, many would say that empathy and compassion are something we must cultivate within ourselves.

Which brings me to another theme: dogs. Could it be that what you describe is what humans seek in dogs? Dogs are selfless, unconditionally loving, warm, whimsical, carefree — exactly the opposite of the “weight of the world” that most adults must grapple with on a daily basis. If the computer could provide a dog-like antidote to adulthood, that would be great. Crazy hard, though. Which describes the saying, “Anything worth doing…” pretty well.

I suspect that Cynthia Breazeal’s work14 at mit may have some links. Also, David Creswell15 at cmu. He has a publication about transcending self-interest. I think the research questions du jour are these:

What are the determinants of a disposition for empathy in humans? Where is empathy encoded in the brain? Is parity an important part of empathy, or can empathy exist effectively without parity?

The latter would be a requirement for an empathic architectural style to succeed in computing, since visiting an empathic requirement on the user would be tantamount to slavery. Until you know the answers to those questions, any attempt to get computers to behave as part of an empathic conversation would be difficult, if not impossible, because there is no model for empathy other than humans. Either that, or I’m horribly confused about the animal kingdom.

Keep up the good work. This is likely to turn into a hard slog if it hasn’t already.

——

9 Peter Lucas has shaped MAYA as the premier venue for human- and information-centric product design and research. He co-founded MAYA in 1989 to remove disciplinary boundaries that cause technology to be poorly suited to the needs of individuals and society. His research interests lie at the intersection of advanced technology and human capabilities. He is currently developing a distributed device architecture that is designed to scale to nearly unlimited size, depending primarily on market forces to maintain tractability and global coherence. (MAYA Design, “MAYA Design: Peter Lucas”)

10 Different communication requirements necessitate different network solutions, and these different network protocols can create significant problems of compatibility when networks are interconnected with one another. In order to overcome some of these interconnection problems, the open systems interconnection (OSI) model was approved in 1983 as an international standard for communications architecture by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). The OSI model consists of seven layers, each of which is selected to perform a well-defined function at a different level of abstraction. The bottom three layers provide for the timely and correct transfer of data, and the top four ensure that arriving data are recognizable and useful. While all seven layers are usually necessary at each user location, only the bottom three are normally employed at a network node, since nodes are concerned only with timely and correct data transfer from point to point. (Encyclopædia Britannica Online)

11 Through her Not So Big House presentations and book series, Sarah Susanka argues that the sense of “home” people seek has almost nothing to do with quantity and everything to do with quality. She points out that we feel “at home” in our houses when where we live reflects who we are in our hearts. In her book and presentations about The Not So Big Life, she uses this same set of notions to explain that we can feel “at home” in our lives only when what we do reflects who we truly are. Susanka unveils a process for changing the way we live by fully inhabiting each moment of our lives, and by showing up completely in whatever it is we are doing. (Susanka Studios, 2013, “About Sarah”)

12 Ray Kurzweil is a renowned inventor and an international authority on artificial intelligence. In his book The Age of Spiritual Machines, he offers a framework for envisioning the twenty-first century — an age in which the marriage of human sensitivity and artificial intelligence fundamentally alters and improves the way we live. Kurzweil argues for a future where computers exceed the memory capacity and computational ability of the human brain by the year 2020 (with human-level capabilities not far behind), where we will be in relationships with automated personalities who will be our teachers, companions, and lovers, and where information will be fed straight into our brains along direct neural pathways. (Amazon, 2000)

13 In Emotional Design, Don Norman articulates the profound influence of the feelings that objects evoke, from our willingness to spend thousands of dollars on Gucci bags and Rolex watches, to the impact of emotion on the everyday objects of tomorrow. (Amazon, 2005)

14 Cynthia Breazeal is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology where she founded and directs the Personal Robots Group at the Media Lab. She is a pioneer of social robotics and human robot interaction. (Dr. Cynthia Breazeal, “Biography”)

15 Dr. David Creswell’s research focuses broadly on how the mind and brain influence our physical health and performance. Much of his work examines basic questions about stress and coping, and how these factors can be modulated through stress-reduction interventions. (CMU Psychology Department, “J. David Creswell: CMU Psychology Department”)