Conversation: Language & Vision

On May 14, 2011 at 9:46 p.m., I posted the first draft of what will eventually become the third story of the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making” surrounding my experience with poster design. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: I have always pondered whether it is possible for those born blind, deaf, and mute, to think or dream of abstract concepts that they have never encountered.

Whenever I have to process complex thoughts, I hear a voice inside my head, speaking a language whose grammar helps me understand and sort things out. How about babies? Having yet to acquire a language, how do they manage to think? Do they just act on their instincts and feelings? What about grown-ups who do not have the ability to put thoughts together into sentences with proper grammar?

Some say that language is the key to our ability to process abstract thought and hence develop intelligence. But I think there are many who are mentally or physically disabled, yet can still think and understand things like other people. Language seems to boost our ability to organize thoughts and abstract ideas, but it seems like we humans have a much more basic way of perceiving, feeling, and understanding the world around us: a fundamental layer of communication beneath our language that everyone has the innate ability to access. I am obviously speaking of what I do not understand, but maybe someone who does can shed light on these issues.

slim: I don’t know, either. But it occurs to me that there may be a set of perceptual triggers that encapsulate the fundamental and primitive qualities of perception, probably pre-language with the potential to be widely shared. Why couldn’t we imagine an interaction paradigm based exclusively on those triggers? After that is established, one could layer the symbolic and gestural semantics on top of it as needed.

joonkoo: These questions are very much related to the origin of knowledge, and the nature vs. nurture debate. I’m a blank slate when it comes to language, but I can point you to a few studies in the domain of vision and number processing. Just be aware that I may be over-generalizing.

The human visual cortex29 is organized in a category-selective manner. For example, the lateral part of the occipital cortex is activated when a person is viewing living things in general, whereas the medial part is activated when viewing non-living things. This category-specific organization can be driven by experience over development, but it can also be somewhat hard-wired. One study looked at the patterns of neural activity in congenitally30 blind subjects, who showed the same kind of neural activation patterns in response to these categories of objects even when they were presented auditorily. This study suggests that our visual experience is not necessarily the only critical factor that gives rise to the functional organization of our brain — at least in that context.

slim: When you say living vs. non-living, is a plant living or non-living? Is this related to how autistic people behave differently in relation to non-living vs. living things?

joonkoo: I don’t recall exactly how they categorized living vs. non-living in their study, but one thing I do think is true is that living vs. non-living is probably just one of many ways that things in nature can naturally be divided, probably confounded with many other ways of categorizing things. For example, it may well be natural vs. man-made things that the brain really cares about. To me, the precise categorization of these things isn’t really important. What’s more interesting is that the visual cortex does not necessarily require visual input for its functional organization.

slim: If the visual cortex doesn’t require visual input to function, wouldn’t that be a rather remarkable statement when it comes to our categorization of cortices into visual vs. others? Am I understanding this correctly?

joonkoo: Not exactly. Here’s another way to think about it. In normal development, the visual cortex is designed to process visual sensory information — that is an anatomical fact. But it’s used differently when it lacks visual input for any unexpected reason. What’s interesting is that even if the visual cortex is putatively31 doing something different in these congenitally blind people, there seems to be a set of universal principles that govern its functional organization.

When these participants hear a living thing, for example, they have to bring up some mental image of that thing, which is probably not visual imagery, yet their visual cortex works the same way as it does in a sighted participant.

slim: Oh, whoa.

So what you’re saying is that when blind people hear something, it triggers a mental image in their head, which uses the visual cortex, although the imagery they bring up is not visual?

joonkoo: Yes, my guess is that it’s probably a mixture of auditory and other multimodal imagery. But yes, their visual cortex works similarly to that of other subjects considered to be normal.

I guess this can be described as a form of plasticity. But I think this is much more profound than plasticity within a domain or modality (e.g., after losing a finger, the part of the motor cortex that had been associated with that finger is used for the other fingers).

slim: When you say plasticity, I’m guessing it is a situation where a certain part of your body takes on a different role when what it was originally associated with is no longer available?

joonkoo: Yes. Evidence for brain plasticity is very cool.

To Anson’s point, however, this isn’t to say that the experience of abstract or symbolic thought is unimportant. Perhaps a more relevant story comes from a study that investigates number sense in native Amazonians,32 who lack words for numbers. Through the use of numeric symbols, we have little problem expressing arbitrary quantities. The Amazonians, on the other hand, have only one, two, and many. Even so, they are pretty good at approximate arithmetic, even with numbers far beyond their naming range, but their performance on exact arithmetic tasks was poor. In fact, they failed to understand that n + 1 is the immediate successor of n.

anson: Would a relevant topic be why the Golden Ratio33 is universally pleasing to the eyes? It seems to indicate that there’s something common to human perception.

joonkoo: Yes, the Golden Ratio is interesting! In fact, there seem to be a lot of links between the biological system and math. One thing that I am more familiar with is the Power Law34 and γ, the Euler constant.35

Many of the psychophysical models are based on this constant and the natural log, and I would love to understand this more as well.

The definition of γ seems to be quite similar to neuronal firing patterns (e.g., long-term potentiation), and I speculate that all these fancy mathematics, such as γ, π, and the Golden Ratio, may be driven by some of our intrinsic biological properties. I’m talking too much about things that I don’t fully understand. This should be a question for a computational biologist.
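For reference, the definition of γ under discussion — the same one spelled out in words in note 35 — written out in symbols:

```latex
\gamma \;=\; \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} \;-\; \ln n \right) \;\approx\; 0.577215665
```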

——

29 The back area of the brain concerned with vision makes up the entire occipital lobe and the posterior parts of the temporal and parietal lobes. The visual cortex, also called the striate cortex, is on the medial side of the occipital lobe and is surrounded by the secondary visual area. This area is sensitive to the position and orientation of edges, the direction and speed of movement of objects in the visual field, and stereoscopic depth, brightness, and color; these aspects combine to produce visual perception. It is at this level that the impulses from the separate eyes meet at common cortical neurons, or nerve cells, so that when the discharges in single cortical neurons are recorded, it is usual to find that they respond to light falling in one or the other eye. It is probable that when the retinal messages have reached this level of the central nervous system, and not before, the human subject becomes aware of the visual stimulus, since destruction of the area causes absolute blindness in man. (Encyclopædia Britannica Online)

30 Existing or dating from one’s birth, belonging to one from birth, born with one. (OED Online)

31 That is commonly believed to be such; reputed, supposed; imagined; postulated, hypothetical. (OED Online)

32 CNRS and INSERM researchers (Pierre Pica, Cathy Lemer, Véronique Izard and Stanislas Dehaene) studied the example of the Mundurucus Indians from Brazilian Amazonia, whose vocabulary includes number words only up to four or five. Tests performed over several months among this population show that the Mundurucus cannot readily perform “simple” mathematical operations with exact quantities, but their ability to use approximate numbers is comparable to our own.

This research, published in the October 15, 2004, issue of the journal Science, suggests that the human species’ capacity for approximate arithmetic is independent of language, whereas precise computation seems to be part of the technological inventions that vary largely from one population to the next. (“Cognition and Arithmetic Capability”)

33 Also known as the golden section, golden mean, or divine proportion, in mathematics, the irrational number (1 + √5)/2, often denoted by the Greek letters τ or ϕ, and approximately equal to 1.618. (Encyclopædia Britannica Online) See also Van Mersbergen, Audrey M., “Rhetorical Prototypes in Architecture: Measuring the Acropolis with a Philosophical Polemic”, Communication Quarterly, Vol. 46, No. 2, 1998, pp. 194–213.

34 A relationship between two quantities such that the magnitude of one is proportional to a fixed power of the magnitude of the other. (OED Online)

35 The constant that is the limit of the sum 1 + ½ + … + 1/n − log n as n tends to infinity, approximately equal to 0.577215665 (it is not yet known whether this number is rational or irrational). (OED Online)

Conversation: Respect & Integrity

On April 17, 2011 at 5:38 p.m., I posted the first draft of what will eventually become the first story in the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making” surrounding my experience with glass. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: When I was studying hermeneutics,28 I remember my professor saying, “Every question presupposes you know something about the answer.”

For example, you ask, “What can I do to tear a piece of glass?” The question presupposes that you need to do something to achieve that effect. I don’t know much about glass-blowing, but as far as I know, you take advantage of gravity, right? Sometimes you don’t have to do anything, but just let gravity and the natural decline in temperature take care of matters.

The kind of question we bring to the table often shapes the kind of answer we expect to hear. Everyone sees through a pair of tinted glasses. It is inevitable, but it is important for us to be aware of that influence and bias and try to compensate for it. That is something people in the field of hermeneutics and epistemology have helped us to understand.

Does this make sense to you?

slim: Yes it does.

And that’s such a great point about the use of gravity in tearing glass. You’re absolutely right. I did think that I had to do something to tear glass. It is truly mind-boggling to realize that there’s no end to how many biases we may be operating under at any given moment.

You mentioning gravity also reminds me of an experience I had in my modern dance class.

One day, we were asked to roll down a small hill. The first time I did, I was somewhat apprehensive. I had never rolled down a hill before — at least not as an adult — and I was afraid that I might get hurt. So in an attempt to prevent that from happening, I tried to become very conscious of how I rolled, so I could slow down and control where I was going. I wasn’t very successful, though.

I remember the roll being rather rough.

But the second time I did it, I was abruptly dragged away by a friend of mine who showed up out of nowhere and said “Let’s go!” Before I knew it, I was back up the hill throwing myself down again. What is interesting about this second time is that I distinctly remember how free my body felt. Maybe it’s because I didn’t have any time to think, but it felt as if I were gliding down the hill. It felt very smooth.

It was just me, the ground, and gravity working together in collaboration. In retrospect, I was biased toward assuming that to not get hurt I had to become conscious of the roll, so as to try and control every aspect of it, when in fact it was better to relax.

an-lon: Funny story. I was at a going-away party for one of my DreamWorks friends, and another coworker brought some homebrew and a beer bong. At the height of everyone’s drunkenness, Josh, the bringer of beer, tore into Moiz, the guy who was leaving, over something involving semicolons. It took me a while to piece together the story, accompanied as it was by much shouting and laughter, but from what I gather, Moiz had managed to put a semicolon at the end of every single line of his Python code, and Josh just couldn’t believe it. He said, “We never put it in the best practices manual because we never imagined anyone would do something so goddamn stupid!”

Point being, in computer languages, people often write code in one language as if it were another — importing irrelevant habits/conventions/design patterns. The semicolons thing was funny because the vehemence of the rant far outweighed the magnitude of the infraction, but I’ve seen many examples of this over the course of my programming lifetime, and I’m sure it has cost companies millions of dollars’ worth of programmer time, just because the code ends up being incomprehensible.
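To make the offense concrete, here is a hypothetical snippet in the spirit of the story (not the actual code): Python accepts a trailing semicolon as a statement separator, so this runs fine, but it reads like C transplanted into Python.

```python
# Legal but unidiomatic: every statement ends with a C-style semicolon.
# Python parses each semicolon as a redundant statement separator, so the
# code works - it just flouts Python convention, a C habit carried over.
total = 0;
for n in range(10):
    total += n;
print(total);
```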

slim: Yeah, I remember it taking me quite a bit of effort to go from programming in C to programming in Prolog. Even now I haven’t done much functional programming, so I bet the way I write functional programs is not as respectful of the functional principles as it could be. As a matter of fact, it may not be that much better than my disrespect for the material integrity of glass.

an-lon: By the way, your comment about respecting the integrity of physical materials reminds me of this old joke of a fictional radio conversation between a U.S. Navy aircraft carrier and the Canadian authorities off the coast.

U.S. Ship: Please divert your course 0.5 degrees to the south to avoid a collision.

Canadian Coast Guard: Recommend you divert your course 15 degrees to the south to avoid a collision.

Ship: This is the captain of a U.S. Navy ship. I say again, divert your course.

Coast Guard: No. I say again: you divert your course!

Ship: This is the aircraft carrier USS Coral Sea. We are a large warship of the U.S. Navy. Divert your course now!

Coast Guard: This is a lighthouse. Your call.

slim: Ha ha ha ha ha ha! Respect the lighthouse, dammit!

an-lon: Also, here’s a quote that expresses my view of integrity, written by Mary MacCracken, a teacher of emotionally disturbed children. She’s explaining why she tries to teach reading to children who are so lacking in other life skills that it might be argued that learning to read is beside the point.

“The other teachers thought I was somewhat ambitious. They were kind and encouraging, but it did not have the same importance for them as it did for me. And yet, and yet, if what I loved and wished to teach was reading, I had as much right to teach that as potato-printing. In the children’s world of violent emotion, where everything continually changes, I thought it would be satisfying for them to know that some things remain constant. A C is a C both today and tomorrow — and C-A-T remains “cat” through tears and violence.”

For some reason, that quote has stayed with me for a long time. To me, that’s integrity: that C-A-T spells cat today, tomorrow, and yesterday.

And incidentally, that’s what Microsoft’s never figured out — that users hate having things change out from under them for no good reason. Remember those stupid menus whose contents shift depending on how frequently you access the menu items? Whose brilliant idea was that? Are there any users out there who actually like this feature, instead of merely tolerating it because they don’t know how to turn it off? Features like that create a vicious cycle: users become afraid of the computer, so Microsoft assumes they’re idiots and dumbs things down even further — making the computer even more unpredictable and irrational. Now there’s no rhyme or reason whatsoever behind what it deigns to display. Say what you will about Mac fans, Windows and OS X are still light years apart in terms of actually respecting the user.

And here we cycle back to the initial conundrum: how to reconcile that austere landscape of programming abstractions with our emotional, embodied, messy selves; selves so much in need of human connection that we perhaps see everything through that lens.

Here’s a slightly loony-bin example that I have tried and failed many times to write down. Around the time I was learning object-oriented programming, sometime in my early twenties, my cousin went through a love-life crisis.

The guy she was dating had a photo of an ex-girlfriend on his refrigerator, but no photo of my cousin — only her business card. They somehow got into a fight over this. She went home, and, partly out of pique — but mostly to amuse herself — she got out a photo of every single one of her ex-boyfriends, put those photos on the fridge, and added the business card of the current guy. Then she forgot about it and went about her daily business. Of course, you can predict the rest of the story. The new guy came over unexpectedly, saw the photos, they had another fight, and finally broke it off.

My cousin tried to explain to me later that the problem wasn’t so much the photos and business cards and exes. It was that her boyfriend just didn’t get that she does quirky things like that for her own amusement. What she did wasn’t intended as a message and wasn’t intended to be seen, it was just an expression of her own personal loopiness. The fact that he couldn’t relate to her silliness was as much the deal-breaker as the original photo of his ex.

At the time, we were both fresh out of college and lamenting the closeness of college friendships. The guy in question was older, maybe in his thirties, and he really just didn’t seem to get it.

And here is where I went into the spiel I have never been able to replicate since. Because I had just been reading about object-oriented programming, the thought in my head was that in college, we gave out pointers left and right to each other’s internal data because we just didn’t know better. All the joy and sorrow and drama was there for any close friend to read, and write, and modify. As we got older, we learned that this is a rather dangerous way to live, and developed more sophisticated class interfaces — getters and setters for that internal data, if you will. The guy in my cousin’s story seemed to live by those getters and setters, and was appalled when my cousin inadvertently handed him a pointer.

Here’s the part of the story I have never been able to replicate: I told my cousin all that without mentioning object-oriented programming once. I used a fair bit of object-oriented terminology, but only the words whose meanings were either immediately clear from the context or already in common usage — handle and interface, for example. She immediately understood what I was trying to say, and added that the word “handle” was a particularly poignant metaphor. When we’re young, we freely give loved ones a handle to our inner selves, but in adulthood, we set up barriers and only let people in at predetermined checkpoints according to predetermined conventions. As adults, we give out handles to only a very few, and those already in possession of a handle can always come back from a previous life to haunt us. We interact with the rest of humanity via an increasingly intricate set of interfaces. By now, I possess a much deeper and richer set of interfaces and protocols than I did in my early twenties, so I can share a great deal more of myself without fear of being scribbled on. But I still don’t hand out raw pointers very often — the vulnerability is too much for me, and the responsibility too great for the other person.
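A minimal sketch of the metaphor in Python (the class and method names are invented for illustration, not from the original conversation): the college self hands out a direct reference to its internal data, while the adult self mediates access through getters and setters.

```python
class CollegeSelf:
    """Hands out raw references - any close friend can read, write, modify."""
    def __init__(self):
        self.feelings = ["joy", "sorrow", "drama"]  # exposed, mutable state

class AdultSelf:
    """Guards internal state behind an interface of getters and setters."""
    def __init__(self):
        self._feelings = ["joy", "sorrow", "drama"]  # private by convention

    def get_feelings(self):
        return list(self._feelings)  # getter returns a copy: look, don't touch

    def set_feeling(self, feeling, trusted=False):
        if trusted:                  # the setter admits change only at a checkpoint
            self._feelings.append(feeling)

young = CollegeSelf()
young.feelings.append("heartbreak")        # scribbles directly on internal data

older = AdultSelf()
older.get_feelings().append("heartbreak")  # mutates only a copy; the self is untouched
older.set_feeling("trust", trusted=True)   # change happens only through the interface
```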

Back to computers and HCI. I am surprised sometimes by how often I use computer terminology in daily life among non-programmers and get away with it. You don’t have to be a programmer to understand me when I complain that an instruction manual is spaghetti, or that my memory of a particular song got scribbled on by someone else’s more recent cover of it. The reason these metaphors work, of course, is that spaghetti and scribble are essentially round-tripping as metaphors — from daily life to computer science and then back to daily life. First, the English words were co-opted to convey a specific computer science concept — spaghetti code is code that is unreadable because it tangles in a million different directions, and to scribble on a memory location is to overwrite data you’re not supposed to overwrite — and then I re-co-opted them back into English — to express frustration at the unreadability of the instruction manual, or to lament that my memory of the original song has been tarnished.
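In Python terms (a contrived illustration, since Python has no raw memory access), scribbling looks like a write through an alias that overwrites data another name was counting on:

```python
# Two names share one list, so a write through either name "scribbles on"
# data the other owns - the Python analogue of writing through a stray pointer.
original = ["verse 1", "verse 2"]
cover = original                      # an alias, not a copy
cover[1] = "verse 2 (reinterpreted)"  # scribbles on the shared data
print(original)                       # ['verse 1', 'verse 2 (reinterpreted)']
```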

My point here is that computer science is rich in human meaning precisely because we choose human metaphors to express otherwise abstract concepts. My analogy between object-oriented programming and human relations is surprisingly salient because object-oriented programming, at some level, had to come from human experience first. What is architecture? It was the Sistine Chapel before it was the Darwin operating system. Have you seen the TED talk by Brené Brown on the power of vulnerability? It’s what got me thinking about our longing for human connection.

slim: I’m really taken by your use of pointers and getters/setters in the context of relationships. I’ve never thought of it that way, and it’s a rather interesting way of thinking about it. There’s so much in there that I’m having trouble responding in a coherent way.

And yes, I’ve watched that Brené Brown talk numerous times in the past. It’s a very good one, and it is consistent with my experience making physical things.

——

28 The art or science of interpretation, especially of Scripture. Commonly distinguished from exegesis or practical exposition. (OED Online)

Conversation: Trust & Not Expecting

On April 8, 2011 at 2:16 p.m., I posted the first draft of what will eventually become the last story in the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making” surrounding my experience in the foundation studio. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: For me, painting requires this exact kind of courage you are talking about. I find painting very difficult, because I always need to get things right the first time around. I always need to know what to do precisely to get to the end result I want. I would use very fine brushes to get all the details of the eyes and the hair from the get-go. I would pick the exact color of paint that matches the photo. I need to get everything right with painting just one layer.

But when I saw videos24 of skilled painters painting, they didn’t seem to care if their painting looked awful in the beginning. They begin with a very rough outline and use very broad strokes. They keep painting over it again and again, refining and adjusting constantly, adding more and more details layer by layer. It is this constant refinement that makes their paintings possible, and also realistic.

To be courageous in the midst of uncertainty, trusting that the process — or the journey — will work itself out, is something that I don’t think I learned from our computer science education.

slim: Having gone through a portion of the RISD foundation program, I’ve come to realize that one of the most important skills of an educator is to know how to challenge the students. It’s like Randy’s story about his first Building Virtual Worlds (BVW) class, where he realized that the quality of work his students displayed on their first project was so high that all he could do was tell them to do better. It seems to me that in the right environment, we human beings can grow in almost magical ways.

anson: I was lucky to be in that very class with Randy. But you know what? At that time, we all thought Randy was a mean and ruthless teacher. We worked so hard to get our first virtual world out in two weeks, and then he said he expected better work. We were like “What?!” After having watched his last lecture, we, of course, now empathize with why he did this, and that he is one of the best educators in the world. He saw the potential in us and he helped us to draw it out.

I think both you and I learned something precious in the past few years by jumping into a field foreign to us. And you’re right, it is education itself. We had our minds and paradigms stretched, challenged, stimulated, and inspired. I am so glad I have gone through this education process while I am still teachable. You know, some people stop being reflective after a certain age and become unwilling to change the paradigms of how they look at things.

an-lon: I’ve been through this — making forwards and backwards progress at different times in my life — learning to be prolific instead of perfectionistic, and learning that it’s the playful, throw-away variations that eventually lead to the finished work.

In one chapter of the book Art and Fear, there’s an apocryphal story about how half the students in a pottery class are told they will be graded on the quantity of work they produce, the other half that they will be graded on the quality of their work. At the end of the assigned period, the students in the quantity group have produced higher quality work than the students in the quality group because they were given the freedom to experiment and iterate, plus the mandate to work quickly.

That’s my art story. My computer science (CS) story is no less profound. Here’s the thing: I doubt that I could have survived majoring in CS back in college. I didn’t have the maturity or the study habits, and I was far too easily intimidated. I was also terrified that I wasn’t smart enough. I don’t want to go into a long song and dance about this, and fortunately, I don’t have to, because Po Bronson has already written an article25 about it.

The gist of the article is that parents who overpraise their kids for being smart are setting them up to never leave their comfort zone, because the minute they encounter difficulty, the kids panic and assume “it’s tough, therefore I must not be so smart after all.”

I found CS to be tough. I assumed everyone else was smarter than me. I walked away, for a time. What brought me back? Above all, a change in mindset. It happened over the course of several years, and I can trace much of it back to a couple of college friends.

Doug was this kid from Alabama who lived two doors down from me in my dorm freshman year; Jeff was his Jewish roommate from New Jersey. One of my very clearest college memories — the one that’s always struck me as the quintessence of dorm hall diversity — was when we somehow got into an argument about when World War II actually began. For Doug, World War II began with Pearl Harbor, because that’s what we were taught in American history classrooms. For Jeff, it began with the Holocaust and the pogroms, because that’s what was in his cultural memory. For me? Japan invaded China years before either of those events ever happened. We came from such different backgrounds, yet ended up as such good friends. Those were good times.

Anyway, Doug and Jeff were different from any of the guys I’d known in high school. Smart, yes, but this was Princeton and everyone was smart — or desperately trying to prove they were. I think, in hindsight, that those guys were among the first I’d met who were playfully smart — who tried new things because it was fun, and who ended up in computers because it was a new, fun thing to be tried.

Back then, I didn’t understand the concept of doing things for fun. My physicist father had none of that playfulness about him when it came to academic studies. For example, he could probably have become a chess grandmaster if he had wanted to, but he never bothered to learn, because it was just a game and therefore pointless.

I was never as good at math and physics as my dad. That was a losing battle from the start. And since physicists tend to see computer science as being several rungs below them on the intellectual pecking order — the equivalent of doing manual labor — I was never exactly encouraged to pursue computer science. So I went my own way and studied comparative literature — and my parents, to their everlasting credit, let me.

But I threw the baby out with the bath water. I was never meant to be a physicist — though, ironically, computer graphics has actually brought me back to physics full circle — but computer science wasn’t physics. Honestly, computer science is mostly just dicking around. You futz with it till it works. I’m not saying the theoretical underpinnings are unimportant, but honestly, the guys who are good are the ones who spent a lot of time dicking around because it was fun. They weren’t intimidated by the difficulty factor because, unlike me, they didn’t see the difficulty as an IQ test. For them, an obstacle was like a video game obstacle: a legitimate challenge to be bested, not a measuring instrument assessing whether or not they stacked up.

At first, I really couldn’t wrap my head around the fact that these guys who seemed to spend as much time playing Nethack as they did writing code were also really cool and well-rounded people. Jeff was into theater and Doug knew a ton about contemporary art. It didn’t seem fair, somehow, that the reward for goofing off was to become smarter.

I didn’t have any sort of instant epiphany, but over the course of college and my early twenties, I did rewrite my entire value system. I came to understand from observation that intelligence wasn’t about being born smart — it was about being born smart enough, and from there, being playful and willing to explore. It was about leaping in without a clue and getting your hands dirty, rather than hovering nervously on the sidelines.

After years of being told by my parents how smart I was, and living with the secret fear that I really wasn’t, I finally came to value honesty, courage, and playfulness over being smart. I also came to see the excuse “well, I could have done it if I’d tried harder” as the coward’s way out. Because if you get a B on a test without studying, you can comfortably assume you might have gotten an A if you had studied. But if you study your ass off and still get a B, well, there go all your illusions. So it’s easier never to try.

When I returned to computer science in my early twenties, I was beginning to develop some semblance of maturity. I made a conscious choice about my value system: I would quit worrying about whether I was smart enough, and instead put all my effort into making an effort. What I discovered was that playfulness (i.e., willingness to explore seemingly irrelevant side paths) and work ethic (i.e., setting goals and not making excuses) led, over time, to all the analytical smarts I ever needed for my career.

This spirals back to Art and Fear because of the simple, sad observation the authors make in their opening pages, which is that many students stop doing creative work after they graduate. Without the community and structure and feedback cycle, they’re lost.

So I think the spirit of play becomes all the more important after graduation — because the girl folding paper and producing a thousand variations just because it’s interesting will keep doing it, whereas the guy who was doing it for a grade won’t. What you’ve produced as a student will most likely be forgotten, but what you’ve become won’t.

david: Slim, there’s a certain raw, honest quality to your writing that I’m just incapable of, but it feels so good reading it, because like the finest song lyric, it expresses what I felt palpably.

The overarching theme here of whimsy is spot-on. I think the greatest indictment of modern U.S. culture is the lack of whimsy and its replacement with what the writer David Foster Wallace referred to as “the entertainment” or “an orgy of spectation.”26

If there is one thing I seek in my mostly boring middle-aged adult life, it is that whimsy and childlike sense of adventure. It strikes me that this is the same thing that makes children so hilarious, as in this conversation between a friend (the mom) and her child (the son), which appeared in my e-mail today:

Son: When you’re three, sometimes they will let you out of a cage.

Mom: What? What cage are you in when you’re three?

Son: I don’t know… I think it’s the rule, though. You can get out when you’re three.

Mom: How do you know?

Son: Well, when people are let out of a cage they always say, “I’m three! I’m three!”

This is precisely the kind of thing that, when observed in adults, would be labeled a dissociative disorder and medicated out of existence. Adulthood is so overrated. At least the politically correct version of it that most of us practice.

slim: Both of your stories resonate with me. I feel as though I have spent too much time in my twenties worrying about when I would finally be an “adult,” or at the very least a “professional,” much to my own detriment. At first, I thought there was something wrong with me for being so child-like, but once I got sufficiently close to those who I considered to be the epitome of adulthood or professionalism, I learned that they were simply hiding their child-like tendencies, because they didn’t want other people to see them as a sign of immaturity or weakness.

I also learned that the elders could see right through people who were trying to look like an “adult” or a “professional.” Those who have lived long enough know that none of us actually knows anything for certain. So it’s mostly a matter of whether you trust someone or not, instead of whether that person really knows something.

david: There is no more chilling effect, as far as I’m concerned, on American culture than the one you describe here, which is to say that half the country exists in a world where everyone is pretending to be professional, instead of being authentically themselves and leaning toward self-actualization. Some form of this was the original hypothesis of the Cluetrain Manifesto,27 which seems to have had little effect outside very small circles of young people.

Of course, the individual’s self-actualization is rarely in the best interest of the corporation, at least as management sees it. This homogenization is about as disturbing a trend as we can possibly endure and, in fact, should be seen as an affront to the principles that we stand for, namely freedom.

I’m consistently amazed by the influence of “dress for success” on the American corporate psyche. People actually care how I cut my hair, whether I shave, or whether I’m tattooed or pierced, as if my capabilities or brain power or effectiveness changed with the scenery. I’m also consistently amazed by how the basic marks of individuation aren’t seen as either intrinsic or extrinsic. I started writing an essay on a philosophy of hiring recently, and a lot of these kinds of themes come up there. Pittsburgh is certainly a bastion of the old school in this regard. While I understand the point in marketing and sales, the extent to which I’ve seen all manner of bizarre corporate policy developed on the altar of dress codes is mind-boggling.

I’ve seen pictures of James Watson delivering the original papers on DNA just days after their publication, standing on stage in front of his peers in shorts. And then there’s Paul Erdős, who pretty much defined the picture of obsession and minimalism. I’m also told that none other than Herb Simon, when asked to choose a place to live on his arrival at CMU, drew a half-mile radius around the university and said, “Anywhere in that circle,” owing to his particular obsession with being able to eat and breathe the work, other concerns be damned.

And of course, I’m not sure we have much in the way of counterculture outside of absurdist examples like Mike Judge’s Idiocracy.28 I must go watch that movie again soon.

Welcome to Costco; I love you!

They tell me Costco is now in downtown Chicago. I may have to move to a hill in Montana next.

an-lon: The theme of balancing grown-up responsibilities (e.g., taxes, housing, earning a living) with a childlike sense of adventure is definitely a big one for me, as well. I think the theme of rebirth is a salient one, too. For better or worse, I can’t re-live my twenties. I need to find what works for me now, in making my second big career change — or third, I guess, if you count comparative literature to CS as one arc and then think tank to VFX as another. I can’t just repeat what I did the first two times — I need to find what works now, at a different life stage with different priorities. I’m not out to reject adulthood here, but I do intend to redefine it.

anson: I think we have to question whether professionalization is good for the education of our current and next generations. Professionalization makes us feel good about ourselves and helps us land a job more easily, but it doesn’t help produce people who are well-rounded and capable of continued learning, especially in contexts that are out of their comfort zones.

I am fortunate to have received both a technical and a liberal arts education. When I raise my kids, I won’t let them become lopsided techies. I also want them to be equally exposed to a liberal arts education, including history, arts, literature, and philosophy. I think that will help them to see the world through a different pair of lenses and be more embracing of diversity and creative ideas.

——

24 A good example of such a video is a Belgian documentary film from 1949 directed by Paul Haesaerts called Visit to Picasso that captures Picasso’s creative process as he paints in real time. (“Bezoek aan Picasso”)

25 American journalist Po Bronson once wrote about how a large percentage of all gifted students severely underestimate their own abilities. (“How Not to Talk to Your Kids”)

26 The late David Foster Wallace, an award-winning American writer, is quoted as saying, “The great thing about not owning a TV, is that when you do have access to one, you can kind of plunge in. An orgy of spectation. Last night I watched the Golf Channel. Arnold Palmer, Jack Nicklaus. Old footage, rigid haircuts.” (Lipsky, 2010, 118)

Lipsky, David. Although Of Course You End Up Becoming Yourself: A Road Trip with David Foster Wallace. (New York: Broadway Books, 2010), 118.

27 The Cluetrain Manifesto both signals and argues that, through the Internet, people are discovering new ways to share relevant knowledge with blinding speed. As a result, markets are getting smarter than most companies. Whether management understands it or not, networked employees are an integral part of these borderless conversations. Today, customers and employees are communicating with each other in language that is natural, open, direct and often funny. Companies that aren’t engaging in them are missing an unprecedented opportunity. (“The Cluetrain Manifesto”, 2000)

28 An American film where Private Joe Bauers, the definition of “average American,” is selected by the Pentagon to be the guinea pig for a top-secret hibernation program. Forgotten, he awakes 500 years in the future. He discovers a society so incredibly dumbed-down that he’s easily the most intelligent person alive. (IMDB, 2006)

Conversation: Choice & Feeling

On April 6, 2011 at 12:31 a.m., I posted the first draft of what will eventually become the fifth story in the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making” surrounding my experience in the metal shop. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. The previous installment talks about empathy and mastery.

 

an-lon: (Smiles) Happens to me all the time when drawing and editing, squinting at it and wondering what’s wrong, and 90% of the time, whatever’s wrong is completely orthogonal to all the directions I was previously searching.

slim: The feeling of hindsight obviousness intrigues me quite a bit. I remember being dumbfounded when my friend shared her story of how she overcame her bipolar disorder. She said she finally realized that she had the power to choose not to be depressed. She told me that it was so obvious in hindsight that she couldn’t understand why she didn’t realize it before. But the reason I was dumbfounded was that I wasn’t depressed, yet I had never realized that, either. I can choose how to feel? That was a completely novel thought.

Since then, I’ve heard many people say things like “we always have a choice.” But I think it’s imprecise to say that we always “have” a choice. I’m sure it took them a lot of struggle to come to that realization. So what they mean is that we have to become aware of the choice. Or more precisely, we have to “develop” and “make” a choice that wasn’t available to us previously. That can take quite a bit of effort. It’s not just a matter of “snapping out of it.” Once you’re able to just snap out of it, you’ve already learned it.

an-lon: Ironically, Slim, you knew me during a period when I was genuinely depressed. When I attended the International School of Beijing (ISB), I was really alone and struggling. Beijing was my first time living in a big city, and I experienced culture shock and extreme loneliness.

I was functional — for where I was at the time, I was pretty convinced I’d just get yelled at if I admitted I needed help — but I remember sleeping 10 hours a day because I just didn’t want to wake up, and making a deal with myself that I’d allow myself to contemplate suicide if college wasn’t better. Don’t get me wrong, I wasn’t actively suicidal. It was just my way of mentally kicking the can down the street. I truly have no idea what it was like for your friend.

I think there are links between ADD and depression, but I don’t think I was ever truly chemically predisposed to depression in the way a bipolar person is. In my case, I was depressed first because I was trapped in a small town — before Beijing — then thrown into a big city — Beijing — with no coping skills.

College and D.C. introduced me to the world, and I was fine after that. But I do know from those high school years exactly what depression is. I had plenty of roller coaster ups and downs in my twenties, but nothing like depression. Nothing like that soul-sucking lethargy of my teens.

Unfortunately, I can’t say the same of the past few years. The allergies are a long story, but basically, a year into my stay in L.A., I started experiencing mysterious symptoms: a sore throat that wouldn’t go away for two months and just an overall lack of energy. It took many trips to various doctors to figure out what was going on. I’d do something that would help for a while, then get flattened by some new mystery ailment.

The infuriating thing was that it was never anything huge — I’d just be sick and tired all the time, because when you’re not breathing well, you’re not sleeping well, and when you’re not sleeping well, you’re not living well. After a while, this changed my identity, from an energetic, enthusiastic person to one who carefully rationed her energy.

This also made me realize that perhaps that enormous physical energy was all that had held depression at bay all through those 18 years between high school and L.A. I kept the demons at bay by constantly chasing after new pursuits, which was great, but what I didn’t know was that if you take away the physical energy, the scaffolding that remains is a house of cards.

Thing is, during the healthy decade of my twenties, I’d taught myself to push through fatigue, frustration, and fear. Athletics are a good example of this; you learn to recognize when to push through pain and when to rest. You know the Nike slogan “Just do it”? Well… yeah. Just do it. And with computers, I’m sure I don’t need to explain how stubbornness pays off. Damn. I pushed hard in my twenties, but I scored a lot of victories, too.

The allergies-and-depression cycle of recent years is a bit hard to explain because I really can’t just blame the allergies. There was a breakup, job angst, and moving to a new apartment. But I’ve coped with all of the above before, and there were good things going on in my life, too. It was all incredibly frustrating because while I definitely recognized the symptoms of depression from that extended period in high school, I could not figure out why it was happening again and why I couldn’t just snap out of it.

As with that period in high school, I never stopped fighting. I never stopped going out and doing what I wanted to do. But I did cut back. There was always this triage of what I had energy for and what my priorities were. In my twenties, I just did it all. These past few years, I hit a point where I couldn’t — I had to make choices.

I’m still convinced that the only reason I snapped out of that depressive period — I can’t truly call it depression, but I felt like I was always close to the edge and could never quite get any distance from it — was that I finally got the allergies under control. Exercise and nutrition are a big part of it, but so were allergy shots and an immune system booster vaccine.

No silver bullets, but basically I feel like myself again after having had to walk through sludge the past three years. I’ve kind of forgotten how to run, but at least I know it’s possible again. (Smiles) I spent three years trying to choose not to be depressed, but the fog refused to lift until I finally got my physical health back.

Did I do it all wrong? Would therapy or medication have gotten me over it sooner? I just don’t know. And I perhaps never will. I’ve been playing these past six months entirely by ear. I do feel safe in the assumption that as long as I have my physical health, my mental health is also safe. But I no longer take it for granted. And I also realize that the madcap coping mechanism of my twenties — constantly sprinting, literally, when it came to ultimate frisbee — probably wouldn’t have lasted forever anyway.

One thing that tends to not work is trying to will yourself into being more organized/disciplined/attentive. That tends to be a recipe for failure, with all the voices in your head yelling at you for being such a lazy slob and a waste of space. What does work is finding clever ways to set things up such that it’s a downhill slide instead of an uphill battle — in essence, coming up with a system that makes the good behavior easy instead of difficult. It’s like the judo trick of using the other person’s momentum for a throw, rather than trying to absorb the force of their blow directly.

slim: Indeed. I also think the kind of support structure or environment you’re talking about is essential. Although, I would rather use words like “encouraged,” “supported,” or “amplified” to describe the qualities afforded by such an environment over “easy.” I think there is a significant difference between something being easy vs. feeling at ease when you’re in relation to something.

Conversation: Empathy & Mastery

On April 3, 2011 at 4:23 p.m., I posted the first draft of what will eventually become the second story of the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making” surrounding my experience in the woodshop. While much has changed since then, I wanted to share with you this edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. The previous installment talks about computers and ethics.

 

joonkoo: This story reminds me of my recent attempts to master bread baking, namely baguettes. I’ve been baking a batch pretty much every other weekend, and one of the most delightful things that happens after you retrieve freshly baked baguettes from the oven is to hear them singing, which is the sound of the crust cracking, and perhaps some moisture interaction going on. I’m nowhere near the level of mastery, but I’m sure there are different sounds that you can distinguish once you become a master baker.

slim: Did you notice the singing from the get-go or did someone point it out to you? If the former, was it highly noticeable or did you actively have to pay attention to it? I don’t think I’ve heard that sound. I’m very curious what it is like.

joonkoo: It’s very noticeable. I noticed it from the beginning. But then I also watched this French guy making a baguette on YouTube, and he was the one who mentioned this singing sound. It’s really the sound of crust cracking, but it makes the bread sound so delicious.

slim: Did you notice the sound after you heard the French guy on YouTube, or before?

joonkoo: I noticed it before, but I didn’t care that much. Afterward, I came to like the sound. But to be honest, there has been no deep understanding of the sound.

slim: See… A question I have about this is: how do we come to understand, and become sensitive to, these subtle nuances? There seem to be certain things that we can proactively notice, and then there are things that other people have to raise our awareness of.

Is this simply a matter of time? If I spent enough time paying attention, would I eventually become sensitive to everything there is to be sensitive about — (smiles) and become miserable? Or are there always going to be things that other people have to raise our awareness of, because there is an infinite number of things, and simply not enough time?

joonkoo: The question you are raising is an excellent one! I haven’t thought about it much, but intuitively, there seems to be a need for both internal enlightenment and external stimulation to learn such nuances.

slim: Indeed.

By the way, last semester I interviewed a child psychologist, who told me that in the beginning, babies learn how to be attached to their mother, and come to understand what it means to love their mother. Then they may feel comfortable with other people who have attributes similar to mother, which allows them to feel safe and comfortable with these other people. Then as they interact with them more, they mature, and start to appreciate the nuances that make these other people different from mother, but love them despite the differences. I found that to be a rather fascinating way to think about maturity. Don’t you think?

joonkoo: The child psychologist was perhaps referring to Piaget’s idea of assimilation and accommodation.16 I have little knowledge in developmental psychology, but you may find it relevant.

Also, when I took cognitive development, I was fascinated not only by Piaget, but also by Vygotsky.17 You might want to check out his theory. My knowledge about these is too shallow to be shared here. (Smiles)

Now, returning to the idea of non-living things telling us something, I experience very similar things when analyzing data — they tell me how they should be analyzed.

slim: Yeah, isn’t that peculiar? There’s a feeling associated with it.

I’ve also heard a firefighter say the house told him to get out, and immediately after he ran out, it crumbled. Perhaps there is a combination of pattern recognition, as well as some genetic reflex that triggers a certain physiological change in our body that results in us feeling as if we’re being told?

joonkoo: Although, I think this is a very literary way of describing the gaining of expertise.

slim: What do you mean that it is a very “literary” way of describing the gaining of expertise?

joonkoo: I think it’s just one possible way of expressing how we get to know things better. I say literary because, unlike other people or other creatures, it can’t be that a piece of wood is telling you something. It’s that you think the wood is telling you something. For example, a baseball player might claim that the ball that left the pitcher’s hand told him to hit it, and it resulted in a home run.

This kind of expertise, often described as intuition, wisdom, or mastery, is something that humans — and other animals — can acquire at an incredible level, as the human brain has an amazing ability to parse statistical and stochastic patterns in the environment.

However, it’s an open question, I think, to ask what it takes to gain expertise in such a variety of domains as understanding other people’s minds (e.g., theory of mind), furniture making, and computer programming. And also whether they are different, and if so, why.

slim: When you say it’s an open question, do you mean that there is no good insight into how one gains expertise as studied by neuroscientists? That it’s such uncharted territory that it’s hard to start a discussion on it?

joonkoo: My question was whether it takes a similar amount of time and effort — if not the same amount — to master things across domains.

I remember reading in some cognitive psychology paper that it takes 10,000 hours of practice to reach the highest end of expertise. This may be an overgeneralization, but it means that it takes a huge amount of time and effort to become an expert.

For example, most of us are experts at looking at faces and extracting facial expressions and emotions — although we know that people with autism lack this ability to some extent. On the other extreme, there are expert computer game players (e.g., Starcraft18). When you look at how they play, it’s simply incredible how fast they make decisions and click the mouse buttons. This is not something that everyone can easily achieve, but some people are experts in this field.

How do the two domains that I raised as examples (face perception vs. Starcraft) differ in terms of their acquisition of expertise? What about wood cutting? What about computer use/programming? Is becoming an expert wood cutter very different from becoming an expert computer user? What are the common mechanisms, and what are the different mechanisms? These were the questions that I had in mind when reading your post.

slim: The question of testing expertise across domains sounds like it would be a challenge in defining the boundaries of each domain, not to mention the standards against which to measure expertise, no?

For example, isn’t facial recognition something that we are hard-wired for? Is it fair to compare that to Starcraft? What would it mean for one to be an expert in facial recognition? Being able to tell the difference between twins you’ve never seen before within a certain amount of time?

joonkoo: Some things are definitely hard-wired and some things are not. Some things presumably use a combination of more hard-wired and less hard-wired systems to achieve expertise.

In facial recognition, there are ways to quantify it experimentally, using behavioral measures of the inversion effect, the composite effect, and such. And recent research has shown that these abilities are pretty heritable.

A few years ago, we also found that the neural basis of facial recognition may be more genetically shaped than neural substrates for processing other visual categories. Now it’s true that I don’t think it is fair to compare facial recognition to Starcraft — one of the reasons being that some things are more hard-wired than others. But I would like to raise different facets of expertise, which might be related to your question about empathizing with objects and what it means to do that in different areas.

an-lon: OK, I’m jumping in about the subject of 10,000 hours, because it’s become simultaneously trendy and misunderstood. The gist of the research is that what makes Mozart or Tiger Woods or any virtuoso great isn’t necessarily inborn talent, but the ability to hone that talent.

The 10,000 hours translates to about a decade, but here’s the key: it is not just any 10,000 hours that makes a person great, it’s 10,000 hours always at the edge of your comfort zone, constantly pushing your boundaries. Most of us simply do not have the capacity to operate at that level. Instead, we spend most of those 10,000 hours simply repeating our old habits. We practice the same thing over and over again. Phenoms19 are those extremely rare individuals who are able to push their boundaries in an extremely focused and deliberate way.

I think Geoff Colvin’s Talent Is Overrated actually covers this better than Gladwell’s Outliers. He calls it “deliberate practice,” and gives many examples, from Jerry Rice to Ben Franklin, of how those so-called geniuses balanced on that knife’s edge over the course of an entire 10,000 hours. One useful model is three concentric circles: comfort zone, learning zone, and panic zone. Only in the learning zone can we make progress. The comfort zone is too easy and the panic zone is too hard.
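
If it helps to see that three-circle model in code, here’s a toy sketch in Python. The thresholds are numbers I invented purely for illustration; nothing in the research pins them down:

    def practice_zone(task_difficulty, current_skill):
        # Ratio of what the task demands to what the practitioner can already do.
        ratio = task_difficulty / current_skill
        if ratio < 0.9:
            return "comfort zone"   # too easy: repetition, not practice
        elif ratio <= 1.3:
            return "learning zone"  # just beyond reach: progress happens here
        else:
            return "panic zone"     # too hard: chaos, no usable feedback

    print(practice_zone(1.0, 1.2))  # comfort zone
    print(practice_zone(1.2, 1.0))  # learning zone
    print(practice_zone(2.0, 1.0))  # panic zone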

Most of us, when we practice, think we’re in the learning zone, when in reality we’re simply performing extra iterations within the comfort zone. Those iterations, no matter how many, do not count towards the 10,000 hours, and do not bring us any closer to a Mozart-level accomplishment. 10,000 hours in a true learning zone is incredibly difficult, which is why there are so few geniuses out there.

I think there are excellent connections to be made between your dialogue with materials and that learning zone. The key here is to leave your comfort zone, but to not venture so far from it that the result is chaos. Inevitably, finding that knife edge requires dialogue, feedback, interaction, and discomfort.

slim: Ah . . . That’s a great way to think about it! 10,000 hours of discomfort.

joonkoo: Yes, as An-Lon described — thanks, An-Lon — it’s not merely the 10,000 hours of work. But still, what is true is that effort and time are necessary for gaining expertise.

Sorry if my comments steered the discussion too much toward the idea of expertise. But I thought this was exactly what you were referring to, once I got a better understanding of what you meant by being able to empathize with things.

slim: Don’t worry about steering the conversation in whatever direction. The purpose of this conversation is to understand what it means to have an empathic conversation, which would naturally require a lot of empathic conversations. (Smiles) I thank you for your patience. I really could not ask for more!

And yes, An-Lon, I do see a correlation between expertise and empathizing across time and memory. The more you empathize with an other across time and memory, the more trust, discipline, and skill you are able to build in relation to them. Whether this is with physical objects, or another human being, the model seems to work equally well.

Here’s a thought: Having a conversation with someone or something who/that has a sense of integrity, or a world view, different from your own — or simply unexpected or unpredictable — is highly uncomfortable. Perhaps the capacity to handle this gap in knowledge or this discomfort — one of the abilities I would think is necessary to stay in the learning zone — is directly related to humility.

joonkoo: Here’s another thought, which is my current research topic. We are all experts at processing words visually — or simply reading — which is to say that we can quickly parse the fine squiggly lines of our mother tongue. There are, in fact, many experimental tricks you can use to demonstrate this expertise in reading letters and words. However, when you think about it, it is hard to believe that our brain is hard-wired to read words.

Script was invented only very recently on an evolutionary time-scale, and most humans were not educated to read and write until more recently still. Yet literate adults are very good at reading. This must be due to extensive training with letters and symbols during development.

While I’m not sure if learning to read during childhood really pushes the boundary and enters the discomfort zone, this may illustrate yet another type of expertise. It’s different from the others because, unlike face recognition, it’s not hard-wired, and unlike becoming an expert in Starcraft, this kind of expertise seems to be achieved relatively easily by the masses.

slim: I want to understand better what you say about our ability to become expert readers. You are saying that, for some reason, we can learn how to read starting at a young age, although it is not something we are hard-wired for. This is an assumption, but a fairly safe one. I think you’re also saying that it is unclear whether this implies that we are in the discomfort zone when we learn to do it, which leads to the question of whether this is a different kind of learning or not. Is that the question?

joonkoo: Well, I don’t want to get too deep into the idea of a discomfort zone. That was just a side note. What I was focusing on was that learning to read — the visual processing of orthographic stimuli, to be precise — and becoming an expert at reading is quite different from becoming an expert in some other domain, because it is an expertise that is, presumably, not based on a hard-wired system, yet acquired by pretty much all of us — except people with dyslexia.20 When you think about it, there are not many things like this. This is, in fact, what makes reading very interesting.

slim: Ohhhhhhh! So you’re distinguishing between learning through the use of hard-wired faculties (i.e., facial recognition) vs. learning through the use of non-hard-wired faculties (i.e., reading). Then you’re asking how much of the learning that happens in a given domain is facilitated by hard-wired capabilities vs. non-hard-wired capabilities, and how their proportion affects the experience of learning. And you’re saying that reading is special, because almost all of it — possibly an overstatement — is not facilitated by hard-wired capabilities. Am I understanding you?

joonkoo: Yes, that would be a straightforward way of saying what I was trying to say. (Smiles) Thank you!

slim: What is an orthographic stimulus? I just tried looking it up, but couldn’t make much sense of the stuff I found.

joonkoo: Oh, “orthographic stimuli” might be a term I made up. (Smiles) Just think of letters and words.

slim: Oh, then by “read” do you simply mean recognizing the letter forms that one sees or do you mean making meaning from their composition into words?

joonkoo: What I mean by “reading” is the visual processing of letters. Reading is a special case because not much of it is hard-wired. In fact, one recent claim is that it goes against some hard-wired neural structure that is designed to carry out other activities more efficiently. That other activity is the mirror-invariant perception of visual features. For example, it takes very little effort to view an image, then view its left-to-right flipped version, and know that the two images are identical. It is argued that this is a kind of basic visual mechanism that is more hard-wired. However, when learning to read, b is not the same as d, even though it is a left-to-right flipped image of b. So to learn that these are different, the mirror-invariant perception needs to be unlearned to a certain extent before you can learn to read.
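
If it helps to see the logic of it, here’s a toy sketch in Python, assuming numpy is available. The little 3x3 “glyphs” are invented for illustration; this says nothing about the actual neural machinery, only about the two comparisons involved:

    import numpy as np

    b = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [1, 1, 0]])
    d = np.fliplr(b)  # d as the left-to-right mirror image of b

    def mirror_invariant_same(x, y):
        # What the more hard-wired system presumably computes.
        return np.array_equal(x, y) or np.array_equal(np.fliplr(x), y)

    def literal_same(x, y):
        # What a reader must learn to compute for letters.
        return np.array_equal(x, y)

    print(mirror_invariant_same(b, d))  # True: b and d "look the same"
    print(literal_same(b, d))           # False: the answer reading needs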

slim: Wait, wait, wait . . . mirror-invariant perception? You mean we’re hard-wired to be able to tell that something is the same regardless of whether it is mirrored or not? Where did that come from? Is it because things in nature are symmetrical?

an-lon: Seriously! Symmetry and mirror-invariant perception? That’s fascinating! What about Asian languages, where there isn’t the b and d problem? I’ve often heard that there’s no such thing as dyslexia among Chinese readers because of that. Is that really true? I don’t suppose there’s a good layman’s book on this subject?

joonkoo: My understanding is that the critical ability in the visual processing of written words is not necessarily restricted to the b vs. d problem, but relates more to discriminating subtle nuances among various visual features. Mirror invariance is just one example. There are many such examples in other languages, for sure.

I don’t know much about dyslexia in the Chinese population. Dyslexia is something that’s a little different from pure impairment in visual processing of words.

Most current theories and findings put the emphasis on the phonological processing of print. Stanislas Dehaene21 is a big name in this kind of research. I’m sure he has written books on these matters for the general public.

High-level vision is a fascinating field for research. Reading, in particular, is intriguing for all the reasons that we discussed so far.

anson: What a lively discussion! Slim, let me just say that your descriptive writing helped me imagine myself going back to a wood workshop, with all the sensations that come with it. I took woodworking classes from grades seven through nine, way back when.

I also think you have touched on a very important topic about truth or what is true in this world. Truth is honest. Truth is simply what is. Truth neither budges nor needs to budge. To go against the truth is like kicking against the goads.

Truth is beautiful and simple. It just remains there, patiently waiting for us to recognize it and embrace it. Truth sets us free. It always teaches us an easier and simpler way. It helps us to be in harmony with this world. A lot of times when we think of truth, we think of the moral categories of right and wrong, but it need not be so. Rather, I think the categories of in-harmony and out-of-tune are a better way of looking at it. Finding truth is simply finding how to be in harmony with everything. Although there is indeed a lot of incredulity toward truth in our postmodern sensibilities, your story reminds us of something basic and simple — whatever is true is honest, and it is what it is. There’s a video on YouTube called “Rhythm” featuring a pastor named Rob Bell22 on this very topic from the Christian perspective. Perhaps you will find it relevant.

slim: I recently came across a book called The Empathic Civilization by an economist named Jeremy Rifkin. In it, he writes that “when we say that we seek the ultimate truth, we are really saying that we seek to know the full extent of how all of our relationships fit together in the grand scheme.” Your comment reminded me of that sentiment, and it resonates.

In the way that he describes it, I believe truth and subjectivity can coexist. If there’s a classic pattern I recognize throughout history, it is that every time someone claims the existence of a dichotomy, it turns out to be not either/or, but both, in some relationship that constantly shifts through time. Just as balance is not some static equilibrium, but an ongoing process that fluctuates, I imagine this is the same.

And although I’m not Christian, I have to say that I enjoyed the video very much. The first thought that came to mind was how different it was from what I had expected a Christian video to be like. But then I realized: what does it even mean to label something a “Christian video”? It’s nothing but a projection of my biased assumptions.

It almost seems like the words “God” and “religion” play a large part in confusing and dividing people. I can tell from first-hand experience how profound the change in one’s own world view can be when words that you once thought you knew get redefined. Perhaps a relevant quote is one from philosopher Emmanuel Levinas23 who said, “Faith is not a question of the existence or nonexistence of God. It is believing that love without reward is valuable.”

——

16 Swiss psychologist Jean Piaget defined assimilation as the integration of external elements into evolving or completed structures, and accommodation as any modification of an assimilatory scheme or structure by the elements it assimilates. He said that assimilation is necessary in that it assures the continuity of structures and the integration of new elements to these structures, whereas accommodation is necessary to permit structural change, the transformation of structures as a function of the new elements encountered. An example of assimilation would be the child sucking on anything they can get their hands on. As they learn to accommodate, they discern what to suck on and what not to. (Encyclopædia Britannica Online)

17 L. S. Vygotsky (November 5, 1896 – June 11, 1934) was a Soviet psychologist who, while working at Moscow’s Institute of Psychology from 1924 to 1934, became a major figure in post-revolutionary Soviet psychology. His theory of signs and their relationship to the development of speech influenced psychologist Jean Piaget. (Encyclopædia Britannica Online)

18 Starcraft is a real-time strategy game for the personal computer. It is produced by Blizzard Entertainment. According to Scientific American, it has been labeled the chess of the 21st century, due to the demands for the pursuit of numerous simultaneous goals, any of which can change in the blink of an eye. (“How a Computer Game is Reinventing the Science of Expertise”)

19 An unusually gifted person (frequently a young sportsperson), a prodigy. (OED Online)

20 Dyslexia is an inability or pronounced difficulty to learn to read or spell, despite otherwise normal intellectual functions. Dyslexia is a chronic neurological disorder that inhibits a person’s ability to recognize and process graphic symbols, particularly those pertaining to language. Primary symptoms include extremely poor reading skills owing to no apparent cause, a tendency to read and write words and letters in reversed sequences, similar reversals of words and letters in the person’s speech, and illegible handwriting. (Encyclopædia Britannica Online)

21 Stanislas Dehaene (born May 12, 1965, in Roubaix, France) is a professor at the Collège de France, who directs the Cognitive Neuroimaging unit of the French National Institute of Health and Medical Research. In his book The Number Sense, he argues that our sense of number is as basic as our perception of color, and that it is hard-wired into the brain. (“Stanislas Dehaene”)

22 Rob Bell is the founding pastor and pastor emeritus of Mars Hill Bible Church. He graduated from Wheaton College in Wheaton, Illinois, and Fuller Theological Seminary in Pasadena, California. He is the author of Love Wins, Velvet Elvis, and Sex God, and is a coauthor of Jesus Wants to Save Christians. He is also featured in the first series of spiritual short films called NOOMA. (“Rob Bell”)

23 Emmanuel Lévinas (December 30, 1905 – December 25, 1995) was a Lithuanian-born French philosopher renowned for his powerful critique of the preeminence of ontology — the philosophical study of being — in the history of Western philosophy, particularly in the work of the German philosopher Martin Heidegger. (Encyclopædia Britannica Online)

Conversation: Ethics & Computers

On March 19, 2011 at 9:28 p.m., I posted the first draft of what will eventually become the Preface in the book Realizing Empathy: An Inquiry Into the Meaning of Making. While much has changed since then, I wanted to share with you this edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which talks about computers and acting.

 

joonkoo: I’m wondering if you should give a clearer definition of the user here. For example, is the user a computer programmer using the computer, or just an ordinary John or Jane using the computer? I understand that knowing the exact mechanics or physiology of the computing system may tremendously expand the user’s perspective, but I also imagine that there would be considerable costs to learning those mechanisms. Would my mother, a middle-aged lady with few digital friends, ever want to know exactly how the processor and memory work just to get less frustrated the next time she opens an Internet browser to receive and view photos that I send?

david: Yes, but what either extremist position about users (ordinary John or Jane vs. super programmer) tends to ignore is the bell curve nature of the problem, which is very similar to my indictment of mainstreaming in u.s. public schools. That is, these need to be seen as somewhat unique user groups requiring distinct, differentiated approaches.

But even if you draw three divisions in the bell curve, which would split say 10/80/10, it is still an enormous design problem. People who use Photoshop are still in a discourse community with considerable depth beyond the average person. It’s even worse at the other end of the spectrum. And this is where I think Peter Lucas,9 founder of MAYA design, and resident genius, absolutely nails it, and my guess is that this is what Slim is getting at with his reference to “physics.”

What Peter says is that you must design for the “lizard brain” first, because it’s the only thing that is consistent across that entire bell curve. (Keep in mind, this is my perception of Pete’s message.) If you learn to do this well, the rest may take care of itself. But fail to get that right, and you either have very little chance, or you’ll be dragging a 200-ton freight train behind you the entire way. That is why our experience with modern technology, even the best of it, falls short.

It’s ironic, because we’ve had the technology for this to be a solved problem for at least a decade, but very little work has directed all that physics and graphics innovation at making data into manipulable objects with “thingness,” much the way Bill Gates describes in “information at your fingertips.” It’s also very similar to the way the osi model10 falls out — meaning that designing for the lizard brain is like designing for the physical layer, while designing for higher-order brain functions moves up the brain stem and can be accounted for in a layered-semantics kind of way.

But I think there’s an element missing here, which is that what you describe about a user’s experience with the computer crashing or slowing down is an entirely qualitative judgment. I don’t like computers that crash or slow down, either, but the experience is arguably the same or worse if I’m driving my car or riding my bicycle. I ran over a piece of metal on my bicycle commute yesterday and was left with a huge gash in my tire, a blowout, and a subsequent wheel lock when the metal piece hit the brake. It could easily have sent my clipped-to-the-pedals self reeling into the river, but this is the experience of an unplanned and unforeseen mechanical failure. Could the bicycle be made to fail more gracefully? Certainly. But at what cost, with what trade-offs, and what marginal utility? Similarly, I had almost the same thing happen with my little Kia a few months ago in almost the same place, and I’d raise exactly the same questions. Kevlar tires, tpms, run-flats, oh sure, again at what cost and what compromise?

The design problem that the computer presents is no different, though I think what tends to happen here is that because computer science is taught from a very narrow perspective, focused on very quantitative problems, we tend to ignore the qualitative ones, and we do that at our users’ peril. There’s also a tendency, unlike in other branches of engineering, not to have much rigor in seeing the trade-offs and compromises in a holistic, systems-thinking kind of way.

I also want computers and software that fail gracefully, and are friendly and usable, but the path there is very long and very hard, and it is still beholden to the laws of physics, no matter how much we think we exist in a software world where the rules no longer apply and we can acquire all of these things at no cost to us (the designers) or them (the users).

slim: I’m not saying that the trouble with computers is worse than what we feel elsewhere. What I’m saying is that it’s time we consider the design of computers from the point of view of ethics, not just usability, functionality, or desirability. Why shouldn’t computer programmers and designers adopt the same kind of ethical stance that architects do, for example?

From what I have gathered taking classes in architecture, there’s a tremendous sense of ethics (not morals) and philosophy of life that goes into educating an architect. I never got any of that as a computer scientist — although, truth be told, whether it would have sunk in at the ripe age of 18 is questionable. But that’s a whole other discussion.

Even in human-centered design, while we talk about designing for human users, we never get deep enough to the heart of what it means to be human. How can we be human-centered, when we don’t even know what it means to be a human? I’m less interested in the computer affording user-friendliness, usability, or graceful failures. That’s a very object-oriented way of looking at this issue. I’m less interested in objects and more interested in relationships. More specifically, I’m interested in finding out how our relationship to the computer can afford the quality of being immersed in an empathic conversation. The kind of quality that, as far as I can tell, makes us become aware of who we are as human beings.

I have nothing against the laws of physics. As a matter of fact, I think the computer should be designed to accept physics as it is. When designers pretend that the laws of physics don’t apply to computers, weird things are bound to happen.

I don’t think physical material is there to make our lives more convenient or inconvenient. It just is. Yet because of our evolutionary history, there’s something embodied within us — and something we come to embody as we mature — that allows us to have an empathic conversation with it. I want the same qualities to be afforded in our interaction with computation.

david: Now we’re getting somewhere! So there are several interesting points I’ll make here. As to your first question regarding architects and computer designers, these comparisons usually fall down because of the chasm between consumer electronics and buildings, structures, etc. There are major differences attributable to factors such as rate of change and stability. Also, classic failures exist in that world, too, though not in the numbers we see with computers failing, but that’s probably a problem of sample size more than anything. To me, Frank Lloyd Wright’s cantilevers at Fallingwater are beautiful, but they’re not robust from an engineering standpoint. Hmm, where have I seen that before?

The problem with education that you describe is exactly what I was alluding to earlier with computer science’s focus on the quantitative, but I think this is a maturity issue. What I mean is that architecture is a very old discipline. Designing computers and software, not so much. That evolution would, in theory, happen in time, but this will take a long time. Imagine a world in which there are bachelor’s degrees in human factors and human-computer interaction (HCI). Oh sure, there might be one or two now, but imagine a world where they are on the same plane as computer science (CS) degrees.

But in order for such large-scale changes to happen, there need to be economic incentives. That’s the biggest piece of the puzzle here, because organizations have no economic incentive to make a radically “better” computer. They’re still making tons of money with “good enough.” I’m hopeful that the rise of mobile computing will give rise to better design, as the competitive forces there are much stronger than in the pc business, just as was true for pcs over the older mainframes and minis.

But what you seem to be getting at here is a philosophy of computing, just as you describe a philosophy of architecture. That is, not one architect, but an entire movement. This is like Sarah Susanka and the “not so big” movement.11 The conditions for that to exist in computing are not quite as clear to me as they are in architecture or lifestyle design. It’s possible with computing as well, but again, the experience has to be so overwhelmingly great as to cause a parallel economic revolution.

I’d question whether the empathic feeling that you describe between two individuals is even possible with machines. I can’t remember whether this was touched on by Ray Kurzweil in The Age of Spiritual Machines12 or Don Norman in Emotional Design.13 I don’t know where empathy or compassion originates in the brain, but I’m pretty sure these are very high-order functions, and they vary individually (i.e., the continuum from the sociopath to the Dalai Lama). Indeed, many would say that empathy and compassion are things we must cultivate within ourselves.

Which brings me to another theme: dogs. Could it be that what you describe is what humans seek in dogs? Dogs are selfless, unconditionally loving, warm, whimsical, carefree — exactly the opposite of “weight of the world” that most adults must grapple with on a daily basis. If the computer could provide a dog-like antidote to adulthood, that would be great. Crazy hard. Which describes the saying, “Anything worth doing…” pretty well.

I suspect that Cynthia Breazeal’s work14 at mit may have some links. Also, David Creswell15 at cmu. He has a publication about transcending self-interest. I think the research questions du jour are these:

What are the determinants of a disposition for empathy in humans? Where is empathy encoded in the brain? Is parity an important part of empathy, or can empathy exist effectively without parity?

The latter would be a requirement for an empathic architectural style to succeed in computing, since visiting an empathic requirement on the user would be tantamount to slavery. Until you know the answers to those questions, any attempt to get computers to behave as part of an empathic conversation will be difficult, if not impossible, because there is no model for empathy other than humans. Either that, or I’m horribly confused about the animal kingdom.

Keep up the good work. This is likely to turn into a hard slog if it hasn’t already.

——

9 Peter Lucas has shaped MAYA as the premier venue for human- and information-centric product design and research. He co-founded MAYA in 1989 to remove disciplinary boundaries that cause technology to be poorly suited to the needs of individuals and society. His research interests lie at the intersection of advanced technology and human capabilities. He is currently developing a distributed device architecture that is designed to scale to nearly unlimited size, depending primarily on market forces to maintain tractability and global coherence. (MAYA Design, “MAYA Design: Peter Lucas”)

10 Different communication requirements necessitate different network solutions, and these different network protocols can create significant problems of compatibility when networks are interconnected with one another. In order to overcome some of these interconnection problems, the open systems interconnection (OSI) model was approved in 1983 as an international standard for communications architecture by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). The OSI model consists of seven layers, each of which is selected to perform a well-defined function at a different level of abstraction. The bottom three layers provide for the timely and correct transfer of data, and the top four ensure that arriving data are recognizable and useful. While all seven layers are usually necessary at each user location, only the bottom three are normally employed at a network node, since nodes are concerned only with timely and correct data transfer from point to point. (Encyclopædia Britannica Online)

11 Through her Not So Big House presentations and book series, Sarah Susanka has argued that the sense of “home” people seek has almost nothing to do with quantity and everything to do with quality. She points out that we feel “at home” in our houses when where we live reflects who we are in our hearts. In her book and presentations about The Not So Big Life, she uses this same set of notions to explain that we can feel “at home” in our lives only when what we do reflects who we truly are. Susanka unveils a process for changing the way we live by fully inhabiting each moment of our lives, and by showing up completely in whatever it is we are doing. (Susanka Studios, 2013, “About Sarah”)

12 Ray Kurzweil is a renowned inventor and an international authority on artificial intelligence. In his book The Age of Spiritual Machines, he offers a framework for envisioning the twenty-first century — an age in which the marriage of human sensitivity and artificial intelligence fundamentally alters and improves the way we live. Kurzweil argues for a future where computers exceed the memory capacity and computational ability of the human brain by the year 2020 (with human-level capabilities not far behind), where we will be in relationships with automated personalities who will be our teachers, companions, and lovers, and where information will be fed straight into our brains along direct neural pathways. (Amazon, 2000)

13 In Emotional Design, Don Norman articulates the profound influence of the feelings that objects evoke, from our willingness to spend thousands of dollars on Gucci bags and Rolex watches, to the impact of emotion on the everyday objects of tomorrow. (Amazon, 2005)

14 Cynthia Breazeal is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology where she founded and directs the Personal Robots Group at the Media Lab. She is a pioneer of social robotics and human robot interaction. (Dr. Cynthia Breazeal, “Biography”)

15 Dr. David Creswell’s research focuses broadly on how the mind and brain influence our physical health and performance. Much of his work examines basic questions about stress and coping, and in understanding how these factors can be modulated through stress reduction interventions. (CMU Psychology Department, “J. David Creswell: CMU Psychology Department”)

Conversation: Acting & Computers

On March 1, 2011 at 10:14 p.m., I posted the first draft of what will eventually be split into the Prologue and the fourth story in the “Making and Empathy” chapter of the book Realizing Empathy: An Inquiry Into the Meaning of Making. The story surrounded my experience observing a friend act the role of Blanche in the play A Streetcar Named Desire. While much has changed since then, I wanted to share with you an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. Click here for the previous installment, which also includes the introduction of the interdisciplinary participants of the conversation.

 

david: I think of it this way: great actors are not really actors, they are “be-ers.” They don’t play the role, they manifest the person encoded in the role, almost to the brink of no return. It’s very dangerous territory and quite a few of them have wound up in mental institutions.

Role-playing implies expectations about reality. What’s great about great acting? The notion that our expectations are upended. If all the actor does is establish believability, they haven’t really succeeded, because at some point they’ve got to go over the edge, or else it would be a very boring presentation.

slim: Yes, your critique of believability not being the goal is significant. I am curious whether that relates back to programming at all. We write code and expect it to produce the same results every time we run it. Not only that, but we also want others who read the code or install it on their computers to believe this to be true as well.

But the reality is that the circumstances in which the program runs change. For example, the hardware running the code may have a different memory capacity, the memory may be filled in different ways, the hard drive may have a different capacity, the power supply may have a different capacity, the processor may differ, and there may be other software running at the same time. The reality is much messier.

Yet programming language designers just keep abstracting all that physical reality away, trying hard to make it believable that the virtual machine is the real machine (e.g., Java).
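
To make that concrete: even from a high-level, virtual-machine language, the physical reality leaks through. Here’s a small sketch in Python using only the standard library; the “same” program reports a different machine, and different timings, wherever and whenever it runs:

    import os, shutil, timeit

    print("logical CPUs:", os.cpu_count())                  # differs per machine
    print("free disk bytes:", shutil.disk_usage(".").free)  # differs per machine

    # Even the "same" computation takes a different amount of time,
    # depending on what else the machine happens to be doing.
    elapsed = timeit.timeit("sum(range(1_000_000))", number=1)
    print("1,000,000 additions took", elapsed, "seconds")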

david: In my opinion, nothing has done more to destroy computer science education in this country than Java.

I’d like to point out further that what lies at the center of actors and musicians generally, and of great artists more broadly, is an ability to be present without expectations of the future or nostalgia for the past. What’s weird — and this gets into the metaphysics of quality à la Pirsig — is that, in my opinion, you can feel this presence, but there is no metric for it. That’s what makes us human.

The Eastern concept of duality rears its ugly head in this story on several occasions, and I would suggest that you might as well label it, and dive into it a little, though it’s a book unto itself. This concept resonates through a lot of what you are saying, meaning that it is another perspective on empathy. The perspective you are presenting is inherently dual, as opposed to moving toward a concept of singularity. Again, metaphysical.

slim: I hesitate to frame this as the Eastern duality. Maybe it’s a choice of words or my misinterpretation of what you mean, but I think of it as circularity as opposed to duality. Imagine a constant movement along a continuous domain. When you stop along the path and look from any given vantage point, you consider what you see to be the other — something outside of yourself. But as you move along that path, as you try to empathize, you eventually feel as if you are that other.

This is what actors do, but all they might have is a piece of script — which is just a bunch of words and some simple directions. So we have to figure out what the script actually means, from our own experiences. We have to first translate it, then interpret it. The same goes for playing from sheet music or learning how to dance. We can’t learn how to dance just by watching how the choreographer’s limbs move. We also have to find out where the invisible force is acting inside the body of the choreographer.

A friend of mine did a beautiful performance piece that speaks to this idea of meaning vs. form. She first filmed herself drawing a circle. Then she projected the film on a surface and filmed herself again, this time tracing her own movement in the film. She repeated this over and over, each time tracing the movements of the previous recording. Each time, the shape of the circle got more and more distorted. Eventually, what got drawn didn’t resemble a circle at all.

In essence, the “why” of the movement gets lost, and all that is left is the superficial. To have been able to draw the circle, you would have had to understand why the first drawing was manifested the way it was. And once you have that understanding, you might not even draw a circle, but paint one instead. That is the kind of understanding that can only result from having tried to empathize.
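
You could even simulate the spirit of her piece in a few lines. A rough sketch in Python, assuming numpy; the noise level, standing in for tracing error, is an arbitrary choice:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.linspace(0, 2 * np.pi, 100)
    points = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # a true circle

    # Each "generation" traces the previous one with a little error.
    for generation in range(20):
        points = points + rng.normal(scale=0.02, size=points.shape)

    # The "why" (a circle of radius 1) is gone; only the drifted form remains.
    radii = np.linalg.norm(points, axis=1)
    print("radius now ranges from", radii.min(), "to", radii.max())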

jeff: Practically speaking, though, I think there are limits to empathy. For some things, you really either need a sliver of experience or some non-obvious knowledge that helps you imagine the other perspective.

slim: I think so, too. What experiences did you have in mind?

jeff: Like parenting. If you have a dog, you probably have developed a different level of patience than someone who has never tried to train a pet — or anything for that matter. Someone who has never been in a serious accident, catastrophe, combat, or other very dangerous situation probably has no idea what it means for “time to slow down” or “it happened so fast.”

A similar problem arises when people talk about spiritual or religious experiences. Some people may regard spirituality as “we are all connected somehow” or “there is some higher order in the universe.” My concept of this is “all things are connected (sort of),” but there’s nothing mystical about it, because we already know we live in the same universe. To feel it, though, to really empathize with it and think about everything that is happening all the time, is a different concept.

When dealing with Christianity, I have always been puzzled by the idea that “God speaks to us all.” Is that what is actually happening in experience, or is it metaphorically true — as in, God is the universe? To even get a grasp on that, I’ve found it never makes any sense to consider Christianity from my perspective, but rather as a box unto itself. And I may also have to consider that it is impossible for me to understand, simply because I am me and am not having those experiences. Perhaps God only speaks to Christians, which would make a ton of sense. And then there is the complication that faith is belief in precisely what does not make sense.

slim: I think a significant part of what you’re talking about has to do with language. Depending on what words you use to describe your experience, it can conjure up different experiences in different people, and unless people are willing to establish a shared language in the context of the conversation, more often than not they are not having a meaningful conversation.

So when you say you don’t have the experience to know what Christian phrases mean, there’s also a chance that you do have the experience, but you don’t use the same words Christians use to refer to it, which causes miscommunication and misunderstanding.

jeff: Maybe. Although, I recently finished an excellent audiobook, Amusing Ourselves to Death by Neil Postman. He argued that the form of a medium affects how we conceptualize the world so deeply that we are often not aware of how it changes us, or of how we differ from people in times past. Can we understand what it might be like to live in a society with no print? Or in pre-television America, where people would pay money to hear authors read their books from lecterns, and where people would debate in a language that resembled printed prose rather than the plain-speak we use today?

Similarly, Facebook and Twitter afford relatively short updates and lend themselves to trivialities, because it’s become so easy and considered non-imposing to spit out snarky one-liners to friends without considering their context (because it is unavailable). And on a blog, when someone is writing a really long reply, they can’t tell whether they’ve jumped across too many topics and lost their readers completely, because the others won’t see the reply until after it’s been posted. So perhaps the rise of writing in society due to the Internet lends itself to an egotistical style of communication, by the very nature of what the medium is.

slim: Well, I don’t find writing to be a particularly egotistical style of communication, but that of course depends on what you mean by “egotistical” and what you mean by “writing.”

I think it’s the space — and I don’t just mean physical or even virtual space, I mean the feeling of space, or the relationship between and among the participants of an interaction — in which the writing is presented that can make a piece of writing egotistical or not.

Do you really think the nature of the medium affords an egotistical style of communication? The reason I ask is that I’ve had in-depth, thought-provoking discussions about a variety of topics that stemmed from just a status update on Facebook. So I’m not yet convinced that the nature of the medium somehow absolutely dictates an egotistical style of communication.

Or maybe what you mean is that it isn’t designed with the goal of facilitating a non-egotistical style of communication, and so it’s likely that many people default to something that takes less effort, which is the “egotistical style”? Am I understanding you?

an-lon: A quick note about Internet communities. The type of negative behavior Jeff described — picking fights and baiting and snark — reminds me a lot of people in their cars on the freeway. It’s as if you’re in a bubble and the usual rules don’t apply. I don’t doubt that some of the asshole drivers out there would be perfectly civil to each other in real life, where feedback is instantaneous and actions don’t go without consequences. Such is the power of anonymity.

That said, the Internet doesn’t have to be that way. In a different thread, I described how one very early Internet community evolved from the fan site of an author who was way ahead of her time: Torey Hayden.1 One thing she had to police on her bulletin boards was language. People were absolutely not allowed to write like Internet chimps.

The reason was that it was an international board, and she insisted that native English speakers use proper grammar, punctuation, and capitalization in order to make it easier for the non-native English speakers to understand. Obviously, the non-native English speakers were just asked to do their best. The point wasn’t to punish people for poor English grammar per se; it was to punish lazy and avoidable misuse of the English language.

I really think the language rule made a huge difference not just in what people said, but in how they thought. It reminded them that they were holding a conversation, and that there were people on the other end who might carry with them a vastly different set of cultural assumptions and values.

The other notable feature of the message board was that it was predominantly female in an era when that was still fairly rare. The result was an extremely active and close-knit community that debated and joked about everything under the sun.

People did use avatars and screen names, and were anonymous in that sense, but in general there wasn’t the kind of mindless hit-and-run you see in, say, the comments section of a New York Times article about politics. I only ever lurked, but for regulars it had a level of addictiveness long before Facebook. Rather sadly, Facebook is where the author recently migrated her site.

Her message board had been a significant time commitment for her to maintain, as it was pretty much the force of her personality and the ground rules she established that kept the board civilized. Eventually, she decided that the technology that had been cutting-edge when she created the board was hitting obsolescence, and that Facebook was an easier way to interact with her fans and keep the same conversations going.

I think what I’m saying is that there’s a bit of a founder effect2 to Internet communities. If the pioneers are assholes, everyone thinks they have a right to be an asshole. If there’s a precedent for civility, newcomers can learn to be civil too.

And there’s also no inherent reason for Facebook to be as shallow as it often is. The only reason I’m even here is that when Slim started posting substantive status updates to Facebook, I started writing substantive replies.

slim: Jeff, I think you’re saying that, when we are left to our default vices, the way Twitter, Facebook, and other social media sites have been designed can direct us toward a certain kind of communication. Some of the reasons include the fact that the design makes content seem context-free, which leads to misunderstandings, with people making assumptions, passing judgments, or being downright malicious for the fun of it, as opposed to contemplating the meaning behind the content or asking questions in order to further understand and empathize. Please correct me if I am misunderstanding.

jeff: All I’m saying is that the medium does affect how we think. Compared to speech, writing allows reflection and revision, which make it easier to achieve coherence. I refer to writing as most people encounter it: through online arguments in essentially public forums where people don’t know each other. A person needs to make some assumptions about the person they are trying to convince — or, more likely, put down. You also can’t confirm your assumptions as you can in person. Most people who argue online don’t follow the argumentative Principle of Charity.3 It’s much harder to be careful and empathic than to be abusive. Being abusive can also be fun.

And by written communication, I was also referring to short updates like Twitter and Facebook. Those are almost inherently egotistical: not necessarily bad or harmful, but egotistical in the sense that the communication has to start with a motive from within. Things appear context-free, and then you get inappropriate snarkiness.

an-lon: To get back to your story in the acting class, though, isn’t this the human condition in a nutshell? When listening, I seek to be transparent. When projecting, I seek to be saturated. But the “I” remains.

slim: I resonate with those pairings. They map directly onto the pairing I have in mind, which is humility and courage. Can you say more? I would love to hear what you have to say about them.

an-lon: Well, exasperatingly, this was always a visual image first, words second, and an analytical dissection, last. The poem below4 is what planted the image in my head.

If thou couldst empty all thyself of self,
Like to a shell dishabited,
Then might He find thee on the Ocean shelf,
And say—“This is not dead,”—
And fill thee with Himself instead.

But thou art all replete with very thou,
And hast such shrewd activity,
That, when He comes, He says—“This is enow
Unto itself—’Twere better let it be:
It is so small and full, there is no room for Me.”

I am not a religious person, and perhaps not spiritual so much as simply omnivorous, but I had an odd sense from the minute Anson introduced himself that the theologian’s viewpoint was important. Perhaps because there are concepts here that can be expressed no other way, except in the language of the sacred and divine? Certainly, the theme of humility comes into play with this poem.

Anyway, the words “replete with very thou” have been part of my internal monologue since forever — whenever I realize I’m getting in my own way of understanding someone else’s viewpoint.

As for being saturated in order to project, exaggeration is the lifeblood of animation. The illusion of life is precisely that — an illusion. Whether the action in question is a walk cycle or a line of dialogue, you can’t just copy what happens in real life. You have to find the essence of what it is, amplify that, and filter out the rest.

Same with drawing caricatures. It’s not enough to simply give a guy a big nose; you really have to find the essence of someone’s facial features and amp that up.

The image in my head was going into Photoshop and cranking up the color saturation of an image, but the metaphor it represents is the exaggeration that is one of the pillars of character animation.

slim: I’m intrigued by what you said about exaggeration and animation.

Is there a degree of exaggeration that is appropriate? In other words, could it be overdone? Where is this need for “amplification” coming from, and where is it going? Is the kind of exaggeration you’re talking about related to generating interest in the eye of the viewer? Or is it functional (i.e., if you don’t exaggerate, it doesn’t look real)? Or all of the above? Is this really about saturation, or contrast?

I know too little about animation to have any insight into this.

an-lon: Exaggeration is one of the 12 Principles of Animation,5 as developed by Disney during their golden age. If you’re curious, I’d highly recommend a look at the first chapter of The Illusion of Life by Frank Thomas and Ollie Johnston. This is pretty much the Bible for anyone studying animation today, but it’s gorgeously illustrated and extremely readable for a general audience as well.

Here’s the intro paragraph to the “Exaggeration” section:

There was some confusion among the animators when Walt first asked for more realism and then criticized the result because it was not exaggerated enough. In Walt’s mind, there was probably no difference. 

He believed in going to the heart of anything and developing the essence of what he found. If a character was to be sad, make him sadder; bright, make him brighter; worried, more worried; wild, make him wilder. 

Some of the artists had felt that “exaggeration” meant a more distorted drawing, or an action so violent it was disturbing. They found they had missed the point. When Walt asked for realism, he wanted a caricature of realism.

In answer to your specific question of “can it be overdone?”: it’s surprisingly difficult to overdo the exaggeration within a drawing if it’s going in the right direction. If the exaggeration is going in a random direction, it looks gross and distorted almost immediately, but if it’s going toward rather than away from the heart of the action, you can get away with a surprising amount of distortion before it falls apart.

A lot of times, we’re given the advice to “push” the pose till it breaks and then back off, rather than inching incrementally toward that imaginary breaking point.
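
In code terms, that “push” is just extrapolation away from a neutral pose. A minimal sketch in Python, assuming numpy; the joint angles and the push amounts are invented for illustration:

    import numpy as np

    neutral = np.array([0.0, 0.0, 0.0])   # neutral joint angles (radians)
    pose    = np.array([0.2, -0.1, 0.4])  # a first, timid attempt at the action

    def push(pose, neutral, amount):
        # Exaggerate by moving further in the direction the pose already goes.
        return neutral + amount * (pose - neutral)

    # Push toward the heart of the action until it "breaks," then back off.
    for amount in (1.0, 1.5, 2.0, 2.5):
        print(amount, push(pose, neutral, amount))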

And “is it functional (i.e., if you don’t exaggerate it doesn’t look real)?” Yes, absolutely. Rotoscoping (tracing) live action reference frame by frame almost always comes out looking strangely dead. It takes a human eye to amplify the important parts and tone down the unimportant parts, even when the goal is to be completely unobtrusive about it.

Exaggeration is in pretty much every frame of any animated movie, 2D or 3D. The 12 principles are all so fundamental, they’re in every shot. Sometimes it’s subtle, as it has to be with the handsome prince or the beautiful princess, and sometimes it’s wildly exaggerated, as with the crazy animal sidekicks, but it really is the lifeblood of animation.

When I was first talking about saturation and contrast, it was just at the level of metaphor. What I’m talking about now, you can see in the roughest of pencil tests without any color.

——

1 Torey is the author of three novels, eight non-fiction books about her experiences working with troubled children and two children’s books. In a writing career that has spanned more than three decades, her books have been worldwide best-sellers, translated into more than 35 languages and appearing as films, stage productions, an opera, and even Kabuki theatre. (Hayden, “The Official Torey Hayden Website”)

“The Official Torey Hayden Website,” Torey Hayden, accessed January 19, 2013, http://www.torey-hayden.com.

2 In genetics, the Founder Principle is a principle whereby a daughter population or migrant population may differ in genetic composition from its parent population because the founders of the daughter population were not a representative sample of the parent population. For example, if only blue-eyed inhabitants of a town whose residents included brown-eyed people decided to found a new town, their descendants would all be blue-eyed. (Encyclopædia Britannica Online)

Encyclopædia Britannica Online, s. v. “Founder Principle,” accessed December 29, 2012, http://www.britannica.com/EBchecked/topic/214776/founder-principle.

3 The Principle of Charity is a methodological presumption made in seeking to understand a point of view, whereby we seek to understand that view in its strongest, most persuasive form before subjecting it to evaluation. While suspending our own beliefs, we seek a sympathetic understanding of the new idea or ideas. We assume for the moment that the new ideas are true, even though our initial reaction is to disagree; we seek to tolerate ambiguity for the larger aim of understanding ideas which might prove useful and helpful. Emphasis is placed on seeking to understand rather than on seeking contradictions or difficulties. We seek to understand the ideas in their most persuasive form and actively attempt to resolve contradictions. If more than one view is presented, we choose the one that appears the most cogent. (Oriental Philosophy, “The Principle of Charity”)

“The Principle of Charity,” accessed January 19, 2013, http://philosophy.lander.edu/oriental/charity.html.

4 The poem is called “Indwelling” by T. E. Brown. (Brown, “Indwelling”)

“Indwelling,” accessed December 28, 2012, http://www.isle-of-man.com/manxnotebook/people/writers/teb/p082b.htm.

5 The 12 Basic Principles of Animation: Squash and stretch / Anticipation / Staging / Straight ahead action and pose to pose / Follow through and overlapping action / Slow in and slow out / Arcs / Secondary action / Timing / Exaggeration / Solid drawing / Appeal. (Thomas, 1981, 47–69)

Conversation: Empathy & Computers

On February 22, 2011 at 4:48 p.m., I set up a private blog, where I could regularly engage in conversation with a group of friends from across disciplines. The process was to work as follows: I would post a piece of writing on the blog, they would comment on it, and then, based on their comments, I would not only revise the writing, but also feel encouraged and inspired to keep writing.

The outcome of those conversations was the book Realizing Empathy: An Inquiry Into the Meaning of Making, which was successfully kickstarted on March 12, 2012. While much has changed since then, with the participants’ permission, I would like to share with you an edited version of several of those conversations, regrouped and rearranged for clarity and relevance. Here is the first installment.

 

an-lon: Chan1, can you start with a round of introductions? Who are they, the people reading this blog? How do they know you?

me: Oh, of course! Yes, let’s do that. Perhaps we can say what we do — not what our titles are — what our interests are, and where we are coming from. I think this will do wonders in enriching the conversation.

And before I forget, I just wanted to sincerely thank you all for participating in this journey of book writing. The past two-and-a-half years have been a time of divergence. It was time I desperately needed to get away from my previous environment, to find new ways of thinking.

In retrospect, the question I was ultimately after was the question of what makes us human. Much like the pioneers of computer science, I started wanting to understand better how “thinking” works, what “consciousness” is, how we “learn,” how we “understand” something, and what intelligence means. But unlike some of these pioneers, I was not interested in asking how we can abstract the “humanness” from ourselves, to disembody it, so as to put it in some other body, and to debate whether that other being is also human. Honestly, I can’t see why this line of questioning is valuable. What this kind of disembodied attempt at manifesting humanness can do, at best, is superficially mimic or simulate what one may mistakenly believe a human being to be, without any real understanding of what it actually means to be human.

At the same time, I had a deep attachment to the computer. In my professional life, I have spent a significant portion of my career programming computers. In the process, I have often found myself totally immersed in thinking from the computer’s perspective rather than my own. I would dare say that I empathize with it. Yes, I know that most of us think we can only empathize with living beings. But from my experience at the Rhode Island School of Design (RISD), I’m starting to question this assumption, because I’ve discovered many similarities between the process of trying to empathize with human beings and that of trying to make physical things. And that’s precisely the vantage point from which I will start to write.

As I wrote in my e-mail invitation to you, my goal is to write in the company of a handful of people I feel comfortable sharing my ideas with. I then hope to get feedback, revise, and eventually integrate everything into my thesis book, which I will produce for graduation.

I would absolutely love it if you all could be candid in giving me feedback. I will inevitably make some strong statements that may seem controversial. I expect counter-arguments and tangential references. I firmly believe that it is from the experience of contrasts, of seemingly unrelated or different experiences, that new ways of thinking can arise. With that, I now pass the mic to you all. Thanks!

joonkoo: I thank you for inviting me to this very interesting forum of discussion. I don’t know how many are here, but let me start with the mic. I’m Joonkoo Park. I know Seung Chan hyung2 from high school. I went to the International School of Beijing (ISB) from 1994 to 1997. I’ve always been proud of Seung Chan for his free-minded spirit. He does what he wants to do, and he does it well. So I was really glad to hear that he had decided to study fine arts, and it looks like that was a real success. I was glad to join this discussion, since I wanted to do anything I could to help him organize his thoughts and formulate ideas.

I study the cognitive neuroscience of high-level vision. I am in the final stage of my Ph.D. at the University of Michigan. Most of my work centers on the neural organization and mechanisms of the recognition of objects such as faces, letters, and numbers; however, I’m getting more interested in numerical cognition, and I am planning to study the neural basis of number sense during my postdoc.

That said, I’m trained as an experimentalist, and my interest is pretty focused — as many Ph.D. students are either forced or trained to be. But any question related to how the mind works triggers my interest, and I wish to be of help by bringing some neuroscientific and psychological ideas into Seung Chan’s thesis and the discussion.

david: My name is David Watson. I like to have “an attitude of gratitude,” though I think it gets lost in a lot of what I do or say. Slim knows this from working with me for a couple of years at MAYA Design in Pittsburgh. I have a deep need to understand reality in its purest form, to seek the highest levels of production quality even when they don’t matter to anyone but me, and to achieve symmetry in literally everything.

They say that at the root of engineering is this “truth seeking” and you’ll see elements of that here from me. I apologize in advance for my forthrightness. I have a way of speaking that’ll make you think I think I’ve known you for 20 years.

I like the way Slim has defined the introduction, because while I work in software, I don’t like to define anyone singularly, certainly not myself, and I like to think of this more as “creative mediums of expression.” I’m a musician, a photographer, a skier, a cyclist, a runner, a thinker, a reader, and a writer. And I’m glad we’re not all the same. Nice to meet you all. Cheers.

jeff: Hi, my name is Jeff Wong. Slim and I are intellectual buddies from Pittsburgh. He was a working man on the South Side of Pittsburgh at MAYA Design, and I was a Ph.D. student in human-computer interaction (HCI) at Carnegie Mellon University (CMU).

I will be bringing my background in computer science and cognitive science. Slim says I can bring some perspective from theoretical computer science. Also, I know some of the conceptual history of artificial intelligence (AI), psychology, and cognitive science; have some familiarity with psychiatry, phenomenology, and tidbits of religion and spirituality; and have some dabbling experience in philosophical thinking. I don’t know how deep my knowledge is in these areas, but I think I can at least point to relevant ideas and prior explorations that have happened in them.

anson: Hi everyone. My name is Anson Ann. I really thank Slim for inviting me to this group. I feel so honored to be able to take part in this conversation. Reading the background stories, expertise, and interests of you folks really humbles me. It does start to feel a bit like a mini TED here!

Well, I first met Slim at CMU back in 1995. Although we were not in the same department — he was in computer science, and I was in electrical and computer engineering — our dorms were close to each other, and we had a common interest: music and guitars. We used to take a cab together to a local musical instrument store and drool over those guitars and music gadgets. We first looked at guitars together; then he gradually moved on to DJing, and I moved to synthesizers. Anyway, no matter what we do now, I think both of us will always have a musician inside us.

After CMU, I worked at BBN Technologies — now Raytheon — in Boston as a speech software specialist/scientist. We customized speech software solutions for the U.S. government intelligence community, enabling them to do speech recognition, machine translation, and information extraction on Arabic, Chinese, Spanish, and English broadcast news from all over the world. Much of my work involved language model training, pattern recognition, signal processing, and some human-computer interaction.

Then about six years ago, a big turning point happened in my life. I sensed a calling from God to become a pastor. Just like Slim, who took on the challenge of switching from science to fine arts, I quit my software job and enrolled in a theological school. It was quite a big stretch for me, for studying the humanities requires a very different temperament. I realized that my mind, which was trained for engineering, precision, and comprehensiveness, wasn’t ready to deal with the ambiguity, complexity, tension, and paradoxes that you often find in history, literature, religion, and philosophy.

I have just finished my studies, and now I’m an Anglican priest pastoring at the Anglican Network Church of the Good Shepherd in Vancouver, Canada. And as a pastor, what I hope to contribute to this conversation is my anthropological understanding (i.e., of what it means to be human).

First, I am going to be upfront about my faith and convictions. I will speak from a Christian perspective and understanding of what it means to be human, for I believe anthropology stems from theology. A core doctrine in my faith tradition is that of the Holy Trinity: that God, who revealed himself to us in history, is known to be a three-in-one relationship, a unity-in-diversity, a dynamic-yet-unchanging entity, a harmonious dance in reciprocal love that overflows with creativity and creational power.

Since the entire universe is created and sustained by a relational being, the very core of our being and reality is supposed to be relational. And as we human beings are made in the image of this relational God, so we are also ontologically relational.

We are made to relate, to empathize, to love and be loved. There is something intrinsic about human beings that we want to understand and be understood. I believe the torture of imprisonment is not just lacking freedom, but more about losing the ability to relate to others and the outside world. Relational beings unable to relate are just like fish out of water.

Anyway, I know not all of you are religious or spiritual, so you may or may not share my perspective, but I just hope I can, in some way, help inspire Chan to continue exploring this topic of empathy. I’m still very much a geek at heart, so I also hope I can contribute to the other aspects of this conversation about computers, programming, and human-computer interaction.

I look forward to seeing Chan write more. Because empathy is about listening first, isn’t it? I hope we won’t flood his comments area with too many of our own ideas, but let him express what he wants to say first, then respond to him accordingly.

an-lon: Nice to meet you, I’m An-Lon Chen. As for me, I also went to the International School of Beijing (ISB) with Chan for a semester of high school in 1994; I was a senior, he was a junior, and we both worked on the yearbook together. That would be the end of the story right there, except ISBers tend to be a close-knit bunch, and it seems like for the past decade and a half we’ve always had each other’s contact information via some mailing list or another without ever actually interacting personally.

Let’s just say that my relevance to this project is that over the years I’ve gone from comparative literature to computer science to user interface design to computer graphics to character animation. I am now a full-time student at AnimationMentor. Prior to that I was at DreamWorks working on Shrek 4, and before that on Mummy 3 at Digital Domain.

Perhaps more than anyone else I know, I’ve had to approach computers and computer science as much from an anthropological perspective as from a technical one — deciphering a subculture and a jargon in order to pass as a native.

I’m a geek at heart, with my fair share of the stereotypical social inadequacies. I’m pretty sure I was born socially tone-deaf, and only as an adult began to figure out the nuances of interacting with others. That said, pretty much every big break in my unlikely computer science career has come from possessing some unusual degree of empathy. First, the amateur exercise in anthropology that drew me to geek culture; second, the turn towards user interface development, which brought me to LA and the film/VFX industry; third, the current foray into character animation, which is all about convincing audiences that dead pixels can walk, talk, laugh, and cry.

Point being, I care personally about Chan’s topic: computers and empathy. I have no earthly idea where this blog is going or how it’s going to become a thesis, but I’m following it because at least some aspects of it touch on things that I, too, have been wondering all my life.

But now, can we start from the top? Empathy? With computers?

me: Yes, with computers.

an-lon: The best developer is inevitably quite the computer whisperer, of course, but I would have never actually thought of that rapport as empathy.

me: What would you have thought of it as then?

an-lon: Two synonyms came to mind. One was grokking, the other was acculturation. Grokking, because, well, I’m a sci-fi geek and I like having a word to express deep understanding and truly “getting it.” Both my parents are scientists, and while they are both fairly technically savvy — my dad has written Fortran and assembly language code and my mom uses Photoshop and Illustrator for graphics and charts in scientific journals — I’m not really sure either of them has ever grokked computers; their mindsets are a little too unyielding to ever completely get on the computer’s wavelength, so they often end up fighting the computer in unnecessary ways.

For example, if my mom gets a PowerPoint concept stuck in her head, she invariably has trouble figuring out the Photoshop equivalent because she’s speaking a different language without even knowing it. Both my parents are extremely good within limited contexts, but don’t have the particular empathy required to troubleshoot — learning a new domain comes slowly.

Wait…

Ha ha ha ha ha ha!

I just looked up “grok” on Wikipedia. It says, to grok is “to share the same reality or line of thinking with another physical or conceptual entity. Author Robert A. Heinlein coined the term in his best-selling 1961 book Stranger in a Strange Land. In Heinlein’s view, grokking is the intermingling of intelligence that necessarily affects both the observer and the observed.”

But, it gets even better.

It also says that the Oxford English Dictionary defines grok as “to understand intuitively or by empathy; to establish rapport with” and “to empathize or communicate sympathetically (with); also, to experience enjoyment.”

Drum roll, loud banging of cymbals. (Smiles) I actually wasn’t as far away from your wavelength as I thought. I had turned to Wikipedia as a joke, but it did hit a nerve. What I took away from it is that it’s never a purely intellectual exercise to really truly understand something, be it an immediate piece of code or an underlying computer science concept.

me: Exactly! There is an inextricable link between empathy and the act of learning that is non-obvious to most people. And I think it’s non-obvious because we’re used to separating the cognitive from the emotional, or the mind from the body. For example, I’ve heard many people say that empathizing is not the same as understanding. On the surface, there’s nothing significant about that statement. Of course they’re not the same. If they were, why would we have two separate words? But it becomes significant once you realize that what people mean is that understanding is inferior to, or shallower than, empathizing. Well, that depends on how you define the two words. It is only so if you include inaccurate understandings as a legitimate form of understanding. I would argue that an accurate understanding of an other cannot be had without having tried to empathize with them.

anson: That reminds me of people telling me how I have an extraordinary amount of patience in front of a computer. I don’t know if it’s because I know what the computer is doing inside, but I can be patient even if it’s slow or stalling. And also, whenever my dad encounters a computer problem, he always asks, “How stupid is this computer! Why can’t it do this and that?” and I always feel like I’m defending the computer, saying, “It just can’t… this is what it can and cannot do. Don’t be too hard on it. Be patient. It’s still crunching numbers. There’s nothing you can do except reboot the machine. And here’s a way to work around its limitations. Yada yada yada.”

Isn’t that also related to empathy?

me: Yes, I would definitely say so. I don’t think you can have patience with the computer if you cannot empathize with it; we’re much more likely to lose patience when we cannot. Just think of a time when you got the hourglass or the beach ball for no apparent reason. That’s very difficult to empathize with. That’s like interacting with someone who is too pissed off to tell you what is going on.

I think the best example of people trying to empathize with the computer is when they’re debugging. When we are debugging a software program, we are trying to figure out why the program is behaving the way it is, and our heads fill up with nothing but our understanding of the state and configuration of the program, not to mention the various hardware mechanisms like memory, the processor, and external storage. What we’re doing is trying to think as if we were the computer.
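To make that concrete, here is a minimal sketch of what I mean, assuming a toy Python function I am making up purely for illustration. Finding even a trivial bug means replaying the machine’s state in your head, line by line:

    # A tiny buggy function: we want the average of the non-zero values.
    def average_nonzero(values):
        total = 0
        count = 0
        for v in values:
            if v != 0:
                total += v
            count += 1  # bug: counts zeros too; it belongs inside the if
        return total / count

    # Debugging is simulating the machine's state in your head:
    # for [2, 0, 4], total goes 0, 2, 2, 6 while count goes 1, 2, 3,
    # so the function returns 6 / 3 = 2.0 instead of the intended 3.0.
    print(average_nonzero([2, 0, 4]))

The fix itself is trivial; the work is in holding the computer’s perspective (its totals, its counters, its order of operations) long enough to see where it diverges from your intent.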

an-lon: I couldn’t agree more about the debugging. Perhaps we use our empathetic faculties for debugging because our brain cells weren’t equipped to access that deeper cloud of intuition any other way. Our brains have been wired for millennia to interact with fellow humans and, as far as I’m concerned, it’s an extremely useful act of hijacking to tap that empathic cloud in order to outsmart a machine.

And just to be clear, I’m not talking about anthropomorphizing the machine, I’m talking about accessing our own preexisting, well-developed resource of empathic faculties to interact with it. Oddly, I anthropomorphize just about everything under the sun: teddy bears, disappearing keys, food that’s been in the fridge for too long. But I’ve never been tempted to anthropomorphize computers.

me: Didn’t you say there were two words that came to mind? What was the other one besides “grokking”?

an-lon: Oh, acculturation.

me: Why did that come to mind?

an-lon: Because computers and programming languages are created by humans. The magic that bridges the abstraction of 0’s and 1’s with human neurons is language. Windows and mice are metaphors—picture a window in a house, picture a mouse running around on its four little paws, now marvel at the metaphor that got us where we are today! Zipping, unzipping, bugs: these are all metaphors. Computer concepts are all abstract until we give them names and map them to something we can understand. Even the act of stepping through code using a debugger is a concession to a human need for a linear story line.

Learning how to program is similar to language acquisition, not because computer languages are anything like natural languages — allowing C++ to fulfill a language requirement like French or Spanish would make no sense — but because learning how to write code is very much a process of acculturation. Just as it’s pretty much impossible, or at the very least pointless, to learn a natural language without a cultural context, it’s impossible to write code well without absorbing its many subcultures. Best practices, conventions, idioms, and design patterns are all cultural constructs within a human community, not semantic ones within the machine.
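As a hypothetical illustration (a small Python sketch of my own, not anything the machine cares about): both versions below mean exactly the same thing to the computer, but only one of them reads as native to the human community around the language.

    # Both versions compute the same list of squares. The machine cannot
    # tell them apart; the subculture certainly can.
    numbers = [1, 2, 3, 4]

    # Speaking Python "with an accent": correct, but not idiomatic.
    squares = []
    i = 0
    while i < len(numbers):
        squares.append(numbers[i] * numbers[i])
        i = i + 1

    # The phrasing a "native speaker" of Python would expect.
    squares = [n * n for n in numbers]

The difference is purely cultural: the second form marks you as a member of the community, even though the machine executes both just fine.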

Put in that light, the idea of empathy with computers is staggeringly mundane — we’re talking about forming a rapport with the community of their very human creators, not a sentient and malevolent HAL. And yet, my rapport with, say, Linus Torvalds goes through multiple layers of translation, not the least of which is through the machine and back. And if I were to go out and write a Linux patch, I’d damn well better have empathy with the Linux operating system so I can design something appropriate… and yet it doesn’t feel like real empathy. It’s not real the way a spoken word is real, a heartbeat is real, the touch of a hand is real. And yet, to anyone who’s ever gone into a programming trance and been absorbed to the point of forgetting to eat, sleep, or shower, it’s profoundly real, perhaps more real than reality. It’s an emotional state as much as a physical one.

david: Slim, how are you going to treat this subject in a secular fashion? It’s going to be very difficult, because so much of it leans toward feeling and emotion as opposed to logic, science, neurons, etc. It barks up the quality/quantity tree that is split down the center and very divisive.

me: Hmmm… I never thought of this as non-secular. Metaphysical, yes. Are you equating the two?

david: Well, just as Zen and the Art of Motorcycle Maintenance is not about Zen, I don’t think what you’re after is about empathy. It’s deeper than that. To empathize with the computer is to anthropomorphize. To anthropomorphize is to visit our expectations on reality. Computers aren’t humans and they never will be. Can man make a better human? Probably. Will that human have better distinguishing human characteristics? No. In the very same sense that James Howard Kunstler argued that architecture was moving toward a loss of a sense of place — which I agree with — robotics goes down the same boring path, most likely because it has no other choice. If it didn’t, we’d be defining an engine of individuation, and I’m pretty sure nobody is doing that. And that’s the miracle of humanness.

jeff: Speaking of humanness: historically, artificial intelligence referred to abilities that we thought computers were incapable of. However, as solutions to problems began to appear, these abilities — like chess playing and language recognition — were no longer considered intelligence. How computer programs tackle intelligent tasks is always different from how humans actually do them; sometimes better or more thorough, but at other times seemingly stupid.

me: What do you mean?

jeff: Why does AI trip up on “special cases”? Because the way we program intelligence is by making problems formal (i.e., accessible to the computer). When problems are formalized, they can be solved by rules. Where rules don’t quite work, we have rules for selecting rules, or rules for creating the rules to select rules with (i.e., machine learning). I think we approach problems this way because our way of accessing how we think and communicating it to other people is within the framework of rationality. I think rationality is primarily a structure for thinking about thinking.
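To sketch what I mean with a toy example (my own made-up Python, not any real AI system): you formalize the problem as rules, add a rule for selecting among the rules, and the “special cases” are whatever slips through.

    # Formalizing a problem as rules: each rule pairs a test with a label.
    rules = [
        (lambda msg: "free" in msg, "spam"),
        (lambda msg: "meeting" in msg, "not spam"),
    ]

    # A rule for selecting rules: the first rule that matches wins.
    def classify(msg):
        for test, label in rules:
            if test(msg):
                return label
        return "unknown"  # the special cases the formalization trips up on

    print(classify("free lunch at the meeting"))  # "spam": rule order decides
    print(classify("how are you?"))               # "unknown": no rule applies

Machine learning just pushes the same move up a level: instead of hand-writing the rules, you write rules for creating them from data.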

How we think isn’t quite rational. It’s more like rational++. When people appear rational, we can empathize with them. Irrational people, too, but not as easily. Part of anger is not knowing why. If you know how the machine works, you can be angry at the situation, but it’s not the machine’s fault; it’s whatever is broken or not working inside. Does anger require a thing to be angry at? If you’re angry at the thing and you don’t know how it works, it might as well have a mind of its own. It makes no difference to you. Consider the University of Texas clock tower shooter. You can imagine being angry at him for shooting someone you know, but then you find out he had a tumor and had requested an autopsy in a note he left at his house with his dead family. Somehow that kind of situation provokes a bit less anger, because you know why.

Understanding the mechanism changes how we think about the thing. For example, I remember being excited about taking an AI class and learning the magic. But it turned out to be a whole bunch of hacks — or so it appeared. It’s no longer magic when you know how it works. Now, I don’t quite understand what you mean by empathizing with computers. What you’re doing when programming is simulating your program on your model of the programming language runtime. Yes, it’s sort of like empathy, but I thought empathy was being able to feel what another feels. We empathize with real people based on our concepts and experiences of other people. This is the theory of mind,3 which autistic people lack. For that, they are alone in the universe, because other people simply don’t exist. So you need models of other people to empathize with them. Still, empathy is a feeling about feeling. I still don’t get empathizing with computers. My current idea of what you might be thinking seems wrong.

joonkoo: I second Jeff on this point. I don’t quite understand what it means to empathize with computers, either. I don’t necessarily think that you need to have answers to all these questions now. Some of them are certainly empirical questions, worth investigating further. But I would like to have a better grasp of your idea of empathizing with computers, and I still can’t quite get it. Perhaps it will be explained in your future writings?

me: Yes, it will. Although, I am starting to realize that I’ll be up against a lot of criticism, because some people may be equating the theory of mind with empathy.

Allow me to first write about my experience with physical materials, and I hope that will better explain why I think empathy is not exclusive to human relationships.

(Smiles) But most importantly—my God!—I can’t tell you all how much I love this tightly-knit discussion environment!

——

1 Some of my old friends call me Chan. It is the latter half of my full first name, Seung Chan. I had originally adopted the name to accommodate those who could not pronounce the first half of my name. But I have since abandoned this name because I feel that it robs me of my full identity. Those who cannot pronounce my name call me Slim — made from concatenating the first letter of my first name to my last name — which is a new identity I have constructed since my arrival in the U.S. How did the name come about? It was my e-mail username in college.

2 Hyung means older brother in the Korean language.

3 Theory of mind is the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge—to oneself and others and to understand that others have beliefs, desires, and intentions that are different from one’s own. It is typically assumed that others have minds by analogy with one’s own, and based on the reciprocal nature of social interaction, as observed in joint attention, the functional use of language, and understanding of others’ emotions and actions. (Premack and Woodruff, 1978, 515–526) (Baron-Cohen, 1991, 233–251) (Bruner, 1981, 41–56)

Premack, D. G.; Woodruff, G. (1978). “Does the Chimpanzee Have a Theory of Mind?” Behavioral and Brain Sciences 1 (4): 515–526.

Baron-Cohen, S. (1991). “Precursors to a Theory of Mind: Understanding Attention in Others.” In A. Whiten (Ed.), Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mindreading (pp. 233–251). Oxford: Basil Blackwell.

Bruner, J. S. (1981). “Intention in the Structure of Action and Interaction.” In L. P. Lipsitt & C. K. Rovee-Collier (Eds.), Advances in Infancy Research, Vol. 1 (pp. 41–56). Norwood, NJ: Ablex Publishing Corporation.

Gordon, R. M. (1996). “‘Radical’ Simulationism.” In P. Carruthers & P. K. Smith (Eds.), Theories of Theories of Mind (pp. 59–74). Cambridge: Cambridge University Press.