Micro-Innovation by Melissa at US Airways

Yesterday, I was at the Ithaca airport on my way back from a day trip working with the executive MBA students at Cornell. As soon as I got to the airport, I tried to check myself in at the kiosk. For some odd reason, the kiosk wasn’t able to find my reservation. So this lady (pictured) at US Airways helped look my reservation up manually.


And now get this.

After looking up my reservation, she said:

“Ha… I see that you’re taking a stop at Philly, then another one at Charlotte before getting into Providence. Would you like to take a direct flight to Providence from Philly instead? That’ll shave you a few hours.”

And I was like, “Uh… sure?”

At first, I couldn’t believe my ears. In my head, I’m thinking “Am I getting charged extra for this or what?” But, no. Through the magic of her typing, she just made it happen.

After receiving the new flight assignment, I felt that something was off, but I wasn’t sure what. I walked through security, and sat down to process my emotions. After several minutes, I slowly came to the realization that what I was experiencing was an overwhelming sense of gratitude.

Several minutes had already passed since I had uttered two reactionary words, “thank you,” to this lady. It felt awkward to go back and bring it up again. I tried to distract myself for a few minutes, but the feeling wouldn’t subside. So I finally decided I had to do something. I stood up, walked up to her, and told her that I would really like to mark this event as a special moment. I asked if we could take a picture together. She seemed surprised, and probably thought that I was an oddball, which I can totally understand. Thankfully she agreed, and we smiled together at the camera before snapping a picture. I thanked her once again.

I don’t know of a time in my recent flight history where I felt such a sense of gratitude toward someone behind the ticket counter. Flying back and forth over the course of an overnight trip can be tiring. The last thing you want to do is spend more time in the plane or waiting in the airport, especially after experiencing several hours of delays the day before. What she did was not only surprising, but also meaningful and valuable to me. It was a great example of something I would consider a micro-innovation. The kind that can only arise from realizing empathy. Thank you once again, dear lady whose name I failed to get. I will not forget the experience you made possible today.

May you stay beautiful,

Seung Chan Lim

UPDATE: I tweeted this story to US Airways, and they promised to let her manager know. They just made my day!


MORE UPDATE: Corporate communications at Piedmont Airlines (operating for US Airways) has contacted me to let me know that Melissa (I now know her name!) and her boss have seen it. Love the internet. Love it.


Embodying and Understanding

Some people seem to think that if we can merely understand someone we can empathize with them.

This is not true.

In Korean, we often say “이해는 되는데 납득은 안돼.” Literally translated, this means “I can understand, but I cannot let the understanding in.” Figuratively translated, this means “I can understand, but I cannot empathize.”

To explore what this means, we need to talk about the difference between understanding and embodying.

Understanding something implies that we have a model, which we can use to articulate the underlying structures and relations of that thing.1

Embodying does not automatically connote understanding.

Take walking as an example. Most of us have never bothered to understand walking, but it’s something we have embodied nonetheless. That said, if you spent a few minutes right now, you could probably arrive at an understanding—regardless of how inaccurate, imprecise, or limited it may be.2 And once you do, you will be able to articulate your model in some way, be it using words, images, or physical demonstrations.

Understanding does not connote embodying, either.

Let’s say you spent a whole year reading a book that articulates a model of how snowboarding works. Even if you have become a master at articulating this model, when you actually get on a snowboard, chances are good that you’ll fall flat on your ass. That’s because you have yet to embody it.

Given this, one can say that even if you understand, you can have a difficult time empathizing, or letting the understanding in [to your body], if you are unable to augment that understanding with an embodied experience.

Say you’re engaged in an empathic conversation with another person through words. Chances are good that you’ll start developing an understanding based solely on what you can directly perceive from them—their words, tone of voice, facial expressions, gestures, etc.

But an understanding of the other is not all you’ve got.

You also have embodied experiences from your own past. If you’re able to appropriately use these past experiences as references, you will be able to augment the understanding you are developing. In other words, by relating an experience you have embodied in the past with what the other is articulating, you can start to appreciate and resonate with the qualitative aspect of what the other is articulating. This can help you empathize, even if you do not understand.

The catch, however, is that you have to relate it to an experience that is qualitatively similar enough instead of merely superficially similar enough. For example, if you try to empathize with someone’s experience of going to school, you may fail to empathize simply by relating it to your own experience of going to school. That’s because while the two experiences may be superficially similar enough, they are not necessarily qualitatively similar enough. In other words, to empathize you may have to augment your understanding with an experience that has nothing to do with going to school, but is nonetheless qualitatively similar enough.

——

1 Wiggins, Grant P., and Jay McTighe. Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development, 2005.

2 It’s tempting to define understanding as being intrinsically accurate and complete, but this is overly ambitious, given that understandings often prove to be inaccurate, imprecise, or limited only in hindsight. For example, geocentricity was a model proven to be inaccurate, and Newton’s theory of gravity was proven to be accurate and precise only within a certain range of scale, and therefore limited. By allowing room for error or incompleteness, we can more precisely refer to these understandings as inaccurate, imprecise, or incomplete in hindsight, instead of having to retroactively refer to them as not understandings.

3 For a related discussion surrounding various kinds of understanding, check out Dr. John Biggs’s research on the SOLO taxonomy.

Subjective Model of Self and Other

When we empathize, we feel as if we are connected or at one. To capture this subjective quality of the event, the traditional model of a static self in relation to an other is inadequate. A more useful model will be one that accommodates a dynamic way of thinking about the relationship.

One way to model this is as follows.

Say we represented our conscious and sub-conscious processing of stimuli as the center of an arbitrary plane. We can then arrange the various sources of stimuli as dots surrounding this center, and place them near or far depending on how much we can empathize with them at any particular moment in time. In other words, the closer to the center the dot is, the more we perceive them as being connected with the self. The farther out from the center the dot is, the less we perceive them as being connected with the self.

In this model, if we’re experiencing flow while playing a musical instrument, the instrument would be placed close to the center. The same would happen if we were up on a mountain, immersed in nature, feeling at one with it.

On the other hand, if we cannot understand the thoughts we’re having, those thoughts will be placed far away from the center.

Thus, an implication of this model is that much of what we traditionally consider to be intrinsically connected with the “self,” can, at times, be an “other” with which we cannot empathize. Moreover, what we traditionally consider to be an “other”, can, at times, be connected with the “self.”

In other words, what constitutes the self and the other can change from moment to moment across time, as our relationship to the various sources of stimuli changes from moment to moment.
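The spatial model above can be sketched in code, purely as an illustration. The class name, example sources, and threshold below are my own assumptions, not part of the original model:

```python
# A hypothetical sketch of the plane described above: the center represents
# our conscious and sub-conscious processing, and each source of stimuli
# sits at some distance from it that can change from moment to moment.

class StimulusSource:
    def __init__(self, name, distance):
        self.name = name
        self.distance = distance  # smaller = more empathized-with right now

def feels_like_self(source, threshold=1.0):
    # Whether a source currently feels connected with the "self" depends
    # only on its momentary distance, not on any fixed label.
    return source.distance < threshold

instrument = StimulusSource("musical instrument", 0.2)  # playing in flow
stray_thought = StimulusSource("thought I cannot understand", 3.0)

print(feels_like_self(instrument))     # True: an "other" felt as self
print(feels_like_self(stray_thought))  # False: part of "self" felt as other
```

The point the sketch makes is that self/other is an output of the model, computed fresh at each moment, rather than a property attached to the source itself.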

In light of this, I will now modify the definition of empathizing as follows:

Empathizing is a state of feeling as if we are connected or at one. Not empathizing is a state of feeling as if we are disconnected or at odds with an “other.” These feelings may last a brief moment or a prolonged duration of time and the “other” may be anything we can perceive as an object, be it a human being, an art object, or an idea.

Two Ways We Realize Empathy

We now have a definition of empathy and empathizing as follows:

Empathizing is a state of feeling as if we are connected or at one. Not empathizing is a state of feeling as if we are disconnected or at odds with an “other.” These feelings may last a brief moment or a prolonged duration of time and the “other” may be anything we can perceive as an object, be it a human being, an art object, or an idea.

Empathy is a word invented to explain what makes it possible for us to move from not empathizing to empathizing.

Realizing empathy is a moment when we have a realization that moves us from not empathizing to empathizing. We know when we experience this, because there is a resonance we feel that moves us, even if a tiny bit. With the experience, we may also find ourselves nodding our heads or making one of three exclamations: Ah ha! Ah… or Ha ha ha!1 To be clear, this is not to say that these behaviors are the experience. It’s simply to say that the experience often inspires these behaviors.

There are two ways in which we can realize empathy. One is for us to realize it instantly, without effort. The other is for us to make an effort that makes it more likely that we will realize empathy.

Think of a friend you’ve known for a long time. Think of a time when without her saying a single word, you were able to tell precisely what she was thinking, feeling, wanting, or needing. Maybe you finished her sentences or said exactly the thing that she needed to hear when she needed to hear it. You “just knew.” Those are all examples of moments when you realized empathy instantly.

Now imagine encountering someone you were unfamiliar with. Let’s also say that she was difficult to understand. How would that feel? Awkward? Confused? Frustrated? Uncomfortable? If so, it is unlikely that your empathy will realize in relation to her, even if you had the will. Why? Because there exists a kind of conflict in the relationship that will provide resistance to the process.

You see, the basic feeling that precedes feelings like awkwardness, confusion, frustration, and discomfort is that of dissonance: a feeling you get when you’re faced with two or more seemingly conflicting ideas, viewpoints, beliefs, values, or emotions.2 “What kind of conflict are you talking about?” you may ask. I’m talking about the conflict between your expectations of the other, and the other as they are. If you expect the other to be social, and they are not, you may feel awkward. If you expect the other to explain things a certain way, and they don’t, you may feel confused. If you expect the other to respond a certain way to your actions, and they don’t, you may feel frustrated. All these are examples of conflicts. It’s just that we rarely think of them as such. Why? Because we want the world to work the way we expect it to.

Some seem to think this is because we’re intrinsically self-centered.3 I have a slightly different take, which is that the necessary and sufficient conditions were not fulfilled at the moment of interface to facilitate an empathic conversation between us and that other. It’s unrealistic to expect (irony intended) anyone to be able to realize empathy in relation to an unfamiliar other at a moment’s notice without such facilitation.

——

1 Koestler, Arthur. The Act of Creation. London: Pan Books, 1969.

2 Festinger, Leon. A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press, 1985.

3 Lorenz, Konrad. On Aggression. Hoboken: Routledge, 2002.

Understanding is Never Perfect

Some seem to think that empathizing requires that we understand an “other” 100%.

First of all, as I mentioned previously, sometimes we can empathize without any understanding whatsoever.

Second of all, I do not know of any way we can objectively quantify and measure understanding. Until such means become available, we cannot claim 100% accuracy and precision.

Finally, while accuracy and precision are important, I’m not sure such absolute achievement is necessary or even desired.1 A far more useful measure would be to consider whether our understanding is sufficient for a particular context.

Let us revisit the definition of empathizing I put forth previously.

Empathizing is an experience, where we feel as if we are connected or at one instead of as if we are disconnected or at odds.

Now, the keyword here is “as if,” because what we are dealing with is a relational yet subjective experience. The experience alone does not empower us to objectively claim anything about the other. Interestingly enough, neither can they. All we have are two related yet subjective experiences.

Take the story of my conversation with my bipolar friend. In that situation, it was important that I tried to understand my friend before I could empathize with her. As a result, I did my best to verify my understanding of her to achieve greater accuracy. Did I understand her 100%? I don’t know.

All I did was understand her enough.

Why was that enough? Because she felt understood. How do I know that? Because she said “thank you for understanding me.” I’d say that was sufficient for that particular context.2

Is there more I could understand that would improve the accuracy and precision with which I understand her? Sure. There will always be more.3

Humility is a virtue when it comes to understanding anything or anyone. The history of science is marked by significant paradigm shifts showing that previous understandings were either plain wrong or incomplete. Understanding is best framed as an ongoing pursuit.

——

1 The more we think we “know” an other, or that we have “fully” understood or embodied them, the more likely it is for us to stop wanting or trying to learn about them further. This means that our empathy in relation to them will be lowered. If we value the continued improvement of accuracy and precision with which we empathize with an other, it is far more desirable to frame the act of realizing empathy as an ongoing pursuit rather than a finite goal to be reached.

Renowned psychologist Carl Rogers also mentions the “as if” condition in his work, in order to caution therapists not to get enveloped in, or overwhelmed by, the other’s emotions, which would not be helpful to either party.

2 This is called intersubjective verifiability.

3 Take the example I gave on my last post about parent-child relationships. Let’s say we tweak the example to where the child thinks she does understand her parents. There is still a good chance that after a decade or so, she will realize that in fact she did not. At least not as accurately and as precisely as she imagined. Without the experience her parents had, she had no choice but to miss some of the more nuanced and subtle meaning behind their words.

Cannot Empathize? Doesn’t Mean You Lack Empathy.

Previously, I defined empathizing and not empathizing as follows:

Empathizing is to be in a state of feeling as if we are connected or at one. Not empathizing is to be in a state of feeling as if we are disconnected or at odds with an “other.” These feelings may last a brief moment or a prolonged duration of time, and the “other” may be either a piece of artwork or another person.

Let us now dive into the part about the “moment” or the significance of “duration” in this definition.

Simply put, in a span of, say, 5 minutes, we may continuously move back and forth between these two states: empathizing and not empathizing. There’s no saying how long we stay in which state. Maybe we empathize for 4 minutes, then not empathize for 1. Maybe we empathize for 2 minutes, not empathize for the next 30 seconds, then empathize for the next 1 minute, and so on. We cannot predict.

We can also stay stuck in one state for a long time.

Have you ever had an experience, where you, as a teenager, could not empathize with your parents, because you could not understand the advice they were giving you?

I have.

But have you also had an experience where a decade or so passed by and you could empathize with them, because you could finally understand why they were giving you the advice?1

This has happened to me many times over.2

If this is something you have also experienced, it shouldn’t be a surprise when I say that depending on which “other” you’re trying to empathize with (e.g., your parents), through what medium (e.g., the advice they gave you in spoken words), and in what context (e.g., yourself at the particular moment3 of hearing the advice), it may be more or less difficult to empathize.

You see, contrary to popular belief, empathy is not something we either have lots of or lack.4 Even if we had empathy and wanted to empathize, there are times we simply cannot.

Given our definitions for not empathizing and empathizing, let us now remember the definition I put forth for empathy.

Empathy is a word invented to explain what makes it possible for us to move from not empathizing to empathizing.

As you can see, I model empathy as a possibility. In light of what we’ve talked about in this article, a possibility that gets realized if and only if a set of conditions are fulfilled at the particular moment of interface between self and other.

In other words, if you find it easy to empathize with someone, it’s not merely because you have empathy, but because the necessary and sufficient conditions have been fulfilled in that moment of interface with that other, through the medium used. On the other hand, if you did not find it easy, it’s not necessarily because you lack empathy, but may be because the required conditions have not been fulfilled.

What I began articulating in my book is my first attempt at answering the question “What are these conditions?”

Let us remind ourselves that for each and every one of us, there will always be moments when we will be unable to empathize with a certain other, through a certain medium, in a certain context. This does not necessarily make us lacking in empathy. It may simply mean that our empathy cannot always realize instantly, as if it were an involuntary reflex. Sometimes steps need to be taken before we can realize empathy.

——

1 The classic example is advice about parenting, but I don’t yet have kids, so I don’t feel qualified to use that as an example.

2 Usually in the form of an “a-ha moment.”

3  This is not only about the limited knowledge and experience I had as a teenager, but also being in the mindset of not wanting to hear what my parents had to say or being distracted at that particular moment thinking about other things while my parents were speaking to me.

4 To this day, there is no objective, accurate, and universal way to quantify empathy, so as to be able to definitively claim that someone has lots of, or is lacking in, empathy.

Conversation: Language & Vision

On May 14, 2011 at 9:46 p.m., I posted the first draft of what will eventually become the third story of the “Making and Empathy” chapter in the book Realizing Empathy: An Inquiry Into the Meaning of Making surrounding my experience with poster design. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: I have always pondered whether it is possible for those born blind, deaf, and mute, to think or dream of abstract concepts that they have never encountered.

Whenever I have to process complex thoughts, I hear a voice inside my head, speaking a language with grammar that helps me understand and sort things out. How about babies? Having yet to acquire a language, how do they think properly? Do they just act on their instincts and feelings? What about grown-ups who do not have the ability to put thoughts together into sentences with proper grammar?

Some say that language is the key to our ability to process abstract thought and hence develop intelligence. I think there are many who are mentally and physically disabled, but can still think and understand things like other people. Language seems to be able to boost our ability to organize thoughts and abstract ideas, but it seems like we, humans, have a much more basic way of perceiving, feeling, and understanding the world around us, a fundamental layer of communication beneath our language that everyone has the innate ability to access. I am obviously speaking of what I do not understand, but maybe someone who does can shed light on these issues.

slim: I don’t know, either. But it occurs to me that there may be a set of perceptual triggers that encapsulate the fundamental and primitive qualities of perception, probably pre-language with the potential to be widely shared. Why couldn’t we imagine an interaction paradigm based exclusively on those triggers? After that is established, one could layer the symbolic and gestural semantics on top of it as needed.

joonkoo: These questions are very much related to the origin of knowledge, and the nature vs. nurture debate. I’m a blank slate when it comes to language, but I can point you to a few studies in the domain of vision and number processing. Just be aware that I may be over-generalizing.

The human visual cortex29 is organized in a category-selective manner. For example, the lateral part of the occipital cortex is activated when a person is viewing living things in general. On the other hand, the medial part of the cortex is activated when viewing non-living things. This category-specific organization can be driven by experience over development, but it can also be somewhat hard-wired. One study looked at the patterns of neural activity in congenitally30 blind subjects, and they showed the same kind of neural activation patterns in response to these categories of objects, even when the objects were presented auditorily. This study suggests that our visual experience is not necessarily the only critical factor that gives rise to the functional organization of our brain — at least in that context.

slim: When you say living vs. non-living, is a plant living or non-living? Is this related to how autistic people behave differently in relation to non-living vs. living things?

joonkoo: I don’t recall exactly how they categorized living vs. non-living in their study, but one thing I do think is true is that living vs. non-living is probably just one of many ways that things in nature can naturally be divided, probably confounded with many other ways of categorizing things. For example, it may well be natural vs. man-made things that the brain really cares about. To me, the precise categorization of these things isn’t really important. What’s more interesting is that the visual cortex does not necessarily require visual input for its functional organization.

slim: If the visual cortex doesn’t require visual input for its function, it sounds like that would be a rather remarkable statement when it comes to our categorization of cortices into visual vs. others, no? Am I understanding this correctly?

joonkoo: Not exactly. Here’s another way to think about it. In normal development, the visual cortex is designed to process visual sensory information — based on the anatomical fact. But it’s used differently when it lacks visual input for any unexpected reason. What’s interesting is that even if the visual cortex is putatively31 doing something different in these congenitally blind people, there seems to be a set of universal principles that govern the functional organization of the visual cortex.

When these participants hear a living thing, for example, they have to bring up some mental image of that thing, which is probably not visual imagery, yet their visual cortex works the same way as it does in a sighted participant.

slim: Oh, whoa.

So what you’re saying is that when blind people hear something, it triggers a mental image in their head, which uses the visual cortex, although the imagery they bring up is not visual?

joonkoo: Yes, my guess is that it’s probably a mixture of auditory and other multimodal imagery. But yes, their visual cortex works similarly to that of other subjects considered to be normal.

I guess this can be described as a form of plasticity. But I think this is much more profound than plasticity within a domain or modality (e.g., after losing a finger, the motor cortex that had been associated with that finger is now used for other fingers).

slim: When you say plasticity, I’m guessing it is a situation where a certain part of your body takes on a different role when what it was originally associated with is no longer available?

joonkoo: Yes. Evidence for brain plasticity is very cool.

To Anson’s point, however, this isn’t to say that the experience of abstract or symbolic thought is unimportant. Perhaps a more relevant story comes from a study that investigates number sense in native Amazonians,32 who lack words for most numbers. Through the use of numeric symbols, we have little problem expressing arbitrary quantities. On the other hand, the Amazonians have only one, two, and many. Given this, they are pretty good at approximate arithmetic, even with numbers far beyond their naming range, but their performance on exact arithmetic tasks was poor. In fact, they failed to understand that n + 1 is the immediate successor of n.

anson: Would a relevant topic be why the Golden Ratio33 is universally pleasing to the eyes? It seems to indicate that there’s something common to human perception.

joonkoo: Yes, the Golden Ratio is interesting! In fact, there seem to be a lot of links between the biological system and math. One thing that I am more familiar with is the Power Law34 and γ, the Euler constant.35

Many of the psychophysical models are based on this constant and the natural log, and I would love to understand this more as well.

The definition of γ seems to be quite similar to neuronal firing patterns (e.g., long-term potentiation), and I speculate that all this fancy mathematics, such as γ, π, and the Golden Ratio, may be driven by some of our intrinsic biological properties. I’m talking too much about things that I don’t fully understand. This should be a question for a computational biologist.

———-

29 The back area of the brain concerned with vision makes up the entire occipital lobe and the posterior parts of the temporal and parietal lobes. The visual cortex, also called the striate cortex, is on the medial side of the occipital lobe and is surrounded by the secondary visual area. This area is sensitive to the position and orientation of edges, the direction and speed of movement of objects in the visual field, and stereoscopic depth, brightness, and color; these aspects combine to produce visual perception. It is at this level that the impulses from the separate eyes meet at common cortical neurons, or nerve cells, so that when the discharges in single cortical neurons are recorded, it is usual to find that they respond to light falling in one or the other eye. It is probable that when the retinal messages have reached this level of the central nervous system, and not before, the human subject becomes aware of the visual stimulus, since destruction of the area causes absolute blindness in man. (Encyclopædia Britannica Online)

30 Existing or dating from one’s birth, belonging to one from birth, born with one. (OED Online)

31 That is commonly believed to be such; reputed, supposed; imagined; postulated, hypothetical. (OED Online)

32 CNRS and INSERM researchers (Pierre Pica, Cathy Lemer, Véronique Izard and Stanislas Dehaene) studied the example of the Mundurucus Indians from Brazilian Amazonia, whose vocabulary includes number words only up to four or five. Tests performed over several months among this population show that the Mundurucus cannot readily perform “simple” mathematical operations with exact quantities, but their ability to use approximate numbers is comparable to our own.

This research, published in the October 15, 2004, issue of the journal Science, suggests that the human species’ capacity for approximate arithmetic is independent of language, whereas precise computation seems to be part of the technological inventions that vary largely from one population to the next. (“Cognition and Arithmetic Capability”)

33 Also known as the golden section, golden mean, or divine proportion, in mathematics, the irrational number (1 + √5)/2, often denoted by the Greek letters τ or ϕ, and approximately equal to 1.618. (Encyclopædia Britannica Online)

34 A relationship between two quantities such that the magnitude of one is proportional to a fixed power of the magnitude of the other. (OED Online) See also: Van Mersbergen, Audrey M., “Rhetorical Prototypes in Architecture: Measuring the Acropolis with a Philosophical Polemic,” Communication Quarterly, Vol. 46, No. 2, 1998, pp. 194–213.

35 The constant that is the limit of the sum 1 + ½ + … + 1/ n − log n as n tends to infinity, approximately equal to 0.577215665 (it is not yet known whether this number is rational or irrational). (OED Online)
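As a quick numerical illustration of the definition in footnote 35 (my own sketch, not part of the original footnote), the partial sums converge to γ slowly but visibly:

```python
import math

# Approximate the Euler constant gamma as H_n - log(n),
# where H_n = 1 + 1/2 + ... + 1/n is the n-th harmonic number.
# The error of this estimate shrinks roughly like 1/(2n).
def euler_gamma_estimate(n):
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return harmonic - math.log(n)

print(round(euler_gamma_estimate(1_000_000), 6))  # prints 0.577216
```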

Conversation: Respect & Integrity

On April 17, 2011 at 5:38 p.m., I posted the first draft of what will eventually become the first story in the “Making and Empathy” chapter in the book “Realizing Empathy: An Inquiry Into the Meaning of Making” surrounding my experience with glass. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: When I was studying hermeneutics,28 I remember my professor saying, “Every question presupposes you know something about the answer.”

For example, you ask, “What can I do to tear a piece of glass?” The question presupposes that you need to do something to achieve that effect. I don’t know much about glass-blowing, but as far as I know, you take advantage of gravity, right? Sometimes you don’t have to do anything, but just let gravity and the natural decline in temperature take care of matters.

The kind of question we bring to the table often shapes the kind of answer we expect to hear. Everyone sees through a pair of tinted glasses. It is inevitable, but it is important for us to be aware of that influence and bias and try to compensate for it. That is something people in the field of hermeneutics and epistemology have helped us to understand.

Does this make sense to you?

slim: Yes it does.

And that’s such a great point about the use of gravity in tearing glass. You’re absolutely right. I did think that I had to do something to tear glass. It is truly mind-boggling to realize that there’s no end to how many biases we may be operating under at any given moment.

You mentioning gravity also reminds me of an experience I had in my modern dance class.

One day, we were asked to roll down a small hill. The first time I did, I was somewhat apprehensive. I had never rolled down a hill before — at least not as an adult — and I was afraid that I might get hurt. So in an attempt to prevent that from happening, I tried to become very conscious of how I rolled, so I could slow down and control where I was going. I wasn’t very successful, though.

I remember the roll being rather rough.

But the second time I did it, I was abruptly dragged away by a friend of mine who showed up out of nowhere and said, “Let’s go!” Before I knew it, I was back up the hill, throwing myself down again. What is interesting about this second time is that I distinctly remember how free my body felt. Maybe it’s because I didn’t have any time to think, but it felt as if I were gliding down the hill. It felt very smooth.

It was just me, the ground, and gravity working together in collaboration. In retrospect, I was biased toward assuming that to not get hurt I had to become conscious of the roll, so as to try and control every aspect of it. When in fact it was better to relax.

an-lon: Funny story. I was at a going-away party for one of my DreamWorks friends, and another coworker brought some homebrew and a beer bong. At the height of everyone’s drunkenness, Josh, the bringer of beer, tore into Moiz, the guy who was leaving, over something involving semicolons. It took me a while to piece together the story, accompanied as it was by much shouting and laughter, but from what I gather, Moiz had managed to put a semicolon at the end of every single line of his Python code, and Josh just couldn’t believe it. He said, “We never put it in the best practices manual because we never imagined anyone would do something so goddamn stupid!”

Point being, in computer languages, people often write code in one language as if it were another — importing irrelevant habits, conventions, and design patterns. The semicolon thing was funny because the vehemence of the rant far outweighed the magnitude of the infraction, but I’ve seen many examples of this over the course of my programming lifetime, and I’m sure it has cost companies millions of dollars’ worth of programmer time, simply because the code ends up being incomprehensible.
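For readers who don’t write Python, the infraction is easy to show. Python happens to tolerate trailing semicolons, so a C-trained hand can end every statement with one and never see an error. This is a hypothetical sketch, not Moiz’s actual code:

```python
# Legal Python, written with C habits: every statement ends in a
# semicolon. The interpreter silently accepts it, so nothing ever
# flags the habit.
def total(prices):
    result = 0;
    for p in prices:
        result += p;
    return result;

# Idiomatic Python drops the semicolons (and often the loop entirely).
def total_idiomatic(prices):
    return sum(prices)

print(total([1, 2, 3]))            # prints 6
print(total_idiomatic([1, 2, 3]))  # prints 6
```

Both versions behave identically; the only casualty is readability for anyone fluent in the host language, which is exactly the point about imported conventions.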

slim: Yeah, I remember it taking me quite a bit of effort to go from programming in C to programming in Prolog. Even now I haven’t done much functional programming, so I bet the way I write functional programs is not as respectful of the functional principles as it could be. As a matter of fact, it may not be that much better than my disrespect for the material integrity of glass.

an-lon: By the way, your comment about respecting the integrity of physical materials reminds me of this old joke of a fictional radio conversation between a U.S. Navy aircraft carrier and the Canadian authorities off the coast.

U.S. Ship: Please divert your course 0.5 degrees to the south to avoid a collision.

Canadian Coast Guard: Recommend you divert your course 15 degrees to the south to avoid a collision.

Ship: This is the captain of a u.s. navy ship. I say again, divert your course.

Coast Guard: No. I say again: you divert your course!

Ship: This is the aircraft carrier uss coral sea. We are a large warship of the u.s. navy. Divert your course now!

Coast Guard: This is a lighthouse. Your call.

slim: Ha ha ha ha ha ha! Respect the lighthouse, dammit!

an-lon: Also, here’s a quote that expresses my view of integrity, written by Mary MacCracken, a teacher of emotionally disturbed children. She’s explaining why she tries to teach reading to children who are so lacking in other life skills that it might be argued learning to read is beside the point.

“The other teachers thought I was somewhat ambitious. They were kind and encouraging, but it did not have the same importance for them as it did for me. And yet, and yet, if what I loved and wished to teach was reading, I had as much right to teach that as potato-printing. In the children’s world of violent emotion, where everything continually changes, I thought it would be satisfying for them to know that some things remain constant. A C is a C both today and tomorrow — and C-A-T remains “cat” through tears and violence.”

For some reason, that quote has stayed with me for a long time. To me, that’s integrity: that C-A-T spells cat today, tomorrow, and yesterday.

And incidentally, that’s what Microsoft has never figured out — that users hate having things change out from under them for no good reason. Remember those stupid menus whose contents shift depending on how frequently you access each menu item? Whose brilliant idea was that? Are there any users out there who actually like that feature, instead of merely tolerating it because they don’t know how to turn it off? Features like that create a vicious cycle: users become afraid of the computer, so Microsoft assumes they’re idiots and dumbs things down even further, making the computer even more unpredictable and irrational, until there’s no rhyme or reason whatsoever behind what it deigns to display. Say what you will about Mac fans, but Windows and OS X are still light years apart in terms of actually respecting the user.

And here we cycle back to the initial conundrum: how to reconcile that austere landscape of programming abstractions with our emotional, embodied, messy selves; selves so much in need of human connection that we perhaps see everything through that lens.

Here’s a slightly loony example that I have tried and failed many times to write down. Around the time I was learning object-oriented programming, sometime in my early twenties, my cousin went through a love-life crisis.

The guy she was dating had a photo of an ex-girlfriend on his refrigerator, but none of my cousin — only her business card. They somehow got into a fight over this. She went home, and, partly out of pique — but mostly to amuse herself — she got out a photo of every single one of her ex-boyfriends, put those photos on the fridge, and added the business card of the current guy. Then she forgot about it and went about her daily business. Of course, you can predict the rest of the story. The new guy came over unexpectedly and saw the photos, they had another fight, and they finally broke it off.

My cousin tried to explain to me later that the problem wasn’t so much the photos and business cards and exes. It was that her boyfriend just didn’t get that she does quirky things like that for her own amusement. What she did wasn’t intended as a message and wasn’t intended to be seen, it was just an expression of her own personal loopiness. The fact that he couldn’t relate to her silliness was as much the deal-breaker as the original photo of his ex.

At the time, we were both fresh out of college and lamenting the closeness of college friendships. The guy in question was older, maybe in his thirties, and he really just didn’t seem to get it.

And here is where I went into the spiel I have never been able to replicate since. Because I had just been reading about object-oriented programming, the thought in my head was that in college, we gave out pointers left and right to each other’s internal data because we just didn’t know better. All the joy and sorrow and drama was there for any close friend to read, write, and modify. As we got older, we learned that this is a rather dangerous way to live, and developed more sophisticated class interfaces — getters and setters for that internal data, if you will. The guy in my cousin’s story seemed to live by those getters and setters, and was appalled when my cousin inadvertently handed him a pointer.

Here’s the part of the story I have never been able to replicate: I told my cousin all that without mentioning object-oriented programming once. I used a fair bit of object-oriented terminology, but only the words whose meanings were either immediately clear from the context or already in common usage — handle and interface, for example. She immediately understood what I was trying to say, and added that the word “handle” was a particularly poignant metaphor. When we’re young, we freely give loved ones a handle to our inner selves, but in adulthood, we set up barriers and only let people in at predetermined checkpoints, according to predetermined conventions. As adults, we give out handles to only a very few, and those already in possession of a handle can always come back from a previous life to haunt us. We interact with the rest of humanity via an increasingly intricate set of interfaces. By now, I possess a much deeper and richer set of interfaces and protocols than I did in my early twenties, so I can share a great deal more of myself without fear of being scribbled on. But I still don’t hand out raw pointers very often — the vulnerability is too much for me, and the responsibility too great for the other person.
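The pointer-versus-interface analogy maps onto actual code. Python has no raw pointers, so the closest stand-in here is returning the mutable internal object itself versus returning a copy behind a getter and setter; the class names and data below are a hypothetical illustration, not anything from the conversation:

```python
class OpenBook:
    """College-era self: anyone holding a reference can read, write,
    and modify the internal data directly."""
    def __init__(self):
        self.feelings = ["joy", "sorrow", "drama"]  # raw, shared state


class GuardedSelf:
    """Adult self: internal data stays private; access goes through
    a deliberate interface."""
    def __init__(self):
        self._feelings = ["joy", "sorrow", "drama"]

    def get_feelings(self):
        return list(self._feelings)  # hand out a copy, not the original

    def add_feeling(self, feeling):
        if not isinstance(feeling, str):  # a predetermined checkpoint
            raise TypeError("feelings must be strings")
        self._feelings.append(feeling)


young = OpenBook()
handle = young.feelings        # a "raw pointer" to the inside
handle.append("scribbled!")    # a close friend just modified your state
print(young.feelings)          # the internal list now ends in "scribbled!"

adult = GuardedSelf()
snapshot = adult.get_feelings()
snapshot.append("scribbled!")  # only the copy changes
print(adult.get_feelings())    # the internal list is untouched
```

The design difference is exactly the one in the story: `OpenBook` trusts whoever holds the handle, while `GuardedSelf` only lets writes in through a checkpoint it controls.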

Back to computers and HCI. I am surprised sometimes by how often I use computer terminology in daily life among non-programmers and get away with it. You don’t have to be a programmer to understand me when I complain that an instruction manual is spaghetti, or that my memory of a particular song got scribbled on by someone else’s more recent cover of it. The reason these metaphors work, of course, is that spaghetti and scribble are essentially round-tripping as metaphors — from daily life to computer science and then back to daily life. First, the English words were co-opted to convey specific computer science concepts — spaghetti code is code that is unreadable because it tangles in a million different directions, and to scribble on a memory location is to overwrite data you’re not supposed to overwrite — and then I re-co-opted them back into English, to express frustration at the unreadability of the instruction manual or to lament that my memory of the original song has been tarnished.
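Of the two metaphors, scribble is hard to show without C-level memory access, but spaghetti shows up in any language: control flow that tangles back on itself until no single strand can be followed. A small, hypothetical before-and-after sketch:

```python
# Spaghetti: flags and nested escapes make the reader trace every
# strand just to learn there were only three cases all along.
def classify_spaghetti(n):
    done = False
    result = None
    while not done:
        if n < 0:
            result = "negative"
            done = True
        else:
            if n == 0:
                result = "zero"
                done = True
            else:
                result = "positive"
                done = True
    return result

# The same logic untangled: one strand, readable top to bottom.
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

print(classify_spaghetti(-3))  # prints "negative"
print(classify(-3))            # prints "negative"
```

Both functions compute the same thing; only the untangled one can be read the way C-A-T can be read.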

My point here is that computer science is rich in human meaning precisely because we choose human metaphors to express otherwise abstract concepts. My analogy between object-oriented programming and human relations is surprisingly salient because object-oriented programming, at some level, had to come from human experience first. What is architecture? It was the Sistine Chapel before it was the Darwin operating system. Have you seen the ted talk by Brené Brown on the power of vulnerability? It’s what got me thinking about our longing for human connection.

slim: I’m really taken by your use of pointers and getters/setters in the context of relationships. I’ve never thought of it that way, and it’s a rather interesting way of thinking about it. There’s so much in there that I’m having trouble responding in a coherent way.

And yes, I’ve watched that Brené Brown talk numerous times in the past. It’s a very good one, and it is consistent with my experience making physical things.

——

28 The art or science of interpretation, especially of Scripture. Commonly distinguished from exegesis or practical exposition. (OED Online)

Conversation: Trust & Not Expecting

On April 8, 2011 at 2:16 p.m., I posted the first draft of what would eventually become the last story in the “Making and Empathy” chapter of the book “Realizing Empathy: An Inquiry Into the Meaning of Making,” recounting my experience in the foundation studio. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance.

 

anson: For me, painting requires this exact kind of courage you are talking about. I find painting very difficult, because I always need to get things right the first time around. I always need to know what to do precisely to get to the end result I want. I would use very fine brushes to get all the details of the eyes and the hair from the get-go. I would pick the exact color of paint that matches the photo. I need to get everything right with painting just one layer.

But when I saw videos24 of skilled painters at work, they didn’t seem to care if their paintings looked awful in the beginning. They begin with a very rough outline and use very broad strokes. They keep painting over it again and again, refining and adjusting constantly, adding more and more detail layer by layer. It is this constant refinement that makes their paintings possible, and also realistic.

To be courageous in the midst of uncertainty, trusting that the process — or the journey — will work itself out, is something I don’t think I learned from our computer science education.

slim: Having gone through a portion of the risd foundation program, I’ve come to realize that one of the most important skills of an educator is knowing how to challenge the students. It’s like Randy’s story about his first building virtual worlds (BVW) class, where he realized that the quality of work his students displayed on their first project was so high that all he could do was tell them to do better. It seems to me that in the right environment, we human beings can grow in almost magical ways.

anson: I was lucky to be in that very class with Randy. But you know what? At that time, we all thought Randy was a mean and ruthless teacher. We worked so hard to get our first virtual world out in two weeks, and then he said he expected better work. We were like “What?!” After having watched his last lecture, we, of course, now empathize with why he did this, and that he is one of the best educators in the world. He saw the potential in us and he helped us to draw it out.

I think both you and I learned something precious in the past few years by jumping into a field foreign to us. And you’re right, it is education itself. We had our minds and paradigms stretched, challenged, stimulated, and inspired. I am so glad I have gone through this education process while I am still teachable. You know, some people stop being reflective after a certain age and become unwilling to change the paradigms of how they look at things.

an-lon: I’ve been through this — making forwards and backwards progress at different times in my life — learning to be prolific instead of perfectionistic, and learning that it’s the playful, throw-away variations that eventually lead to the finished work.

In one chapter of the book Art and Fear, there’s an apocryphal story about how half the students in a pottery class are told they will be graded on the quantity of work they produce, the other half that they will be graded on the quality of their work. At the end of the assigned period, the students in the quantity group have produced higher quality work than the students in the quality group because they were given the freedom to experiment and iterate, plus the mandate to work quickly.

That’s my art story. My computer science story is no less profound. Here’s the thing: I doubt that I could have survived majoring in cs back in college. I didn’t have the maturity or the study habits, and I was far too easily intimidated. I was also terrified that I wasn’t smart enough. I don’t want to go into a long song and dance about this, and fortunately, I don’t have to, because Po Bronson has already written an article25 about it.

The gist of the article is that parents who overpraise their kids for being smart are setting them up to never leave their comfort zone, because the minute they encounter difficulty, the kids panic and assume “it’s tough, therefore I must not be so smart after all.”

I found cs to be tough. I assumed everyone else was smarter than me. I walked away, for a time. What brought me back? Above all, a change in mindset. That change happened over the course of several years, and I can trace much of it back to a couple of college friends.

Doug was this kid from Alabama who lived two doors down from me in my dorm freshman year; Jeff was his Jewish roommate from New Jersey. One of my very clearest college memories — the one that’s always struck me as the quintessence of dorm hall diversity — was when we somehow got into an argument about when World War II actually began. For Doug, World War II began with Pearl Harbor, because that’s what we were taught in American history classrooms. For Jeff, it began with the Holocaust and the pogroms, because that’s what was in his cultural memory. For me? Japan had invaded China years before either of those events. We came from such different backgrounds, yet ended up as such good friends. Those were good times.

Anyway, Doug and Jeff were different from any of the guys I’d known in high school. Smart, yes, but this was Princeton and everyone was smart — or desperately trying to prove they were. I think, in hindsight, that those guys were among the first I’d met who were playfully smart — who tried new things because it was fun, and who ended up in computers because it was a new, fun thing to be tried.

Back then, I didn’t understand the concept of doing things for fun. My physicist father had none of that playfulness about him when it came to academic studies. For example, he could probably have become a chess grandmaster if he had wanted to, but he never bothered to learn because it was just a game and therefore pointless.

I was never as good at math and physics as my dad. That was a losing battle from the start. And since physicists tend to see computer science as being several rungs below them on the intellectual pecking order — the equivalent of doing manual labor — I was never exactly encouraged to pursue computer science. So I went my own way and studied comparative literature — and my parents, to their everlasting credit, let me.

But I threw the baby out with the bath water. I was never meant to be a physicist — though, ironically, computer graphics has brought me full circle back to physics — but computer science isn’t physics. Honestly, computer science is mostly just dicking around. You futz with it till it works. I’m not saying the theoretical underpinnings are unimportant, but honestly, the guys who are good are the ones who spent a lot of time dicking around because it was fun. They weren’t intimidated by the difficulty factor because, unlike me, they didn’t see the difficulty as an iq test. For them, an obstacle was like a video game obstacle: a legitimate challenge to be bested, not a measuring instrument assessing whether or not they stacked up.

At first, I really couldn’t wrap my head around the fact that these guys who seemed to spend as much time playing Nethack as they did writing code were also really cool and well-rounded people. Jeff was into theater and Doug knew a ton about contemporary art. It didn’t seem fair, somehow, that the reward for goofing off was to become smarter.

I didn’t have any sort of instant epiphany, but over the course of college and my early twenties, I did rewrite my entire value system. I came to understand from observation that intelligence wasn’t about being born smart — it was about being born smart enough, and from there, being playful and willing to explore. It was about leaping in without a clue and getting your hands dirty, rather than hovering nervously on the sidelines.

After years of being told by my parents how smart I was, and living with the secret fear that I really wasn’t, I finally came to value honesty, courage, and playfulness over being smart. I also came to see the excuse “well, I could have done it if I’d tried harder” as the coward’s way out. If you get a B on a test without studying, you can comfortably assume you might have gotten an A if you had studied. But if you study your ass off and still get a B, well, there go all your illusions. So it’s easier never to try.

When I returned to computer science in my early twenties, I was beginning to develop some semblance of maturity. I made a conscious choice to quit worrying about whether I was smart enough, and instead to put all my effort into making an effort. What I discovered was that playfulness (i.e., a willingness to explore seemingly irrelevant side paths) and work ethic (i.e., setting goals and not making excuses) led, over time, to all the analytical smarts I ever needed for my career.

This spirals back to Art and Fear because of the simple, sad observation the authors make in their opening pages, which is that many students stop doing creative work after they graduate. Without the community and structure and feedback cycle, they’re lost.

So I think the spirit of play becomes all the more important after graduation — because the girl folding paper and producing a thousand variations just because it’s interesting will keep doing it, whereas the guy who was doing it for a grade won’t. What you’ve produced as a student will most likely be forgotten, but what you’ve become won’t.

david: Slim, there’s a certain raw, honest quality to your writing that I’m just incapable of, but it feels so good reading it, because like the finest song lyric, it expresses what I felt palpably.

The overarching theme here of whimsy is spot-on. I think the greatest indictment of modern u.s. culture is the lack of whimsy and its replacement with what the writer David Foster Wallace referred to as “the entertainment” or “an orgy of spectation.”26

If there is one thing I seek in my mostly boring middle-aged adult life, it is that whimsy and childlike sense of adventure. It strikes me that this is the same thing that makes children so hilarious, as in this conversation between a friend (the mom) and her child (the son), which appeared in my e-mail today:

Son: When you’re three, sometimes they will let you out of a cage.

Mom: What? What cage are you in when you’re three?

Son: I don’t know… I think it’s the rule, though. You can get out when you’re three.

Mom: How do you know?

Son: Well, when people are let out of a cage they always say, “I’m three! I’m three!”

This is precisely the same thing that, when observed in adults, would be labeled a dissociative disorder and medicated out of existence. Adulthood is so overrated. At least the politically correct version of it that most of us practice.

slim: Both of your stories resonate with me. I feel as though I have spent too much time in my twenties worrying about when I would finally be an “adult,” or at the very least a “professional,” much to my own detriment. At first, I thought there was something wrong with me for being so child-like, but once I got sufficiently close to those whom I considered to be the epitome of adulthood or professionalism, I learned that they were simply hiding their child-like tendencies, because they didn’t want other people to see them as a sign of immaturity or weakness.

I also learned that the elders could see right through people who were trying to look like an “adult” or a “professional.” Those who have lived long enough know that none of us actually knows anything for certain. So it’s mostly a matter of whether you trust someone, instead of whether that person really knows something.

david: There is no more chilling effect, as far as I’m concerned, on American culture than the one you describe here, which is to say that half the country exists in a world where everyone is pretending to be professional, instead of being authentically themselves and leaning toward self-actualization. Some form of this was the original hypothesis of the Cluetrain Manifesto,27 which seems to have had little effect outside very small circles of young people.

Of course, the individual’s self-actualization is rarely in the best interest of the corporation, at least as management sees it. This homogenization is about as disturbing a trend as we can possibly endure and, in fact, should be seen as an affront to the principles that we stand for, namely freedom.

I’m consistently amazed by the influence of “dress for success” on the American corporate psyche. People actually care how I cut my hair, whether I shave, or whether I’m tattooed or pierced, as if my capabilities or brain power or effectiveness changed with the scenery. I’m also consistently amazed by how the basic marks of individuation aren’t seen as either intrinsic or extrinsic. I started writing an essay on a philosophy of hiring recently, and a lot of these kinds of themes come up there. Pittsburgh is certainly a bastion of the old school in this regard. While I understand the point in marketing and sales, the extent to which I’ve seen all manner of bizarre corporate policy developed on the altar of dress codes is mind-boggling.

I’ve seen pictures of James Watson delivering the original papers on dna just days after their publication, standing onstage in front of his peers in shorts. And then there’s Paul Erdős, who pretty much defined the picture of obsession and minimalism. I’m also told that none other than Herb Simon, when asked to choose a place to live on his arrival at cmu, drew a half-mile radius around the university and said, “Anywhere in that circle,” owing to his particular obsession with being able to eat and breathe the work, other concerns be damned.

And of course, I’m not sure we have much in the way of counterculture outside of absurdist examples like Mike Judge’s Idiocracy.28 I must go watch that movie again soon.

Welcome to Costco; I love you!

They tell me Costco is now in downtown Chicago. I may have to move to a hill in Montana next.

an-lon: The theme of balancing grown-up responsibilities (e.g., taxes, housing, earning a living) with a childlike sense of adventure is definitely a big one for me, as well. I think the theme of rebirth is a salient one, too. For better or worse, I can’t re-live my twenties. This is my second big career change (or third, I guess, if you count comparative literature to cs as one arc and then think tank to vfx as another). I can’t just repeat what I did the first two times — I need to find what works now, at a different life stage with different priorities. I’m not out to reject adulthood here, but I do intend to redefine it.

anson: I think we have to question whether professionalization is doing any good for the education of our current and next generations. Professionalization makes us feel good about ourselves and helps us land a job more easily, but it doesn’t produce people who are more well-rounded and more capable of continued learning, especially in contexts outside their comfort zones.

I am fortunate to have received both a technical and a liberal arts education. When I raise my kids, I won’t let them become lopsided techies. I want them to be equally exposed to a liberal arts education, including history, art, literature, and philosophy. I think that will help them see the world through a different pair of lenses and be more embracing of diversity and creative ideas.

——

24 A good example of such a video is a Belgian documentary film from 1949 directed by Paul Haesaerts called Visit to Picasso that captures Picasso’s creative process as he paints in real time. (“Bezoek aan Picasso”)

25 American journalist Po Bronson once wrote about how a large percentage of all gifted students severely underestimate their own abilities. (“How Not to Talk to your Kids”)

26 The late David Foster Wallace, an award-winning American writer, is quoted as saying, “The great thing about not owning a TV, is that when you do have access to one, you can kind of plunge in. An orgy of spectation. Last night I watched the Golf Channel. Arnold Palmer, Jack Nicklaus. Old footage, rigid haircuts.” (Lipsky, 2010, 118)

Lipsky, David. Although Of Course You End Up Becoming Yourself: A Road Trip with David Foster Wallace. (New York: Broadway Books, 2010), 118.

27 The Cluetrain Manifesto both signals and argues that, through the Internet, people are discovering new ways to share relevant knowledge with blinding speed. As a result, markets are getting smarter than most companies. Whether management understands it or not, networked employees are an integral part of these borderless conversations. Today, customers and employees are communicating with each other in language that is natural, open, direct and often funny. Companies that aren’t engaging in them are missing an unprecedented opportunity. (“The Cluetrain Manifesto”, 2000)

28 An American film where Private Joe Bauers, the definition of “average American,” is selected by the Pentagon to be the guinea pig for a top-secret hibernation program. Forgotten, he awakes 500 years in the future. He discovers a society so incredibly dumbed-down that he’s easily the most intelligent person alive. (IMDB, 2006)

Conversation: Choice & Feeling

On April 6, 2011 at 12:31 a.m., I posted the first draft of what would eventually become the fifth story in the “Making and Empathy” chapter of the book “Realizing Empathy: An Inquiry Into the Meaning of Making,” recounting my experience in the metal shop. This is an edited version of the conversation that followed, regrouped and rearranged for clarity and relevance. The previous installment talks about empathy and mastery.

 

an-lon: (Smiles) Happens to me all the time when drawing and editing, squinting at it and wondering what’s wrong, and 90% of the time, whatever’s wrong is completely orthogonal to all the directions I was previously searching.

slim: The feeling of hindsight obviousness intrigues me quite a bit. I remember being dumbfounded when my friend shared her story of how she overcame her bipolar disorder. She said she finally realized that she had the power to choose not to be depressed. She told me that it was so obvious in hindsight that she couldn’t understand why she hadn’t realized it before. But the reason I was dumbfounded was that I wasn’t depressed, yet I had never realized that, either. I can choose how to feel? That was a completely novel thought.

Since then, I’ve heard many people say things like “we always have a choice.” But I think it’s imprecise to say that we always “have” a choice. I’m sure it took them a lot of struggle to come to that realization. So what they mean is that we have to become aware of the choice. Or more precisely, we have to “develop” and “make” a choice that wasn’t available to us previously. That can take quite a bit of effort. It’s not just a matter of “snapping out of it.” Once you’re able to just snap out of it, you’ve already learned it.

an-lon: Ironically, Slim, you knew me during a period when I was genuinely depressed. When I attended the International School of Beijing (isb), I was really alone and struggling. Beijing was my first time living in a big city and I experienced culture shock and extreme loneliness.

I was functional — for where I was at the time, I was pretty convinced I’d just get yelled at if I admitted I needed help — but I remember sleeping 10 hours a day because I just didn’t want to wake up, and making a deal with myself that I’d allow myself to contemplate suicide if college wasn’t better. Don’t get me wrong, I wasn’t actively suicidal. It was just my way of mentally kicking the can down the road. I truly have no idea what it was like for your friend.

I think there are links between add and depression, but I don’t think I was ever truly chemically predisposed to depression in the way a bipolar person is. In my case, I was depressed first because I was trapped in a small town — before Beijing — then thrown into a big city — Beijing — with no coping skills.

College and D.C. introduced me to the world, and I was fine after that. But I do know from those high school years exactly what depression is. I had plenty of roller coaster ups and downs in my twenties, but nothing like depression. Nothing like that soul-sucking lethargy of my teens.

Unfortunately, I can’t say the same of the past few years. The allergies are a long story, but basically, a year into my stay in L.A., I started experiencing mysterious symptoms: a sore throat that wouldn’t go away for two months and an overall lack of energy. It took many trips to various doctors to figure out what was going on. I’d do something that would help for a while, then get flattened by some new mystery ailment.

The infuriating thing was that it was never anything huge — I’d just be sick and tired all the time, because when you’re not breathing well, you’re not sleeping well, and when you’re not sleeping well, you’re not living well. After a while, this changed my identity, from an energetic, enthusiastic person to one who carefully rationed her energy.

This also made me realize that perhaps that enormous physical energy was all that had held depression at bay through those 18 years between high school and L.A. I kept the demons away by constantly chasing after new pursuits, which was great, but what I didn’t know was that if you take away the physical energy, the scaffolding that remains is a house of cards.

Thing is, during the healthy decade of my twenties, I’d taught myself to push through fatigue, frustration, and fear. Athletics are a good example of this; you learn to recognize when to push through pain and when to rest. You know the Nike slogan “Just do it”? Well… yeah. Just do it. And with computers, I’m sure I don’t need to explain how stubbornness pays off. Damn. I pushed hard in my twenties, but I scored a lot of victories, too.

The allergies-and-depression cycle of recent years is a bit hard to explain because I really can’t just blame the allergies. There was a breakup, job angst, and moving to a new apartment. But I’ve coped with all of the above before, and there were good things going on in my life, too. It was all incredibly frustrating because while I definitely recognized the symptoms of depression from that extended period in high school, I could not figure out why it was happening again and why I couldn’t just snap out of it.

As with that period in high school, I never stopped fighting. I never stopped going out and doing what I wanted to do. But I did cut back. There was always this triage of what I had energy for and what my priorities were. In my twenties, I just did it all. These past few years, I hit a point where I couldn’t — I had to make choices.

I’m still convinced that the only reason I snapped out of that depressive period — I can’t truly call it depression, but I felt like I was always close to the edge and could never quite get any distance from it — was that I finally got the allergies under control. Exercise and nutrition are a big part of it, but so were allergy shots and an immune system booster vaccine.

No silver bullets, but basically I feel like myself again after having had to walk through sludge the past three years. I’ve kind of forgotten how to run, but at least I know it’s possible again. (Smiles) I spent three years trying to choose not to be depressed, but the fog refused to lift until I finally got my physical health back.

Did I do it all wrong? Would therapy or medication have gotten me over it sooner? I just don’t know. And I perhaps never will. I’ve been playing these past six months entirely by ear. I do feel safe in the assumption that as long as I have my physical health, my mental health is also safe. But
I no longer take it for granted. And I also realize that the madcap coping mechanism of my twenties — constantly sprinting — literally, when it came to ultimate frisbee, probably wouldn’t have lasted forever anyway.

One thing that tends not to work is trying to will yourself into being more organized/disciplined/attentive. That tends to be a recipe for failure, with all the voices in your head yelling at you for being such a lazy slob and a waste of space. What does work is finding clever ways to set things up such that it’s a downhill slide instead of an uphill battle — in essence, coming up with a system that makes the good behavior easy instead of difficult. It’s like the judo trick of using the other person’s momentum for a throw, rather than trying to absorb the force of their blow directly.

slim: Indeed. I also think the kind of support structure or environment you’re talking about is essential. Although I would rather use words like “encouraged,” “supported,” or “amplified” than “easy” to describe the qualities afforded by such an environment. I think there is a significant difference between something being easy and feeling at ease when you’re in relation to something.