Saturday, March 7, 2015

RPG Systems: An Analogy with UI Design

The current game in our weekly role-playing group is Deadlands. The previous game was Shadowrun. Both rule systems lie closer to the “chunky” side of the spectrum. Shadowrun has a particular reputation for complex and somewhat cumbersome rules, and while Deadlands has less overall complexity, it has a degree of granularity that interrupts play more often than it enhances narration.

I enjoy role-playing games because I like participating in a good story. The rules system provides a set of constraints for the characters, the setting, and the conflicts. It gives the narrative structure, a background against which the story will take place. Too few rules, and telling an interesting and well-developed story becomes difficult. Too many rules get in the way of individual scenes or events. With the right balance, the game master, usually me, can be sufficiently fluent in the rules system to resolve any conflict without extended consultation of one (or more) books.

When I describe my ideal role-playing system, I am reminded of user interface design. A good user interface gets out of the user’s way. The user shouldn’t have to think about UI elements like chrome, throbbers, or buttons. Everything should just work, leaving the user to focus on tasks, specific applications, and workflow. Likewise, I like a role-playing system that fades into the background of the story. If the structure is too obtrusive, there is no room left for the narrative.

Since I’m a game master and not a game designer, I can understand that striking this balance is not easy. A rules system can provide an amazing formalization of the setting’s flavor, defining the boundaries of the ordinary and extraordinary for the setting. I sometimes find myself running chunky systems just because the setting is provocative enough to motivate a bit of extra study and note-taking before game.

Saturday, February 28, 2015

The Incredible Lightness of Collaborative Consumption

Last week, we had to exchange our defective futon frame for a new one. The store didn't want to cover transport costs in either direction, so we had to figure out how to get our re-boxed frame from Mountain View to Los Altos. Even if we had owned a car, it wouldn't have been much help: we had been aiming to buy a small sedan, nothing that could easily carry the frame and its box.

Fortunately, we have a car sharing service that gives us access to a range of vehicles, including a van stored down the street from my building. After work, I grabbed the van, picked up the frame at our place, and then Tara and I drove to the futon store to make the swap. I dropped off Tara and the new frame at our place, and then headed back to campus. After returning the van to its parking space, I hopped on a shuttle back to downtown Mountain View.

We were able to do all of this because we're not tied to a specific vehicle for all of our transportation needs. The last car we owned was a van, and it came in handy on more than one occasion. We used it to move our household twice and for several craft fairs and art markets. However, most of the time, it was simply more vehicle than we needed. In the end, we regularly spent more money on gas for the convenience of having a vehicle that accommodated all of our use cases. Collaborative consumption allows us to have the vehicle we need for each case: a small car for most errands and a van or SUV for the occasional cargo load.

Saturday, February 14, 2015

Carless in California

For various reasons, we do not own a car despite living deep in American car country. The reasons are largely financial; the cost of living in downtown Mountain View crowds car ownership out of our budget. We pay more to live in a pedestrian-friendly neighborhood, so we are less able to afford a car. At the same time, I don't need a car to get to work, and Tara doesn't drive, so any car we had would sit in the carport most of the week. Combine that waste of resources with a reluctance to contribute to the Bay Area's traffic congestion, and forgoing car ownership doesn't sound all that bad.

Car sharing services allow us to grab a vehicle as long as we plan ahead a bit. The Caltrain provides access to San Francisco. There are convenience stores and cafes within walking distance, so we don't feel the absence of a car too often. Last night was one of the few times when I did. After getting home from work, we wanted a dinner cheaper than the nearby delivery options. The nearest quick bite was a fast food place about a kilometer down the street. In a car, the trip is so fast it almost feels excessive to drive. On foot, the trip is a bit longer, requires dodging some poorly controlled intersections, and isn't particularly scenic.

I enjoy walking because it clears my head, lets me reflect and unwind a bit. On yesterday's walk, I was surprised by the absence of any other walkers. Make no mistake, this is a car town. Scurry from the driveway to the door, and no further. Children and sometimes the elderly take to the sidewalk, but the bulk of travel relies on four wheels rather than two feet. It's a pity, too. All of that blue sky and sunshine shouldn't go to waste. I don't mind the solitude, I suppose. It's fine to walk, observed by the occasional driver and the rising mountains from which the city takes its name. If folks would rather clutch the wheel and weave among the horde of mobile boxes, I shouldn't judge. I'll just walk tall, stretch my legs, and take in the air.

Sunday, January 25, 2015


The most striking feature of living and working in Silicon Valley is the extreme contrast between technology and nature. On campus, I am surrounded by towering redwood trees and verdant hillsides. As my eyes trace serpentine trails in the distance, the click-whir of an electric car startles me from my reverie. The sheet metal giant who shares the view with me is unmoved by such noises, continuing his silent meditation as I navigate between his feet. The air hums with connection, thick with the invisible media of 21st Century communication.

The transition from garden to workstation does not jar the senses as it might in a less cared-for space. The glass walls leave the room open to the wild, filling my eyes with trees and sunshine. Ephemeralization of devices enables attending to work, ears filled with the murmur of winding water. The utopic vision made evident here is healing, a restorative against the bare concrete freeways and the cacophony of cars, malls, and music that make up much of our shared human space.

In this place, the wild, untamed frontier of the planet meets the wild, untamed frontier of human endeavors. Rather than meet in conflict, here they merge into one another, each one growing and thriving in the same space. Achieving the balance requires continued maintenance, care, and compromise, along with attention to the needs of the natural world, human and non-human alike.

Friday, November 28, 2014

What do you do with a degree in philosophy?

Anyone who majors in the humanities has had to endure a version of that question more than once. As I went through graduate school, people asked the question less and less. By the time I was teaching classes, I had a pretty ready answer (teaching is paying work, you know?). As a professor, the question answers itself.

Of course, being a philosophy professor is not for everybody. The crowded academic job market alone is enough to dissuade the faint of heart. The work is demanding, involving wearing the hats of instructor, researcher, and administrator. To succeed, one has to be flexible, creative, think on one's feet, and be ready to ask hard questions of oneself and of others. As academic institutions rely on more part-time and temporary staff, success often translates into more work without longer-term commitment from the organization. One can invest a whole lot of time and energy without knowing whether that organization will continue to provide support.

Living the life of a professor for a little while, I've tasted some of the good and the bad. I've taught over a thousand students in seven years as an instructor and published on intellectual property, privacy rights, and the ethics of emerging technologies. I've overseen the intellectual development of students, graduate assistants, and junior colleagues, counseling them on their academic and personal lives. I've also had my share of being buried in grading, bouncing from class to meeting to class, and working under a deadline, all on the same day. My faith in my students' potential has clashed with the dissatisfied and disillusioned, and that faith has been vindicated by great student performance. I love to teach the thinkers considered too difficult for students: Leibniz, Marx, Nietzsche, and Nagarjuna, and I've been rewarded by their insights. I've also had that faith dashed by recalcitrant classes and the pressure of other responsibilities, leaving me to figure out what went wrong and how to do it better next time.

I did all of that with a degree in philosophy, and I have to say I did it well. Despite those results and the continued push for excellence, temporary employment with no future guarantees remained the order of the day. Many academics at the same stage of their career are in the same position, and it is a pity how much talent will be lost to the effort of a continued job hunt that must be bolstered by yet more research, teaching, and administration.

Fortunately, one thing I learned in my academic career is that I should not underestimate myself. I've achieved things I didn't think possible. Why limit my imagination and career to one path? My flexibility and creativity serve me well as a professor, and they can serve just as well in a post-academic career. The only problem left to solve, at that rate, is finding the right position for the skills I've developed. After a few years in academia, I found myself asking the same old question: What do I do with a degree in philosophy?

Well, it turns out that the question can be answered in more than one way. When the degree includes research in technology ethics, intellectual property, privacy, information security, and free expression, there are opportunities for writing policy in the technology industry. After some months of exploring and interviewing, my academic career is coming to a close. In January I take up a regular full time position working on user policies for Google. The position is located at the headquarters in Mountain View, so we will also be leaving the Netherlands and the friends we've made here and taking up residence near other friends and family in California. I'm excited for the new possibilities that come with this career change, and I'm glad that I'll be able to leave the university and continue to work as an ethicist in such a vibrant and dynamic environment.

What do you do with a degree in philosophy? The question is not hard to answer because there are so few options. The question is hard to answer because there are so many. You need imagination, and you need to challenge yourself, but if you do, you can decide what you will do with it. Just make it something awesome, and the rest follows.

Thursday, October 2, 2014

But we've always had X...

In teaching ethics, and in paying too much attention to politics, I encounter the sentiment that "We've always had [insert great misfortune], so we'll never be without it" over and over again. The sentiment is offered as a reason not to work toward alleviating poverty, warfare, disease, and all manner of problems that affect the whole globe and look too big to overcome. Still, I think this is a problematic line of reasoning, and one that we should stamp out as if it were a logical fallacy (and it might trade on one; more below).

Ok, so why is it a problem? For one, it's simply conversation-stopping in any ethical debate. Should we devote resources to researching Sudden Infant Death Syndrome? Well, babies have always died for no reason, so we'll never prevent that... There is simply nothing to do but throw one's hands in the air and give up.

Now, in ethics, there is some reason to take this argument seriously. There is a very general principle that guides normative theory: Ought implies can. We cannot demand that people do the impossible, so morality can never require that we act in some way beyond our capabilities. We work toward the good insofar as we are able.

On the other hand, the argument also trades on the Naturalistic Fallacy: you can't derive a normative statement from a descriptive one. Women were treated as property for centuries, and in some places still are, but that doesn't make it right. People murder each other every day, but we still put murderers in prison. Morality does not describe the world as we find it; it describes the world as we should leave it.

Now that we see how the sentiment has some intuitive appeal, and have a sense of why we should be suspicious of it, how should we respond to these assertions? What should we get our students (and our peers) to think about when they say "But this is just how it is"?

For me, the most important thing to grasp is this: true moral evils stem from the decisions of human beings. We live in a causal world, and the things we see around us are effects of existing causes and conditions. There is, as it were, nothing that "just is" any particular way. There is always something that sustains a particular state of affairs. As such, there is no prevailing condition in the world that is truly necessary, only the contingent result of contingent circumstance.

Contingency is a powerful concept. It strips our world of intrinsic, given meaning. It also forces us to understand ourselves as both agents and patients of causation. We are affected, but we also affect. Even if the causes of world hunger or distributive justice are systemic and institutional (and some are), by surrendering to this contingent state, we implicitly endorse all of the causes and conditions that create that state. We validate the unfairness that prevents food from reaching the people who need it, that confines medication to the boundaries of patent law and wealthy patients, that ensures that some people have to work much harder to achieve an economic status that others reach through failure.

Our task is to make the world fair, to correct these injustices and leave the world better off than we found it. Causation is both blind and brutal. We can be likewise cold and accepting, or we can choose the harder path and create kindness and compassion. The choice of what we accept is ours, and the remaining question is how to do it, not whether we should. 

Wednesday, May 14, 2014

History and Identity

Yesterday the European Court of Justice issued an important ruling that has the tech policy world buzzing about privacy, search engines, and personal history. In short, the court ruled that the EU Data Protection Directive gives a person the right to demand that old information be purged from search results. The particular case involves an attorney seeking removal of links to announcements about a real-estate auction connected with a debt settlement in 1998. While the ECJ made a number of interesting moves in the case (including a welcome argument that the distinction between data processors and data controllers does not make as much sense today as it did in 1995 when the Directive went into effect), the big consequence everyone is talking about is the right to be forgotten.

The long memory of the Internet is a feature it's hard not to love and fear at the same time. Whether you have something to hide or not, if it's on the Internet, it stays on the Internet (most of the time, at least, all of the time if you count the Wayback Machine). For most of us, this means that our embarrassing undergrad escapades remain on Facebook for the world to see if they look hard enough. For most of us, it means that we are constantly hearing about politicians or other public figures with this or that skeleton in the closet. For the most part, it's a good thing. The long memory of the Internet promises us that we will not lose another Library of Alexandria or Dharmaganja, the great library of Nalanda.

On the other hand, it also means that we are very easily haunted by our pasts. Even analyses critical of the ECJ ruling (this one presents an argument worth thinking about) acknowledge that the debate is about the power to shape one's public image. On the one hand, we value honesty, truth, and accuracy, but on the other hand, we value autonomy, which presumably includes choosing how we present ourselves to the world.

This case, and similar ones mentioned in the ruling and other analyses, brings to the fore important questions about identity. Can we understand who we are as nothing other than points of data, or is our identity located more in the narrative that links those points together? Advocates of quantified-self tools endorse the former view, claiming to liberate us from false self-perceptions and cleanse bias from our self-reflection. There is clearly a liberating potential in such tools, and embracing mindfulness of objective metrics can have a powerful revelatory effect.

Nevertheless, there is also a risk of bondage to data. Individual data points are by themselves very uninteresting. They are static, frozen points in time, so they do not really do anything. They merely sit as recorded. The patterns we draw between those points, the transitions and changes, turn that data into an event, an event we know as human life. Even in a post-modern context where we understand that there are many possible stories to tell about the same dataset, selecting and validating a story is a deep expression of autonomy. In the end, we must look back on our lives, on a collection of frozen points, and decide, for ourselves, whether we regret or celebrate, whether we feel relief or anguish.

Insofar as the right to be forgotten allows us to take ownership of who we are now, it contributes to that autonomy. Honesty and truth are important ethical values, but so is forgiveness. If we shackle ourselves entirely to our pasts, if we allow others to tell our stories through points of data, we do not allow people to change, to express regret for what they have done, to make amends, and to move forward.

It is important to remember something here: we are not talking about removing information, only about removing results from an index. Anyone who wants to find out can still do so through regular channels of public record. The records simply do not appear in search results that might color the present with a past more than 15 years distant. Maybe the case should be decided differently for different issues or types of information. Still, we should remember that the issue at hand isn't as simple as history or the preservation of information or even the crafting of a public persona. It is also about the crafting of personal identity, something very difficult to do when we are reduced to a static array of data.