Saturday, March 29, 2014

Living Philosophy

Over the last year, my professional life has undergone a number of major changes. Obviously, moving to the Netherlands is on the list, but I have in mind more the differences in how I view myself and my work. While finishing my dissertation gave me a sense of completion, it took a while to develop a well-formed sense of myself as a philosopher. In particular, I have a very different relationship to my research today than I had when I defended my dissertation.

The dissertation stage is filled with lots of uncertainty and fear along with the other challenges of actually writing the thing. For one thing, I had never written anything that long or unified. I had to design and execute a book-length argument on one topic, and I had to say something relatively novel. Thankfully, my supervisor Bruce Brower was an excellent mentor. He helped me identify the topic very early in my doctoral studies, so I spent two years or so thinking about it before I began writing in earnest. We worked the topic into one of my qualifying assignments, so I had the chance to do some preliminary work, and he helped me through applying for the fellowship that supported one year of writing.

During the writing, the research was a task, a very demanding task. My life became a routine of read-write-recover or occasionally write-read-read. Through the hours spent writing and revising, the research was a challenge, a wall I had to climb. It made demands like a physical force, pulling me just enough to force me to trudge through the rough terrain. It became a real presence in my life, an invisible ball and chain.

For about a year after defending, the ball and chain hung around. I knew I was supposed to continue working, but I had only a vague idea of how to begin writing and publishing articles. Then again, other postdocs seem to report the same learning curve unless they have solid early-career mentorship, so I know I'm not the only one with that problem, at least. I did have a sense that the research wasn't supposed to be like that. There is a cultural narrative about relating to one's work, especially creative/intellectual work, as a bittersweet challenge. It pushes the scholar, but it also motivates. The philosophy drives you, not the other way around.

Well, I felt not so much pushed as dragged. There were topics I wanted to develop, but I still didn't feel all that sure about how to go about it. I wrote, but I just couldn't make the arguments do anything for me. Last summer, I started to rethink my relationship with philosophy to find a way to turn things around.

One thing I appreciate about the Buddhist tradition is the sense of lineage. The Dharma stretches back in an unbroken line to Siddhartha Gautama. Every Buddhist teacher should understand that the insight she has is the very same insight held in the mind of the Buddha. The Dharma is a living thing, passed through the centuries in texts, by word of mouth, and by example. These ideas give the Buddhist tradition a resonance, an existence alongside our own.

Thinking about these things, I began to ask myself what philosophy is, exactly. This is a question I became absolutely sick of during my MA studies in Vancouver. Many philosophers pose that question, and few agree. At the time, I thought it was a hindrance to philosophy to worry about what it is. It is something, and we do it, so let's get on with it. Now, I can humbly say that I understand why the question arises over and over again. The answer to that question is the name that gives philosophy its living form. Philosophers don't have to agree completely on what it is, but they should know what it means to them.

I started rereading Wittgenstein, then Heidegger, Carnap, Quine, Nietzsche, Descartes, and Kant. This time, I didn't pay attention to the content of their arguments but to the care with which the arguments are framed. In their masterworks, philosophers put a great deal of effort into putting forth something that is, in principle, very difficult to describe or articulate in words. If the "something" were obvious, the description would be trivial. At its best, philosophy plunges into the fringes of our understanding. For me, philosophy became Vipassana. In Pali, the word means "insight/investigation" and is used to describe meditation practices directed at a clear understanding of reality.

At this point, two things happened. First, I had an evaluation standard for my own work: if it's too easy to say, I haven't thought about it enough. Every article should contain a key insight that is difficult to see and understand but can be brought out with great care. Second, by giving it a name, I made my research a living thing. It now pushes me to the ends of my understanding and motivates me to go as far as I can. I'm still working on testing, developing, and writing, but I now know what it means to hit the mark, even if I haven't hit it yet.

Wednesday, March 26, 2014

Pedagogy of Prestidigitation

I put what might be too much thought into presentation when I teach. I say it's too much because I don't know how much of it comes across to my students, but insofar as a teacher must entertain, it seems appropriate to work on one's showmanship. Over time, I've developed a particular aesthetic of teaching that both keeps me motivated and focused on the task and, I hope, contributes something unique to my students' experience.

My basic model is jazz improvisation, for reasons perhaps best understood by fellow initiates of Robert Anton Wilson. The presentation slides give me an overall structure and contain the essential information. For the most part, the slides are supposed to be springboards for verbal improvisation. I like the idea of running discussion sessions, and when it happens I enjoy it, but I find it hard to get the students going. In introductory ethics courses, when I include assignments that require them to read before coming to class, it's easy because everybody knows what's right (before they take philosophy, at least). In most courses, I think I scare them too much. It's not intentional or anything, but I've been given to understand that I have a forceful presence. As much as I try to dial it down, it seems to come across anyway.

Still, that's just about lecture style, and not really all that different from the most general public speaking advice. In addition to that, I give some thought to the peripherals. For instance, I value minimalism in my self-presentation. Remember, I said my conditioned response to teaching is to reach for the chalk? I value that model because I (usually) don't have to bring the chalk and board.

The blackboard is classroom infrastructure; I walk into a room and expect to see one. The tools are simply at hand, something I find in the environment, take up, and use. Most of the time I taught at Tulane, I had a pile of books and notes and papers to hand back. Way too much baggage for someone teaching about letting go and liberation, right? As I got more comfortable in the classroom, I started trying to scale back and bring only what I really needed. At Twente the classroom tech is so reliable that I don't even need notes or textbooks. I can walk in with no materials, log into a computer, fire up my Google Presentation, and get to work.

If there is any magic in it, it happens there. To walk in with nothing and create something wonderful using nothing other than what is to hand is the work of an illusionist. Behind the scenes, there's preparation and reading and notes and consultation, but the students don't see that, and they don't need to see it. If I've done it right, they're too occupied with the illusion to think about it.

At least, that's what I tell myself the good days are like. I know it's more like some stuttering, some swearing, the occasional funny joke, and the ubiquitous unfunny joke. Still, if I don't imagine something better, I have no incentive to improve. Even fictions have their function, in the end.


Tuesday, March 25, 2014

Flipped Off Pedagogy

Everyone who works in education is trying to figure out what to do with the new capabilities afforded by IT. The most prominent example is the move toward MOOCs, the massive open online courses made visible by the efforts of EdX, Coursera, and their institutional partners. For those of us in the trenches, MOOCs represent the least imaginative application of information technology to the classical challenge of enlightening young minds. Think about it this way: you have any and all documented facts at your fingertips, plus the ability to connect with experts anywhere in the world, and you use it to turn university lectures into a Netflix product? Michael Sandel is a talented lecturer, but I don't see philosophers binging on his Justice course the way we all do with Orange is the New Black.

So, if MOOCs aren't the big challenge, what is? As far as I can tell, educators (self included) have the most trouble coping with the "flipped classroom." A flipped classroom is one in which the teacher takes a backseat and acts as a facilitator or (maybe) a critic for student-centered activities. For the cynical, the concept caters to the Millennial affection for the spotlight, but even if that's a driving force, I have some sympathy for the model. After all, with the massive external memory of Wikipedia available, rote memorization is obsolete. The students can get the facts from the source, just like we (experts) do. We all use the same tools now, so there's no magic in it. I use Google Scholar because the interfaces for databases like the Philosopher's Index and JSTOR run as smoothly as a house drives.

We don't need to convey facts, but we do have something to convey: something about how to use the available research tools, and something about how to put all of that information to work. That being the case, the best thing we could do for our students is put them to work and help them along with the hard parts. Show them how to get started, how to get unstuck, how to evaluate sources, how to master a field. Show them how to do. Flipped classrooms are great environments for all of that, but they have to be used well. We need well-designed projects; can we have something more sophisticated than the five-paragraph essay, please? While we're at it, something more entertaining than presentation slides would be nice, too.

The problem for many of us is that we have no idea how to do any of that well. We weren't taught that way, and we weren't trained to teach that way. I have tons of sympathy for the flipped classroom. I work to incorporate more group projects and unconventional assignments into my classes in an attempt to convince my students that they can be keen analytic and critical thinkers about things they care about, not just things I care about. Nevertheless, my conditioned response to a teaching situation is to go old school. Give me a blackboard and a pile of chalk, and I could teach the world. If the topic is Buddhism, I wouldn't even need notes.

Last year, I started making presentation slides because I know my students expect them, but I don't do anything fancy with them. It's enough of a challenge to condense a lecture into slide-sized chunks. The exercise has shown me the value of creating and communicating structure: a map of the topic to be covered, detailed signposts along the way, and a summary of what students should take away. At the same time, I don't see slide presentations as much of an innovation in the classroom. I'm not doing anything that couldn't have been done with Ektachrome slides in the 1950s, and it goes without saying that my free-form verbal improvisations represent a pedagogy older than Plato.

The bottom line is that as technology and culture (especially media culture) change, pedagogy has to change. Furthermore, the rate of change may outpace the normal generational turnover of teachers, so we have to change, too. I don't have answers, but I am willing to search for them with my colleagues and my students. I don't think we'll figure it out without some experimenting, so I hope my colleagues will join me in being courageous enough to try new things, and that our students will be tolerant of us when we fuck it up.


Monday, March 24, 2014

Ambivalence on Ethically Challenging Research

I'm in the middle of one of those research projects I feel obligated to do but at the same time can't bring myself to feel entirely passionate about. There really is nothing that brings out ambivalence in me like ethics and cyber-warfare. First and foremost, I am no big fan of war, warfare, or the military broadly construed. For that reason alone, the ethics of war should be a topic of great interest. If it's the case that the person most fit for office is the one who wants it least, then the best war ethicist should be an absolute pacifist. Think about it this way: what would war ethics look like according to Genghis Khan or Napoleon? I think Atlanta still wakes up in hot sweats over Sherman's ideas about conducting a just war.

Of course, when you actually have to think about the ethics of just war, you have to confront the realist/idealist problem. War is awful and nothing good comes of it (anyone who says otherwise has way too much invested to be unbiased), so the most just war is the one we avoid. In a perfect world, there'd be no armed conflict. Unfortunately, our world is somewhat far from the best imaginable world, even if Leibniz is right and it's the best possible one. As such, it feels worse than useless to devote space to an ethics of war that begins and ends with a norm against engaging in armed conflict. Even if that norm is right, it'll be too readily drowned out by warfare apologists, leaving the status quo more room to operate, even though it would be better for all humanity if the military-industrial complex closed up shop immediately.

So, what's the ethical course for a would-be war ethicist? First, a healthy dose of realism: just as there is war, there is good philosophical thinking about it. Just War Theory has a long tradition of outlining a framework for conducting something that could be called an ethical war. Second, a healthy dose of idealism: even if the norm is demanding, a strong argument has force. If there's a general consensus that doing a particular thing turns a justified actor into a malicious actor, there will be a need to address that consensus before crossing the line. It may not prevent the pushing of the button, but it gives sanity and reason one more chance to prevail.

Finally, focus on what happens when things go wrong, because that's what will happen. I can say lots of things about the ethics of cyber-conflict, but the most useful things I could say concern how to remain a justified actor in a world of malicious actors. What are the responsibilities of the defender with regard to remaining ethical when the opponent has forsaken ethics? I feel generally ambivalent about "sinking to their level" arguments, but I do think that when you confront an immediate injustice, you learn something important about yourself. The choices made beyond that moment will determine who you are and how you evaluate yourself, so it's important to have some clear choices in view. If I can contribute a picture of just reactions to malicious actors, then I offer something that is both useful and a step in the right direction.

Friday, March 14, 2014

Surveillance and Servitude

A response to Kevin Kelly’s “Why You Should Embrace Surveillance, Not Fight It” in Wired

In “Why You Should Embrace Surveillance, Not Fight It,” Kevin Kelly offers some possibilities for a positive view of ubiquitous surveillance. The solution to our concerns about privacy, according to Kelly, is more, rather than less, surveillance. By embracing “coveillance,” the collective monitoring of one another, we can recapture some of the influence and transparency currently lost to surveillance, the top-down monitoring of citizens by an authority.

While Kelly is right that coveillance gives us transparency, he may be wrong about freedom. Let’s begin with the idea that Big Data firms will pay coveillers for self-monitoring and reporting. The idea that we could make our data more valuable by invoking a sense of entitlement and demanding direct compensation misunderstands the “big” in “Big Data.” The personal data of one citizen is really not all that valuable to data analysis. You can’t create general projections about the behavior of people without the collected data of many individuals. When Big Data gets big enough, very personal information does not matter at all. That’s why Google can happily anonymize the information it collects about you. It doesn’t need the details that distinguish you from someone very much like you. It just needs enough information to draw some conclusions about general trends such as buying habits.
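
To see the point concretely, here is a minimal sketch of why anonymization costs the aggregator so little. This is my illustration, not Kelly's, and the records, field names, and counts are invented: strip away the only identifying field, and the general trend survives intact.

```python
# Hypothetical purchase records -- invented for illustration only.
from collections import Counter

purchases = [
    {"user_id": "alice@example.com", "category": "books"},
    {"user_id": "bob@example.com",   "category": "books"},
    {"user_id": "carol@example.com", "category": "music"},
    {"user_id": "dave@example.com",  "category": "books"},
]

# "Anonymize" by discarding the only field that distinguishes one person
# from someone very much like them.
anonymized = [{"category": p["category"]} for p in purchases]

# The general trend -- what people buy -- is fully recoverable without
# knowing who bought anything.
trend = Counter(p["category"] for p in anonymized)
print(trend.most_common())  # [('books', 3), ('music', 1)]
```

Each row contributes almost nothing on its own; only the aggregate has value, which is why no single contributor has much bargaining power.
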
If we do begin to press an entitlement to our personal data and demand payment in exchange for consistent and active self-monitoring, how much will they pay and for how much monitoring? Clearly, Big Data is profitable with what it can get for free right now, so we have to imagine that contracted monitors will get paid a little bit for a lot of inconvenience. After all, there’s little incentive for Google, Microsoft, or Facebook to pay you for what you already give them in exchange for some mighty convenient services.

In asking for some compensation beyond free email, news, and cloud storage, we may find ourselves in binding contracts inspired by our favorite mobile service providers. Free email? Sure, for two years you get a 500 GB searchable inbox, as long as the provider gets to track every email-related activity and log all contacts to form a social profile. Did I mention you’ll have to click a pop-up or sign in again if you leave your browser open but inactive for more than 10 minutes? Well, if you don’t like the terms, you can pay our opt-out fee. Indentured data servitude doesn’t promise the consumer more freedom.

Likewise, the idyllic image of life in tribal societies where everyone knows everyone else’s business obscures the extreme constraints of a forced public life. Let’s not forget that the same highly open societies that humankind lived in for hundreds of years were societies of little freedom. Tyrannical chiefs or high priests could ostracize or punish anyone for any deviation from the normal. It’s no coincidence that those same authorities also decided what is and is not normal.

We worry about losing privacy for a good reason: the loss of privacy is the loss of freedom. If we cannot choose what we present about ourselves and how we present it, we lose the freedom to decide who we are and who we trust. We lose the freedom to be different, to be unique, and to offer that uniqueness as a token of trust and companionship. In 1921, the Russian novelist Yevgeny Zamyatin completed We, a dystopian exploration of total transparency. In We, the citizens live in a city of clear glass. Everyone can see everyone else, and everyone is accountable to the same standards and rules. Zamyatin’s characters live out fully transparent lives in servitude to their city, unable to change their society or themselves for fear of deviation and punishment. Transparency is their master, and none of them are free.