Friday, November 28, 2014

What do you do with a degree in philosophy?

Anyone who majors in the humanities has had to endure a version of that question more than once. As I went through graduate school, people asked the question less and less. By the time I was teaching classes, I had a pretty ready answer (teaching is paying work, you know?). As a professor, the question answers itself.

Of course, being a philosophy professor is not for everybody. The crowded academic job market alone is enough to dissuade the faint of heart. The work is demanding, requiring one to wear the hats of instructor, researcher, and administrator. To succeed, one has to be flexible and creative, think on one's feet, and be ready to ask hard questions of oneself and of others. As academic institutions rely more on part-time and temporary staff, success often translates into more work without longer-term commitment from the organization. One can invest a whole lot of time and energy without knowing whether that organization will continue to provide support.

Living the life of a professor for a little while, I've tasted some of the good and the bad. I've taught over a thousand students in seven years as an instructor and published on intellectual property, privacy rights, and the ethics of emerging technologies. I've overseen the intellectual development of students, graduate assistants, and junior colleagues, counseling them on their academic and personal lives. I've also had my share of being buried in grading, bouncing from class to meeting to class, and working under a deadline, all on the same day. My faith in my students' potential has clashed with the dissatisfied and disillusioned, and I've seen that faith vindicated by great student performance. I love to teach the thinkers supposedly too difficult for students, Leibniz, Marx, Nietzsche, and Nagarjuna, and I've been rewarded by their insights. I've also had that faith dashed by recalcitrant classes and the pressure of other responsibilities, leaving me to figure out what went wrong and how to do it better next time.

I did all of that with a degree in philosophy, and I have to say I did it well. Despite those results and the continued push for excellence, temporary employment with no future guarantees remained the order of the day. Many academics at the same stage of their career are in the same position, and it is a pity how much talent will be lost to the effort of a continued job hunt that must be bolstered by yet more research, teaching, and administration.

Fortunately, one thing I learned in my academic career is that I should not underestimate myself. I've achieved things I didn't think possible. Why limit my imagination and career to one path? My flexibility and creativity serve me well as a professor, and they can serve just as well in a post-academic career. The only problem left to solve, then, is finding the right position for the skills I've developed. After a few years in academia, I found myself asking the same old question: What do I do with a degree in philosophy?

Well, it turns out that the question can be answered in more than one way. When the degree includes research in technology ethics, intellectual property, privacy, information security, and free expression, there are opportunities for writing policy in the technology industry. After some months of exploring and interviewing, my academic career is coming to a close. In January I take up a regular full-time position working on user policies for Google. The position is based at the headquarters in Mountain View, so we will also be leaving the Netherlands and the friends we've made here and taking up residence near other friends and family in California. I'm excited about the new possibilities that come with this career change, and I'm glad that I'll be able to leave the university and continue to work as an ethicist in such a vibrant and dynamic environment.

What do you do with a degree in philosophy? The question is not hard to answer because there are so few options. The question is hard to answer because there are so many. You need imagination, and you need to challenge yourself, but if you do so, you can decide what you will do with it. Just make it something awesome, and the rest follows.

Thursday, October 2, 2014

But we've always had X...

In teaching ethics, and in paying too much attention to politics, I encounter the sentiment that "We've always had [insert great misfortune], so we'll never be without it" over and over again. The sentiment is offered as a reason not to work toward alleviating poverty, warfare, disease, and all manner of problems that affect the whole globe and look too big to overcome. Still, I think this is a problematic line of reasoning, and one that we should stamp out as if it were a logical fallacy (and it might trade on one; more below).

OK, so why is it a problem? For one, it's simply conversation-stopping in any ethical debate. Should we devote resources to researching Sudden Infant Death Syndrome? Well, babies have always died for no reason, so we'll never prevent that... There is simply nothing to do but throw one's hands in the air and give up.

Now, in ethics, there is some reason to take this argument seriously. There is a very general principle that guides normative theory: Ought implies can. We cannot demand that people do the impossible, so morality can never require that we act in some way beyond our capabilities. We work toward the good insofar as we are able.

On the other hand, the argument also trades on the Naturalistic Fallacy: you can't derive a normative statement from a descriptive one. Women were treated as property for centuries, and in some places still are, but that doesn't make it right. People murder each other every day, but we still put murderers in prison. Morality does not describe the world as we find it; it describes the world as we should leave it.

Now that we see how the sentiment has some intuitive appeal, and have a sense of why we should be suspicious of it, how should we respond to these assertions? What should we get our students (and our peers) to think about when they say "But this is just how it is"?

For me, the most important thing to grasp is this: true moral evils stem from the decisions of human beings. We live in a causal world, and the things we see around us are effects of existing causes and conditions. There is, as it were, nothing that "just is" any particular way. There is always something that sustains a particular state of affairs. As such, there is no prevailing condition in the world that is truly necessary, only the contingent result of contingent circumstance.

Contingency is a powerful concept. It strips our world of intrinsic, given meaning. It also forces us to understand ourselves as both agents and patients of causation. We are affected, but we also affect. Even if the causes of world hunger or distributive injustice are systemic and institutional (and some are), by surrendering to this contingent state, we implicitly endorse all of the causes and conditions that create that state. We validate the unfairness that prevents food from reaching the people who need it, that confines medication to the boundaries of patent law and wealthy patients, that ensures that some people have to work much harder to achieve an economic status that others reach through failure.

Our task is to make the world fair, to correct these injustices and leave the world better off than we found it. Causation is both blind and brutal. We can be likewise cold and accepting, or we can choose the harder path and create kindness and compassion. The choice of what we accept is ours, and the remaining question is how to do it, not whether we should. 

Wednesday, May 14, 2014

History and Identity

Yesterday the European Court of Justice issued an important ruling that has the tech policy world buzzing about privacy, search engines, and personal history. In short, the court ruled that the EU Data Protection Directive gives a person the right to demand that old information be purged from search results. The particular case involves an attorney seeking removal of links to announcements about a real-estate auction connected with a debt settlement in 1998. While the ECJ made a number of interesting moves in the case (including a welcome argument that the distinction between data processors and data controllers does not make as much sense today as it did in 1995 when the Directive went into effect), the big consequence everyone is talking about is the right to be forgotten.

The long memory of the Internet is a feature it's hard not to love and fear at the same time. Whether you have something to hide or not, if it's on the Internet, it stays on the Internet (most of the time, at least; all of the time if you count the Wayback Machine). For most of us, this means that our embarrassing undergrad escapades remain on Facebook for the world to see if they look hard enough. For most of us, it means that we are constantly hearing about politicians or other public figures with this or that skeleton in the closet. For the most part, it's a good thing. The long memory of the Internet promises us that we will not lose another Library of Alexandria or Dharmaganja, the great library of Nalanda.

On the other hand, it also means that we are very easily haunted by our pasts. Even analyses critical of the ECJ ruling (this one presents an argument worth thinking about) acknowledge that the debate is about the power to shape one's public image. On the one hand, we value honesty, truth, and accuracy, but on the other hand, we value autonomy, which presumably includes choosing how we present ourselves to the world.

This case, and similar ones mentioned in the ruling and other analyses, brings to the front important questions about identity. Can we understand who we are as nothing other than points of data, or is our identity more located in the narrative that links those points together? Quantified-self tools advocate the former, promising to liberate us from false self-perceptions and cleanse bias from our self-reflection. As such, there is clearly a liberating potential in such tools, and embracing mindfulness of objective metrics can have a powerful revelatory effect.

Nevertheless, there is also a risk of bondage to data. Individual data points are by themselves very uninteresting. They are static, frozen points in time, so they do not really do anything. They merely sit as recorded. The patterns we draw between those points, the transitions and changes, turn that data into an event, an event we know as human life. Even in a post-modern context where we understand that there are many possible stories to tell about the same dataset, selecting and validating a story is a deep expression of autonomy. In the end, we must look back on our lives, on a collection of frozen points, and decide, for ourselves, whether we regret or celebrate, whether we feel relief or anguish.


Insofar as the right to be forgotten allows us to take ownership of who we are now, it contributes to that autonomy. Honesty and truth are important ethical values, but so is forgiveness. If we shackle ourselves entirely to our pasts, if we allow others to tell our stories through points of data, we do not allow people to change, to express regret for what they have done, to make amends, and to move forward.

It is important to remember something here: we are not talking about removing information, only about removing results from an index. Anyone who wants to find out can still do so through regular channels of public record. The records simply do not appear in search results that might color the present with a past more than 15 years distant. Maybe the case should be very different for different issues or types of information. Still, we should remember that the issue at hand isn't as simple as history or the preservation of information or even the crafting of a public persona. It is also about the crafting of personal identity, something very difficult to do when we are reduced to a static array of data.
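
To make that distinction concrete, here is a minimal sketch in Python, with names and data entirely of my own invention and nothing like how a real search engine implements delisting: "forgetting" removes a pointer from the index, while the underlying record stays in the public archive.

```python
# Toy illustration only: delisting removes an index entry, not the archived record.
public_archive = {
    "gazette.example/1998/auction-notice": "Announcement of a real-estate auction...",
}

search_index = {
    "attorney debt auction 1998": ["gazette.example/1998/auction-notice"],
}

def delist(query, url):
    """Remove one URL from the results for one query; the archive is untouched."""
    results = search_index.get(query, [])
    if url in results:
        results.remove(url)

delist("attorney debt auction 1998", "gazette.example/1998/auction-notice")

print(search_index["attorney debt auction 1998"])               # [] -- no longer surfaced by search
print("gazette.example/1998/auction-notice" in public_archive)  # True -- the record still exists
```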

Tuesday, April 29, 2014

Autocorrect: A Sailor's Perspective

For the most part, autocorrect is a useful tool for avoiding spelling mistakes. Sometimes, it feels more like a very subtle tool for censorship, and that really passes me off. Swearing is not always the last resort of the unskilled communicator. In the right hands, it can be a dam good way to express frustration or even righteous indignation. If I want to communicate my emotional state more than any semantic content, a good round of cursing just does the ducking trick. Unfortunately, autocorrect developers must keep in mind that parents will get upset if their computer teaches their kids to swear, so I understand the rationale. Still, I would appreciate being treated like an adult and having an effective "suggest offensive words" option. I've seen such options, but they work like add, and when I'm trying to send a quick message that contains a swear, I don't want to type out the entire word or phrase like an assume. In short, autocorrect, I don't want to live in your censored world. Next time you think I should avoid cursing, you can go duck yourself, and when you're done, just sock a great big bag of docks.
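
For what it's worth, the option I'm asking for is easy to sketch. Here is a minimal, purely hypothetical Python illustration of a suggestion filter with an opt-in "suggest offensive words" setting; the word lists and function are my own invention, not any real keyboard's API.

```python
import difflib

# Hypothetical word lists; a real keyboard ships far larger dictionaries.
SAFE_WORDS = {"dam", "duck", "dock", "sock"}
OFFENSIVE_WORDS = {"damn"}  # stand-in for the words autocorrect refuses to suggest

def suggest(typed, allow_offensive=False):
    """Return close matches; offensive words appear only if the user opted in."""
    vocabulary = SAFE_WORDS | (OFFENSIVE_WORDS if allow_offensive else set())
    return difflib.get_close_matches(typed, list(vocabulary), n=3)

print(suggest("damh"))                        # censored mode: ['dam']
print(suggest("damh", allow_offensive=True))  # adult mode: ['dam', 'damn']
```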

Thursday, April 3, 2014

Correctly Valuing the Writing Process

There are no good writing days or bad writing days. There are only days where there is writing and days where there is no writing. Recently, my main professional ambition has been to minimize the latter, preferably limiting them to weekends and the occasional holiday. The imperative originated in a need to pick up the pace on my research and to meet a submission deadline on a promising call for papers. I'm glad to say that I made the deadline and decided to use the momentum to send out some projects that had been lying fallow for a couple of months. I went from having nothing significant in submission to having three articles in submission in the course of four days. Three submissions, four days.

Beyond those submissions, I started on another three projects, some now in draft, some still in extended abstract. Now that I have the back-burner projects out of the way, I can start some revision and further research on the current projects, and hopefully get those off sometime soon as well. The best part is that I can feel my arguments changing, heading into deeper theoretical concerns that shape debates rather than the debates themselves. I don't know how all of that will come out, but the articles can't get rejected if they're not submitted. At this point, I have enough encouragement to keep going, which is the most important part of any routine. With that, I should go read one more article before I collapse for the night.

Tuesday, April 1, 2014

The Death of Socrates

Today, I told my students that while Socrates was not the first philosopher, he is the one who really set what would come to be called Western Philosophy in motion. I don't know exactly how accurate that view is, since there were a number of odd mystery cults circulating in the Mediterranean, Pythagoras and his crew, for instance. Nevertheless, there is something about the drama of the trial and death of Socrates that seemed to energize the philosophical project, such as it was at the time.

Even if created in retrospect, the narrative of a person dying for asking questions sends a powerful signal that there is something important about what he was doing. Remember that it's not quite right to say that Socrates died for his ideas. The early dialogues offer little in the way of a positive project, and what is there is usually attributed to Plato working out the early stages of his project. In the end, Socrates is executed because asking questions is dangerous. It undermines the structure of authority, erodes certainty in traditional values, and disrupts social routines.

Nevertheless, it's also the ability to question that grounds our capacity for self-understanding and rational thought. The death of Socrates marks a cultural awakening to sentience mirrored in the roots of other great world traditions, named and unnamed. All of us inherit those traditions, whether we recognize the lineage or not.

Saturday, March 29, 2014

Living Philosophy

Over the last year, my professional life has undergone a number of major changes. Obviously, moving to the Netherlands is on the list, but what I have in mind are more the differences in how I view myself and my work. While finishing my dissertation gave me a sense of completion, it took a while to find a well-developed sense of myself as a philosopher. In particular, I have a very different relationship to my research today than I had when I defended my dissertation.

The dissertation stage is filled with lots of uncertainties and fear along with the other challenges of actually writing the thing. For one thing, I had never written anything that long or unified. I had to design and execute a book-length argument on one topic, and I had to say something relatively novel. Thankfully, my supervisor Bruce Brower was an excellent mentor. He helped me identify the topic very early in my doctoral studies, so I spent two years or so thinking about it before I began principal writing. We worked the topic into one of my qualifying assignments, so I had the chance to do some preliminary work, and he helped me through applying for the fellowship that supported one year of writing.

During the writing, the research was a task, a very demanding task. My life became a routine of read-write-recover or occasionally write-read-read. Through the hours spent writing and revising, the research was a challenge, a wall I had to climb. It made demands like a physical force, pulling me just enough to force me to trudge through the rough terrain. It became a real presence in my life, an invisible ball and chain.

For about a year after defending, the ball and chain hung around. I knew I was supposed to continue working, but I had only a vague idea about how to begin writing and publishing articles. Then again, other postdocs seem to report the same learning curve unless they have solid early-career mentorship, so I know I'm not the only one with that problem, at least. I did have a sense that the research wasn't supposed to be like that. There is a cultural narrative about relating to one's work, especially creative/intellectual work, as a bittersweet challenge. It pushes the scholar, but it also motivates. The philosophy drives you, not the other way around.

Well, I felt not so much pushed but dragged. There were topics that I wanted to develop, but I still didn't feel all that sure about how to go about it. I wrote but just couldn't make the arguments do anything for me. Last summer, I started to rethink my relationship with philosophy to find a way to turn things around.

One thing I appreciate about the Buddhist tradition is the sense of lineage. The Dharma stretches back in an unbroken line to Siddhartha Gautama. Every Buddhist teacher should understand that the insight she has is the very same insight held in the mind of the Buddha. The Dharma is a living thing, passed through the centuries in texts, by word of mouth, and by example. These ideas give the Buddhist tradition a resonance, an existence alongside our own.

Thinking about these things, I began to ask myself what philosophy is, exactly. This is a question I became absolutely sick of during my MA studies in Vancouver. Many philosophers pose that question, and few agree. At the time, I thought it was a hindrance to philosophy to worry about what it is. It is something, and we do it, so let's get on with it. Now, I can humbly say that I understand why the question arises over and over again. The answer to that question is the name that gives philosophy its living form. Philosophers don't have to agree on what it is completely, but they should know what it means to them.

I started rereading Wittgenstein, then Heidegger, Carnap, Quine, Nietzsche, Descartes, and Kant. This time, I didn't pay attention to the content of their arguments but to the care with which the arguments are framed. In their master works, philosophers put a great deal of effort into putting forth something that is in principle very difficult to describe or articulate in words. If the "something" were obvious, the description would be trivial. At its best, philosophy plunges into the fringes of our understanding. For me, philosophy became Vipassana. In Pali, the word means "insight/investigation" and is used to describe meditation practices directed at a clear understanding of reality.

At this point, two things happened. First, I had an evaluation standard for my own work. If it's too easy to say, I haven't thought about it enough. Every article should contain a key insight that is difficult to see and understand, but can be brought out with great care. Second, by giving it a name, my research became a living thing. It now pushes me to the ends of my understanding and motivates me to go as far as I can. I'm still working on testing, developing, and writing, but I now know what it means to hit the mark, even if I haven't hit it yet.

Wednesday, March 26, 2014

Pedagogy of Prestidigitation

I put what might be too much thought into presentation when I teach. I say it's too much because I don't know how much of it comes across to my students, but insofar as a teacher must entertain, it seems appropriate to work on one's showmanship. Over time, I've developed some particular aesthetics of teaching that both keep me motivated and focused on the task, and hopefully contribute something unique to my students' experience.

My basic model is jazz improvisation, for reasons perhaps best understood by fellow initiates of Robert Anton Wilson. The presentation slides give me an overall structure and contain the essential information. For the most part, the slides are supposed to be springboards for verbal improvisation. I like the idea of running discussion sessions, and when it happens I enjoy it, but I find it hard to get the students going. In introductory ethics courses, when I include assignments that require them to read before coming to class, it's easy because everybody knows what's right (before they take philosophy, at least). In most courses, I think I scare them too much. It's not intentional or anything, but I've been given to understand that I have a forceful presence. As much as I try to dial it down, it seems to come across anyway.

Still, that's just about lecture style, and not really all that different from the most general public speaking advice. In addition to that, I give some thought to the peripherals. For instance, I value minimalism in my self-presentation. Remember, I said my conditioned response to teaching is to reach for the chalk? I value that model because I (usually) don't have to bring the chalk and board.

The blackboard is classroom infrastructure; I walk into a room and expect to see one. The tools are simply at hand, something I find in the environment, take up, and use. For most of the time I taught at Tulane, I had a pile of books and notes and papers to hand back. Way too much baggage for someone teaching about letting go and liberation, right? As I got more comfortable in the classroom, I started trying to scale back and bring only what I really needed. At Twente, the classroom tech is so reliable that I don't even need notes or textbooks. I can walk in with no materials, log into a computer, fire up my Google Presentation, and get to work.

If there is any magic in it, it happens there. To walk in with nothing and create something wonderful using nothing other than what is to hand is the work of an illusionist. Behind the scenes, there's preparation and reading and notes and consultation, but the students don't see that, and they don't need to see it. If I've done it right, they're too occupied with the illusion to think about it.

At least, that's what I tell myself the good days are like. I know it's more like some stuttering, some swearing, the occasional funny joke, and the ubiquitous unfunny joke. Still, if I don't imagine something better, I have no incentive to improve. Even fictions have their function, in the end.


Tuesday, March 25, 2014

Flipped Off Pedagogy

Everyone who works in education is trying to figure out what to do with the new capabilities afforded by IT. The most prominent example is the move toward MOOCs, the massive open online courses made visible by the efforts of EdX, Coursera, and associated institutional partners. For those of us in the trenches, MOOCs represent the least imaginative application of information tech to the classical challenge of enlightening young minds. Think about it this way: you have any and all documented facts at your fingertips, and the ability to connect with experts anywhere in the world, and you use it to turn university lectures into a Netflix product? Michael Sandel is a talented lecturer, but I don't see philosophers binging on his Justice course the way we all do with Orange is the New Black.

So, if MOOCs aren't the big challenge, what is? As far as I can tell, educators (self included) have the most trouble coping with the "flipped classroom." A "flipped classroom" is one in which the teacher takes the backseat and acts as a facilitator or (maybe) a critic for student-centered activities. For the cynical, the concept caters to the Millennial affection for the spotlight, but even if that's a driving force, I have some sympathy for the model. After all, with the massive external memory of Wikipedia available, rote memorization is obsolete. The students can get the facts from the source, just like we (experts) do. We all use the same tools now, so there's no magic in it. I use Google Scholar because the interfaces for databases like the Philosopher's Index and JSTOR run as smoothly as a house drives.

We don't need to convey facts, but we do have something to convey, something about how to use available research tools, and something about how to put all of that information to work for you. As that's the case, the best thing we could do for our students is put them to work and help them along with the hard parts. Show them how to get started, how to get unstuck, how to evaluate sources, how to master a field. Show them how to do. Flipped classrooms are great environments for doing all of that, but they have to be used well. We have to have well-designed projects, something more sophisticated than the five-paragraph essay, please? While we're at it, something more entertaining than presentation slides would be nice, too.

The problem for many of us is that we have no idea how to do any of that well. We weren't trained that way, and we weren't trained to teach that way. I have tons of sympathy for the flipped classroom. I work to include more group projects and unconventional assignments in my classes in an attempt to convince my students that they can be keen analytic and critical thinkers about things they care about, not just things I care about. Nevertheless, my conditioned response to a teaching situation is to go old school. Give me a blackboard and a pile of chalk, and I could teach the world. If the topic is Buddhism, I wouldn't even need notes.

Last year, I started making presentation slides because I know my students expect them, but I don't do anything fancy with them. It's enough of a challenge to condense the lecture into slide-sized chunks. The exercise has shown me the value of creating and communicating some structure: a map of the topic to be covered, detailed signposts along the way, and a summary of what they should take away. At the same time, I don't see slide presentations as much in the way of innovation in the classroom. I'm not doing anything that couldn't have been done with Ektachrome slides in the 1950s, and it goes without saying that my free-form verbal improvisations represent a pedagogy older than Plato.

The bottom line is that as technology and culture (especially media culture) change, pedagogy has to change. Furthermore, the rate of change may outpace the normal generational turnover of teachers, so we have to change, too. I don't have answers, but I am willing to explore them with my colleagues and my students. I don't think we'll figure it out without some experimenting, so I hope my colleagues will join me in being courageous enough to try new things, and that our students will be tolerant of us when we fuck it up. 

Monday, March 24, 2014

Ambivalence on Ethically Challenging Research

I'm in the middle of one of those research projects I feel obligated to do but at the same time can't bring myself to feel entirely passionate about. There really is nothing that brings out ambivalence in me like ethics and cyber-warfare. First and foremost, I am no big fan of war, warfare, or the military broadly construed. For that reason alone, the ethics of war should be a topic of great interest. If it's the case that the person most fit for office is the one who wants it least, then the best war ethicist should be an absolute pacifist. Think about it this way: what would war ethics look like according to Genghis Khan or Napoleon? I think Atlanta still wakes up in hot sweats over Sherman's ideas about conducting a just war.

Of course, when you actually have to think about the ethics of just war, you have to confront the realist/idealist problem. War is awful and nothing good comes of it (anyone who says otherwise has way too much invested to be unbiased), so the most just war is the one we avoid. In a perfect world, there'd be no armed conflict. Unfortunately, our world is somewhat far from the best imaginable world, even if Leibniz is right and it's the best possible one. As such, it feels worse than useless to devote space to an ethics of war that begins and ends with a norm against engaging in armed conflict. Even if it's right, it'll be too readily drowned out by warfare apologists who give the status quo more room to operate, even if it would be better for all humanity if the military-industrial complex closed up shop immediately.

So, what's the ethical course for a would-be war ethicist? First, a healthy dose of realism: just as there is war, there is good philosophical thinking about it. Just War Theory has a long tradition of outlining the framework for conducting something that could be called an ethical war. Second, a healthy dose of idealism: even if the norm is demanding, a strong argument has force. If there's a general consensus that doing a particular thing turns a justified actor into a malicious actor, there will be a need to address that consensus before crossing the line. It may not prevent the pushing of the button, but it gives sanity and reason one more chance to prevail.

Finally, focus on what happens when things go wrong, because that's what will happen. I can say lots of things about the ethics of cyber-conflict, but the most useful things I could say concern how to remain a justified actor in a world of malicious actors. What are the responsibilities of the defender with regard to remaining ethical when the opponent has forsaken ethics? I feel generally ambivalent about "sinking to their level" arguments, but I do think that when you confront an immediate moment of injustice, you learn something important about yourself. The choices made beyond that moment will determine who you are and how you evaluate yourself, so it's important to have some clear choices in view. If I can contribute a picture of just reactions to malicious actors, then I offer something that is both useful and a step in the right direction.

Friday, March 14, 2014

Surveillance and Servitude

A response to Kevin Kelly’s “Why You Should Embrace Surveillance, Not Fight It” in Wired

In “Why You Should Embrace Surveillance, Not Fight It” Kevin Kelly offers some possibilities for a positive view of ubiquitous surveillance. The solution to our concerns about privacy, according to Kelly, is more, rather than less surveillance. By embracing “coveillance,” collective monitoring of one another, we can recapture some of the influence and transparency currently lost to surveillance, top-down monitoring of citizens by an authority.
While Kelly is right that coveillance gives us transparency, he may be wrong about freedom. Let’s begin with the idea that Big Data firms will pay coveillers for self-monitoring and reporting. The idea that we could make our data more valuable by invoking a sense of entitlement and demanding direct compensation misunderstands the “big” in “Big Data.” The personal data of one citizen is really not all that valuable to data analysis. You can’t create general projections about the behavior of people without the collected data of many individuals. When Big Data gets big enough, very personal information does not matter at all. That’s why Google can happily anonymize the information it collects about you. It doesn’t need the details that distinguish you from someone very much like you. It just needs enough information to draw some conclusions about general trends such as buying habits.
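To illustrate the point (setting aside how hard robust anonymization is in practice), here is a minimal Python sketch with made-up data showing that the aggregate trend survives once the identifying field is stripped; it is an illustration only, not a claim about how any real analytics pipeline works.

```python
from collections import Counter

# Made-up purchase log; for trend analysis the "who" column is dead weight.
purchases = [
    {"user": "alice@example.com", "category": "books"},
    {"user": "bob@example.com",   "category": "coffee"},
    {"user": "carol@example.com", "category": "books"},
]

# "Anonymize" by dropping the identifying field entirely.
anonymized = [{"category": p["category"]} for p in purchases]

# The general trend -- the thing Big Data is actually after -- is intact.
trends = Counter(p["category"] for p in anonymized)
print(trends.most_common())  # [('books', 2), ('coffee', 1)]
```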
If we do begin to press an entitlement to our personal data and demand payment in exchange for consistent and active self-monitoring, how much will they pay and for how much monitoring? Clearly, Big Data is profitable with what it can get for free right now, so we have to imagine that contracted monitors will get paid a little bit for a lot of inconvenience. After all, there’s little incentive for Google, Microsoft, or Facebook to pay you for what you already give them in exchange for some mighty convenient services.
In asking for some compensation beyond free email, news, and cloud storage, we may find ourselves in binding contracts inspired by our favorite mobile service providers. Free email? Sure, for two years you get a 500GB searchable inbox as long as the provider gets to track every email-related activity and log all contacts to form a social profile. Did I mention you’ll have to click a pop-up or sign in again if you leave your browser open but inactive for more than 10 minutes? Well, if you don’t like the terms, you can pay our opt-out fee. Indentured data servitude doesn’t promise the consumer more freedom.
Likewise, the idyllic image of life in tribal societies where everyone knows everyone else’s business obscures the extreme constraints of a forced public life. Let’s not forget that the same highly open societies that humankind lived in for hundreds of years were societies of little freedom. Tyrannical chiefs or high priests could ostracize or punish anyone for any difference from the normal. It’s no coincidence that those same authorities also decided what is and is not normal.

We worry about losing privacy for a good reason: the loss of privacy is the loss of freedom. If we cannot choose what we present about ourselves and how we present it, we lose the freedom to decide who we are and who we trust. We lose the freedom to be different, to be unique, and to offer that uniqueness as a token of trust and companionship. In 1921, Russian novelist Yevgeny Zamyatin completed We, a dystopian exploration of total transparency. In We, the citizens live in a city of clear glass. Everyone can see everyone else, and everyone is accountable to the same standards and rules. Zamyatin’s characters live out fully transparent lives in servitude to their city, unable to change their society or themselves for fear of deviation and punishment. Transparency is their master, and none of them are free.