Ambient Location and the Future of the Interface

Amber Case (CC-licensed by webvisionsevent)

Amber Case is a founder of Geoloqi and a cyborg anthropologist. Some of the themes that interest her are “ambient intimacy”, “the automatic production of space” (you can put stuff in your computer, but it doesn’t get heavier), “the digital dark age”, “persistent paleontology”, “information jetlag”, her dislike for skeuomorphs and “cellphones as providing temporarily negotiated private space”. According to her we are all cyborgs now: the minute you look at a screen you are in a symbiotic relationship with technology. Most tools from the beginning of time were extensions of our physical selves; right now we are starting to see tools emerge that are extensions of our mental selves. From a fixed interface we have moved to liquid interfaces on our screens which can be infinitely configured. What will be the next step in the interface world?

Steve Mann was already wearing a computer in 1981. He was the first person to lifestream his life. He wrote a great paper titled WearCam, the wearable camera. He worked on the concept of diminished reality, where he would cancel out the ads and brands that he didn’t like. One of his issues was: how do you type? So the Twiddler, a one-handed USB keyboard, came out. He could get about 90 words a minute with it. He created contextual notification systems for “remember the milk” type of things. He built in face recognition software. He went from 80 pounds of gear in the early 80s to the current situation where the information is laser-projected onto his eye.

The Twiddler

“Calm technology” comes out of research done at Xerox PARC in the 1990s. It is exactly what it says it is: actions become buttons, interfaces become invisible and actions are trigger-based. The haptic compass belt is a great example of calm technology. At Geoloqi they realised that with geo-technology you can now make invisible buttons: your lights can turn on, for example, when you enter your geofenced house. This will allow your phone to become a remote control for reality. She showed a lot of examples of what you can do with geofencing, one of which was mapattack.org.
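At its core, an “invisible button” like this boils down to a point-in-circle test on a stream of location fixes, firing only on the outside-to-inside transition. A minimal sketch (the coordinates, radius and `on_enter` callback are invented for illustration; this is not Geoloqi’s actual API):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class Geofence:
    """A circular invisible button: fires once on the outside->inside transition."""
    def __init__(self, lat, lon, radius_m, on_enter):
        self.lat, self.lon, self.radius_m = lat, lon, radius_m
        self.on_enter = on_enter
        self.inside = False

    def update(self, lat, lon):
        """Feed in a new location fix from the phone."""
        now_inside = haversine_m(lat, lon, self.lat, self.lon) <= self.radius_m
        if now_inside and not self.inside:
            self.on_enter()  # e.g. turn the lights on
        self.inside = now_inside

events = []
home = Geofence(52.37, 4.89, 100, on_enter=lambda: events.append("lights on"))
home.update(52.40, 4.89)  # kilometres away: nothing happens
home.update(52.37, 4.89)  # crossing into the fence: lights turn on
home.update(52.37, 4.89)  # still inside: no duplicate trigger
```

Tracking the previous inside/outside state is what keeps the “button” from firing on every location update while you sit at home.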

The conclusion of her talk: The best technology is invisible, gets out of your way and helps you connect to people.

Future15 at SxSW

One format at SxSW is called “Future15”. These are five solo presentations of 15 minutes each. I attended one of these sessions.

Demystifying Design: Fewer Secrets, Greater Impact

Jeff Gothelf is a lean UX advocate with a book that will hit the stores soon. He describes what the design process looks like to people outside of the profession: basically, it looks like a mystery. He is convinced that designers actually experience this mystery as empowering and as giving them control. Jeff thinks the mystery isolates the designers from the non-designers. It creates artificial silos, which quickly degenerate into an us-and-them situation. There is no shared language or common understanding, which means that a project manager is usually the person in the middle. This will get you to the end-state, but it will be lacklustre. He says that transparency is the key component to making the best products, mainly because the efficacy of collaboration is directly related to team cohesion. The onus is on the designers to demystify the way that they work. If they do this, the “fingerprints” of the customer and the other team members will show up in the work. This doesn’t make everybody a designer; it actually makes the role of the designer more valuable. One thing that bothers designers about this way of working is that it “shows your gut”, and that might be very tough to do.

There are a few simple things that designers can do to help this process along:

  • Draw together (share “trade secrets”)
  • Show raw works (frequently)
  • Teach the discipline
  • Demystify the jargon
  • Be transparent

The Art of the No-Decision Decision

Peter Sheahan is a founder of Change|Labs and has all the mannerisms of a bullshit artist. He talked about decision making. He drew a simple framework showing that people make decisions on the basis of their identity or of the consequences. These he calls “decisions”. Habits and structural things also make us make decisions; these he calls “no-decision decisions”. He gave the example of a Japanese toilet that measures people’s blood sugar levels and contacts a doctor when the levels are not what they need to be: a no-decision decision. I guess his main point was that we need to remove the user from the experience and design technology and the environment in a way that gets the behaviour you want. There are four things we can do to help people make no-decision decisions:

  1. Be intentional with design (physical and flow)
  2. Build in real-time feedback loops
  3. Put “new” behaviour in flow
  4. Put new behaviour “in the way”

Creating Engagement: Brains, Games & Design

Pamela Rutledge at SxSW

Pamela Rutledge is a psychologist. She talked a little bit about the different systems in our brain and their ages (the reptile brain, the emotional brain and the new brain, you know the drill). The only way to really engage the “new brain” is by turning the experience into a story. She then went on to talk about “flow”: the optimal balance between anxiety and boredom, challenge and skill. The easiest way to get the brain into a flow state is by using story. Story is the real secret weapon for getting the brain engaged, because stories are both instinctual and abstract and can speak to all parts of the brain. She finished off by saying the following: “You design for people. So the psychology matters.”

Why Mobile Apps Must Die

Scott Jenson from Frog Design made a humble proposal: mobile apps must die. The usual reaction is for people to ask him what he is smoking. He doesn’t mean to say that native UX isn’t currently better than web UX. What he does mean is that by focusing only on native UX we have stopped thinking about our design future. To him we are currently at a local optimum. He sees three trends:

  1. App glut. Are we really going to have an app for every store we walk into? The user is becoming the bottleneck: customers are now gardening their phones and removing cruft. The pain of apps is starting to outweigh their value.
  2. Size and cost reduction. The cost of computing and connectivity is going down. There are going to be an incredible amount of devices.
  3. Leveraging other platforms.

These three trends all work against the native app paradigm. He sees a lot of “just-in-time” interaction with the web, and installing an app is too much work for these types of interaction. It is hard to move away from the “app” paradigm, but Scott showed us an alternative: active RFID, GPS, Bluetooth and Wi-Fi are four technologies that can show us what is nearby without us having to discover it. Companies that can deliver on this promise might become the next Google. This will be hard to do: we don’t yet have the just-in-time ecosystem where phones ask “what’s here?” and other objects answer “I’m here”. If we don’t start thinking about these things now, then we won’t have them tomorrow.
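The “what’s here?”/“I’m here” exchange can be pictured as the phone filtering and ranking whatever broadcasts it can hear. A toy illustration (the beacon names, RSSI values and threshold are all invented for the example; real discovery protocols are far richer):

```python
# Hypothetical beacon broadcasts the phone can hear: (name, rssi_dbm).
# A stronger (less negative) RSSI roughly means the object is nearer.
beacons = [
    ("bus-stop-schedule", -48),
    ("museum-audio-guide", -71),
    ("parking-meter", -93),
]

def whats_here(heard, threshold_dbm=-80):
    """Answer the phone's 'what's here?' query: keep only objects whose
    broadcast is strong enough, listed nearest-sounding first."""
    nearby = [b for b in heard if b[1] >= threshold_dbm]
    return [name for name, _ in sorted(nearby, key=lambda b: b[1], reverse=True)]

print(whats_here(beacons))  # the parking meter is too faint to surface
```

The point of the sketch is the interaction model: no install step, just a ranked list of nearby functionality that appears and disappears as you move.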

Geo Interfaces for Actual Humans

Eric Gelinas works at Flickr. They are using geolocated photos to make photos more discoverable and easier to navigate: location very often helps to contextualize a photo. To make this easier they created a three-step zoom. The default is zoomed out; when you hover over the map you are zoomed into the city, and when you hover over the point on the map you are fully zoomed in. A lot of map interfaces have traditionally been very awkward: not dynamic, with information overload. A lot of websites are now switching away from Google Maps and moving towards OpenStreetMap data, often in combination with MapBox (e.g. Foursquare, for whom it meant being able to put much more design into the maps and escaping Google’s licensing costs). There is a campaign to help others switch over to OpenStreetMap: Switch2OSM. Eric wrote a lot of code for Flickr to make their maps. Today he might have used something like Mapstraction. Another great library is Leaflet.

In Flickr you can put in your geo-preferences which allow you to hide your detailed location for particular locations. Merlin Mann wrote the following tweet about that technology:
https://twitter.com/#!/hotdogsladies/status/108613619989757952

Right to Be Forgotten: Forgiveness or Censorship

Meg Ambrose and Jill van Matre, two lawyers and privacy policy thinkers, hosted a conversation in which from the outset there was the ambition to answer some of the following questions:

  • Is forgiving and forgetting worth protecting in the digital age?
  • How does the Right to be Forgotten work in EU member countries?
  • Does the First Amendment prevent any possibility of a Right to be Forgotten in the US?
  • How does time change the value of information?
  • Can anything ever really be deleted from the Internet?

From the online summary of the session:

The digital age has eternalized information that was once fleeting, and the Right to be Forgotten has gained traction in the EU. A controversial aspect of these rights is that truthful, newsworthy information residing online may be removed after a certain amount of time in an attempt to make the information private again. Two compelling camps have arisen: Preservationists and Deletionists. Preservationists believe the web offers the most comprehensive history of humanity ever collected and feel a duty to protect digital legacies without censorship. Deletionists argue that the web must learn to forget in order to preserve vital societal values and that threats to the dignity and privacy of individuals will create an oppressive networked space.

The Star Wars Kid: undeletable

They kicked off the conversation with the example of a girl in college who did something stupid which was posted on Facebook and surfaced six years later. We were also asked to remember our most embarrassing moment and then to imagine it being posted on the Internet and showing up as the first result in a search for our name. They then handed out a roadmap for the discussion (which I think some of the other discussion sessions could have used).

Forgetting is incredibly important to our emotional health. How human do we want our technologies to be? This becomes more important now that it is becoming harder to keep yourself out of the online context and you are forced to live some of your civic life online. Digital life is also core to our expression rights. Somebody in the audience had a disability affecting his hands. He is now scared to post pictures online of himself being happy, because his disability insurance might be taken away if he doesn’t look “disabled” enough.

In general the tone of the discussion seemed to be very pro right-to-be-forgotten (so deletionist). One German woman brought in the perspective of her job in the press: the press is very nervous about how this right could be used to censor it.

My personal question on this topic relates to the ability to reinvent yourself. This will become much harder once everybody has something like a “Facebook timeline”. The assumption behind this seems to be that people don’t change and that identity is a constant concept. This semi-objective (it still is a subjective lens) digital history might become the single source of truth about who you are.

There are a lot of behaviours and social norms emerging that are helping us cope with this situation. There are also options to enforce the law through mandated technological solutions.

Welcome to the Age of Hyperspecialization

Christina Hamlin, a technology and design consultant, and Robert Hughes, president and COO of Topcoder, led a conversation that was introduced as follows:

The work of the future will be atomized, with many workers doing pieces of what is today a single job. The hyperspecialization of workers may be inevitable given the quality, speed and cost advantages it offers, and the power it gives individuals to devote flexible hours to tasks of their choice. Just like craft workers of the past, knowledge workers, or hyperspecialists, will engage in peripheral activities that could be done better or more cheaply by others. Using real world business examples the panel will explore directed innovation through hyperspecialization.

The discussion was based on a Harvard Business Review article titled The Age of Hyperspecialization. From the summary:

Just as people in the early days of industrialization saw single jobs (such as a pin maker’s) transformed into many jobs (Adam Smith observed 18 separate steps in a pin factory), we will now see knowledge-worker jobs — salesperson, secretary, engineer — atomize into complex networks of people all over the world performing highly specialized tasks. Even job titles of recent vintage will soon strike us as quaint. “Software developer,” for example, already obscures the reality that often in a software project, different specialists are responsible for design, coding, and testing.

Or check out this video by Thomas Malone from MIT:

[youtube=http://www.youtube.com/watch?v=slK1RbPPGqY]

The beginning of the hour was mostly dedicated to the methodology that Topcoder uses for software projects. They use many true specialists (who compete against each other for jobs on these projects) and then a generalist (or co-pilot) whose task it is to pull everything together. One problem with this model was addressed by my colleague Ronald In’t Velt: people might lose passion for their job (and thus engagement) when they have too narrow a focus in their specialty. According to Christina some people actually enjoy digging down into their specialization, whereas others still manage to reach outside their scope, simply because they are interested.

One question that I was asking myself is how you prove your (or someone’s) competency in a very specialized field. Topcoder’s solution to this is to focus on outcomes rather than on skills. If people have shown that they can create things that users like or that fulfil a need, then that is a good predictor for the next project. For me this does not solve the inherent paradox: we need hyperspecialized people because our needs have hyperspecialized too, so there is a big chance that you are embarking on a project for which there are no previous outcomes. I am not sure that either of the presenters has really thought hard about this issue. If they don’t see it as a problem, then they are likely working with specialists rather than hyperspecialists.

The Mind and Consciousness As an Interface

Julian Bleecker and Nicolas Nova, both from the Near Future Laboratory, presented on the future of user interfaces. Julian sees that the semantics of the discussion around interfaces leads to a more direct coupling between thought and action: basically brain control.

He kicked off the presentation with a set of clips from science fiction films (e.g. Brainstorm). In one of them people were able to directly control replicant versions of themselves. If you are trying to control a computer, then it is likely that you will need to concentrate on a single thing, which goes against the natural way our brains work. We can’t let our minds wander anymore. What does that mean for our imagination? He then focused on how hands are very much a way in which we exert control over the world.

After Julian’s cultural backdrop, Nicolas showed some real (scientific) examples. He showed the “hello world” tweet of mind control which was sent by an EEG, a monkey operating a robotic arm with its brain and a weird device made by Neurowear:

[youtube=http://www.youtube.com/watch?v=w06zvM2x_lw]

There are basically two ways of creating this type of interface:

  • By implanting sensors directly and invasively into the brain. They use this a lot in research on how to help people with disabilities.
  • There are also non-invasive solutions using EEG or fMRI. We are getting better at interpreting the data that comes out of these measurement devices.
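To give a flavour of how the non-invasive route turns brain signals into input: one well-known family of EEG interfaces (SSVEP-based BCIs) lets each on-screen target flicker at its own frequency and then looks for the strongest matching component in the recorded signal. A minimal sketch using the Goertzel algorithm on a synthetic signal (the “EEG” below is fabricated for illustration, not real data):

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Goertzel algorithm: power of a single frequency component in a signal.
    Cheaper than a full FFT when you only care about a few known frequencies."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest frequency bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthetic one-second "EEG": a strong 10 Hz oscillation (the target the
# user is looking at) plus a weaker 17 Hz one, sampled at 256 Hz.
rate = 256
t = [i / rate for i in range(rate)]
signal = [math.sin(2 * math.pi * 10 * x) + 0.3 * math.sin(2 * math.pi * 17 * x)
          for x in t]

# Each candidate frequency corresponds to one on-screen target.
candidates = {hz: goertzel_power(signal, rate, hz) for hz in (10, 12, 15, 17)}
detected = max(candidates, key=candidates.get)  # -> 10
```

This is also a concrete instance of problem 3 below (signal versus noise): with a real electrode on a real scalp the 10 Hz peak would be buried in artefacts and would need filtering and averaging before a decision this simple could be trusted.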

There is a whole set of applications for which this can be used. Examples include: gaming, spelling applications, 2D cursor control, relaxation tool, access to dreams/consciousness, brain training programs, brain to brain communication, a modern day lie detector, mind-controlled whatever (see the Mind controlled parachute) or zen-like interfaces (like the PLX wave).

The interaction design space (or repertoire) that this opens up has these possibilities:

  • Explicit versus implicit user interactions
  • Synchronous versus asynchronous
  • Detection of cognitive states/brain activity
  • Stand-alone brain-computer interface (BCI) or BCI plus other physiological data (e.g. a heartbeat or turning your head)

There are a few problems:

  1. It is easy to measure the base cognitive state of somebody, but it is very hard to reconstruct this semantically.
  2. It will be hard to train users. They will have to learn a new vocabulary and the feedback that you are getting from most of these systems is hard to interpret directly.
  3. Signal versus noise.
  4. Taking context into account is hard. Most existing projects are done in the lab now (the skateboard below is an exception!)

[vimeo http://vimeo.com/37232050]

There are important questions to ask about the future. We need to build an interaction design perspective, raise design issues and not only address technological problems. What is the equivalent of the blue screen of death for brain-controlled interfaces, and what will happen to social norms in the long run?