Dutch Presentation about the Quantified Self (Leren is Meten Weten)

I presented the following keynote (in Dutch) at the e-Learning Event 2012 (an English version of this message is available here):

This presentation explains in five parts why the trend of measuring yourself (quantified self) will have major consequences for how we will learn in the future (you can also download the presentation as a PDF, in which case the overlay quotes on the photos do work, or you can watch a recording of the full keynote):


A short explanation of what an innovation manager does and of the innovation funnel.


The scenario process is explained and the four scenarios that came out of a workshop at the Online Educa are discussed.


Quantified Self

The history of the trend of measuring yourself is laid out. Consumer examples show that it is no longer reserved for scientists and artists.



An exploration of what the Quantified Self trend could mean for learning (in organisations).



There are also risks attached to measuring yourself.



The full presentation can be downloaded here as a PDF.

Erik Duval on Learning Analytics at the e-Learning Event

Erik Duval is a professor at the Catholic University of Leuven. His team works on Human Computer Interaction. In the last few years, he has done a lot of work around Learning Analytics, which he defines as being about collecting the traces that learners leave behind and using those traces to improve learning.

His students at the university do everything (and he means everything) using blogs and Twitter. He stopped giving lectures and instead works with students in a single place a few times a week. This makes it very hard for him to follow what is going on: his courses generate more posts than he can read. If you are facilitating a Massive Open Online Course (MOOC), this gets worse. This is why we do learning analytics. The field is getting a lot of attention now, with a conference and a Society for Learning Analytics Research.

Nike+ Fuelband

Next he mentions the quantified self movement: self-knowledge through self-tracking. If a tool gives you a good mirror of your behaviour, then this might make it easier to actually change that behaviour. He showed many examples from the consumer market (e.g. the Nike+ Fuelband or the Fitbit). He is trying to see if you could develop similar applications for learning. Imagine setting a goal for how many words you want to learn every day and a device that shows you how many you have learned so far that day. He wants to create awareness in students, so that they can “drive” themselves better. This is different from current efforts in learning analytics, which are mostly used to give more information to the institution (Duval doesn’t like that).

He showed us an example of the dashboard that he uses to see the students’ activity on the blogs and on Twitter. The students have access to this information too and can see that data for their peers: openness and full transparency. This measuring leads to externalities that aren’t necessarily good (think of students writing tweetbots to get a good score). Duval depends on the self-regulating abilities of the group of students.
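The word-learning idea Duval sketches could be prototyped very simply. The following is a minimal sketch (all names are my own invention, not from his talk): a tracker that records which words you learned on a given day and compares the count against a daily goal, the way a Fitbit compares steps against a step goal.

```python
from dataclasses import dataclass, field

@dataclass
class VocabularyGoalTracker:
    """Tracks distinct words learned per day against a daily goal."""
    daily_goal: int
    learned: dict = field(default_factory=dict)  # date string -> set of words

    def record(self, date, word):
        # Sets make recording idempotent: re-learning a word doesn't inflate the count.
        self.learned.setdefault(date, set()).add(word)

    def progress(self, date):
        """Return (words learned that day, whether the goal was met)."""
        count = len(self.learned.get(date, set()))
        return count, count >= self.daily_goal

tracker = VocabularyGoalTracker(daily_goal=3)
for word in ["apfel", "haus", "baum"]:
    tracker.record("2012-04-05", word)
print(tracker.progress("2012-04-05"))  # (3, True)
```

The interesting design question Duval raises is not the counting but who sees the numbers: here the learner holds the data, matching his preference for analytics that inform the student rather than the institution.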

At the beginning of each course he tells his students that everything in the course will be open. He might have a debate about this, but he never gives in. He doesn’t think you can become an engineer without the ability to engage openly with society. If a student has genuinely conscientious objections around privacy, then he sometimes allows them to publish under an alias.

If you collect a lot of data about people, then you can make technology enhanced learning more of an exact (i.e. hard) science. He wrote a paper titled: Dataset-driven Research for Improving Recommender Systems for Learning.

This whole field has a couple of issues:

  • What can we measure? Time spent, artefacts produced, social interactions, location. Many other things might be important.
  • Privacy might become an issue: we will know so many things about everybody. One solution might be Attention Trust, which defines four consumer rights for your (attention) data: property, mobility, economy and transparency. Our idea of privacy is changing; he referred to Public Parts by Jeff Jarvis.
  • When does support become enslaving? (see this blog post)

His solution for the problems (once again): openness.

Duval’s talk had a lot of similarities with the talk I will be delivering tomorrow. Luckily we come from slightly different angles and don’t share all our examples. If you attended his talk and didn’t enjoy it, then you can skip mine! If you loved it, come and get more tomorrow morning.

Using Scenarios to Think About the Future of Corporate Learning

At the 2011 Online Educa, I co-facilitated a workshop with Willem Manders, Laura Overton, Charles Jennings and David Mallon in which we used a scenario methodology to gain insight into the future of corporate learning.

This video provides a short introduction into the work that was done there (a thank you to Fusion Universal for producing the video, a transcript is available here):

If this piques your interest, then do read more about the old boy network, the in-crowd, big data and the quantified self on the learningscenarios.org website and follow the Twitter account to stay up to date.

These are only the first steps; we need people to start bringing the scenarios to life, so help us if you are interested.

Memory Feed: Reclaiming a Sense of Place through Mobile AR


Jie-Eun Hwang and Yasmine Abbas are leading a workshop titled: Memory Feed: Reclaiming a Sense of Place through Mobile Augmented Reality. From their introduction:

With Mobile technologies, Augmented Reality (AR) entered a whole new phase. Mobile AR promises to enable in-situ activities and kinds of communications that allow people to solicit memories of places. Nevertheless, a series of mobile apps that simply overlay bubble icons on the camera viewfinder rather limit our imagination for what we could do with this (possibly) innovative, necessary, and if not useful channel of communication.

This session is held in a small sweaty square room in a building that has the boring non-appeal that only municipal buildings can have. After a struggle with both the beamer and the Internet connection (for security reasons nobody can go on the network…) we manage to get going.

The group of participants is diverse: there are some people who consult around social media or around innovation (e.g. Merkapt); somebody working in the research department of an office furniture manufacturer, thinking about the future of work and the workspace; a student who is building a web platform for managing student events; the CTO of Evenium, the app that is used at the conference; and somebody who has started an organisation focused on urban memory as a way to improve the perception of the suburbs.

Jie-Eun teaches in the department of architecture at a university in Seoul. Yasmine is also an architect, writing a book on neo-nomadism. They both focus on how to integrate digital technologies into the urban fabric. They are currently focused on mobile technologies, mainly augmented reality: how do we translate our memories into digital media, and can these technologies be used to regain a sense of place when travelling through the city as a nomad?

Mobile Augmented Reality

Jie-Eun is part of a team developing an AR management platform for the web titled Cellophane, funded by the culture/tourism ministry. One part of the project is mapping cultural expressions (like movies, drama, pictures, drawings and advertising) onto the city. Imagine watching a movie and seeing a place you are interested in. You would then be able to visit the place either virtually or in real life. It can also work in the other direction: what movies were shot in the area? The tool comes with a nice admin interface allowing you to match the cultural expression to the physical space with a simple point-and-click interface.
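The “other direction” query (what movies were shot near where I am standing?) boils down to a geospatial lookup over geo-tagged cultural expressions. This is a minimal sketch of that idea, not Cellophane’s actual implementation; the film titles and coordinates are invented for illustration.

```python
import math

# Hypothetical geo-tagged filming locations (titles and coordinates invented).
FILM_LOCATIONS = [
    {"title": "Movie A", "lat": 37.5665, "lon": 126.9780},
    {"title": "Movie B", "lat": 37.5700, "lon": 126.9820},
    {"title": "Movie C", "lat": 37.4500, "lon": 126.7000},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def films_near(lat, lon, radius_km=2.0):
    """Answer 'what movies were shot in this area?' for a viewer's position."""
    return [f["title"] for f in FILM_LOCATIONS
            if distance_km(lat, lon, f["lat"], f["lon"]) <= radius_km]

print(films_near(37.5665, 126.9780))  # ['Movie A', 'Movie B']
```

A real platform would of course use a spatial index rather than a linear scan, but the point-and-click admin interface described above essentially just maintains this kind of expression-to-coordinates table.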

They have the ambition to push beyond the current capabilities of apps like Layar, which overlay icons and text on the camera view but are quite difficult to use and don’t offer a very good user experience.

Use cases

What invisible elements can we reveal through this medium? What types of data would we like to get (beyond the obvious things like gaming and tourism)? In small groups we prototyped a couple of ideas using a use-case template.

I worked with Catherine Gall, Director of Workspace Futures at Steelcase. We first thought about the potential for mobile augmented technology to help in never making the same mistake twice. This could be at the level of the individual, the organisation or maybe even larger entities like cities. How come you make the same mistake on that tax form every year? Why do you go a second time to a restaurant that you don’t like? We reflected on how a sense of place could help you memorize things.

We finally settled on an idea titled Location-based well-being analytics. Certain places (in the sense of locations, but also spaces), events and situations affect our well-being in a consistent manner (be it positively or negatively) without us necessarily being aware of it. Many companies are now designing little monitors that measure your body for things like activity/movement, calorie intake, blood pressure, temperature, sugar levels and more. In the future these devices might even measure some form of quantified emotional state. Mobile technology could combine your (intended) location with the historical data of these devices to predict how the location will affect your well-being and give out recommendations. This could be useful for people with fragile health or people who are rehabilitating. Alternatively it could just help people become more aware of their own well-being and how the environment affects it.
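The mechanics of the idea are simple to sketch: average the historical well-being readings per place, then warn before you head somewhere that has consistently scored badly. This is a toy illustration of our use case, with invented place names and scores; real sensor fusion would be far messier.

```python
from collections import defaultdict

# Hypothetical log of (place, well-being score derived from body monitors).
# Scores are invented; higher means better well-being.
history = [
    ("open-plan office", -0.4), ("open-plan office", -0.6),
    ("park", 0.8), ("park", 0.5),
    ("gym", 0.3),
]

def wellbeing_by_place(log):
    """Average the historical well-being score for each location."""
    scores = defaultdict(list)
    for place, score in log:
        scores[place].append(score)
    return {place: sum(s) / len(s) for place, s in scores.items()}

def recommend(log, intended_place, threshold=0.0):
    """Predict how an intended location will affect well-being and advise."""
    average = wellbeing_by_place(log).get(intended_place)
    if average is None:
        return "no data for this place"
    return "go ahead" if average >= threshold else "consider an alternative"

print(recommend(history, "open-plan office"))  # consider an alternative
print(recommend(history, "park"))              # go ahead
```

Even this toy version surfaces the awareness effect we were after: the prediction only has to be shown to you before you go for the environment’s consistent effect on your well-being to become visible.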

Other groups had ideas like:

  • Moody community: in a community you would have a wall where you would be able to see the mood of the community as it is aggregated by individual “mood” statements by the residents of the community. This could actually help build a community. Who would use this data?
  • An augmented mirror that you can use to try on clothes in which you can easily change the colour or fit etc.
  • Supporting professional teams during crisis with incredibly relevant and targeted information.
  • Maintenance: the system would recognise the part you are working on and the context of what you are trying to do. It would then be able to overlay extra information on reality, including maintenance history, particular advice or the gesture you need to make.

My personal open questions after the session

  • All of the solutions assume that you are connected to the net. Can we afford to make this assumption, or should we still explore ways of storing the augmenting data locally? Might there be other models, such as a mesh network where the device gets the data from the environment on demand?
  • Imagine a future in which everything you do is recorded in many dimensions (solving the problem of needing to capture your learned lessons). Would this help you in not making the same mistake twice? What kind of interfaces and experiences would be necessary to learn not only from your own mistakes, but from other people’s mistakes? How would you know that a “lesson from a mistake” exists? Would it need to be pushed to you?
  • For current mobile performance support technology we usually think about location, direction, and maybe some RFID technology as “cues” to match the virtual content to reality. What other cues can be used sensibly? Light? Sound? Temperature?
  • A recurring question for me in the last couple of years is whether we start lusting for a non-technology mediated experience of reality. Will we put a premium on experiencing something for “real”? Can you see a future where you have “Augmented Reality Retreat Zones”?