Shifting Trends in Buying Learning

The Panel

My first session of this second day of Learning 2012 was about shifting trends in buying learning.

Masie has been involved in a survey around the learning marketplace. According to him there is an anomaly: there are more dollars available to buy learning than there are robust providers/suppliers able to deliver what buyers need.

One example is the market for Learning Management Systems (LMSs). There is increasing frustration among members of the Learning Consortium that their LMS cannot deliver video very well. People are ready to buy technology to create, edit and deliver video, but they can’t find the right vendor to buy it from. Another area where people want to spend is performance support, yet there is no perception that a large number of solution providers can deliver this. Ironically, content is also hard to get nowadays: the field of content providers has dried up, and those that remain are very verticalized. New ways of doing assessment (e.g. badging models) are also hard to buy. Most people want to use mobile learning as on-the-job performance support and also see it as a way to connect staff: 87% of the respondents are interested in mobile, but the percentage of companies really doing something in the space is in the teens. A similar thing can be said about social and collaborative learning.

Masie also made an argument that e-books will be very prevalent in the future. Everybody has a tablet, but there is no decent model for corporate learning e-book creation. He is pushing Adobe to start creating software that will allow people to author their own learning e-books.

All in all this means there are massive opportunities for vendors in this marketplace.

After Masie’s introduction we had a small discussion on the topic. One thing that came up was the disconnect between relatively young and agile companies with innovative solutions and the traditional procurement processes of big companies. The panel also discussed how difficult it often is to make the step from a small pilot or proof of concept to a larger implementation. Unfortunately, nobody seemed to have a really sharp idea of how to solve the conundrum.

Memory Feed: Reclaiming a Sense of Place through Mobile AR


Jie-Eun Hwang and Yasmine Abbas are leading a workshop titled: Memory Feed: Reclaiming a Sense of Place through Mobile Augmented Reality. From their introduction:

With Mobile technologies, Augmented Reality (AR) entered a whole new phase. Mobile AR promises to enable in-situ activities and kinds of communications that allow people to solicit memories of places. Nevertheless, a series of mobile apps that simply overlay bubble icons on the camera viewfinder rather limit our imagination for what we could do with this (possibly) innovative, necessary, and if not useful channel of communication.

This session is held in a small sweaty square room in a building that has the boring non-appeal that only municipal buildings can have. After a struggle with both the projector and the Internet connection (for security reasons nobody can go on the network…) we manage to get going.

The group of participants is diverse:

  • some people who consult around social media or innovation (e.g. Merkapt),
  • somebody working in the research department of an office furniture manufacturer, thinking about the future of work and the workspace,
  • a student who is building a web platform for managing student events,
  • the CTO of Evenium, the app that is used at the conference, and
  • somebody who has started an organisation focused on urban memory as a way to improve the perception of the suburbs.

Jie-Eun teaches in the department of architecture at a university in Seoul. Yasmine is also an architect and is writing a book on neo-nomadism. They both focus on how to integrate digital technologies into the urban fabric, currently concentrating on mobile technologies, mainly augmented reality: how do we translate our memories into digital media, and can these technologies be used to regain a sense of place when travelling through the city as a nomad?

Mobile Augmented Reality

Jie-Eun is part of a team developing a web-based AR management platform titled Cellophane, funded by the culture/tourism ministry. One part of the project is mapping cultural expressions (like movies, drama, pictures, drawings and advertising) onto the city. Imagine watching a movie and seeing a place you are interested in: you would then be able to visit that place either virtually or in real life. It also works in the other direction: what movies were shot in this area? The tool comes with a nice admin interface that lets you match the cultural expression to the physical space with a simple point-and-click interface.
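The core of such a tool is a two-way index between cultural works and coordinates. Here is a minimal sketch of that idea in Python; all names and the structure are my own assumptions, not how Cellophane is actually built:

```python
import math
from collections import defaultdict

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class PlaceMediaIndex:
    """Hypothetical two-way index: works -> places and places -> works."""

    def __init__(self):
        self.places_by_work = defaultdict(list)  # work -> [(lat, lon, scene)]
        self.all_links = []                      # flat list for radius queries

    def link(self, work, lat, lon, scene=""):
        # The point-and-click step: attach a work to a coordinate.
        self.places_by_work[work].append((lat, lon, scene))
        self.all_links.append((lat, lon, work, scene))

    def places_for(self, work):
        # "I saw this place in a movie - where is it?"
        return self.places_by_work[work]

    def works_near(self, lat, lon, radius_km=1.0):
        # "What was shot around here?" - a naive linear scan.
        return [(work, scene, round(haversine_km(lat, lon, plat, plon), 2))
                for plat, plon, work, scene in self.all_links
                if haversine_km(lat, lon, plat, plon) <= radius_km]
```

A real system would replace the linear scan with a spatial index, but the two query directions described above are exactly these two methods.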


They have the ambition to push beyond the current capabilities of apps like Layar, which overlay some icons and text on the camera view but are quite difficult to use and don’t offer a very good user experience.

Use cases

What invisible elements can we reveal through this medium? What types of data would we like to get, beyond the obvious things like gaming and tourism? In small groups we prototyped a couple of ideas using a use-case template.

I worked with Catherine Gall, Director of Workspace Futures at Steelcase. We first thought about the potential for mobile augmented technology to help in never making the same mistake twice. This could be at the level of the individual, the organisation or maybe even larger entities like cities. How come you make the same mistake on that tax form every year? Why do you go a second time to a restaurant that you don’t like? We reflected on how a sense of place could help you memorize things.

We finally settled on an idea titled Location based well-being analytics. Certain places (in the sense of locations, but also spaces), events and situations affect our well-being in a consistent manner (be it positively or negatively) without us necessarily being aware of it. Many companies are now designing little monitors that measure your body for things like activity/movement, calorie intake, blood pressure, temperature, sugar levels and more. In the future these devices might even measure some form of quantified emotional state. Mobile technology could combine your (intended) location with the historical data from these devices to predict how the location will affect your well-being and give out recommendations. This could be useful for people with fragile health or people who are rehabilitating. Alternatively it could just help people become more aware of their own well-being and how the environment affects it.
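The idea boils down to logging a well-being score per visited place and scoring an intended destination against that history. A minimal sketch, where all names, the scoring scale and the recommendation rule are my own assumptions rather than an existing product:

```python
from collections import defaultdict
from statistics import mean

class WellbeingPredictor:
    """Hypothetical 'location based well-being analytics' sketch.

    Each visit pairs a place with a well-being score in [-1, 1], derived
    from body monitors (activity, blood pressure, ...); an intended
    destination is then scored against that history.
    """

    def __init__(self):
        self.history = defaultdict(list)  # place -> list of past scores

    def record_visit(self, place, score):
        # Log how a visit to this place affected the wearer.
        self.history[place].append(score)

    def predict(self, place):
        # Mean historical effect, or None if the place is unknown.
        scores = self.history.get(place)
        return mean(scores) if scores else None

    def recommend(self, intended_place, threshold=-0.2):
        # Warn when a place has consistently lowered well-being.
        effect = self.predict(intended_place)
        if effect is None:
            return "no data for this place yet"
        if effect < threshold:
            return f"heads up: {intended_place} tends to lower your well-being"
        return f"{intended_place} looks fine for you"
```

For example, after logging two negative visits to a crowded metro station, `recommend("crowded metro")` would warn you before you head there again; the interesting design question is where the score itself comes from.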

Other groups had ideas like:

  • Moody community: in a community you would have a wall where you would be able to see the mood of the community as it is aggregated by individual “mood” statements by the residents of the community. This could actually help build a community. Who would use this data?
  • An augmented mirror that you can use to try on clothes in which you can easily change the colour or fit etc.
  • Supporting professional teams during crisis with incredibly relevant and targeted information.
  • Maintenance: the system would recognise the part you are working on and it would recognize the context of what you are trying to do. The system would then be able to overlay extra information on reality, including maintenance history, particular advice or the gesture that you need to do.

My personal open questions after the session

  • All of the solutions assume that you are connected to the net for them to work. Can we afford to make this assumption, or should we still explore ways of storing the data that augments reality locally? Might there be other models, like mesh networks where the device gets the data from the environment on demand?
  • Imagine a future in which everything you do is recorded in many dimensions (solving the problem of needing to capture your learned lessons). Would this help you in not making the same mistake twice? What kind of interfaces and experiences would be necessary to not only learn from your own mistakes, but also from other people’s mistakes? How would you know that a “lesson from a mistake” exists? Would it need to be pushed to you?
  • For current mobile performance support technology we usually think about location, direction, and maybe some RFID technology as “cues” to match the virtual content to reality. What other cues can be used sensibly? Light? Sound? Temperature?
  • A recurring question for me in the last couple of years is whether we start lusting for a non-technology mediated experience of reality. Will we put a premium on experiencing something for “real”? Can you see a future where you have “Augmented Reality Retreat Zones”?