Using Scenarios to Think About the Future of Corporate Learning

At the 2011 Online Educa, I co-facilitated a workshop with Willem Manders, Laura Overton, Charles Jennings and David Mallon in which we used a scenario methodology to gain insight into the future of corporate learning.

This video provides a short introduction to the work that was done there (a thank you to Fusion Universal for producing the video; a transcript is available here):

[youtube=http://www.youtube.com/watch?v=jiBtFJ_Omv8&W=450&H=259&REL=0]

If this piques your interest, then do read more about the old boy network, the in-crowd, big data and the quantified self on the learningscenarios.org website and follow the Twitter account to stay up to date.

These are only the first steps: we need people to start bringing the scenarios to life, so help us if you are interested.

Reflecting on Lift France 2011: Key Themes

A couple of weeks ago I attended the Lift France 2011 conference. For me this was different from my usual conference experience. I have written before about how Anglo-Saxon my perspective is, so it was refreshing to be at a conference where the majority of the audience was French.

Although there was a track about learning, most of the conference approached the effects of digital technology on society from angles that were relatively new to me. At a pure learning conference I am usually able to contextualize what I see immediately and do some real-time reflecting. This time I had to stick to reporting on what I saw (all my #lift11 posts are listed here) and was forced to take a few days to reflect on what I had seen.

Below, in random order, is an overview of what I consider to be the big themes of the conference. Occasionally I will speculate on what these themes might mean for learning and for innovation.

Utilization of excess capacity empowered by collaborative platforms

Robin Chase gave the clearest explanation of this theme that many speakers kept referring back to:

Economic Logic of Using Excess Capacity by Robin Chase

The world has large amounts of excess capacity that goes unused. In the past, the transaction costs of sharing (or renting out) this capacity were too high to make it worthwhile. The Internet has facilitated the creation of collaborative platforms that lower these transaction costs and make trust explicit. Chase's simplest example is the couch surfing idea, and her Zipcar and Buzzcar businesses are examples of this too.

Entangled with the idea of sharing capacity is the idea of access being more important than ownership. This will likely come with a change in the models for consumption: from owning a product to consuming a service. The importance of access shows why it is important to pay attention to the (legal) battles being fought on patents, copyrights, trademarks and licenses.

I had some good discussions with colleagues about this topic. Many facilities, like desks in offices, are underused, and it would be good to find ways of raising their utilization. One problem we saw is how to deal with peak demand. Rick Marriner made the valid suggestion that transparency about demand (e.g. knowing how many cars are booked in the near future) will feed back into the demand itself and thus flatten the peaks.
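To make that suggestion a bit more tangible, here is a minimal sketch (all numbers invented) of Marriner's feedback loop: when bookings are visible, each booker checks the load around their preferred slot and moves to a quieter neighbouring slot if there is one.

```python
import random

# Toy model: 200 people book one of 24 hourly slots for a shared car.
# Preferences cluster around the morning rush hour (slot 8).
random.seed(42)
SLOTS = 24
preferences = [max(0, min(SLOTS - 1, round(random.gauss(8, 2)))) for _ in range(200)]

def book(transparent):
    """Return the load per slot. With transparency, a booker who sees a
    quieter neighbouring slot takes that one instead of their preference."""
    load = [0] * SLOTS
    for pref in preferences:
        slot = pref
        if transparent:
            candidates = [s for s in (pref - 1, pref, pref + 1) if 0 <= s < SLOTS]
            slot = min(candidates, key=lambda s: load[s])
        load[slot] += 1
    return load

print("peak load without transparency:", max(book(transparent=False)))
print("peak load with transparency:   ", max(book(transparent=True)))
```

Even with bookers only willing to shift by a single hour, the peak drops noticeably; the more demand information you expose, the flatter the curve can get.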

A quick question that any (part of an) organization should ask itself is which assets and resources have excess capacity because in the past transaction costs for sharing them across the organization were too high. Would it now be possible to create platforms that allow the use of this extra capacity?

Another question to which I currently do not have an answer is whether we can translate this story to cognitive capacity. Do we have excess cognitive capacity and would there be a way of sharing this? Shirky’s Cognitive Surplus and the Wikipedia project seem to suggest we do. Can organizations capture this value?

Disintermediation

The idea of the Internet getting rid of intermediaries is very much related to the point above. Intermediaries were a big part of the transaction costs and they are disappearing everywhere. Travel agents are the canonical example, but at the conference, Paul Wicks talked about PatientsLikeMe, a site that partially tries to disintermediate doctors out of the patient-medicine relationship.

What candidates for disintermediation exist in learning? Is the Learning Management System the intermediary or the disintermediator? I think the former. What about the learning function itself? In recent years I have seen the learning function shift away from designing learning programs towards becoming a curator of content and service providers and a manager of logistics. These are exactly the types of activities that are no longer needed in a networked world. Is this why the learning profession is in crisis? I certainly think so.

The primacy (and urgency) of design

Maybe it was the fact that the conference was full of French designeurs (with the characteristic Philippe Starck-ish eccentricities that I enjoy so much), but it really did put the urgency of design to the forefront once again for me. I would argue that design means you think about the effects that you would like to have in this world. As a creator it is your responsibility to think deeply and holistically. I will not say that you can always know the results of your design (product, service, building, city, organization, etc.); there will be externalities. But it is important that you leave nothing to chance (accident) or to convenience (laziness).

There is a wealth of productivity to be gained here. I am bombarded by bad (non-)design every single day. Large corporations are the worst offenders. The only design parameter that seems to be relevant for processes is whether they reduce risk enough, not whether they are usable for somebody trying to get something done. Most templates focus on completeness and not on aesthetics or ease of use. When did you last receive a PowerPoint deck that wasn't full of superfluous elements that the author couldn't be bothered to remove?

Ivo Wenzler reminded me of Chekhov's gun (no unnecessary elements in a story). What percentage of the learning events that you have attended in the last couple of years adhered to this?

We can’t afford not to design. The company I work for is full of brilliant engineers. Where are the brilliant designers?

Distributed, federated and networked systems

Robin Chase used the image below and explicitly said that we now finally realize that distributed networks are the right model to overcome the problems of centralized and decentralized systems.

From "On Distributed Communication Networks", Baran, 1962
From "On Distributed Communication Networks", Baran, 1962

I have to admit that the distinction between decentralized and distributed still eludes me (I guess I should read Baran's paper), but I did notice at FOSDEM earlier this year that the open source world is urgently trying to create alternatives to big centralized services like Twitter and Facebook. Moglen talked about the FreedomBox, a small local computer that would do all the tasks that the cloud would normally do; there is StatusNet, unhosted and even talk of distributed redundant file systems and wireless mesh networking.

Can large organizations learn from this? I always see a tension between the need for central governance, standardization and uniformity on the one hand and the local and specific requirements on the other hand. More and more systems are now designed to allow for central governance and the advantages of interoperability and integration, while at the same time providing configurability away from the center. Call it organized customization or maybe even federation. I truly believe you should think deeply about this whenever you are implementing (or designing!) large scale information systems.
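To show what I mean by organized customization, here is a minimal sketch (all names and settings are invented): one centrally governed set of defaults, with an explicit list of settings that local units are allowed to override.

```python
# Central defaults are governed once; only whitelisted keys may vary locally.
CENTRAL_DEFAULTS = {"language": "en", "retention_days": 365, "sso_provider": "corporate"}
LOCALLY_TUNABLE = {"language", "retention_days"}

def effective_config(local_overrides):
    """Merge local overrides into the central defaults, rejecting any
    override of a key the center wants to keep uniform (e.g. SSO)."""
    illegal = set(local_overrides) - LOCALLY_TUNABLE
    if illegal:
        raise ValueError(f"not locally configurable: {sorted(illegal)}")
    return {**CENTRAL_DEFAULTS, **local_overrides}

print(effective_config({"language": "fr"}))  # a French office tweaks what it may
# effective_config({"sso_provider": "other"}) would raise: governance holds.
```

The design choice sits in the whitelist: the center decides where uniformity matters (integration, interoperability) and everything else is configurable away from the center.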

Blurring the distinction between the real and the virtual worlds

Lift also had an exhibition section titled "the lift experience", mostly a place for multimedia art (imagine a goldfish in a bowl sat atop an electric wheelchair: a camera captured the direction the fish swam in, and the wheelchair would then move in the same direction). There were quite a few projects using the Arduino and even more that used "hacked" Kinects to enable new types of interaction languages.

Photo by Rick Marriner

Most projects tried, in some way, to negotiate a new way of working between the virtual and the real (or should I call it the visceral). As soon as those boundaries disappear, designers will have an increased ability to shape reality. One of the projects that I engaged with the most was the UrbanMusicalGame: a set of gyroscopes and accelerometers hidden in soft balls. By playing with these balls you could make beautiful music, while an iPhone app let you change the settings (unfortunately the algorithms were not yet optimized for my juggling). This type of project is the vanguard of what we will see in the near term.

Discomfort with the dehumanizing aspects of technology

A surprising theme for me was the well-articulated discomfort with the dehumanizing aspects of some of the emerging digital technologies. As Benkler says, technology creates feasibility spaces for social practice, and not all practices that are becoming feasible now have a positive societal impact.

One artist, Emmanuel Germond, seemed to be very much in touch with these feelings. His project, Exposition au Danger Psychologique, made fun of people's inability to deal with all this information and provided some coy solutions. Alex Pang talked about contemplative computing, Kris De Decker showed examples of low-tech solutions from the past that can help solve our current problems, and projects in the Lift Experience showed things like analog wooden interfaces for manipulating digital music.

This leads me to believe that both physical reality and being disconnected will come at a premium in the near future. People will be willing to pay for real experiences over the ubiquitous virtual ones. Not being connected to the virtual world will become more expensive as it becomes more difficult. Imagine a retreat that markets itself as having no wifi and giving you a physical newspaper every morning (places like this are starting to pop up, see this unplugged conference or this reporter's unconnected weekend).

There will be consequences for Learning and HR at large. For the last couple of years we have been moving more and more of our learning interventions into the virtual space. Companies have set up virtual universities with virtual classrooms, thousands and thousands of hours of e-learning are produced every year and the virtual worlds that are used in serious games are getting more like reality every month.

Given this premium on reality, it is only logical that allowing your staff to connect with each other in the real world and collaborate in face-to-face meetings will become a differentiator for acquiring and retaining talent.

Big data for innovation

I've done a lot of thinking about big data this year (see for example these learning analytics posts) and this was a tangential topic at the conference. The clearest example came from a carpooling site that can use its data about future reservations to predict how busy traffic will be on a particular day. PatientsLikeMe is of course another example of a company that uses data as a valuable asset.

Super Crunchers is full of examples of data-driven solutions to business problems. The ease of capturing data, combined with the increase in computing power and data storage, has made randomized trials and regression analysis feasible where before they were impossible.
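As a toy illustration of how cheap this has become (the conversion numbers are invented): a brute-force permutation test on a randomized trial of two sign-up pages, the kind of analysis that now runs in seconds on any laptop.

```python
import random

# Invented trial data: 500 visitors per variant, binary conversions.
random.seed(1)
a = [1] * 48 + [0] * 452   # variant A: 48 conversions (9.6%)
b = [1] * 67 + [0] * 433   # variant B: 67 conversions (13.4%)

observed = sum(b) / len(b) - sum(a) / len(a)

# Permutation test: if the variants made no difference, any relabeling of
# visitors is equally likely, so count how often chance beats the observed lift.
pooled = a + b
extreme = 0
TRIALS = 10_000
for _ in range(TRIALS):
    random.shuffle(pooled)
    diff = sum(pooled[500:]) / 500 - sum(pooled[:500]) / 500
    if diff >= observed:
        extreme += 1

print(f"observed lift: {observed:.3f}, permutation p-value: {extreme / TRIALS:.4f}")
```

No closed-form statistics needed: brute force over ten thousand relabelings replaces the table lookup.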

This means that the following question is now relevant for any business: How can we use the data that we capture to make our products, services and processes better? Any answers?

The need to overcome the open/closed dichotomy

In my circles, I usually only encounter people who believe that most things should be open. Geoff Mulgan spoke of ways to synthesize the open/closed dichotomy. I am not completely sure how he foresees doing this, but I do know that both sides have a lot to learn from each other.

Disruptive software innovations currently don't seem to happen in the open source world, but that world does manage to innovate when it comes to its own processes. Open source communities scale projects to thousands of participants, have figured out ways of pragmatically dealing with issues of intellectual property (in a way that doesn't inhibit development) and have created their own tool sets for working successfully in dispersed teams (Git being my favorite example).

If we want to change the way we do innovation in a networked world, we shouldn't look to the open source world for the content of innovation or for thought leadership; instead, we should look at its process.

Your thoughts

A lot of the above is still very immature and incoherent thinking. I would therefore love to have a dialog with anybody who could help me deepen my thoughts on these topics.

Finally, to give a quick flavour of all my other posts about Lift 11, here is a word cloud based on those posts:

My Lift 11 word cloud, made with Wordle

Lak11 Week 2: Rise of “Big Data” and Data Scientists

These are my reflections and thoughts on the second week of Learning and Knowledge Analytics (Lak11). These notes are first and foremost meant to cement my own learning experience, so for everybody but me they might feel a bit disjointed.

What was week 2 about?

This week was an introduction to the topic of "big data". As a result of all the exponential laws in computing, the amount of data that gets generated every single day is growing massively. New methods of dealing with this data deluge have cropped up in computer science. Businesses, governments and scientists are learning how to use the available data to their advantage. Some people actually think this will fundamentally change the scientific method (like Chris Anderson in Wired).

Big data: Hadoop

Hadoop is one of these things that I heard a lot about without ever really understanding what it was. This Scoble interview with the CEO of Cloudera made things a lot clearer for me.

[youtube=http://www.youtube.com/watch?v=S9xnYBVqLws]

Here is the short version: Hadoop is a set of open source technologies (it is part of the Apache project) that allows anyone to do large-scale distributed computing. Its main parts are a distributed filesystem (HDFS) and a software framework (MapReduce) for processing large data sets on clusters.
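To make that concrete, here is the canonical word-count example of the MapReduce model, simulated in-process in Python; Hadoop's contribution is running exactly this map/shuffle/reduce pattern across a whole cluster, reading from the distributed filesystem.

```python
from collections import defaultdict

documents = ["big data needs big clusters", "clusters process big data"]

# Map: each document independently emits (word, 1) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group values by key; on a real cluster this is the network phase.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: collapse each key's values; on a cluster reducers run in parallel.
totals = {word: sum(counts) for word, counts in groups.items()}
print(totals)  # {'big': 3, 'data': 2, 'needs': 1, 'clusters': 2, 'process': 1}
```

Because the map step is independent per document and the reduce step is independent per word, both can be spread over as many machines as you have; that is the whole trick.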

The technology is commoditised, imagination is what is needed now

The Hadoop story confirmed for me that this type of computing is already largely commoditised. The interesting problems in big data analytics are probably not technical anymore. What is needed isn't more computing power; it is more imagination.

The MIT Sloan Management Review article titled Big Data, Analytics and the Path from Insights to Value says as much:

The adoption barriers that organizations face most are managerial and cultural rather than related to data and technology. The leading obstacle to widespread analytics adoption is lack of understanding of how to use analytics to improve the business, according to almost four of 10 respondents.

This means that we should start thinking much harder about what things we want to know that we couldn’t get before in a data-starved world. This means we have to start with the questions. From the same article:

Instead, organizations should start in what might seem like the middle of the process, implementing analytics by first defining the insights and questions needed to meet the big business objective and then identifying those pieces of data needed for answers.

I will therefore commit myself to try and formulate some questions that I would like to have answered. I think that Bert De Coutere’s use cases could be an interesting way of approaching this.

This BusinessWeek excerpt from Stephen Baker's The Numerati gives some insight into where this direction will take us in the next couple of years. It profiles a mathematician at IBM, Haren, who is busy working on algorithms that help IBM match expertise to demand in real time, creating teams of people that would maximise profits. In the example, one of the deep experts takes a ten-minute call while on the ski slopes. By doing that he:

[..] assumes his place in what Haren calls a virtual assembly line. “This is the equivalent of the industrial revolution for white-collar workers,”

Something to look forward to?

Data scientists, what skills are necessary?

This new way of working requires a new skill set. There was some discussion on this topic in the Moodle forums. I liked Drew Conway's simple perspective: basically, a data scientist needs to sit at the intersection of Math & Statistics Knowledge, Substantive Expertise and Hacking Skills. I think that captures it quite well.

Data Science Venn Diagram (by Drew Conway)

How many people do you know who could occupy that space? The "How do I become a data scientist?" question on Quora also has some very extensive answers.

Connecting connectivism with learning analytics

This week the third edition of the Connectivism and Connective Knowledge course started too. George Siemens kicked off by posting a Connectivism Glossary.

It struck me that many of the terms he used there are easily quantifiable with learning analytics. Concepts like Amplification, Resonance, Synchronization, Information Diffusion and Influence could all be turned into metrics for assessing the "knowledge health" of an organisation. Would it be an idea to get clearer and more common definitions of these metrics for use in an educational context?
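As a sketch of what such a metric might look like (this is my own ad-hoc operationalization, not an agreed definition), you could treat forum replies as a directed graph and read "influence" and "information diffusion" off its structure:

```python
import networkx as nx

# Toy forum data: an edge ("ann", "bob") means Ann replied to Bob.
replies = [("ann", "bob"), ("carol", "bob"), ("dave", "carol"),
           ("bob", "ann"), ("erin", "bob"), ("carol", "ann")]
g = nx.DiGraph(replies)

# One reading of "influence": who sits on the most conversation paths
# between other participants (betweenness centrality)?
influence = nx.betweenness_centrality(g)

# One reading of "information diffusion": how many participants are
# reachable from each person along reply chains?
diffusion = {person: len(nx.descendants(g, person)) for person in g}

print(sorted(influence.items(), key=lambda kv: -kv[1]))
print(diffusion)
```

Whether betweenness centrality really captures what Siemens means by influence is exactly the kind of definitional work that would have to happen first.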

Worries/concerns from the perspective of what technology wants

Probably the most lively discussion in the Moodle forums was around critiques of learning analytics. My main concern about analytics is the kind of feedback loop it introduces once you make the analytics public. I expressed this with a reference to Goodhart's law, which states that:

Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes

George Siemens did a very good job in writing down the main concerns here. I will quote them in full for my future easy reference.

1. It reduces complexity down to numbers, thereby changing what we’re trying to understand
2. It sets the stage for the measurement becoming the target (standardized testing is a great example)
3. The uniqueness of being human (qualia, art, emotions) will be ignored as the focus turns to numbers. As Gombrich states in "The Story of Art": "The trouble about beauty is that tastes and standards of what is beautiful vary so much". Even here, we can't get away from this notion of weighting/valuing/defining/setting standards.
4. We’ll misjudge the balance between what computers do best…and what people do best (I’ve been harping for several years about this distinction as well as for understanding sensemaking through social and technological means).
5. Analytics can be gamed. And they will be.
6. Analytics favour concreteness over accepting ambiguity. Some questions don't have answers yet.
7. The number/quantitative bias is not capable of anticipating all events (black swans) or even accurately mapping to reality (Long Term Capital Management is a good example of “when quants fail”: http://en.wikipedia.org/wiki/Long-Term_Capital_Management )
8. Analytics serve administrators in organizations well and will influence the type of work that is done by faculty/employees (see this rather disturbing article of the KPI influence in universities in UK: http://www.nybooks.com/articles/archives/2011/jan/13/grim-threat-british-universities/?page=1 )
9. Analytics risk commoditizing learners and faculty – see the discussion on Texas A & M’s use of analytics to quantify faculty economic contributions to the institution: http://www.nybooks.com/articles/archives/2011/jan/13/grim-threat-british-universities/?page=2 ).
10. Ethics and privacy are significant issues. How can we address the value of analytics for individuals and organizations…and the inevitability that some uses of analytics will be borderline unethical?

This type of criticism could be enough for anybody to give up and turn their back on this field. I personally believe that would be a grave mistake: you would be moving against the strong and steady direction of technology's tendencies.

SNAPP: Social network analysis

The assignment of the week was to take a look at Social Networks Adapting Pedagogical Practice (better known as SNAPP) and use it on the Moodle forums of the course. Since I had already played with it before I only looked at Dave Cormier‘s video of his experience with the tool:

[youtube=http://www.youtube.com/watch?v=ZHNM8FWrpLk]

SNAPP's website gives a good overview of some of the things a tool like this can be used for: finding disconnected or at-risk students, seeing who the key information brokers in a class are, making "before and after" snapshots of a particular intervention, etc.

Before I was able to use it inside my organisation I needed to make sure that the tool does not send any of the data it scrapes back home to the creators of the software (why wouldn't it? it is a research project after all). I had an exchange with Lori Lockyer, professor at Wollongong, who assured me that:

SNAPP locally compiles the data in your Moodle discussion forum but it does not send data from the server (where the discussion forum is hosted) to the local machine nor does it send data from the local machine to the server.

Making social networks inside applications (and ultimately inside organisations) more visible to many more people through standard interfaces is a nice future to look forward to. Which LMS will be the first to show these types of graphs next to its forum posts? Which LMS will be the first to export graphs in a standard format for further processing with tools like Gephi?

Gephi, by the way, is one of the tools that I really should start experimenting with sooner rather than later.
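The export itself would be trivial. As a sketch (the forum log format here is my invention, not any real LMS schema), this builds a reply graph and writes GEXF, one of the standard formats Gephi opens:

```python
import networkx as nx

# Invented LMS log: (post_author, replied_to_author) pairs.
forum_log = [("maria", "jan"), ("jan", "maria"), ("piet", "jan"),
             ("kim", "piet"), ("jan", "kim"), ("piet", "jan")]

g = nx.DiGraph()
for author, replied_to in forum_log:
    # Accumulate a weight so repeated exchanges show up as thicker edges.
    if g.has_edge(author, replied_to):
        g[author][replied_to]["weight"] += 1
    else:
        g.add_edge(author, replied_to, weight=1)

nx.write_gexf(g, "forum_network.gexf")  # File -> Open in Gephi
```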

The intelligent spammability of open online courses: where are the vendors?

One thing that I have been thinking about in relation to these Open Online Courses is how easy it would be for vendors of particular related software products to come and crash the party. The open nature of these courses lends itself to spam I would say.

Doing this in an obnoxious way will ultimately not help you with this critical crowd, but being part of the conversation (Cluetrain, anybody?) could be hugely beneficial from a commercial point of view. As a marketeer, where else would you find as many people deeply interested in learning analytics as in this course? Will these people not be the influencers in this space in the near future?

So where are the vendors? Do you think they are lurking, or am I overstating the opportunity that lies here for them?

My participation in numbers

Every week I give a numerical update about my course participation (I do this in the spirit of the quantified self, as a motivator and because it seems fitting for the topic). This week I bookmarked 37 items on Diigo, wrote 3 Lak11-related tweets, 5 Moodle forum posts and 1 blog post.