Notes and Reflections on Day 1 of I-KNOW 2010


From September 1-3, 2010, I will attend the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010) in beautiful Graz, Austria. I will use my blog to do a daily report on my captured notes and ideas.

And now for something completely different
In the last few years I have put a lot of effort into becoming a participating member of the global learning technology community. This means that when I visit a “learning” conference I know a lot of the people who are there. At this conference I know absolutely nobody. Not a single person in my online professional network seems to know about this conference, let alone attend it.

One of my favourite competencies in Shell’s leadership competency framework is the ability to value differences. People who master this competency actively seek out the views of people who think differently from them. There are good reasons for this (see for example Page’s The Difference), and it is one of the things that I would like to work on myself: I am naturally inclined to seek out people who think very much like me, and this conference should help me overcome that preference.

After the first day I already realise that the world I live and work in is very “corporate” and very Anglo-Saxon. In a sense this conference feels like entering a world that is normally hidden from me. I would also like to compliment the organizers of the conference: everything is flawless. There is even an iPhone app, which I think will soon be standard for all conferences (I loved how FOSDEM did this: publishing the program in a structured format and then letting developers make the apps for multiple mobile platforms).

Future Trends in Search User Interfaces
Marti Hearst has just finished writing her book Search User Interfaces, which is available online for free here, and she was therefore asked to give a keynote about the future of these interfaces.

Current search engines are primarily text-based, have fast response times and are tailored to keyword queries, supporting a search paradigm in which users iterate on their keywords. Some have faceted metadata that provides navigation and organization support, some support related queries, and some are starting to show context-sensitive results.

Hearst sees a couple of things happening in technology and in how society interacts with that technology that could help us imagine what the search interface will look like in the future. Examples are the wide adoption of touch-activated devices with excellent UI design, the wide adoption of social media and user-generated content, the wide adoption of mobile devices with data service, improvements in Natural Language Processing (NLP), a preference for audio and video and the increasing availability of rich, integrated data sources.

All of these trends point to more natural interfaces. She thinks this means the following for search user interfaces:

  • Longer, more natural queries. Queries are getting longer all the time. Naive computer users use longer queries, only shortening them when they learn that they don’t get good results that way. Search engines are getting better at handling longer queries. Sites like Yahoo Answers and Stack Overflow (a project by one of my heroes, Joel Spolsky) are only possible because we now have much more user-generated content.
  • “Sloppy commands” are now slowly starting to be supported by certain interfaces. These allow flexibility in expression and are sometimes combined with visual feedback. See the video below for a nice example.

[vimeo http://vimeo.com/13992710]

  • Search is becoming as social as possible. This is a difficult problem because you are not one person; you are different people at different times. There are explicit social search tools like Digg, StumbleUpon and Delicious, and there are implicit social search tools and methods like “People who bought x, also bought…” and Yahoo’s My Web (now defunct). Two good examples (not given by Hearst) of how important the social aspects of search are becoming are this Mashable article on a related Facebook patent and this Techcrunch article on a personalized search engine for the cloud.
  • There will be a deep integration of audio and video into search. This seemed to be a controversial part of her talk. Hearst is predicting the decline of text (though not among academics and lawyers). There are enough examples around: the culture of video responses on YouTube apparently arose spontaneously, and newspaper websites are starting to look more and more like TV. It is very easy to create videos, but the way we edit videos still needs improvement.
  • A final prediction is that the search interface will be more like a dialogue, or conversational. This reality is a bit further away, but we are starting to see what it might look like with apps like Siri.

Enterprise 2.0 and the Social Web

Murinsel Bridge in Graz, photo by Flickr user theowl84, CC-licensed

This track consisted of three presentations. The first one was titled “A Corporate Tagging Framework as Integration Service for Knowledge Workers”. Walter Christian Kammergruber, a PhD student from Munich, told us that there are two problems with tagging: one is how to orchestrate tags so that they work across the complete application landscape; the other is the semantic challenge of getting rid of ambiguity, multiple spellings and so on. His tagging framework (called STAG) attempts to solve these problems. It is a piece of middleware that sits on the Siemens network and provides tagging functionality through web services to Siemens’ blogging platform, wiki, discussion forums and SharePoint sites. The tags can then be displayed using simple widgets. The semantic problem is addressed by a thesaurus editor that allows people to define synonyms for tags and to create relationships between related tags.

I strongly believe that any large corporation would be greatly helped by a centralised tagging facility that can be used by decentralised applications. This kind of approach should not be limited to tagging; it could also be used for something like user profiles. How come I don’t have a profile widget that I can include on our corporate intranet pages?
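To make this idea a bit more concrete, here is a minimal sketch of what such a central tagging service could look like from the point of view of a client application. The endpoint, parameters and helper functions are hypothetical (STAG’s actual web service interface was not shown), so read it purely as an illustration of the pattern, not as anyone’s real API.

```python
import requests  # simple third-party HTTP client

# Hypothetical base URL of a central corporate tagging service.
TAGGING_SERVICE = "https://intranet.example.com/tagging/api"

def tag_resource(resource_url, tags, user):
    """Attach tags to any URL-addressable resource (wiki page, blog post, forum thread)."""
    response = requests.post(f"{TAGGING_SERVICE}/tags", json={
        "resource": resource_url,
        "tags": tags,
        "user": user,
    })
    response.raise_for_status()
    return response.json()

def related_tags(tag):
    """Ask the central thesaurus for synonyms and related tags, so that every
    application resolves ambiguity and spelling variants in the same way."""
    response = requests.get(f"{TAGGING_SERVICE}/thesaurus/{tag}")
    response.raise_for_status()
    return response.json()

# Every decentralised application (wiki, blog, SharePoint widget) would call the
# same two operations, which is what keeps tags consistent across the landscape.
```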

The second talk, by Dada Lin, was titled “A Knowledge Management Scheme for Enterprise 2.0”. He presented a framework that should be able to bridge the gap between Knowledge Management and Enterprise 2.0. It is called the IDEA framework, in which knowledge is seen as a process, not as an object. The framework consists of the following elements (also called “moments”):

  • Interaction
  • Documentation
  • Evolution
  • Adoption

He then placed these moments in three dimensions: Human, Technology and Organisation. Finally, he presented some research around a Confluence installation at T-Systems. None of this was really enlightening to me; I was, however, intrigued to notice that the audience focused more on the research methodology than on the outcomes of the research.

The final talk, “Enterprise Microblogging at Siemens Building Technologies Division: A Descriptive Case Study” by Johannes Müller, a senior Knowledge Management manager at Siemens, was quite entertaining. He talked about References@BT, a community at Siemens that consists of many discussion forums, a knowledge reference and, since March 2009, a microblogging tool. It has 7,000 members in 73 countries.

The microblogging platform was built by Müller himself and thus has exactly the features it needs to have. One of the features he mentioned was that it shows a picture of every user in every view of the microblog posts. This is now a standard feature in lots of tools (e.g. Twitter or Facebook), and it made me realise that Moodle was actually one of the first applications I know of that did this consistently: another example of how forward-thinking Martin Dougiamas really was!

Müller’s microblogging platform does allow posts of more than 140 characters, but does not allow any formatting (no line-breaks or bullet points for example). This seems to be an effective way of keeping the posts short.

He shared a couple of strategies that he uses to get people to adopt the new service. Two important ones were the provision of widgets that can be included in more traditional pages on the intranet, and the ability to import postings from other microblogging sites like Twitter using a special hashtag. He has also sent out personalised emails to users with follow suggestions; these were hugely effective in bootstrapping the network.

Finally he told us about the research he has done to get some quantitative and qualitative data about the usefulness of microblogging. His respondents thought it was an easy way of sharing information, an additional channel for promoting events, a new means of networking with others, a suitable tool to improve writing skills and a tool that allowed for the possibility to follow experts.

Know-Center Graz
During lunch (and during the Bacardi-sponsored welcome reception) I had the pleasant opportunity to sit with Michael Granitzer, Stefanie Lindstaedt and Wolfgang Kienreich from the Know-Center, Austria’s Competence Center for Knowledge Management.

They have done some work for Shell in the past around semantic similarity checking and have delivered a working proof of concept in our Mediawiki installation. They demonstrated some of their new projects and we had a good discussion about corporate search and how to do technological innovation in large corporations.

The first project that they showed me is called the Advanced Process-Oriented Self-Directed Learning Environment (APOSDLE). It is a research project that aims to develop tools that help people learn at work. To rephrase it in learning terms: it is a very smart way of doing performance support. The video below gives you a good impression of what it can do:

[youtube=http://www.youtube.com/watch?v=4ToXuOTKfAU?rel=0]

After APOSDLE they showed me some outcomes from the Mature IP project. From the project abstract:

Failures of organisation-driven approaches to technology-enhanced learning and the success of community-driven approaches in the spirit of Web 2.0 have shown that for that agility we need to leverage the intrinsic motivation of employees to engage in collaborative learning activities, and combine it with a new form of organisational guidance. For that purpose, MATURE conceives individual learning processes to be interlinked (the output of a learning process is input to others) in a knowledge-maturing process in which knowledge changes in nature. This knowledge can take the form of classical content in varying degrees of maturity, but also involves tasks & processes or semantic structures. The goal of MATURE is to understand this maturing process better, based on empirical studies, and to build tools and services to reduce maturing barriers.

Mature

I was shown a widget-based approach that allowed people to tag resources, put them in collections and share these resources and collections with others (more information here). One thing really struck me about the demo I got: they used a simple browser plugin as the first point of contact between users and the system. I suddenly realised that this would be the fastest way to add a semantic layer over our complete intranet (it would work for the extranet too). With our desktop architecture it is relatively trivial to roll out a plugin to all users. This plugin would allow users to annotate webpages on the net, creating a network of meta-information about resources. This is becoming increasingly viable as more and more of the resources in a company are accessed from a browser and are URL-addressable. I would love to explore this pragmatic direction further.

Knowledge Sharing
Martin J. Eppler from the University of St. Gallen seems to be a leading researcher in the field of knowledge management: when he speaks, people listen. He presented a talk titled “Challenges and Solutions for Knowledge Sharing in Inter-Organizational Teams: First Experimental Results on the Positive Impact of Visualization”. He is interested in how visualization (mapping text spatially) changes the way people share knowledge. In this particular research project he focused on inter-organizational teams. He tries to make his experiments as realistic as possible, so he used senior managers and real-life scenarios, put them in three experimental groups and set them a particular task. One group was supported with special computer-based visualization software, another group used posters with templates, and a final (control) group used plain flipcharts. After analysing his results he was able to conclude that visual support leads to significantly greater productivity.

This talk highlights one of the problems I have with science applied in this way. What do we now know? The results are very narrow and specific. What happens if you change the software? Is this the case for all kinds of tasks? The problem is: I don’t know how scientists could do a better job. I guess we have to wait until our knowledge-working lives can really be measured consistently and in real time, and then for smart algorithms to find out what really works for increased productivity.

The next talk in this track was by Felix Mödritscher, who works at the Vienna University of Economics and Business. His potentially fascinating topic, “Using Pattern Repositories for Capturing and Sharing PLE Practices in Networked Communities”, was hampered by the difficulty of explaining the complexities of the project he is working on.

He used the following definition for Personal Learning Environments (PLEs): a set of tools, services and artefacts gathered from various contexts, to be used by learners. Mödritscher has created a methodology that allows people to share good practices in PLEs. First you record PLE interactions, then you allow people to depersonalise these interactions and share them as an “activity pattern” (distilled and archetypical), after which others can pick these patterns up and repersonalise them. He has created a pattern repository with a pattern store. It has a client-side component implemented as a Firefox extension: PAcMan (Personal Activity Manager). It is still early days, but these patterns appear to be really valuable: they not only help with professional competency development, but also with what he calls transcompetences.

I love the idea of using design patterns (see here), but thought it was a pity that Mödritscher did not show any very concrete examples of shared PLE patterns.

My last talk of the day was on “Clarity in Knowledge Communication” by Nicole Bischof, one of Eppler’s PhD students at the University of St. Gallen. She used a fantastic quote by Wittgenstein early in her presentation:

Everything that can be said, can be said clearly

According to her, clarity can help with knowledge creation, knowledge sharing, knowledge retention and knowledge application. She used the Hamburger Verständlichkeitskonzept as a basis to distill five distinct aspects of clarity: Concise content, Logical structure, Explicit content, Ambiguity low and Ready to use (the first letters conveniently spell “CLEAR”). She then did an empirical study of the clarity of PowerPoint presentations. Her presentation turned tricky at that point, as she was presenting in PowerPoint herself. The conclusion was a bit obvious: knowledge communication can be designed to be more user-centred and thus more effective, and clarity helps in translating the innovative potential of knowledge and in presenting complex knowledge content clearly.

Bischof did an extensive literature review and found that clarity is an under-researched topic. Having just read Tufte’s anti-PowerPoint manifesto, I am convinced that there is a world to gain for businesses like Shell. So much of our decision making is based on PowerPoint slide packs that making them as clear as possible becomes incredibly urgent.

Never walk alone
I am at this conference all by myself and have come to realise that this is not the optimal situation. I want to be able to discuss the things that I have just seen and collaboratively translate them to my personal work situation. It would have been great to have a sparring partner here who shares a large part of my context. Maybe next time?!

What on Earth is RSS Cloud?

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. For this post we agreed to write about a new technology using Linux Format’s “What on Earth is …?” style (see the example on Android). We did not agree on a particular technology and we would get bonus points for a nice pixellated image to accompany the post. You can read Arjen’s post with the same title here.

RSS Cloud

RSS Cloud? I am getting a bit tired of this cloud computing trend.
Yes, I also think that cloud computing is slightly over-hyped. However, RSS Cloud is not about cloud computing; it is about bringing real-time updates to the RSS protocol.

I have only just grasped what RSS is. Only the technorati seem to use it; normal computer users have no idea.
Indeed: most people have no idea what RSS is or how they can use it. They still visit all their favourite news sites one after the other to check whether something new has been posted. However, even people who don’t understand RSS often use it: if you download podcasts through iTunes, you are using RSS technology. Furthermore, RSS is the technological glue for many of the popular mashup sites. You don’t need to understand a technology for it to be useful to you.

Fair enough, so how would you explain RSS Cloud to a lay person?
Sites with content that changes often (think blogs or news sites) publish an RSS feed on their server. Whenever a new item is posted it is added to the feed, usually dropping the oldest item from the list at the same time. If you are interested in those news items you can use a news reader (also called an aggregator) and tell it to check whether new items have been added to the feed; if there is an update, the news reader retrieves it. A news reader typically does this every fifteen minutes or so, which means the news can be fifteen minutes old by the time you get it. RSS Cloud makes it possible for news readers to subscribe to the updates of a feed. Whenever something new is added to the feed, the RSS Cloud server notifies all subscribers so that they can pick up the content immediately: in real time.
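To make that a little more tangible, here is a rough sketch of the subscription handshake as I understand the rssCloud specification. The feed advertises a <cloud> element, and a reader registers its own callback address with that cloud server; the feed URL and callback details below are placeholders, so treat this as an illustration rather than a reference implementation.

```python
import xml.etree.ElementTree as ET
import requests

FEED_URL = "http://example.com/feed.xml"  # placeholder: a feed that supports rssCloud

# 1. Fetch the feed and look for the <cloud> element inside <channel>.
#    It tells subscribers where they can register for update notifications.
channel = ET.fromstring(requests.get(FEED_URL).content).find("channel")
cloud = channel.find("cloud")

# 2. Register our callback with the cloud server. From now on, whenever the feed
#    changes, the server POSTs the feed URL to our /notify endpoint and we can
#    fetch the new items immediately instead of polling every fifteen minutes.
requests.post(
    f"http://{cloud.get('domain')}:{cloud.get('port')}{cloud.get('path')}",
    data={
        "notifyProcedure": "",    # empty when plain HTTP POST notifications are fine
        "port": "8080",           # port our own callback server listens on
        "path": "/notify",        # path of our callback
        "protocol": "http-post",
        "url1": FEED_URL,         # the feed we want to be notified about
    },
)
```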

Another buzz word! What is the benefit of real-time? Can’t people just wait a couple of minutes before they get their news?
People listen to the radio so that they can hear the sports results in real-time. Weren’t you upset when all your friends knew about Michael Jackson’s death earlier than you, because they heard it on Twitter? The success of Twitter search and trending topics shows that people want to know about stuff as it happens and not fifteen minutes later.

Now that you mention it: Twitter indeed works in real-time. Why do we need something else, what’s wrong with Twitter?
Twitter actually also uses a “polling” model for its content. Every single Twitter client has to poll the Twitter API to see whether something new has been posted by the people you follow. This is a huge waste of computing resources: all these clients keep asking for new information even when there is none. It is a model that does not scale well. A “push” model works much better in this respect.
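A tiny sketch of why this matters for scale (illustrative only, with a made-up handle function): with polling, request volume grows with the number of clients times the polling frequency; with push, it grows only with the number of actual updates.

```python
import time
import requests

def handle(update):
    print("new items:", update)

# Polling: every client keeps asking the server, even when nothing has changed.
# With N clients polling every minute, that is N requests per minute whether or
# not anything new was posted.
def poll_forever(api_url):
    while True:
        handle(requests.get(api_url).json())
        time.sleep(60)

# Push: the server calls this only when there really is an update, so the number
# of requests scales with the number of updates, not with clients times time.
def on_push_notification(update):
    handle(update)
```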

Oh, so it is a bit like the difference between getting your email once every couple of minutes and getting it immediately on your Blackberry?
Yes, that is a nice analogy. The Blackberry uses push email. You get the email as soon as it hits the server, because it is pushed to your phone. Traditional email clients, like Outlook, go to the server once every couple of minutes to see whether something new is there.

So what large company is trying to push this idea?
This time it is not a big company trying to establish a standard or protocol. The RSS Cloud protocol is designed by Dave Winer, who also drafted the original RSS specification.

Dave Winer, isn’t that the guy that loves to rub people the wrong way?
He is a controversial character and is certainly very vocal and opinionated. At the same time, he is a true pioneer and one of those people who embody the values of the Internet. His vision for RSS Cloud is not about blogging. Instead, he wants to provide a decentralised architecture for microblog messages. To him, the fact that Twitter centralises all the microblogging activity is a real vulnerability. His goal is to create a network that can work alongside Twitter without being in the control of a single company.

Talking about companies. I suddenly remember hearing about a similar technology. One of these cute names with many vowels?
You probably mean PubSubHubbub. This is a Google-sponsored protocol that has already been implemented in Google Reader.

Great: another standards war. VHS versus Betamax, RSS versus Atom, Britney versus Whitney. Will we never learn?
This shouldn’t become a problem. RSS and Atom, for example, live happily next to each other now, and it is easy to implement both. PubSubHubbub has a slightly different goal than RSS Cloud: it focuses mainly on blogging and associates itself with FeedBurner. The two technologies should be able to live next to each other, at least that is what Dave says.

Well, let’s hope he and you are right. By the way, isn’t this RSS Cloud just another sneaky way to measure subscribers, generate some statistics and store information about where they are from and what they are doing?
It is true that an RSS reader has to register itself with the RSS cloud for the protocol to work. However, the RSS cloud forgets about the reader if the registration isn’t renewed every 24 hours. You also have to remember that many people will use readers that do not support RSS Cloud. There are much better ways to get statistics.

Aren’t you a learning technology person? What does this have to do with learning?
I am very interested in RSS Cloud because I am a learning technologist! Like all new Internet-based technologies, it will only be a matter of time before some smart developer finds a way of using this in some unexpected fashion. Remember: technology creates feasibility spaces for social practice! Just think of what kind of course delivery models RSS has made possible: the Connectivism and Connective Knowledge course could not run without it, for example.

You are a Moodle evangelist. Does Moodle support RSS Cloud yet?
I haven’t checked, but I doubt it. The protocol is very new and the Moodle developers are focusing on getting Moodle 2.0 to a beta release. However, I am sure that in the future parts of Moodle will move towards real-time. Imagine how RSS Cloud could be used to create activity streams or notify people of comments on their work. It could effectively bridge the gap between asynchronous activities like discussion forums and assignments and synchronous activities like web conferencing.

Ok, you have managed to pique my interest. Where can I go if I want to start using it?
There are two ways of using it. First, you can make your own feeds RSS Cloud-enabled. If you have a blog at WordPress.com this is automatically the case; if you host your own WordPress blog you can opt in. The other way of using it is to have an RSS reader that supports the protocol. Currently only River2 supports it, and Lazyfeed has announced that it will support it too. Only web-based readers can support it, as the RSS Cloud server needs to be able to ping the reader with the update.
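If you want to check whether your own feed already advertises RSS Cloud support, a quick way is to look for the same <cloud> element described above. The feed URL in this sketch is a placeholder; substitute your own.

```python
import xml.etree.ElementTree as ET
import requests

feed_url = "https://example.wordpress.com/feed/"  # placeholder: your own feed URL
channel = ET.fromstring(requests.get(feed_url).content).find("channel")
cloud = channel.find("cloud")

if cloud is not None:
    print("rssCloud-enabled; subscribe via",
          cloud.get("domain"), cloud.get("port"), cloud.get("path"))
else:
    print("No <cloud> element found; this feed does not advertise rssCloud support.")
```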

Are there any sites that can tell me a bit more?
The current home of the protocol is http://www.rsscloud.org. There you will find news about the protocol and an implementation guide. The Wikipedia entry could be better. Why don’t you help fix it?

Here Comes Everybody: The Power of Organizing Without Organizations

Here Comes Everybody: The Power of Organizing Without Organizations, book cover

I am convinced that the web will change our society in many ways that we cannot currently grasp. Clay Shirky’s Here Comes Everybody: The Power of Organizing Without Organizations is a book which everybody who is interested in these changes should read. Many books on technology take a very shallow approach: often they focus on the technology itself or only look at one particular aspect of how technology can be used (e.g. books on “How Wikis can change the way you collaborate”). Shirky’s book is the first one I have read that takes a deep sociological and often philosophical perspective on the ubiquity of the net and its wider implications.

He is not the first author to draw an analogy with the invention of movable type. The social effects of this invention lagged decades behind the technological effects:

Real revolutions don’t involve an orderly transition from point A to point B. Rather, they go from A through a long period of chaos and only then reach B. In that chaotic period, the old systems get broken long before new ones become stable.

We are just now entering the chaotic period. We cannot accurately predict the changes that will happen to society now that we have the Internet; it will be many years before we can survey and look back on the consequences. I can instantly see how the above is true for education. Currently the old institutions still reign, but they are increasingly broken (look, for example, at the percentage of students who prematurely quit their vocational tertiary education in the Netherlands). These institutions have not harnessed the new possibilities of technology.

So what are these new possibilities? The book is full of wonderful examples, but Shirky’s main point is that the Internet allows groups of people to self organize without the need for organizations, firms or (governmental) institutions. Traditional communications were always one-to-one (like the phone) or one-to-many (broadcasting, like television). The net enables many-to-many communication which we never had before. E-mail was the first example of this, but IM, (micro-)blogs and social networking sites enable this too. These new tools are “eroding the institutional monopoly on large-scale coordination”.

Shirky has a great observation on media:

The twentieth century, with the spread of radio and television was the broadcast century. The normal pattern for media was that they were created by a small group of professionals and then delivered to a large group of consumers. But media, in the word’s literal sense as the middle layer between people, have always been a three-part affair. People like to consume media, of course, but they also like to produce it [..] and they like to share it [..]. Because we now have media that support both making and sharing, as well as consuming, those capabilities are reappearing, after a century mainly given over to consumption.

Social tools are coming into existence that support new patterns of group forming and group production. My personal favourite example is open source software. Clay Shirky attributes the success of this method of producing software to the way that it gets failure for free. For this reason, he considers open source software to be a threat to commercial software vendors:

Open source is a profound threat, not because the open source ecosystem is outsucceeding commercial efforts, but because it is outfailing them. Because the open source ecosystem, and by extension open social systems generally, rely on peer production, the work on those systems can be considerably more experimental, at considerably less cost, than any firm can afford. Why? The most important reasons are that open systems lower the cost of failure, they do not create biases in favor of predictable but substandard outcomes, and they make it simpler to integrate the contributions of people who contribute only a single idea.
The overall effect of failure is its likelihood times its cost. Most organizations attempt to reduce the effect of failure by reducing its likelihood. [..] The obvious problem is that no one knows for certain what will succeed and what will fail. [..] You will inevitably green-light failures and pass on potential successes. Worse still, more people will remember you saying yes to a failure than saying no to a radical but promising idea. Given this asymmetry, you will be pushed to make safe choices, thus systematically undermining the rationale for trying to be more innovative in the first place.
The open source movement makes neither kind of mistake, because it doesn’t have employees, it doesn’t make investments, it doesn’t even make decisions. It is not an organization, it is an ecosystem, and one that is remarkably tolerant of failure. Open source doesn’t reduce the likelihood of failure, it reduces the cost of failure; it essentially gets failure for free.

Do yourself a favour: If you haven’t read this profound book, please read it as soon as you can.

Why we should stop using Twitter and switch over to Laconica

The biggest implementation of Laconica

Many of my colleagues at Stoas Learning, myself included, are having a lot of fun using the microblogging service Twitter. It has changed the social interaction between some of the team members, and we have gotten to know each other better through a very simple service that delivers messages of 140 characters at a time.

I like the service a lot, but I have been worried about one thing: all this information lives only on Twitter’s servers. This point is especially poignant whenever Twitter is down (which happens quite often).

Imagine a world in which people with a Hotmail email address could only email somebody if they also had a Hotmail address, with no way for somebody registered at Gmail to email somebody at Yahoo. Luckily this is not the case: email is a collection of open protocols which can be implemented by anybody. Unfortunately we cannot say the same about instant messaging. I personally have an MSN account, a Skype account and a Yahoo account, and there is no way for me to talk to a Skype user with my MSN account.

So what about microblogging? Will we go towards a future which is similar to instant messaging with multiple microblogging services which are not connected to each other? Will Twitter be so dominant that there will be no alternative (creating a monopoly with all its disadvantages)? Or will we move towards a future where microblogging is like email: you can choose the provider you want and connect to people using other providers?

I prefer the last option and feel that I should be principled about it. That is why I will stop using Twitter, temporarily abandoning the people I follow and the people who follow me, and switch over to Identi.ca, currently the largest Laconica installation. Here is how Identi.ca explains how it differs from services like Twitter:

Identi.ca is an Open Network Service. Our main goal is to provide a fair and transparent service that preserves users’ autonomy. In particular, all the software used for Identi.ca is Free Software, and all the data is available under the Creative Commons Attribution 3.0 license, making it Open Data.

The software also implements the OpenMicroBlogging protocol, meaning that you can have friends on other microblogging services that can receive your notices.

The goal here is autonomy — you deserve the right to manage your own on-line presence. If you don’t like how Identi.ca works, you can take your data and the source code and set up your own server (or move your account to another one).

I will spend the next couple of weeks trying to convince everybody around me to make the switch and maybe even get Stoas to start its own Laconica server.

If you are interested in hearing more about Identi.ca and Laconica, I can recommend episode 37 of FLOSS Weekly, in which Evan Prodromou, the creator of both, is interviewed and explains how Laconica works and what the plans for the future are.