Workflow Driven Apps Versus App Driven Workflow

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. This month we write about how the constant flux of new apps and platforms influences your workflow. We do this by (re-)viewing our workflow from different perspectives. After a general introduction we write a paragraph of 200 words each from the perspective of 1. apps, 2. platform and 3. workflow itself. You can read Arjen’s post with the same title here.

Instapaper on my iPhone

To me a workflow is mainly about two things: the ability to capture things and the ability to time-shift. Both need to happen effectively and efficiently. So let’s take a look at three separate processes and see how they currently work for me: task/todo management, sharing with others, and reading news and interesting articles (not books). How do I currently handle each of these three things?

Workflow
I use Toodledo for my task/todo management. Whenever I “take an action” or think of something that I need to do at some point in the future I fire up Toodledo and jot it down. Each item is put in a folder (private, work, etc.), gets a due date (sometimes with a timed reminder to email if I really cannot forget to do it) and is given a priority (which I usually ignore). At the beginning and end of every day I run through all the tasks and decide in my head what will get done.

For me it is important to share what I encounter on the web, and my thoughts about it, with the rest of the world. I do this in a couple of different ways: explicitly through Twitter, through Twitter via a Bit.ly sidebar in my browser, on Yammer if it is purely for work, on this WordPress.com blog, through public bookmarks on Diigo, by sending a direct email or by clicking the share button in Google Reader.

I have subscribed to 300+ RSS feeds, and often when I am scanning them I find something interesting but don’t have the opportunity to read it at that time. I use Instapaper to capture these articles and make them available for easy reading later on. Instapaper doesn’t work with PDF-based articles, so I send those to a special email address so that I can pick them up with my iPad and save them to GoodReader when it is convenient.

Platform
“Platform” can have multiple meanings. The operating system is what was traditionally called a platform. When you had invested heavily in one platform it would become difficult to do any of your workflows on a different one (at my employer this has been the case for many years with Microsoft and Exchange: hard to use anything else). Rich web applications have now turned the Internet itself into a workflow platform. This makes the choice of an operating system nearly, if not totally, irrelevant. I regularly use Ubuntu (10.04, too lazy to upgrade so far), Windows Vista (at work) and iOS (on both the iPhone and the iPad). All of the products and services mentioned either have specialised applications for the platform or are usable through any modern web browser. The model I prefer right now is one with transparent two-way syncing between a central server/service and the different local apps, giving me access to my latest information even when I am not online (Dropbox, for example, uses this model and is wonderful).

What I have noticed, though, is that I have strong preferences for using a particular platform (actually a particular device) for certain tasks. The iPad is my preference for reading news or articles: the “paginate” option in Instapaper is beautiful. Sharing is best done with something that has a decent keyboard, and Toodledo is probably used most on my iPhone because that is usually closest at hand.

Apps
Sharing is a good example of something where the app very much drives my behaviour: the app where I initially encounter the thing I want to share needs to support my sharing means of choice. This isn’t optimal at all: if I read something interesting in MobileRSS on the iPad that I want to share on Yammer, I usually email the link from MobileRSS to my work email address; once at work I copy it from my mail client into the browser version of Yammer and add my comments. This is mainly because Yammer (necessarily) has to be closed off from the rest of the world with its APIs.

Services that create the fewest hiccups in my workflow are those with a strict separation between the content/data of the service and the interface. Google Reader and Toodledo both provide very complete APIs that allow anybody to create an app that accesses the data and displays it in a smart way. The disadvantage of these services is that I am usually dependent on a single provider for the data. In the long term this is probably not sustainable. Things like Unhosted are already pointing to the future: an even stricter separation between data and app. Maybe in that future the workflow can start driving the app instead of the other way around.

Serendipity 2.0

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. This time we decided to try and find out whether it is possible to engineer serendipity on the web. The post should start with a short (max. 200 words) reflection on what the Internet has meant for serendipity followed by three serendipitous discoveries including a description of how they were discovered. You can read Arjen’s post with the same title here.

There is an ongoing online argument over whether our increasing use of the Internet for information gathering and consumption has decreased our propensity for serendipitous discoveries (see for example here, here or here). I have worried about this myself: my news consumption has become very focused on (educational) technology and has therefore become very siloed. No magazine has this level of specificity, so when I read a magazine I read more things I wasn’t really looking for than when I read my RSS feeds in Google Reader. Yet this is a bit of a red herring. Yes, the web creates incredibly focused channels, and if all you are interested in is the history of the Second World War, then you can make sure you only encounter information about that war; but at the same time the hyperlinked nature of the web as a network actually turns it into a serendipity machine. Who hasn’t stumbled upon wonderful new concepts, knowledge communities or silly memes while just surfing around? In the end it probably is just a matter of personal attitude: an open mind. In that spirit I would like to try and engineer serendipity (without addressing the obvious paradox of doing that).

Serendipity algorithm 1: Wikipedia
One way of finding serendipity on Wikipedia is by looking at the categories of a particular article. Because of the many-to-many relationship between categories and articles these can often be very surprising (try it!). I have decided to take advantage of the many hyperlinks in Wikipedia and do the following:

  • Start with the “Educational Technology” article
  • Click on the first two links to other articles
  • In these articles find two links that look interesting and promising to you
  • In each of these four articles pick a link to a concept that you haven’t heard about yet or don’t understand very well
  • Read these links and see what you learn

Instructional theory was the first link. From there I went to Bloom’s Taxonomy and to Paulo Freire. Bloom’s Taxonomy took me to DIKW, a great article on the “Knowledge Pyramid” explaining the data-to-information-to-knowledge-to-wisdom transformation. I loved the following Frank Zappa quote:

Information is not knowledge,
Knowledge is not wisdom,
Wisdom is not truth,
Truth is not beauty,
Beauty is not love,
Love is not music,
and Music is the BEST.

Paulo Freire took me to Liberation theology, a movement in Christian theology which interprets the teachings of Jesus Christ in terms of liberation from unjust economic, political or social conditions. It began as a movement in the Roman Catholic church in Latin America in the 1950s-1960s. The paradigmatic expression of liberation theology came from Gutierrez in his book A Theology of Liberation, in which he coined the phrase “preferential option for the poor”, meaning that God is revealed to have a preference for those people who are “insignificant”, “unimportant” and “marginalized”.

The second link was Learning theory (education). That led to Discovery learning and Philosophical anthropology. Discovery learning prompted me to read about The Grauer School. This link didn’t really work out: the Discovery learning article had alluded to the “Learn by Discovery” motto with which the school was founded, but the article about the school has no further information. A dead end on the serendipity trail! Philosophical anthropology brought me to Hylomorphism, a concept I hadn’t heard of before (or had forgotten about: I used to study this stuff). It is a philosophical theory developed by Aristotle that analyzes substance into matter and form: “Just as a wax object consists of wax with a certain shape, so a living organism consists of a body with the property of life, which is its soul.”

Conclusion: Wikipedia is excellent for serendipitous discovery.

Serendipity algorithm 2: the Accidental News Explorer (ANE)

The Accidental News Explorer

The tagline of this iPhone application is “Look for something, find something else” and its information page has a quote by Lawrence Block: “One aspect of serendipity to bear in mind is that you have to be looking for something in order to find something else.” I have decided to do the following:

  • Search for “Educational Technology”
  • Choose an article that looks interesting
  • Click on the “Related Topics” button
  • Choose the most interesting looking topic
  • Choose an article that looks interesting
  • Click on the “Related Topics” button
  • Choose the most interesting looking topic
  • Read the most appealing article

The article that looked interesting was one on Kurzweil Educational Systems. The only related topic was “Dallas, Texas”. This brought me to an article on Nowitzki, from where I chose “Joakim Noah” as a related topic. The most appealing article in that topic was titled: Who’s better: Al Horford or Joakim Noah?

Conclusion: An app like this could work, but it needs to be a little bit better in its algorithms and sources for finding related news. One thing I noticed about this particular news explorer is its complete US focus: you always seem to end up at cities and then at sports or politics.

Serendipity algorithm 3: Twitter
Wikipedia allows you to make fortunate content discoveries; Twitter should allow the same, but in a social dimension. Let’s try and use Twitter to find interesting people. I have decided to do the following:

  • Search for the hashtag “#edtech”
  • Look at the first three people who have used the hashtag and look at their first three @mentions
  • Choose which of the nine people/organizations is the most interesting to follow
  • Follow this person and share/favourite a couple of their tweets
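Since the Twitter API itself isn’t the point here, a sketch of just the selection step: given already-fetched search results as (user, tweet) pairs, collect up to three @mentions from each of the first three distinct users. The function name and data shapes are my own invention, not any Twitter client’s API.

```python
import re

MENTION = re.compile(r"@(\w+)")

def candidate_follows(search_results, n_users=3, n_mentions=3):
    """Build the pool of accounts to consider following.

    search_results: iterable of (username, tweet_text) pairs from a
    hashtag search, newest first.
    """
    mentions_by_user = {}
    for user, text in search_results:
        # Only consider the first n_users distinct posters.
        if user not in mentions_by_user and len(mentions_by_user) == n_users:
            continue
        bucket = mentions_by_user.setdefault(user, [])
        for handle in MENTION.findall(text):
            if handle != user and handle not in bucket and len(bucket) < n_mentions:
                bucket.append(handle)
    # Flatten in order, dropping duplicates across users.
    pool = []
    for bucket in mentions_by_user.values():
        for handle in bucket:
            if handle not in pool:
                pool.append(handle)
    return pool


# Invented tweet texts for illustration; the handles are the ones from
# my actual search.
results = [
    ("hakan_sentrk", "Great #edtech chat with @mike08 @MsBarkerED @jdthomas7"),
    ("ShellTerrell", "Thanks @ozge @ktenkely @Parentella for the #edtech links!"),
    ("briankotts", "#edtech reading: @Chronicle @BusinessInsider @techcrunch"),
]
pool = candidate_follows(results)
```

From the nine handles in the pool you would then pick one by hand, which is the step no algorithm should take away.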

So the search brought me to @hakan_sentrk, @ShellTerrell and @briankotts. These three mentioned the following nine Twitter users/organizations:

  1. @mike08, ESP teacher; ICT consultant; e-tutor
  2. @MsBarkerED, Education Major, Michigan State University, Senior, Aspiring Urban Educator, enrolled in the course CEP 416
  3. @jdthomas7, educational tech/math coach, former math, computer teacher. former director of technology at a local private school. specializing in tech/ed integration
  4. @ozge, Teacher/trainer, preschool team leader, coordinator of an EFL DVD project, e-moderator, content & educational coordinator of Minigon reader series, edtech addict!
  5. @ktenkely, Mac Evangelist, Apple Fanatic, Technology Teacher, classroom tech integration specialist, Den Star, instructional coach
  6. @Parentella, Ever ask your child: What happened at school today? If so, join us.
  7. @Chronicle, The leading news source for higher education.
  8. @BusinessInsider, Business news and analysis in real time.
  9. @techcrunch, Breaking Technology News And Opinions From TechCrunch

I decided to follow @ozge who seems to be a very active Twitter user posting mostly links that are relevant to education.

Conclusion: the way I set up this algorithm did not help me get outside of my standard community of people. I was already following @ShellTerrell, for example. I probably should have designed a slightly different experiment, maybe involving lists in some way (and choosing an atypical list somebody is on). That might have allowed me to really jump communities, which I didn’t do in this case.

There are many other web services that could be used in a similar fashion for serendipitous discovery. Why don’t you try doing it with Delicious, Facebook, LinkedIn or YouTube?

Notes and Reflections on Day 2 and 3 of I-KNOW 2010

I-KNOW 2010

These are my notes and reflections for the second and third days of the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010).

Another appstore!
Rafael Sidi from Elsevier kicked off the second day with a talk titled “Bring in ‘da Developers, Bring in ‘da Apps – Developing Search and Discovery Solutions Using Scientific Content APIs” (the slightly ludicrous title was fashioned after this).

He opened his talk with this Steve Ballmer video which, if I were the CIO of any company, would seriously make me reconsider my customer relationship with Microsoft:

[youtube=http://www.youtube.com/watch?v=8To-6VIJZRE&rel=0]

(If you enjoyed that video, make sure you watch this one too, first watch it with the sound turned off and only then with the sound on).

Sidi is responsible for Elsevier’s SciVerse platform. He has seen that data platforms are increasingly important, that there is an explosion of applications and that people work in communities of innovation. He used Data.gov as an example: it went from 47 sources to 220,000+ sources within a year and has led to initiatives like Apps for America. We need an “Apps for science” too. Our current scientific platforms make us spend too much time gathering instead of analysing information, and none of them really understand the user’s intent.

The key trends that he sees on the web are:

  • Openness and interoperability (“give me your data, my way”). Access to APIs helps to create an ecosystem.
  • Personalization (“know what I want and deliver results on my interest”). Well known examples are: Amazon, Netflix and Last.fm
  • Collaboration & trusted views (“the right contacts at the right time”). Filtering content through people you trust. “Show me the articles I’ve read and show me what my friends have read differently from me”. This is not done a lot. Sidi didn’t mention this, but I think things like Facebook’s open API are starting to deliver it.

So Elsevier has decided to turn SciVerse, the portal to their content, into a platform by creating an API with which developers can create applications. Very similar to Apple’s App Store, this will include a revenue sharing model. They will also nurture a developer community (bootstrapping it with a couple of challenges).

He then demonstrated how applications would be able to augment SciVerse search results, either by doing smart things with the data in a sidebar (based on aggregated information about the search result) or by modifying a single search result itself. I thought it looked quite impressive and a very smart move: scientific publishers seem to be under a lot of pressure from things like Open Access and have been struggling to demonstrate their added value in this Internet world. This could be one way to add value. The reaction from the audience was quite tough (something Sidi had already preempted by showing an “I hate Elsevier” tweet in his slides). One audience member: “Elsevier already knows how to exploit the labour of scientists and now wants to exploit the labour of developers too”. I am no big fan of large publishing houses, but thought this was a bit harsh.

Knowledge Visualization
Wolfgang Kienreich demoed some of the knowledge visualization products that the Know-Center has developed over the years. The 3D knowledge space is not available through the web (it is licensed to a German encyclopedia publisher), but showed what is possible if you think hard about how a user should be able to navigate through large knowledge collections. Their work for the Austrian Press Agency is available online in a “labs” environment. It demonstrates a way of using faceted search in combination with simple but insightful visualizations. The following example is a screenshot showing which Austrian politicians have said something about pensions.

APA labs faceted visual search
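APA’s actual implementation isn’t described, but the core of faceted search is simple enough to sketch. This is a generic illustration with invented field names, not APA labs code: filter documents on chosen facet values, then count the values that remain in every facet (those counts are what the visualization draws).

```python
from collections import Counter

def facet_search(docs, **filters):
    """Return the documents matching the facet filters, plus per-facet
    value counts over the matching set (the numbers behind the charts)."""
    hits = [d for d in docs if all(d.get(k) == v for k, v in filters.items())]
    counts = {}
    for doc in hits:
        for facet, value in doc.items():
            counts.setdefault(facet, Counter())[value] += 1
    return hits, counts


# Invented mini-corpus of statements for illustration.
statements = [
    {"politician": "A", "topic": "pensions"},
    {"politician": "B", "topic": "pensions"},
    {"politician": "A", "topic": "tax"},
]
hits, counts = facet_search(statements, topic="pensions")
```

Selecting the “pensions” facet narrows the corpus to two statements and immediately tells you which politicians they came from.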

I have only learned through writing this blog post that Wolfgang is interested in the Prisoner’s Dilemma. I would have loved to have talked to him about Goffman’s Expression games and what they could mean for the ways decisions get made in large corporations. I will keep that for a next meeting.

Knowledge Work
This track was supposed to have four talks, but one speaker did not make it to the conference, so there were three talks left.

The first one was provocatively titled “Does knowledge worker productivity really matter?” by Rainer Erne. It was Drucker who said that it used to be the job of management to increase the productivity of manual labour and that it is now the job of management to make knowledge workers more productive. In one sense Drucker was definitely right: the demand for knowledge work is increasing all the time, whereas the demand for routine activities keeps going down.

Erne’s study focuses on one particular kind of knowledge work: expert work, which is judgement oriented, highly reliant on individual expertise and experience, and dependent on star performance. He looked at five business segments (hardware development, software development, consulting, medical work and university work) and consistently found the same five key performance indicators:

  • business development
  • skill development
  • quality of interaction
  • organisation of work
  • quality of results

This leads Erne to believe that we need to redefine productivity for knowledge workers: the focus shouldn’t just be on the quantity of the output, but more on its quality. So what can managers do knowing this? They can help their experts by being a filter, or by concentrating their work for them.

This talk left me with some questions. I am not sure whether it is possible to make this distinction between quantitative and qualitative output, especially not in commercial settings. The talk also did not address what I consider to be the main challenge for management in this information age: the fact that a very good manual worker can only be 2 or maybe 3 times as productive as an average manual worker, whereas a good knowledge worker can be hundreds if not thousands of times more productive than the average worker.

Robert Woitsch’s talk was titled “Industrialisation of Knowledge Work, Business and Knowledge Alignment” and I have to admit that I found it very hard to contextualize what he was saying into something that had any meaning to me. I did think it was interesting that he went in quite another direction than Erne, as Woitsch does consider knowledge work to be a production process: people have to do things in efficient ways. I guess it is important to better define what it is we actually mean when we talk about knowledge work. His sites are here: http://promote.boc-eu.com and http://www.openmodels.at.

Finally Olaf Grebner from SAP Research talked about “Optimization of Knowledge Work in the Public Sector by Means of Digital Metaphors”. SAP has a case management system that is used by organisations as a replacement for their paper based system. The main difference between current iterations of digital systems and traditional paper based systems is that the latter allow links between the formal case and the informal aspects around the case (e.g. a post-it note on a case file). Digital case management systems don’t allow informal information to be stored.

So Grebner set out to design an add-on to the digital system that would link informal with formal information, using digital metaphors. He implemented digital post-it notes, cabinets and ways of searching, and his initial results are quite positive.

Personally I am a bit sceptical about this approach. Digital metaphors have served us well in the past, but they are also the reason I have to store my files in folders and that each file can only be stored in one folder. Don’t you lose the ability to truly reinvent what a digital case-management system can do for a company if you focus on translating the paper world into digital form? People didn’t like the new digital system (that is why Grebner was commissioned to make his prototype, I imagine). I believe that is because it didn’t offer the same affordances as the paper based world. Why not focus on that first?

Graz Kunsthaus, photo by Marion Schneider & Christoph Aistleitner, CC-licensed

Knowledge Management and Learning
This track had three learning related sessions.

Martin Wolpers from the Fraunhofer Institute for Applied Information Technology (FIT) talked about the “Early Experiences with Responsive Open Learning Environments”. He first defined each of the terms in Responsive Open Learning Environments:
  • Responsive: responsiveness to learners’ activities in respect to their learning goals
  • Open: openness to new configurations, new contents and new users
  • Learning Environment: the conglomerate of tools that bring together people and content artifacts in learning activities, supporting them in constructing and processing information and knowledge.

The current generation of Virtual Learning Environments and Learning Management Systems have a couple of problems:

  • Lack of information about the user across learning systems and learning contexts (i.e. what happens to the learning history of a person when they switch to a different company?)
  • Learners cannot choose their own learning services
  • Lack of support for open and flexible personalized contextualized learning approach

Fraunhofer is making an intelligent infrastructure that incorporates widgets and existing VLE/LMS functionality to truly personalize learning. They want to bridge what people use at home with what they use in the corporate environment by “intelligent user driven aggregation”. This includes a technology infrastructure, but also requires a big change in understanding how people actually learn.

They used Shindig as the widget engine and OpenSocial as the widget technology. They used this to create an environment with the following characteristics:

  • A widget based environment to enable students to create their own learning environment
  • Development of new widgets should be independent from specific learning platforms
  • Real-time communication between learners, remote inter-widget communication, interoperable data exchange, event broadcasting, etc.

He used a student population in China as the first group to try the system. It didn’t have the uptake that he expected. They soon realised that this was because the students had come to the conclusion that use or non-use of the system did not directly affect their grades. The students also lacked an understanding of the (Western?) concept of a Personal Learning Environment. After this first trial he came to a couple of conclusions. Some were obvious, like the need to respect the cultural background of your students, or that responsive open learning environments create challenges on both the technological and the psycho-pedagogical side. Others were less obvious, like the fact that an organic development process allowed for flexibility and for openly addressing emerging needs and requirements, and that it makes sense to enforce your own development as the standard.

For me this talk highlighted the still significant gap that seems to exist between computer scientists on the one side and social scientists on the other side. Trying out Personal Learning Environments in China is like sending CliniClowns to Africa: not a good idea. Somebody could have told them this in advance, right?

Next up was a talk titled “Utilizing Semantic Web Tools and Technologies for Competency Management” by Valentina Janev from the Serbian Mihajlo Pupin Institute. She does research to help improve the transferability and comparability of competences, skills and qualifications, and to make it easier to express core competencies and talents in a standardized, machine-accessible way. This was another talk that was hard for me to follow, because it was completely focused on what needs to happen on the (semantic) technical side without first giving a clear idea of what kind of processes these technological solutions will eventually improve. A couple of snippets that I picked up: they are replacing data warehouse technologies with semantic web technologies, they use OntoWiki (a semantic wiki application), RDF is the key word for people in this field, and there is a thing called DOAC which has the ambition to make job profiles (and the matching CVs) machine readable.

The final talk in this track was from Joachim Griesbaum who works at the Institute of Information Science and Language Technology. The title of his talk must have been the longest in the conference: “Facilitating collaborative knowledge management and self-directed learning in higher education with the help of social software, Concept and implementation of CollabUni – a social information and communication infrastructure”, but as he said: at least it gives you an idea what it is about (slides of this talk are available here, Griesbaum was one of the few presenters that made it clear where I could find the slides afterwards).

A lot of social software in higher education is used in formal learning. Griesbaum wants to focus on a Knowledge Management approach that primarily supports informal learning. To that end he and his students designed a low cost (there was no budget) system from the bottom up. It is called CollabUni and based on the open source e-portfolio solution (and smart little sister of Moodle) Mahara.

They did a first evaluation of the system in late 2009. There was little self-initiated knowledge activity among the 79 first-year students. Roughly one-third of the students see an added value in CollabUni and declare themselves ready for active participation. Even though the knowledge processes that they aimed for don’t seem to be self-initiating and self-supporting, CollabUni still demonstrates a possible low-cost, bottom-up approach towards developing social software. During the next steps of their roll-out they will pay attention to the following:

  • Social design is decisively important
  • Administrative and organizational support components and incentive schemes are needed
  • Appealing content (for example an initial repository of term papers or theses)
  • Identify attractive use cases and applications

Call me a cynic, but if you have to try this hard: why bother? To me this really had the feeling of a technology trying to find a problem, rather than a technology being the solution to a problem. I wonder what the uptake of Facebook is among his students. I did ask him, and he said that there has not been a lot of research into the use of Facebook in education. I guess that is true, but I am quite convinced there is a lot of use of Facebook in education. I believe that if he had really wanted to leverage social software for the informal part of learning, he should have started with what his students are actually using and tried to leverage that by designing technology in that context, instead of building yet another separate system.

Collaborative Innovation Networks (COINs)
The closing keynote of the conference was by Peter A. Gloor, who currently works at the MIT Center for Collective Intelligence. Gloor has written a couple of books on how innovation happens in this networked world. Though his story was certainly entertaining, I also found it a bit messy: he had an endless list of fascinating examples that in the end supported a message he could have given in a single slide.

His main point is that large groups of people behave apparently randomly, but that there are patterns that can be analysed at the collective level. These patterns can give you insight into the direction people are moving. One way of reading the collective mind is by doing social network analysis. By combining the wisdom of the crowd with the wisdom of groups of experts (swarms) it is possible to make accurate predictions. One example he gave was how they had used reviews on the Internet Movie Database (the crowd) and on Rotten Tomatoes (the swarm) to predict, on the day before a movie opens in theatres, how much the movie will bring in in total.
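Gloor didn’t show his model, but the basic idea of combining a crowd signal and a swarm signal can be sketched as an ordinary least-squares fit. All numbers here are invented for illustration; a real model would be trained on actual review and box-office data.

```python
import numpy as np

# Invented example data: per past movie, an average crowd rating
# (IMDb-style, 0-10), an average expert rating (Rotten-Tomatoes-style,
# 0-10) and the total gross in millions.
crowd = np.array([8.0, 6.0, 7.0, 5.0, 9.0])
swarm = np.array([7.0, 6.0, 8.0, 4.0, 9.0])
gross = np.array([275.0, 210.0, 250.0, 170.0, 315.0])

# Least-squares fit of gross ~ a*crowd + b*swarm + c
X = np.column_stack([crowd, swarm, np.ones_like(crowd)])
coef, residuals, rank, _ = np.linalg.lstsq(X, gross, rcond=None)

def predict_gross(crowd_score, swarm_score):
    """Predicted total gross (in millions) for a not-yet-released movie."""
    return float(coef @ np.array([crowd_score, swarm_score, 1.0]))
```

Two separate signals matter here: the crowd and the swarm carry different information, which is why the model keeps them as distinct regressors instead of averaging them.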

The process to do these kinds of predictions is as follows:

COIN cycle

This kind of analysis can be done at a global level (like the movie example), but also within organizations, for example by analysing email archives or by equipping people with so-called social badges (which I first read about in Honest Signals) that measure whom people have contact with and what kind of interaction they are having.

He then went on to talk about what he calls “Collaborative Innovation Networks” (COINs) which you can find around most innovative ideas. People who lead innovation (think Thomas Edison or Tim Berners-Lee) have the following characteristics:

  • They are well connected (they have many “friends”)
  • They have a high degree of interactivity (very responsive)
  • They share to a very high degree

All of these characteristics are easy to measure electronically and thus automatically, so to find COINs you look for the people who score high on these points. According to Gloor, high-performing organizations work as collaborative innovation networks. Ideas progress from Collaborative Innovation Network (COIN) to Collaborative Learning Network (CLN) to Collaborative Interest Network (CIN).
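Gloor didn’t show code, but the three traits map naturally onto simple counts over a message log. A hedged sketch with an invented log format: each message is a (sender, recipient, minutes-until-this-reply, whether-a-link-was-passed-on) tuple.

```python
from collections import defaultdict

def coin_scores(log):
    """Crude per-person scores for Gloor's three traits.

    log: list of (sender, recipient, reply_minutes, shares_link) tuples,
    where reply_minutes is None when the message is not a reply.
    Returns {person: (contacts, responsiveness, shares)} with
      contacts       = number of distinct people exchanged messages with,
      responsiveness = replies sent within an hour,
      shares         = messages that pass a link along.
    """
    contacts = defaultdict(set)
    responsive = defaultdict(int)
    shares = defaultdict(int)
    for sender, recipient, reply_minutes, shares_link in log:
        contacts[sender].add(recipient)
        contacts[recipient].add(sender)
        if reply_minutes is not None and reply_minutes <= 60:
            responsive[sender] += 1
        if shares_link:
            shares[sender] += 1
    return {p: (len(contacts[p]), responsive[p], shares[p])
            for p in contacts}


# Tiny invented log: ada shares a link, bob replies within 10 minutes.
log = [
    ("ada", "bob", None, True),
    ("bob", "ada", 10, False),
    ("cid", "ada", None, False),
]
scores = coin_scores(log)
```

People who score high on all three counts at once are the COIN candidates; in a real email archive the thresholds would need tuning.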

Twitter is proving to be a very useful tool for this kind of analysis. Doing predictions for movies is relatively easy because people are honest in their feedback. It is much harder for things like stocks, because people game the system with their analyses. Twitter can still be used here (e.g. by searching for “hope”, “fear” and “worry” as indicators of sentiment) because people are honest in their feedback there.
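The keyword trick he mentioned is almost trivial to sketch: count the indicator words over a batch of tweets. Here the tweets are plain strings and the word list is just the three terms from the talk; fetching from the search API is left out.

```python
import re

INDICATORS = ("hope", "fear", "worry")

def sentiment_counts(tweets):
    """Count occurrences of the indicator words across a batch of tweets."""
    counts = {word: 0 for word in INDICATORS}
    for tweet in tweets:
        for word in re.findall(r"[a-z']+", tweet.lower()):
            if word in counts:
                counts[word] += 1
    return counts


# Invented tweets for illustration.
sample = [
    "I hope the market recovers",
    "Fear and worry everywhere",
    "no hope left",
]
counts = sentiment_counts(sample)
```

Tracking how these counts shift over time is the sentiment signal; a serious version would also handle stemming and negation.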

Finally he made a reference in his talk to the Allen curve (the high correlation between physical distance and communication, with a critical distance of 50 meters for technical communication). I am sure this curve is used by many office planners, but Gloor also found an Allen curve for technical companies around his university: it was about 3 miles.

Interesting Encounters
Outside of the sessions I spoke to many interesting people at the conference. Here are a couple (for my own future reference).

It had been a couple of years since I had last seen Peter Sereinigg from act2win. He has stopped being a Moodle partner and now focuses on projects in which he helps global virtual teams in how they communicate with each other. There was one thing that he and I could fully agree on: you first have to build some rapport before you can effectively work together. It seems like such an obvious thing, but for some reason it still doesn’t happen on many occasions.

Twitter allowed me to get in touch with Aldo de Moor. He had read my blog post about day 1 of this conference and suggested one of his articles for further reading about pattern languages (the article refers to a book on a pattern language for communication which looks absolutely fascinating). Aldo is an independent research consultant in the field of Community Informatics. That was interesting to me for two reasons:

  • He is still actively publishing in peer reviewed journals and speaking at conferences, without being affiliated with a highly acclaimed research institute. He has written an interesting blog post about the pros and cons of working this way.
  • I had never heard of this young field of community informatics and it is something I would like to explore further.

I also spent some time with Barend Jan de Jong, who works at Wolters Noordhoff. We had some broad-ranging discussions, mainly about the publishing field: the book production process and the information technology required to support it, what value a publisher can still add, e-books compared to normal books (he said a bookcase says something about somebody’s identity; I agreed, but noted that a digital book-related profile is far more accessible than the bookcase in my living room; note to self: start creating parody GoodReads accounts for Dutch politicians), the unclear if not unsustainable business model of the wonderful Guardian news empire, and how we both think that O’Reilly is a publisher that seems to have its affairs fully in order.

Puzzling stuff
There were also some things at I-KNOW 2010 that were really from a different world. The keynote on the morning of the 3rd day was perplexing to me. Márta Nagy-Rothengass titled the talk “European ICT Research and Development Supporting the Expansion of Semantic Technologies and Shared Knowledge Management” and opened with a video message of Neelie Kroes talking in very general terms about Europe’s digital agenda. After that Nagy-Rothengass told us that the European Commission will be nearly doubling its investment into ICT to 11 billion Euros, after which she started talking about the “Call 5” of “FP7” (apparently that stands for the Seventh Framework Programme), the dates before which people should put their proposals in, the number of proposals received, etc., etc., etc. I am pro-EU, but I am starting to understand why people can make a living advising other people how best to apply for EU grants.

Another puzzling thing was the fact that people like me (with a corporate background) thought that the conference was quite theoretical and academic, whereas the researchers thought everything was very applied (maybe not enough research even!). I guess this shows that there is quite a schism between universities furthering the knowledge in this field and corporations who could benefit from picking the fruits of this knowledge. I hope my attendance at this great conference did its tiny part in bridging this gap.

Notes and Reflections on Day 1 of I-KNOW 2010

I-KNOW 2010

From September 1-3, 2010, I will attend the 10th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW 2010) in beautiful Graz, Austria. I will use my blog to do a daily report on my captured notes and ideas.

And now for something completely different
In the last few years I have put a lot of effort into becoming a participating member in the global learning technology community. This means that when I visit a “learning” conference I know a lot of the people who are there. At this conference I know absolutely nobody. Not a single person in my online professional network seems to know about this conference, let alone attend it.

One of my favourite competencies in the leadership competency framework of Shell is the ability to value differences. People who master this competency actively seek out the opinions of people who think differently than they do. There are good reasons for this (see for example Page’s The Difference), and it is one of the things that I would like to work on myself: I am naturally inclined to seek out people who think very much like me, and this conference should help me in overcoming that preference.

After the first day I already realise that the world I live and work in is very “corporate” and very Anglo-Saxon. In a sense this conference feels like I have entered a world that is normally hidden from me. I would also like to compliment the organizers of the conference: everything is flawless (there is even an iPhone app, soon to be standard for all conferences I think; I loved how FOSDEM did this: publishing the program in a structured format and then letting developers build the apps for multiple mobile platforms).

Future Trends in Search User Interfaces
Marti Hearst has just finished writing her book Search User Interfaces, which is available online for free here, and she was therefore asked to keynote about the future of these interfaces.

Current search engines are primarily text based, have a fast response time, are tailored to keyword queries (supporting a search paradigm of iterating on those keywords), sometimes have faceted metadata that delivers navigation/organization support, support related queries and in some cases are starting to show context-sensitive results.

Hearst sees a couple of things happening in technology and in how society interacts with that technology that could help us imagine what the search interface will look like in the future. Examples are the wide adoption of touch-activated devices with excellent UI design, the wide adoption of social media and user-generated content, the wide adoption of mobile devices with data service, improvements in Natural Language Processing (NLP), a preference for audio and video and the increasing availability of rich, integrated data sources.

All of these trends point to more natural interfaces. She thinks this means the following for search user interfaces:

  • Longer more natural queries. Queries are getting longer all the time. Naive computer users use longer queries, only shortening them when they learn that they don’t get good results that way. Search engines are getting better at handling longer queries. Sites like Yahoo Answers and Stack Overflow (a project by one of my heroes Joel Spolsky) are only possible because we now have much more user-generated content.
  • “Sloppy commands” are now slowly starting to be supported by certain interfaces. These allow flexibility in expression and are sometimes combined with visual feedback. See the video below for a nice example.

[vimeo http://vimeo.com/13992710]

  • Search is becoming as social as possible. This is a difficult problem because you are not one person, you are different people at different times. There are explicit social search tools like Digg, StumbleUpon and Delicious and there are implicit social search tools and methods like “People who bought x, also bought…” and Yahoo’s My Web (now defunct). Two good examples (not given by Hearst) of how important the social aspects of search are becoming are this Mashable article on a related Facebook patent and this Techcrunch article on a personalized search engine for the cloud.
  • There will be a deep integration of audio and video into search. This seemed to be a controversial part of her talk. Hearst is predicting the decline of text (though not among academics and lawyers). There are enough examples around: the culture of video responses on YouTube apparently arose spontaneously, and newspaper websites are starting to look more and more like TV. It is very easy to create videos, but the way that we can edit videos still needs improvement.
  • A final prediction is that the search interface will be more like a dialogue, or conversational. This reality is a bit further away, but we are starting to see what it might look like with apps like Siri.

Enterprise 2.0 and the Social Web

Murinsel Bridge in Graz, photo by Flickr user theowl84, CC-licensed

This track consisted of three presentations. The first one was titled “A Corporate Tagging Framework as Integration Service for Knowledge Workers”. Walter Christian Kammergruber, a PhD student from Munich, told us that there are two problems with tagging: one is how to orchestrate the tags in such a way that they work for the complete application landscape, the other is the semantic challenge of getting rid of ambiguity, multiple spellings, etc. His tagging framework (called STAG) attempts to solve these problems. It is a piece of middleware that sits on the Siemens network and provides tagging functionality through web services to Siemens’ blogging platform, wiki, discussion forums and Sharepoint sites. These tags can then be displayed using simple widgets. The semantic problem is solved by a thesaurus editor that allows people to define synonyms for tags and create relationships between related tags.
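
As a sketch of the idea (the real STAG is Siemens-internal middleware exposed as web services; the class and method names below are invented), a central tagging service with a thesaurus layer might look like this:

```python
class TagService:
    """Central tag store with a thesaurus mapping spelling variants
    to one canonical tag, so decentralised apps share one vocabulary."""

    def __init__(self):
        self.synonyms = {}   # variant -> canonical tag
        self.tags = {}       # resource id -> set of canonical tags

    def define_synonym(self, variant, canonical):
        self.synonyms[variant.lower()] = canonical.lower()

    def _canonical(self, tag):
        tag = tag.lower()
        return self.synonyms.get(tag, tag)

    def tag(self, resource, *tags):
        self.tags.setdefault(resource, set()).update(
            self._canonical(t) for t in tags)

    def resources_for(self, tag):
        wanted = self._canonical(tag)
        return [r for r, ts in self.tags.items() if wanted in ts]

svc = TagService()
svc.define_synonym("weblog", "blog")
svc.tag("wiki:page-1", "Blog", "knowledge-management")
svc.tag("forum:thread-9", "weblog")
print(svc.resources_for("blog"))  # both resources, despite different spellings
```

In a real deployment the blogging platform, wiki and Sharepoint sites would call something like this over SOAP or REST instead of in-process.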

I strongly believe that any large corporation would be very much helped with a centralised tagging facility which can be utilised by decentralised applications. This kind of methodology should actually not only be used for tagging but could also be used for something like user profiles. How come I don’t have a profile widget that I can include on our corporate intranet pages?

The second talk, by Dada Lin, was titled “A Knowledge Management Scheme for Enterprise 2.0”. He presented a framework that should be able to bridge the gap between Knowledge Management and Enterprise 2.0. It is called the IDEA framework in which knowledge is seen as a process, not as an object. The framework consists of the following elements (also called “moments”):

  • Interaction
  • Documentation
  • Evolution
  • Adoption

He then placed these moments into three dimensions: Human, Technology and Organisation. Finally he presented some research around a Confluence installation at T-Systems. None of this was really enlightening to me; I was, however, intrigued to notice that the audience focused more on the research methodologies than on the outcomes of the research.

The final talk, “Enterprise Microblogging at Siemens Building Technologies Division: A Descriptive Case Study” by Johannes Müller, a senior Knowledge Management manager at Siemens, was quite entertaining. He talked about References@BT, a community at Siemens that consists of many discussion forums, a knowledge reference and, since March 2009, a microblogging tool. It has 7000 members in 73 countries.

The microblogging platform was built by Müller himself and thus has exactly the features it needs to have. One of the features he mentioned was that it shows a picture of every user in every view of the microblog posts. This is now a standard feature in lots of tools (e.g. Twitter or Facebook), and it made me realise that Moodle was actually one of the first applications I know of to do this consistently: another example of how forward thinking Martin Dougiamas really was!

Müller’s microblogging platform does allow posts of more than 140 characters, but does not allow any formatting (no line-breaks or bullet points for example). This seems to be an effective way of keeping the posts short.
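
The behaviour described, longer-than-140-character posts with no formatting, amounts to collapsing all line breaks and repeated whitespace on the way in. A guess at what such a normalisation step could look like (the real platform's rules are not published):

```python
import re

def normalize_post(text, max_len=None):
    """Collapse line breaks and runs of whitespace into single spaces.
    With no way to express structure, authors are nudged towards
    genuinely short posts. max_len is an optional hard cap."""
    text = re.sub(r"\s+", " ", text).strip()
    if max_len is not None:
        text = text[:max_len]
    return text

print(normalize_post("First point\n- second point\n\n- third point"))
# "First point - second point - third point"
```

A bulleted draft survives only as one flat sentence, which illustrates why the restriction keeps posts short.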

He shared a couple of strategies that he uses to get people to adopt the new service. Two important ones were the provision of widgets that can be included in more traditional pages on the intranet, and the ability to import postings from other microblogging sites like Twitter using a special hash tag. He has also sent out personalised emails to users with follow suggestions. These were hugely effective in bootstrapping the network.

Finally he told us about the research he has done to get some quantitative and qualitative data about the usefulness of microblogging. His respondents thought it was an easy way of sharing information, an additional channel for promoting events, a new means of networking with others, a suitable tool to improve writing skills and a tool that allowed for the possibility to follow experts.

Know-Center Graz
During lunch (and during the Bacardi sponsored welcome reception) I had the pleasant opportunity to sit with Michael Granitzer, Stefanie Lindstaedt and Wolfgang Kienreich from the Know-Center, Austria’s Competence Center for Knowledge Management.

They have done some work for Shell in the past around semantic similarity checking and have delivered a working proof of concept in our Mediawiki installation. They demonstrated some of their new projects and we had a good discussion about corporate search and how to do technological innovation in large corporations.

The first project that they showed me is called the Advanced Process-Oriented Self-Directed Learning Environment (APOSDLE). It is a research project that aims to develop tools that help people learn at work. To rephrase it in learning terms: it is a very smart way of doing performance support. The video below gives you a good impression of what it can do:

[youtube=http://www.youtube.com/watch?v=4ToXuOTKfAU?rel=0]

After APOSDLE they showed me some outcomes from the Mature IP project. From the project abstract:

Failures of organisation-driven approaches to technology-enhanced learning and the success of community-driven approaches in the spirit of Web 2.0 have shown that for that agility we need to leverage the intrinsic motivation of employees to engage in collaborative learning activities, and combine it with a new form of organisational guidance. For that purpose, MATURE conceives individual learning processes to be interlinked (the output of a learning process is input to others) in a knowledge-maturing process in which knowledge changes in nature. This knowledge can take the form of classical content in varying degrees of maturity, but also involves tasks & processes or semantic structures. The goal of MATURE is to understand this maturing process better, based on empirical studies, and to build tools and services to reduce maturing barriers.

Mature

I was shown a widget-based approach that allowed people to tag resources, put them in collections and share these resources and collections with others (more information here). One thing really struck me about the demo I got: they used a simple browser plugin as a first point of contact for users with the system. I suddenly realised that this would be the fastest way to add a semantic layer over our complete intranet (it would work for the extranet too). With our desktop architecture it is relatively trivial to roll out a plugin to all users. This plugin would allow users to annotate webpages on the net creating a network of meta-information about resources. This is becoming increasingly viable as more and more of the resources in a company are accessed from a browser and are URL addressable. I would love to explore this pragmatic direction further.
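
The core of such a plugin-backed annotation layer is just a shared store keyed by URL, so that any URL-addressable intranet resource can accumulate meta-information. A minimal sketch of the idea (all names and the data model are invented):

```python
from collections import defaultdict

class AnnotationStore:
    """Server-side store the browser plugin would talk to:
    annotations (user, tags, free-text note) keyed by page URL."""

    def __init__(self):
        self.by_url = defaultdict(list)

    def annotate(self, url, user, tags, note=""):
        self.by_url[url].append(
            {"user": user, "tags": set(tags), "note": note})

    def annotations(self, url):
        return self.by_url[url]

    def urls_tagged(self, tag):
        # Cross-resource view: every page anyone has tagged this way.
        return [u for u, anns in self.by_url.items()
                if any(tag in a["tags"] for a in anns)]

store = AnnotationStore()
store.annotate("http://intranet/policies/travel", "hans",
               ["policy", "travel"], "Out of date since 2009?")
store.annotate("http://intranet/projects/apollo", "ada", ["travel"])
print(store.urls_tagged("travel"))  # both pages
```

The plugin itself would only need to send the current page's URL plus the user's input to endpoints like these, which is what makes the rollout so cheap.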

Knowledge Sharing
Martin J. Eppler from the University of St. Gallen seems to be a leading researcher in the field of knowledge management: when he speaks, people listen. He presented a talk titled “Challenges and Solutions for Knowledge Sharing in Inter-Organizational Teams: First Experimental Results on the Positive Impact of Visualization”. He is interested in the question of how visualization (mapping text spatially) changes the way that people share knowledge. In this particular research project he focused on inter-organizational teams. He tries to make his experiments as realistic as possible, so he used senior managers and real-life scenarios, put them in three experimental groups and set them a particular task. One group was supported with special computer-based visualization software, another group used posters with templates and a final (control) group used plain flipcharts. After analysing his results he was able to conclude that visual support leads to significantly greater productivity.

This talk highlights one of the problems I have with science applied in this way. What do we now know? The results are very narrow and specific. What happens if you change the software? Is this the case for all kinds of tasks? The problem is: I don’t know how scientists could do a better job. I guess we have to wait till our knowledge-working lives can really be measured consistently and in real time, and then for smart algorithms to find out what really works for increased productivity.

The next talk in this track was by Felix Mödritscher, who works at the Vienna University of Economics and Business. His potentially fascinating topic, “Using Pattern Repositories for Capturing and Sharing PLE Practices in Networked Communities”, was hampered by the difficulty of explaining the complexities of the project he is working on.

He used the following definition for Personal Learning Environments (PLEs): a set of tools, services, and artefacts gathered from various contexts and to be used by learners. Mödritscher has created a methodology that allows people to share good practices in PLEs. First you record PLE interactions, then you allow people to depersonalise these interactions and share them as an “activity pattern” (distilled and archetypical), after which others can pick these up and repersonalise them. He has created a pattern repository with a pattern store. It has a client-side component implemented as a Firefox extension: PAcMan (Personal Activity Manager). It is still early days, but these patterns appear to be really valuable: they not only help with professional competency development, but also with what he calls transcompetences.

I love the idea of using design patterns (see here), but thought it was a pity that Mödritscher did not show any very concrete examples of shared PLE patterns.

My last talk of the day was on “Clarity in Knowledge Communication” by Nicole Bischof, one of Eppler’s PhD students in the University of St. Gallen. She used a fantastic quote by Wittgenstein early in her presentation:

Everything that can be said, can be said clearly

According to her, clarity can help with knowledge creation, knowledge sharing, knowledge retention and knowledge application. She used the Hamburger Verständlichkeitskonzept as a basis to distill five distinct aspects of clarity: Concise content, Logical structure, Explicit content, Ambiguity low and Ready to use (the first letters conveniently spell “CLEAR”). She then did an empirical study about the clarity of Powerpoint presentations. Her presentation turned tricky at that point, as she was presenting in Powerpoint herself. The conclusion was a bit obvious: knowledge communication can be designed to be more user-centred and thus more effective; clarity helps in translating the innovation potential of knowledge and can help with a clear presentation of complex knowledge content.

Bischof did an extensive literature review and found that clarity is an under-researched topic. Having just read Tufte’s anti-Powerpoint manifesto, I am convinced that there is a world to gain for businesses like Shell. So much of our decision making is based on Powerpoint slidepacks that making them optimal becomes incredibly urgent.

Never walk alone
I am at this conference all by myself and have come to realise that this is not the optimal situation. I want to be able to discuss the things that I have just seen and collaboratively translate them to my personal work situation. It would have been great to have a sparring partner here who shares a large part of my context. Maybe next time?!

My Top 10 Tools for Learning 2010

CC-licensed photo by Flickr user yoppy

For this year’s edition of the Top 100 Tools for Learning (a continuing series started, hosted and curated by Jane “Duracell Bunny” Hart of the Internet Time Alliance) I decided to really reflect on my own learning process. I am a knowledge worker and need to learn every single day to be effective in my job. I have agreed with my manager to only do very company-specific formal training. Things like our Leadership development programs or the courses around our project delivery framework are so deeply embedded in our company’s discourse that you miss out if you don’t allow yourself to learn the same vocabulary. All other organised training is unnecessary: I can manage myself, and that is the only way in which I can make sure that what I learn is actually relevant for my job.

So what tools do I use to learn?

1. Goodreads in combination with Book Depository
The number one way for me personally to learn is by reading a book. When I started as an Innovation Manager in January I wanted to learn more about innovation as a topic and how you could manage an innovation funnel. I embarked on a mission to find relevant books. Nowadays I usually start at Goodreads, a social network for readers. I like the reviews there more than the ones on Amazon and I love the fact that I can get real recommendations from my friends. Goodreads has an excellent iPhone app making it very easy to keep tabs on your reading habits. I found a bunch of excellent books on innovation (they will get a separate post in a couple of weeks).
My favourite book store to buy these books is Book Depository (please note that this is an affiliate link). They have worldwide free shipping, are about half the price of the book stores in the Netherlands and ship out single books very rapidly.

2. Twitter and its “local” version Yammer
Ever since I got an iPhone I have been a much keener Twitter user (see here and guess when I got the iPhone). I have come to realise that it is a great knowledge management tool. In recent months I have used it to ask direct questions to my followers, I have used it to follow live news events as they unfold, I have searched to get an idea of the Zeitgeist, I have used it to have a dialogue around a book, and I have used it as a note taking tool (e.g. see my notes on the Business-IT fusion book, still available thanks to Twapperkeeper).
Yammer is an enterprise version of Twitter that is slowly taking off in my company. The most compelling thing about it is how it cuts across all organizational boundaries and connects people that can help each other.

3. Google
Google does not need any introduction. It is still my favourite search tool and many of my searches still start there. I have to admit that those searches are often very general (i.e. focused on buying something or on finding a review or a location). If I need structured information I usually default to Wikipedia or Youtube.

4. Google Reader
I have about 300 feeds in Google Reader of which about 50 are in my “first read” category, meaning I follow them religiously. This is the way I keep up with (educational) technology news. What I love about Google Reader is how Google has made a very mature API available allowing people to write their own front-end for it. This means I can access my feeds from a native iPhone app or from the web or from my desktop while keeping the read counts synchronised. Another wonderful thing is that Google indexes and keeps all the feed items once you have added the feeds. This means that you can use it to archive all the tweets with a particular hash tag (Twitter only finds hash tags from the last two weeks or so when you use their search engine). Finally, I have also used Google Reader as a feed aggregator. This Feedburner feed, for example, was created by putting three different feeds in a single Google Reader folder (more about how to do that in a later post).
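
Aggregating a folder of feeds into one feed boils down to pooling the items and sorting them by publication date. A sketch of that step, with inline sample feeds standing in for real subscriptions:

```python
from email.utils import parsedate_to_datetime
import xml.etree.ElementTree as ET

# Two minimal RSS documents standing in for real feed subscriptions.
FEED_A = """<rss><channel>
  <item><title>Old post</title><pubDate>Mon, 05 Jul 2010 09:00:00 +0000</pubDate></item>
</channel></rss>"""
FEED_B = """<rss><channel>
  <item><title>New post</title><pubDate>Tue, 06 Jul 2010 09:00:00 +0000</pubDate></item>
</channel></rss>"""

def merge_feeds(*feeds):
    """Pool all <item> entries and return them newest first."""
    items = []
    for xml in feeds:
        for item in ET.fromstring(xml).iter("item"):
            items.append({
                "title": item.findtext("title"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    return sorted(items, key=lambda i: i["date"], reverse=True)

merged = merge_feeds(FEED_A, FEED_B)
print([i["title"] for i in merged])  # ['New post', 'Old post']
```

Google Reader and Feedburner do this (plus fetching, deduplication and re-serialisation) behind the scenes; the folder is effectively just the argument list to a merge like this one.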

5. Wikipedia (and Mediawiki)
The scale of Wikipedia is stupefying and the project still does not seem to run out of steam. The Wikimedia organization has just rolled out some enhancements to their Mediawiki software allowing for easier editing. The openness of the project allows for people to build interesting services on top of the project. I love Wikipanion on my iPhone and I have enthusiastically used Pediapress a couple of times to create books from Wikipedia articles. I find Wikipedia very often (not always!) offers a very solid first introduction to a topic and usually has good links to the original articles or official websites.

6. Firefox
Even though I have written earlier that I was a Google Chrome user, I have now switched back and let Mozilla’s Firefox be the “window” through which I access the web. This is mainly for two reasons. The first is that I am incredibly impressed with the ambitions of Mozilla as an organization: their strategy for making the web a better place really resonates with me. The other reason is Firefox Sync, which lets me use my aliased bookmarks and my passwords on multiple computers. I love Sync for its functionality but also for its philosophy: you can run your own Sync server instead of using Mozilla’s, and all the sync data is stored encrypted, needing a passphrase on the client to get to it.

7. LinkedIn
It took a while before I started to see the true benefits of LinkedIn. A couple of weeks ago I had some questions to ask people with experience implementing SAP Enterprise Learning in large organizations. LinkedIn allowed me to search for and then contact people who have SAP Enterprise Learning in their profile in some way. The very first person that I contacted forwarded me on to a SAP Enterprise Learning discussion group on LinkedIn. I asked a few questions in that forum and had some very good public and private answers to those questions within days. In the past I would only have had access to that kind of market information if SAP had been the broker of this dialogue or if I had bought it from analysts like Bersin. LinkedIn creates a lot of transparency in the market place, and transparency is a good thing (especially for customers).

8. WordPress (including the WordPress.com network) and FocusWriter
Writing is probably one of the best learning processes out there, and writing for other people is even better. WordPress is used to publish this post, while I use a simple cross-platform tool called FocusWriter to give me a completely uncluttered screen with just the words (no menus, window edges or status bars!). WordPress is completely free to use. You can either opt for a free (as in beer) hosted version that you can set up within seconds on http://www.wordpress.com, or you can go the free (as in speech) route where you download the application, modify it to your needs and host it where you want. If I were still a teacher, this would be the one tool that I would let all of my students use as much as possible.

9. Youtube
The quantity of videos posted on Youtube is hard to comprehend. It was Rob Hubbard who first showed me how you could use the large amount of great tutorials to great effect. He rightfully thought: why would I put a lot of effort into developing a course on how to shoot a great video if I can just link to a couple of excellent, well produced, short, free videos that explain all the most important concepts? The most obvious topics to learn about are music (listening to music and learning how to play music) and games (walkthroughs and cheat codes), but there are already lots of great videos on other topics too.

10. Moodle and the community on Moodle.org
Moodle is slowly slipping to the bottom of my list. In the last few years a lot of my professional development was centred around Moodle, and I still owe many of the things I know about educational technology, open source and programming/systems administration to my interactions in the forums at Moodle.org. Two things have caused Moodle to become less important to my own learning:
1. I now have a job in which I am tasked to try and look ahead and see what is coming in the world of enterprise learning technology. That is a broad field to survey and I have been forced to generalise my knowledge on the topic.
2. I have become increasingly frustrated with the teacher-led pedagogical model that all Virtual Learning Environments use. I do believe that VLEs “are dead”: they don’t fully leverage the potential of the net as a connection machine; instead, they are usually silos that see themselves as the centre of the learning technology experience and lack the capabilities to support a more distributed experience.

Previous versions of my Top 10 list can be found here for 2008 and here for 2009. A big thank you again to Jane for aggregating and freely sharing this hugely valuable resource!