IT: From Liability to Asset

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. Nowadays IT is as ubiquitous in a working environment as water, electricity and a toilet. Unfortunately, this is why many managers interpret IT as a utility and often see it as a liability. For this post we studied 3 organograms which popped up after a Google search and describe in 500 words what is (probably) wrong or right with them in terms of the role and place of the IT department. You can read Arjen’s post with the same title here.

Peter Hinssen has written a book titled Business/IT Fusion. In it, he attacks the current focus on aligning IT and the business and proposes to truly integrate IT into the business: a fusion. In his introduction he writes:

So, cost reduction in IT typically enhances the stereotype of IT being a commoditized, non-differentiating function. [..] Alignment is like a slow-acting poison that initially shows no signs of having a negative effect, but which paralyzes an IT organization by inducting servant like behavior.

His website shows the transformation that he is advocating:

IT 1.0 and IT 2.0 from Hinssen 2009, page 17

Drucker wrote in The Practice of Management:

Because the purpose of business is to create and keep a customer, the business enterprise has two – and only two – basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs.

I am a strong believer in IT as one of the drivers of innovation. Hinssen references a 2005 Harvard Business Review article by Nolan and McFarlan, Information Technology and the Board of Directors, in which they publish an IT strategic impact grid:

IT Strategic Impact Grid (click to Enlarge)

If you believe that IT should be playing on the offensive side of the grid, then this should be reflected in your organizational structure. So let’s look at the following three randomly chosen organograms and see whether the organizations consider themselves to be in factory mode, support mode, turnaround mode or strategic mode and whether they show signs of fusion. One way of doing this is to try and see whether there is a separate IT branch or not and whether IT and business processes are in the same box.

First Hibernia college, an online college in Ireland:

Organogram Hibernia (Click to enlarge)

IT seems to reside under the Chief Knowledge Officer (who reports into the director of Operations). An online course seems to need development, instructional design and learning technology (each has its own manager), and it seems pretty clear that technology is part of the core operations. This organization probably considers itself to be in strategic mode (which makes sense for an online college!).

Next ONGC India, an Oil and Natural Gas Corporation:

Organogram ONGC India (Click to enlarge)

This one is a bit of a mystery to me. There is a chief “Infocom” who reports into a role that is called “…To be filled…”. I imagine that this company has multiple IT perspectives. The director of Tech & Field Services will have to steer a lot of technology and will be well integrated with the business. The Infocom chief will likely have a much harder time playing in the offensive space.

Finally Uniex Ghana Limited, a trading company and consultancy:

Organogram Uniex Ghana (Click to enlarge)

This is a very traditional organogram. There does not seem to be a high-level IT role (no CIO), and there are two layers of organization between the Marketing & IT manager (one of the most important roles if we agree with Drucker) and the CEO. I wouldn’t be surprised if this organization considers itself to be in support mode.

My main conclusion from this exercise is that an organizational diagram is not a good indicator of the role that IT plays in a business. I cannot seem to parse these diagrams in a way that lets me really understand how IT is seen in the company. Is that my lack of competence? I now realize I need to do some more thinking in this area.

Looking at your own organization, what mode do you seem to be operating in?

Privacy and the Internet – A Talk at the HvA

Bits of Freedom is doing important work (and they are effective in the way they do it). I am therefore honoured to occasionally field some of their speaker requests. Today I presented at the Hogeschool van Amsterdam on Privacy and the Internet and had some good talks with the students afterwards.

I am not sure the slides make a lot of sense without the audio, but if you augment them with a visit to some of the links in this bundle, then you might understand a bit better in which ways the Internet’s persistence, replicability, scale and searchability (thank you danah) should affect the way we think about privacy going forward.

[slideshare id=8031673&doc=110520privacyandtheinternet-110519151745-phpapp01]

You can also download the slides as an 8.1MB PDF file.

The “Narrating Your Work” Experiment

I have just finished writing a small proposal to the rest of my team. I thought it would be interesting to share here:

Introduction

We work in a virtual team. Even though there aren’t many of us, we often have few ideas about what the other people in our team are working on, which people they have met recently and what they are struggling with. The time difference between our main offices makes our occasional feelings of being disconnected worse.

This “Narrating Your Work” experiment is an attempt to help overcome these problems.

If you are interested in some background reading, you should probably start with Luis Suarez’ blog post about narrating your work (“it’s all about the easiest way of keeping up with, and nurturing, your working relationships by constantly improving your social capital skills”) and then follow his links to Dave Winer, ambient intimacy and declarative living.

The experiment

“Narrating Your Work” should really be approached as an experiment. When it was first suggested, some people expressed hesitation or worries. We just don’t know whether and how it will work yet. The best way to find out is by trying. In Dutch: “niet geschoten, altijd mis” (roughly: “if you never shoot, you always miss”).

The experiment will have a clear-cut start and will last for two months. After running the experiment we will do a small survey to see what people thought of it: Did it deliver any benefits? If any to whom? Was it a lot of work to write updates? Did it create too much reading to do? Do we want to continue with narrating our work? Etc.

Three ways of participating

It needs to be clear who is participating in the experiment. If you decide to join, you commit to doing one of the following three things (you are allowed to switch between them and you will be “policed”):

  1. Constant flow of updates: Every time you meet somebody who is not in the team, every time you create a new document or every time you do something that is different from just answering your emails, you will write a very short status update to say what you are doing or what you have done. This will create a true “activity stream” around the things you do at work.
  2. Daily updates: At the end of your day you give a one paragraph recap of what you have done, again focusing on the people you have met, the places you have visited or the things you have created.
  3. Weekly updates: On Friday afternoon or on Monday morning you write an update about the week that has just passed. To give this update some structure, it is suggested that you write about two things that went very well, two things that went less well and two things that are worrying to you (or at least will require attention in the next week).

The first option requires the most guts, whereas the last option requires the most diligence: it is not easy to take the time every week to look back at what happened over the last five working days. Are you the type of person who likes to do the dishes as the day progresses, or the type who leaves them until there is nothing clean left? Choosing one of the first two options (rather than the third) will give the experiment the greatest chance of success.

Participation only requires a commitment to write the updates. You are not expected to read all updates of the others, although you might very well be tempted!

How to do it: making it work

To make the work updates easily accessible we will use Yammer. You can do this in two ways:

  • You can post the work update with the tag #nywlob to your followers. People will see this message when they are following you, when they are watching the company feed or when they follow the nywlob topic.
  • If you don’t feel comfortable posting publicly to the whole company (or want to say something that needs to stay in the team) then you can post in an unlisted and private group. People will only see this message if they are members of the group and we will only let people in who work in the HRIT LoB and have agreed to join the experiment. Posting in this group will limit your chances of serendipity, so the first method is preferred.

When you are posting an update, please think about the people who might be reading it, so:

  • When you refer to a person who is already on Yammer, use the @mention technique to turn their name into a link (and notify them that you mentioned them).
  • If you refer to a person outside of Shell, link to their public LinkedIn profile.
  • If you mention any document or web page, make sure to add the link to the document so that people can take a look at it.

I am very interested in any comments you might have. Does anybody have any experience with this?

Why I Have Deleted my Facebook Account

Arjen Vrielink and I write a monthly series titled: Parallax. We both agree on a title for the post and on some other arbitrary restrictions to induce our creative process. Some people would consider Facebook a threat to the open Internet (e.g. Tim Berners-Lee), whereas other people see it as a key tool for promoting democracy in this world (e.g. Wael Ghonim). We decided to each argue both sides of the argument (300 words “for” and 300 words “against”) and then poll our readers to see which argument they find more persuasive. You can read Arjen’s post with the same title here.

Facebook or no Facebook?

For a couple of months now my pragmatic side has been battling with my principled conscience. The matter of contention: whether to keep my Facebook account.

Why I will delete my Facebook account

There are three main problems with Facebook:

  1. It creates a siloed version of the web. A big reason why the web works is that you can link to other pages without needing anybody’s permission. The Berners-Lee video that I linked to earlier gives some great arguments about why this is important. From this perspective Facebook is a closed silo, creating an alternative network that does not have the same characteristics as the Internet. For some young people around me, the web (if not computing) is nearly synonymous with Facebook: they hardly leave the Facebook browser tab. If they do, it is usually to buy something, and I am sure that soon you will be able to do that from Facebook too (did you know that you can give somebody an Amazon gift certificate, delivered on their Facebook wall on the birthday they have registered with Facebook?).
  2. The social graph is too important to be under the governance of a single commercial US-based company. Knowing how you are connected to other people can lead to powerful applications (see below). In fact, the social experiences this allows are so important that we would be crazy to accept that all this relational data is in the hands of a company that can do with it whatever it wants and might even be forced to share it with the US government. There is no easy way to migrate this social graph into another system, and Facebook displays a very proprietary attitude towards it. What would happen if Facebook were forced to stop doing business, or decided to start charging people for its services?
  3. Their sphere of influence is opaque and ever-increasing. Facebook is all over the web now. What news site does not have a “Like” button? If you have a Facebook account and don’t log out after using it, Facebook can see the URLs of the pages you are reading, even if you never click the Like button. Your attention is mined and commercialized by Facebook. Even with very restrictive privacy settings, your data will still be given to any third-party app that has managed to seduce one of your many Facebook friends. More and more sites are cropping up that only allow you to log in using the Facebook login mechanism, making it harder to use multiple identities on the net. Facebook is becoming so pervasive that it requires tools like Disconnect or Abine’s TACO to make sure you stay out of its clutches. Does this feel like a positive development in the way you can use the web?

Why I will not delete my Facebook account

There are a couple of good reasons for me to keep a Facebook account:

  1. They are past the tipping point. The network effect has come into play. Why should you be on Facebook? Because it is the one and only (global) place where everybody else is! Two years ago I organized a reunion of the very first class I mentored as a teacher. It took weeks of searching using all kinds of media before we got about 50% of the class together. This year we are doing another reunion: within a week we found 95% of the class on Facebook. Facebook facilitates this so-called ambient intimacy with people that you don’t regularly see or talk to, but still want to stay in touch with. What other means of communication has transaction costs this low?
  2. They deliver an incredibly innovative service. Facebook deserves a lot of credit for the ideas that they have implemented and for the pace at which they keep innovating their mind-blowingly large-scale service. They were the first company to create a web platform for which third parties could write applications, they were the first to see and deliver on the true power of the social graph (turning it inside-out) and they have been creative in the way they appropriate and add to ideas about activity streams, sharing in groups and even privacy controls (what other web service gives you that level of control over what you want to share?). For somebody like me, fascinated if not captivated by technology and looking through an innovation lens, there is an immense amount of ever-changing functionality to explore.
  3. Having a centralized social graph leads to powerful applications. The first time I realized this was when I played Bejeweled on my iPhone. It allowed me to connect to my Facebook account and suddenly I wasn’t playing against other people at Internet scale (how can anyone score 20.000 points?!), but was engaged in battles with family, friends and colleagues. Soon there will be a time when every piece of content we consume (books, news, magazines, videos, podcasts) will be enriched by this meta-layer of your friends’ opinions. I call this the social contextualization of content. Facebook’s integration with Pandora was one of the first examples of how this will work. This meta-layer assumes a persistent social graph: you don’t want to keep finding your different groups of relations again and again, do you?

[polldaddy poll=4696600]

Anyway, for me it is clear: I don’t want to be a part of Facebook’s success and would prefer it if we all would be using a differently architected solution in the near future. Fully decentralized and distributed systems are in the making everywhere (e.g. Diaspora, Pagekite, StatusNet, Unhosted and Buddycloud) and I will invest some time to explore those further. As I also personally get very little value out of Facebook, it is not hard to act principled in this case: I will be deleting my account.

Update on 10-11-2012: As I still don’t have a Facebook account I’ve decided to change the title of this post.

Lak11 Week 3 and 4 (and 5): Semantic Web, Tools and Corporate Use of Analytics

Two weeks ago I visited Learning Technologies 2011 in London (blog post forthcoming). This meant I had less time to write down some thoughts on Lak11. I did manage to read most of the reading materials from the syllabus and did some experimenting with the different tools that are out there. Here are my reflections on week 3 and 4 (and a little bit of 5) of the course.

The Semantic Web and Linked Data

This was the main topic of week three of the course. Basically, the semantic web has a couple of characteristics. It separates the presentation of the data from the data itself, and it structures the data so that all of it can be linked up. Technically this is done through so-called RDF triples: a subject, a predicate and an object.
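The triple idea is easy to sketch in code. Below is a minimal, illustrative Python sketch (the `ex:` identifiers are invented for illustration, not a real vocabulary) that stores triples as plain tuples and answers a simple query by following the links:

```python
# RDF-style triples as plain (subject, predicate, object) tuples.
# The "ex:" identifiers are made up for illustration only.
triples = [
    ("ex:TimBernersLee", "ex:invented", "ex:WorldWideWeb"),
    ("ex:WorldWideWeb", "ex:uses", "ex:HTTP"),
    ("ex:TimBernersLee", "ex:advocates", "ex:LinkedData"),
]

def objects_of(subject, predicate):
    """Return all objects reachable from `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("ex:TimBernersLee", "ex:invented"))  # -> ['ex:WorldWideWeb']
```

Because the data is structured rather than presentational, any tool can traverse the links; real semantic web systems use RDF libraries and query languages like SPARQL rather than list comprehensions, but the underlying model is this simple.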

Although he is a better writer than speaker, I still enjoyed this video of Tim Berners-Lee (the inventor of the web) explaining the concept of linked data. His point that we cannot predict what we are going to make with this technology is well taken: “If we end up only building the things I can imagine, we would have failed.”

[youtube=http://www.youtube.com/watch?v=OM6XIICm_qo]

The benefits of this are easy to see. In the forums there was a lot of discussion about whether the semantic web is feasible and whether it is actually necessary to put effort into it. People seemed to think that putting in a lot of human effort to make something easier for machines to read is turning the world upside down. I don’t think that is strictly true: we don’t need strict ontologies, but we could define simpler machine-readable formats and create great interfaces for entering data into them.

Use cases for analytics in corporate learning

Weeks ago Bert De Coutere started creating a set of use cases for analytics in corporate learning. I have been wanting to add some of my own ideas, but wasn’t able to create enough “thinking time” earlier. This week I finally managed to take part in the discussion. Thinking about the problem I noticed that I often found it difficult to make a distinction between learning and improving performance. In the end I decided not to worry about it. I also did not stick to the format: it should be pretty obvious what kind of analytics could deliver these use cases. These are the ideas that I added:

  • Portfolio management through monitoring search terms
    You are responsible for the project management learning portfolio. In the past you mostly worried about “closing skill gaps” by making sure there were enough courses on the topic. In recent years you have switched to making sure the community is healthy, and from developing “just in case” learning interventions towards “just in time” learning interventions. One thing that really helps you in your work is the weekly trending questions/topics/problems list you get in your mailbox: an ever-changing list of things that have been discussed and searched for recently in the project management space. It wasn’t until you saw this dashboard that you noticed a sharp increase in demand for information about privacy laws in China. Because of it you were able to create a document with some relevant links that you now show as a recommended result when people search for privacy and China.
  • Social Contextualization of Content
    Whenever you look at any piece of content in your company (e.g. a video on the internal YouTube, an office document from a SharePoint site or a news article on the intranet), you will not only see the content itself, but also which other people in the company have seen that content, what tags they gave it, which passages they highlighted or annotated and what rating they gave it. There are easy ways for you to manage which “social context” you want to see. You can limit it to the people in your direct team, in your personal network or to the experts (either as defined by you or by an algorithm). You love the “aggregated highlights view”, where you can see a heat map overlay of the important passages of a document. Another great feature is how you can play back chronologically who looked at each URL (seeing how it spread through the organization).
  • Data enabled meetings
    Just before you go into a meeting you open the invite. Below the title of the meeting and the location you see the list of participants. Next to each participant you see which other people in your network they have met with or emailed before, and how recent those engagements were. This gives you more context for the meeting. You don’t have to ask the vendor anymore whether your company is already using their product in some other part of the business. The list also jogs your memory: often you vaguely remember speaking to somebody but cannot seem to remember when you spoke and what about. This tool also gives you easy access to notes on, and recordings of, past conversations.
  • Automatic “getting-to-know-yous”
    About once a week you get an invite created by “The Connector”. It invites you to get to know a person you haven’t met before and always picks a convenient time to do it. Each time you and the other invitee accept one of these invites, you are both surprised that you have never met before, as you deal with similar stakeholders, work on similar topics or have similar challenges. In your settings you have stated your preference for face-to-face meetings, so “The Connector” does not bother you with the video-conferencing sessions that other people seem to like so much.
  • “Train me now!”
    You are in the lobby of the head office waiting for your appointment to arrive. She has just texted you that she will be 10 minutes late as she has been delayed by traffic. You open the “Train me now!” app and tell it you have 8 minutes to spare. The app looks at the required training that is coming up for you, at the expiration dates of your certificates and at your current projects and interests. It also looks at the most popular pieces of learning content in the company and checks whether any of your peers have recommended something to you (it actually also checks whether they have recommended it to somebody else, because the algorithm has learned that this is a useful signal too). It eliminates anything longer than 8 minutes, anything you have looked at before (and haven’t marked as something that could be shown to you again) and anything from a content provider on your blacklist. This all happens in a fraction of a second, after which it presents you with a shortlist of videos to watch. The fact that you chose the second pick instead of the first is of course fed back into the system to make an even better recommendation next time.
  • Using micro formats for CVs
    A simple structured data format is used to capture all CVs in the central HR management system; in combination with the API that was put on top of it, this has allowed a wealth of applications for the structured data.
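The filter-then-rank step in the “Train me now!” use case can be sketched in a few lines of Python. This is only a sketch of the idea: all field names, the sample items and the ranking rule below are my own invention, not an existing system’s API.

```python
# Hedged sketch of the "Train me now!" filtering step: drop anything too
# long, already seen or blacklisted, then rank what remains. Field names
# and the scoring weights are invented for illustration.
def train_me_now(items, minutes_available, seen, blacklist):
    """Return a ranked shortlist of learning content that fits the gap."""
    shortlist = [
        item for item in items
        if item["minutes"] <= minutes_available
        and item["id"] not in seen
        and item["provider"] not in blacklist
    ]
    # Rank by popularity, weighting peer recommendations more heavily.
    shortlist.sort(key=lambda i: i["popularity"] + 2 * i["peer_recs"],
                   reverse=True)
    return shortlist

items = [
    {"id": "a", "minutes": 5,  "provider": "acme", "popularity": 10, "peer_recs": 0},
    {"id": "b", "minutes": 12, "provider": "acme", "popularity": 50, "peer_recs": 3},
    {"id": "c", "minutes": 7,  "provider": "spam", "popularity": 90, "peer_recs": 0},
    {"id": "d", "minutes": 6,  "provider": "acme", "popularity": 8,  "peer_recs": 4},
]
print([i["id"] for i in train_me_now(items, 8, seen={"a"}, blacklist={"spam"})])
# -> ['d']  (a is seen, b too long, c blacklisted)
```

A real implementation would of course learn the weights from user behaviour rather than hard-code them, which is exactly the feedback loop the use case describes.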

There are three more titles that I wanted to write up, but have not had the chance to yet.

  • Using external information inside the company
  • Suggested learning groups to self-organize
  • Linking performance data to learning excellence

Book: Head First Data Analysis

I have always been intrigued by O’Reilly’s Head First series of books. I don’t know any other publisher who is that explicit about how their books try to implement research based good practices like an informal style, repetition and the use of visuals. So when I encountered Data Analysis in the series I decided to give it a go. I wrote the following review on Goodreads:

The “Head First” series has a refreshing ambition: to create books that help people learn. They try to do this by following a set of evidence-based learning principles. Things like repetition, visual information and practice are all incorporated into the book. This is a good introduction to data analysis, but in the end it only scratches the surface and was a bit too simplistic for my taste. I liked the refreshers around hypothesis testing, solver optimisation in Excel, simple linear regression, cleaning up data and visualisation. The best thing about the book is how it introduced me to the open source multi-platform statistical package “R”.

Learning impact measurement and Knowledge Advisers

The day before Learning Technologies, Bersin and KnowledgeAdvisors organized a seminar about measuring the impact of learning. David Mallon, analyst at Bersin, presented their High-Impact Measurement framework.

Bersin High-Impact Measurement Framework

What I found interesting was how the maturity of your measurement strategy is basically a function of how far your learning organization has moved towards performance consulting. How can you measure business impact if your planning and gap analysis isn’t close to the business?

Jeffrey Berk from KnowledgeAdvisors then tried to show how their Metrics that Matter product allows measurement and dashboarding around all the parts of the Bersin framework. They basically do this by asking participants to fill in surveys after they have attended any kind of learning event. Their name for these surveys is “smart sheets” (a much improved iteration of the familiar “happy sheets”). KnowledgeAdvisors has a complete software-as-a-service infrastructure for sending out these digital surveys and collating the results. Because they have all this data they can benchmark your scores against yourself or against their other customers (in aggregate, of course). They have done all the sensible statistics for you, so you don’t have to filter out self-reporting bias or think about cultural differences in the way people respond to these surveys. Another thing you can do is pull in real business data (think sales volumes). By doing some fancy regression analysis it is then possible to see what part of the improvement can be attributed, with some level of confidence, to the learning intervention, allowing you to calculate a return on investment (ROI) for the learning programs.
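To make the regression idea concrete, here is a toy sketch in plain Python: not KnowledgeAdvisors’ actual method, and with made-up numbers, it just shows how an ordinary least squares fit of a business metric against training exposure yields an estimated effect per training hour.

```python
# Toy illustration of attributing a business metric to training via
# simple linear regression (y = a + b*x). The data below is invented.
def linear_fit(xs, ys):
    """Ordinary least squares slope and intercept, pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical data: training hours per rep vs. quarterly sales uplift (%).
hours = [0, 2, 4, 6, 8]
uplift = [1.0, 2.1, 2.9, 4.2, 5.0]
intercept, slope = linear_fit(hours, uplift)
print(round(slope, 3))  # estimated uplift per training hour
```

The slope is the estimated uplift attributable to each hour of training; a real analysis would also report confidence intervals and control for other drivers of the metric, which is where the “some level of confidence” caveat comes in.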

All in all I was quite impressed with the toolset that they can provide and I do think they will probably serve a genuine need for many businesses.

The best question of the day came from Charles Jennings, who pointed out to David Mallon that his talk had referred to the increasing importance of on-the-job and informal learning, yet the measurement framework only addresses measurement strategies for top-down and formal learning. Why was that the case? Unfortunately I cannot remember Mallon’s answer (which probably says something about its quality or relevance!).

Experimenting with Needlebase, R, Google charts, Gephi and ManyEyes

The first tool that I tried out this week was Needlebase. This tool allows you to create a data model by defining the nodes in the model and their relations. Then you can train it on a web page of your choice to teach it how to scrape the information from the page. Once you have done that Needlebase will go out to collect all the information and will display it in a way that allows you to sort and graph the information. Watch this video to get a better idea of how this works:

[youtube=http://www.youtube.com/watch?v=58Gzlq4zSDk]

I decided to see if I could use Needlebase to get some insights into resources on Delicious that are tagged with the “lak11” tag. Once you understand how it works, it only takes about 10 minutes to create the model and start scraping the page.

I wanted to get answers to the following questions:

  • Which five users have added the most links and what is the distribution of links over users?
  • Which twenty links were added the most with a “lak11” tag?
  • Which twenty links with a “lak11” tag are the most popular on Delicious?
  • Can the tags be put into a tag cloud based on the frequency of their use?
  • In which week were the Delicious users the most active when it came to bookmarking “lak11” resources?
  • Imagine that the answers to the questions above were all somebody was able to see about this Learning and Knowledge Analytics course. Would they get a relatively balanced idea of the key topics, resources and people related to the course? What are some of the key things they would miss?
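Once bookmarks like these are scraped into records, the first and fourth questions reduce to frequency counts. A quick Python sketch (the sample records below are made up, not real Delicious data):

```python
from collections import Counter

# Made-up bookmark records of the shape a scraper might produce.
bookmarks = [
    {"user": "alice", "url": "http://example.com/1", "tags": ["lak11", "analytics"]},
    {"user": "bob",   "url": "http://example.com/2", "tags": ["lak11", "semanticweb"]},
    {"user": "alice", "url": "http://example.com/3", "tags": ["lak11", "analytics"]},
]

# Question 1: which users added the most links?
links_per_user = Counter(b["user"] for b in bookmarks)
# Question 4: tag frequencies, the raw material for a tag cloud.
tag_frequencies = Counter(t for b in bookmarks for t in b["tags"])

print(links_per_user.most_common(5))
print(tag_frequencies.most_common())
```

The popularity and weekly-activity questions would work the same way, counting over the URL and date fields instead.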

Unfortunately, after I had done all the machine learning (and had written the above), I learned that Delicious explicitly blocks Needlebase from accessing the site. I therefore had to switch plans.

The Twapperkeeper service keeps a copy of all the tweets with a particular tag (Twitter itself only gives access to the last two weeks of messages through its search interface). I managed to train Needlebase to scrape all the tweets: for each tweet the username, the URL of the user picture and the user ID of the person adding it, who it was a reply to, its unique ID, the longitude and latitude, the client that was used and the date.

I had to change my questions too:

Another great resource that I re-encountered in these weeks of the course was the Rosling’s Gapminder project:

[youtube=http://www.youtube.com/watch?v=BPt8ElTQMIg]

Google has acquired part of that technology and now allows a similar kind of visualization on top of spreadsheet data. What makes the visualization smart is the way it shows three variables (x-axis, y-axis and size of the bubble) and how they change over time. I thought hard about how I could use the Twitter data this way, but couldn’t find anything sensible. I still wanted to play with the visualization, so from the World Bank’s Open Data Initiative I downloaded data about population size, investment in education and unemployment figures for a set of countries per year (they have a nice iPhone app too). When I loaded that data I got the following result:

Click to be able to play the motion graph

The last tool I installed and took a look at was Gephi. I first used SNAPP on the forums of week 1 and exported that data into an XML-based format. I then loaded that into Gephi and could play around a bit:

Week 1 forum relations in Gephi

My participation in numbers

I will have to add up my participation for these two (to three) weeks: in week 3 and week 4 of the course I wrote 6 Moodle posts, tweeted 3 times about Lak11, wrote 1 blog post and saved 49 bookmarks to Diigo.

The hours that I spent playing with all the different tools mentioned above are not included in my self-measurement. However, I really enjoyed playing with them and learned a lot of new things.