Tony Karrer's eLearning Blog on e-Learning Trends

Wednesday, April 15, 2009

Social Learning Measurement

It's been a while since I looked at ROI and Metrics in eLearning, and in that post I was mostly looking at traditional content delivery and its impact.  I've recently become more interested in measures around social learning.  And I think I need real help here understanding how to measure this stuff.

And I just saw a post, Measuring Networked (or Social) Learning, that discusses how we could go about measuring social learning.  In it, Eric suggests the following minimum measures:

    • Networking patterns - the relationship between people and content categories, the network make-up or profile (business unit, job, level, etc.), key brokers and influencers by content category, and the degree of networking across silos.
    • Learning efficiency - time lag between posting content and when content is viewed, amount of time spent producing content for others to view, amount of redundant or significantly overlapping content, and the degree to which “informal” content is reused in “formal” content (perhaps reducing formal content development costs and effort).
    • Learning needs - differences between the learning needs or demand for “formal” versus “social” learning (are some skills best learnt formally?), and most popular learning needs by job, level, business unit, etc.
    • Contribution patterns - most active contributors and methods of contribution, busiest days and times for contributing, and frequency and amount of contributions by job, level, business unit, etc.
    • Content usage patterns - preferred ways to consume various content topics, busiest days and times for viewing content, amount of time spent viewing content and participating in discussion threads and blogs, and preferred ways to “find” content.
    • Content quality - ratings by content category, contributor, and medium, amount of “inappropriate” or “wrong” content reported by users, and the amount and type of content with very few or very many hits or views.

I'm not sure I buy how much real impact any of this will have on bottom-line measures.  A recent MIT study showed that more highly networked individuals were more productive (see Workplace Productivity).  So the size and reach of networks might be a proxy, but what about all the rest of these suggested measures?  Do we really believe they will be proxies for effectiveness?

I like what Kevin Jones talked about in Objection #13: How Do You Measure ROI?

How do you know that it is producing bottom line results? …

Rachel Happe suggested some measurements of ROI. A lot of them are for customer-facing environments, but some are internal. Among those were:

    • Number of new product ideas
    • Idea to development initiation cycle time
    • Retention/employee turnover
    • Time to hire
    • Prospect identification cost
    • Prospect to hire conversion rate
    • Hiring cost
    • Training cost
    • Time to acclimation for new employees

Remember, we are looking at the final outcome, not necessarily “did they learn”. Because, honestly, we don’t care if they learn if they don’t use it for the benefit of the company. So the benefit is what we measure.  Other measurements might be:

    • How large one’s network is
    • Number of meetings taking place (or, more intuitively, NOT taking place)
    • Number of travel arrangements made (or, again, NOT made).

I always like to work on projects where there are clear metrics that are the focus at the end of the day (see Data Driven).  But in many cases, we need to define Intermediate Factors in Learning (see also Intermediate Factors).  As an example, we might be focusing on Customer Loyalty.  However, that metric is far too hard to measure and impact directly, so we might say there are intermediate factors such as customer satisfaction (based on surveys), recent contact, staff knowledge, etc.  We often know that these factors are contributors to the end measure.  Thus, we can define these as the actual goal.
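To make the intermediate-factor idea concrete, it can be sketched as a weighted composite of measurable proxies standing in for the hard-to-measure end goal. The factor names, scales, and weights below are purely illustrative, not anything proposed in the post:

```python
# Hypothetical sketch: combining measurable intermediate factors into a
# single proxy score for a hard-to-measure end goal (e.g. customer loyalty).
# The weights are made up for illustration; a real effort would validate
# them against the end measure over time.

def loyalty_proxy(satisfaction, recent_contact, staff_knowledge,
                  weights=(0.5, 0.2, 0.3)):
    """Combine normalized intermediate factors (each on a 0.0-1.0 scale)
    into one proxy score via a weighted sum."""
    factors = (satisfaction, recent_contact, staff_knowledge)
    return sum(w * f for w, f in zip(weights, factors))

# e.g. survey satisfaction 0.8, recent contact 0.6, staff knowledge 0.7
score = loyalty_proxy(0.8, 0.6, 0.7)
```

The point of the sketch is only that once the intermediate factors are defined and weighted, they become a trackable goal in their own right.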

What is particularly challenging about social learning measurement is understanding what these intermediate factors are going to be.  If you have really good support for networks, communities, social learning, etc. – how would we expect that to impact (in a measurable way) the above factors?  Why?  Why would this impact something like engagement or turnover?  Is there any proof?  What does that suggest about the intermediate measures?  Any pointers to good resources on this?

7 comments:

Will Thalheimer said...

Measuring Social Learning is going to be VERY difficult. We ought to measure everything of course until we really understand what's going on. One thing to remember--there are basically three reasons to measure learning:

1. To prove benefits
2. To improve the intervention
3. To improve learning.

And of course it's more complicated (I made a list of 18 or so of these a while back), but you get my point.

All the "Activity Measures" (e.g., number of posts, how often people read them, quality ratings) are going to be especially useful for helping us improve our social-media designs (and support), but NOT so helpful to prove/show cost-benefit, AND certainly very little to actually improve learning.

ALSO, I see very little being discussed about opportunity costs. Social media has costs in terms of time, distraction, resources, attention span, etc., so we need to look at the dark side too.

My earlier article, which catalogs some of these concerns, is I think still valuable, though it too misses much. http://www.work-learning.com/Catalog/Documents/EvaluatingLearning20_FinalVersion.pdf

Sreya Dutta said...

Hi Tony, just the point I was thinking of after some discussion about implementing such a model in my organization. I've blogged my thoughts on this. Please let me know what you think. We do need a solution that comes with some tangible outcome.

Sreya

Tony Karrer said...

Will - great points. Sidenote: Blogger does accept anchor tags. Makes it easier for people to click through to your whitepaper. But it doesn't seem to allow me to edit existing comments.

Your article is a good one to go back to - although I have some disagreements with things in it, and I think we are both challenged with figuring out exactly what we can and should do around measurement of this stuff.

What's interesting to me is that with eLearning 2.0 or social learning or more specifically with using social tools to do things like have interesting conversations - what I want to measure is really not at all what is learned. I want to measure whether the results produced are better. I am not sure I know what they should have learned at all.

In your article, I fear that we are looking at eLearning 2.0 with a traditional learning measurement mindset. Even calling them "learners" is a bit of a challenge.

I look forward to discussing this further.

Sreya - (sidenote: good use of anchor tag in comment) - I enjoyed reading your post. Most of the concerns you cited (access, quality, timeliness) are common issues for any kind of content access.

I agree with you about measuring the outcome as being the most important.

---

Out of both of these comments - I'm coming to realize that we have to shift our mindset about measurement.

Sreya Dutta said...

Thanks Tony, but I couldn't stop at one post. I'm up this late because this issue is still on my mind and I had to get it out. So here's another one with my thoughts on how to make collaborative learning work in an organization. I hope you enjoy this one as well.

Sreya

Anonymous said...

A few thoughts in response to the comments and the post:

We need to worry about measuring outcomes to start - reduction in turnover, reduction in support calls, increased sales, reduction in time-to-competency. I don't understand why these things are so difficult to measure. You measure the starting state, you implement some initiatives, and then you measure the deltas. I'm not trying to be flip. I just don't see the challenge here.
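The commenter's before/after approach can be sketched in a few lines: record baseline outcome metrics, run the initiative, then compute the deltas and a simple ROI figure. The metric names and numbers below are made up for illustration only:

```python
# Illustrative sketch of the measure-the-deltas approach: capture a
# baseline, implement the initiative, measure again, and compare.
# All metric names and values here are hypothetical.

def deltas(baseline, after):
    """Return the change in each outcome metric (after minus baseline)."""
    return {k: after[k] - baseline[k] for k in baseline}

baseline = {"support_calls": 1200, "turnover_pct": 14.0}
after    = {"support_calls": 950,  "turnover_pct": 11.5}

change = deltas(baseline, after)  # negative values = reductions

# A simple ROI figure: (benefit - cost) / cost, expressed as a percent.
def roi_pct(benefit, cost):
    return 100.0 * (benefit - cost) / cost
```

Nothing here is sophisticated, which is rather the commenter's point: the hard part is attributing the deltas to the initiative, not the arithmetic itself.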

Pretty soon, we'll also see the advent of reputation management, auto-generated ONA maps, and new reports on things like sentiment analysis, key influencers, and "hot" topics. These will provide some of the analytic meat to relate outcomes to network dynamics, but until then, I don't think we need to get too crazy. Ace showed a 500% ROI in under six months just by connecting dealers through discussion boards so they could share expertise. I'm pretty sure the exec team didn't need any more "proof" of value than that one sentence.

Will, as you might expect, I disagree about opportunity costs. Opportunity cost implies that you should be doing something else instead of sharing / collaborating / creating. Maybe for process-driven, relatively formal orgs this is true, but for virtually every knowledge-based role or organization, the sharing, collaborating, and creating *is* the work. And therefore it can't be an "opportunity" cost. The true opportunity cost is the time you spend searching for docs that should be in a wiki, having closed discussions via email instead of a discussion forum, and making endless phone calls trying to find expertise in the org that should be readily searchable via social profiles.

Unless you really screw up your social learning models, those work activities will become significantly more efficient and innovative through the application of social media concepts and technologies.

There are numerous existing examples of companies hitting the ball out of the park on this stuff. I have some at my blog (including the Ace story). Another great source is the work Rob Cross and Sal Parise are doing relative to Organizational Network Analysis (ONA) and the related business benefits of strengthening informal networks or better aligning them to meet business objectives.

Anyway, that's my two cents for now.

Eric Davidove said...

In my blog, I was focused on metrics that would help us to understand, manage, and improve the networked (or social) learning eco-system. Clearly we also need to look at results and outputs that come about as a result of social learning. I will update my blog to include some thoughts about measuring the impact of social learning.

Amanda Crowe said...

I think the question that is raised here is a question that needs to be asked prior to asking about measurement. I don't believe that social media is a 'strategy'. Social media allows us new ways of listening, engaging, connecting and learning about what our customers and potential customers are saying about our brand/product/service.