Monday, December 22, 2008
I want to thank everyone who voted for and who reads this blog - eLearning Technology. It was announced that we won for Best elearning / corporate education blog.
I previously had some commentary on the edublog awards. I also lobbied for a few of my fellow bloggers who did not win.
The bottom line is that none of us do this to win awards. It's nice to be recognized, but it's not going to change what we do.
In a way, each of us votes every day for blogs. We spend time reading them. Commenting on them. Blogging about them. You can see the ones that I most actively engage with either in the sidebar of this blog or in the sidebar of eLearning Learning.
With the Crisis of Attention, when any of us gives someone our attention, we are spending a valuable resource and giving a gift. I very much appreciate the gift of your attention and your votes.
Friday, December 19, 2008
Personal Learning Books
Brett Miller has taken me up on my 100 Conversation Topics - you can see what's happening at eLearning Learning - 100 Conversations. His post offered some book recommendations for learning professionals. I must say that his list of books is quite interesting and comes from a bit outside where I normally think. I went and ordered one just based on Brett's comment:
Gelb looks at what made the greatest learner of all time the, um, greatest learner of all time.
I wasn't as sure on the others:
- Mastery: The Keys to Success and Long-Term Fulfillment
- The Way of Aikido: Life Lessons from an American Sensei
- The Art of Learning: An Inner Journey to Optimal Performance
Thanks Brett for the suggestions.
Thursday, December 18, 2008
Holding Back
As part of some renewed discussion on blogging such as in New Blog, No Trust, and Audience Member, I had in my notes to go back and discuss the issue of holding back.
When is holding back better than posting?
Clive posted The world's a safer place today (talking about the Obama victory), took some flak for posting something political, and wondered if things were getting a bit too serious.
One anonymous reader commented that he or she was "close to dropping you from my feed list, as I've had about enough of the irrelevant political commentary. Please get back to online learning, instead of pretending to be a political pundit."
He mentions something that I just felt through my poor choices with Little Sandwiches - a time when I should have held back. Clive tells us:
But the response to my Obama posting has made me think that perhaps the situation does change once you get a wide readership, and that this probably does place an extra responsibility on you, the blogger. Having a readership gives you power - not to influence voting in an election, of course, but certainly to influence buying decisions and choices as far as e-learning is concerned. If you don't take that responsibility seriously, you can hurt people that don't deserve to be hurt.
What that gets us to, then, is having to decide what we can and should write. We need to censor ourselves. Dan Roddy talked about the issue of holding back:
There are posts that I've started that I've never published since they run contrary to my employer's position on the matter, or pieces that I've re-read and dropped since they could be interpreted as a critique of work by colleagues and clients (or even my own) that some people may not interpret as being helpful. There are posts where I've simply not been comfortable with the way that I've articulated my point and I've left them with the intention of coming back to edit them and, well, they're still waiting. Heck, there are even comments that I would like to have made on other people's blogs that I've pulled after typing.
My guess is that most of us have gone through a bit of a transformation, learning when holding back makes sense. I probably suffer from not self-censoring enough. But hopefully I'm learning to hold back.
When do you find yourself holding back?
Wednesday, December 17, 2008
Write for Skimming
Back in January 2008, I suggested that people Stop Reading and instead Skim Dive Skim. It received some passionate replies - although not quite what I expected. Most people proved me wrong and they actually read things - see the survey results about whether people found a small bit of text embedded in the middle of my post:
- 74% of the people saw it
- 21% missed it
- 5% not sure
Writing for Skimming
To me this means that we all have to work on our ability to write for skimming. I can't say that I'm all that great at this, but here are some things I try to do.
- Break Up Text - Write relatively short paragraphs with the main idea called out in the text.
- Use Headlines - Break content into major sections and label those sections with headers.
- Use Bullets - Bulleted lists make it much easier for a skimmer to get useful information.
- Bolding - Within copy on the page, it's good to bold words or phrases that you want to jump out. Skimmers will pick up that text first and then may read the rest of the words around it. Don't make everything bold or nothing will jump out.
- Hyperlink Text - The text that goes along with a hyperlink will also jump out at the skimmer. Change the text to fit what you are trying to say.
Can Find You
Someone added a reason not to blog to my Top Ten Reasons To Blog and Top Ten Not to Blog:
Because you think no one will read it ... how can people find it?
I realized that in all my posts talking about blogging and Pushing People to Blog as a learning tool, I had never specifically blogged about how you find readers - or, more appropriately, what you should do so that potential readers can find you. So here are a few specific suggestions to make sure that readers can find you:
- Subscribe to bloggers and get to know what they write about.
- Participate in the Learning Circuits Big Question.
- Engage me in my 100 Conversation Topics.
- Engage any blogger by posting and linking to them (do item #1 first). They won't respond every time, but they do quite often.
- Comment and link to your post in the comments on blogs. It's better to link directly in a new post on the topic, but if you've already posted on a related issue, feel free to link to your blog in a comment. (Make sure you know how the anchor tag works - there's a quick sketch after this list.)
- Ask Questions and Make Openings Clear in your posts in order to get responses.
- Post Controversial Topics, but make sure you believe your position and can take the heat.
- Participate in Blog Carnivals - is the Work Learning carnival still going?
- Twitter about it, especially to Twitter groups such as the upcoming TK09 group.
- Make sure to include a link to your blog in your email footer, social network profiles, etc.
- Include links to your posts (when relevant) in discussion groups.
- Make sure your blog is search engine friendly. Good titles and URLs are a must.
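A couple of the items above assume you're comfortable writing a link by hand. Here's a minimal sketch of the anchor markup - the helper, URL, and link text are all invented for illustration:

```typescript
// Hypothetical helper for linking back to your own post from a blog comment.
// The URL and link text below are made up for illustration.
function commentLink(url: string, text: string): string {
  return `<a href="${url}">${text}</a>`;
}

console.log(commentLink(
  "http://example.blogspot.com/2008/12/my-related-post.html",
  "my post on this topic"
));
// -> <a href="http://example.blogspot.com/2008/12/my-related-post.html">my post on this topic</a>
```

Many comment forms accept that markup directly, and descriptive link text also helps with the search engine friendliness mentioned in the last item.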
What did I miss?
Tuesday, December 16, 2008
Using SharePoint
I've been having fabulous conversations about using SharePoint.
Update Dec. 2009 - We are in the process of getting learning professionals to discuss the use of SharePoint for Learning. Please see SharePoint for Learning Professionals and connect with me around it.
SharePoint is so flexible, and the documentation for it is so big and diverse, that a big part of my goal has been to understand the different ways that training organizations are using SharePoint. In my post SharePoint Examples, there are some great examples in the comments. I've had conversations in more detail with several of these folks and with a few others.
In this post, I wanted to capture some of the patterns of use of SharePoint that seem to be emerging. This is a bit crude, but I thought that folks might find these interesting.
Using SharePoint before, during and after courses
This typically takes the form of sharing best practices, code examples, templates, links; posting announcements; having discussions; showing calendar items; supporting student profiles; supporting student project work; sharing notes, documents; providing course content. The reality is that what we did on the Work Literacy course or what I did for my Collaborative Learning Course could easily be supported by the various types of web parts within SharePoint.
Using SharePoint for Work Team or Communities of Practice (CoP) Collaboration
Outside of any particular formal learning, many training organizations are using SharePoint to support work teams. Typically this involves many of the same aspects as above: document sharing, calendar, discussion, resources, links, profiles, contacts, etc.
Using SharePoint to Publish to Work Teams or Communities of Practice (CoP)
Another common model is more of a publishing model, where the training organization wants to provide ongoing communication to the work teams or CoP. These sites focus more on information push and are less intended for user-contributed content. Obviously, there is a spectrum between using SharePoint to support collaboration and using it to publish. But in discussions there were often distinctions based on what the work team or CoP expected.
Using SharePoint to Publish Content to the Web
Several training organizations were using SharePoint as a means of publishing web pages for public consumption - external constituents. In some cases, login was provided to allow the third party to participate more actively.
Using SharePoint as Project System for Training Organization
Quite a few people talked about how they were using SharePoint as a collaborative tool to work on projects. They would share course materials, project plans, and documents. They had profiles, directories, and blogs to help foster sharing between spread-out teams. Some used it to track bugs; some used it to collaborate with subject matter experts.
Using SharePoint for Event Planning & Organizing
Just like we used a Wiki one year and Ning another to support the online conference LearnTrends, SharePoint can be used to manage all types of events, especially internal events. This is similar to supporting courses, and many of the web parts used, follow-up techniques, etc. were similar.
Using SharePoint for Software Support Site / Help Desk
Another common use of SharePoint was as a reference site, especially a software support site. This provides easy access to support materials. It also makes it easy for the Help Desk to be actively involved in ongoing support.
Quick Thoughts on eLearning 2.0 and SharePoint
The reality with SharePoint is that when you go back and look at the great list of eLearning 2.0 Examples, almost all of these could have been supported through SharePoint. There are some definite challenges to getting SharePoint set up right, rolling it out in smart ways, helping people use it the right way, etc. In some ways, then, SharePoint is well suited to supporting eLearning 2.0.
However, one thing that was very interesting to find in the discussions is that there is a gap between these patterns for using SharePoint and the idea of helping concept workers address the Knowledge Worker Skill Gap and begin to be able to work and learn better. A lot of what eLearning 2.0 is about is helping the individual to self-serve. The individual should be at the core.
In SharePoint, there is MySite, which is more like a portal page showing RSS feeds, lists of SharePoint sites, and shared documents. Possibly smarter use of Outlook is the intent of Microsoft's vision for supporting the knowledge worker. But it was clear from the conversations that we've not quite made the shift to thinking about personal work and learning environments (PWLE) - see: PWLE Not PLE - Knowledge Work Not Separate from Learning, Personal Work and Learning Environments (PWLE) - More Discussion and Personal Work and Learning Environments.
In looking back at the discussion in Training Design, the suggestion is that there's a new piece here that has to do with on-going support. As part of this look at using SharePoint, I'm realizing that it's something a bit more. It's personal. I don't quite have the picture yet.
I welcome other patterns that I've missed and I welcome people chiming in with how they view the personal work and learning aspect.
Audience Member
On ICT in Early Learning, someone (I'm not sure of their name) has responded to my Conversation Topics post (see them via eLearning Learning - 100 Conversations). The basic focus was on their audience - who they see as the typical audience member. This was quite interesting, as one of the first comments was:
I have set up a draft post behind the scenes of the 25 of the 100 conversations that I feel inspired to participate in.
Wow! I'm curious to see what results.
I enjoyed reading this post and it relates to both New Blog and No Trust. It is a great discussion of the process they used to understand their audience and really to find their blogging voice.
What I noticed most was that there is a huge audience out there looking for information about technology and learning for young children. By observing readers' search queries I have begun to target my posts to address some of the queries educators have about technology in education.
It makes me realize that I've probably assumed a lot about who an audience member of my blog really is. It's a bit tough since there are people who come through search and there are a fair number of subscribers. I'm not sure I can accurately identify either kind of audience member. Instead, I tend to think about individual people I know (in vague terms) and write for them as an audience member.
It was interesting to see in the post -
Perhaps just the one, though I know of two others.
The definition is also in terms of a prototype audience member. It's so much easier when you feel you are talking to one person - or a vague idea of a single audience member.
One thing I'm finding in responding to these conversations is whether I'm writing for the one person who initiated the conversation or to my prototype audience member. It's actually causing me a bit of grief. I'm sure I'll find the right pattern as I do more.
I look forward to further conversation. And if you have thoughts on whether this works as a model for my blog, please let me know.
Monday, December 15, 2008
Training Standards
Bill Sawyer posted in response to my Conversation Topics post. You can find posts aggregated via eLearning Learning - 100 Conversations. I've not met Bill before, and this was a great way to start. He is definitely challenged by, and thinking a lot about, training standards.
Bill has quite a few questions in his post:
eLearning is suffering from the Beta/VHS or Blu-ray/HD-DVD challenge. In fact, it is probably even more systemic. For example, is it elearning? eLearning? e-Learning? or E-Learning? Heck, if something doesn't even have a standard for what to call itself, is it really ready for a rev. 2.0?
I'm not really going to address this much. See some thoughts at eLearning or e-Learning vs. learning, but I somewhat agree with Jay Cross (who coined the term eLearning) that it's not worth a whole lot of time trying to define it too closely.
Instead, I'd like to focus on what Bill asks about the challenges around training standards and eLearning 2.0:
What is happening with the eLearning world is that we lack standardization. Should we support Flash? Where does PowerPoint fit into the standards? Should we be supporting OpenOffice? Where does SCORM fit into the picture? Should we demand that our product support SCORM? What about Adobe products vs. Articulate vs. Qarbon?
When I talked about Training Design, one of the things I didn't discuss is how we've gone through repeated waves of innovation. When CBT (CD-ROM based multimedia training) came out, a lot of different authoring tools and approaches came along with it. It was hard to choose a tool because you didn't quite know what you were eventually going to do with it. However, it all settled down to roughly Toolbook, Authorware and IconAuthor. I used to love these tools. Each allowed us to do some pretty incredible things. But then along came the web and WBT (web-based training) - again huge innovation, lots of tools - and this made us uncomfortable with our choices. Still, I actually think things in the world of traditional online courseware development have become much easier. There are a few leading elearning authoring tools that work in most situations. That said, the cycle of innovation is happening so fast now that one cycle doesn't settle completely before the next cycle starts. That's why it feels so uncomfortable all the time ...
Until eLearning vendors bite the bullet, come to real standards on formats, and then the tools and structure can build up to support those standards, eLearning is never going to be what it can be.
He also asks what we should use as the front-end technology, and in which cases:
- HTML + simple JavaScript
- AJAX
- Flash
In terms of SCORM, almost always the answer is yes, authoring tools need to support it. Do you ever plan to track it in an LMS? Then yes. But don't most tools support SCORM at this point?
I completely understand why Bill feels the way he does. The amount of innovation and change and the number of choices definitely make it harder to decide how to approach things. At the same time, asking for standards is likely asking a lot. It's doubtful we are going to see enough coming from standards except in narrow areas like SCORM.
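To make the SCORM point concrete, here's a minimal sketch of what "supporting SCORM" means on the content side: the SCORM 1.2 runtime calls a course makes to report results to an LMS. The findApi stub is an assumption for the sketch; a real course walks parent and opener windows to find the LMS-provided API object.

```typescript
// The subset of the SCORM 1.2 runtime API used below. The LMS exposes this
// object; launched content locates it and reports progress through it.
interface Scorm12Api {
  LMSInitialize(arg: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
}

// Stubbed locator so the sketch stands alone; a real course would search
// window.parent / window.opener for the object the LMS names "API".
function findApi(): Scorm12Api | null {
  return null;
}

function reportCompletion(scoreRaw: number): void {
  const api = findApi();
  if (!api) return; // not launched from an LMS; run standalone
  api.LMSInitialize("");
  api.LMSSetValue("cmi.core.score.raw", String(scoreRaw)); // 0-100 in SCORM 1.2
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSCommit("");
  api.LMSFinish("");
}

reportCompletion(85);
```

This is the plumbing an authoring tool generates for you, which is why the practical question is mostly "will we ever track this in an LMS?" rather than "which format do we standardize on?"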
Bill, I hear you. Certainly, there's a lot to try to figure out. And it's not getting any easier. I'm not sure I buy asking for help from training standards, but there seems to be a need to have some ways to get through the clutter to understand how to structure things.
In a prior post, Bill tells us that:
I train Oracle programmers, primarily internal employees in the E-Business Suite (EBS) line of business, how to write J2EE-based applications for Oracle’s EBS product using our framework called Oracle Applications Framework (FWK).
Given this context, I think I can understand a bit more about why Bill would expect more in the way of training standards. In the world of J2EE app development, there are incredible standards being worked on all the time. These allow all sorts of interoperability. I'm not sure I even know what the equivalent standards would be in the world of eLearning.
At the same time, this happens to be an area where there will likely be high expectations about providing more than just training. Programmers are very much used to accessing code examples and reference libraries, and to seeking and getting help. I'm going to guess that Oracle does quite a bit of this for this exact audience. I have no idea if or how this ties to training standards, but it may be that elements of eLearning 2.0 already exist in this world.
Bill, I look forward to any further thoughts on this.
Friday, December 12, 2008
Related Terms
The recent addition of related terms (relationship factors) in eLearning Learning, which show how related a term is to a given result set, provides some interesting insights. I already pointed to some of the Interesting Information that we can see as we compare what different bloggers write about. I can also run a query (not available through the interface) to see what terms are related to what's being discussed right now.
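eLearning Learning doesn't publish how its relationship factors are computed, so treat this as a hedged sketch of one plausible approach - scoring candidate terms by how much more often they appear in the posts matching a result set than in the corpus overall (all names here are my own):

```typescript
// One plausible "related terms" computation: compare a term's document
// frequency within the result set against its frequency across the corpus.
function relatedTerms(
  matchingPosts: string[][], // terms mentioned by each post in the result set
  allPosts: string[][],      // terms mentioned by each post in the whole corpus
  topN = 10
): [string, number][] {
  const docFreq = (posts: string[][]): Map<string, number> => {
    const counts = new Map<string, number>();
    for (const post of posts)
      for (const term of new Set(post))
        counts.set(term, (counts.get(term) ?? 0) + 1);
    return counts;
  };
  const inSet = docFreq(matchingPosts);
  const overall = docFreq(allPosts);
  const scored: [string, number][] = [];
  for (const [term, n] of inSet) {
    // lift: P(term | result set) / P(term | corpus)
    const lift =
      (n / matchingPosts.length) /
      ((overall.get(term) ?? 1) / allPosts.length);
    scored.push([term, lift]);
  }
  return scored.sort((a, b) => b[1] - a[1]).slice(0, topN);
}
```

Whatever the real algorithm is, something lift-like would explain the results below: terms that spike for a company or a month float to the top.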
Some terms that are getting more attention in the first couple of weeks of this month (December 2008) include Social Media, eLearning Activity, Mobile Learning, Yugma, Slideshare, SharePoint, Twitter, 100 Conversations, Mzinga, and GeoLearning. Some of these are no surprise, but others, such as Yugma, made me notice that version 4 is out - hence people are talking about it more than usual.
It's also interesting to me to drill down another level on a couple of the companies to see what pops for them. For example, I see GeoLearning relates to Learning Portals, Community of Practice, Mentoring, IntraLearn, Learnframe, ViewCentral, GeoMaestro, WBT Manager, WBT Systems, KnowledgeNet, Generation 21, and GeoConnect. Mzinga is shown as related to Personal Learning, Social Software, Learning 2.0, Storyboards, PLEs, CollectiveX, Firefly, Tomoye, KnowledgePlanet, Element K, and Awareness Networks. Not too bad, and it's definitely useful to have the ability to drill down on the GeoLearning Mentoring page to try to understand why those two terms were linked.
Oh, and I don't know if I mentioned it, but you can also use text search to see what terms come up as related to arbitrary search terms.
Let me know if you find interesting related terms as you go.
No Trust
I've been reading various mentions of the new report by Forrester that provides the following information on the sources that people trust - or, basically, that shows there's no trust for blogs.
I held back on posting about this because I thought I was just being defensive. Surely there's more trust than that. But having just seen posts by Ken Allan and Manish Mohan about this issue, I got thinking some more about this issue of no trust of blogs as sources of information. So, a couple of thoughts ...
Do you see what's at the top of the list? Email from people you know. The bottom line is that for most of us, we believe people we know (and likely already trust). I certainly feel that way. I ask people I know about things and that's what often gets me to finally act. This is why I talk about the importance of new skills for Leveraging Networks, Network Feedback, Finding Expertise, Using Social Media to Find Answers to Questions, Learning through Conversation.
But what's interesting about the survey is that there is a built-in assumption that you don't know the blogger. If you asked me whether I would trust information provided by a blogger I didn't know, I likely would respond the same way. However, what I've found through blogging is that I get to know lots of people - maybe especially other bloggers. Thus, when I see them post, there's not this issue of no trust. It is someone I know. No, the communication is not through email, but it's very similar - it acts just like that category. When Brent, Mark, Michele, etc. (wow, these folks are like Madonna and Sting - they only need one name) say in their blogs, "here's this great new tool and here is how it's working for me," that fits into the top category. It gets me to believe and possibly act. If I read it from a well-known blogger with whom I don't have that relationship, I don't trust it the same way. Funny thing - probably not very smart, but it's true.
This does mean that as a person who blogs, you must be extra careful with the trust you are given. You have to be honest. You can't shill. Because most blogs are personal and real human relationships form, you must act in a way that never engenders the no-trust factor.
That said, there are quite a lot of people who come to my blog who don't really know me: they don't have a personal relationship with me, we've not had exchanges around 100 Conversations yet, and so on. And it's a bit depressing to realize that you rank behind direct mail and online classifieds in terms of trust - that they think of what they find here the same way I think about other bloggers I don't know. It's another data point that they will eventually validate through people they do know. A little depressing, but at least it's a data point.
One last thought: how can people respond that they trust portals and search engines? Don't these often surface blog posts? How can that be trusted? To me, a set of search results is the least trustworthy. Sure, I use them, but do I "trust the results"? No way - no trust here for those sources. Give me a fellow blogger (who I know) any day.
Am I being too defensive here?
Thursday, December 11, 2008
Web Conferencing Services
I found a Google Doc via delicious that I have a feeling wasn't intended to be public, but it has such a wonderful comparison of the various web conferencing tools that I felt compelled to copy it here and into a new Google Doc in case the original goes away. I also noticed that Wikipedia has a page - Comparison of web conferencing software - but it doesn't have pricing and a few other columns.
Application | Local Install | Hosted Service | Cost Model | # of users | Scheduling | Video Conf | Telephony Audio Conf | VoIP Audio Conf | Chat | Desktop (Keyboard/Mouse) Sharing | App Sharing | File sharing | Whiteboard | Recording | Interacts w/ LMS | Integration w/ Enterprise Apps | SSL | Training | Field Support | Server Support | URL
Skype | Y | N | Free | 1-9* | Y (1 to 1) | Y* | Y | Y | N | N | Y* | Y | N | N | N | Low | Low | N/A | http://www.skype.com
DimDim | Y | Y | Free - Varies | Varies | Y | - | Y | Y | Y | Y | - | Y | Y | - | N | Med-High | http://www.dimdim.com
Elluminate | Y | Y | Varies | Varies | Y | N | Y | Y | Y | Y | Y | Y | Y | - | Y | Med-High | http://www.elluminate.com
Elluminate V-Room | N | Y | Free | 3 | Y | - | Y | Y | - | Y | Y | N | N | - | N | None
WebEx | Pay Per Use | .33 per min per user | - | Y | Y | Y | Y | Y | Y | Y | Y | http://www.webex.com
Wimba | Y | N | - | Y | Y | Y | Y | Y | Y | Y | - | http://www.wimba.com
GoToMeeting | Monthly, Annual | $49, $468 | Up to 15 | Y | Y | Y | Y | Y | Y | None | http://www.gotomeeting.com
GoToMeeting Corporate | Licensed | TBD | Varies | Y | Y | Y | Y | Y | Y | None | http://www.gotomeeting.com
GoToWebinar | Monthly, Annual | $99, $948 | Up to 1000 | Y | None | http://www.gotowebinar.com
Acrobat Connect | Y | Annual, Monthly | $395/yr or $39.95/mo. | Up to 15 | Y | N | Y | Y | Y | Y | Y | N | Y | http://www.adobe.com/products/acrobatconnect/
Acrobat Connect Professional | Y | Annual, Monthly, Pay Per Use | Annual fee not available, 5-user=$375/mo, 10-user=$750/mo., Pay Per Use+.32 per min per user | More than 15 | Y | Y | Y | Y | Y | Y | Y | Y | Y | http://www.adobe.com/products/acrobatconnectpro/
Yugma | N | As needed | Basic service is free, premium rates vary by number of attendees | Up to 10 for free; Premium up to 500 | N | Y (long dist. rates apply) | Y | Y | Y* | Y* | Y* | Y* | Y
Vyew | Y | Y | Varies from free to $14/mo+ | 20-45 | Y | Y | Y | Y | Y | N | Y | Y | Y | Y | Y (w/ appliance only) | http://vyew.com
Wednesday, December 10, 2008
Data Driven
At the start of any performance improvement initiative, there is a question of whether the initiative is going to have real impact on what matters to the organization. Will the retail sales training change behavior in ways that improve customer satisfaction? Will the performance support tool provided to financial advisors increase customer loyalty? Will the employee engagement intervention provide only short-term benefit, or will it have a longer-term effect on engagement and retention?
If you want to really improve the numbers via a performance improvement initiative then you need to start and end with the data.
Using a data driven approach to performance improvement is a passion of mine. As I look back at various projects that have done this, a widely applicable model emerges for data driven performance improvement initiatives. Understanding this model is important in order to apply it in different situations to help drive behavior change that ultimately leads to improvement in metrics.
THE PROCESS AND MODEL
At its simplest, the model is based on providing metrics that suggest possible tactical interventions, support the creation of action plans to improve the metrics and track the changes in the metrics so that performers can see their progress and continually improve. Additional specifics around this model will be introduced below, but it is easiest to understand the model through an example.
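Before the example, here is a rough sketch of the model's moving parts. The naming is my own; the post describes the actual system only in prose.

```typescript
// Metric readings arrive each reporting cycle from the survey instruments.
interface MetricReading {
  store: string;
  metric: string;  // e.g. "knowledge of product location"
  period: string;  // reporting cycle, e.g. "2008-Q4"
  value: number;   // survey score
}

// An action plan ties a predefined intervention to a target metric,
// with steps on relative dates and a district-manager approval gate.
interface ActionPlanStep {
  description: string;    // e.g. "pre-shift meeting using job aid"
  dueOffsetDays: number;  // relative date within the plan
  done: boolean;
}

interface ActionPlan {
  store: string;
  targetMetric: string;
  interventionId: string; // which predefined intervention it came from
  steps: ActionPlanStep[];
  approvedByDistrictManager: boolean;
}

// Start of a cycle: flag the metrics where a store trails the benchmark -
// the ones a manager would be asked to build an action plan against.
function metricsNeedingPlans(
  readings: MetricReading[],
  benchmarks: Map<string, number>
): MetricReading[] {
  return readings.filter(r => r.value < (benchmarks.get(r.metric) ?? 0));
}
```

The example that follows shows how these pieces played out in a retail implementation.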
This system comes out of an implementation, built by my company, TechEmpower, as a custom solution for a retailer, where the focus was improving customer satisfaction in retail stores. In this situation, customer satisfaction is the key driver of profitability, same-store sales growth, and basically every metric that matters to this organization. Customer satisfaction data is collected on an ongoing basis using a variety of customer survey instruments. These surveys focus on overall customer satisfaction, intent to repurchase, and a specific set of key contributing factors. For example, one question asks whether “Associates were able to help me locate products.”
In this case, the performance improvement process began when performance analysts reviewed the customer satisfaction metrics and conducted interviews with a wide variety of practitioners, especially associates, store managers and district managers. The interviews were used to determine best practices, find interventions that had worked for store managers, understand in-store dynamics and the dynamics between store managers and district managers.
Based on the interviews, the performance analysts defined an initial set of interventions that specifically targeted key contributing behaviors closely aligned with the surveys. For example, four initial interventions were defined that would help a store manager improve the store’s scores on “knowledge of product location.” The defined interventions focused on communications, associate training opportunities, follow-up opportunities, and other elements that had been successful in other stores.
Once the interventions were defined, the custom web software system was piloted with a cross-section of stores. There was significant communication and support provided to both store managers and district managers so they could understand the system, how it worked, and how it could help them improve customer satisfaction. Because customer satisfaction data was already a primary metric with significant compensation implications, there was no need to motivate them, but there was a need to help them understand what was happening.
The system is designed to run in three-month cycles, with store managers and district managers targeting improvements for particular metrics. At the beginning of a cycle, store managers receive a satisfaction report. This report shows the numbers in a form similar to existing reports and shows comparisons with similar stores and against organizational benchmarks.
Store managers review these numbers and are then asked to come up with action plans against particular metrics where improvement is needed. To do this, the store manager clicks a link to review templates of action plans based on best practices from other stores. Each action plan consists of a series of steps that include things like pre-shift meetings/training, on-the-fly in-store follow-up, job aids for employees such as store layout guides, games, etc. Each item has a relative date that indicates when it should be completed. Managers can make notes and modify the action plan as they see fit. Once they are comfortable with their plan, they send it to the district manager for review. The district manager reviews the plan, discusses it with the store manager, suggests possible modifications, and then the store manager commits to the plan.
Once the plan is approved, the store manager is responsible for executing the plan, marking completion, making notes on issues, and providing status to the district manager. Most action plans last four to six weeks. Both the store manager and district manager receive periodic reminders of required actions. As part of these email reminders, there is subtle coaching. For example, performance analysts have determined suggested conversations that district managers should have with the store manager, or things they might try on their next store visit, associated with the particular intervention. The district manager is given these suggestions electronically based on the planned execution of the action plan. This is not shown to the store manager as part of the action plan, and it has been found to be an important part of effectively engaging the district managers to help get change to occur.
Once the store manager has marked the entire plan as completed, an assessment is sent to the store and district managers. This assessment briefly asks whether the store manager and district managers felt they were able to effectively implement the intervention and offers an important opportunity for them to provide input around the interventions. Their ratings are also critical in determining why some interventions are working or not working.
At the next reporting cycle, the system shows store managers and district managers the before and after metrics that corresponded to the timing of the action plan. We also show how their results compare with other stores who had recently executed a similar action plan.
This marks the beginning of another action plan cycle. The store managers review their customer satisfaction data and are again asked to make action plans. In most cases, we add to this cycle's action plan a series of follow-up steps to continue the changed behavior associated with the prior action plan.
If you look at what’s happening more broadly, the larger process is now able to take advantage of some very interesting data. Because we have before and after data tied to specific interventions, we have clear numbers on what impact interventions had on the metrics. For example, two interventions were designed to help store managers improve the scores around “knowledge of store layout.” One intervention used an overarching contest, with a series of shift meetings to go through content using a job aid, a series of actions by key associates that would quiz and grade other associates on their knowledge, but all encompassed within the overall fun contest. The other intervention used a series of scavenger hunts designed to teach associates product location in a fun way. Both interventions were found to have positive impact on survey scores for “knowledge of store layout.” However, one of the interventions was found to be more effective. I’m intentionally not going to tell you which, because I’m not sure we understand why nor can we generalize this. We are also trying to see if modifications will improve the other intervention to make it more effective. The bottom line is that we quickly found out what interventions were most effective. We also were able to see how modifications to the pre-defined interventions done by store managers as part of the action planning process affected the outcomes. Some modifications were found to be more effective than the pre-defined interventions, which allowed us to extract additional best practice information.
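Because each completed action plan ties one intervention to before-and-after readings of its target metric, comparing interventions reduces to simple arithmetic. A minimal sketch, with assumed field names:

```typescript
// One completed action plan: the intervention used and the target metric's
// value before the plan started and after it finished.
interface PlanOutcome {
  interventionId: string;
  before: number;
  after: number;
}

// Average metric lift per intervention across all stores that ran it.
function averageLift(outcomes: PlanOutcome[]): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const o of outcomes) {
    const s = sums.get(o.interventionId) ?? { total: 0, n: 0 };
    s.total += o.after - o.before;
    s.n += 1;
    sums.set(o.interventionId, s);
  }
  const avg = new Map<string, number>();
  for (const [id, s] of sums) avg.set(id, s.total / s.n);
  return avg;
}
```

In practice you would also want a control comparison - stores that saw the same reports without running a plan - which is exactly how the pilot described below was evaluated.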
Overall, this approach had significant impact on key metrics and helped capture and spread best practices. It also had a few surprises. In particular, we were often surprised at what was effective and what had marginal impact. We were also often surprised by tangential effects. For example, interventions aimed at improving knowledge of store layout among employees had positive impact on quite a few other factors, such as “store offered the products and services I wanted,” “products are located where I expect them,” “staff enjoys serving me,” and, to a lesser extent, several other factors. In hindsight it makes sense, but it also indicates that stores that lag in those factors can be helped by also targeting associate knowledge.
The pilot ran for nine months, three cycles of three months each. It showed significant improvement as compared to stores that had the same data reported but did not have the system in place. Of course, there were sizable variations in the effectiveness of particular interventions and also in interventions across different stores and with different district managers involved. Still, the changes in the numbers made the costs of implementing the system seem like a rounding error as compared to the effect of improvement in customer satisfaction.
The system continues to improve over time. And when we say “the system,” the software and approach have not changed much, but our understanding of how to improve satisfaction continues to get better. As we work with this system, we continually collaborate to design more and different kinds of interventions, modify or remove interventions that don’t work, and explore high-scoring stores to try to find out how they get better results.
So why was this system successful when clearly this retailer, like many other retailers, had been focused on customer satisfaction for a long time across various initiatives? In other words, this organization already provided these metrics to managers, trained and coached store managers and district managers on improving customer satisfaction, placed an emphasis on customer satisfaction via compensation, and used a variety of other techniques. Most store managers and district managers would tell you that they already were working hard to improve satisfaction in the stores. In fact, there was significant skepticism about the possibility of getting real results.
So what did this system do that was different from what they had been doing before? In some ways, it really wasn’t different from what this organization was already doing; it simply enabled the process in more effective ways and gave visibility into what was really happening, so that we could push the things that worked and get rid of what didn’t work. In particular, it addresses gaps that are common in many organizations.
ADDITIONAL DOMAINS
Data driven performance improvement systems have been used across many different types of organizations, different audiences, and different metrics. Further, there are a variety of different types of systems that support similar models and processes.
Several call center software providers use systems that are very similar to this approach. You’ll often hear a call center tell you, “This call may be monitored for quality purposes.” That message tells you that the call center is recording calls so that quality monitoring evaluations can be done on each agent each month. The agent is judged on various criteria such as structure of the call, product knowledge, use of script or verbiage, and interaction skills. The agent is also being evaluated based on other metrics such as time on the call, time to resolution, number of contacts to resolve, etc. Most of these metrics and techniques are well established in call centers.
Verint, a leading call center software provider, uses these metrics in a process very similar to the retail example described above. Supervisors evaluate an agent’s performance based on these metrics and then can define a series of knowledge- or skill-based learning or coaching steps. For example, they might assign a particular eLearning module that would be provided to the agent at an appropriate time based on the workforce management system. The agent takes the course, which includes a test to ensure understanding of the material. At this point the Verint system ensures that additional calls from this agent are recorded so the supervisor can evaluate whether the agent has improved in the specific area.
In addition to specific agent skills, the Verint system is also used to track broader trends and issues. Because you get before and after metrics, you have visibility into changes in performance based on particular eLearning modules.
Oscar Alban, a Principal and Global Market Consultant at Verint, says: “Many companies are now taking these practices into the enterprise. The area where we see this happening is the back office, where agents are doing a lot of data entry–type work. The same way contact center agents are evaluated on how well they interact with customers, back office agents are evaluated on the quality of the work they are performing. For example, if back-office agents are inputting loan application information, they are judged on the number of errors and the correct use of the online systems they must use. If they are found to have deficiencies in any area, then they are coached or are required to take an online training course in order to improve.” Verint believes this model applies to many performance needs within the enterprise.
Gallup uses a similar approach, but targeted at employee engagement. Gallup collects initial employee engagement numbers using a simple 12-question survey called the Q12. These numbers are rolled-up to aggregate engagement for managers based on the survey responses of all direct and indirect reports. The roll-up also accounts for engagement scores for particular divisions, job functions, and other slices. Gallup provides comparison across the organization based on demographics supplied by the company and also with other organizations that have used the instrument. This gives good visibility into engagement throughout the organization.
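As a hedged illustration of the roll-up idea (Gallup's actual Q12 scoring, top-box percentages, and benchmarking are more involved; the shapes below are invented), aggregating engagement over a manager's direct and indirect reports might look like:

```typescript
type OrgChart = Map<string, string[]>; // managerId -> ids of direct reports

// Collect direct and indirect reports (assumes an acyclic org chart).
function allReports(org: OrgChart, managerId: string): string[] {
  const direct = org.get(managerId) ?? [];
  return direct.concat(...direct.map(d => allReports(org, d)));
}

// responses: employeeId -> that employee's average across the 12 questions.
function engagementFor(
  org: OrgChart,
  responses: Map<string, number>,
  managerId: string
): number {
  const scores = allReports(org, managerId)
    .map(id => responses.get(id))
    .filter((s): s is number => s !== undefined);
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```

The same aggregation can be sliced by division or job function, which is what makes the year-over-year comparisons described next possible.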
Gallup also provides a structure for action planning and feedback sessions that are designed to help managers improve engagement. Gallup generally administers the surveys annually. This allows them to show year-over-year impact of different interventions. For example, they can compare the engagement scores and change in engagement scores for managers whose subordinates rated their manager’s feedback sessions in the top two boxes (highest ratings) compared with managers who did not hold feedback sessions or whose feedback session was not rated highly. Not surprisingly, engagement scores consistently have a positive correlation with effective feedback sessions.
There are many examples beyond the three cited here. Just based on these examples, it is clear that this same model can apply to a wide variety of industries, job functions, and metrics. Metrics can come from a variety of existing data sources such as product sales numbers, pipeline activity, customer satisfaction, customer loyalty, evaluations, etc. Metrics can also come from new sources, as in the case of Gallup, where a new survey is used to derive the basis for interventions. These might be measures of employee satisfaction, employee engagement, skills assessments, best practice behavior assessments, or other performance assessments. In general, using existing business metrics will have the most impact and often has the advantage of organizational alignment around those metrics; for example, compensation is often aligned with existing metrics. Using metrics that are new to the organization will, at a minimum, require communicating the connection between those numbers and the bottom line.
COMMON CHALLENGES
When you implement this kind of solution, there are a variety of common challenges that are encountered.
Right Metrics Collected
As stated above, there are a wide variety of possible metrics that can be tied to particular performance interventions. However, in the case that metrics don’t exist or are not being collected, then additional work is required not only to gather the input metrics, but to convince the organization that these are the right metrics. Assessments and intermediate factors can and often should be used, but they must be believed and have real impact for all involved.
Slow-Changing Data and Slow Collection Intervals
Many metrics change slowly and may not be collected often enough for you to have immediate visibility into the impact. In these cases, we’ve used various data points as proxies for the key metrics. For example, if customer loyalty is the ultimate metric, you should likely focus on intermediate factors that you know contribute to loyalty, such as recency and frequency of contact, customer satisfaction, and employee knowledge. For metrics where you only have annual cycles, you may want to focus on a series of interventions over the year. Alternatively, you may want to define targeted follow-up assessments to determine how performance has changed.
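A small sketch of the proxy idea. The factor names and weights below are invented for illustration; in practice they would come from survey research linking intermediate factors to loyalty:

```typescript
// Faster-moving contributors to a slow-moving metric (customer loyalty),
// each normalized to 0-1. All names and weights here are hypothetical.
interface AdvisorSnapshot {
  contactRecency: number;   // higher = more recent client contact
  contactFrequency: number;
  satisfaction: number;
  employeeKnowledge: number;
}

// Weighted blend used as an interim proxy between annual loyalty readings.
function loyaltyProxy(s: AdvisorSnapshot): number {
  return 0.3 * s.contactRecency
       + 0.2 * s.contactFrequency
       + 0.35 * s.satisfaction
       + 0.15 * s.employeeKnowledge;
}
```

The proxy is only as good as the research behind the weights, but it gives managers something that moves within a single intervention cycle.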
Data Not Tied to Specific Performance/Behavior
Customer loyalty is again a good example of this challenge. Knowing that you are not performing well on customer loyalty does not provide enough detail to know what interventions are needed. In the case of customer satisfaction at the store level, the survey questions asked about specific performance, skills or knowledge you expected of the store employees – “Were they able to direct you to products?” or “Were they knowledgeable of product location in the store?” Poor scores on these questions suggest specific performance interventions.
In the case of customer loyalty, you need to look at the wide variety of performance / behaviors that collectively contribute to customer loyalty and define metrics that link to those behaviors. In a financial advisor scenario, we’ve seen this attacked by looking at metrics such as frequency of contact, customer satisfaction, products sold, employee satisfaction. With appropriate survey researchers involved, you often will gain insights over time into how these behavior-based numbers relate to customer loyalty. But, the bottom line is that you likely need additional assessment instruments that can arrive at more actionable metrics.
CONCLUSIONS
The real beauty of a data driven model for performance improvement is that it focuses on key elements of behavior change within a proven framework. More specifically, it directs actions that align with metrics that are already understood and important. It helps ensure commitment to action. It provides critical communication support, for example helping district managers communicate effectively with store managers around metrics and what they are doing. In helps hold the people involved accountable to each other and to taking action in a meaningful way. And, the system ties interventions to key metrics for continuous improvement.
One of the interesting experiences in working on these types of solutions is that it’s not always obvious what interventions will work. In many cases, we were surprised when certain interventions had significant impact and other similar interventions did not. Sometimes we would ultimately trace it back to problems that managers encountered during the implementation of the intervention that we had not anticipated. In other words, it sounded good on paper, but ultimately it really didn’t work for managers. For example, several of the games or contests we designed didn’t work out as anticipated. Managers quickly found interest faded quickly and small external rewards didn’t necessarily motivate associates. Interestingly, other games or contests worked quite well. This provide real opportunity to modify or substitute interventions. We also found ourselves modifying interventions based on the feedback of managers who had good and bad results from their implementation.
The other surprise was that very simple interventions would many times be the most effective. Providing a manager with a well-structured series of simple steps, such as what we refer to a “meeting in-a-box” and “follow-up in-a-box” would often turn out to have very good results. These interventions were provided as web pages, documents, templates, etc. that the manager could use and modify for their purposes. There was, of course, lots of guidance in how to use these resources effectively as part of the intervention. In many cases, the interventions were based on information and documents that was being used in some stores but not widely recognized or adopted. Because of the system, we then were able to use similar interventions in other cases. But, because practicality of interventions is paramount, we still had challenges with the design of those interventions.
Of course, this points to the real power of this approach. By having a means to understand what interventions work and don’t work, and having a means to get interventions out into the organization, we have a way of really making a difference. Obviously, starting and ending with the data is the key.
In 2009, I'm hoping that I will get to work on a lot more data driven performance improvement projects.
If you want to really improve the numbers via a performance improvement initiative, then you need to start and end with the data.
Using a data driven approach to performance improvement is a passion of mine. As I look back at various projects that have done this, a widely applicable model emerges for data driven performance improvement initiatives. Understanding this model makes it possible to apply it in different situations and drive the behavior change that ultimately leads to improvement in metrics.
THE PROCESS AND MODEL
At its simplest, the model is based on providing metrics that suggest possible tactical interventions, supporting the creation of action plans to improve those metrics, and tracking the changes in the metrics so that performers can see their progress and continually improve. Additional specifics around this model will be introduced below, but it is easiest to understand the model through an example.
This system comes out of an implementation, built by my company, TechEmpower, as a custom solution for a retailer, where the focus was improving customer satisfaction in retail stores. In this situation, customer satisfaction is the key driver of profitability, same store sales growth, and basically every metric that matters to this organization. Customer satisfaction data is collected on an ongoing basis using a variety of customer survey instruments. These surveys focus on overall customer satisfaction, intent to repurchase, and a specific set of key contributing factors. For example, one question asks whether “Associates were able to help me locate products.”
In this case, the performance improvement process began when performance analysts reviewed the customer satisfaction metrics and conducted interviews with a wide variety of practitioners, especially associates, store managers, and district managers. The interviews were used to determine best practices, find interventions that had worked for store managers, and understand both in-store dynamics and the dynamics between store managers and district managers.
Based on the interviews, the performance analysts defined an initial set of interventions that specifically targeted key contributing behaviors closely aligned with the surveys. For example, four initial interventions were defined to help a store manager improve the store’s scores on “knowledge of product location.” The defined interventions focused on communications, associate training opportunities, follow-up opportunities, and other elements that had been successful in other stores.
Once the interventions were defined, the custom web software system illustrated in the figure above was piloted with a cross-section of stores. Significant communication and support were provided to both store managers and district managers so they could understand the system, how it worked, and how it could help them improve customer satisfaction. Because customer satisfaction data was already a primary metric with significant compensation implications, there was no need to motivate them, but there was a need to help them understand what was happening.
The system is designed to run in three-month cycles, with store managers and district managers targeting improvements on particular metrics. At the beginning of each cycle, store managers receive a satisfaction report. This report shows the numbers in a form similar to existing reports and includes comparisons with similar stores and against organizational benchmarks.
Store managers review these numbers and are then asked to come up with action plans for particular metrics where improvement is needed. To do this, the store manager clicks a link to review action plan templates based on best practices from other stores. Each action plan consists of a series of steps that include things like pre-shift meetings/training, on-the-fly in-store follow-up, job aids for employees such as store layout guides, games, etc. Each item has a relative date that indicates when it should be completed. Managers can make notes and modify the action plan as they see fit. Once they are comfortable with their plan, they send it to the district manager for review. The district manager reviews the plan, discusses it with the store manager, and suggests possible modifications, and then the store manager commits to the plan.
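For the technically inclined, here is a minimal sketch of how an action plan like this might be modeled. The field names, the days_offset convention for relative dates, and the status values are my own illustrative assumptions, not the actual system's schema.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class PlanStep:
    """One step in an action plan, e.g. a pre-shift meeting or a job aid rollout."""
    description: str
    days_offset: int       # relative date: days after the plan starts
    completed: bool = False
    notes: str = ""

    def due_date(self, plan_start: date) -> date:
        return plan_start + timedelta(days=self.days_offset)

@dataclass
class ActionPlan:
    """An action plan a store manager builds from a best-practice template."""
    metric: str            # targeted metric, e.g. "knowledge of product location"
    template_name: str     # the best-practice template the plan started from
    start: date
    steps: list[PlanStep] = field(default_factory=list)
    status: str = "draft"  # draft -> under_review -> committed -> completed

    def submit_for_review(self) -> None:
        """Store manager sends the (possibly modified) plan to the district manager."""
        self.status = "under_review"

    def commit(self) -> None:
        """Called once the district manager has reviewed and discussed the plan."""
        self.status = "committed"

    def mark_complete(self) -> None:
        """The plan completes only when every step has been marked done."""
        if all(s.completed for s in self.steps):
            self.status = "completed"
```

The key design point is that the template supplies the steps and relative dates, while the manager's modifications and notes are preserved for later analysis.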
Once the plan is approved, the store manager is responsible for executing the plan, marking completion, making notes on issues, and providing status to the district manager. Most action plans last four to six weeks. Both the store manager and district manager receive periodic reminders of required actions. As part of these email reminders, there is subtle coaching. For example, performance analysts have determined suggested conversations that district managers should have with the store manager, or things they might try on their next store visit, associated with the particular intervention. The district manager is given these suggestions electronically based on the planned execution of the action plan. This is not shown to the store manager as part of the action plan, and it has been found to be an important part of effectively engaging the district managers in helping change occur.
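The reminder mechanics can be sketched the same way, continuing the hypothetical model above. The three-day look-ahead window and the coaching text are likewise invented for illustration:

```python
from datetime import date, timedelta

# Invented coaching suggestions keyed by template; shown only to district managers.
DM_COACHING = {
    "layout contest": "On your next store visit, ask how associates reacted to the contest kickoff.",
}

def reminders_for(plan: ActionPlan, today: date, role: str) -> list[str]:
    """Steps due within the next three days, with coaching attached for the DM."""
    msgs = []
    for step in plan.steps:
        due = step.due_date(plan.start)
        if not step.completed and today <= due <= today + timedelta(days=3):
            msg = f"Due {due}: {step.description}"
            if role == "district_manager" and plan.template_name in DM_COACHING:
                msg += " | Coaching: " + DM_COACHING[plan.template_name]
            msgs.append(msg)
    return msgs

# Example: the district manager sees the step plus the private coaching note.
plan = ActionPlan(metric="knowledge of product location",
                  template_name="layout contest", start=date(2008, 12, 1),
                  steps=[PlanStep("Pre-shift meeting on store layout", 2)])
print(reminders_for(plan, today=date(2008, 12, 2), role="district_manager"))
```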
Once the store manager has marked the entire plan as completed, an assessment is sent to the store manager and district manager. This assessment briefly asks whether they felt they were able to effectively implement the intervention and offers an important opportunity for them to provide input on the interventions. Their ratings are also critical in determining why some interventions are or are not working.
At the next reporting cycle, the system shows store managers and district managers the before and after metrics corresponding to the timing of the action plan. We also show how their results compare with other stores that recently executed a similar action plan.
This marks the beginning of another action plan cycle. The store managers review their customer satisfaction data and are again asked to make action plans. In most cases, we add to this cycle’s action plan a series of follow-up steps to continue the changed behavior associated with the prior action plan.
If you look at what’s happening more broadly, the larger process is now able to take advantage of some very interesting data. Because we have before and after data tied to specific interventions, we have clear numbers on the impact interventions had on the metrics. For example, two interventions were designed to help store managers improve the scores around “knowledge of store layout.” One intervention used an overarching contest, with a series of shift meetings to go through content using a job aid and a series of actions by key associates who would quiz and grade other associates on their knowledge, all encompassed within the overall fun contest. The other intervention used a series of scavenger hunts designed to teach associates product location in a fun way. Both interventions were found to have a positive impact on survey scores for “knowledge of store layout.” However, one of the interventions was found to be more effective. I’m intentionally not going to tell you which, because I’m not sure we understand why, nor can we generalize this. We are also trying to see if modifications will make the other intervention more effective. The bottom line is that we quickly found out which interventions were most effective. We also were able to see how modifications to the pre-defined interventions, made by store managers as part of the action planning process, affected the outcomes. Some modifications were found to be more effective than the pre-defined interventions, which allowed us to extract additional best practice information.
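Because every executed plan carries its intervention name and the before/after readings, the effectiveness comparison boils down to a simple aggregation. A minimal sketch; the record fields and all the numbers are made up:

```python
from collections import defaultdict
from statistics import mean

# Each record ties one executed action plan to its before/after metric scores.
# Field names and values are invented for illustration.
records = [
    {"intervention": "layout contest", "before": 3.4, "after": 3.9},
    {"intervention": "scavenger hunt", "before": 3.5, "after": 3.8},
    {"intervention": "layout contest", "before": 3.2, "after": 3.8},
    {"intervention": "scavenger hunt", "before": 3.6, "after": 3.7},
]

def impact_by_intervention(records: list[dict]) -> dict[str, float]:
    """Average before-to-after change in the metric, grouped by intervention."""
    deltas = defaultdict(list)
    for r in records:
        deltas[r["intervention"]].append(r["after"] - r["before"])
    return {name: round(mean(ds), 2) for name, ds in deltas.items()}

print(impact_by_intervention(records))
# {'layout contest': 0.55, 'scavenger hunt': 0.2}
```

In practice you would also want per-store baselines and a comparison group, but even this simple delta is enough to surface which interventions are pulling their weight.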
Overall, this approach had a significant impact on key metrics and helped capture and spread best practices. It also had a few surprises. In particular, we were often surprised at what was effective and what had marginal impact. We were also often surprised by tangential effects. For example, interventions aimed at improving knowledge of store layout among employees had a positive impact on quite a few other factors such as “store offered the products and services I wanted,” “products are located where I expect them,” “staff enjoys serving me,” and, to a lesser extent, several other factors. In hindsight it makes sense, but it also indicates that stores that lag on those factors can be helped by also targeting associate knowledge.
The pilot ran for nine months: three cycles of three months each. It showed significant improvement compared to stores that received the same data reports but did not have the system in place. Of course, there were sizable variations in the effectiveness of particular interventions, and in interventions across different stores and with different district managers involved. Still, the changes in the numbers made the costs of implementing the system seem like a rounding error compared to the effect of improvement in customer satisfaction.
The system continues to improve over time. And when we say “the system,” the software and approach have not changed much, but our understanding of how to improve satisfaction continues to get better. As we work with this system, we continually collaborate to design more and different kinds of interventions, modify or remove interventions that don’t work, and explore high scoring stores to try to find out how they get better results.
So why was this system successful when clearly this retailer, like many other retailers, had been focused on customer satisfaction for a long time across various initiatives? In other words, this organization already provided these metrics to managers, trained and coached store managers and district managers on improving customer satisfaction, placed an emphasis on customer satisfaction via compensation, and used a variety of other techniques. Most store managers and district managers would tell you that they already were working hard to improve satisfaction in the stores. In fact, there was significant skepticism about the possibility of getting real results.
So what did this system do that was different from what they had been doing before? In some ways, it really wasn’t different from what this organization was already doing; it simply enabled the process in more effective ways and gave visibility into what was really happening so that we could push the things that worked and get rid of what didn’t. In particular, if you look at the system, it addresses gaps that are common in many organizations:
- Delivers best practices from across the organization at the time and point of need
- Provides metrics in conjunction with practical, actionable suggestions
- Enables and supports appropriate interaction in manager-subordinate relationships that ensures communication and builds skills in both parties
- Tracks the effectiveness of interventions, forming a continuous improvement cycle that determines which best practices most effectively improve satisfaction
ADDITIONAL DOMAINS
Data driven performance improvement systems have been used across many different types of organizations, different audiences, and different metrics. Further, there are a variety of different types of systems that support similar models and processes.
Several call center software providers use systems that are very similar to this approach. You’ll often hear a call center tell you, “This call may be monitored for quality purposes.” That message tells you that the call center is recording calls so that quality monitoring evaluations can be done on each agent each month. The agent is judged on various criteria such as structure of the call, product knowledge, use of script or verbiage, and interaction skills. The agent is also being evaluated based on other metrics such as time on the call, time to resolution, number of contacts to resolve, etc. Most of these metrics and techniques are well established in call centers.
Verint, a leading call center software provider, uses these metrics in a process very similar to the retail example described above. Supervisors evaluate an agent’s performance based on these metrics and can then define a series of knowledge- or skill-based learning or coaching steps. For example, they might assign a particular eLearning module to the agent, delivered at an appropriate time based on the workforce management system. The agent takes the course, which includes a test to ensure understanding of the material. At this point the Verint system ensures that additional calls from this agent are recorded so that the supervisor can evaluate whether the agent has improved in the specific area.
In addition to specific agent skills, the Verint system is also used to track broader trends and issues. Because you get before and after metrics, you have visibility into changes in performance based on particular eLearning modules.
Oscar Alban, a Principal and Global Market Consultant at Verint, says: “Many companies are now taking these practices into the enterprise. The area where we see this happening is the back office, where agents are doing a lot of data entry–type work. The same way contact center agents are evaluated on how well they interact with customers, back office agents are evaluated on the quality of the work they are performing. For example, if back-office agents are inputting loan application information, they are judged on the number of errors and the correct use of the online systems they must use. If they are found to have deficiencies in any area, then they are coached or are required to take an online training course in order to improve.” Verint believes this model applies to many performance needs within the enterprise.
Gallup uses a similar approach, but targeted at employee engagement. Gallup collects initial employee engagement numbers using a simple 12-question survey called the Q12. These numbers are rolled up to aggregate engagement for managers, based on the survey responses of all direct and indirect reports. The roll-up also produces engagement scores for particular divisions, job functions, and other slices. Gallup provides comparisons across the organization based on demographics supplied by the company, and also with other organizations that have used the instrument. This gives good visibility into engagement throughout the organization.
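A roll-up like this is essentially a tree aggregation over the reporting structure. Here is a minimal sketch, assuming an invented org chart and a single composite engagement number per surveyed employee (Gallup's actual scoring is more involved):

```python
from statistics import mean

# Invented data: per-employee engagement composite (say, on a 1-5 scale).
engagement = {"ann": 4.2, "bo": 3.1, "cy": 3.8, "dee": 4.5}
# Invented reporting lines: manager -> direct reports (people or other managers).
reports = {"vp": ["mgr1", "mgr2"], "mgr1": ["ann", "bo"], "mgr2": ["cy", "dee"]}

def all_reports(manager: str) -> list[str]:
    """Every direct and indirect report under a manager."""
    out = []
    for r in reports.get(manager, []):
        out.append(r)
        out.extend(all_reports(r))  # recurse in case r manages people too
    return out

def rolled_up_engagement(manager: str) -> float:
    """Mean engagement over every surveyed person below the manager."""
    scores = [engagement[p] for p in all_reports(manager) if p in engagement]
    return round(mean(scores), 2)

print(rolled_up_engagement("mgr1"))  # 3.65
print(rolled_up_engagement("vp"))    # 3.9
```

The same aggregation then supports the comparisons described above: slice `all_reports` by division or job function instead of by manager.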
Gallup also provides a structure for action planning and feedback sessions that are designed to help managers improve engagement. Gallup generally administers the surveys annually. This allows them to show year-over-year impact of different interventions. For example, they can compare the engagement scores and change in engagement scores for managers whose subordinates rated their manager’s feedback sessions in the top two boxes (highest ratings) compared with managers who did not hold feedback sessions or whose feedback session was not rated highly. Not surprisingly, engagement scores consistently have a positive correlation with effective feedback sessions.
There are many examples beyond the three cited here. Just based on these examples, it is clear that this same model can apply to a wide variety of industries, job functions, and metrics. Metrics can come from a variety of existing data sources such as product sales numbers, pipeline activity, customer satisfaction, customer loyalty, evaluations, etc. Metrics can also come from new sources, as in the case of Gallup, where a new survey is used to derive the basis for interventions. These might be measures of employee satisfaction, employee engagement, skills assessments, best practice behavior assessments, or other performance assessments. In general, using existing business metrics will have the most impact and often has the advantage of existing alignment within the organization around those metrics; for example, compensation is often tied to existing metrics. Using metrics that are new to the organization will, at a minimum, require communicating the connection between those numbers and the bottom line.
COMMON CHALLENGES
When you implement this kind of solution, you will encounter a variety of common challenges.
Collecting the Right Metrics
As stated above, there is a wide variety of possible metrics that can be tied to particular performance interventions. However, if metrics don’t exist or are not being collected, additional work is required, not only to gather the input metrics, but to convince the organization that these are the right metrics. Assessments and intermediate factors can, and often should, be used, but they must be believed and have real impact for everyone involved.
Slow-Changing Data and Slow Collection Intervals
Many metrics change slowly and may not be collected often enough to give you immediate visibility into impact. In these cases, we’ve used various data points as proxies for the key metrics. For example, if customer loyalty is the ultimate metric, you should likely focus on intermediate factors that you know contribute to loyalty, such as recency and frequency of contact, customer satisfaction, and employee knowledge. For metrics that only have annual cycles, you may want to focus on a series of interventions over the year. Alternatively, you may want to define targeted follow-up assessments to determine how performance has changed.
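One way to operationalize a proxy is a weighted composite of the faster-moving intermediate factors, tracked between readings of the slow metric. A sketch; the factors, weights, and scores below are all invented for illustration:

```python
# Invented weights over intermediate factors believed to drive customer loyalty.
WEIGHTS = {"contact_recency": 0.30, "contact_frequency": 0.20,
           "customer_satisfaction": 0.35, "employee_knowledge": 0.15}

def loyalty_proxy(factors: dict[str, float]) -> float:
    """Weighted composite of intermediate factors, each normalized to 0-1.

    Serves as a between-cycles stand-in for the slow-moving loyalty metric;
    the weights should be re-fit whenever the real metric is next collected.
    """
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# Example: one quarter's normalized factor scores for a single advisor.
print(round(loyalty_proxy({"contact_recency": 0.8, "contact_frequency": 0.6,
                           "customer_satisfaction": 0.7,
                           "employee_knowledge": 0.9}), 2))
# 0.74
```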
Data Not Tied to Specific Performance/Behavior
Customer loyalty is again a good example of this challenge. Knowing that you are not performing well on customer loyalty does not provide enough detail to know what interventions are needed. In the case of customer satisfaction at the store level, the survey questions asked about specific performance, skills, or knowledge expected of the store employees – “Were they able to direct you to products?” or “Were they knowledgeable of product location in the store?” Poor scores on these questions suggest specific performance interventions.
In the case of customer loyalty, you need to look at the wide variety of performance/behaviors that collectively contribute to customer loyalty and define metrics that link to those behaviors. In a financial advisor scenario, we’ve seen this attacked by looking at metrics such as frequency of contact, customer satisfaction, products sold, and employee satisfaction. With appropriate survey researchers involved, you will often gain insights over time into how these behavior-based numbers relate to customer loyalty. But the bottom line is that you likely need additional assessment instruments to arrive at more actionable metrics.
CONCLUSIONS
The real beauty of a data driven model for performance improvement is that it focuses on key elements of behavior change within a proven framework. More specifically, it directs actions that align with metrics that are already understood and important. It helps ensure commitment to action. It provides critical communication support, for example, helping district managers communicate effectively with store managers around the metrics and what they are doing. It helps hold the people involved accountable to each other and to taking action in a meaningful way. And the system ties interventions to key metrics for continuous improvement.
One of the interesting experiences in working on these types of solutions is that it’s not always obvious what interventions will work. In many cases, we were surprised when certain interventions had significant impact and other similar interventions did not. Sometimes we would ultimately trace it back to problems that managers encountered during the implementation of the intervention that we had not anticipated. In other words, it sounded good on paper, but ultimately it really didn’t work for managers. For example, several of the games or contests we designed didn’t work out as anticipated. Managers found that interest faded quickly and that small external rewards didn’t necessarily motivate associates. Interestingly, other games or contests worked quite well. This provided a real opportunity to modify or substitute interventions. We also found ourselves modifying interventions based on the feedback of managers who had good and bad results from their implementation.
The other surprise was that very simple interventions were many times the most effective. Providing a manager with a well-structured series of simple steps, such as what we refer to as a “meeting in-a-box” or “follow-up in-a-box,” would often turn out to have very good results. These interventions were provided as web pages, documents, templates, etc. that the manager could use and modify for their purposes. There was, of course, lots of guidance on how to use these resources effectively as part of the intervention. In many cases, the interventions were based on information and documents that were being used in some stores but not widely recognized or adopted. Because of the system, we were then able to use similar interventions in other cases. But, because the practicality of interventions is paramount, we still had challenges with the design of those interventions.
Of course, this points to the real power of this approach. By having a means to understand what interventions work and don’t work, and having a means to get interventions out into the organization, we have a way of really making a difference. Obviously, starting and ending with the data is the key.
In 2009, I'm hoping that I will get to work on a lot more data driven performance improvement projects.
New Blog
Ingrid O'Sullivan has a new blog and is the first person to take me up on my post 100 Conversation Topics, which asks people to start a conversation with me and get aggregated into 100 conversations. Good for you, Ingrid!
Sidenote: I feel a little behind having just seen that Third Force, the company Ingrid works for, actually acquired MindLeaders back in June 2007 and looks to be a fairly serious player. Normally, I'm pretty familiar with companies in the space, but I was not familiar with them. So, it was good for me to at least get them on my radar.
Ingrid's post tells a bit of a story that is likely familiar to other authors of relatively new blogs. Ingrid tells us that among her hardest challenges is deciding what to write in the blog ...
I’m pretty new to blogging [...] I really want this blog to grow, to be of interest to you our readers and provide relevant information to you. And boy is that hard… at least twice a week I am faced with the task of getting something ready to post. I question what I write - how personal should it be, if it’s too technical will it bore you, is it original and new, am I at all amusing or funny – this list goes on. And I think half the problem is because this is such a new blog, we are still discovering who you the readers are, and looking for feedback on what you want. I’m hoping as I gain more experience, have more “conversations” and learn from the likes of Tony, this will no longer be my hardest ongoing task - but for now dear readers please read with patience.
I think it's likely the case with a new blog that your first couple of posts come out quite easily, and then you find yourself wondering what to write about later. There are likely some great posts out there that chronicle the lifecycle of new blogs as they go through this early growing challenge. Take a look at what Janet Clarey had to say after her first 100 days - Debriefing myself…a noob’s experience after 100-ish days of blogging. I'm sure there are other good examples of this lifecycle out there - pointers?
Some quick thoughts as I read the post on her new blog ...
- You are right that trying to figure out the audience is helpful for any new blog. What kinds of questions do they have? Hopefully my list of topics helps. At least those are some of my questions and likely some questions that other people have as well.
- I think it's easier to write posts when you are writing almost as much for your own learning as for "the audience." I personally don't ever think of "audience" or "readers" - many of whom I don't know. Instead, I think about people I do know who read this, and I somewhat have a conversation with them. But the bottom line is: if you are interested in something, it will be interesting to the audience.
- Your past posts are definitely interesting. I personally would get more out of them if you went a bit deeper on your topics. What are the challenges with being funny? Personal? Etc.? What was a specific example of where you were challenged to find a topic? Is this something that you think other bloggers face (they do)? Can you point me to some examples of that? These would have been more interesting conversations for me, and likely for other bloggers and your readers as well. A blog offers the opportunity to go deep and narrow. Oh, and I will skip a post (as will other readers) if it's not relevant. But I think the bigger risk is never going deep enough.
- Don't get too caught up in Measuring Blog Success. Your goal should be to have interesting conversations. Results will follow.
- Have you participated in a Learning Circuit's Big Question? This is a great way to get exposure to the blogging community and grow your audience.
- As you are writing a corporate blog, you have to walk a fine line. It's far more difficult than writing a personal blog outside the confines of a corporation. I would recommend staying away from promoting Third Force explicitly in your posts. You'll notice what I deleted above when I cut and pasted. The extra stuff was not needed and was a bit too promotional. You'll get the message across without that kind of thing, but you will turn off some people with it. So, it's far safer to avoid it.
- Make sure you periodically engage with other bloggers around their posts. Oh, you just did. Well done. :)
- Take a look at Blog Discussion for some ideas on other ways to spark discussion.
Ack, someone help me here. First, I Push People to Blog and then I critique them. That's not good. What should I have said to the writer of a new blog that would have been much more encouraging?
And anything else that would help Ingrid? I'm sure there are some other thoughts from other bloggers out there.
Tuesday, December 09, 2008
Training Design
I've been struggling a bit to capture a concept that I believe represents a fairly fundamental shift in how we need to think about Training Design.
Back in 2005, 2006 and 2007, I would regularly show the following slides to help explain the heart of what Training Design is all about and how it has changed over the years. Oh, and I called it Learning Design in the diagrams, but I'm afraid that it's really more about Training Design.
Basically, we conduct an analysis (sometimes extensive, often very quick) to determine what we are really trying to accomplish. We take into account a wide variety of considerations. And we consult our delivery model options to do this fuzzy thing - Training Design. Back in 1987, the dominant tool was classroom delivery and thus, we primarily created training and train-the-trainer materials. We kept these in notebooks which adorn many shelves today (but are getting rather dusty).
(And yes, I know this is a gross oversimplification, but it gets the point across.)
Ten years later, life was good because we had another Training Method available: the CD-ROM, allowing us to train individuals.
Yes, we theoretically had this back in 1987 with paper-based materials, but we looked at the CD as a substitute for classroom instruction.
In 2007, we suddenly had a whole bunch of different delivery models: virtual classroom, web-based training (WBT), rapidly created eLearning, and lots of online reference tools such as help, cheat sheets, and online manuals. We also had discussion forums and ongoing office hours.
In many cases, this makes our final delivery pattern much more complex, but it greatly reduces the time required upfront by learners and allows us to get them information much more just-in-time and with more appropriate costs.
However, when you look at these models, the design is roughly the same. Maybe this more appropriately would be called Learning Design - or eLearning Design - or maybe something else that implies performance support as well.
Now the interesting part ... the heart of the picture - realistically, how we approach training design - was the same in 2007 as it was in 1987.
My sense is that we may need a new picture because of eLearning 2.0.
Yes, you can think of Blogs, Wikis, etc. as a means of enriching the Training Design, much the same as a discussion group alongside formal instruction. When Harold, Michele and I worked together to design the Web 2.0 for Learning Professionals Course, we settled on using Ning and its various capabilities as part of the delivery pattern. This is pretty much the same picture as above.
However, what about the case when you are providing tools and really don't have the content defined ahead of time? How about when you build skills around scanning via RSS, social bookmarking, reaching into networks for expertise, etc.? What about when you help individuals adopt blogging as a learning practice? When you support informal / self-directed / workgroup learning? Is it the same picture?
Maybe it is? Maybe we conduct a similar performance analysis and take into account similar considerations and then provide appropriate structure (delivery pattern). Maybe we are providing a Wiki and conducting a barn raising session?
My sense is that there's something different about it. But I'm so used to having this as my mental model that I'm having a hard time figuring out what the alternative is.
Monday, December 08, 2008
100 Conversation Topics
Today, I saw a post by someone suggesting ways to come up with ideas for blog post topics and they gave some examples. The examples were not all that relevant to most of the readers of this blog, but it definitely sparked a thought for me.
Almost every time I have a conversation, I learn something new. Most of the time when I learn something, I write a blog post. But I don't have nearly enough time to have conversations, learn, and write blog posts. So now that some people have called me influential, I'm hoping that I can leverage my influence to inspire people to have a conversation with me and help me with my lack of time.
So, here are my suggested 100 conversation topics that I wish I had time to speak to you (yes YOU) about. And since I’m sure I’d learn something, I’d likely write up a post about it. But since I don’t have time for either ...
I’m hoping you will just pretend we had the conversation and write a summary of the conversation we had.
If you are a blogger, then posting the conversation is great. Point me to it by putting a link to 100 Conversations and the exact text "100 Conversations" in your post, and I'll find it via blog search. Also, please include terms and a link that will help the post get put into the appropriate categories in the eLearning Learning Community. I've included some examples below (I got tired after a while), and I'm hoping that you will include terms / links in your post that will help it get categorized, using some of the examples I've included. You can also point me to it via a comment.
I’ll also try to make sure that readers of this blog see it via blog posts. My goal is to make sure that I use this as an opportunity to have a more meaningful conversation with you.
If you are not a blogger then start a blog so we can have this conversation. It's a great start to your blog. And likely I can get you some initial traffic.
If you aren’t willing to start a blog, then send it to me via email, let me know if it can be public, and if so, I can see if it would work for me to post it somewhere.
This is a bit of an experiment, so please bear with me if I’m slow or don’t quite have it all figured out. And please follow the specific instructions above (about links and categories) to make sure that this works out. Oh, and if this is a really bad idea, or there's a better way to do it, or whatever, then maybe that would be good to have a conversation about.
Important - please keep in mind that the audience here is learning professionals involved in the use of technology for learning. So, please write the conversation for me and for them. Here are my 100 conversation topics …
- Here’s the eLearning Authoring Tool we chose to use and approach we used to evaluate and decide. And the major decision criteria that really differentiated for us was…
- Here are the surprises we found after we chose our eLearning Authoring Tool ….
- Here’s my eLearning Authoring Method or Trick …
- Topics 1, 2, 3 for LMS, LCMS, Audio, Virtual Classroom Tool, Screencast, Wiki, eLearning Game Tool, etc.
- An eLearning Activity I created or Interactivity I added to an online course that I thought was a good idea.
- Here’s how I use Facebook for personal learning
- How I use Twitter for personal learning
- How I use Blogging for personal learning
- Where I believe social media can be adopted by learners in my organization.
- A plan for adopting social media as a learning tool in our organization.
- Where we have adopted social media as a learning tool in our organization. What our experience has been so far? What we’ve learned so far?
- My thoughts on the ROI of eLearning 2.0.
- The problems with eLearning 2.0 in my organization.
- How I found an answer to a work problem using a learning community.
- A search method I use that I don’t think a lot of other people use.
- Five presentations related to eLearning that learning professionals should see.
- Examples of successful precedent searches. In other words, where and how do you find examples that you can use as a starting point?
- Where I’ve found good source training content for common training needs.
- Examples of how I conduct high-consequence searches. In other words, what I do when I need to make sure that I've found the right stuff. Found everything. So I won't get a question from left field that throws me off in my presentation.
- Which desktop search tool I use and why.
- My aha moment during a personal learning or formal learning experience?
- How I make my conference experiences more effective.
- Something that I’ve not seen written about recently that I think is really good to keep in mind.
- A long, lost blog post that needs to be revisited.
- Ways that my children are learning that are significantly different from how I learn.
- New places (online or not) that I’m finding interesting conversations.
- What I’ve given up in order to have more time for X.
- How do I decide what to scan and what to give up?
- How do I balance skimming and reading? How has this changed for me?
- Five videos of interest to learning professionals.
- Things I've done as follow-up to improve effectiveness of learning.
- Five podcasts for learning professionals.
- How I envision my audience.
- After LinkedIn, Delicious, Twitter and Facebook, the next three are...
- The learning project I really think should be done by my organization.
- The best and worst use of money by learning professionals.
- How learning professionals as a group should be better using social media to help each other.
- My recommendations of books for learning professionals.
- A tool I haven’t seen that I’d like to have.
- My ideal conference would be?
- Branding and messaging strategies I’ve used for my learning initiatives.
- My favorite learning story.
- People I’d like to meet in person to have a conversation.
- How I would/did explain informal learning and social media to my CEO or the head of a business unit.
- eLearning in five years in my organization.
- How my job will have changed in five years.
- Examples of recent, interesting online conversations.
- Examples of dumb things that people say about learning in my organization.
- If I had a magic wand, I’d …
- Common objections I run into in my work … and what I do.
- What I’m doing to have a personal brand inside and outside my organization.
- Where I add the biggest value to my organization or client organizations. What do I get paid for? Where are the disconnects between my value add and my pay?
- What my next professional role will be and things I'm doing towards that today.
- Email tricks I use.
- Something I’ve done to act as a catalyst to other people.
- Something I’ve done where if I had asked for permission I may not have got it.
- Something or someone that deserves praise (another blog post, blogger, worker, etc.)
- A recent tough decision that I had to make.
- A tough decision I made and that looking back, I wish I had known something else, done something different, etc.
- While I hate networking, here’s how I do it.
- How I network inside my organization.
- What I do to stay in touch with people I meet.
- My hardest ongoing task.
- How staying in touch has changed for me over the past five years.
- Tools that I get free and tools that I pay for and why I’m willing to pay.
- A great example of the use of a tool that really impressed me.
- My favorite learning quotes.
- How I do my work and where my work skills have changed in the last ten years.
- What I’ve learned in the past year that I don’t think most of my co-workers know.
- Important questions that every learning professional should be asking.
- A great source of information that I’m not sure people know about or maybe have forgotten.
- My ways for keeping things organized, finding things again, and keeping lists.
- Tricks to working with internal IT that seem to work well.
- How I take electronic notes as compared to taking notes on paper that I learned to do in school.
- Notes I take when I’m searching.
- How I structure my concept work tasks. Things I do at the start. Things I do during.
- How I find good lists.
- How I explore a new space, with examples. Say my company is thinking of entering a new market and I want to get up to speed on it, or I want to understand eLearning for sales people.
- What do I do if I’ve searched for something and I’m not sure if it exists. I’m considering writing about a tool that I think is needed, but I want to make sure that it doesn’t exist before I write about it.
- A search tool other than Google that I use, when and why.
- A barrier I face at work.
- Something my organization is doing to help employees that I’ve not heard much about from other organizations.
- Something I do that I suspect there’s a better way, but I’m just not sure what.
- A recent concept work task that seemed a lot harder than it should have been.
- A trend I’m seeing in my organization.
- A hip learning trend from the past that was way overhyped.
- Cool Word, Excel, etc. feature that I use when creating learning objects.
- Free media sources that I use – stock photography, clip art, animations, sounds, music.
- Something I’ve heard about learning or eLearning that I’m not sure if it’s true.
- A general principle or rule that I don’t follow and why.
- My top goals.
- If we hired someone to replace me that had a similar background, here are five things I don’t think they’d know.
- Metrics we actually track that are meaningful and useful that may be applicable to other learning professionals.
- A tool we use professionally that I thought worked well even though I wasn’t sure at the start.
- Examples of working differently with different generations in the workforce.
- Something we should borrow from other professionals (librarians, anthropologists, etc.) to help us.
- If I could get the following people (names or types) together for a conversation, here’s what I’d want to discuss.
- My top challenges in my work
- How I find blog topics
- The 5 conversation topics that are not listed in Tony’s that I’d like to have with other people that Tony and his readers would likely find interesting.
- (Just added) Why 100 Conversations is a really good or bad idea?