Tony Karrer's eLearning Blog on e-Learning Trends

Wednesday, December 10, 2008

Data Driven

At the start of any performance improvement initiative, there is a question of whether the initiative is going to have real impact on what matters to the organization. Will the retail sales training change behavior in ways that improve customer satisfaction? Will the performance support tool provided to financial advisors increase customer loyalty? Will the employee engagement intervention provide only short-term benefit, or will it have a longer-term effect on engagement and retention?

If you want to really improve the numbers via a performance improvement initiative then you need to start and end with the data.

Using a data driven approach to performance improvement is a passion of mine. Looking back at various projects that have done this, a widely applicable model emerges for data driven performance improvement initiatives. Understanding this model makes it possible to apply it in different situations to help drive behavior change that ultimately leads to improvement in metrics.

THE PROCESS AND MODEL

At its simplest, the model is based on providing metrics that suggest possible tactical interventions, supporting the creation of action plans to improve those metrics, and tracking changes in the metrics so that performers can see their progress and continually improve. Additional specifics around this model will be introduced below, but it is easiest to understand the model through an example.

This system comes out of an implementation, built by my company, TechEmpower, as a custom solution for the retailer, where the focus was improving customer satisfaction in retail stores. In this situation, customer satisfaction is the key driver of profitability, same-store sales growth, and basically every metric that matters to this organization. Customer satisfaction data is collected on an ongoing basis using a variety of customer survey instruments. These surveys focus on overall customer satisfaction, intent to repurchase, and a specific set of key contributing factors. For example, one question asks whether “Associates were able to help me locate products.”



In this case, the performance improvement process began when performance analysts reviewed the customer satisfaction metrics and conducted interviews with a wide variety of practitioners, especially associates, store managers, and district managers. The interviews were used to determine best practices, find interventions that had worked for store managers, and understand both in-store dynamics and the dynamics between store managers and district managers.

Based on the interviews, the performance analysts defined an initial set of interventions that targeted key contributing behaviors closely aligned with the surveys. For example, four initial interventions were defined to help a store manager improve the store’s scores on “knowledge of product location.” The defined interventions focused on communications, associate training opportunities, follow-up opportunities, and other elements that had been successful in other stores.

Once the interventions were defined, the custom web software system illustrated in the figure above was piloted with a cross-section of stores. There was significant communication and support provided to both store managers and district managers in order for them to understand the system, how it worked and how it could help them improve customer satisfaction. Because customer satisfaction data was already a primary metric with significant compensation implications, there was no need to motivate them but there was need to help them understand what was happening.

The system is designed to run in three-month cycles, with store managers and district managers targeting improvements for particular metrics. At the beginning of each cycle, store managers receive a satisfaction report. This report shows the numbers in a form similar to existing reports and shows comparisons with similar stores and against organizational benchmarks.

Store managers review these numbers and then are asked to come up with action plans for particular metrics where improvement is needed. To do this, the store manager clicks a link to review action plan templates based on best practices from other stores. Each action plan consists of a series of steps that include things like pre-shift meetings/training, on-the-fly in-store follow-up, job aids for employees such as store layout guides, games, etc. Each item has a relative date that indicates when it should be completed. Managers can make notes and modify the action plan as they see fit. Once they are comfortable with their plan, they send it to the district manager for review. The district manager reviews the plan, discusses it with the store manager, suggests possible modifications, and then the store manager commits to the plan.
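For readers who think in code, the action-plan structure just described (templated steps, relative due dates, notes, district-manager approval) can be sketched roughly as follows. All names and fields here are hypothetical; the actual system was custom web software.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Step:
    description: str       # e.g. a pre-shift meeting or in-store follow-up
    offset_days: int       # relative date: days after the plan's start
    completed: bool = False
    notes: str = ""

@dataclass
class ActionPlan:
    metric: str            # the survey metric this plan targets
    start: date
    steps: list = field(default_factory=list)
    approved: bool = False  # set after district-manager review

    def due_date(self, step):
        """Turn a step's relative date into an absolute due date."""
        return self.start + timedelta(days=step.offset_days)

# A store manager copies a best-practice template, then modifies it:
plan = ActionPlan(metric="knowledge of product location", start=date(2008, 9, 1))
plan.steps.append(Step("Pre-shift meeting using store layout job aid", 3))
plan.steps.append(Step("On-the-fly follow-up with associates", 14))
print(plan.due_date(plan.steps[1]))  # → 2008-09-15
```

The key design point the relative dates capture is that templates are reusable: the same best-practice plan can be scheduled by any store at any time, then tailored before it goes to the district manager for approval.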

Once the plan is approved, the store manager is responsible for executing the plan, marking completion, making notes on issues, and providing status to the district manager. Most action plans last four to six weeks. Both the store manager and district manager receive periodic reminders of required actions. As part of these email reminders, there is subtle coaching. For example, performance analysts have determined suggested conversations that district managers should have with the store manager, or things they might try on their next store visit, associated with the particular intervention. The district manager is given these suggestions electronically based on the planned execution of the action plan. This is not shown to the store manager as part of the action plan, and it has been found to be an important part of effectively engaging district managers in getting change to occur.
Once the store manager has marked the entire plan as completed, an assessment is sent to the store and district managers. This assessment briefly asks whether the store manager and district managers felt they were able to effectively implement the intervention and offers an important opportunity for them to provide input around the interventions. Their ratings are also critical in determining why some interventions are working or not working.

At the next reporting cycle, the system shows store managers and district managers the before and after metrics that correspond to the timing of the action plan. We also show how their results compare with other stores that recently executed a similar action plan.

This marks the beginning of another action plan cycle. The store managers review their customer satisfaction data and are again asked to make action plans. In most cases, we add to the action plan for this cycle a series of follow-up steps to continue the changed behavior associated with the prior action plan.

If you look at what’s happening more broadly, the larger process is now able to take advantage of some very interesting data. Because we have before and after data tied to specific interventions, we have clear numbers on what impact interventions had on the metrics. For example, two interventions were designed to help store managers improve the scores around “knowledge of store layout.” One intervention used an overarching contest: a series of shift meetings to go through content using a job aid, plus a series of actions by key associates who would quiz and grade other associates on their knowledge, all wrapped within the overall fun contest. The other intervention used a series of scavenger hunts designed to teach associates product location in a fun way. Both interventions were found to have a positive impact on survey scores for “knowledge of store layout.” However, one of the interventions was found to be more effective. I’m intentionally not going to tell you which, because I’m not sure we understand why, nor can we generalize this. We are also trying to see if modifications will make the other intervention more effective. The bottom line is that we quickly found out which interventions were most effective. We were also able to see how modifications made to the pre-defined interventions by store managers as part of the action planning process affected the outcomes. Some modifications were found to be more effective than the pre-defined interventions, which allowed us to extract additional best practice information.
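At its core, the comparison described here is a grouped analysis of before/after score deltas per intervention. A minimal sketch of that idea, with entirely made-up numbers (the real analysis lived inside the custom system):

```python
from collections import defaultdict

# (intervention, before_score, after_score) for each completed action plan
# targeting "knowledge of store layout" -- illustrative data only
results = [
    ("contest",        3.4, 3.9),
    ("contest",        3.1, 3.5),
    ("scavenger_hunt", 3.3, 3.6),
    ("scavenger_hunt", 3.5, 3.7),
]

# Group the score improvements by intervention
deltas = defaultdict(list)
for intervention, before, after in results:
    deltas[intervention].append(after - before)

# Average improvement per intervention
for name, ds in sorted(deltas.items()):
    print(f"{name}: {sum(ds) / len(ds):+.2f}")
```

With enough completed plans, the same grouping extends naturally to comparing manager-modified plans against the pre-defined templates, which is how the modified-plan insight mentioned above falls out of the data.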

Overall, this approach had significant impact on key metrics and helped capture and spread best practices. It also had a few surprises. In particular, we were often surprised at what was effective and what had marginal impact. We were also often surprised by tangential effects. For example, interventions aimed at improving employees’ knowledge of store layout had a positive impact on quite a few other factors, such as “store offered the products and services I wanted,” “products are located where I expect them,” “staff enjoys serving me,” and, to a lesser extent, several others. In hindsight it makes sense, but it also indicates that stores that lag in those factors can be helped by also targeting associate knowledge.

The pilot ran for nine months, three cycles of three months each. It showed significant improvement as compared to stores that had the same data reported but did not have the system in place. Of course, there were sizable variations in the effectiveness of particular interventions and also in interventions across different stores and with different district managers involved. Still, the changes in the numbers made the costs of implementing the system seem like a rounding error as compared to the effect of improvement in customer satisfaction.

The system continues to improve over time. When we say “the system,” the software and approach have not changed much, but our understanding of how to improve satisfaction keeps getting better. As we work with this system, we continually collaborate to design more and different kinds of interventions, modify or remove interventions that don’t work, and explore high-scoring stores to find out how they get better results.

So why was this system successful when clearly this retailer, like many other retailers, had been focused on customer satisfaction for a long time across various initiatives? In other words, this organization already provided these metrics to managers, trained and coached store managers and district managers on improving customer satisfaction, placed an emphasis on customer satisfaction via compensation, and used a variety of other techniques. Most store managers and district managers would tell you that they already were working hard to improve satisfaction in the stores. In fact, there was significant skepticism about the possibility of getting real results.

So what did this system do that was different than what they had been doing before? In some ways, it really wasn’t different than what this organization was already doing; it simply enabled the process in more effective ways and gave visibility into what was really happening so that we could push the things that worked and get rid of what didn’t work. In particular, if you look at the system, it addresses gaps that are common in many organizations:
  • Delivers best practices from across the organization at the time and point of need
  • Provides metrics in conjunction with practical, actionable suggestions
  • Enables and supports appropriate interaction in manager-subordinate relationships that ensures communication and builds skills in both parties
  • Tracks the effectiveness of interventions, forming a continuous improvement cycle that determines which best practices can be most effectively implemented to improve satisfaction
From the previous description, it should be clear that the beauty of this kind of data driven approach is that it supports a common-sense model, but does it in a way that allows greater success.

ADDITIONAL DOMAINS

Data driven performance improvement systems have been used across many different types of organizations, different audiences, and different metrics. Further, there are a variety of different types of systems that support similar models and processes.

Several call center software providers use systems that are very similar to this approach. You’ll often hear a call center tell you, “This call may be monitored for quality purposes.” That message tells you that the call center is recording calls so that quality monitoring evaluations can be done on each agent each month. The agent is judged on various criteria such as structure of the call, product knowledge, use of script or verbiage, and interaction skills. The agent is also being evaluated based on other metrics such as time on the call, time to resolution, number of contacts to resolve, etc. Most of these metrics and techniques are well established in call centers.

Verint, a leading call center software provider, uses these metrics in a process very similar to the retail example described above. Supervisors evaluate an agent’s performance based on these metrics and then can define a series of knowledge- or skill-based learning or coaching steps. For example, they might assign a particular eLearning module that would be provided to the agent at an appropriate time based on the workforce management system. The agent takes the course, which includes a test to ensure understanding of the material. At this point the Verint system ensures that additional calls from this agent are recorded so that the supervisor can evaluate whether the agent has improved in that specific area.

In addition to specific agent skills, the Verint system is also used to track broader trends and issues. Because you get before and after metrics, you have visibility into changes in performance based on particular eLearning modules.

Oscar Alban, a Principal and Global Market Consultant at Verint, explains: “Many companies are now taking these practices into the enterprise. The area where we see this happening is the back office, where agents are doing a lot of data entry–type work. The same way contact center agents are evaluated on how well they interact with customers, back-office agents are evaluated on the quality of the work they are performing. For example, if back-office agents are inputting loan application information, they are judged on the number of errors and the correct use of the online systems they must use. If they are found to have deficiencies in any area, then they are coached or are required to take an online training course in order to improve.” Verint believes this model applies to many performance needs within the enterprise.

Gallup uses a similar approach, but targeted at employee engagement. Gallup collects initial employee engagement numbers using a simple 12-question survey called the Q12. These numbers are rolled-up to aggregate engagement for managers based on the survey responses of all direct and indirect reports. The roll-up also accounts for engagement scores for particular divisions, job functions, and other slices. Gallup provides comparison across the organization based on demographics supplied by the company and also with other organizations that have used the instrument. This gives good visibility into engagement throughout the organization.
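The roll-up described above is essentially an aggregation of individual survey scores along the reporting hierarchy, so that every manager's number reflects all direct and indirect reports. A hedged sketch of that mechanic (the structure and names are invented; Gallup's actual Q12 methodology and benchmarking are considerably more involved):

```python
from collections import defaultdict

# Each employee's Q12 mean score, plus the manager chain above them
# (top of the hierarchy first) -- illustrative data only
responses = [
    (["vp_sales", "dist_mgr_1", "store_mgr_a"], 4.2),
    (["vp_sales", "dist_mgr_1", "store_mgr_a"], 3.8),
    (["vp_sales", "dist_mgr_1", "store_mgr_b"], 3.1),
    (["vp_sales", "dist_mgr_2", "store_mgr_c"], 4.5),
]

# Every manager in the chain gets credit for each direct and indirect report
totals = defaultdict(lambda: [0.0, 0])
for chain, score in responses:
    for manager in chain:
        totals[manager][0] += score
        totals[manager][1] += 1

engagement = {m: total / n for m, (total, n) in totals.items()}
print(round(engagement["dist_mgr_1"], 2))  # → 3.7 (mean of 4.2, 3.8, 3.1)
```

The same pass works for any slice (division, job function, demographic): you just swap the manager chain for whatever grouping keys the company supplies.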

Gallup also provides a structure for action planning and feedback sessions that are designed to help managers improve engagement. Gallup generally administers the surveys annually. This allows them to show year-over-year impact of different interventions. For example, they can compare the engagement scores and change in engagement scores for managers whose subordinates rated their manager’s feedback sessions in the top two boxes (highest ratings) compared with managers who did not hold feedback sessions or whose feedback session was not rated highly. Not surprisingly, engagement scores consistently have a positive correlation with effective feedback sessions.

There are many examples beyond the three cited here. Just based on these examples, it is clear that this same model can apply to a wide variety of industries, job functions, and metrics. Metrics can come from a variety of existing data sources such as product sales numbers, pipeline activity, customer satisfaction, customer loyalty, evaluations, etc. Metrics can also come from new sources, as in the case of Gallup, where a new survey is used to derive the basis for interventions. These might be measures of employee satisfaction, employee engagement, skills assessments, best practice behavior assessments, or other performance assessments. In general, using existing business metrics will have the most impact and often has the advantage of existing organizational alignment around those metrics; for example, compensation is often tied to them. Using metrics that are new to the organization will, at a minimum, require communicating the connection between those numbers and the bottom line.

COMMON CHALLENGES

When you implement this kind of solution, you will encounter a variety of common challenges.

Right Metrics Collected

As stated above, there are a wide variety of possible metrics that can be tied to particular performance interventions. However, in cases where metrics don’t exist or are not being collected, additional work is required not only to gather the input metrics but also to convince the organization that these are the right metrics. Assessments and intermediate factors can and often should be used, but they must be believed and have real impact for all involved.

Slow-Changing Data and Slow Collection Intervals

Many metrics change slowly or may not be collected often enough to give you immediate visibility into impact. In these cases, we’ve used various data points as proxies for the key metrics. For example, if customer loyalty is the ultimate metric, you should likely focus on intermediate factors that you know contribute to loyalty, such as recency and frequency of contact, customer satisfaction, and employee knowledge. For metrics where you only have annual cycles, you may want to focus on a series of interventions over the year. Alternatively, you may want to define targeted follow-up assessments to determine how performance has changed.
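One concrete way to use intermediate factors as a proxy: blend the faster-moving factors into a single tracking score, weighted by how strongly each is believed to contribute to the slow metric. The weights and factor names below are invented for illustration; in practice they would come out of survey research for your organization.

```python
# Intermediate factors believed to drive customer loyalty, with
# assumed weights (hypothetical -- derived from research in practice)
weights = {
    "contact_recency": 0.25,
    "contact_frequency": 0.25,
    "customer_satisfaction": 0.35,
    "employee_knowledge": 0.15,
}

def proxy_loyalty_score(factors):
    """Weighted blend of normalized (0-1) factor scores."""
    return sum(weights[name] * value for name, value in factors.items())

current = {
    "contact_recency": 0.6,
    "contact_frequency": 0.5,
    "customer_satisfaction": 0.8,
    "employee_knowledge": 0.7,
}
print(round(proxy_loyalty_score(current), 3))  # → 0.66
```

Because the component factors update much faster than an annual loyalty number, the proxy score gives managers between-cycle feedback while the slow metric validates the weights over time.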

Data Not Tied to Specific Performance/Behavior

Customer loyalty is again a good example of this challenge. Knowing that you are not performing well on customer loyalty does not provide enough detail to know what interventions are needed. In the case of customer satisfaction at the store level, the survey questions asked about specific performance, skills or knowledge you expected of the store employees – “Were they able to direct you to products?” or “Were they knowledgeable of product location in the store?” Poor scores on these questions suggest specific performance interventions.

In the case of customer loyalty, you need to look at the wide variety of performance and behaviors that collectively contribute to customer loyalty and define metrics that link to those behaviors. In a financial advisor scenario, we’ve seen this addressed by looking at metrics such as frequency of contact, customer satisfaction, products sold, and employee satisfaction. With appropriate survey researchers involved, you will often gain insights over time into how these behavior-based numbers relate to customer loyalty. But the bottom line is that you likely need additional assessment instruments that can arrive at more actionable metrics.

CONCLUSIONS

The real beauty of a data driven model for performance improvement is that it focuses on key elements of behavior change within a proven framework. More specifically, it directs actions that align with metrics that are already understood and important. It helps ensure commitment to action. It provides critical communication support, for example helping district managers communicate effectively with store managers around metrics and what they are doing. It helps hold the people involved accountable to each other and to taking action in a meaningful way. And the system ties interventions to key metrics for continuous improvement.

One of the interesting experiences in working on these types of solutions is that it’s not always obvious what interventions will work. In many cases, we were surprised when certain interventions had significant impact and other similar interventions did not. Sometimes we would ultimately trace it back to problems that managers encountered during the implementation of the intervention that we had not anticipated. In other words, it sounded good on paper, but ultimately it really didn’t work for managers. For example, several of the games or contests we designed didn’t work out as anticipated. Managers found that interest faded quickly and that small external rewards didn’t necessarily motivate associates. Interestingly, other games or contests worked quite well. This provided a real opportunity to modify or substitute interventions. We also found ourselves modifying interventions based on the feedback of managers who had good and bad results from their implementations.
The other surprise was that very simple interventions would many times be the most effective. Providing a manager with a well-structured series of simple steps, such as what we refer to as a “meeting in-a-box” or “follow-up in-a-box,” would often turn out to have very good results. These interventions were provided as web pages, documents, templates, etc. that the manager could use and modify for their purposes. There was, of course, lots of guidance on how to use these resources effectively as part of the intervention. In many cases, the interventions were based on information and documents that were being used in some stores but not widely recognized or adopted. Because of the system, we were then able to use similar interventions in other cases. But, because the practicality of interventions is paramount, we still had challenges with their design.

Of course, this points to the real power of this approach. By having a means to understand what interventions work and don’t work, and having a means to get interventions out into the organization, we have a way of really making a difference. Obviously, starting and ending with the data is the key.

In 2009, I'm hoping that I will get to work on a lot more data driven performance improvement projects.

New Blog

Ingrid O'Sullivan has a new blog and is the first person to take me up on my post 100 Conversation Topics, which asks people to start a conversation with me and get aggregated into 100 conversations. Good for you, Ingrid!

Sidenote: I feel a little behind having just seen that the company Ingrid works for, Third Force, actually acquired MindLeaders back in June 2007 and looks to be a fairly serious player. Normally, I'm pretty familiar with companies in the space, but I was not familiar with them. So, it was good for me to at least get them on my radar.

Ingrid's post tells a bit of a story that is likely familiar to other authors of a relatively new blog. Ingrid tells us that among her hardest challenges is deciding what to write in the blog ...
I’m pretty new to blogging [...] I really want this blog to grow, to be of interest to you our readers and provide relevant information to you. And boy is that hard… at least twice a week I am faced with the task of getting something ready to post. I question what I write - how personal should it be, if it’s too technical will it bore you, is it original and new, am I at all amusing or funny – this list goes on. And I think half the problem is because this is such a new blog, we are still discovering who you the readers are, and looking for feedback on what you want. I’m hoping as I gain more experience, have more “conversations” and learn from the likes of Tony, this will no longer be my hardest ongoing task - but for now dear readers please read with patience.
With a new blog, I think it's likely that your first couple of posts come out quite easily, and then you find yourself wondering what to write about later. Likely there are some great posts out there that chronicle the lifecycle of new blogs as they go through this early growing challenge. Take a look at what Janet Clarey had to say after her first 100 days - Debriefing myself…a noob’s experience after 100-ish days of blogging. I'm sure there are other good examples out there of this lifecycle - pointers?

Some quick thoughts as I read the post on her new blog ...
  1. You are right that trying to figure out the audience is helpful for any new blog. What kinds of questions do they have? Hopefully my list of topics helps. At least those are some of my questions and likely some questions that other people have as well.
  2. I think it's easier to write posts when you are writing almost as much for your own learning as you are for "the audience." I personally don't ever think of "audience" or "readers" - many of whom I don't know. Instead, I think about people I do know who read this and somewhat have a conversation with them. But the bottom line is: if you are interested in something, it will be interesting to the audience.
  3. Your past posts are definitely interesting. I personally would get more if you went a bit deeper on your topics. What are the challenges with being funny? Personal? Etc.? What was a specific example of where you were challenged to find a topic? Is this something that you think other bloggers face (they do)? Can you point me to some examples of that? These would have made a more interesting conversation for me and likely other bloggers and likely your readers as well. A blog offers the opportunity to go deep and narrow. Oh, and I will skip a post (as will other readers) if it's not relevant. But I think the bigger risk is never going deep enough.
  4. Don't get too caught up in Measuring Blog Success. Your goal should be to have interesting conversations. Results will follow.
  5. Have you participated in a Learning Circuit's Big Question? This is a great way to get exposure to the blogging community and grow your audience.
  6. As you are writing a corporate blog, you have to walk a fine line. It's far more difficult than writing a personal blog outside the confines of a corporation. I would recommend staying away from promoting Third Force explicitly in your posts. You'll notice what I deleted above when I cut and paste. The extra stuff was not needed and a bit too promotional. You'll get the message across without that kind of stuff, but you will turn off some people with it. So, it's far safer to avoid it.
  7. Make sure you periodically engage other bloggers around their posts. Oh, you just did. Well done. :)
  8. Take a look at Blog Discussion for some ideas on other ways to spark discussion.
As I wrote this, I realized that if we were at a cocktail party (a bit less public and with drinks) this probably would have come out much better. As it stands, it sounds far more critical than it should. I'm trying to be helpful and I actually think you are doing good stuff and it's a good idea for you (and your company) to have you blogging. So, I hope this is okay. Ingrid's not asking for a critique. She's just wanting to converse about it.

Ack, someone help me here. First, I Push People to Blog and then I critique them. That's not good. What should I have said to the writer of a new blog that would have been much more encouraging?

And anything else that would help Ingrid? I'm sure there are some other thoughts from other bloggers out there.

Tuesday, December 09, 2008

Training Design

I've been struggling a bit to capture a concept that I believe represents a fairly fundamental shift in how we need to think about Training Design.

Back in 2005, 2006 and 2007, I would regularly show the following slides to help explain the heart of what Training Design is all about and how it has changed over the years. Oh, and I called it Learning Design in the diagrams, but I'm afraid that it's really more about Training Design.



Basically, we conduct an analysis (sometimes extensive, often very quick) to determine what we are really trying to accomplish. We take into account a wide variety of considerations. And we consult our delivery model options to do this fuzzy thing - Training Design. Back in 1987, the dominant tool was classroom delivery and thus, we primarily created training and train-the-trainer materials. We kept these in notebooks which adorn many shelves today (but are getting rather dusty).

(And yes, I know this is a gross oversimplification, but it gets the point across.)

Ten years later, life was good because we had another training method available, the CD-ROM, which allowed us to train individuals.



Yes, we theoretically had this back in 1987 with paper-based materials, but we looked at the CD as a substitute for classroom instruction.

In 2007, we suddenly had a whole bunch of different delivery models: virtual classroom, web-based training (WBT), rapidly created eLearning, and lots of online reference tools such as help systems, cheat sheets, and online manuals. We also had discussion forums and ongoing office hours.



In many cases, this makes our final delivery pattern much more complex, but it greatly reduces the time required upfront by learners and allows us to get them information much more just-in-time and with more appropriate costs.

However, when you look at these models, the design is roughly the same. Maybe this more appropriately would be called Learning Design - or eLearning Design - or maybe something else that implies performance support as well.

Now the interesting part ... the heart of the picture, and realistically how we approached training design, was the same in 1987 as it was in 2007.

My sense is that we may need a new picture because of eLearning 2.0.

Yes, you can think of blogs, wikis, etc. as a means of enriching the Training Design, much the same as a discussion group alongside formal instruction. When Harold, Michele, and I worked together to design the Web 2.0 for Learning Professionals course, we settled on using Ning and its various capabilities as part of the delivery pattern. This is the same picture as above.

However, what about the case where you are providing tools and really don't have the content defined ahead of time? What about when you build skills around scanning via RSS, social bookmarking, reaching into networks for expertise, etc.? What about when you help individuals adopt blogging as a learning practice? When you support informal / self-directed / workgroup learning? Is it the same picture?

Maybe it is? Maybe we conduct a similar performance analysis and take into account similar considerations and then provide appropriate structure (delivery pattern). Maybe we are providing a Wiki and conducting a barn raising session?

My sense is that there's something different about it? But I'm so used to having this as my mental model, that I'm having a hard time figuring out what the alternative is?

Monday, December 08, 2008

100 Conversation Topics

Today, I saw a post by someone suggesting ways to come up with ideas for blog post topics and they gave some examples. The examples were not all that relevant to most of the readers of this blog, but it definitely sparked a thought for me.

Almost every time I have a conversation, I learn something new. Most of the time I learn something, I write a blog post. But I don’t have nearly enough time to have conversations, learn, and write blog posts. So now that some people have called me influential, I’m hoping that I can leverage my influence to inspire people to have a conversation with me and help me with my lack of time.

So, here are my suggested 100 conversation topics that I wish I had time to speak to you (yes YOU) about. And since I’m sure I’d learn something, I’d likely write up a post about it. But since I don’t have time for either ...
I’m hoping you will just pretend we had the conversation and write a summary of the conversation we had.
If you are a blogger, then posting the conversation is great. Point me to it by including a link to 100 Conversations and the exact text "100 Conversations" in your post, and I'll find it via blog search. Please also include terms and a link that will help your post get put into the appropriate categories in the eLearning Learning Community. I've included some examples below, but I got tired after a while, so I'm hoping you will add terms / links in your post that help it get categorized. You can also point me to your post via a comment.

I’ll also try to make sure that readers of this blog see it via blog posts. My goal is to make sure that I use this as an opportunity to have a more meaningful conversation with you.

If you are not a blogger then start a blog so we can have this conversation. It's a great start to your blog. And likely I can get you some initial traffic.

If you aren’t willing to start a blog, then send it to me via email, let me know if it can be public, and if so, I can see if it would work for me to post it somewhere.

This is a bit of an experiment, so please bear with me if I’m slow or don’t quite have it all figured out. And please follow the specific instructions above (about links and categories) to make sure that this works out. Oh, and if this is a really bad idea, or there's a better way to do it, or whatever, then maybe that would be good to have a conversation about.

Important - please keep in mind that the audience here is learning professionals involved in the use of technology for learning. So, please write the conversation for me and for them. Here are my 100 conversation topics …
  1. Here’s the eLearning Authoring Tool we chose to use and approach we used to evaluate and decide. And the major decision criteria that really differentiated for us was…
  2. Here are the surprises we found after we chose our eLearning Authoring Tool ….
  3. Here’s my eLearning Authoring Method or Trick
  4. Topics 1, 2, 3 for LMS, LCMS, Audio, Virtual Classroom Tool, Screencast, Wiki, eLearning Game Tool, etc.
  5. An eLearning Activity I created or Interactivity I added to an online course that I thought was a good idea.
  6. Here’s how I use Facebook for personal learning
  7. How I use Twitter for personal learning
  8. How I use Blogging for personal learning
  9. Where I believe social media can be adopted by learners in my organization.
  10. A plan for adopting social media as a learning tool in our organization.
  11. Where we have adopted social media as a learning tool in our organization. What has our experience been so far? What have we learned?
  12. My thoughts on the ROI of eLearning 2.0.
  13. The problems with eLearning 2.0 in my organization.
  14. How I found an answer to a work problem using a learning community.
  15. A search method I use that I don’t think a lot of other people use.
  16. Five presentations related to eLearning that learning professionals should see.
  17. Example of successful precedent searches. In other words, where and how do you find examples that you can use as a starting point?
  18. Where I’ve found good source training content for common training needs.
  19. Examples of how I conduct high consequence searches. In other words, what do I do when I need to make sure that I’ve found the right stuff and found everything, so I won’t get a question from left field that throws me off in my presentation.
  20. Which desktop search tool I use and why.
  21. My aha moment during a personal learning or formal learning experience?
  22. How I make my conference experiences more effective.
  23. Something that I’ve not seen written about recently that I think is really good to keep in mind.
  24. A long, lost blog post that needs to be revisited.
  25. Ways that my children are learning that are significantly different from how I learn.
  26. New places (online or not) that I’m finding interesting conversations.
  27. What I’ve given up in order to have more time for X.
  28. How do I decide what to scan and what to give up?
  29. How do I balance skimming and reading? How has this changed for me?
  30. Five videos of interest to learning professionals.
  31. Things I've done as follow-up to improve effectiveness of learning.
  32. Five podcasts for learning professionals.
  33. How I envision my audience.
  34. After LinkedIn, Delicious, Twitter and Facebook, the next three are...
  35. The learning project I really think should be done by my organization.
  36. The best and worst use of money by learning professionals.
  37. How learning professionals as a group should be better using social media to help each other.
  38. My recommendations of books for learning professionals.
  39. A tool I haven’t seen that I’d like to have.
  40. My ideal conference would be?
  41. Branding and messaging strategies I’ve used for my learning initiatives.
  42. My favorite learning story.
  43. People I’d like to meet in person to have a conversation.
  44. How I would/did explain informal learning and social media to my CEO or the head of a business unit.
  45. eLearning in five years in my organization.
  46. How my job will have changed in five years.
  47. Examples of recent, interesting online conversation.
  48. Examples of dumb things that people say about learning in my organization.
  49. If I had a magic wand, I’d …
  50. Common objections I run into in my work … and what I do.
  51. What I’m doing to have a personal brand inside and outside my organization.
  52. Where I add the biggest value to my organization or client organizations. What do I get paid for? Where are the disconnects between my value add and my pay?
  53. What my next professional role will be and things I’m doing toward that today.
  54. Email tricks I use.
  55. Something I’ve done to act as a catalyst to other people.
  56. Something I’ve done where if I had asked for permission I may not have got it.
  57. Something or someone that deserves praise (another blog post, blogger, worker, etc.)
  58. A recent tough decision that I had to make.
  59. A tough decision I made and that looking back, I wish I had known something else, done something different, etc.
  60. While I hate networking, here’s how I do it.
  61. How I network inside my organization.
  62. What I do to stay in touch with people I meet.
  63. My hardest ongoing task.
  64. How staying in touch has changed for me over the past five years.
  65. Tools that I get free and tools that I pay for and why I’m willing to pay.
  66. A great example of the use of a tool that really impressed me.
  67. My favorite learning quotes.
  68. How I do my work and where my work skills have changed in the last ten years.
  69. What I’ve learned in the past year that I don’t think most of my co-workers know.
  70. Important questions that every learning professional should be asking.
  71. A great source of information that I’m not sure people know about or maybe have forgotten.
  72. My ways for keeping things organized, finding things again, and keeping lists.
  73. Tricks to working with internal IT that seem to work well.
  74. How I take electronic notes as compared to taking notes on paper that I learned to do in school.
  75. Notes I take when I’m searching.
  76. How I structure my concept work tasks. Things I do at the start. Things I do during.
  77. How I find good lists.
  78. How I explore a new space, with examples. For instance: my company is thinking of entering a new market and I want to get up to speed on it, or I want to understand eLearning for sales people.
  79. What do I do if I’ve searched for something and I’m not sure if it exists. I’m considering writing about a tool that I think is needed, but I want to make sure that it doesn’t exist before I write about it.
  80. A search tool other than Google that I use, when and why.
  81. A barrier I face at work.
  82. Something my organization is doing to help employees that I’ve not heard much about from other organizations.
  83. Something I do that I suspect there’s a better way, but I’m just not sure what.
  84. A recent concept work task that seemed a lot harder than it should have been.
  85. A trend I’m seeing in my organization.
  86. A hip learning trend from the past that was way overhyped.
  87. Cool Word, Excel, etc. feature that I use when creating learning objects.
  88. Free media sources that I use – stock photography, clip art, animations, sounds, music.
  89. Something I’ve heard about learning or eLearning that I’m not sure if it’s true.
  90. A general principle or rule that I don’t follow and why.
  91. My top goals.
  92. If we hired someone to replace me who had a similar background, here are five things I don’t think they’d know.
  93. Metrics we actually track that are meaningful and useful that may be applicable to other learning professionals.
  94. A tool we use professionally that I thought worked well even though I wasn’t sure at the start.
  95. Examples of working with different generations in the workforce differently.
  96. Something we should borrow from other professionals (librarians, anthropologists, etc.) to help us.
  97. If I could get the following people (names or types) together for a conversation, here’s what I’d want to discuss.
  98. My top challenges in my work
  99. How I find blog topics
  100. The 5 conversation topics not on Tony’s list that I’d like to have with other people, and that Tony and his readers would likely find interesting.
  101. (Just added) Why 100 Conversations is a really good or bad idea.
I look forward to our conversation.

Interesting Information via eLearning Learning

We've implemented a few features in the eLearning Learning Community. You can see the first feature by visiting the site and clicking around on terms. What you will now see is that the left hand keywords are sorted based on a matching algorithm that takes into account how related that term is to the currently selected terms. As an example, I can see that eLearning 2.0 relates closely to the concepts Learning 2.0, eLearning Tools, Corporate eLearning, Personal Learning, and Enterprise 2.0; the Tools Odeo, CollectiveX, Bea Pages, Apache Roller, and Dogear; and the Companies NexLearn, Awareness Networks, Element K, and Mzinga.

This is far from perfect, but it generally gives you a pretty good sense of what relates to what. I'll be curious to see what interesting associations of terms people find.

This approach also helps to show which terms particular bloggers talk about more often than other bloggers. Below are some of the bloggers participating in the community; when you click on a link, it will show you the community page with that blogger selected. On the left side you will see the keywords sorted based on how often they blog about those terms as compared to the terms used by everyone (with a couple of other factors thrown in that give a better result). I've listed a few of the terms that I saw.

As you drill down, it takes into account both the source and the additional terms that are selected.
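The actual matching algorithm behind eLearning Learning isn't described in detail here, but one way this kind of keyword sorting is often done is a lift-style co-occurrence score: how likely a term is to appear alongside the selected terms, relative to how common it is overall. This is a minimal sketch under that assumption; the function name and scoring formula are my own, not the site's:

```python
from collections import Counter

def rank_related_terms(posts, selected):
    """Rank terms by how often they co-occur with the selected terms,
    relative to their overall frequency (a lift-style score).
    `posts` is a list of sets of terms; `selected` is a set of terms."""
    overall = Counter()   # term -> posts mentioning it anywhere
    co = Counter()        # term -> posts mentioning it alongside `selected`
    matching = 0          # posts that mention all selected terms
    for terms in posts:
        overall.update(terms)
        if selected <= terms:
            matching += 1
            co.update(terms - selected)
    total = len(posts)
    scores = {}
    for term, n in co.items():
        p_co = n / matching                # P(term | selected terms present)
        p_overall = overall[term] / total  # P(term) across all posts
        scores[term] = p_co / p_overall    # lift > 1 suggests relatedness
    return sorted(scores, key=scores.get, reverse=True)

# Tiny illustration with made-up posts: "wiki" co-occurs with
# "elearning 2.0" in every matching post, so it outranks "blog".
posts = [
    {"elearning 2.0", "wiki"},
    {"elearning 2.0", "wiki", "blog"},
    {"lms", "scorm"},
    {"blog"},
]
print(rank_related_terms(posts, {"elearning 2.0"}))  # ['wiki', 'blog']
```

The same ratio idea covers the per-blogger view too: compare a blogger's term frequencies against the whole community's to surface what they write about disproportionately often.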

Again, I'll be curious to see if people find patterns in this information.

Lars Hyland - Lars is Learning
Good to see Learning Technologies, Mobile Learning come up for Lars. Gives me an idea of who to talk to. Also interesting topics: Cognitive, Effectiveness, LCMS, Social Network, iPhone, Director.
Ken Allan - Blogger in Middle-earth
Ken talks about everything, including some topics that come up for me: Writing a Blog, Analytics, Communities of Practice, Photoshop, Pipes, Firefox, RSS.
Mel Aclaro - Business Casual
Mel has a lot of interesting topics, clearly many of them social (Social Media, Social Networks, LinkedIn, Twitter, RSS, Social Network, Facebook), but keep him in mind for Streaming, Privacy, and Internationalization.
David Fair - Learning Journeys
Talk to him about graphics (PNG, JPG) and staying organized (Tagging, Information Overload).
Joe Deegan - Blender - Training Solutions
Freeware, ILT, SWF, ROI, Wondershare, PowerCONVERTER, Adobe Captivate, Photoshop, SharePoint (SharePoint Examples - Joe, can you help me?)
Other Sources - you can click and see what they write about on the left.