Thursday, July 31, 2008

Preface to a Month of Poems

This is a shaggy dog tale if there ever was one and I'm not sure how interesting it will be to others, but I find it a curious example of how the mind works. At least, my mind.

I am about to start a new post where I will read and comment on a poem every day for at least a month. The process will be:

  • Each day I will read a poem by a different poet
  • I will then write a short comment about the poem, the poet, or some random thought the reading instigates.
  • At the end of the month I will either stop, or if I still find it interesting, I'll keep going.

Since this is an experiment and I don't want to bloat this site with lots of short -- possibly boring -- entries, the entire month of poems will be a single post that I will edit each day.

Why do this? Well, that's what I find curious. It all started when I was cleaning my office and looking at my shelf (actually, shelves) of Nintendo DS games, several of which are still in the wrapper. I want to play them, I just don't have a lot of time. That's when it occurred to me that I could encourage myself to play them by creating an exercise: a month of video games.

The original idea was to play a different video game a day for an entire month. That would get me through most of my DS collection, including both games I've played before and those I haven't. I could then use the excuse of commenting on them here in my blog to complete the exercise.

Unfortunately, there was an immediate problem with this plan. Video games, even the simplest ones, take time to get used to. Quite frankly, even if I played for an hour a day, there are a number of games where I would never get involved enough, or comfortable enough with the controls, to do more than frustrate myself.

So that wasn't going to work. The next step was to think of similar things that I don't spend enough time with. The obvious answer was my collection of poetry books. I had recently rearranged the bookshelves and -- for lack of any better scheme, and as a change from my previous organization by school or genre -- I sorted the books alphabetically by author.

So the next plan was to read a book a day for a month, reading from one end of the shelf to the other. Since I have far too many books to read them all (that would be more like a year of poetry), I decided to limit it to a different poet each day.

But I still had the problem of time. Poetry books, like video games, take a while to get involved with. To be fair to a book of poems you need to familiarize yourself with the poet's voice (or voices), their style, what you could refer to as the ontology of their poetic world... But unlike video games, poetry books are made up of individual poems that are -- in most cases -- intended to stand on their own. They do not require learning a control scheme, a background story, or any other prerequisites.

Which led me to my final refinement: reading a single poem by a different poet each day for a month. I have no idea if this will result in any useful revelations for either myself or the readers of my blog. That is why it is an experiment. But succeed or fail, it should be interesting to find out what happens.


Thursday, July 17, 2008

The KM Core Sample

One of my favorite diagrams of the past year or so is what I call the "KM Core Sample". The Core Sample is not really an architectural diagram, since it shows no process or function that can be implemented. But I have found the diagram to be extremely useful in explaining why knowledge management is such a complex topic and where various KM methodologies "fit" within the strata of the knowledge universe.

(Diagram: the KM Core Sample)

The Core Sample is -- like its namesake -- a snapshot of a point in time. It captures the various levels of "knowledge" and where they reside. The diagram also illustrates the rationalization and codification of knowledge as it rises through the layers.

That last statement might sound like the description of a process: the codification of knowledge. But what I like about the diagram is that it shows that different types of knowledge reside in all levels at any given time.

This is because the process of codifying or standardizing knowledge into actionable procedures and practices actually changes the knowledge. It cleanses, sanitizes, and simplifies it: the stray tidbits, the ugly but necessary workarounds, the secret tricks of the trade... all of the untidy clutter that makes up true expertise in a field is stripped away to achieve a linear, documentable process.

But back to the diagram. Let's take a quick look at the various strata of the core sample:

  • Starting at the bottom, at the very core, are people. This is where true knowledge exists -- in other words, what people know. And the most accurate way of sharing that knowledge is talking to the people who possess it: asking questions, telling stories, cracking jokes.
  • The next layer up is where that personal communication is expanded to allow people to "talk" to others they do not know or cannot meet in person. Email distribution lists, forums, and other discussion technology reside in this layer. (Note that blogs are also in this layer.)
  • The next layer up represents "knowledge capture". Here the knowledge is instantiated in documents of some kind: sample documents, lessons learned, case studies, white papers. These all represent mechanisms used to selectively capture and sort knowledge in such a way that it can be reused by people who may never come in contact with the original author. The obvious limitation is that only a small portion of what any individual knows about their profession is captured in any of these documents. This is offset by trying to capture the most important or influential pieces of wisdom.
  • Finally, in the top layer the captured knowledge and learnings are further refined into a defined set of templates, guidelines, and standard processes. In some sense, you might say that in this final layer the actual "knowledge" has been removed and is replaced by step-by-step procedures to ensure a consistent and reliable execution of desired behavior. To achieve this goal, a significant amount of sorting, sifting, and selection is required to winnow down all possible options or alternatives to a limited set of recommended or required processes and deliverables.

What I like about the core sample diagram is that it helps you discuss the scope and effects of different approaches to knowledge management. Collaboration strategies focus on the tacit knowledge layer. Methods like knowledge harvesting, lessons learned, and storytelling focus on the best practices layer, while ITIL, Six Sigma, ISO 9001, and other standardization methodologies focus on establishing institutionalized knowledge.
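
To make that mapping concrete, here is the same set of strata sketched as a simple data structure. The layer names and methods are the ones discussed above; everything else (Python, the field names) is just a convenient notation for illustration:

```python
# The strata of the KM Core Sample, bottom to top, with the KM
# approaches from the discussion above that target each layer.
CORE_SAMPLE = [
    {"layer": "tacit knowledge",
     "resides_in": "people -- what they actually know",
     "approaches": ["collaboration strategies"]},
    {"layer": "conversation",
     "resides_in": "email lists, forums, blogs",
     "approaches": ["discussion technology"]},
    {"layer": "knowledge capture (best practices)",
     "resides_in": "sample documents, lessons learned, case studies, white papers",
     "approaches": ["knowledge harvesting", "lessons learned", "storytelling"]},
    {"layer": "institutionalized knowledge",
     "resides_in": "templates, guidelines, standard processes",
     "approaches": ["ITIL", "Six Sigma", "ISO 9001"]},
]

# Reading the list bottom to top mirrors the codification described
# earlier: knowledge becomes cleaner and more standardized as it rises.
for stratum in CORE_SAMPLE:
    print(f"{stratum['layer']:38} <- {', '.join(stratum['approaches'])}")
```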


Monday, July 14, 2008

Implementing Web 2.0 Inside the Castle Walls

All the buzz about Web 2.0 and Enterprise 2.0 is exciting and good for theoretical discussions and all, but how do you actually go about doing something about it?

In response to one of my previous posts zyxo commented that Enterprise 2.0 is not just Web 2.0 inside the firewall. True. It is certainly not just a matter of implementing technology. But it is also more than just thinking differently. It is acting differently and managing knowledge differently. And that change is impossible using traditional business applications built on old assumptions about security, ownership, and usage. So at some point you must tactically bring social software into the mix.

As I mentioned before, process is extremely important when you bring social software in-house. It is the process that needs to change or adapt for Web 2.0 to have any impact on the business. (Switching from SharePoint to wikis won't do any good if no one knows the wiki is there or cannot access it due to security restrictions.)

On a more tactical level, you need to understand what usage you expect and what you don't, so you can manage the technology and its content. You need to identify success criteria so you can tell whether or not you are succeeding in solving a problem. At the same time, you don't want to apply so much control that you squelch the inherent viral nature of the technology, which requires users trying and learning for themselves.

More importantly, you are operating in a microcosm -- the scope of your company's employees -- rather than the entire web universe. This significantly reduces the elemental power of many web 2.0 technologies and in some cases may make them totally ineffectual.

There are five ways of making the shift to web 2.0 technologies inside the firewall. (Actually, I have only seen three or four "in the wild", but there are additional options you might want to consider.) Needless to say, the most common option is not necessarily the most effective:

  • Build it and they will come -- this is the process-less option. Set up a web 2.0 technology inside the firewall and let people use it as they will. This is quite common with blogs, bookmarking, and wikis. The problem is, as mentioned before, there is no way of telling whether these technologies are succeeding at solving a business problem. A more inherent problem with this approach is that if your users are already using the same technology outside the firewall to manage their links, their friends, or whatever, why would they use an internal version and then have to maintain both? And if they aren't using the technology outside, what would drive them to use it inside? For new users there is no impetus to use the service, and for existing users there is a disincentive. The service tends to sit idle or be used by only a few enthusiasts.
  • Replicate what succeeds on the web -- otherwise known as "Wikipedia inside the firewall". If it worked outside, it should work inside as well, right? Well, not quite. Many times the first thing a company does with a wiki inside the firewall is try to create an internal Wikipedia. Why? What information do they expect to collect here that isn't readily available outside the firewall already? And do you have the enthusiasm for maintaining business-related content that the maintainers of Wikipedia have? Ditto blogs: companies follow the external model where anyone can have a blog, and many get started but few stay alive. Why? Because, quite frankly, there are usually several other, well established channels for sharing information within a corporation, and the blogs create an alternative, competing signal.
  • Define a process and pilot -- This is the traditional business approach: define what the technology should be used for, who should use it, and run a pilot to test it. The only problem here is that most web 2.0 technologies are dependent on a critical mass of users to be effective. Five people editing a wiki or three people blogging is not necessarily going to tell you much about the potential of these technologies. Also, because these technologies often offer new usage models (rather than computerizing existing processes) it is easy to miscalculate what processes will actually benefit from their application.
  • Establish a service and solicit trial cases -- This is a combination of #1 and #3. I have never seen this done, but it seems like a reasonable approach. Have IT establish an internal service, then ask the business groups to propose pilots (i.e. processes to apply the service to). This will have a better chance of exposing innovative applications of the technologies to business cases and would require the declaration of the business process that it is being applied to.
  • Extend existing services/processes using web 2.0 technology -- I have not personally seen this in use elsewhere (except where we are doing it ourselves) but most of the current success stories of web 2.0 inside the firewall -- such as IBM's Fringe -- involve extending or integrating existing services or applications with web 2.0 technology. Fringe adds tagging and rating of people that is integrated into an existing white pages application, as I understand it. This is possibly the most likely approach to succeed because the existing application provides an inherent process, an established audience and user base, and linkage to familiarize users with the new capabilities.

Linking web 2.0 technologies to existing systems has another benefit -- it justifies their existence. For example, social tagging inside the firewall vs. social tagging outside has little to recommend it, and a number of drawbacks. A smaller audience, less flexibility to grow and extend features, simply less exposure and name recognition than public services... On the other hand, tie that tagging to how the corporate intranet search works (automated favorites, improved relevance, best bets, etc.), and users will start to see the direct impact of their use of the internal service as well as having the service in front of them on a regular basis when they search.
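
To make that concrete, here is a minimal sketch of feeding tag data into search ranking. Everything here is invented for illustration -- the URLs, the tag store, and the boost weight are hypothetical, and a real intranet search integration would obviously be more involved:

```python
# Hypothetical tag store: document URL -> tags employees have applied to it.
TAGS = {
    "http://intranet/hr/travel-policy": ["travel", "expenses", "policy"],
    "http://intranet/it/vpn-setup": ["vpn", "remote", "howto"],
}

def tag_boost(url, query_terms):
    """Count how many query terms employees have used as tags on this document."""
    tags = set(TAGS.get(url, []))
    return sum(1 for term in query_terms if term in tags)

def rerank(results, query, weight=0.5):
    """Re-rank base search results, boosting documents whose tags match the
    query -- in effect, 'best bets' chosen by popular vote."""
    terms = query.lower().split()
    boosted = [(url, score + weight * tag_boost(url, terms)) for url, score in results]
    return sorted(boosted, key=lambda pair: pair[1], reverse=True)

# A document tagged "expenses" rises above a slightly higher-scoring untagged one.
base = [("http://intranet/finance/old-report", 1.2),
        ("http://intranet/hr/travel-policy", 1.0)]
print(rerank(base, "expenses"))
```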

Wednesday, July 9, 2008

Understanding Technology Adoption From the Customer's Perspective

Much has been written about the adoption of technology from an industry perspective. Clay Christensen in The Innovator's Dilemma, Geoffrey Moore in Crossing the Chasm, and Malcolm Gladwell in The Tipping Point all articulate models for the adoption (or lack thereof) of technologies based on their position in the product lifecycle.

However, as interesting as these models are, they provide little solace to the individual customer who is trying to decide whether to purchase and rollout a specific technology for his or her own business. All of the preceding authors discuss technology at a macro level: its adoption by the market in terms of volume of customers. But for each customer, there is a second, more important adoption that occurs after the purchase: the rollout and, hopefully, successful integration of the technology into their specific business processes.

The problem is that no matter how "successful" a product is in the market, there is no guarantee it will actually prove effective when applied to a specific business situation. SAP may be the poster child for this syndrome: several large-scale implementations are rumored to have proven unusable and ultimately to have been abandoned.

So, what does determine if a technology can successfully be incorporated into an existing business environment? The answer is not related to the technology's current marketing position or "disruptiveness" -- although that will impact the outcome. The real attributes that influence the success or failure of technology rollout in an individual business are all related to the business itself: its culture, its environment, and its history.

Traditional Technology Rollout Plans

Any corporate technology plan worth its salt includes an adoption chart showing the expected rollout over time. These charts fall into two categories: the "s" curve and the stairstep.



The "s" curve shows a slow but steady adoption shifting to a steep climb flattening out at a plateau, usually marked by 100% of the target audience. This model follows the "chasm" or "tipping point" theory where at some time enough early adopters are using the technology that word of mouth takes effect and rollout becomes self-realizing. Adoption ramps up until success is achieved. (Here is an example.)

The stairstep is a more phased approach and assumes adoption based on the ability to train users. The steps in the chart are usually based on incrementally adding divisions or projects as the technology is rolled out progressively through the corporation.
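
For the curious, both shapes are easy to generate. The "s" curve is conventionally modeled as a logistic function, and the stairstep simply adds a block of trained users as each phase completes. Here is a minimal sketch; all of the parameter values are arbitrary, chosen only to make the shapes visible:

```python
import math

def s_curve(t, total=100.0, rate=0.8, midpoint=10.0):
    """Logistic adoption: slow start, steep climb, plateau at `total`
    (e.g. 100% of the target audience)."""
    return total / (1.0 + math.exp(-rate * (t - midpoint)))

def stairstep(t, users_per_phase=25.0, phase_length=5.0):
    """Phased adoption: a block of users (a division or project) is
    added as each training phase completes, capped at 100%."""
    return min(100.0, users_per_phase * (int(t // phase_length) + 1))

for week in range(0, 21, 5):
    print(f"week {week:2d}: s-curve {s_curve(week):5.1f}%  stairstep {stairstep(week):5.1f}%")
```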

Neither chart takes into consideration that employees may choose not to use the new technology or may actively resist using it. And as much as we would like to think it doesn't happen, these are the real reasons technologies fail. There may be technical problems. There may be bugs and system failures. But ultimately what determines any technology's success or failure is whether the target employees agree to use it or not.

Understanding the Technology Adoption Curves

Whereas adoption is traditionally seen as a single curve, there are actually three equally important variables that need to be considered:

  • Adoption rate
  • Resistance rate
  • Misuse and abuse

Therefore, the real adoption might look something like the following diagram:



Adoption is the number of employees actively using the technology. Resistance is the opposite of adoption; it is the number of employees who refuse to use the technology or actively complain about it to their friends and colleagues. Misuse is the number of users who are using the technology, but in ways it was not intended (and usually for activities that should not be encouraged).

Real adoption rates are more erratic and event driven than the theoretical s-curve or stairstep. There is usually a series of "bumps" with each announcement or management memo concerning the new product. However, usage then drops off after the bump. Why? Because unless the users see a direct impact on their own jobs, there is no incentive for them to keep using the new technology (beyond management dictate). So with each memo more people will try it; some will stick with it, but others will stop.
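
You can caricature this bump-and-drop-off pattern with a toy model: each memo pushes a batch of employees to try the tool, a fraction stick with it, and the rest lapse over the following weeks. All of the numbers below are invented purely for illustration:

```python
# Toy model of event-driven adoption: memos cause spikes, most of the
# spike decays away, and only the "steady" users accumulate.
MEMO_WEEKS = {0: 40, 8: 30, 16: 20}  # week -> employees who try the tool after each memo
STICK_RATE = 0.3                     # fraction of triers who see a direct impact and stay
DECAY = 0.6                          # weekly retention among those who will eventually lapse

steady, lapsing = 0.0, 0.0
for week in range(24):
    if week in MEMO_WEEKS:
        steady += STICK_RATE * MEMO_WEEKS[week]
        lapsing += (1 - STICK_RATE) * MEMO_WEEKS[week]
    lapsing *= DECAY  # usage drops off after each bump
    print(f"week {week:2d}: active users ~ {steady + lapsing:5.1f}")
```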

Resistance is difficult to measure, but has a real and significant impact on adoption. If users find the technology objectionable, too hard to use, or simply burdensome, they will avoid it, work around it, or use it grudgingly (and often badly). Resistance will tend to exaggerate the spikes and can often lead to a drop off of usage over time.

Misuse is the hardest to account for, but is again a serious problem. The classic example of misuse is email: many users in large corporations use email as an archiving tool -- emailing themselves documents as a way of saving them (rather than leaving them on their PC and risking their loss). The result is quick saturation of the mail storage system with little or no way to sort out the "misused" mails from real business correspondence.

Understanding and Accounting for Resistance

It would seem that resistance to a technology is solely a reaction to the usability or applicability of the technology to the function it performs, but that is not the case. People can reject technical solutions for a number of reasons. Yes, if it is difficult to use or hard to understand, resistance will be higher. But it also depends on whether there is already a solution in place.

Replacing an existing tool can be more difficult than instituting a new one. Even if the existing processes are outdated or overly complex, employees can be resistant to replacing the known with something new. And it doesn't have to be one technology for another. There can be resistance to implementing a technical solution even for a manual process, if employees see the manual process as "working". In other words, unless the employees themselves see a problem, they are not likely to appreciate or accept the solution.

This is particularly problematic when replacing multiple point solutions with a single corporate-wide technology. Each of the existing tools will have advocates who will adamantly argue the merits of their own solution over the proposed replacement. And, quite frankly, in many cases their arguments are not entirely baseless. Each division may have instituted a point solution tuned to its needs, and a corporate-wide solution is likely to result in some loss of functionality. Even if the overall outcome is better for the company, these divisions will see it as a step backwards for their own purposes.

So resistance is actually the result of a number of factors:

  • Corporate culture: how accepting the organization is of change and technology in particular
  • Current environment: whether there is an existing solution (or solutions) in place that is being replaced
  • History: whether past rollouts have gone well or badly will heavily influence the receptiveness to further change

Clearly, when the technology you are implementing provides a unique and obvious advantage to the business and to those who must use the technology, then resistance will be low. But that combination of variables is rare. In most cases it is useful to take resistance into consideration when planning your rollout to lessen its impact.

Usually resistance can be overcome with sufficient management support. The implicit threat of losing one's job for not following through on a management dictate can help drive adoption. But at the same time, it will foster additional resistance as well. So if you take this approach, you better be sure you have the necessary management support -- and not just verbal support -- to address any complaints that arise about overly aggressive deadlines, time lost to training, missing or faulty features, etc.

Getting sufficient management attention for an extended period of time is not always possible, so the other option is to try to avoid resistance by not exerting too much pressure for adoption. In other words, using the carrot instead of the stick. Obviously, the tradeoff with this technique (i.e. not demanding strict adoption or applying management pressure) is that adoption will be significantly slower. On the plus side, done well, the adoption will be slow but steady, as there will be less resistance. But if there are strong advocates for alternate solutions, even this approach is likely to fail.

With either approach, there is likely to be at least some resistance, and the best policy is to preemptively counteract it. How? Preferably before rollout, or as early as possible during it, identify the most likely sources of resistance: alternative solutions, processes that will be affected by the change, and so on. Then identify the primary advocates of the alternatives, or the most reputable critics of the change. Finally, approach these people personally. Explain the plans for rollout and the rationale, and ask them what significant issues they foresee in adoption.

The goal is to persuade these key individuals that the plans take their concerns into consideration. To succeed, you may need to actually change the rollout plans or modify the technology somewhat (which is why doing this before rollout begins is preferable). The only way to convince them that their concerns are being taken seriously is to take them seriously.

Note, I did not say find the loudest or the harshest critics. The key is to find the most respected, dedicated, and sincere advocates. Loud critics can make your life a pain, but they can be overcome -- or at least counteracted -- by reasonable, respected people. You want to find the gurus, the experts people turn to for help. These are the people you want to convince.

Note that I also did not say to convince them that the planned rollout is the best option. Be realistic. You will not be able to convince everyone that your plans are the right solution. The goal is to get them to recognize that the plan is at least well thought out and that their alternatives have been considered, even if rejected. They may not turn around and advocate for the rollout, but at least they will not argue against it, and they are likely to stand as a voice of reason during any confusion that arises along the way.

Understanding and Accounting for Misuse

Misuse is different from resistance. Whereas resistance results in a downturn in adoption, misuse will give the impression that adoption is progressing well, because it involves active use. The problem is that the use runs counter to the original intent and may well interfere with the ultimate business goal.

Misuse can be very hard to identify and sometimes even harder to stop once begun. Like resistance, the key is to try and predict where it will occur and then (if it is serious enough) design around it, rather than trying to clean up after it becomes epidemic. But unlike resistance, where you can often guess where opposition will come from, it is difficult to predict in advance all the possible misuses of a system.

Take SharePoint, for example. SharePoint is a very useful tool in some ways -- it mixes the best of automated web site design, document management, and Windows-based security. But it doesn't do any of them in any great depth. It provides the easy creation of sites and subsites, libraries and lists.

But if you allow users to readily create these repositories (which can be a very efficient way to manage artifacts -- especially for smaller projects or teams -- without requiring a "librarian") you are also making those individuals responsible for the appropriate use and maintenance of those sites. Unfortunately, as eager as people are to create repositories, it is very hard to get them to do proper maintenance and delete them or clean them up periodically.

So two possible misuses of SharePoint are creating too many sites and not removing "dead" sites when their usefulness is over. (This is above and beyond the usual misuse issues such as storing and sharing inappropriate materials: copyrighted music, videos, etc.)

The use of disk quotas gives the appearance of alleviating these problems, since it stops runaway inflation of individual sites. But it doesn't actually stop the misuse. People can just create more sites if they can't increase the volume of the ones they already have. Also, disk quotas do not address the problem of undeleted "dead" sites. Restricting the number of sites any one user can create is another deterrent to creating too many sites, but involves an arbitrary limit (how many sites is "too many"?) and can result in animosity from your user population.

One alternative, if you suspect this lack of cleanup will be prevalent, would be to institute a policy of deleting sites that become inactive for a set period of time. Note that to make this practical, you will need to enhance the application itself to identify and automate this procedure.
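
Here is a sketch of what such a policy might look like. The Site record and the notify/archive helpers are hypothetical stand-ins for whatever your platform actually provides (this is deliberately not the SharePoint API); the point is the warn-then-delete policy, not the plumbing:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

INACTIVITY_LIMIT = timedelta(days=60)  # sites untouched this long are candidates
GRACE_PERIOD = timedelta(days=14)      # time for the owner to object after the warning

@dataclass
class Site:
    url: str
    last_access: datetime
    owner_notified: Optional[datetime] = None

def notify_owner(site):
    print(f"warning owner: {site.url} is inactive and scheduled for removal")

def archive_and_delete(site):
    print(f"archiving {site.url} to offline storage, then deleting the site")

def sweep_dead_sites(sites, now):
    for site in sites:
        if now - site.last_access < INACTIVITY_LIMIT:
            continue                       # still active; leave it alone
        if site.owner_notified is None:
            notify_owner(site)             # first pass: warn, don't delete
            site.owner_notified = now
        elif now - site.owner_notified > GRACE_PERIOD:
            archive_and_delete(site)       # second pass: archive, then remove

# Example run: one active site, one long-dead site.
now = datetime(2008, 7, 9)
sites = [Site("http://intranet/projects/apollo", now - timedelta(days=5)),
         Site("http://intranet/projects/zeus", now - timedelta(days=90))]
sweep_dead_sites(sites, now)                        # warns the dead site's owner
sweep_dead_sites(sites, now + timedelta(days=15))   # past the grace period: deletes it
```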

Users will complain that the content in inactive sites (for example, sites no one has accessed for more than 60 days) is still needed. But unless SharePoint is also your archiving tool (a really bad idea, by the way), storing old content offline can easily be addressed with alternative, less expensive solutions.

The key is to predict what forms of misuse are most likely to occur based on the nature of the business, proclivities of the users, and any gaps or open capabilities in the technologies and processes being rolled out. This may require some imaginative thinking. More importantly, once the danger areas are identified, there may need to be changes or additions to the technologies themselves to ensure the desired processes are followed and negative alternative uses are avoided.

Note that you don't want to eliminate all alternatives, since users are likely to discover creative and effective business uses for the technology that were never planned. But this is another reason why it is a good idea to monitor the rollout periodically (every 6 months or so) to see what sort of uses are developing. This allows you to catch both misuses you hadn't thought of but need to account for as well as creative new uses that you may want to acknowledge and promote throughout the user community.