Saturday, November 14, 2009

Approaches to Sustainability: Design to Zero


[Continued from Approaches to Sustainable KM: Embedded KM]


Another approach to sustainable KM is "design to zero".

It's impossible to start a business initiative with no resources. Even if it is only your own time and attention, there is some expenditure required. And usually there is a lot more than just that.

Any new project requires a "bump" to get it started. This might include training, hardware and software expenditures, project management, etc. Any number of capital or resource costs are needed to get things going.

The problem is that budget planning often only accounts for the short term (2, 3 or perhaps 5 years at most). For projects with a defined endpoint, this is OK. However, almost all KM projects are intended to run indefinitely. (It doesn't make sense to stop sharing knowledge after 3 years, does it?)

This means that, although it may not show up on the plan of record, the KM program must account for the ongoing maintenance of long-term projects. If you load up your KM program with management of ongoing initiatives, there are two negative consequences:

  • If budgets and headcount are cut (or worse, eliminated), you have no choice but to abandon one or more of the initiatives, usually bringing the program to a screeching halt. Without the expected leadership and constant "push", non-sustainable programs fail when the budget stops.
  • Even if budgets stay the same, after a while, you have no spare resources to start new programs. Even if a good idea comes along (such as Enterprise 2.0) your KM team is fully booked and you do not have sufficient resources to start anything new without impacting existing programs.

How do you avoid this dilemma? The key is to design each project -- from the beginning -- to reach zero cost.

That doesn't mean management goes to zero or that residual effort goes to zero, but that the program is designed to become self-supporting at a set point in time.

So rather than planning for the first year "bump" and letting maintenance trail on indefinitely, plan from the beginning that management of the program and responsibility for its ongoing success will be transferred to the appropriate people within the organization. Sometimes this means the project may need a bigger expense up front (as shown in the graph below). But the benefit is that the project then becomes self-sustaining and the KM team can move on to tackle other tasks.


Going back to our example of embedded KM, it is not sufficient to have the idea to invite architects from other disciplines to the project reviews. You need to make sure that the organization running the reviews understands that their success depends on this outside participation and, therefore, they are responsible for making sure it continues once the program is off the ground.

As with any sustainability practice, not all projects are suited for design to zero. But far more projects are than you might expect. The trick is to look for the part of the organization (usually lines of business) that will benefit most from the effort. Engage them early in the planning, so they feel responsible for its success. Then get them to commit to ongoing management as part of their regular business cycle.

[To be continued]

Tuesday, November 3, 2009

What I'm Playing: Professor Layton and the Curious Village

Last night I finished Professor Layton and the Curious Village, the puzzle/mystery game for the Nintendo DS. I know I'm a little late -- the game's been out for more than a year now and there is a new Professor Layton game that's already been released. But I will not be playing the second game.


Why not? Because the Curious Village is surprisingly boring. You'd think it was right up my alley. It has puzzles (I like puzzles). It has a mystery (I like mysteries). You can save at any time (I have limited free time so being able to play in short bursts -- which this game is ideally suited for -- is essential for me). And when I saw the original trailer, I loved the art style and the animation.

But I didn't enjoy it. To start with... Hey! Where'd the animation go? Except for the opening cinematics (and the final scene) there is almost no animation. The story -- what there is of one -- is told in a slide show of static images and printed text. And there aren't very many of these either, since the village is pretty small. Get ready to see the same places over and over again.

Then there is the mystery, which is really no mystery at all. About a quarter of the way through the game, the primary "mystery" of the village becomes self-evident. At that point, the game becomes an exercise in slogging through the puzzles and waiting for new areas to open up.

Which brings us to the puzzles. I like puzzles, I really do. But the worst problem with Professor Layton is that I don't find its puzzles satisfying. These puzzles are not mental exercises; they are more what I would call "trick" puzzles, similar to the "move two match sticks to form a different picture" variety. (In fact, that specific type of puzzle shows up several times.)

Now you might say that these types of puzzles teach out-of-the-box thinking, where you need to look at the question in a new way to recognize the answer. However, in many cases, you either see the trick or you don't. If you don't, then the puzzle is simply a frustration. If you do, you quickly answer it and move on, without learning much or feeling any great sense of achievement. This is especially true the third or fourth time you have to answer the same type of puzzle.

Quite frankly I find the puzzles in the "educational" titles Brain Age and Big Brain Academy far more animated, enlightening, and ultimately more fun than dealing with the professor and his mysterious village.

So why, you might ask, did I finish it? Well, to tell the truth, I wanted to prove that my guess as to the answer to the mystery was correct (which it was). And, in fact, the last few puzzles in the game really ramped up the difficulty and required some serious brain power to solve -- and they were consequently more satisfying to master.

But it was far too little too late to make up for the general tedium of the game. It's a shame. I really like puzzle games and was hoping this one would live up to the hype. But I guess I'll have to keep looking...

Friday, September 25, 2009

(Silence)

Ah! Silence, a blessing... or a curse?

I realize Incredibly Dull has been quiescent for the last two months. No, I was not hit by a bus, run over by a train, nor did I run out of things to say. What did happen was I started a new job which has been occupying my time and attention (in a good way).

However, now that I am in the groove, I expect to get back to posting blog entries, particularly on sustainable knowledge management, games, and poetry & the other arts. You can consider that a promise (or a threat, depending upon your point of view). But for now, back to our regularly scheduled program...

Friday, July 10, 2009

Approaches to Sustainability: Embedded KM


[Continued from Sustainable KM: Principles & Approaches]



The first approach to sustainable KM is fairly obvious: embedded KM. This is where you embed knowledge management practices into the existing business processes. Some of the benefits to embedded KM are:

  • Rapid adoption
  • Tied to business metrics

An added advantage is that -- if done properly -- there is little or no need for training since there is little or no change to behavior. If you place the knowledge management procedures in the existing operational processes, they will be completed by people as they perform their day-to-day work.

This is best explained by example:

  • Say your business process already includes reviews at key milestones. It is likely that certain documents are required for the review: project overview, costs, schedule, etc. There may already be a template for these documents.
  • If one of your KM goals is to make employees more aware of other projects to increase shared resources and reduce overlap, an easy way to do this would be to simply collect the project overviews from each review to create a project catalog.

First, let's look at an embedded KM solution with little or no change to the process. Since the documents are already being created and the review is an existing milestone in the process, the only work needed to implement the KM catalog is creating the repository (a one-time event) and influencing the review managers to post the appropriate documents in it. Working with the managers responsible for the reviews, it should not be a major hurdle to get this slight modification to the standard process in place.

However, collecting information is only part of the solution. To have any impact at all, the information has to be used. And, quite frankly, a directory full of unsorted documents is neither particularly useful nor appealing to anyone. So to complete the circle there are at least two additional steps:

  • Adding steps to the project lifecycle to remind team members to use the repository. (Again, this is best done by tweaking existing process steps or milestones, such as the steps for starting the requirements gathering, design, and/or implementation stages.)
  • Making the repository easy to use.

The latter step sounds deceptively simple, but is really the crux of where KM projects go astray from a sustainability perspective. For the repository to be useful, it needs to be searchable/sortable in some meaningful way. The primary way to do this is to provide metadata about the documents: classifications such as industry, country, client name, etc.

Rarely is the interesting information clearly or consistently labeled in business documents. So the only way to add the metadata effectively is to either:

  • Require those contributing to the repository to fill out the metadata for each item
  • Modify the templates used to include specific fields for the metadata (and get the authors to use them correctly)

Suddenly our simple update to an existing process has become far more complex. Getting people to fill out input forms including metadata is extremely hard; they are either put off by the form and so resist contributing or they fill out the form with incomplete or misleading information. The same problem occurs when templates contain embedded fields. Making sure everyone uses the new templates and uses them correctly is a significant training and communication task, besides the effort required to maintain and periodically update the templates. Finally, the only way to make sure the forms or templates are being filled out correctly is to monitor the submissions to make sure they are complete and meaningful.
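That last monitoring step is exactly the kind of ongoing effort that can be partially automated. As a rough sketch only (the field names and placeholder values here are purely illustrative, not drawn from any particular system), a submission check might look like this:

```python
# Minimal sketch of automated metadata checking for repository submissions.
# Field names (industry, country, client_name) are hypothetical examples.

REQUIRED_FIELDS = ["industry", "country", "client_name"]
PLACEHOLDERS = {"n/a", "tbd", "unknown"}  # common "junk" values contributors enter

def validate_submission(metadata: dict) -> list:
    """Return a list of problems found; an empty list means the submission passes."""
    problems = []
    for field in REQUIRED_FIELDS:
        value = str(metadata.get(field, "")).strip()
        if not value:
            problems.append(f"missing or empty field: {field}")
        elif value.lower() in PLACEHOLDERS:
            problems.append(f"placeholder value in field: {field}")
    return problems

if __name__ == "__main__":
    good = {"industry": "Finance", "country": "US", "client_name": "Acme"}
    bad = {"industry": "TBD", "country": ""}
    print(validate_submission(good))  # []
    print(validate_submission(bad))   # three problems reported
```

Even a simple filter like this only flags incomplete submissions; someone still has to chase down the contributors, which is precisely the maintenance burden described above.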

As you can see, a simple "improvement" to the system quickly burgeons into extended activities required to maintain and support the process. Conceptually, you can view a "capture & reuse" program such as the one described as being positioned somewhere on a curve from no modification to optimized for reuse.

When there is little or no modification to the content, very little management needs to be applied and the process is highly sustainable. Unfortunately, this also leaves the work of providing relevance up to the users of the content, which is often too much to make the effort worthwhile. (In other words, the system may go unused and have little value.)

To make the information usable, it needs to be modified to provide relevance. However, doing this requires changes to the process by which it is captured, adding new challenges in communicating the new process, training the users, and encouraging submissions. The content becomes more usable, but significant, ongoing effort is required to enforce and monitor the submissions, since the effort is now put on the contributors rather than the end users. In other words, an unsustainable process.

The trick is to find the middle ground (indicated by the blue box in the following diagram), where the content is usable enough and the contribution process simple enough that people can and will manage their own use of the system. Experience teaches us that this "sweet spot" can be an extremely narrow segment of the curve and very hard to hit.



Now, I must confess. I cheated in my previous example. Creating a project catalog from existing documents is often the first instinct. But as I point out, processes managing this sort of explicit knowledge quickly evolve into complex, unsustainable programs from only minor "improvements".

So let's consider an alternative. Rather than trying to make all project knowledge available to anyone, what if we simply try to expand the current knowledge base incrementally over time? Rather than collecting the review documents, why not include at least one reviewer from an unrelated project in each review? This could be an architect, implementer, or project manager, as long as that person can provide an objective, outside view of the project's progress.

This approach has numerous benefits, but two in particular are related to our example:

  • First, from a project management perspective, the outside reviewer helps to keep the project team "honest". It is easy for internal reviews to become formulaic rubber-stamp events if those involved are all working on the project. They do not have enough distance to see hidden pitfalls and will resist calling foul on people they have to work with on a daily basis.
  • Second, from a KM perspective, including outsiders gives at least one person a much more in-depth and personal knowledge than could ever be gained by reading a set of historical documents with no one to explain them. Another value from a KM perspective is the opportunity the reviewer and the project team have to exchange knowledge, hints, and tips on the fly and in the context of the discussion.

The outside reviewer will take this knowledge back to their own project where it may or may not be used immediately. But it will stick with that person for a long, long time due to their intimate interaction with the other team.

What is required to make this program work? Initiating the program may be difficult because it requires diplomacy. On the other hand, it involves only a limited number of people. What is needed is to convince management (either of the project teams or of the review process itself) of the efficacy of including outside reviewers. Although there are KM benefits, the real advantage is that the proposed process has management benefits, as described, and will make the reviews more meaningful.

Once you succeed at convincing them to make the modification to the review process, the program then becomes essentially self-managing from a KM perspective. The project management teams are responsible for ensuring outside reviewers are included and with each review, little by little, knowledge is shared across the organization.

Again, there are a number of details I have left out that will impact the efficacy of such a program. How outside reviewers are selected and/or rewarded for their efforts can significantly impact the extent to which knowledge spreads as well as the reviewers' willingness to participate. But all of these factors can be addressed either up front or at a periodic (annual?) review of the program, with little impact to the individuals who carry out the plan. The program will continue to expand the knowledge base over time with little or no input from the KM team. The KM team can move on to addressing other issues without being tied up in maintaining the review process, in the best sense of sustainable KM.


Thursday, June 18, 2009

The Art of Managing Knowledge Management Programs

I recently gave a presentation on adaptive knowledge architectures (slides and audio). The presentation was more a case study than anything else. I ended with two slides of lessons learned -- what I would do differently in hindsight to avoid the difficulties we encountered.

The original slides (and afterthought) are more conceptual than practical. For example, my primary insight was "beware of success". What I meant by that was that, if you succeed, others will not only want to jump on the bandwagon, they will try to take control and alter both the goals and practices to match their needs rather than the original principles.

There were two things I left out of my slides. One I intentionally left out because it is a more generalized issue about strategy, KM or otherwise. The other I simply forgot until people started asking me questions. They are both practical considerations when managing knowledge management programs.

The general rule is: communicate continually and repeatedly.

I must have presented the original architecture (the 3-tier model) 50-60 times when we started. And my boss at the time did as well. It even appeared in a case study written by Microsoft.

I would then include the 3-tier architecture diagram at the beginning of every KM presentation I gave -- explaining how the new features or changes fit into the architecture. Inevitably someone would ask about the diagram as if they had never seen it. Even people I know I had presented it to within the previous year.

Richard Saul Wurman makes the point in one of his books that you only learn what you are ready to learn. This is particularly true of strategies and architectures. You have to repeat it over and over again -- until you are bored with it! Because there will be people who haven't absorbed it yet.

I think this applies particularly when you move into Enterprise 2.0 and web 2.0 where there is a seismic shift of intent and responsibilities. Managers just don't get it. They say they do, but they don't. They are hearing part of it (the rapid adoption part) but not the self-managed part. They think they can pick and choose the attributes without damaging the system. Sorry, it doesn't work that way. In the case study I used for my adaptive architecture presentation, that is exactly what they did to our communities.

Which brings me to my second point (the one I had forgotten). The reason they took over our communities was not so much that we had succeeded at KM, but that we had succeeded at content management. At the beginning of my presentation I mentioned that the 3-tier diagram includes the top tier (the intranet) so I could dismiss it and say it is not part of the KM environment.

What I didn't count on was the fact that we built an infrastructure (based on SharePoint) that was so much easier to use and manage than what was being used for the intranet, management would want it for their "portals".

Essentially, they were responsible for the last branch of the intranet hierarchy and it took weeks (and several employees) to get content posted. Ironically, I had worked with the organization's IT team trying to sell them on using SharePoint to manage their intranet sites (separate from our KM infrastructure), but it was rejected. (I won't go into that here, but that was an entertaining episode in its own right.)

But what happened was management saw that our "communities" had all the attributes of their portals but little of the pain. So they jumped on it.

So what would I do differently? I wouldn't ignore the top layer. I would set aside part of the infrastructure just for them. Even though it isn't really KM, I would do it to create a manageable buffer zone between their activities and our KM processes.

How does this apply to Enterprise 2.0? I think the same thing will happen to people trying to implement web 2.0 internally. Part of the attraction of social software is its ease of use. And if the technology catches on, people will jump on it for purposes not intended by the software vendors or the sponsors. And once they do, they will try to apply their traditional "management" thinking to it.

Two small examples I saw at a large corporation:

  • The internal corporate "wikipedia" included pages describing, among other things, each of the organizations: what they did, who ran them, their relationships. Someone discovered this and complained that the entries did not match the description on the managed intranet pages. It was not a suggestion or a demand. It was simply stated as what they saw as a fact: that the wiki pages should be made to match the intranet pages. In other words, selected content should be controlled.
  • On our social networking site we got management to promote the site to our organization. It was then pointed out that the managers themselves should create profiles. Several did. But upper management insisted that their executive assistants draft the content and that we should have a way to post these "ghosted" profiles (completely in opposition to the basic model and implementation of the application). We did change the software to permit this, but as innocuous as this sounds it does create a philosophical schism in the application: who gets to have "ghosted" profiles? How can people tell real from edited profiles? etc.

These may seem trivial, but our apps were still in birth mode, without widespread adoption. So these were just the first signs of the problem.

The question is: what would I do about it? Like I said in the presentation, I don't think there is a generic answer. It depends on the corporate culture and, I'm afraid, the individuals involved. You can try to second-guess the culture. So, for example, in hindsight I would have created spaces for the HQ intranet sites to try to alleviate the pressure on the communities. Would that have worked? Possibly. But it is just as likely that they would then argue that the communities are unnecessary since their portals provide all the information consultants need. (Which, in fact, they did when they argued that we should dissolve all of the communities that they didn't control. But we managed to stop that effort...)

In the case of Enterprise 2.0, I would have suggested creating one or more executive wikis, secured for use by managers of the individual organizations. This may have satisfied their need for secrecy, ease of use, and taught them a little bit about the operating principles of web 2.0. Would that have sufficiently distracted them from messing with the employee wiki (which was the real goal)? Perhaps, perhaps not. But it would be worth a try.

Another recommendation would be to have a very clear business objective for E2.0. For example, internal development blogs for each project/product. You can allow other uses of blogs, but by having a clear, measurable, but not ROI-based, objective and repeatedly stating it (i.e. constant communication as stated above) you may be able to deflect people trying to commandeer the program.

I could go on, but I better stop before I end up writing War & Peace...


[Many thanks to Steve Ardire for inspiring this post and Stan Garfield for teaching me much of what I know about managing KM programs.]

Sunday, June 7, 2009

Bing Bang Boom

I've seen it. I've tried it. I'm bored.

OK. That's not entirely fair. All the hoopla around the emergence of Microsoft's new search engine Bing has made me testy.

Bing isn't all that bad as a search engine. There's nothing particularly new here (except the name) and lots of copy cat behavior. Overall it is an improvement over its predecessor, Live Search. But why all the ruckus?

Why? Because Microsoft is out to "win". All their business strategies focus on displacing the current industry leader and taking command of the market so they can then use that position to promote (or as they like to say "integrate") all their other products. Oh yes, there's the usual nod to improving the user experience and enhancing productivity. But the ultimate goal is market dominance.

And they are willing to spend the money to do it. Ten million dollars, purportedly. One might say "what's so wrong with that? This is a free market economy isn't it?" Yes it is. On the other hand, I don't know if Seth Godin was thinking about Microsoft as he was writing it, but his blog entry strikes me as very apropos when he says "you're boring." As Seth put it, the "half-price sale on attention is now over."

I'm not as sanguine as Seth. I think there is still a lot of attention that can be bought. And Microsoft has done it over and over again. Internet Explorer, Office, even Windows itself. Why do you think they redesign the logos of their products for every version? And the user interface? To make them look new. To give the consumer (particularly the corporate consumer) a reason for upgrading.

I am tired of Microsoft buying their way into the market with mediocre, me-too products. What's annoying is that their products aren't that bad. Windows has grown up into quite a reasonable OS. And Office has most of the features any normal human being could want. Unfortunately, it also has bucket loads of features that 90% of humanity will never need and that get in the way of finding the useful ones, simply as part of the one-upmanship of product sequels.

And now we have Bing. What is really annoying about Bing is that it might be a good search engine. I'm not sure. A competitor to Google? Unlikely, but possible. But I am so sick of Microsoft's aggressive business practices (usually at the expense of the user), that I am soured to everything it does and Bing suffers for it.

But I did try Bing. And it's OK.

  • The top horizontal function menu is borrowed wholesale from Google, as is the stripped down functional layout.
  • The design does have a nice, clean visual feel.
  • There's been a lot of touting of the popup excerpts for search results. But haven't we had abstracts since AltaVista 14 years ago?
  • The Related Searches sidebar is nice. But not nice enough to make me replace my current favorite search engine.

And that is where Microsoft has a problem. They sell plenty of software in the corporate world. But internet search is a personal choice. And a fickle one at that. They will be able to buy a certain amount of attention with advertising, but ultimately they need a significant change in functionality to make people change their ways. And I don't see it in Bing.

So, failing to win technically, they now want to win by subterfuge. Bing touts itself as something new, a "decision engine". Excuse me? What decisions is it making? Even if I liked Bing enough to try it, this hyperbolic nonsense is enough to make me want it to fail simply to spite Microsoft's incessant marketing machine.

Which is a shame. Bing is a decent search engine. I feel sorry for the engineers who have put their time into it because, ultimately, its success or failure will have little to do with their efforts compared to the animosity and confusion Microsoft's business practices generate in the market.



Monday, June 1, 2009

I am Tired of Killing Things

I love playing video games. I like the technology, I like the imaginative environments, I like the gameplay, the challenges, the characters, and the music. I particularly enjoy the childish glee I get as I conquer some meaningless virtual hurdle, clear a level, earn a star, or whatnot. But I am getting tired of killing things.

This is not a polemic against violence in video games, per se. I enjoy fighting games as much as the next person. From the realistic (Call of Duty) to the cartoonish (Smash Brothers), from the horrifying (Resident Evil) to the hilarious (Ape Escape), from the fantastic (Star Wars) to the funny (Lego Star Wars). But at some point there have to be other modes of play.

What brought on this fit was hearing all the pre-show hype and rumor around this week's E3 exhibition. Oh, there will be plenty of non-violent news and entertainment (the usual passel of racing games and mini game collections aimed at "families") but the big bucks go to the third or fourth iteration of innumerable kill-everything-and-save-the-world games. I'm talking about Nier, Assassin's Creed 2, God of War 3, Tekken 6, and Final Fantasy I've Lost Count. At some point I don't need the blood any more realistic or the hits any more spectacular. The game play is the same.

Now, I know half of the people reading this (the gamers) are going to dismiss it as the whining complaints of an ignorant old crank. The other half (non-gamers or ex-gamers) are likely to latch on to it as a global invective against fighting games. It is neither of those. It is simply an expression of frustration at the lack of innovation in game play at the highest levels.

Each fighting game has its nuance, its (hopefully) unique take on the genre. There are the stealth games, the strategy games, the collaborative games, the gruesome and the garish games. But there are ultimately only so many flavors of kill and games become boring when they are repetitive -- no matter how flashy or colorful the explosions.

But, of course, there is hope. And, no, it is not just adding motion detection or making me wear a telekinetic headset. It comes from invention. Titles like last year's Little Big Planet demonstrate that there is plenty of room left to create enthralling games without more killing. This year, Mini Ninjas, even while continuing the fighting model, seems to inject enough humor, story, and imaginative objects into the game to create a uniquely enjoyable experience.

At least from the trailers. And that's all we have to go on so far.

Thursday, April 30, 2009

Sustainable KM: Principles & Approaches


[Continued from The Challenges, Part 4]

The Principles of Sustainable KM

So to summarize, the basic principles of sustainable KM are:

  • Do not make KM extra work. Embed it in existing business processes.
  • Avoid "Change Management". Let change manage itself.
  • Design for humans, not data.
  • Pay attention to the people, not the policies.
  • Eliminate the opposition.

The Approaches to Sustainable KM

Now that we have characterized some of the basic principles of sustainable KM, we can look at ways of achieving those goals. The following are four practical approaches to achieving sustainable KM. This is not an exhaustive list in any way, but is intended as a starting point. Each approach has its advantages and disadvantages, which I will go into when I describe each approach in detail.

The approaches to sustainable KM are:

  • Embedded KM
  • Design to Zero
  • (Re)Use What Exists
  • DIY KM


[Continued in Approaches to Sustainability: Embedded KM]

Tuesday, April 21, 2009

Sustainable KM: The Challenges (Part 4)

[Continued from Part 3]

There is one more principle of sustainable KM that I personally have struggled with, but ultimately been forced to accept. That principle is the need to eliminate the competition.

At first, it seems to contradict the second principle: let change manage itself. In fact, it does. It also goes against my own natural style and approach, which tends towards inclusiveness. But the fact is, within corporations, competition is not productive, it is divisive.

Knowledge management programs tend to be additive -- new systems and processes are added on top of the existing infrastructure. If distribution lists aren't working, add forums. If forums aren't working, add blogs and wikis.

On the internet, this isn't a problem; the audience is large enough to sustain all of these interactions, and people will over time migrate from one to another. But within corporations where the audience is much smaller and resources are limited, continually adding new processes and technology has several very negative impacts:

  • You confuse the users. People are always asking which system they are supposed to use: the old ones or the new ones? Even if a strategic direction has been chosen and announced, the more systems there are, the more likely people will either A.) use the wrong one out of ignorance or B.) not even know the right one exists.
  • You invite resistance. People don't like to change how they do things, even if the new method is better in the long run. If the old system exists, some percentage of the audience will insist on still using it, often bad-mouthing the new system as they do so.
  • You more than double the expense. Within corporations, all programs cost money, even when they are not in use. Systems cost money for IT to maintain them. They cost money for the sponsoring organization to advertise and teach them. And it is not just twice the cost. Because there are two, there are additional expenses needed to explain when and why to use each and to migrate people and content from one to the other.

These rules are not specific to KM. They apply to any technology. But in the absence of overwhelming management support, KM tends to suffer them more than other line-of-business programs do.

Ultimately, unless you are implementing something totally new and so innovative it does not replace or overlap existing processes, there is going to be competition between the old and the new.

However, eliminating the competition right away can have an equally negative impact. If you change the process, people need to be made aware of the change. Unless your organization is preternaturally well organized, this is nearly impossible to achieve in one fell swoop. So there will be some crossover period. Even if you switch over technology "under the covers" leaving the interfaces and processes the same, there are likely to be glitches and unforeseen differences that will be noticed by the users.

So the key questions are when do you make the switch over and what do you do in the interim to mitigate the negative impact?

The answer to the first question is as soon as possible, and plan it from the beginning. It is usually best, even if you must keep pre-existing processes and/or systems for some interim period, to schedule their removal from the beginning so everyone is aware they are going away. A swift changeover can be painful, but a long, drawn-out battle (with users) is worse.

What you do in the interim depends on the nature of the change and the influence you have. If you cannot shut off an existing process or system because you don't "own" it (a common problem in hierarchical organizations), the best approach is to integrate with the other process and make the new process demonstrably better.

By integrating, you do not penalize people who adopt the new system (they can still interoperate with others who have not). By being demonstrably better, you are able to sway the target audience and encourage adoption -- to the point where the old processes can be shut down. In other words, you win.

Even if you do control both the old and the new processes, it is important to provide a clear migration path: convert old data, map old processes to new, integrate with other processes, etc. These steps all help smooth the path for the users and reduce pain for both them and yourself.


[Continued in Sustainable KM: Principles & Approaches]

Wednesday, April 15, 2009

Have We Missed the Boat?

I have been following with interest the enthusiasm with which practitioners are adopting and promoting web 2.0 as the next big thing for knowledge management. (Myself included.) It is not hard to see why. The explosive growth of social media and social networking sites such as Facebook, MySpace, LinkedIn and Twitter is enough to make any old school KMer turn green with envy. Why can't we induce that sort of participation in our knowledge sharing programs?

However, even when corporations try implementing web 2.0 solutions inside the firewall, the results are often underwhelming. That is the Enterprise 2.0 dilemma.

People have offered a number of explanations for the difference: limited audience, cultural constraints, lack of incentives, generation gap, etc. All of these have an impact. But I am beginning to think we (i.e. corporations and those who are trying to move them forward) may be missing the bigger picture.

Why did social computing succeed in the first place? Granted, a significant portion of it was originally personal in nature, but it was very quickly adopted by engineers, white collar workers, and other professionals as a way to discuss both their private and their work life, including their professional experiences. Many professions have established formal and/or informal communities where they collaborate and share information through blogs, forums, twitter, etc. outside of the companies they work for.

Why is it that they are so willing to share information on the internet that is so hard to get out of them inside the firewall? We can nitpick the details of how and why, but at some point we have to face the fact that they do it because they get more satisfaction from sharing information outside than inside.

People share information on the internet because they feel a sense of connectedness to others with similar interests and tastes. They also feel that their ideas and opinions matter.

Even when web 2.0 technologies and methods are used inside a corporation, the sense of satisfaction is greatly diminished. The audience is smaller -- immeasurably smaller -- so finding like minds is far less likely. There may be others within the company with similar roles or professions, but having the same job doesn't automatically make you friends. If, on the other hand, you take all of the people with similar jobs from all companies around the world who may be on the internet... the chances of compatibility increase.

More importantly, people -- peers -- on the internet may not be able to act on your ideas, but they can admire them, praise them, and commiserate with your inability to get them implemented. Because of the smaller audience, contributions to an intranet often elicit few, if any, comments. That doesn't mean they are not seen. But at least subconsciously, the contributor often feels like their offerings are falling on deaf ears.

What's more, the corporation's specific business focus dramatically cuts down the scope of what is "appropriate" or "noteworthy" within the smaller environment. Even without explicit guidelines, there is an implicit constraint within the firewall that does not exist externally. And if web 2.0 is about anything, it is about the individual's freedom to contribute as they see fit.

So, yes, the smaller audience does play a role. But it is the individual's sense of actively participating, connecting, and freedom of expression that drives continued use of social software on the internet and not within the firewall. Are they really free? No. They are constrained by their own personal code of civility and moral appropriateness -- especially when dealing with information concerning their employment. But the sense of freedom is key to their willingness to participate.

What's worse is that by implementing web 2.0 technologies inside the firewall without any commensurate modifications or integration, knowledge management programs once again look like stodgy organizations that see a fad but don't quite "get it".

So, is all hope lost? No. But there are several lessons that can be learned here:

  • Don't expect too much. No matter what you do, adoption of social software will not be as viral internally as it is externally for all the reasons described above.
  • You may need to "feed the pump". Interaction is critical to the success of social software. (That's why it's called "social".) Since feedback is going to be diminished inside the firewall, you may want to solicit the assistance of a group of early adopters to amp up the initial set of responses to get the feedback loop going.
  • If you can't beat them, join them. Before you even start planning new internal services, think whether you need them. Why compete with active, healthy services that already exist?

Perhaps it would be cheaper and far more successful to use those existing external services through the firewall as is, instead of setting up competing internal services. For example, rather than setting up blogs internally, think about the alternatives:

  • Providing a list of the best internet blogs pertaining to your company's business (including those of your employees)
  • Encouraging employees to blog externally and join external professional communities to both further their career and enhance their skills
  • Aggregating internal news with feeds from external sources to provide more dynamic, objective information to your employees
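The aggregation idea in the last bullet doesn't require heavy machinery. At its simplest, it is merging entries from internal and external feeds into one stream, newest first. A rough sketch (the feed fetching and parsing are elided; entries here are plain dicts, and the titles are invented for illustration):

```python
from datetime import datetime

def aggregate(feeds):
    """Merge entries from several feeds (internal news, external blogs)
    into a single stream, sorted newest first."""
    merged = [entry for feed in feeds for entry in feed]
    return sorted(merged, key=lambda e: e["published"], reverse=True)

# One internal source, one external source, interleaved by date.
internal = [{"title": "Q2 results", "published": datetime(2009, 4, 10)}]
external = [{"title": "Industry trends", "published": datetime(2009, 4, 12)}]
stream = aggregate([internal, external])
```

The interesting work, of course, is choosing which external sources are worth including -- but the plumbing itself is trivial.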

Corporations are hesitant to "open up" the firewalls and blur the distinction between internal/proprietary and external/public information. But when you are talking about social computing, that blurring of distinctions is one of its key strengths and defining attributes. Why is Twitter popular? Not because I can talk about my private life, but because I can talk about whatever parts of my life, private or professional, I choose to.

Smart companies will recognize that the floodgates between internal and external information were breached long ago, without their having any control over it. They will also recognize that they can benefit more from encouraging their employees to use this new powerful medium effectively -- and appropriately -- than from trying to constrain it within the artificial boundaries of the corporate firewall.

Thursday, April 9, 2009

Social Architecture

Patti Anklam recently asked whether we need to define social architecture and, if so, how we should do it. I never shy away from defining new terms, as long as:

  • The term identifies some meaningful thing or quality
  • The thing being defined is new and/or unlabeled (and therefore difficult to discuss without some shared terminology)
  • The term is not completely ambiguous*

My gut feeling is that social architecture would be a good thing to define. That said, let's start with what it is we are defining.

The Definition

Social architecture is the conscious design of an environment that encourages certain social behavior leading towards some goal or set of goals.

By environment I mean a bounded set of physical or virtual structures, functions, or events where people interact.

I say "certain social behavior" because you are designing for specific interactions with the aim of achieving some goal. You are not designing a generic space where people congregate and interact in whatever way they please. (Unless, of course, that will achieve your goal.) You are designing towards some purpose, such as encouraging conservation (WiserEarth) or grassroots sharing of ideas and innovation (barcamps).

On the other hand, I am intentionally vague about what constitutes an "environment". If we are just speaking of digital spaces, then there is very little difference between "social architecture" and "information architecture" or "interaction design". Designers of social software might very well call themselves "social media architects". But that is not inclusive of everything that is needed to instigate and drive social behavior. Barcamp is an example that requires digital spaces to organize, but also a physical space and event logistics to pull off.

There is an ongoing debate within the Enterprise 2.0 community that E2.0 is not just social software inside the firewall. It is a change of culture. Well, that change of culture cannot occur without establishing the appropriate environment to foster it, including a coordinated set of capabilities, recommendations, influences, and incentives. The design of such an environment is social architecture.

Is a Definition Necessary?

Why even bother with a definition? Well, the argument within the Enterprise 2.0 community is a good example of why a new term is needed. I won't go into the details of why discussing changing culture is unproductive -- Venkatesh Rao has done a far better job explaining it than I could do -- suffice it to say that rather than complaining about a resistant culture, designing a system that utilizes inherent social behavior to recognize and reward a different approach is more likely to result in change.

Unfortunately, social media is being applied within corporations as if it were a large hammer, cutting a wide swath through traditional, stovepiped corporate approaches to knowledge. Even when the new applications are well received (which isn't always the case), one of the side effects of this approach is an "us vs. them" mentality. The old approaches are not removed; the new social applications are set up in opposition to them, creating an unnecessary barrier between the traditionalists and new agers. This friction is often amplified by a lack of integration of the social software into the existing corporate infrastructure.

What is needed is a more systematic approach to integrating social applications -- and the activities and interactions they incite -- into the corporate environment. From a technical perspective, this means integrating the content into intranet search and actively feeding social content streams into traditional environments, such as intranet web sites (e.g. the latest Yammer messages on the team site, the employee's blog and Twitter ID becoming part of the corporate whitepages, liveblogging status meetings, etc.). From an operational perspective, structuring the social interactions around meaningful topics and goals helps avoid competing approaches.

This type of coordination does not require extensive resources or a major overhaul of existing systems. But it does require planning and often a complex set of small, coordinated adjustments to systems and processes. And the best way to describe this approach is social architecture.

As a side note, despite my examples, corporate environments are not the only possible target for social architecture. However, I mention them here because intranets are an area with perhaps the greatest potential use of coordinated architectural approaches because of the need to address deep rooted hierarchical processes.

Ambiguity

We also need to check to see if the new term we are defining is so ambiguous it will be misinterpreted or quickly misused. In other words, is there a better term?

I happen to like social architecture for several reasons:

  • It fits well into the lingua franca of social computing, where terms such as social media and social software are already established.
  • It is distinct from the existing terminology which tends to focus on either the technology or the content, but not the overall strategy.
  • It is relatively intuitive as a term and does not require a significant amount of explanation.

On the negative side, there is ambiguity with previous uses of the term within the realm of physical architecture and sociology. (There is at least one reference as far back as 1876.) In physical architecture, the term social architecture tends to refer to the application of architecture towards humanitarian aims. Housing for the poor, sustainable architecture practices, and designing for larger social goals all seem to fall within this category. The Wikipedia entry on social architecture is a stub referring to social structures, a sociological concept not too far from the environments and goals discussed in my definition.

Although there is ambiguity here, it does not seriously invalidate the use of the term in reference to web-based systems. More importantly, there is sufficient crossover between the definitions to avoid direct conflict.

Existing Usage

Finally, I make no claim of originality in defining social architecture. Besides the use of the term in other fields, there are already a number of references to it as applied directly to social software and the internet:

  • Stowe Boyd defined it in 2005, although he appears to define it as an existent state ("the foundation of the blogosphere") rather than as a specific activity.
  • Sam Huweatt describes it in his blog. His definition is very similar to what I outline above. He also makes a distinction between social architecture and social media architects.
  • Christina Wodtke lists the elements of social architecture in her book Blueprints for the Web (summarized in A List Apart).
  • In his slide presentation on Social Architecture: Modeling the Next Generation, Sean Madden makes the point that "social networks have limitless potential but we need to work towards designing them that way."
  • Amy Jo Kim, in her bio, defines herself as designing "social games and social architecture[s]". Her book Community Building on the Web pre-dates much of what we now consider social software, but is still the pre-eminent text on designing for social interaction. She also calls her blog "Musings of a Social Architect".

I take these all as good signs that the term is both useful and sufficiently clear in its meaning. On the other hand, there are at least two other uses that do conflict.

There are a number of people (in particular, Ryan Turner) who use the alternate term "web information architecture" to define much the same thing as I defined as social architecture. My preference for the shorter term comes from the fact that "web", by this point in time, is pretty much redundant. Almost all information and interaction in modern life now involves the web to at least some extent. But at the same time, as I mentioned in the definition, not all activity involving social architecture is web based (for example meetups, Big Urban Games, and barcamps).

There is also at least one case where social architecture is equated to an existing term (Information Architecture = Social Architecture). Although this is well-intentioned, I believe it is inherently wrong. Not all information is social (in the social media sense), and at least some aspects of information architecture -- such as navigation and metadata definition -- are determinative, not social. And not all social interaction design can be considered a part of information architecture. They are definitely related fields, but not identical.


*I'm sure others would go for more precision, such as "not ambiguous". But if that were the criterion we would define almost nothing.

Wednesday, April 8, 2009

Sustainable KM: The Challenges (Part 3)

[Continued from Part 2]

The same rules that apply to information apply to processes as well.

Knowledge management practices should be embedded in the work processes, and many KM initiatives attempt to do this. However, beware of designing KM programs around the officially defined processes of a corporation. Because, as anyone who has worked in business can tell you, the official policies and the actual practice can often vary -- dramatically.

To give a couple of examples from my own experience:

  • When working on a project catalog, the project management office told me all projects had a project ID that could be used as a unique identifier. However, when talking to individual projects, it turned out that different IDs were used in each region, they weren't unique, and that most people working on the project did not know what the project ID was -- only the project manager did. So any process assuming a known, unique ID was bound to fail.
  • The official process for creating project proposals was to use a web-based proposal generator that filled in all of the legal boilerplate materials etc. However, in actuality, most project leads used their last proposal to cut and paste or asked around for examples from similar projects.
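The project-ID problem in the first example has a mechanical side worth noting: if region-local IDs can collide, the usual fix is to namespace them. A minimal sketch (the region codes here are made up for illustration):

```python
def global_project_id(region: str, local_id: str) -> str:
    """Combine a region code with the region-local ID so the result
    is unique company-wide even when regions reuse the same numbers."""
    return f"{region.upper()}-{str(local_id).strip()}"

# Two regions happen to reuse local ID "1042"; namespaced, they no longer clash.
emea = global_project_id("emea", "1042")
apac = global_project_id("apac", "1042")
```

Of course, this only works if someone actually owns the list of region codes -- which is the harder, human problem the example points to.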

Needless to say, if you design your KM programs around the officially documented processes rather than what people actually do, your program will be no more successful than the policies themselves. So, another principle of sustainable KM is: Pay attention to the people, not the policies.

As easy as this is to say, achieving it can be far more difficult than one would expect. Official policies exist for a reason. Management wants the policies to be obeyed. In most cases the policies are intended to create consistency and reliability within the system. Unfortunately, the policies also often result in extra work with no perceived benefit to the individuals who are asked to comply. (Does this sound familiar? Oh yes! "Do not make [fill in the blank] extra work." The same rules that apply to sustainable KM apply to sustainable business processes as well...)

Unfortunately, business processes and policies tend to have far more management attention and leverage than KM programs. (This gets back to the age-old issue of perceived ROI for long-term vs. short-term objectives.) So in a battle between reality and official policy, strangely enough, reality usually loses out.

Going back to my second example above, the KM team was frequently asked by the consultants to provide a library of past proposals that they could borrow from when it came time to write a new proposal. Of course, as soon as the team responsible for the proposal generator caught wind of this, they objected. The consequence was that the proposal library project was canceled.

There is no generic answer to this problem. In the case I described, the outcome was that several regions created and maintained their own libraries of proposals for reuse. This was not optimal, but it was a pragmatic solution to the situation they found themselves in.

Another solution is to generalize the problem to avoid specific cases. For example, by creating a global project document library, it was possible for teams to share proposals (and other project documents) between regions. Because it was not specifically a proposal library, it successfully flew under the radar of those maintaining the official project proposal policies.

I am not recommending this approach. It depends on the specifics of each individual case as to how best to identify the true processes being used, encourage effective knowledge habits, and meet commitments to management. This is one of the many fine lines knowledge management professionals find themselves having to address.

[Continued in Part 4]

Wednesday, April 1, 2009

There Are Only Three Corporate Strategy Diagrams in the World

I have come to the conclusion that there are only three strategy diagrams in the entire world. Oh, the labels change and someone may use five or six boxes instead of four or five, but the basic diagrams are the same.

The Cloud

This is perhaps the oldest corporate strategy diagram around, dating back to at least the early 1980s. It depicts a set of known entities (usually shown on top as inputs), a whirling cloud of activity, and a set of one or more desired outputs. (Usually fewer outputs than inputs.)

This diagram is very good when trying to explain how to get from chaos to sanity. Strangely enough, the cloud is not the chaos, the plethora of unintegrated inputs at the top is the chaos. The cloud is the magic black box which boils the chaos down to a smaller, manageable set of outputs.

I have seen this diagram used to describe datatype conversions, programming interfaces, repurposing of content, and several other things.


As a side note, in the 1990s this diagram morphed slightly. As the multitude of disconnected systems diminished, the cloud was replaced with a solid object: a box, pipes, or a two-headed arrow. But the basic intent of communicating the integration of disparate systems into a manageable system remained.

The Pyramid

The pyramid is the preferred strategy diagram of non-management types. Non-techies use it in its 2-D form; techies prefer it in 3-D.

The pyramid represents a hierarchy of importance. Its origins as a diagram are as ancient as its inspiration, the pyramids of Egypt. We all know the recently dethroned food pyramid, and Maslow's hierarchy of needs uses the same diagrammatic shape. However, its application in corporations is primarily to categorize – and prioritize – whatever the business is doing or producing.

As I mentioned, non-techies use the 2-D pyramid, usually accompanied with an arrow indicating the goal of moving from the lowest state of being to the most refined at the top. In KM, the pyramid is data -> information -> knowledge -> wisdom. (How you achieve this is never clearly defined.)

Engineers and other technical professions prefer the 3-D pyramid because it provides more surface to divvy up, classify, and subclassify into its component parts. The obvious problem with this is that no one but an engineer can understand or appreciate the level of detail provided, and the pyramid is lost for the trees (to mix a bad metaphor).

I actually had a software architect try to replace the pyramid with a cube – he wanted to show the products as the intersection of three different architectural perspectives, each suitably cut and divided into even further refinement. Once I explained that he could not show the back side of the cube on paper, he abandoned the plan (and, I might add, much of his respect for my abilities as a graphic artist. Oh well....)

The Arrow

Finally, there is the arrow -- the quickest, shortest path to any target. As a strategy diagram, the arrow is usually segmented to show the linear steps needed to reach the goal. Managers like the arrow because it not only identifies the necessary tasks but a sequence as well, making management (and the assignment of blame) much easier.


One important variant of the arrow is the circular arrow. The circular arrow is not as popular with management types since it does not have a clear start or end point. However, this variant is extremely popular with program teams.

Intersecting Circles


Finally, there is a fourth diagram that is frequently seen in corporate presentations, but it is not a strategy diagram. This is the intersecting circles.

The circles can be labeled with anything you like: people, processes, technology... customers, managers, employees... music, video, telephony... etc.

This diagram is often mistaken for a strategy diagram because the center of the intersection is viewed as an end goal. However, the diagram is actually an illustration of the current state. Add an arrow and another stage where all three circles overlap and you might have a strategy diagram!

Tuesday, March 31, 2009

Sustainable KM: The Challenges (Part 2)

[Continued from Part 1]

The previous example of a skills database brings up another principle of sustainable KM: design for the people, not for the data. We often get so caught up in what we are trying to capture, that we forget who we are capturing it from and who will use it.

I was recently filling out a form online that asked where I got my college education. What's more, it insisted on trying to guess my answer. I couldn't simply enter the name of the university; it insisted it had a complete list of all possible answers. As you might suspect, it was neither complete nor easy.

Why did it do this? Because whoever wrote the program wanted to make sure the data was valid -- that there was no ambiguity between University of New Hampshire and University of NH, for example. People have no problem understanding this sort of variation. Unfortunately, computers do.

As a result, the designer decided I, the user, was the one who would have to solve the problem. I was forced to scroll halfway through an extraordinarily long list of names to discover that -- for the sake of the computer -- my alma mater was classified as NH - University of, Durham.

This is a case of simple annoyance. But this type of thinking, when dealing with large volumes of data and more complicated concepts, becomes debilitating.

Going back to our example of a skills database, these applications often assume a preset list of skills, organized hierarchically into job categories (such as administration, information technology, management, marketing, communications, etc.) More often than not, the UI reproduces the computer's way of seeing the information, forcing the user (the person whose information it seeks to elicit) to click blindly through hierarchies looking for something that resembles their skills. What's worse, the preset list is often defined by management, so not only is it incomplete, but it lists only what management wants to see, rather than skills the person entering might choose to identify.

The results are predictable. Entering information in such a system is frustrating and annoying. People will avoid it or enter as little as possible to get done quickly. Where presets do not match actual skills (or can't be found easily within the hierarchies), people will deliberately choose incorrect or approximate answers.

This is before we get to any of the psychological issues of what information (true or false) people will enter based on the implicit messages the preset items and hierarchies are telling them about the importance of specific skills.

The consequence is that those responsible now have to budget time and money for training on how to fill out the form as well as prompting people to maintain their data!

A disconnect between the interface and the users' perspective results in frustration, misuse, mistakes, invalid entries, and avoidance. This concept is well understood in the field of usability and UI design. Unfortunately, it is not applied frequently enough to internal systems design and KM. The result is (unsustainable) systems that often cost more to maintain than to create.

[Continued in Part 3]

Monday, March 30, 2009

TwitterFish: Bridging the Language Gap


A common problem for knowledge management programs -- especially those that span multiple countries or continents -- is bridging the gap between languages. People obviously feel more comfortable communicating in their native language and in many cases cannot communicate well -- if at all -- in other languages.

For formal documents such as white papers or reports, there is no easy solution to this problem. Automated translation services exist but the results are often rudimentary, at times amusing, and at worst they can actually be misleading or just plain wrong. There is very little choice but to do manual translations for important documents, insist that everyone communicate using one common language, and/or live with a Babel-like ignorance of the knowledge and expertise of other countries.

Because of their limited usefulness for published documents, automated translation services have been shunned by most KM programs. But are they really so bad? Or are there cases where automated translation is not only "good enough" but provides a vital missing link for multilingual teams?

I was recently working with an organization that operates in four different locations around the world, in four separate languages. Clearly, the language barrier is a significant obstacle for them. It turns out, however, that within each geographic region team members communicate frequently among themselves through IM and mobile texting.

The good news is that communication is happening. The bad news, from a knowledge management perspective, is that the language barrier has become a permanent wall separating groups of employees and the insights they hold.

The usual KM solution to this problem is to try and get each group to capture their learnings in whitepapers, reports, and other written documents. The problem of translating those documents is then addressed as a separate task. The language problem is exchanged for a translation problem and significant extra work for everyone. This is in addition to the many bright ideas and offhand stories that are lost in the move from conversation to written documents (i.e. implicit vs. explicit).

But if you take a step back, translating everything (or even a select portion identified as "important") before determining if it is actually going to be useful is inefficient and almost guaranteed to be prohibitively expensive. What is really needed is to get a rough sense of whether something is of interest before making the effort to establish connections across the language boundary.

Which is exactly what automated translation is good at. Trying to follow a procedural document written in a foreign language -- or translated badly -- without other assistance can be both difficult and dangerous (depending on how risky mistakes are). But knowing that such knowledge exists, even if you can't read it all, can save hours or days trying to recreate the learnings that have already been captured.

What would we give for a way to "listen in" to conversations -- no matter what the language -- to see if there was either a discussion we could contribute to or knowledge we could use?

Well, we have ways to listen in through social computing. Forums, blogs, and microblogging move the one-to-one conversation to a broader social platform. Microblogging services in particular, such as Twitter, provide almost all of the immediacy and interaction of IM but to a much larger audience. All that is missing is the ability to read the different languages.

Which is where TwitterFish comes in. TwitterFish is a prototype to demonstrate the effectiveness of automated translation services for identifying potential points of useful information.

Twitter already provides a translation feature for its search interface. But the public timeline and the stream of your friends' updates do not. TwitterFish lets you select a language and translate all updates into that language on the fly. You can also click on a specific individual to see just their status updates, if you find something interesting.

The translations are still rough. You cannot rely on them alone. But the point is they give you a window into what people are discussing in other languages that is not available in any other form. What's more, each message is associated with a person. So if you do find a piece of information you want to follow up on, you can start a conversation directly with those involved. Unlike translated documents, where the text is all you have, in social applications such as Twitter you have both the words and the people.
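The mechanics behind this are simple enough to sketch. The snippet below is a minimal, self-contained illustration of the idea, not TwitterFish's actual code: `translate()` is stubbed with a tiny phrase table standing in for whichever machine-translation service is used, and the status records are invented.

```python
# Sketch of TwitterFish's core loop: take a stream of status updates, each
# tagged with its language, and translate every message into the viewer's
# chosen language on the fly, keeping the author attached to each message.

# Stand-in for a real machine-translation service (illustrative only).
PHRASE_TABLE = {
    ("es", "en"): {"hola mundo": "hello world"},
    ("fr", "en"): {"bonjour": "good morning"},
}

def translate(text, source, target):
    """Translate text from source to target; fall back to the original."""
    if source == target:
        return text
    return PHRASE_TABLE.get((source, target), {}).get(text.lower(), text)

def translate_timeline(statuses, target_lang):
    """Return (user, translated_text) pairs for a stream of status updates."""
    out = []
    for status in statuses:
        translated = translate(status["text"], status["lang"], target_lang)
        out.append((status["user"], translated))
    return out

timeline = [
    {"user": "ana", "lang": "es", "text": "hola mundo"},
    {"user": "joe", "lang": "en", "text": "shipping the release today"},
]
print(translate_timeline(timeline, "en"))
# → [('ana', 'hello world'), ('joe', 'shipping the release today')]
```

The important design point is the last one: because each translated message keeps its author, a rough translation is still enough to decide whether to start a conversation with that person.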

TwitterFish is just a prototype. Viewing the public timeline (the default) is interesting but not necessarily useful. However, it does demonstrate the potential of automated translation services for dynamic data. The techniques used to create TwitterFish would be far more effective for groups bound by a common interest. For example:

  • Apply TwitterFish to Yammer, the business version of Twitter, where only messages from within a single company are visible.
  • Create a Twitter account that "friends" a specific, global community of users, such as a professional organization. The account's stream can then be recast -- or displayed on the organization's web site -- translated into the viewer's language of choice.
  • Apply the same technique to other dynamic community content, such as forum posts, blog comments, etc.
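The second idea in the list -- recasting a community account's stream into each viewer's preferred language -- can be sketched in the same spirit. Everything here is illustrative: `translate()` merely tags the conversion rather than calling a real service, and the messages are invented.

```python
# Sketch of a per-viewer community stream: one account aggregates messages
# from a multilingual community, and each viewer sees the whole stream
# rendered in their own language of choice.

def translate(text, source, target):
    # Stand-in for a real machine-translation call; here we only mark
    # the conversion so the flow of the sketch is visible.
    return text if source == target else f"[{source}->{target}] {text}"

def recast_stream(stream, viewer_lang):
    """Render a community stream in the viewer's language of choice."""
    return [f"{msg['user']}: {translate(msg['text'], msg['lang'], viewer_lang)}"
            for msg in stream]

stream = [
    {"user": "mika", "lang": "fi", "text": "uusi standardi julkaistu"},
    {"user": "dana", "lang": "en", "text": "draft agenda posted"},
]
for line in recast_stream(stream, "en"):
    print(line)
```

The same `recast_stream()` shape would apply to the other items in the list -- Yammer messages, forum posts, blog comments -- since each is just another stream of short, attributed, language-tagged messages.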

As a final note, TwitterFish is a fairly simple application. It would not be possible without the generous availability of a number of foundation services. Specifically:




Sunday, March 29, 2009

The Business of Casual Games

I recently received mail from a game publisher offering me free access to one of their PC games. The invitation was very nice; they were offering it to me since I talk about video games, obviously looking for a review but not insisting. I was, actually, pleased that they asked.

Yes, of course it is a marketing ploy. They would like me to write a review. But they managed to walk a very delicate balance between gifting and requiring reciprocation. It was an offer, and nothing more. And I appreciated that.

However, I was still left in a bit of a quandary. I am not a professional reviewer. I talk about games I am playing when I think I have something interesting to say about them (either good or bad). I don't review everything I play and I certainly don't have time to review lots of games sent to me out of the blue. (I do have another career, etc.)

But even that wasn't what bothered me. And then it struck me: my problem is that I don't play PC games. It is kind of funny because I work with computers all day (and frequently at night) so I have plenty of opportunity. And it is not like I've never played games on computers. (I was very fond of Tetris and various Breakout clones years ago. I even wrote a few games -- toys really -- as programming practice while with one of my previous employers.)

But with a few exceptions (Myst, Riven, The Journeyman Project, and Microsoft Flight Simulator) once we got into game consoles -- and especially handhelds, starting with the Game Boy Advance SP -- I have not done any PC gaming.

Why not? Well, there are several reasons:

  • Usability: Quite frankly, the keyboard and mouse are seriously under par as a control set for real-time games. They are OK for strategy/board games with a lot of pointing or typing, but otherwise the controls are awkward. Game systems on the other hand are designed specifically for that purpose. (I'll save my comments about bad game console design for another time.)
  • Compatibility: Why do I need a $3,000 computer to play a $50 game? Or why does a $20 game insist on resetting the color scheme and resolution of my monitor (and not setting them back)? Or why can I play games on one version of an OS and not on another?
  • Lack of time for "real" PC games: Quite frankly, I no longer have hours to devote to the larger PC games like Myst, Age of Empires, etc.
  • Lack of patience with "casual" PC games: I just can't get involved in the multitude of what might be called "semi-professional" downloadable games that litter the casual games market for the PC.

It is this last item that got me thinking. I know I don't have time for longer games - whether on the PC or a game console. But casual games would seem to fit right into my vector of needs, interests, and limitations. Quick, fun, no heavy investments...

But they don't attract me. Why? Because I've been burnt before. As, I suspect, have many of you.

It's very simple: there are lots of amateur and semi-pro casual games out there (most with free downloadable demos) and the majority of them stink.

Now, I am not saying there aren't console games that are deplorable (and I've been unfortunate enough to play several). I also don't feel quite comfortable with my earlier remark about "the majority" of semi-pro PC games. Despite the high cost of entry for publishing console games, to be frank, it is unclear whether the ratio of gold to dross is any higher for consoles than for PC games.

However, when you encounter a loser on a console, you simply remove the cartridge and throw it away with no side effects. No uninstall, no danger of virus contamination. No leftover hidden bits and pieces you might not know about. With PC games, there is always the lingering doubt (and plenty of quirky behavior on the part of PCs, from whatever source) about their long-term impact.

So, as kind as the offer was, I declined. It is unfortunate, because I would like to give independent artists and developers their due. But ultimately, time and technology are not on their side. This may be why the market for independent games on smartphones (such as the iPhone) is taking off. It provides both a marketplace and a relatively secure hardware platform, giving users the confidence to try out less familiar or less well-financed options.

Friday, March 27, 2009

Sustainable KM: The Challenges



Budget is not the only factor affecting the sustainability of knowledge management. In fact, it is only a minor obstacle that tends to impact all larger business initiatives equally. By far the most important factor is the people involved in the program and their willingness to participate.

When people talk about the "sustainability" of a KM program, they are usually referring to the level of engagement of the target audience and their willingness and enthusiasm to keep participating. No effort is sustainable if its audience is resistant. However, this is unfortunately the case for many KM programs.

Some of the common complaints I've heard about KM programs over the years include:

  • I don't know where to put things.
  • How do I use it?
  • Why do I have to enter this information again?
  • I don't have time to participate.
  • And (one of my favorites) how do I charge the time I spend on knowledge management?

This last is particularly confounding, because no one asks how to charge the time they spend drinking coffee or the time they spend using the Xerox machine.

Each of these complaints is often handled separately, by developing training on the benefits of the KM program or on how to charge time (further increasing the size and complexity of the initiative). But they are really just symptoms of a larger problem: KM is seen as separate and distinct, an "extra" activity on top of employees' normal work.

Trying to convince someone that an activity is good for them is always an uphill battle. So one of the first principles of sustainable KM should be: do not make KM additional work. Knowledge management practices should be embedded in the existing business processes.

Note that I say "existing business processes". A second reason that KM initiatives are often so top heavy is that they attempt to alter business processes to make the processes more amenable to managing the knowledge. By attempting to change the process -- no matter how well-intentioned -- you are seriously adding to the "weight" (in terms of cost, both in resources and money) of the program and the likelihood of failure. Changing people's behavior is extremely hard to do from the outside.

Programs that are trying to externally influence behavior are easy to recognize. They inevitably include activities labeled as "change management", which can consume up to 50% of the project.

To put it crassly, change management means you are trying to get people to do something they don't want to do. This is both expensive and usually only partially successful, if that.

That doesn't mean change can't happen. Often change is necessary. But trying to dictate change leads right back to the need for an executive champion -- someone willing to enforce the change -- and all of the deficits and difficulties such sponsorship presents.

So how does change happen if you don't enforce it? It happens because it benefits the people who need to enact the change. In other words, people change when they see value for themselves in the change.

It may seem like a contradiction, but changing processes is extremely difficult, whereas getting people to change the processes themselves (if they see fit) is much easier. An example might help:

Say you were building a skills database. (I am not promoting this activity, just using it as an example.) You will require all employees to fill out a form identifying their individual skills and level of ability. Managers will use the database to find resources with the appropriate skills and employees will need to keep their entries updated.

Now, even assuming this is a good idea, why would employees participate? They get no benefit from the results of the activity (only managers get to see the results) and it repeats work they are already doing (they already have to maintain an up-to-date curriculum vitae). The effort requires training for all employees and a significant management push to get them to comply with the initial loading of the database. Worse yet, once loaded, there are no triggers in their regular work that would initiate an update. So there will have to be an equivalent effort put into getting updates every six months. This is anything but sustainable.

Before you argue that this is a nonsensical example, I would point out that I know of at least two companies using a system like this. An alternative approach would be the following:

Build the skills database so that everyone has access to the results. Use the content provided by the users to generate intranet profiles. (That is, employees immediately see the results of their effort and get feedback from their peers as to its usefulness.) Also, collect enough information to autogenerate the CV they need to maintain under current policies.
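As a sketch of that "enter once, use everywhere" design: a single skills record feeds both the public intranet profile and the CV the employee already has to maintain. The field names and records below are invented for illustration, not taken from any real system.

```python
# One data source, two outputs: the employee enters skills once, and both
# the intranet profile and the CV are generated from the same record.

def make_profile(record):
    """Intranet profile: visible to everyone, so contributors see the payoff."""
    skills = ", ".join(s["name"] for s in record["skills"])
    return f"{record['name']} ({record['title']}) -- skills: {skills}"

def make_cv(record):
    """Auto-generated CV section, replacing a separate manual update."""
    lines = [record["name"], record["title"], "Skills:"]
    lines += [f"  - {s['name']} ({s['level']})" for s in record["skills"]]
    return "\n".join(lines)

record = {
    "name": "J. Smith",
    "title": "Field Engineer",
    "skills": [{"name": "Hydraulics", "level": "expert"},
               {"name": "CAD", "level": "intermediate"}],
}
print(make_profile(record))
print(make_cv(record))
```

The design choice worth noticing is that the update trigger lives inside an existing obligation: when employees update the CV they are already required to maintain, the skills database is refreshed as a side effect, with no separate compliance push.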

Which system do you think employees are more likely to contribute to? By extending the initial purpose and putting in the effort to provide extra functionality, you not only fit the new system into the existing process (i.e. employees maintaining their CVs), you significantly reduce the overhead required to enforce compliance.

So another principle of sustainability is avoid change management, help change manage itself.

[Continued in Part 2]

Sunday, March 22, 2009

Notes Towards a Theory of Sustainable Knowledge Management

This is the first in a series of posts.

For a number of years there has been a growing interest, within the field of physical design, in "sustainable architecture". Definitions may vary, but the general theme is to design buildings that do not drain the pool of natural resources: buildings that generate their own energy through solar power; that minimize the need for mechanical heating and cooling; even buildings that can be recycled without harmful by-products when their usefulness is over. In other words, buildings that have a positive impact on the environment.

For some time now, I have thought that knowledge management as a discipline could do with a similar initiative.

Now, there are certainly things that can be done to make knowledge management systems and the computers they run on "green" or greener. However, these ecologically sound IT practices are not specific to knowledge management. They apply equally to all computerized business applications: supply chain, document management, accounting, etc.

What interests me is looking at knowledge management in a new way to see if we can reduce the impact and cost it has in terms of human resources. Can we design knowledge management initiatives with a smaller "footprint" in terms of cost in dedicated headcount, learning curve, and unique time away from people's "real" work? This is the concept I am calling Sustainable Knowledge Management.

Why Sustainable?

Knowledge Management has a reputation for requiring large, expensive initiatives with questionable results. That reputation is not entirely unwarranted. Many of the early KM programs were overly ambitious and software vendors have often sold large, all-inclusive systems (e-mail, document management, CRM, etc) as KM solutions.

Even smaller KM initiatives often require a significant upfront "push" in terms of cost, headcount, and executive attention to get them started.

Is all this effort necessary? More importantly, once the effort is underway, how much does it cost to maintain the initiative? Knowledge management is not unique in this respect. The same could be said of quality initiatives, change management, business process reengineering, etc. But my focus is KM.

Although I have no objection to having a career for life, there is a serious danger that knowledge management programs are pricing themselves out of the market. This is particularly true when there is no reliable or believable way to calculate Return On Investment (ROI) for most KM programs. This is because KM is focused on the longer-term improvement of business and employee performance and expertise, not bottom-line financials.

So without the direct-to-the-bottom-line connection between KM initiatives and improvement, it is important that KM programs reduce their disruptive impact and avoid becoming a ready target for cuts.

One common approach is to find a sponsor or a champion within upper management. Within KM circles, the need for "senior level" support is often discussed as a key component of a KM program. This means a VP or other high level manager who can influence budget and ensure the program is not axed. However, proponents of this approach are not so quick to explain what to do if you can't get that support.

Even if you do find a champion, relying on one person for your existence is risky. Management, particularly in western corporations, has a habit of restructuring and repositioning itself frequently. It's called climbing the corporate ladder. What happens when your champion changes jobs? Can they still support your program?

Over the past 5-8 years, I've spent a nontrivial amount of time creating presentations explaining and justifying various KM programs to new managers. Having senior management support -- short of the CEO, and not even reliably in that case -- does not ensure that a KM program can be seen through to completion.

What we need is a more systematic approach to designing KM initiatives that are sustainable over the long-term.

Sunday, February 22, 2009

What I'm Playing: Persona 4

Persona 4 Box art
It seems unlikely, but I am playing Atlus's Shin Megami Tensei: Persona 4. Why so improbable? Because, as I've mentioned before:

  • I don't like RPGs,
  • I don't have time to play long games or games that need extended play between saves,
  • And I mostly play portable, not console games

Well, Persona 4 is a straight-up RPG on the PlayStation 2, complete with points system, level-ups, turn-based attacks, etc. Game play consists of long story episodes with few save spots and intermittent battles. Not exactly my usual type of game.

But I am addicted.

It is not the game play; I have hardly gotten far enough to have even a bare minimum of control over the game. Mostly it is an extended animation interspersed with my pressing X to move mechanically forward in the story. Oh, and every once in a while I get to choose my response in discussions with other characters in the story. Only every once in a while.

And there are battles. But what has me itching to keep playing is the story, the environment, and the presentation.

  • Story-wise, playing Persona 4 is like reliving high school, complete with the plodding pace, the often inane conversations, and seemingly menial activities. The game captures this perfectly -- including the verbal banter that often masks an intricate social dance of half truths, dares, flaunts, and feints. I don't need to relive high school (I am way past that) and there are many movies and TV shows that, sadly, pretend to. But few actually capture the meaningless intrusion of random external events quite like Persona 4.

  • This is made all the more interesting because the story is acted out in modern Japan. This is not a ploy to create a feeling of alienation. It is an artifact of the game's origin; it was developed in Japan and Atlus unapologetically makes no attempt to Westernize it. The result is a fascinating immersion in the smallest details of Japanese culture: the houses, the streets, the furniture and clothing, even the advertising in the trains, all share a distinctly non-western look.

  • Finally, the game is presented in a curious mix of 2D animation, 3D animated game sequences, audio, and printed text overlaid on top of two colored text boxes set at different angles on the bottom of the screen. The text, besides giving an edge to the presentation, also gives a "staged" appearance to the 3D segments (staged as in presented on a proscenium stage, as opposed to contrived) that helps to fit with the distinctly 2D animation. It is not a big deal, but just enough of a quirky -- partially formal, partially "hip" -- presentation to keep the player going through the story segments leading up to game play.

I know what is coming: a whole lot more fighting, dying and restarting, trial and error as I try to find the right combination of attacks and special powers, etc. Will the story be able to retain its interest through this?

Can't tell yet. But for the time being, I'm thoroughly enjoying the change of pace.