Wednesday, August 29, 2007

My Computer is Dead

My computer is dead. This should not be a surprise to anyone. That's what computers do; they die.

In this case, my desktop was about a year old. I had never really liked it from the beginning. It was a mid-level machine of a major brand, but everything about it seemed a little "cheap" -- gaps at the seams, flimsy plastic facade, loose keyboard with too many keys. All around too many features with almost no value.

So I shouldn't have been surprised. We have had four or five power outages in the past few months. But each time the computer would come back up. Until Monday. I had shut it down for two days, came back, pushed the power button and... nothing. No click, no whir, no nothing.

No, I shouldn't be surprised. Especially since over the past month or so the fan had started to sound more like the little engine that could than a European sports car. A sure sign of strain. No, I shouldn't be surprised, but I am.

I am irked. All symptoms point to a bad power supply. So I could pay to have it replaced. But then again, it might be one of the other devices. I could spend several hours fiddling and trying to track it down, but there is no guarantee that after four hours and probably a hundred dollars in parts, I would have solved the problem.

What is most irksome is that this is about the fifth desktop machine I've owned. My last one, a Sony, I loved; it was a workhorse for three years. I replaced it because it was getting old and slow, not because it failed. I would have gotten another Sony except they don't make desktops anymore. Before that, I had four Macintoshes: three gems (all still operable) and one lemon from the day it arrived. Oh, and a Power Computing Mac clone. Now that was a machine! It failed three times and was fixed -- under warranty -- three times.

So I have seen failure, but I also loved that Mac clone -- there was nothing that compared. But this is different. My current PC is a commodity, expendable, replaceable.

Computers die, and when they die they take your data with them. So I am going to take the "easy" way out and get a replacement -- bigger and faster than before -- and I'll spend 1-3 days reinstalling all of the software I need and doing disk-to-disk copies of my data.

Will I be happy? Well, sort of. I'll be happy to be back online. I'll have the simple pleasure of something new, shiny, fast... But I also face the tedium of digging out all my installation CDs and installing (and restarting) for hours at a time... and no guarantee I won't have to do it all over again in a year. In fact, I am almost guaranteed I will have to do it over again in the next 1-4 years. Because computers are fragile.

Which brings me to my point. (There's a point to this? I hear you asking.)

Why do I have to do this? Why do I have to waste 1-3 days rebuilding my new PC with all of the software and data I had on my now defunct machine?

Since computers are a commodity, why does the operating system insist on binding your data and applications to this fragile hardware? Sure, my data is in My Documents and I can recover that fairly easily (assuming that the disk isn't fried). But what good is the data without the applications to use it or even see it?

Computers are designed on a very basic structure, a "stack" of abstractions:


  • Data
  • Applications
  • Operating System
  • Hardware

If the bottom layer of the stack -- hardware -- is likely to fail, you want to make sure your other layers are separate and recoverable. You could move the entire top three levels (i.e. replicate the C: drive). But in reality, the operating system is also a commodity. And if it is Windows, it is also designed to expand, cluttering your disk with updates, restore points, and other data that chews up all available space. Point of fact: I don't know anyone who has done a Microsoft Windows upgrade and been happy. The only truly successful upgrade is to replace both the OS and the underlying hardware (i.e. buy a new computer).

So, in fact, the most practical separation is Hardware+OS vs. Application+Data, because the OS is dependent on the hardware and the data is dependent on the application. (You can see this because the OS is bundled with the hardware; and although some apps are usually "bundled" with computers, they are usually pale imitations of the applications users really need.)

What you, the user, would really like would be for there to be separation at all levels -- so you could take the data, the applications (i.e. your working environment), or both to a new machine. But the way Windows is built by default, all three of the top layers are installed on a single drive, C:\. And although the data is separated into a folder structure by default (My Documents), the applications are bound in a death-like grip with the OS which is impossible to break without reinstalling. You can move C:\Program Files to a new computer, but nothing will work, because the actual application components are spread around and -- more importantly -- Windows encourages the applications to build in dependencies on entries in the OS registry.

The result is a system that cannot easily be deconstructed into its logical parts (if at all). The solution, however, is both simple and obvious -- given the proper attitude and efforts of the interested parties. It is:
  • Redirect My Documents to separate media, not just a subfolder of one disk shared with the OS and applications. Preferably, encourage more reliable and hardy flash drives for easy transport between machines. (2-, even 4-gig flash drives are now a commodity, which is more than enough for most normal human beings. The 200-300 gig drives are primarily for the Windows operating system...)
  • Load the Windows registry dynamically while booting, to pick up application settings from a separate configuration folder. Preferably, allow alternate locations, so users could define multiple configurations (Home, Business, Video Editing, etc.) that could be selected at boot time.
  • Put the application files, configuration folder, and DLLs on a separate root, possibly separate media, so that the entire configuration can be moved from one machine to another en masse.

Saturday, August 25, 2007

Squirrels Don't Need to Remember


As I was finishing off my post on Google, I came across an ad in InformationWeek that demonstrates the we-know-better attitude I was discussing there. The ad was striking not only because it is such an extreme example of that attitude, but because of the glaring fallacy of its argument.

The ad said, in large letters:

Squirrels don't remember where they hide their nuts.

Then in smaller print:

They're not looking in the right places for what they need.
But you can. With proven information management software from SAS.

Say what!?! There are so many things wrong with the logic here it is hard to know where to begin. Yeah, yeah, yeah... it's just marketing, so you aren't supposed to think, you are supposed to feel. But it feels all wrong.

First off, I'm not sure if squirrels remember where they hide their nuts or not. After a short bit of scrounging around the web (using Google, of course, although my wife would use Yahoo! -- the results are essentially the same), it seems that naturalists and biologists agree that they do not remember. But at the same time, they don't need to remember, because they use their sense of smell to find the nuts, whether they are nuts they buried themselves or ones another squirrel set aside. (Hey! That's not a half bad analogy for search itself...)

And I don't know about you, but where I live I don't see a lot of squirrels dying of hunger. In fact, they seem to be flourishing without the assistance of any "proven nut management" software. So they have no problem "not looking in the right places."

The more you think about it, the worse it gets. If you follow the URL listed in the ad they come right out and say it:

In a complex business world, the information you need to be successful may be hidden in the most improbable places. Unlike the squirrel, however, you don’t have time to forage for answers.
What is this fear of having to spend five minutes finding something? Now, I know SAS is not advertising a search solution; they are selling business analytics. But the argument they make is the same, and the argument against it is also identical -- if I know what I will need to know, I can safely structure my content to fit the future answer (and there are cases where that holds: budgetary information, business contacts, the output of standard procedures); but if I don't know in advance, messy information is not made less messy by applying artificial filters and strictures to its storage and access.

I'm not a squirrel, but there are plenty of times I'm looking for nuggets of information and, with a decent search engine and access to the data (i.e. a "good sense of smell"), I can find them. There are also plenty of times I've been trying to get a nugget out of a "proven information management" system that confounds my best efforts to answer the unnecessarily complex or irrelevant questions it insists on asking before giving me its "best answer".

Squirrels don't need to remember. And neither, thank goodness, do I.

Monday, August 20, 2007

Why Don't We Just Use Google?

I get this question about once a week. The reasons for not using Google inside the company I work for* are so numerous that I tend to shrug off the question nowadays. But it probably deserves answering fully at least once.

I understand why they ask: it seems so easy to find information out on the internet. Why is it so complicated and difficult inside the corporate firewall, where there are so many people (including myself) working to make it accessible?

Why Not?

The argument goes something like this:

  • Much of our information inside the firewall is critical business data and is not made available to every Tom, Dick, and Harry (even within the company), so it cannot be crawled by a generic search engine.
  • We can't have our employees wasting their time searching through page after page of search results. We need to provide a "better search" tuned to their needs. This better search means:

    1. Only indexing "quality" content -- that which is deemed part of the official corporate intranet -- not cluttering the results with unstructured information such as discussions, forums, blogs, personal sites, etc.
    2. Qualifying certain content as "best bets" (or whatever you like to call them) -- so the right answer shows up first and highlighted.
    3. Providing custom search interfaces for specific types of data -- such as customer testimonials, employee records, market data, sales collateral, etc.
  • Much of the important business information is in special databases and applications, such as SAP, Documentum, SharePoint, Lotus Notes... name your favorite business app. Therefore, you must use that application's UI and custom search (see above) to find and access it.
  • We've already spent significant resources in both time and money creating the high quality search environment we have (see above). We can't afford to throw away more money and start over.
  • Finally, we're not the group responsible for the corporate intranet and search, so we couldn't do anything about it even if we wanted to. Besides, we'd get our wrists slapped if we went out and bought and installed a competitor to the corporate solution.

OK. So that's the argument. Is it valid? Well, the part about restricted access and the Google Appliance** having trouble crawling content it cannot reach is true enough. It is a technical limitation that creates a problem for any proposed search solution (more on this later).

As for the "better search" argument, this is wrong on at least two points. The first is so obvious it barely needs stating, but perhaps it is its obviousness that makes it hard for the proponents of intranet search solutions to see. That is, if your custom search is so much better, why do people keep suggesting alternatives out of frustration?

Sure, there's always a certain number of naysayers to any decision, change, or technology within a company. But this isn't just naysaying. The people asking the question are honestly saying "I can do this better somewhere else -- better, faster, and easier. Why is it so difficult inside the firewall?" Answering "but we're better" may convince management, but it doesn't win over the users.

The second reason this is wrong is more complex. It involves the rationale for "better" and the assumptions that underlie it. They claim they are better because they get the users to the right answer more directly. Of course, if that were true we wouldn't hear so many complaints (point #1), but more importantly, there are two key assumptions here that deserve attention.

  • The first assumption for this to be true is that they (the designers of the search interface) know what the right answers are. That in itself is questionable.
  • The second assumption -- implicit in the first -- is that if they know the right answers, then they must also know what the questions will be!

These assumptions justify decisions like restricting what content is indexed and excluding "noise" such as forums and blogs. But from a knowledge management perspective, these "noisy" channels are where the true nuggets of wisdom and experience are shared! This is one of the longstanding dilemmas of KM: whether to focus on refined knowledge (often referred to as "explicit knowledge" or "best practices") or to support the messier knowledge-in-action of forums and distribution lists where tacit knowledge (the hard-won tidbits of wisdom from experience) becomes explicit through the interaction of practitioners.

By eliminating the "clutter" of unstructured knowledge, the enterprise search reshapes itself as a qualified but sterile channel for "approved" knowledge, not what people need to answer the specific questions that arise in their day-to-day work. And not what they have come to expect from a global search engine.

I say it is a dilemma because it is not a matter of picking one over the other: explicit vs. tacit, structured vs. unstructured. Both have their place and need to be supported -- and findable. And when the "corporate" search solution excludes one or the other, it is very difficult for KM or IA to recover the necessary balance.

So, what about the rest of the argument, custom searches and special applications? Yes, it is true these exist. But why are these separate and mutually exclusive to global search? Even if these custom searches and UIs exist, why can't the standard corporate search also return appropriate results from these databases?

The answer to that question is two-fold. The first part goes back to argument #1: often these custom applications and databases have restrictive access permissions that don't allow them to be crawled. The second part is a conceit similar to the argument that the current search is "better"; the owners and developers of these applications feel their interfaces are better and see no need to expose their data for generalized crawling.

The problem with this attitude is that it puts the onus on the user to know that the data exists and to go find it. As a KM professional, I am familiar with many of the resources within the company, but that's my job. The average employee is pretty much in the dark. Why should they be expected to know the contents and whereabouts of every website and database in the company?

The severity of the problem was brought home to me recently. I am not responsible for corporate search, or for the content in many of these special repositories. But I am responsible for the knowledge architecture for my division and was aware of the problem our employees were having finding knowledge.

We couldn't "just use Google" and we don't have the resources to do a federated search (which would be another alternative). Instead, I built a simple javascript-based search interface that provides a text box and a pull-down menu asking which repository you want to search. The results are displayed in a frame so the search interface stays visible, in case you want to switch to searching a different resource.

Very simple. Crude even. But amazingly popular. It doesn't consolidate results; it doesn't do anything more than many of our KM websites, which already list all these resources in one way or another. Except that it is small, concise, and it lets the user take action. I was surprised how enthusiastic our users were for this tool. Which goes to show how little would be needed to help them...
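
For the curious, here is a minimal sketch of the kind of thing I mean: a text box, a pull-down of repositories, and a frame for the results. The repository names and query-URL patterns below are hypothetical placeholders, not our actual systems; a real version would simply list whatever search URLs your company already exposes.

    <!-- A minimal sketch of the repository switcher described above.
         The repository names and query-URL patterns are hypothetical
         placeholders; substitute the real search URLs for your intranet. -->
    <form id="repo-search">
      <input type="text" name="q" size="40" />
      <select name="repo">
        <option value="http://intranet.example.com/search?q=">Corporate intranet</option>
        <option value="http://docs.example.com/search?query=">Document repository</option>
        <option value="http://forums.example.com/search?terms=">Forums and discussions</option>
      </select>
      <input type="submit" value="Search" />
    </form>

    <!-- Results load here, so the search controls stay visible for switching. -->
    <iframe name="results" width="100%" height="500"></iframe>

    <script>
      // Build the query URL for the selected repository and point the
      // results frame at it, instead of navigating the whole page away.
      document.getElementById("repo-search").onsubmit = function () {
        var q = encodeURIComponent(this.elements["q"].value);
        var base = this.elements["repo"].value;
        window.frames["results"].location = base + q;
        return false; // suppress the normal form submission
      };
    </script>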

What if?

So, back to the point. If you hadn't guessed, I think many corporate intranets would be significantly improved if they used a commercial search engine like Google. But... but it is not as simple as just installing hardware and software.

The primary argument against commercial crawlers is a tiny bit technical and a large part cultural. And that is argument #1: the attitude toward information as "intellectual property" that has to be protected against misuse by the company's own employees (or contractors, or partners, or customers...). This attitude leads to an intranet that operates like a building full of locked rooms with no signs on the doors. (Funnily enough, not unlike the physical offices of many corporations I have visited...) It confounds the ability to use the technology, like Google, that makes the public internet the amazing resource it is.

To use Google, or any other commercial search engine, the company as a whole -- starting from the very top and going through all levels of management -- has to believe that information only has value when it is used, not when it is locked up. There is no inherent value in information, only in what you can do with it. And providing the mechanisms to make it accessible proportionally increases its value to the company. This includes:

  • Making information read-accessible to all intranet users
  • Making RSS feeds, XML representations, and other open interfaces for databases and business applications as important as their own custom UIs, so the content can be crawled and reused on a broader scale (a quick sketch of that kind of reuse follows this list)
  • Crawling all content, including content created by individuals through discussions, forums, personal blogs, etc.
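
To make the second point a bit more concrete, here is a quick, hedged sketch of what those open interfaces buy you: once a repository publishes even a bare-bones RSS feed, any crawler, portal page, or throwaway script can pick its content up and reuse it somewhere else. The feed URL and the notion of a "project database feed" are hypothetical, purely for illustration.

    // A sketch of reusing an open RSS interface: pull the items out of a
    // repository's feed so they can be indexed, listed on a portal page, etc.
    // The feed URL below is a hypothetical placeholder.
    async function listFeedItems(feedUrl) {
      var response = await fetch(feedUrl); // works because the feed is read-accessible
      var text = await response.text();
      var xml = new DOMParser().parseFromString(text, "application/xml");
      // Keep just the pieces a search engine or portal would index and display.
      return Array.from(xml.querySelectorAll("item")).map(function (item) {
        function grab(tag) {
          var node = item.querySelector(tag);
          return node ? node.textContent : "";
        }
        return { title: grab("title"), link: grab("link"), date: grab("pubDate") };
      });
    }

    // Example: surface the latest entries from a (hypothetical) project database feed.
    listFeedItems("http://intranet.example.com/projects/feed.rss")
      .then(function (items) {
        items.forEach(function (i) { console.log(i.date, i.title, i.link); });
      });
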
Are these measures a panacea? No. Other, more focused activities are also needed. (Notice that I didn't say abandon special applications and custom UIs, just open them up.) But search is such a fundamental, rudimentary activity for knowledge management that crippling it, as is done so often, sets you off on the wrong foot and puts far more pressure on the other solutions to "get it right". (Which, to be truthful, they are not likely to do on their own.)

And on a pragmatic level, is this approach even feasible? Perhaps not. Changing an entire corporate culture is an unlikely event. So this ideal situation is more likely to happen in small companies where top management has a clear vision, or in new startups.

Or Else...

So, what are the alternatives? If you can't change corporate culture (and that is a pretty tall order for anyone, including a CEO), what can you do to improve the situation?

Even small steps can have a significant impact, just as my crude javascript-based search consolidation did for the employees I work with. Some things you can do are:

  • Start by getting the "noisy" stuff indexed. If they won't include it in the primary index, look into including it as a separate scope within the same interface.
  • If your corporate search isn't working for you, notify your users of the alternatives. Make a list of all of the best resources you know of within the corporation for searching. As messy as the list might be, it will help somebody.
  • Don't create any new silos! If you manage content, make sure it is accessible and work to get it indexed by the corporate search engine (and any other search engines you know of internally).


*Footnote: I am using the company I work for as an example, but I know for a fact from talking to others that the same questions plague large corporations both nationally and globally.

**Footnote: Just as my company is a placeholder for pretty much any large corporation, Google is a placeholder for any good, commercial crawler-based search engine. Fill in the blank with your favorite...

Monday, August 13, 2007

The Good, the Bad, and the Ugly

I'm a curmudgeon. I'm a grumpy old man. At least it seems that way whenever I hear myself talking about books, movies, artists, musicians, etc...

Things are never good enough. Many of my favorite artists seem to have fallen and can't pick themselves up. Most modern art is, frankly, boring. New music is more noise than music (and this coming from an avid ex-Punker). Contemporary poetry is repeating the mistakes of the last ten years, and the ten years before that, and the ten years before that.... I seem to have a bad word to say about everything.

At the same time I can almost hear myself saying defensively “but I still like so-and-so...” or “such-and-such wasn't too bad....” But there's no real consolation in the words.

The fact is I wasn't so quick to judge when I was younger. It was all new to me and I took it all in like a hungry man at an all-you-can-eat buffet. Mind you, I didn't like it any better then than I do today. But back in the day I'd cast off what I didn't like or couldn't understand with a shrug and move on. Perhaps it was too deep for me, too shallow, not my type of thing... Whatever. It was tried, discarded, and the next item taken up. I was in love with the search for something fine, something that captured the essence of truth and beauty.

My ability to suffer the search quietly has left me. Now I rail against the poseurs, the time wasters, the annoying wannabes. But still, in the back of my mind, there is the fear that my anger and hardened skin not only protect me from failed art, but blind me to the quiet presence of something beautiful. Something I won't recognize and will step on in my ignorance.

This feeling can be so pervasive that it comes as a bit of a shock when you get proof to the contrary.

The other day I was reading Ploughshares, one of the better poetry journals. If reading poetry is dangerous for the curmudgeon, reading poetry magazines is practically suicidal. The ratio of good poems to mediocre is always low. And because of the varying styles and voices, you really need to be open minded to sort the wheat from the chaff. (Unlike a single author's book where you have the time to recognize and learn to appreciate the poet's voice, for example.)

So, I was reading Ploughshares (Vol. 32, No. 4) and came across the poem “Recognitions” by W. S. Merwin. Now, I have a soft spot for Merwin's early work. It is cryptic, difficult, but very rewarding. However, in later years he has been one of the targets of my what's-wrong-with-older-poets rant. His mind seemed to get rich and soft somewhere around the 80's-90's. (Merwin is one of the few poets who seemed to achieve a comfortable and consistent level of fame through his ongoing publication in the likes of the New Yorker.)

The poem starts with an unsupportable premise:

a wave and an ash tree were sisters
This is the sort of statement that drives a wedge between the author and the reader, forcing you to confront your “willing suspension of disbelief” head on. Why? How? And Merwin doesn't let you off the hook. He keeps up the fairy tale, straining the thin line of probability with each new image:

they had been separated since they were children
but they went on believing in each other
though each was sure that the other must be lost
Even when the story attempts to draw a connection, it remains purely within the realm of fairy tale:

they cherished traits of themselves that they thought of
as family resemblances features they held in common
the sheen of the wave fluttered in remembrance
of the undersides of the leaves of the ash tree
recalled the wave as the breeze lifted it
And then the narrator interrupts to juxtapose the real and imaginary still farther:

and they wrote to each other every day
Unlike reading Charles Wright's poem and reluctantly having to reject images that ultimately have no support, here Merwin is deliberately – and charmingly – holding the bizarre image of the wave and ash tree up to your face and not letting you forget it.

So what is the outcome? While you as reader are busy struggling with the initial coupling of force and nature, Merwin secretly leads you to a part of the story you do recognize, understand, and believe -- the letters:

some of which have come to light only now
revealing in their old but familiar language
a view of the world we could not have guessed at

but that we always wanted to believe
And we do believe. We believe in the wave and the ash and – more importantly – we believe that the possibility of these tales and believing in them is more important than the tale itself.

Unlike the poem by Wright (and I am just using that one poem as an example, since Wright is an excellent poet and the poem is not really characteristic of his work) which leaves us questioning whether the poem is true, Merwin has challenged us, teased us, tricked us, and led us to believe in a poem of only 17 lines. It is sheer genius.

This is the type of poem that keeps you going for weeks, lets you forgive – even forget! -- the hundreds of bad poems you read to reach it. It is a work of art. And I guess I am not such a curmudgeon after all.

Sunday, August 12, 2007

Text is Good

Text is good. That might not be a shocking statement, but it does run counter to recent assumptions concerning the education and practices of modern youth.

For much of the 90's we were told that “Johnny Can't Read” and that TV, video games, and waning educational standards had made our children a generation of illiterate media zombies. Even when they do attempt to communicate, they resort to dialects that are viewed as further devolution of the English language: l33t speak and the cryptic text messaging abbreviations, FCOL.

Given all the doomsaying and the recent meteoric rise of non-written communication such as YouTube, you might think the youngest members of the work force would be barely able to put two words together. However, evidence points to the contrary.

Text is still good. In fact, text may be better off now than it has been for a long time! Yes, readership of printed newspapers is declining, but the number of blogs – written communication at its purest – is increasing at a phenomenal rate. 55 million blogs worldwide by one count, with 84 thousand new blogs discovered each day (blogimpulse as of August 12, 2007).

I am not claiming all of these blogs are literary masterpieces or even reach beyond a rudimentary level of speech, but they are the first sign of the resurgence of text. And the fact that the blogosphere has maintained its remarkable ascension is at least in part due to the excellent, often previously undiscovered, practitioners of the written word at work there. Many of the prominent bloggers -- such as David Weinberger (JoHo), Cory Doctorow (of Boing Boing), and Robert Scoble (Scobleizer) -- stand out as much for their writing ability as for the ideas themselves.

Even in the arena of video games, one of the primary bastions of slackerdom, writing has come into prominence. A number of blogs and bloggers have risen to the top as gaming “journalists.” Many of these writers have traditional journalism backgrounds, either as students in college or through previous jobs at newspapers and magazines. (Many of them still maintain those roles.) The video game blogging sites, from the rambunctious Destructoid to the more explicitly journalistic and commercial Kotaku, Gamasutra, 1Up, and IGN, all display a real knack for the written word.

Again, I am not arguing that these sites are replacements for traditional journalistic media (newspapers and such). They tend to promote a form of “new journalism” -- post-Hunter Thompson gonzo journalism, without the self-destructive impulses but keeping the fierce personal perspective. They encourage a sort of rapid-fire, off-the-top-of-the-head commentary. (This is a broad generalization, and many of these sites do occasional “features” to provide room for more in-depth evaluations. But in general they do cater to the fast and furious reading style of their audience.)

Whether these larger sites like Gawker Media are “real journalism” or not is a topic for a different discussion. My point is that, either way, they do represent lively and adept writing and they are immensely popular, proving the resurgence of text as a good thing.

One last thought: The use of a contemporary vernacular – whether it is leet speak or over-the-top text messaging acronyms – is a hallmark of every generation and is consistently decried by the previous generation as a sign of the imminent collapse of society as we know it. Whether it was Jive in the 30's and 40's, the language of the beats in the 50's, flower child lingo in the 60's and 70's, or the Gangsta talk of the 90's, each generation finds a way to self-identify (and exclude others) through language. Some small portion of each gets absorbed into the ongoing lexicon of our culture. The rest is history.

Saturday, August 4, 2007

What's Wrong with Email?

Why is everyone gunning for email? The prognostications of its imminent demise (or calls for its outright impeachment) are constant – each week brings a new report on how old-fashioned and oh-so-1980's it is in the eyes of the MySpace generation (see here and here).

So, what's wrong with it? Or more appropriately, what is so new and different about the contenders for its crown?

The argument seems to be that email is old-fashioned, disorganized, riddled with spam, and used for too many things for which there are better solutions.

Old-Fashioned? Yes. Email is essentially text-based. Yes, it does pictures and you can force it to do formatting or attach video or audio. But you cannot rely on anything but the text reaching the other end. (That is part of its old-fashioned charm....)

Disorganized? Yes again. Folders are an all too familiar mechanism for organizing information. But I don't know anyone who has actually got their mail under control. The volume and nature of email changes too frequently; it confounds and exceeds any individual's ability to keep its structure up to date. As a consequence, the inbox starts to bulge like an overstuffed filing cabinet. Search and sort (by sender, by subject) become the primary tools for finding old mail.

Spam? You got it. Nuff said.

Used for too many things? Well, wait a minute...

If email is all that bad, why is it used so much? It can’t only be out of habit and inertia.

Email is just a container, like files are containers, or websites. Email is traditionally the predominant container for four reasons:
  • It is private, personal. It is your email and acts as your online identity. (This is reinforced by the number of other internet services that use the email address as your unique identifier.) You get to control how it is organized (or disorganized) and what gets shared or not.

  • It is easy to use. Ignoring the disorganization problem, it is as easy to use as file folders, with the added advantage that it is equally easy for others to send you content – and for you to delete their messages if desired.

  • It’s ubiquitous. Pretty much anything can be put into an email message container. This is part of its simplicity. Of course, it is also part of its organizational dysfunction, since there are no capabilities for “tagging” content beyond the basic to/from/subject and folder hierarchy.

  • Text is good. It's not fancy; it's not hip; it's not modern. But text is still the most pervasive, low-fi medium of expression we have.

The fact is that email is so simple and so ubiquitous that it can be used for many different tasks – personal messages, corporate announcements, reminders, alerts, scheduled events, sharing and storing project documents, archiving... even rants, tirades, and laundry lists. The fact that any one of these might be done more effectively elsewhere doesn't outweigh the importance that I control it and I have everything I need in one place.

Which brings us to new users -- those who do not need email and therefore have not yet filled it with both content and context. The latest article from CNET restates the fact that teens don't use email, eschewing it for instant messages, phone texting, and social networking sites. How have they managed to avoid email's siren call?

First, they have alternatives. Until 5-10 years ago, the only alternative to email was the telephone, which fulfills very few of email's benefits.

Next, the alternatives are both public and private at the same time. IM and text messaging are private. Social networking sites provide essentially private within-the-site email as well as more public messaging/commenting areas.

In almost all cases, the alternatives are as easy to use as email or easier.* Oh! And they are almost all text-based. So text is still good.

Which leaves only ubiquity. Here is where the true distinction arises. Few if any of the tools of modern youth share the ubiquity of email (except the phone). But there is a hint about the meaning of this distinction too: since youngsters are not fully involved in the working world, they don't care about ubiquity – or a single identity.

An article in the BBC points out that just as the youth of today flock to the web 2.0 sites, they also happily abandon and replace tools -- and their identities on those sites -- at whim. They use multiple sites – each with a separate identity or multiple identities – as needed to keep up with their friends. This works when socializing is your major activity. However, it is in direct conflict with the needs of business – where unique, persistent, and reliable identities and ubiquity are essential. By this I mean business as an employee and as a consumer, where commercial sites still predominantly or exclusively require an email address and a credit card.

So will this flood of polymorphic youth carry their current behaviors into the business world? I suspect they will try, but they won't succeed. To ensure reliability and security, firms are likely to continue to insist on single identities and a ubiquitous communication vehicle for business – in other words, email. However, four possible developments might change this prognosis:

Development #1: A secure mechanism for aliasing different accounts to a unique identity is developed, so I can have any number of email accounts, blogs, IM accounts, etc., that are all reliably and securely associated with my online persona. Open Directory aims at this goal, but it is unclear if it is easy enough or understandable enough to be trusted by humans.

Development #2: Telephones replace email. Originally, telephones could not compete, since they were a strictly audio medium. However, they now do text, pictures, video, store and forward (i.e. voicemail), etc. And phone numbers are – almost exclusively – associated with individuals. However, to become ubiquitous, they will need a way to access the data associated with your phone number from other devices – specifically PCs. This does not seem to be a direction the phone companies are interested in going. Also, the current pricing for add-on phone services (such as texting) is simply exorbitant and prohibitive for phone-as-ID to become pervasive.

Development #3: Credit cards replace email. It is the business of credit card companies to manage a reliable one-to-one relationship between credit card numbers and users. It is perhaps the only truly reliable unique, global identifier. (A person's social security number is also reliable and unique, but is only valid within the country to which it applies.) Currently, credit card companies provide none of the communication services of email, texting, voice, etc. However, if the situation gets worse, there is an opportunity for them to step in and both solve a problem and further cement their control over their customers' personal and financial data. (The advantage of credit card IDs over phone IDs is that they are cheap – practically free – as long as you pay your bills...)

Development #4: The IM service providers A.) agree on real interoperability, and IM addresses become ubiquitous and a true rival to email; B.) refuse to agree on interoperability (which seems to be their current path), and users force the industry to accept multiple identities as a consequence of its inability to agree-to-agree on a crucial piece of technology; or C.) they muddle along half-competing, half-collaborating, and frustrating the user base so much that users rise up and adopt an open source solution (such as open directory) as a requirement for managing their own identities.

Quite frankly, I don't believe any of these options will occur. The incentive to “rule the roost” of user identity management is likely to be counteracted by competition from the other contenders and by the inertia of moving away from the current, perfectly valid option: email. Of the lot of them, I believe the phone is the only replacement that is truly likely to occur. The real driving factor is when wireless phone services return to being a commodity – as they eventually must – priced as a low-cost service rather than an overpriced per-usage luxury. The sooner this happens, the more likely phone services have a chance to actually replace email. The longer it takes, the more entrenched email addresses will become as unique identifiers and the less likely they will be displaced.

Email usage will drop off as the predominance of IM, text messaging, and other alternatives increase. But at the same time, email vendors will be integrating these features into their products, leveraging their presence and platform ubiquity to maintain their leadership as the users' communication identity.


* Footnote: I would claim text messaging is the exception to the rule concerning ease of use. But given the form factor of current cell phones, email can do no better on the same device, teaching us all to type with our thumbs...