Much has been written about the adoption of technology from an industry perspective. Clay Christensen in The Innovator's Dilemma, Geoffrey Moore in Crossing the Chasm, and Malcolm Gladwell in The Tipping Point all articulate models for the adoption (or lack thereof) of technologies based on their position in the product lifecycle.
However, as interesting as these models are, they provide little solace to the individual customer who is trying to decide whether to purchase and roll out a specific technology for his or her own business. All of the preceding authors discuss technology at a macro level: its adoption by the market in terms of volume of customers. But for each customer there is a second, more important adoption that occurs after the purchase: the rollout and, hopefully, successful integration of the technology into that customer's specific business processes.
The problem is that no matter how "successful" a product is in the market, there is no guarantee it will actually prove effective when applied to a specific business situation. SAP may be the poster child for this syndrome: several large-scale implementations are rumored to have proven unusable and ultimately been abandoned.
So, what does determine whether a technology can successfully be incorporated into an existing business environment? The answer is not the technology's current market position or "disruptiveness" -- although those will impact the outcome. The real attributes that influence the success or failure of a technology rollout in an individual business all relate to the business itself: its culture, its environment, and its history.
Traditional Technology Rollout Plans
Any corporate technology plan worth its salt includes an adoption chart showing the expected rollout over time. These charts fall into two categories: the "s" curve and the stairstep.
The "s" curve shows slow but steady initial adoption shifting into a steep climb, then flattening out at a plateau, usually marked as 100% of the target audience. This model follows the "chasm" or "tipping point" theory: at some point enough early adopters are using the technology that word of mouth takes effect and rollout becomes self-sustaining. Adoption ramps up until success is achieved.
The stairstep is a more phased approach and assumes adoption based on the ability to train users. The steps in the chart are usually based on incrementally adding divisions or projects as the technology is rolled out progressively through the corporation.
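Both projections are easy to sketch numerically. The following is a minimal illustration, not a forecasting tool; the midpoint, steepness, and phase sizes are invented values chosen only to produce the familiar shapes. The "s" curve is modeled with a logistic function, and the stairstep with a simple step function.

```python
import math

def s_curve(week, midpoint=26, steepness=0.3, target=100):
    """Logistic adoption: slow start, steep middle, plateau near target %."""
    return target / (1 + math.exp(-steepness * (week - midpoint)))

def stairstep(week, weeks_per_phase=13, users_per_phase=25, target=100):
    """Phased adoption: a fixed block of users added as each division is trained."""
    phases_done = week // weeks_per_phase
    return min(target, phases_done * users_per_phase)

# Sample both projections over a year-long rollout.
for week in (0, 13, 26, 39, 52):
    print(f"week {week:2d}: s-curve {s_curve(week):5.1f}%  stairstep {stairstep(week):3d}%")
```

Sampled over a year, the logistic hugs zero, climbs sharply around its midpoint, and plateaus near 100%, while the step function jumps each time a new phase finishes training. Both, of course, assume no one ever stops using the tool.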
Neither chart takes into consideration that employees may choose not to use the new technology or may actively resist using it. And as much as we would like to think it doesn't happen, these are the real reasons technologies fail. There may be technical problems. There may be bugs and system failures. But ultimately what determines any technology's success or failure is whether the target employees agree to use it or not.
Understanding the Technology Adoption Curves
Where traditionally adoption is seen as a single curve, there are actually three equally important variables that need to be considered:
- Adoption rate
- Resistance rate
- Misuse and abuse
Therefore, the real adoption might look something like the following diagram.
Adoption is the number of employees actively using the technology. Resistance is the opposite of adoption; it is the number of employees who refuse to use the technology or actively complain about it to their friends and colleagues. Misuse is the number of users who are using the technology, but in ways it was not intended (and usually for activities that should not be encouraged).
Real adoption rates are more erratic and event driven than the theoretical s-curve or stairstep. There is usually a series of "bumps" with each announcement or management memo concerning the new product. However, usage then drops off after the bump. Why? Because unless the users see a direct impact on their own jobs, there is no incentive for them to keep using the new technology (beyond management dictate). So with each memo more people will try it; some will stick with it, but others will stop.
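This bump-and-decay pattern can be illustrated with a toy simulation. All of the numbers below (memo timing, bump size, the fraction of triers who stick) are invented for demonstration; the point is only the shape: each memo produces a spike, the spike decays, and only the users who saw a direct benefit remain.

```python
# Toy model of memo-driven adoption: each management memo produces a bump
# in trial usage, which then decays back toward the base of users who
# found real value in the tool. All parameter values are illustrative.

def simulate(weeks=30, memo_weeks=(1, 10, 20), bump=40, stick_rate=0.15, decay=0.7):
    base = 0.0       # users who keep using the tool (saw a direct benefit)
    transient = 0.0  # users only trying it because of the latest memo
    history = []
    for week in range(weeks):
        if week in memo_weeks:
            base += bump * stick_rate        # a fraction of each bump sticks
            transient += bump * (1 - stick_rate)
        transient *= decay                   # the rest drift away
        history.append(base + transient)
    return history

usage = simulate()
```

Plotting `usage` gives exactly the erratic, event-driven line described above: sharp spikes at each memo, rapid fall-off afterwards, and a slowly rising floor of genuine adopters.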
Resistance is difficult to measure, but has a real and significant impact on adoption. If users find the technology objectionable, too hard to use, or simply burdensome, they will avoid it, work around it, or use it grudgingly (and often badly). Resistance will tend to exaggerate the spikes and can often lead to a drop-off in usage over time.
Misuse is the hardest to account for, but is again a serious problem. The classic example of misuse is email: many users in large corporations use email as an archiving tool -- emailing themselves documents as a way of saving them (rather than leaving them on their PCs and risking their loss). The result is quick saturation of the mail storage system with little or no way to sort out the "misused" mails from real business correspondence.
Understanding and Accounting for Resistance
It would seem that resistance to a technology is solely a reaction to the usability or applicability of the technology to the function it performs, but that is not the case. People can reject technical solutions for a number of reasons. Yes, if it is difficult to use or hard to understand, resistance will be higher. But it also depends on whether there is already a solution in place.
Replacing an existing tool can be more difficult than instituting a new one. Even if the existing processes are outdated or overly complex, employees can be resistant to trading the known for something new. And it doesn't have to be one technology for another: there can be resistance to replacing even a manual process with a technical solution, if employees see the manual process as "working". In other words, unless the employees themselves see a problem, they are not likely to appreciate or accept the solution.
This is particularly problematic when replacing multiple point solutions with a single corporate-wide technology. Each of the existing tools will have advocates who will adamantly argue the merits of their own solution over the proposed replacement. And, quite frankly, in many cases their arguments are not entirely baseless. Each division may have instituted a point solution tuned towards its needs, and a corporate-wide solution is likely to result in some loss of functionality. Even if the overall outcome is better for the company, these separate divisions will see it as a step backwards for their own purposes.
So resistance is actually the result of a number of factors:
- Corporate culture: how accepting the organization is of change and technology in particular
- Current environment: whether there is an existing solution (or solutions) in place that is being replaced
- History: whether past rollouts have gone well or badly will heavily influence the receptiveness to further change
Clearly, when the technology you are implementing provides a unique and obvious advantage to the business and to those who must use the technology, then resistance will be low. But that combination of variables is rare. In most cases it is useful to take resistance into consideration when planning your rollout to lessen its impact.
Usually resistance can be overcome with sufficient management support. The implicit threat of losing one's job for not following through on a management dictate can help drive adoption. But at the same time, it will foster additional resistance as well. So if you take this approach, you had better be sure you have the necessary management support -- and not just verbal support -- to address any complaints that arise about overly aggressive deadlines, time lost to training, missing or faulty features, and so on.
Getting sufficient management attention for an extended period of time is not always possible, so the other option is to try to avoid resistance by not exerting too much pressure for adoption -- in other words, using the carrot instead of the stick. Obviously, the tradeoff with this technique (i.e., not demanding strict adoption or applying management pressure) is that adoption will be significantly slower. On the plus side, done well, adoption will be slow but steady, as there will be less resistance. But if there are strong advocates for alternate solutions, even this approach is likely to fail.
With either approach, there is likely to be at least some resistance, and the best policy is to preemptively counteract it. How? Preferably before rollout, or as early as possible during it, identify the most likely sources of resistance: alternative solutions, processes that will be affected by the change, and so on. Then identify the primary advocates for the alternatives and the most reputable critics of the change. Finally, approach these people personally. Explain the plans for rollout and the rationale, and ask them what significant issues they foresee in adoption.
The goal is to persuade these key individuals that the plans take their concerns into consideration. To succeed, you may need to actually change the rollout plans or modify the technology somewhat (which is why doing this before rollout begins is preferable), because the only way to convince people that their concerns are being taken seriously is to actually take them seriously.
Note, I did not say find the loudest or the harshest critics. The key is to find the most respected, dedicated, and sincere advocates. Loud critics can make your life a pain, but they can be overcome -- or at least counteracted -- by reasonable, respected people. You want to find the gurus, the experts people turn to for help. These are the people you want to convince.
Note that I also did not say to convince them that the planned rollout is the best option. Be realistic: you will not be able to convince everyone that your plans are the right solution. The goal is to get them to recognize that the plan is at least well thought out and that their alternatives have been considered, even if rejected. They may not turn around and advocate on your behalf, but at least they will not argue against the rollout, and they are likely to stand as a voice of reason during any confusion that arises along the way.
Understanding and Accounting for Misuse
Misuse is different from resistance. Whereas resistance results in a downturn in adoption, misuse gives the impression that adoption is progressing well, because it involves active use. The problem is that the use runs counter to the original intent and may well interfere with the ultimate business goal.
Misuse can be very hard to identify and sometimes even harder to stop once begun. As with resistance, the key is to try to predict where it will occur and then (if it is serious enough) design around it, rather than trying to clean up after it becomes epidemic. But unlike resistance, where you can often guess where opposition will come from, it is difficult to predict in advance all the possible misuses of a system.
Take SharePoint, for example. SharePoint is a very useful tool in some ways -- it mixes the best of automated web site design, document management, and Windows-based security. But it doesn't do any of them in any great depth. It provides the easy creation of sites and subsites, libraries and lists.
But if you allow users to readily create these repositories (which can be a very efficient way to manage artifacts -- especially for smaller projects or teams -- without requiring a "librarian") you are also making those individuals responsible for the appropriate use and maintenance of those sites. Unfortunately, as eager as people are to create repositories, it is very hard to get them to do proper maintenance and delete them or clean them up periodically.
So two possible misuses of SharePoint are creating too many sites and not removing "dead" sites when their usefulness is over. (This is above and beyond the usual misuse issues such as storing and sharing inappropriate materials: copyrighted music, videos, etc.)
The use of disk quotas gives the appearance of alleviating these problems, since it stops runaway inflation of individual sites. But it doesn't actually stop the misuse. People can just create more sites if they can't increase the volume of the ones they already have. Also, disk quotas do not address the problem of undeleted "dead" sites. Restricting the number of sites any one user can create is another deterrent to creating too many sites, but involves an arbitrary limit (how many sites is "too many"?) and can result in animosity from your user population.
One alternative, if you suspect this lack of cleanup will be prevalent, would be to institute a policy of deleting sites that become inactive for a set period of time. Note that to make this practical, you will need to enhance the application itself to identify and automate this procedure.
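As a sketch of what such an enhancement might look like: the snippet below flags sites whose last access is older than a configurable threshold. The site records and the `flag_inactive` helper are hypothetical, not a real SharePoint API; an actual implementation would read last-access times from the platform's usage logs and would likely notify site owners before deleting anything.

```python
from datetime import datetime, timedelta

# Hypothetical site records -- a real implementation would pull last-access
# times from the platform's usage logs, not from a hard-coded list.
sites = [
    {"url": "/sites/project-a", "last_accessed": datetime(2024, 1, 5)},
    {"url": "/sites/team-b",    "last_accessed": datetime(2024, 6, 1)},
]

def flag_inactive(sites, now, max_idle_days=60):
    """Return the URLs of sites whose last access is older than the idle threshold."""
    cutoff = now - timedelta(days=max_idle_days)
    return [s["url"] for s in sites if s["last_accessed"] < cutoff]

stale = flag_inactive(sites, now=datetime(2024, 6, 15))
```

Run on a schedule, with a notify-then-delete grace period, a sweep like this keeps the policy enforceable without surprising site owners.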
Users will complain that the content in inactive sites (for example, sites no one has accessed for more than 60 days) is still needed. But unless SharePoint is also your archiving tool (a really bad idea, by the way), storing old content offline can be easily addressed with alternative, less expensive solutions.
The key is to predict what forms of misuse are most likely to occur based on the nature of the business, proclivities of the users, and any gaps or open capabilities in the technologies and processes being rolled out. This may require some imaginative thinking. More importantly, once the danger areas are identified, there may need to be changes or additions to the technologies themselves to ensure the desired processes are followed and negative alternative uses are avoided.
Note that you don't want to eliminate all alternatives, since users are likely to discover creative and effective business uses for the technology that were never planned. But this is another reason why it is a good idea to monitor the rollout periodically (every 6 months or so) to see what sort of uses are developing. This allows you to catch both misuses you hadn't thought of but need to account for as well as creative new uses that you may want to acknowledge and promote throughout the user community.