Friday, July 29, 2011

Going Around in Circles

I, like several million other people, have recently been trying out Google+. G+ has received plenty of press in the past few weeks and I don't want to add to the noise. But when I started I noticed two things that I didn't see mentioned until recently:
  • All my G+ friends are KM types, or otherwise involved professionally in communication and social interaction. Few if any of my "normal" friends are using G+ (or see why they should).
  • I don't like making circles. They require too much thinking.
The first observation was confirmed this week when the unofficial Google Plus Directory (http://findpeopleonplus.com/) posted demographic information on G+ users based on their ascribed professions. Most of the top twenty are technology or information-focused professions. And many of those that are not explicitly "in the business" are arguably tied to technology (such as writers and designers).

My second issue is around circles. I understand they sound like a good idea. My personal (and professional) relationships are more complex than Facebook's simplistic friends / non-friends model. So being able to define your relationships in more detail sounds like a positive step.

The problem is, it's far more difficult than it sounds. I have friend friends and I have professional friends. I have professional friends and professional acquaintances. Some work for my old employer; some used to; some never did. Some know I am interested in poetry and video games (among other things); some don't. A few have met my wife; some may not even know I am married.

When I start to break it down, it is not only not binary, it is more complex than even I can describe. Which is what makes Google+'s circles so frustrating. They require too much thinking. This is not a technical issue, per se, but the difficulty of turning an implicit, organic process into an explicit, concrete categorization.

In other words, my friends are analog and circles are digital.
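
To put rough numbers on the problem: even if each friendship varied along only a handful of clean yes/no facets, the number of distinct audiences you might want explodes combinatorially. A minimal sketch in Python (the facet names are invented for illustration, not anything Google+ actually tracks):

    from itertools import combinations

    # Hypothetical yes/no facets of a friendship.
    facets = ["professional", "worked-at-old-employer",
              "knows-about-poetry", "has-met-my-wife"]

    # Every distinct combination of facets is potentially its own audience,
    # that is, its own circle.
    audiences = [set(c) for r in range(len(facets) + 1)
                 for c in combinations(facets, r)]

    print(len(audiences))  # 2**4 = 16 possible circles from just four facets

And that is the easy case, where the facets really are binary. As noted above, they aren't.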

Andrew McAfee confirmed my suspicions in a blog post. He goes into far more depth and argues that it is an issue of a priori vs. a posteriori decisions. I am sure he is right from a process perspective, but I am not even sure deciding after I find an item to share is going to help that much.

Part of the joy of Twitter is that there is no decision. You post or you don't. You open yourself to anyone who chooses to listen (essentially). Oh, it has its limitations as well (starting with the length of the messages). But the freedom from thinking about who a message is intended for can be quite liberating.

However, that freedom doesn't have much to do with friends; it has more to do with publishing (or proclaiming). But it can be a useful and easier process in the digital world than trying to sort out your friends.

    Monday, July 18, 2011

    Holy Crap, Batman! The Social Business Stack

    I just read Dion Hinchcliffe's Social Business Stack at the Dachis Group and all I can say is [expletive deleted]. Dang! That's one impressive and imposing architectural diagram!

    I'm not saying the diagram is architecturally incorrect. In fact, I suspect it is accurate from a corporate IT perspective. It looks like so many other all-inclusive architectures.

    The trouble is no normal human being in their right mind could look at it and do anything but shake in their boots. This is the sort of diagram that justifies five years of intense IT investment. It also presupposes (or pre-justifies) failure since there are so many moving parts.

    The stack is accurate in that it captures all of the possible interactions and interdependencies from a KM and IT perspective. (That is, the old people/processes/technology triumvirate.) But the fact is no one really cares about anything but the top layer. (The Social/People layer.)

    So why is this so complex and social networking "in the wild" so simple? Well, it isn't that simple in real life. But:
    • On the public web people are more than willing to do things manually to "make it work", such as putting in links to blogs, etc. by hand.
    • If it does become difficult, there's an app for that. People are happy to juggle 5, 10, even 20 separate apps such as bit.ly, twitpic, instagr.am, last.fm, etc. to achieve their goals. What's more, it is cumulative: people learn new tricks from watching their friends' posts.
    • Ultimately, the public internet is an almost limitless (since it is always growing) source of additional material, support, and inspiration, and a target for discussion.
     In other words, all the other layers of Dion's stack exist in the public instance but no one cares about them. Not that they aren't necessary. The next four layers (Data, Delivery, Aggregation, and Discovery) are just assumed to be there. And the critical vertical integration "glue" is heavily biased towards manual effort and simple HTTP links, rather than some complicated automation.

    The last two layers (Security and Business Model) also exist. But people are amazingly carefree about security on the public web and the Business Model is the responsibility of the technology/service providers and people simply give a yea or nay vote on the instantiation by staying with the service or moving on.

    So, what does this mean? I think the first lesson is that, as usual, corporations are taking something simple (or deeply complex but with a simple surface layer) and getting caught up in the morass that underlies it. Secondly, what the stack doesn't show is the often terribly anaemic state of the lower layers behind corporate firewalls. The oft-repeated aphorism "If only we knew what we know" can usually be expanded to its various corollaries:
    • "If only we knew who knew what we know"
    • "If only we knew where we stored what we know"
    • "If only we could find what we know"
    • "If only I had permission to know what we know"
    • etc.
    So, I think the social business stack as represented is correct. But I am terribly concerned about what such a diagram would be used for. Because, ultimately, it is people — not technology or processes — that are the deciding factor. And people have astonishing resilience and patience for "making things work" when they have an interest in the outcome.

    Friday, May 6, 2011

    What I'm Playing: 9 Hours 9 Persons 9 Doors

    I am currently playing three puzzle games on the Nintendo DS. They are all different, but playing them together helps clarify what works — and what doesn't work — in each.

    The first game I started was 9 Hours 9 Persons 9 Doors (affectionately referred to as 999). This is a puzzle/story game, where to progress through the story you need to solve puzzles. It is a pretty well-established genre, similar in nature to the Nancy Drew games, Professor Layton, Myst, etc.

    999 is what might be described as a survival-horror puzzler, because the story involves your character, Junpei, being kidnapped and trapped on a sinking ship with eight other people. They must work together to escape before time runs out (the eponymous 9 hours).

    Let me just say I expected to like this game. It sounded unusual and got quite good reviews for both its puzzles and its ambiance. But I was seriously disappointed.

    The puzzles are fine. In fact, the game starts off well with a locked room puzzle of moderate difficulty. No wasting half an hour on simplistic "training" levels. Unfortunately, the game takes a turn for the worse, in several respects, when story elements are introduced.

    For a story-driven puzzler, the story line is grisly and unnecessarily so. Death and mutilation are intended to give you as the player a sense of suspense and tension. But since you have so little control over the action of the game (except tapping the screen to advance the story) the gruesome events are only uncomfortable.

    And the discomfort is accentuated by the unrealistic story line. You are trapped with eight other people who look like they just came from a circus (literally) or a Village People tribute concert. Each a unique and comically stereotyped representation of... something. A belly dancer, a heavy-set laborer, an effete aristocrat... you get the idea.

    To make matters worse, the story is presented in a crude 2D pantomime. Static images, with flat images of the cartoony cast (literally, drawn as cartoons) floating in and out of view like shadow puppets. This is suspense? Even viewed as a retro "edgy" presentation style, the clumsy graphics become tedious.

    Worse yet, the awkwardness of the presentation carries over to the game play. At one point, you are required to turn a ship's wheel. A relatively simple puzzle device, given you are shifted to a direct front view. And with touch control, you would think it a simple thing to allow the user to touch and drag the wheel to turn it, no?

    No. The UI puts up arrows pointing left and right above the wheel which you are forced to click on to make the wheel turn. They almost had to go out of their way to make the interaction so... unnatural.

    And finally, the game fails at its own goal of making you feel like it is something more than just a handful of meaningless puzzles. For all of the portents, unnecessary curse words, and grisliness, the game is constantly forcing you to respond to relatively meaningless or obtuse statements and suggestions from the other members of the party. But when it matters, when something is seriously wrong — even if you believed the story and wanted to "play" — you are given no control except clicking and clicking and clicking while line after line of text inches past until your bizarrely frozen-in-place persona is killed...

    Yes, I played all the way through to one of the "bad" endings. I was then given the option of replaying the game, with the advantage that I could jump through text and scenes I was already familiar with. No thank you. Once was more than enough.

    [To be continued....]

    Tuesday, May 3, 2011

    A Small Piece of Gaming History: CHASE-N-COUNTER

    While sorting through some boxes I had stored on a shelf in the basement, I came across a box labeled "small games". Inside I found many familiar items I had put away, but I also encountered one I had completely forgotten.

    For many years my mother-in-law worked for the game company Milton Bradley. She knew I liked games and puzzles and so she often gave me the latest games as Christmas and birthday presents. At one point Milton Bradley bought the small electronics firm GCE in an effort to get a foothold in the burgeoning video game market. GCE was developing a gaming machine called the Vectrex. Vectrex was unique in many ways: it used vector graphics rather than a raster display, it was black and white, it was an all-in-one design including a tiny 9" screen, etc.

    Within a year, Milton Bradley decided to get out of the video game market and sold off its inventory of Vectrex to its employees at a steep discount. (I still have a complete working Vectrex system, which we bring out every couple of years.)

    In addition to Vectrex, GCE created a series of handheld games. I had forgotten that she gave me one of these systems as well and that is what I rediscovered.



    The GCE handhelds, such as CHASE-N-COUNTER, were also unique. Not because they used vector graphics (they used very crude LED displays instead), but because they doubled as a calculator. ("N-COUNTER", get it?) A sliding plastic cover switched the handheld from game machine to calculator instantly, hiding its other function. The ultimate "boss screen", in a way.

    When I found it, the batteries were dead. But after replacing the batteries it works like new. Well... new, 15 years ago. No one would mistake CHASE-N-COUNTER for a modern video game. The rudimentary shapes (a single dot for your "character", if you could call it that), the sparse plink-plink of the sound effects, and the lurching movement take you back to — let's be truthful — a much more difficult time.

    These games are not easy. Timing is everything. It is really you against the computer as you try to time your moves to the precise moment your dot jumps to the next LED and before the crushing "game over" sound indicates you missed it.



    So, although the games are simpler, the play is much harder. It takes a lot of practice just to get the basics of the game down. But once you have them down, then it is simply a matter of increasing difficulty, with the same mechanics over and over.

    This was true of arcade games at the time as well. Pac-Man is a good example. On your first try you lasted about 30 seconds. But if you kept at it, you could manage to clear several screens without losing a life.

    That said, I don't know that I'm going to be playing CHASE-N-COUNTER again any time soon. It was a fun experience at the time, but it is hard to resurrect the interest (or the free time) that kept me at it originally.

    But it is nice to see it still works. And just hearing those tinny sounds reminds me of the simple pleasures of concentrating on something entirely meaningless, but mesmerizing.

    P.S. After finding CHASE-N-COUNTER, I also found an article written by the game's programmer. A great read if you are interested in the story of how such a game was developed.

    Friday, April 1, 2011

    The Ultimate Architecture Diagram

    I carry a small notebook with me at all times to jot down ideas, reminders, fragments of thoughts, or just doodle in my spare moments. Over the years I have filled up quite a few such notebooks.

    Many of the items in them I can identify, some I cannot. They include drafts of messages to coworkers, a phrase or word I thought critical at the time, or notes for business presentations long since given and forgotten.

    I was leafing through one of my notebooks the other day when I came across a curious diagram. I don't remember where or when I drew it, but looking at it now I am struck by its simplicity, its utter honesty and completeness.



    Yes, there is stuff. And there is other stuff. Beyond that, very little matters.

    The issue, from a business, technical, and/or personal perspective, is being able to separate the right "stuff" from everything else. Therein lies the rub. Maybe I captured that in another diagram, but I can't find that one at the moment...

    Monday, February 21, 2011

    Whose Knowledge Is It Anyway?

    In previous posts I have discussed the shifting relationship between employer and employee in terms of the ownership and responsibility for knowledge. Many people are taking advantage of the web 2.0 revolution — through blogs, wikis, etc. — to assert the individuality of what they know and their hard-won professional experience.

    Employment always combines a certain amount of both the carrot and the stick. As much as you might enjoy what you do professionally, there are always a few things that are necessary for the company that you would choose not to do if given the option. So, the employer/employee relationship is always a collaboration, a compromise of activities that meet the needs of each.

    Salary, bonuses, and promotions are obviously "carrots". Performance reviews, management dictates, and the threat of a pink slip are part of the "stick". When in balance, these two components benefit both the employer and the employee. However, when they fall out of balance, negative things start to happen.

    In the early twentieth century, when industry used the unrestrained threat of firing, low wages, and even physical violence to control the workers, the result was the labor movement and emergence of unions in the United States. As the twentieth century came to a close, the rise of the global economy and multinational corporations gave employers a new out. Not only could work be moved out of state, it could now be moved to another country entirely — leading to 10-15 years of aggressive business tactics euphemistically called downsizing, rightsizing, outsourcing, and offshoring, among other things.

    There would seem to be little the employee could do to counter this trend. Except, we are no longer in an age dominated by physical manufacturing. We are in what is referred to as the "information age". Business magazines have been touting the power and transformative capabilities of information for years now.

    And if information is the currency, ownership of information is power. So, who owns the information? Corporations would like to think they own the creative output of their employees. And it seems true enough that they rightfully own the direct output and artifacts of work done under their employ. This output may be physical products (such as tables and chairs for a furniture manufacturer), electronic products (such as software), services (such as installation, repair, or management services), processes (such as standard operating procedures or decision trees), and any source code, documentation, or preliminary designs that led to that output.

    But do they own the knowledge and intellect used to create that output?

    They would like to think so and often lay claim to the knowledge, putting restrictions on what their employees can do with that information during — and in some cases after — their employment. But unlike the industrial age where the means of production were tangible objects (such as looms, kilns, and presses), today the means of production is knowledge.

    And knowledge, unlike a physical device such as a loom or a lathe, is not tied to a specific task. Knowledge is also not separate from the employee who possesses it. Part of the power of the web 2.0 revolution is the synergy between frictionless global communication and the realization by individuals of the importance of the knowledge they possess. And make no mistake, they possess the knowledge, not their employers.

    You cannot separate knowledge from the knower. And although you can argue that part of what they know may be a trade secret or other company-specific information, the abstract understanding of how things work and the experience of doing it belong to the employee.

    If this sounds a little like Karl Marx revisited, it is not a surprise. The pressures being applied on employees in the late 20th/early 21st century are not unlike those of the late 19th/early 20th. The difference is that — unlike in the late 1800s — the ability to code is more important than the computer you write the code on. And as the technology itself becomes commoditized, the ability to code becomes more valuable as well.

    So what are the repercussions on employees, employers, and how you manage the knowledge between them?
    • Heavy-handed attempts to assert ownership or control over how employees use knowledge will produce resentment and resistance. This was true even before the internet age or the knowledge economy came into full bloom. But now it is brought into higher relief, especially when the employee can choose to limit use of that knowledge, either consciously or subconsciously, if they feel it is being undervalued.
    • In response, rather than unionizing, which was the approach chosen in the early 20th century, information workers in the 21st century are choosing to "socialize".
    By "socializing" I mean establishing a vibrant, dynamic community of peers where knowledge and experience is traded freely outside corporate boundaries. How is this beneficial to the employees? In several ways. Most importantly, it creates a reciprocal arrangement bartering knowledge for reputation, which works like this:
    • Individuals in need of information search the internet — including blogs and technical forums — for answers. If they cannot find what they need, they may ask in a forum, discussion list, or openly through services such as Quora or Twitter. By looking outside the corporation, these individuals are likely to find more unique, complete, and specific answers faster than if they stay within the firewall. In addition, they often get credit for the solution.
    • At the same time, their peers post information about their experiences — either in response to questions or as knowledge in blogs, wikis, etc. — both to help other people and to establish a reputation for themselves as knowledgeable about their field of expertise.
    Since the individual bits of knowledge being traded have minimal commercial value in and of themselves, there is no loss to the individual sharing what they know. At the same time, those knowledge tidbits can have great value to peers who are trying to solve a specific problem. As a result, a collective market of sharing and reputation building is created among practitioners completely outside of corporate boundaries.

    This knowledge sharing ecosystem is extremely loose; there are no formal definitions or boundaries. The community is composed of like-minded individuals communicating through blogs, forums, websites, and social media with no official connection beyond a commonality found in search results, comment threads, blog rolls, retweets, and the like.



    Beyond just the basic exchange of information, the blogosphere provides knowledge workers with additional benefits:
    • An outlet for ideas that are overlooked, underappreciated, or simply out of scope of their current work environment.
    • A far greater, sometimes critical but often more enthusiastic, audience for their thoughts.
    Finally, the relationships established through interactions within one's profession and the reputation garnered in the open, critical eye of peers can be invaluable in the not-so-distant future. For example, peer connections made now can be indispensable when looking for a job some time down the road.

    And smart companies are taking advantage of this change, often using blogs and forums to actively promote openings, search for good candidates, or to qualify those who apply for positions.

    The hyperbolic claims often found in resumes can be hard to verify. But an openly published and proven knowledge of the subject at hand goes a long way to convincing a potential employer that someone has what it takes.



    Of course, there are downsides. Just as participation in these extracurricular activities can help establish your reputation as a leader in the field, individuals with an aggressive, dismissive, or over-assertive personality can establish a reputation of a very different kind. When you participate in public discussion for any length of time, both the good and bad aspects of your personality will come to light.



    Ultimately, it is the individual's knowledge that counts. And the world of web 2.0 provides a vehicle for that individual to share knowledge with their employer, their fellow employees, and peers around the world to the ends he or she sees fit. Whether their employer approves or not. And, quite frankly, in many ways the world, and the individual, are better off for it.

    The Mechanics of Handling Two Screens

    I once read a review that commented on the discontinuity created by the space between the two screens on a Nintendo DS game. The blank space was treated as part of the play area and there was a noticeable delay as the player's avatar passed from one screen to another.

    This got me thinking about the other games I'd played and how they handled the two screens, because in many cases I simply had not noticed. But in a few cases, the mechanism stands out as both innovative and a complement or enhancement to the game play.

    There are, so far (there's always room for innovation), essentially three or four generic mechanics for handling the two screens that I have seen:

    • Separate screens, separate worlds
    • Ignore the gap
    • The invisible game space: the DMZ
    • The invisible game space: playing in the dark

    Separate Screens, Separate Worlds

    In this mode the two screens are handled as separate entities. This is the most common technique for racing games, where the top screen is used for the racer's view and the bottom screen shows a map, statistics, current standings, etc.

    Separate screens is also very common for platformers and "educational" titles (such as Brain Training). The advantage of this method is that the gap becomes a non-issue. The disadvantage is that if you don't have much additional content, the second screen is essentially wasted. This is very noticeable in some of the early titles such as Ridge Racer and Rayman, where the bottom screen is primarily a very bad replacement for an analog stick.


    Ignore the Gap

    In this mode the game ignores the physical gap and acts as if the two screens are two adjoining segments of a seamless view. This avoids any issues of what happens "in the gap", but does create a bit of a discontinuity as objects "jump" across the physical gap between the screens.

    As a side note, I can't think of any games that are designed this way. It is possible and even likely that some game has created a partitioned game field ignoring the gap. But in most cases where a game uses both screens for the same "environment", they use one of the following modes to handle the gap.

    The Invisible Game Space: The DMZ

    In this mode the two screens form a single playing field and the gap between them is treated as part of the field -- an invisible space. However, the game ensures that the player either never enters that space or is "safe" while passing through.

    Note that both the player and enemies may pass through this space, but not together, since that would risk a collision or attack in the invisible space. An example is Yoshi Touch & Go, where the enemies pass through the gap but Yoshi doesn't — except in the first scene, where Baby Mario is falling, and even then only after the enemies have cleared the area (as Baby Mario makes the final fall to be caught by Yoshi).


    The Invisible Game Space: Playing in the Dark

    The last possibility is where the gap is an invisible part of a single playing field, but the game lets interactions occur in the gap! If this were accidental it would be a serious flaw in the game mechanics because the player could get, literally, blindsided. However, done well it adds a new wrinkle to the games.

    One of the best examples I have seen of this technique is in Bomberman where "tunnels" lead through the gap from the top screen to the bottom and the player (or enemies) can use the tunnels to hide bombs or to trap opponents with blasts from one screen to the next.
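
    For the curious, the bookkeeping behind these last two modes is straightforward: treat the gap as real rows of a single logical playfield and simply never draw them. Here is a minimal sketch in Python (the gap height is my guess, and none of this is taken from actual DS code):

        SCREEN_H = 192  # each DS screen is 192 pixels tall
        GAP_H = 90      # assumed height of the physical hinge gap, in virtual pixels
        FIELD_H = 2 * SCREEN_H + GAP_H  # one continuous logical playfield

        def visible_position(y):
            """Map a logical y coordinate to (screen, local_y), or None while in the gap."""
            if y < SCREEN_H:
                return ("top", y)
            if y < SCREEN_H + GAP_H:
                return None  # the object exists here, it just isn't drawn
            return ("bottom", y - SCREEN_H - GAP_H)

    Game logic runs over the full field, so objects keep moving while invisible. A "playing in the dark" game like Bomberman lets interactions happen even when visible_position() returns None for both parties; a "DMZ" design like Yoshi Touch & Go simply schedules the player and enemies so they never occupy that band at the same time.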

    Friday, February 11, 2011

    "Someone Speaks"

    Someone recently posted a few lines from one of my poems over on Tumblr. The lines came from a poem called "Someone Speaks", which was originally published in the Chicago Review.

    I am pleased to know people enjoyed what they saw. But the poem in its entirety is hard to find so I thought I would post it here if anyone is looking for it.

    Someone Speaks
    Someone speaks
    and the room fills with words. 
    I am surprised by the whiteness
    of sheets folded in cupboards and drawers. 
    Because the leaves have fallen
    footsteps can be heard much farther away. 
    When I entered the room
    I could see what had passed between them. 
    These and other things
    mean nothing at twenty below zero. 
    If we were ghosts, he said,
    we could pass through each other without causing harm. 
    If we were ghosts, she said,
    we would not see each other coming.

    This poem is part of a larger manuscript called A Life of Feasting. You can find more of my work here. Enjoy.

    Sunday, January 23, 2011

    Nintendo 3DS Pricing

    OK. So Nintendo has finally announced the release date and pricing for the upcoming Nintendo 3DS handheld (March 27th for $249 US). Let the wailing and lamentation begin.

    I shouldn't joke. I have complained about overpriced hardware myself in the past (e.g. Dreamcast, PSP, PS3...). But quite frankly, I am over it. There is clearly a price at which electronics overreach their audience. This was true of the 3DO ($699 in 1993), the PS3 (originally priced at $499-$599 in 2006), and certainly true of the PSP Go ($249 in 2009), which is perhaps the poster child of overeager pricing.

    So, how can you justify the 3DS at $249 when the PSP Go was "overpriced" at the same price? Because when it comes to price, "too much" is relative.

    It is now 2011. The last Nintendo handheld, the DS, started around $149 and rose to $189 for the DSi XL — which is an interesting, but ultimately minor, upgrade on the base unit. So another $60 jump for a major new platform is not unreasonable. Especially when you compare it to the PSP Go, which had a new form factor, but no really new functionality.

    The real question is what is happening to console prices? All three consoles are now priced starting around $200-$300. So the 3DS will come in pretty much even with a home game console. 3-5 years ago this would have been inconceivable. But the fact is, the age of console gaming is over.

    I don't mean consoles are going away; I expect video game consoles and console games to continue. There will always be a place for "serious" gaming. But the era where consoles dominate the industry is over. Smart phones play a part in this. Casual gaming is also involved. But perhaps more importantly, video game consoles have evolved to a point of diminishing returns. The expense of producing the hardware and of developing games to exercise that hardware is barely sustainable.

    Nintendo avoided this cycle by moving (no pun intended) in a new direction with the Wii, to great success. But in the five years since Wii debuted, much of the technology involved is now possible in handheld form. Besides its eponymous 3D gaming, the 3DS has cameras, a microphone, accelerometer, wifi, and touch control (as do many smart phones). So as the amount of additional graphic power that can be eked out of consoles shrinks, we get closer to the day where the only thing that separates consoles from handheld gaming is the big screen. (And I expect someone will soon figure out how to link that to a handheld as well...)

    But I digress. Is the 3DS worth $249? For a portable "console" that is backwards compatible (with the DS), upgrades the processor significantly, and delivers an entirely new form of play? Sounds like it to me.

    Of course, the real question is what will Sony do when it announces its rumored successor to the PSP. They have traditionally been at the high end of both features and pricing. Their new device may make the 3DS look like a toy. But it is unclear (as it was with the original PS3) whether people will be willing to pay the premium for a... toy?

    Thursday, January 20, 2011

    Top Ten Games of 2010

    A friend of mine told me that, as a holiday activity, he and his college-age sons were discussing their choices for top music of the year. A sort of top ten for 2010. Knowing that we play a lot, he suggested that I do the same with my sons with regards to video games. It sounded like a good idea, so we tried it.

    The first thing we agreed upon was that we didn't have ten top games. In fact, we could only name three. There are several reasons for this:

    • I personally don't get a lot of time to play, so when I do play we tend to play games we can play together. Fewer and fewer modern console games support split-screen multi-player. So we tend to play older games.
    • A lot of the "big" games this year were sequels (Uncharted 2, Assassin's Creed 2, Call of Duty I-lost-count, etc.). As good as these games are, they tend to be more of the same. Not really top ten material.
    • Most of our time is spent playing games that we've been playing for a year or more. When we think of our favorite games, they are often two or more years old. They may be top ten for our year of gaming, but not valid candidates as recent releases.
    So we quickly realized we had not one, but three lists: a short list of top games for 2010, our actual favorite games for playing, and those games we are looking forward to for 2011. So let's start with...

    Top Games for 2010

    • Red Dead Redemption (PS3/Xbox360)
    • Metal Gear Solid: Peace Walker (PSP)
    • Monster Hunter Tri (Wii)
    Red Dead Redemption is clearly one of the best games of 2010. Massive, graphically beautiful, and engrossing game play/story line. No, it is not "art" or an interactive novel (despite side quests, the missions are relatively linear). If the game has a negative, it is that once you are through the missions, there isn't much else to do.

    I didn't play Metal Gear Solid: Peace Walker. But my sons did. Obsessively. For two weeks straight. It seems to be the best and most complete example of a 3D action/strategy game on a handheld device. Mind you, probably best played co-op with a friend. (Rumor has it some levels are almost too tough in single-player mode.)

    Best and most complete example of a 3D action/strategy game on a handheld device except for Monster Hunter Freedom 2 on the PSP, which we have been playing continuously for over a year now. But this is an example of the games we play and the best of the year not being in sync. Monster Hunter came out more than two years ago. It is probably still the best 3D action/strategy on a handheld device. But for 2010, Metal Gear Solid: Peace Walker outshines anything else.

    Finally, Monster Hunter Tri. Perhaps not quite as good as Freedom 2, but that's splitting hairs. Tri is definitely better as a single player experience and no other game even comes close to it in style of play or game experience on the Wii.

    So those are the top three. There were two others we considered adding to the list:

    • LittleBigPlanet (PSP)
    • Picross 3D (DS)
    We originally thought we had four top games for 2010, because LittleBigPlanet on the PSP is an amazing game. Sure, it is a "downsizing" of LittleBigPlanet on the PS3. But LBP on the PS3 is such a good game (close to Super Mario 64 and Shadow of the Colossus in terms of best video game ever and reason enough, by itself, to recommend buying a PS3) that a portable version, even without co-op play, is still impressive. Problem is, it actually came out at the end of 2009, not the beginning of 2010 as we thought. Sigh.

    I also wanted to add Picross 3D. It doesn't have amazing graphics. It doesn't have terribly innovative game play. And, yes, it too is a sequel. But as puzzle games go, it is about as complete an example as you can find; where the music, the game play, the meaningless-though-entertaining animations add up to an addictive experience. But my sons wouldn't agree to adding it to the list. So let's call it a runner up. (While they're not looking!)

    That's it for top games of the year. But what are we actually playing?

    Favorite Games (What We Actually Play)

    Hands down, the games we play the most are LittleBigPlanet on the PS3 and Monster Hunter Freedom 2 on the PSP. As mentioned before, I believe LBP is a candidate for one of the best video games ever. And Monster Hunter is an enthralling, addictive, immersive experience, once you get into it. Mind you, it takes some doing (several hours) before you get hooked. Which might explain why it hasn't caught on in the US yet.

    The fact that both games came out more than two years ago (three for Monster Hunter) and we are still playing them gives you some indication of how good we think they are.


    After that comes a slew of games we played and enjoyed: Scribblenauts, Hammerin' Hero, Assassin's Creed, Uncharted 1 and 2, Super Mario Bros. Wii... The list gets longer every time we think about it. But quite frankly, it tends to be an amorphous bundle of fond memories. Each game with its pros and cons, but few that stand out against those I've already mentioned — or other spectacular games from the past we haven't played recently (such as Katamari Damacy).

        Sunday, January 16, 2011

        Cheating

        I was talking to a friend about video games when he said — by way of explaining why he hacked his son's game to add a few more powerful Pokemon — "everyone cheats".

        That could well be true. It certainly seems like there is a lot of cheating going on. But I suspect the world can be divided into two camps: those who believe everyone cheats and those who believe most people cheat.

        What's the point? I am not interested in discussing the repercussions for society (of which there are plenty). I am really only thinking of the narrower scope of games.

        The distinction is that if everyone cheats, the only way to participate is to cheat as well. Or else you are a chump. If most, but not all, people cheat, there is still a moral question to be answered. And a question of purpose.

        What is the purpose of gaming? If it is purely to win, then cheating has no negatives since it more quickly achieves the goal. If, on the other hand, gaming is about playing — about facing a challenge and overcoming it in the safe confines of a virtual world — then cheating defeats the purpose because it eliminates the challenge rather than overcoming it.

        This is easy enough to understand when playing single player games. Take solitaire for example. There is no benefit to peeking at the cards that are face down or rearranging the deck — you are only cheating yourself and will quickly tire of the game, since if you cheat you can always win, and then the game has no point.

        But the question of cheating becomes more complex when you are playing with or against other players. When playing with others, the incentives multiply: wanting to do better than the other players; wanting not to look stupid or ineffectual; wanting to demonstrate mastery over the game... All of these can play a part, but with differing levels of importance for each player.

        Online gaming is replete with its own language of competition and "pwnage", making the challenge of doing well for your own sake a much lesser force than the desire for bragging rights. Even single player games now come with "trophies", "badges", and other awards so you can compare your skills against other players.

        Which brings up a special category of cheating: not losing. A number of games have had to modify their leader boards to account for players who "turn off" before a competition is over because they don't want a loss to negatively impact their total score.

        This whole discussion sounds very self-righteous. I tend not to play many online multiplayer games, so it is easy for me to be holier-than-thou to those who prefer competitive play. But the fact is, I cheat as well.

        Since I don't tend to play multi-player games (except face-to-face with friends) my cheating is of a different nature. That is, I cheat to continue. Or, in other words, the strategy guide cheat.

        Games can be hard — they're meant to be challenging. Sometimes the solutions are just too hard or too obscure to figure out alone. For platformers, which tend to be linear in nature, this can be critical: if you can't solve the puzzle or beat the boss, you can't proceed. So your choice is either solve the problem or give up the game.

        I don't like cheating. (I'm the kind of person who refuses to look at the box lid when doing a jigsaw puzzle because working off the picture would be "cheating".) But I will cheat for a game I am enjoying if I get stuck. It is a tradeoff I am willing to make under two conditions:

        • The game is enjoyable enough that I really want to proceed.
        • I have tried enough times to work it out, without success, that I know (or think I know) that I can't figure it out without assistance.
        There is a third condition, which is that I only need to "cheat" a few times in a game. Once I have to look up the answer two, three, or more times in a row, I start to feel the game is too hard to be fun any more. (E.g. the original Kingdom Hearts on PS2 felt this way.) When this happens, then you are no longer playing, you are simply working your way through the strategy guide.

        No, thank you. I'd rather be playing.

        Sunday, January 9, 2011

        What Knowledge Management Can Learn from Small Groups

        I used to work for a large multinational corporation. I now work for a small startup consisting of 12 people who work in one room together. There is not much "knowledge management" needed with a group that size. That doesn't mean knowledge management doesn't happen, just that it happens more instinctively and with less stress around the edges.

        It would seem reasonable to assume that there is little if any relationship between the two situations. But in fact, observing small groups yields some interesting lessons that can be applied to larger corporations:

        Everyone is different

        Even though my current company consists of only 12 people, there are 12 different personalities and approaches to work, communication, technology, etc. When you work in large organizations, there is a tendency to talk about how people will respond to new programs as if the response were uniform, where all (or at least most) people respond in one way. We are then surprised by the number of people who fall outside of the defined norm.

        Every project or process has a target behavior — how you want people to use the process. But that behavior is only a target. The overall response — when all individual behaviors are taken as an average — may fall within the target range. But any single person is likely to have their own particular usage model that may well be unexpected.

        Multiple, overlapping technologies are not a problem

        Within my small sampling, we have 12 unique sets of technologies, including different operating systems; different hardware (some laptops, some desktops, often both); different software tools; and countless communication devices. Everyone has email, everyone has an instant messaging client of some kind (or two or three), we also have wikis, blogs, forums, and an IRC channel. Not to mention smart phones, blackberries, iPads, etc.

        We occasionally have a discussion about the appropriate place to post information — especially material under review or in draft form. But I have yet to hear any complaints that there are too many choices.

        In large organizations, one of the basic requirements for any project is keeping the toolset small. If there is to be a knowledge management "system", it has to be a single system accessible by anyone in the target audience. Better yet, a single system covering multiple disciplines (KM, project management, resource management, etc.)

        The rationale for keeping the toolset small sounds good in theory:

        • Universal accessibility
        • Reduced learning curve/training costs
        • Only one place to look for information
        • Reduced IT/support costs & complexity
        But this rationale is based on 1980's computing constraints:

        • Universal accessibility — what technology, especially collaboration technology, isn't available through a web browser or across platforms (e.g. IM clients)? Selecting one tool does not make the information more or less accessible.
        • Reduced learning curve/training costs — at the same time corporations are trying to restrict the number of applications to "reduce the learning curve", their employees are busy trying out Facebook, Twitter, Skype... The main reason learning curve is a problem is because you are trying to teach people something they don't want to learn. Perhaps the issue is with the content, not a limitation of the audience's ability to multi-task.
        • One place to look for information — I have heard this argument for years, but I have yet to see a single instance where a company has successfully integrated all information into a single application. In fact, corporations seem determined to segment their knowledge into individual repositories. The closest they come to "one place for all information" is intranet search. However, they determinedly resist efforts to use generalized search engines (such as Google) and often limit what information is indexed by search in the name of "qualifying" content. (What happened to "one place"?)
        • Reduced IT/support costs — I used to believe this argument, because it was true. But over time, just as the locked door computer room has shrunk and more and more technology (and computing power) has migrated from a secure, air-conditioned environment.... onto the desk... out of the office... into the pocket... the role of IT in controlling — or even choosing — technology has changed significantly. But IT as an organization and as a profession seems unwilling to accept or accede to that change.
        However, individual technologies can run afoul of individual preferences

        The fact that people have multiple technologies, doesn't mean they use them the same way (cf. everyone is different). Years ago, I was shocked when I answered my office phone to discover that the caller was in an office no more than thirty feet away. But I thought nothing of sending email to the person in the cube next to me.

        More recently, I was bemused the first time I received an instant message from a fellow worker two cubes down. (They didn't want to disturb the others by talking, since we work in such close proximity.)

        How and when individuals use different technologies seems like an almost limitless set of permutations. Of the 11 people I currently work with:

        • At least one answers email before IM
        • One seems to respond to both equally (and instantaneously)
        • Several answer either IMs or email, but with no clear pattern or preference
        • One will respond to IMs more often than email, but will answer the IM via email.
        • One never responds to IMs.
        Is this good or bad use of the technology? It is neither. It is how individuals work. Part of "knowledge management" is managing your sources of knowledge, your technology, and your contacts. It is not enough to know how to use the technology; you must also know how it is used by your community.

        I know this concept — the preeminence of personal choice — is anathema to many KM practitioners. It is like trying to establish order without disturbing the chaos. How can you promote a company-wide program if each individual gets to choose for themselves?

        Well, it is not quite that bad. It is not that each individual gets to decide for themselves. You can dictate, require, or recommend specific technologies and approaches. But you need to recognize that your audience will perform those actions in the way they think is best.

        Finally...

        Have faith in people

        It is easy to see other people's behavior — when it runs counter to expectations — as stubborn or willful when it is nothing of the sort. People will be altruistic, especially when it involves assisting other individuals. However...

        KM is not their job!

        As well intentioned as they may be, people have a job to do, deadlines to meet, and responsibilities to uphold. If they think of it, they are willing to share with others. But more often than not, it does not occur to them that the information they hold is valuable to others — especially if that value will not be realized until some indeterminate time in the future.

        Discussions held in hallways or decisions made over lunch are sometimes the most important events within a project. But no one thinks to capture them in a wiki or email the rest of the team. This is not knowledge hoarding, it is simply an inability to recognize that anyone else cares.

        Ultimately, perhaps the most powerful KM tool any company has is nagging repetition. When someone writes something down, remind them to post it to a forum or wiki. When they say something interesting, suggest others they should tell via email. Suggest alternate ways to search for solutions to project problems.

        This sort of gentle persuasion on an individual basis can be tedious, since the scope is limited to specific situations with one or two people at a time. And I am not suggesting it alone is sufficient to make KM work on a large scale. However, it is surprising how soon you see others (who you have prodded) acting without instigation or suggesting it to others. And from such small efforts, large effects can accrue.

        Well, that is OK for a small office, but how does this apply to large corporations? The most successful KM programs I have seen, even in very large corporations, have always had one or two advocates who were tireless in not only promoting "the program", but jumping in and helping individuals with their specific problems and demonstrating KM-ish techniques along the way. Their influence extended far beyond just the person they helped, to anyone that person then spoke to, their friends, etc... Not only did their reputation precede them, but the behavior they modeled went with it to corners of the company they might never have visited personally.