I carry a small notebook with me at all times to jot down ideas, reminders, fragments of thoughts, or just doodle in my spare moments. Over the years I have filled up quite a few such notebooks.
Many of the items in them I can identify, some I cannot. They include drafts of messages to coworkers, a phrase or word I thought critical at the time, or notes for business presentations long since given and forgotten.
I was leafing through one of my notebooks the other day when I came across a curious diagram. I don't remember where or when I drew it, but looking at it now I am struck by its simplicity, its utter honesty and completeness.
Yes, there is stuff. And there is other stuff. Beyond that, very little matters.
The issue, from a business, technical, and/or personal perspective, is being able to separate the right "stuff" from everything else. Therein lies the rub. Maybe I captured that in another diagram, but I can't find that one at the moment...
Friday, April 1, 2011
Monday, February 21, 2011
Whose Knowledge Is It Anyway?
In previous posts I have discussed the shifting relationship between employer and employee in terms of the ownership and responsibility for knowledge. Many people are taking advantage of the web 2.0 revolution — through blogs, wikis, etc. — to assert the individuality of what they know and their hard-won professional experience.
Employment always combines a certain amount of both the carrot and the stick. As much as you might enjoy what you do professionally, there are always a few things that are necessary for the company that you would choose not to do if given the option. So, the employer/employee relationship is always a collaboration, a compromise of activities that meet the needs of each.
Salary, bonuses, and promotions are obviously "carrots". Performance reviews, management dictates, and the threat of a pink slip are part of the "stick". In balance, these two components benefit both the employer and the employee. However, when they fall out of balance, negative things start to happen.
In the early twentieth century, when industry used the unrestrained threat of firing, low wages, and even physical violence to control the workers, the result was the labor movement and emergence of unions in the United States. As the twentieth century came to a close, the rise of the global economy and multinational corporations gave employers a new out. Not only could work be moved out of state, it could now be moved to another country entirely — leading to 10-15 years of aggressive business tactics euphemistically called downsizing, rightsizing, outsourcing, and offshoring, among other things.
There would seem to be little the employee could do to counter this trend. Except, we are no longer in an age dominated by physical manufacturing. We are in what is referred to as the "information age". Business magazines have been touting the power and transformative capabilities of information for years now.
And if information is the currency, ownership of information is power. So, who owns the information? Corporations would like to think they own the creative output of their employees. And it seems true enough that they rightfully own the direct output and artifacts of work done under their employ. This output may be physical products (such as tables and chairs for a furniture manufacturer), electronic products (such as software), services (such as installation, repair, or management services), processes (such as standard operating procedures or decision trees), and any source code, documentation, or preliminary designs that led to that output.
But do they own the knowledge and intellect used to create that output?
They would like to think so and often lay claim to the knowledge, putting restrictions on what their employees can do with that information during — and in some cases after — their employment. But unlike the industrial age where the means of production were tangible objects (such as looms, kilns, and presses), today the means of production is knowledge.
And knowledge, unlike a physical device such as a loom or a lathe, is not tied to a specific task. Knowledge is also not separate from the employee who possesses it. Part of the power of the web 2.0 revolution is the synergy between frictionless global communication and the realization by individuals of the importance of the knowledge they possess. And make no mistake, they possess the knowledge, not their employers.
You cannot separate knowledge from the knower. And although you can argue that part of what they know may be a trade secret or other company-specific information, the abstract understanding of how things work and the experience of doing it belong to the employee.
If this sounds a little like Karl Marx revisited, it is not a surprise. The pressures being applied on employees in the late 20th/early 21st century are not unlike those of the late 19th/early 20th. The difference is that — unlike in the late 1800s — the ability to code is more important than the computer you write the code on. And as the technology itself becomes commoditized, the ability to code becomes more expensive as well.
So what are the repercussions on employees, employers, and how you manage the knowledge between them?
- Heavy-handed attempts to assert ownership or control over how employees use knowledge will produce resentment and resistance. This was true even before the internet age or the knowledge economy came into full bloom. But now it is brought into higher relief, especially when the employee can choose to limit use of that knowledge, either consciously or subconsciously, if they feel it is being undervalued.
- In response, rather than unionizing, which was the approach chosen in the early 20th century, information workers in the 21st century are choosing to "socialize".
- Individuals in need of information search the internet — including blogs and technical forums — for answers. If they cannot find what they need, they may ask in a forum, discussion list, or openly through services such as Quora or Twitter. By looking outside the corporate firewall, these individuals are likely to find more unique, complete, and specific answers faster than if they stay within it. In addition, they often get credit for the solution.
- At the same time, their peers post information about their experiences — either in response to questions or as knowledge in blogs, wikis, etc. — both to help other people and to establish a reputation for themselves as knowledgeable about their field of expertise.
This knowledge sharing ecosystem is extremely loose; there are no formal definitions or boundaries. The community is composed of like-minded individuals communicating through blogs, forums, websites, and social media with no official connection beyond a commonality found in search results, comment threads, blog rolls, retweets, and the like.
Beyond just the basic exchange of information, the blogosphere provides knowledge workers with additional benefits:
- An outlet for ideas that are overlooked, underappreciated, or simply out of scope of their current work environment.
- A far greater, sometimes critical but often more enthusiastic, audience for their thoughts.
And smart companies are taking advantage of this change, often using blogs and forums to actively promote openings, search for good candidates, or to qualify those who apply for positions.
The hyperbolic claims often found in resumes can be hard to verify. But an openly published and proven knowledge of the subject at hand goes a long way to convincing a potential employer that someone has what it takes.
Of course, there are downsides. Just as participation in these extracurricular activities can help establish your reputation as a leader in the field, individuals with an aggressive, dismissive, or over-assertive personality can establish a reputation of a very different kind. When you participate in public discussion for any length of time, both the good and bad aspects of your personality will come to light.
Ultimately, it is the individual's knowledge that counts. And the world of web 2.0 provides a vehicle for that individual to share knowledge with their employer, their fellow employees, and peers around the world to the ends he or she sees fit. Whether their employer approves or not. And, quite frankly, in many ways the world, and the individual, are better off for it.
The Mechanics of Handling Two Screens
I once read a review that commented on the discontinuity created by the space between the two screens on a Nintendo DS game. The blank space was treated as part of the play area and there was a noticeable delay as the player's avatar passed from one screen to another.
This got me thinking about the other games I'd played and how they handled the two screens, because in many cases I simply had not noticed. But in a few cases, the mechanism stands out as both innovative and a complement or enhancement to the game play.
There are, so far (there's always room for innovation), essentially three or four generic mechanics for handling the two screens that I have seen:
- Separate screens, separate worlds
- Ignore the gap
- The invisible game space: the DMZ
- The invisible game space: playing in the dark
Separate Screens, Separate Worlds
In this mode the two screens are handled as separate entities. This is the most common technique for racing games, where the top screen is used for the racer's view and the bottom screen shows a map, statistics, current standings, etc.
Separate screens is also very common for platformers and "educational" titles (such as Brain Training). The advantage of this method is that the gap becomes a non-issue. The disadvantage is that if you don't have much additional content, the second screen is essentially wasted. This is very noticeable in some of the early titles such as Ridge Racer and Rayman, where the bottom screen is primarily a very bad replacement for an analog stick.
Ignore the Gap
In this mode the game ignores the physical gap and acts as if the two screens are two adjoining segments of a seamless view. This avoids any issues of what happens "in the gap", but does create a bit of a discontinuity as objects "jump" across the physical gap between the screens.
As a side note, I can't think of any games that are designed this way. It is possible and even likely that some game has created a partitioned game field ignoring the gap. But in most cases where a game uses both screens for the same "environment", they use one of the following modes to handle the gap.
The Invisible Game Space: The DMZ
In this mode the two screens form a single playing field and the gap between them is treated as part of the field -- an invisible space. However, the game ensures that the player either never enters that space or is "safe" while passing through.
Note that both the player and enemies may pass through this space, but not together, since that would risk a collision or attack in the invisible space. An example is Yoshi Touch & Go, where the enemies pass through the gap but Yoshi doesn't. The one exception is the first scene, where Baby Mario falls through the gap only after the enemies have cleared the area (as he makes the final fall to be caught by Yoshi).
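To make the idea concrete, here is a minimal sketch, not taken from any actual DS title, of how a single vertical playfield could map onto two screens with an invisible gap between them. The screen height matches the DS hardware, but the gap size and the collision rule are assumptions for illustration only.

```python
# A rough sketch of the "invisible game space" idea: one continuous vertical
# playfield mapped onto two DS screens (each 192 pixels tall), with the
# physical gap between them treated as part of the field. GAP_H is a made-up
# value; a real game would tune it to match the hardware's physical spacing.

SCREEN_H = 192   # height of each DS screen, in pixels
GAP_H = 90       # assumed height of the invisible gap, in world pixels

def world_to_screen(world_y):
    """Map a world y-coordinate to (region, local_y)."""
    if world_y < SCREEN_H:
        return ("top", world_y)                     # drawn on the top screen
    if world_y < SCREEN_H + GAP_H:
        return ("gap", world_y - SCREEN_H)          # invisible space: nothing drawn
    return ("bottom", world_y - SCREEN_H - GAP_H)   # drawn on the bottom screen

def in_gap(world_y):
    """True while an object is inside the invisible space between the screens."""
    return SCREEN_H <= world_y < SCREEN_H + GAP_H

# A "DMZ"-style game would skip collision checks (or make the player
# invulnerable) whenever in_gap() is true, so nothing can go wrong where the
# player can't see it; a "playing in the dark" game would deliberately let
# interactions happen there.
```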
The Invisible Game Space: Playing in the Dark
The last possibility is where the gap is an invisible part of a single playing field, but the game lets interactions occur in the gap! If this were accidental it would be a serious flaw in the game mechanics because the player could get, literally, blindsided. However, done well it adds a new wrinkle to the game.
One of the best examples I have seen of this technique is in Bomberman where "tunnels" lead through the gap from the top screen to the bottom and the player (or enemies) can use the tunnels to hide bombs or to trap opponents with blasts from one screen to the next.
Friday, February 11, 2011
"Someone Speaks"
Someone recently posted a few lines from one of my poems over on Tumblr. The lines came from a poem called "Someone Speaks", which was originally published in the Chicago Review.
I am pleased to know people enjoyed what they saw. But the poem in its entirety is hard to find so I thought I would post it here if anyone is looking for it.
Someone Speaks
Someone speaks
and the room fills with words.
I am surprised by the whiteness
of sheets folded in cupboards and drawers.
Because the leaves have fallen
footsteps can be heard much farther away.
When I entered the room
I could see what had passed between them.
These and other things
mean nothing at twenty below zero.
If we were ghosts, he said,
we could pass through each other without causing harm.
If we were ghosts, she said,
we would not see each other coming.
This poem is part of a larger manuscript called A Life of Feasting. You can find more of my work here. Enjoy.
Sunday, January 23, 2011
Nintendo 3DS Pricing
OK. So Nintendo has finally announced the release date and pricing for the upcoming Nintendo 3DS handheld (March 27th for $249 US). Let the wailing and lamentation begin.
I shouldn't joke. I have complained about overpriced hardware myself in the past (e.g., Dreamcast, PSP, PS3...). But quite frankly, I am over it. There is clearly a price at which electronics overreach their audience. This was true of the 3DO ($699 in 1993), the PS3 (originally priced at $499-$599 in 2006), and certainly true of the PSP Go ($249 in 2009), which is perhaps the poster child of overeager pricing.
So, how can you justify the 3DS at $249 when the PSP Go was "overpriced" at the same price? Because when it comes to price, "too much" is relative.
It is now 2011. The last Nintendo handheld, the DS, started around $149 and rose to $189 for the DSi XL — which is an interesting, but ultimately minor, upgrade on the base unit. So another $60 jump for a major new platform is not unreasonable. Especially when you compare it to the PSP Go, which had a new form factor but no really new functionality.
The real question is: what is happening to console prices? All three consoles are now priced starting around $200-$300. So the 3DS will come in pretty much even with a home game console. 3-5 years ago this would have been inconceivable. But the fact is, the age of console gaming is over.
I don't mean consoles are going away; I expect video game consoles and console games to continue. There will always be a place for "serious" gaming. But the era where consoles dominate the industry is over. Smart phones play a part in this. Casual gaming is also involved. But perhaps more importantly, video game consoles have evolved to a point of diminishing returns. The expense of producing the hardware and of developing games to exercise that hardware is barely sustainable.
Nintendo avoided this cycle by moving (no pun intended) in a new direction with the Wii, to great success. But in the five years since the Wii debuted, much of the technology involved is now possible in handheld form. Besides its eponymous 3D gaming, the 3DS has cameras, a microphone, an accelerometer, Wi-Fi, and touch control (as do many smart phones). So as the amount of additional graphic power that can be eked out of consoles shrinks, we get closer to the day where the only thing that separates consoles from handheld gaming is the big screen. (And I expect someone will soon figure out how to link that to a handheld as well...)
But I digress. Is the 3DS worth $249? For a portable "console" that is backwards compatible (with DS), upgrades the processor significantly, and delivers an entirely new form of play? Sounds like it to me.
Of course, the real question is what will Sony do when it announces its rumored successor to the PSP. They have traditionally been at the high end of both features and pricing. Their new device may make the 3DS look like a toy. But it is unclear (as it was with the original PS3) whether people will be willing to pay the premium for a... toy?
Thursday, January 20, 2011
Top Ten Games of 2010
A friend of mine told me that, as a holiday activity, he and his college-age sons were discussing their choices for top music of the year. A sort of top ten for 2010. Knowing that we play a lot, he suggested that I do the same with my sons with regards to video games. It sounded like a good idea, so we tried it.
The first thing we agreed upon was that we didn't have ten top games. In fact, we could only name three. There are several reasons for this:
- I personally don't get a lot of time to play, so when I do, we tend to pick games we can play together. Fewer and fewer modern console games support split-screen multiplayer. So we tend to play older games.
- A lot of the "big" games this year were sequels (Uncharted 2, Assassin's Creed 2, Call of Duty I-lost-count, etc.). As good as these games are, they tend to be more of the same. Not really top ten material.
- Most of our time is spent playing games that we've been playing for a year or more. When we think of our favorite games, they are often two or more years old. They may be top ten for our year of gaming, but not valid candidates as recent releases.
Top Games for 2010
- Red Dead Redemption (PS3/Xbox360)
- Metal Gear Solid: Peace Walker (PSP)
- Monster Hunter Tri (Wii)
I didn't play Metal Gear Solid: Peace Walker. But my sons did. Obsessively. For two weeks straight. It seems to be the best and most complete example of a 3D action/strategy game on a handheld device. Mind you, probably best played co-op with a friend. (Rumor has it some levels are almost too tough in single-player mode.)
The best and most complete example on a handheld device, that is, except for Monster Hunter Freedom 2 on the PSP, which we have been playing continuously for over a year now. But this is an example of the games we play and the best of the year not being in sync. Monster Hunter came out more than two years ago. It is probably still the best 3D action/strategy game on a handheld device. But for 2010, Metal Gear Solid: Peace Walker outshines anything else.
Finally, Monster Hunter Tri. Perhaps not quite as good as Freedom 2, but that's splitting hairs. Tri is definitely better as a single player experience and no other game even comes close to it in style of play or game experience on the Wii.
So those are the top three. There were two others we considered adding to the list:
- LittleBigPlanet (PSP)
- Picross 3D (DS)
I also wanted to add Picross 3D. It doesn't have amazing graphics. It doesn't have terribly innovative game play. And, yes, it too is a sequel. But as puzzle games go, it is about as complete an example as you can find, one where the music, the game play, and the meaningless-though-entertaining animations add up to an addictive experience. But my sons wouldn't agree to adding it to the list. So let's call it a runner up. (While they're not looking!)
That's it for top games of the year. But what are we actually playing?
Favorite Games (What We Actually Play)
Hands down, the games we play the most are LittleBigPlanet on the PS3 and Monster Hunter Freedom 2 on the PSP. As mentioned before, I believe LBP is a candidate for one of the best video games ever. And Monster Hunter is an enthralling, addictive, immersive experience, once you get into it. Mind you, it takes some doing (several hours) before you get hooked. Which might explain why it hasn't caught on in the US yet.
The fact that both games came out more than two years ago (three for Monster Hunter) and we are still playing them gives you some indication of how good we think they are.
After that comes a slew of games we played and enjoyed: Scribblenauts, Hammerin' Hero, Assassin's Creed, Uncharted 1 and 2, New Super Mario Bros. Wii... The list gets longer every time we think about it. But quite frankly, it tends to be an amorphous bundle of fond memories. Each game has its pros and cons, but few stand out against the ones I've already mentioned -- or other spectacular games from the past we haven't played recently (such as Katamari Damacy).
Sunday, January 16, 2011
Cheating
I was talking to a friend about video games when he said — by way of explaining why he hacked his son's game to add a few more powerful Pokemon — "everyone cheats".
That could well be true. It certainly seems like there is a lot of cheating going on. But I suspect the world can be divided into two camps: those who believe everyone cheats and those who believe most people cheat.
What's the point? I am not interested in discussing the repercussions on society (which there are plenty of). I am really only thinking of the narrower scope of games.
The distinction is that if everyone cheats, the only way to participate is to cheat as well. Or else you are a chump. If most, but not all, people cheat, there is still a moral question to be answered. And a question of purpose.
What is the purpose of gaming? If it is purely to win, then cheating has no negatives since it more quickly achieves the goal. If, on the other hand, gaming is about playing — about facing a challenge and overcoming it in the safe confines of a virtual world — then cheating defeats the purpose because it eliminates the challenge rather than overcoming it.
This is easy enough to understand when playing single player games. Take solitaire, for example. There is no benefit to peeking at the cards that are face down or rearranging the deck — you are only cheating yourself and will quickly tire of the game, since if you cheat you can always win, and then the game has no point.
But the question of cheating becomes more complex when you are playing with or against other players. The incentives become more involved: wanting to do better than the other players; wanting not to look stupid or ineffectual; wanting to demonstrate mastery over the game... All of these can play a part, but with differing levels of importance for each player.
Online gaming is replete with its own language of competition and "pwnage", making the challenge of doing well for your own sake a much lesser force than the desire for bragging rights. Even single player games now come with "trophies", "badges", and other awards so you can compare your skills against other players.
Which brings up a special category of cheating: not losing. A number of games have had to modify their leaderboards to account for players who "turn off" before a competition is over because they don't want a loss to negatively impact their total score.
This whole discussion sounds very self-righteous. I tend not to play many online multiplayer games, so it is easy for me to be holier-than-thou to those who prefer competitive play. But the fact is, I cheat as well.
Since I don't tend to play multiplayer games (except face-to-face with friends), my cheating is of a different nature. That is, I cheat to continue. Or, in other words, the strategy guide cheat.
Games can be hard — they're meant to be challenging. Sometimes the solutions are just too hard or too obscure to figure out alone. For platformers, which tend to be linear in nature, this can be critical: if you can't solve the puzzle or beat the boss, you can't proceed. So your choice is either solve the problem or give up the game.
I don't like cheating. (I'm the kind of person who refuses to look at the box lid when doing a jigsaw puzzle because working off the picture would be "cheating".) But I will cheat for a game I am enjoying if I get stuck. It is a tradeoff I am willing to make under two conditions:
- The game is enjoyable enough that I really want to proceed.
- I have tried enough times to work it out, without success, that I know (or think I know) that I can't figure it out without assistance.
No, thank you. I'd rather be playing.