Ten Kinds of Dumb

by Darius Kazemi on August 28, 2007

in games, reviews

So, let’s get this straight.

GameSpot reviews old games that are being re-released on the Virtual Console.

They review Super Metroid and say it’s effectively a perfect game that they can’t find any flaws with, except that “Nintendo waited 13 years to let us play” it. It’s even perfectly emulated, no flickering or artifacts or anything.

And they give it an 8.5.

Is this because GameSpot’s duty is to let consumers know whether something is worth their money, and they believe that most modern gamers wouldn’t consider an old SNES game to be a perfect 10? That seems unlikely, since the game costs 800 Wii Points, also known as $8.00.

Is it because they feel that most gamers who’d be interested in the title have already played some of the many games that do a pretty good job of cloning the Super Metroid experience, like the Castlevania series? And thus Super Metroid is, to the modern-day gamer, less shockingly great than it was back in 1994? I don’t buy that, either. The idea is predicated on the notion that the game’s brilliance relies on novelty. Most people who have played the game would disagree.

So what is it? Why do you give an admittedly perfect game an 8.5?


Patrick August 28, 2007 at 4:32 pm

The problem is the institution of game reviewing versus game criticism: the emphasis is quantitative (is game X worth money value Y?) instead of qualitative (this game is meaningful because of Z!). When reviewers try to veer into the qualitative, they’re ultimately thwarted by the quantitative institution that they participate in.

Bradley Momberger August 28, 2007 at 5:53 pm

I think this is why the editors of Next Generation (the ’90s print magazine, not the current game business digest) always railed against a scoring system as any kind of useful metric… not that it stopped them from having one. Their star system, though, intentionally said nothing about whether the game was any fun. It supposedly stated how much of a progression (or regression) in video games the title represented. This had the convenient quality of skirting the question of “How would this game be rated now?” because the rating only had meaning in the context of the game’s contemporaries, and let them dock points for uninspired sequels just because.

What I always found to be far more telling about a game’s quality was in issues of Diehard GameFan where the editors would list the games they’d been playing the most for that particular month. For any given month, being on the list didn’t mean much, but any game that stayed on it over the course of several months must have had some serious value (or major payoff from the publisher).

Besides this, there really aren’t many perfect games. I can think of Pong and Tetris as two, because their play mechanics never needed any embellishment to be compelling, and they can’t really be improved through addition of features (though it hasn’t stopped legions of developers from trying). Does it make them the best games ever? Maybe, but there are other games which are just as fun despite room for improvement (like Bomberman — great when done right but so hit-or-miss over its history). Just being a perfect game doesn’t automatically make it the most valuable game at a price point, especially where simplicity yields replicability — $50 is too much for Tetris, considering all the free clones out there.

A review score accurately reflects none of these: perfection, innovation, longevity, value at a price point, or value regardless of price point. It only gives a subjective and corruptible degree of recommendation, which is meaningless without both a time context and a set of other scores from the same reviewer to compare against. It does not expose how much of the score reflects the reviewer’s own bias rather than any objective evaluation. It is neither smart nor useful, nor even “the best we’ve got,” and it is brittle over time.

But in aggregate they can be indicative. I took Alien Syndrome back to GameStop and exchanged it for Mario Strikers Charged without even opening it, because the GameRankings average for the former was 40+ percentage points below that of the latter. But I can’t say with perfect confidence that I made the right decision. With a few notable exceptions, I generally prefer games in which I am competent. MSC does not fall into this category, despite all the playtime I have put into it so far. I have to think that my inability to grasp the game dynamics would color my review and its associated score if I had to write one (“friendly AI is poorly implemented. Teammates stand around while the opponent scores goals on you”).

Ian Schreiber August 29, 2007 at 2:54 am

Ooh, I know the answer to this one, because I’ve been whining about it ever since I saw the first reviews in Nintendo Power.

The quantitative score isn’t just “0 to 10”; it’s divided into categories. Two of those categories are Graphics and Sound, which means that an ugly but brilliant game gets at most a 6.0. Which is kinda stupid.
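To put numbers on that (a sketch of my own, with made-up category names and equal weights; real rubrics varied by outlet): with five categories, zeroing out Graphics and Sound caps a game at 6.0 even if everything else is perfect.

    # Hypothetical category-averaged review score. The category names and
    # equal weighting are assumptions, not any outlet's actual rubric.
    CATEGORIES = ["gameplay", "challenge", "theme", "graphics", "sound"]

    def overall_score(scores):
        # Equal-weight average of per-category scores, each 0-10.
        return sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)

    # An "ugly but brilliant" game: perfect except for its presentation.
    ugly_but_brilliant = {"gameplay": 10, "challenge": 10, "theme": 10,
                          "graphics": 0, "sound": 0}
    print(overall_score(ugly_but_brilliant))  # 6.0 -- the cap I mean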

Yes, Super Metroid is pretty darn near perfect. Without even looking at the review, I bet it got a 10 for gameplay, but I also bet it got significantly less in the Graphics and Sound categories. Not because its graphics and sound weren’t appropriate for the time, but because they’re reviewing it NOW, within the context of today’s graphics. And while the graphics were great for the time, saying that it gives a graphical experience on par with BioShock is straining the bounds of reality.

One might suggest that review scores for graphics and sound should be graded on a curve, based on when the game came out. There are two problems with that. First, it’s confusing: you can’t just ask what score a game got; you also have to ask WHEN it got it. Second, it’s inconsistent: why grade graphics and sound on a curve but not gameplay?
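To be concrete about what “graded on a curve” could mean (again, a sketch of my own, not anything a real outlet does): normalize a category’s raw score against games released the same year, so the number only has meaning relative to its contemporaries.

    # Hypothetical curve-grading: rescale a raw graphics score by how it
    # compares to games from the same year. No outlet actually does this.
    from statistics import mean, stdev

    def curved_score(raw, same_year_scores):
        # Z-score against the year's cohort, mapped onto 0-10 around 5.
        mu, sigma = mean(same_year_scores), stdev(same_year_scores)
        z = (raw - mu) / sigma if sigma else 0.0
        return max(0.0, min(10.0, 5.0 + 2.0 * z))

    # Super Metroid's 1994 graphics against a made-up 1994 cohort:
    print(curved_score(9.5, [5.0, 6.0, 7.0, 6.5, 8.0]))  # clamps to 10.0

Note that the sketch bakes in problem number one: you can’t compute the score at all without knowing the year and the cohort.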

