Comments on: Ten Kinds of Dumb
http://tinysubversions.com/2007/08/ten-kinds-of-dumb/
Feed generated Wed, 10 Sep 2014 18:53:13 +0000

By: Ian Schreiber | Wed, 29 Aug 2007 | http://tinysubversions.com/2007/08/ten-kinds-of-dumb/comment-page-1/#comment-3487

Ooh, I know the answer to this one, because I’ve been whining about it ever since I saw the first reviews in Nintendo Power.

The qualitative score isn’t just “0 to 10”, it’s divided into categories. Two of those categories are Graphics and Sound… which means that an ugly but brilliant game gets at most a 6.0, which is kinda stupid.
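The 6.0 cap falls straight out of averaging. A minimal sketch of the arithmetic, assuming five equally weighted categories (the category names and count here are illustrative assumptions, not Nintendo Power’s actual rubric):

```python
# Hypothetical rubric: overall score is the mean of five equally weighted
# category scores (an assumption for illustration; the actual category
# breakdown isn't specified above).
def overall_score(gameplay, graphics, sound, control, replay):
    return (gameplay + graphics + sound + control + replay) / 5

# A brilliant but ugly, silent game: perfect everywhere except
# Graphics and Sound, which score zero.
print(overall_score(10, 0, 0, 10, 10))  # 6.0
```

With two of five categories zeroed out, even perfect marks elsewhere can’t push the average above 6.0, which is the cap the comment is complaining about.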

Yes, Super Metroid is pretty darn near perfect. Without even looking at the review, I bet it got 10 for gameplay, but I also bet it got significantly lower marks in the Graphics and Sound categories. Not because its graphics and sound weren’t appropriate for the time, but because they’re reviewing it NOW within the context of today’s graphics. And while the graphics were great for the time, saying that it gives a graphical experience on par with Bioshock is straining the bounds of reality.

One might suggest that review scores for graphics and sound should be graded on a curve, based on when the game came out. There are two problems with that. First, it’s confusing: you can’t just ask what score a game got, you also have to ask WHEN it got it. Second, it’s inconsistent: graphics and sound are graded on a curve, but gameplay isn’t?

By: Bradley Momberger | Tue, 28 Aug 2007 | http://tinysubversions.com/2007/08/ten-kinds-of-dumb/comment-page-1/#comment-3486

I think this is why the editors of Next Generation (the ’90s print magazine, not the current game business digest) always railed against a scoring system as any kind of useful metric… not that it stopped them from having one. Their star system, though, intentionally said nothing about whether the game was any fun. It supposedly stated how much of a progression (or regression) in video games the title represented. This had the convenient quality of skirting the question of “How would this game be rated now?” because the rating only had meaning in the context of the game’s contemporaries, and let them dock points for uninspired sequels just because.

What I always found to be far more telling about a game’s quality was in issues of Diehard GameFan where the editors would list the games they’d been playing the most for that particular month. For any given month, being on the list didn’t mean much, but any game that stayed on it over the course of several months must have had some serious value (or major payoff from the publisher).

Besides this, there really aren’t many perfect games. I can think of Pong and Tetris as two, because their play mechanics never needed any embellishment to be compelling, and they can’t really be improved through the addition of features (though that hasn’t stopped legions of developers from trying). Does it make them the best games ever? Maybe, but there are other games which are just as fun despite room for improvement (like Bomberman — great when done right but so hit-or-miss over its history). Just being a perfect game doesn’t automatically make it the most valuable game at a price point, especially where simplicity yields replicability — $50 is too much for Tetris, considering all the free clones out there.

A review score accurately reflects none of these: perfection, innovation, longevity, value at a price point, or value regardless of price point. It only gives a subjective and corruptible degree of recommendation, which is meaningless without both a time context and a set of other scores from the same reviewer to compare against. It does not expose how much of it reflects the reviewer’s own bias rather than any objective evaluation. It is not smart, not useful, not even “the best we’ve got,” and it is brittle over time.

But in aggregate they can be indicative. I took Alien Syndrome back to Gamestop and exchanged it for Mario Strikers without even opening it, because the Gamerankings average for the former was 40+ percentage points below that of the latter. But I can’t say with perfect confidence that I made the right decision. With a few notable exceptions, I generally prefer games in which I am competent. MSC does not fall into this category, despite all the playtime I have put into it so far. I have to think that my inability to grasp the game dynamics would color my review and its associated score if I had to write one (“friendly AI is poorly implemented. Teammates stand around while the opponent scores goals on you”).

By: Patrick | Tue, 28 Aug 2007 | http://tinysubversions.com/2007/08/ten-kinds-of-dumb/comment-page-1/#comment-3485

The problem is in the institution of game reviewing versus game criticism – the emphasis is quantitative (is game X worth money value Y?) instead of qualitative (this game is meaningful because of Z – Z!). When reviewers try to veer into the qualitative, they’re ultimately thwarted by the quantitative institution that they participate in.
