An Interview With Futuremark

Written By : Jeff Nettleton
December 2004

Introduction

FUTUREMARK went through a bit of a torrid time earlier in the year, an odd situation for the former darlings of the hardware benchmarking community.

Now that much of the dust has settled I wanted to fire a few questions their way to see what they had to say, and to use the opportunity to share some of my own thoughts on the whole benchmarking issue.

My sincere thanks go to Nicklas Renqvist and Tero Sarkkinen for taking time out to provide us with the answers.

 

1/ 3DV
Let me start by poking an old wound, though I prefer to think of it as cleansing it. MadOnion.com and 3DMark were the gamer's best friend. Even people who didn't know a GPU from a UPS could quote you their 3DMark score. Then came the cheating allegations and the resulting public brawl, which badly dented people's faith in 3DMark as a viable benchmark. You took a real hammering for this and lost a lot of respect as a result. Was this fair, and do you feel you've done enough to win back that respect?

Nick: Whenever you manage to do something as popular as we did with the brand "3DMark", you can always expect that something will happen. What happened after the launch of 3DMark03, and all the allegations against us, was just that. We worked very hard to set the record straight, and decided to start working even more closely with the press & media in order to get some balance into the discussion. I personally think that we have proven our point, and demonstrated that what we did was necessary and right. It seems that now, a couple of months after the launch of 3DMark05, pretty much everyone has adopted 3DMark again. I think that everyone has noticed that we consistently follow the benchmarking guidelines we set for 3DMark03, and that we stick to them. Note that these guidelines have stayed essentially the same since the beginning of our company!


 

2/ 3DV
How can you ever hope to convince people that results from a synthetic benchmark are as valid as those from a "real" retail games title? The stock quote that keeps appearing is that "people play games, not benchmarks".

Tero: While it is true that people play games and not benchmarks, it really does not mean that only games can be used as benchmarks! Both games and synthetic applications can be designed well or poorly to serve as a benchmark. You really have to determine this case by case. There are good game benchmarks and there are bad game benchmarks; similarly, there are good synthetic benchmarks and there are bad synthetic benchmarks.

Some people ask whether benchmark-X (a game benchmark) tells them at what frame rate they can play game-Z. Of course the answer is 'No'. In the same vein, though, you have to understand that your PC's performance in playing game-Z does not tell you anything at all about how well your PC can play game-Y, not even if they share the same game engine technology. The reason for this is that games differ from each other in many different ways. Even when two games share the same engine, they usually have substantial differences in e.g. the level of DirectX technologies they use, the amount of polygons and scene complexity they have, differing fallback paths, etc. What makes it even worse is that these game benchmarks are often poorly documented or even undocumented, and thus the user really does not know what the benchmark has 'eaten'.

3DMark, on the other hand, is scientifically designed to feature technologies and workloads that are characteristic of next generation games. Furthermore, it has code paths that are meticulously designed to be fair for all hardware from the performance measurement point of view. To top it off, it is professionally documented and there are user friendly controls to turn on or off nearly every imaginable feature for advanced performance analysis.

In short, if you want to know how well your PC plays a certain game, there is no better benchmark for you than that specific game. However, if you do not know which games you will be playing, or you want to know how well in general your PC can play next generation games, it is a wise decision to include 3DMark as part of your testing to get that information!

Nick: We think it is very important that reviewers use as many game benchmarks as possible, along with 3DMark and/or PCMark. The games should of course be the most popular ones, and from various genres of game types. To make it even better, the timedemos could, or perhaps even should, be recorded in-house. Using 10 different first person shooters in a graphics card review isn't, in my opinion, a very reasonable decision, as not everyone plays those types of games. I think that for reviewing a graphics card, the best way to go is to use both a bunch of different genres of popular games (FPS, simulators, racing games, RTS etc) and then professional benchmarks like 3DMark and/or PCMark, to really emphasize how well the graphics card performs in different types of games and different situations, how well it stacks up in raw performance against others, and how future proof the hardware may be.


 

3/ 3DV
With all the recent claims of unfair 3DMark driver optimisations, how difficult would it be to implement some kind of compiler that randomly juggles code to defeat optimisations without affecting how a scene is rendered? Are you actively working on any other ways of detecting or defeating optimisations?

Nick: We have done some research into how to "fool" the drivers, but we haven't come up with a 100% working solution. Juggling shaders only works up to a point, as at some stage the juggling might itself interfere with performance. It wouldn't be fair if someone ended up with a juggled shader that is less efficient than another's. We are working on this matter as we speak, but as I said, so far there is no bullet-proof way. We are of course open to suggestions & ideas, so if anyone out there has a rock-solid working solution, don't hesitate to contact us!
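
To illustrate the idea Nick is describing, here is a minimal sketch of shader "juggling", written in Python purely for illustration. It is hypothetical and is not Futuremark's actual tooling: each instruction is reduced to a destination register and its source registers, and two adjacent instructions are swapped only when neither touches a register the other writes, so the rendered result cannot change while the byte pattern a driver might recognise does.

import hashlib
import random

# Hypothetical instruction format: (destination register, source registers, text).
# The register names and "shader assembly" below are invented for illustration only.

def independent(a, b):
    """True if instructions a and b can be reordered without changing the result."""
    dest_a, srcs_a, _ = a
    dest_b, srcs_b, _ = b
    return (dest_a != dest_b
            and dest_a not in srcs_b
            and dest_b not in srcs_a)

def juggle(instructions, seed):
    """Randomly swap adjacent independent instructions, preserving data dependencies."""
    rng = random.Random(seed)
    code = list(instructions)
    for _ in range(10 * len(code)):
        i = rng.randrange(len(code) - 1)
        if independent(code[i], code[i + 1]):
            code[i], code[i + 1] = code[i + 1], code[i]
    return code

def fingerprint(code):
    """Roughly what a driver's shader-detection hash would see: the raw shader text."""
    return hashlib.md5("\n".join(text for _, _, text in code).encode()).hexdigest()

if __name__ == "__main__":
    # A toy fragment: three independent multiplies feeding one add.
    shader = [
        ("r0", ("t0", "c0"), "mul r0, t0, c0"),
        ("r1", ("t1", "c1"), "mul r1, t1, c1"),
        ("r2", ("t2", "c2"), "mul r2, t2, c2"),
        ("r3", ("r0", "r1"), "add r3, r0, r1"),
    ]
    print(fingerprint(shader))
    print(fingerprint(juggle(shader, seed=2004)))  # usually a different hash, same maths

The catch Nick mentions shows up even in this toy: different orderings can schedule differently on real hardware, so a juggled shader is not guaranteed to run at exactly the same speed as the original.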

 