Re: PS2 v CSC2 v F1 v RE050A v AD07 v P-zeros & others
Testing criteria -
It took three days to perform all the tests. On day one, we did wet and dry brake tests. We accelerated to 50 mph and then braked to a standstill. (We were unable to reach our traditional 70-mph mark because there simply wasn't enough track.) There was a benefit to that lower speed: It ensured that we were measuring the braking performance of the tires rather than the effects of brake fade.
Over the next two days, when we performed the wet and dry autocross tests, we were joined by Spencer Geswein of Full-Lock Industries. Geswein and Brian Smith formed Full-Lock in 2001 and offer a variety of driving-related services that include instruction, testing, and racing (www.full-lock.com). Both men spent at least 10 years testing tires for Michelin, so we figured that kind of experience would uncover some subtle tire traits that we might miss. And in case you're thinking Geswein might play favorites: he asked that we not tell him which tires he was driving on, so his testing of the 11 brands was done blind.
Geswein drove three laps through the autocross course, and then we drove three laps, in both dry and wet conditions. We averaged the six dry laps and the six wet laps to come up with a dry time and a wet time for each tire. In the end, we had performance results for six tests: braking, autocross, and lateral grip, each in wet and dry conditions.
We gave the top-performing tire in each test a score of 100 and scored the rest on their relative performance. For example, the tire that had the highest dry lateral grip of 0.95 g scored 100 points, and the tire placing last with 0.88 g received 92.6 points (0.88 is 92.6 percent of 0.95). We then added the scores from all the dry tests to arrive at a dry-performance rating and did the same for the wet-test results.
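If it helps to see the arithmetic, here is a minimal sketch of that proportional scoring in Python, using the lateral-grip figures quoted above (the function and tire names are ours, not the magazine's):

```python
def proportional_scores(results):
    """Score each tire against the best result in the test:
    the best gets 100, the rest get (value / best) * 100."""
    best = max(results.values())
    return {tire: value / best * 100 for tire, value in results.items()}

# Dry lateral grip in g. Only the 0.95 and 0.88 figures come from the
# article; the tire names are placeholders.
dry_grip = {"tire_a": 0.95, "tire_b": 0.88}
print(proportional_scores(dry_grip))
# {'tire_a': 100.0, 'tire_b': 92.63...}  -> rounds to 92.6, as in the text
```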
Things were more complicated when it came to determining the finishing order of the 11 tires. In addition to factoring in the wet and dry scores, we gave points based on a tire's price (we used the typical selling price in our calculations) and tread-wear grade, which is a rough estimate of how long a tire will have usable tread. For the price and tread-wear ratings we used the same proportional method.
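The article doesn't spell out which direction the scale runs for these two categories, but since a cheaper tire should score better, the proportional method presumably runs inverted for price (cheapest = 100) and straight for tread-wear grade (highest grade = 100). A sketch under that assumption, with illustrative numbers that are not from the article:

```python
def score_higher_is_better(results):
    """Straight proportional scale: the highest value sets 100 points."""
    best = max(results.values())
    return {k: v / best * 100 for k, v in results.items()}

def score_lower_is_better(results):
    """Inverted proportional scale: the lowest value (e.g. the
    cheapest price) sets the 100-point mark."""
    best = min(results.values())
    return {k: best / v * 100 for k, v in results.items()}

# Made-up prices (dollars) and tread-wear grades for two placeholder tires.
prices = {"tire_a": 180, "tire_b": 150}
treadwear = {"tire_a": 220, "tire_b": 280}
print(score_lower_is_better(prices))      # tire_b: 100.0, tire_a: ~83.3
print(score_higher_is_better(treadwear))  # tire_b: 100.0, tire_a: ~78.6
```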
But the categories were not all weighted equally. Our test focused on measuring performance, so we decided that results in the dry—lateral grip, for example—would carry the most weight. Wet performance is important, but less so for the purposes of this test. The price and the tread-wear scores had nothing to do with performance, but who doesn't consider what something will cost before buying?
After a lot of debate, our scheme worked like this: The tread-wear and price scores were cut in half, the wet scores were left unchanged, and the dry-performance scores were doubled. And that's how a tire's overall ranking was calculated for this test.
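Read that way, the overall rating reduces to a weighted sum: dry scores times two, wet scores at face value, price and tread-wear scores at half. A short sketch of that reading (the function name, argument names, and numbers are ours for illustration):

```python
def overall_rating(dry_scores, wet_scores, price_score, treadwear_score):
    """Weighted total per the scheme described above: dry x2, wet x1,
    price and tread-wear x0.5. Each *_scores list holds the three test
    scores (braking, autocross, lateral grip) on the 100-point scale."""
    return (2.0 * sum(dry_scores)
            + 1.0 * sum(wet_scores)
            + 0.5 * (price_score + treadwear_score))

# Illustrative numbers only, carried over from the sketches above.
print(overall_rating(dry_scores=[100.0, 97.2, 95.8],
                     wet_scores=[92.1, 88.4, 90.0],
                     price_score=83.3,
                     treadwear_score=78.6))
# 937.45
```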