The surprising thing about Holte's results is not the ceiling effect, but that 1R* is a much simpler algorithm than C4. Roughly speaking, 1R* bases its classifications on the single most predictive feature of the items to be learned, whereas C4 builds more complex classification rules with many features. Intuition tells us more features are better; for example, to classify mushrooms as poisonous or benign we might look at color, shape, odor, habitat, and so on. But in fact, the most predictive feature of mushrooms (odor) is an excellent basis for classification (see the MU column of Table 3.2). C4's decision rules included six features of mushrooms (and 6.6 features on average, over all the datasets), but its average classification accuracy was less than two percentage points higher.
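The core of 1R can be sketched in a few lines. The sketch below is a simplified illustration, not Holte's implementation: it handles only discrete features, whereas the full 1R* also discretizes continuous features and handles missing values. For each feature it builds a rule mapping each observed value to the majority class, then keeps the single feature whose rule best fits the training data:

```python
from collections import Counter, defaultdict

def one_r(X, y):
    """Pick the single feature whose value -> majority-class rule
    best predicts the training labels (simplified 1R sketch)."""
    best = None  # (training accuracy, feature index, value -> class table)
    for f in range(len(X[0])):
        # Count class labels for each value of feature f.
        counts = defaultdict(Counter)
        for row, label in zip(X, y):
            counts[row[f]][label] += 1
        # Rule: map each feature value to its majority class.
        table = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        correct = sum(table[row[f]] == label for row, label in zip(X, y))
        acc = correct / len(y)
        if best is None or acc > best[0]:
            best = (acc, f, table)
    return best

# Toy mushroom-style data (hypothetical): features are (color, odor);
# odor alone separates the classes, so 1R should select it.
X = [("red", "foul"), ("brown", "none"), ("red", "none"), ("brown", "foul")]
y = ["poisonous", "edible", "edible", "poisonous"]
acc, feature, rule = one_r(X, y)
```

On this toy data the rule built from odor (feature index 1) classifies every training item correctly, so it is selected over color.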
Holte's algorithm, 1R*, provides a powerful control condition for research on classification algorithms. Suppose your innovative algorithm achieves an average 86% classification accuracy. Until Holte's study, you didn't know how much of this score was purchased with your innovation. Now, however, you know that your idea is worth perhaps two or three percentage points, because 1R*, an utterly simple algorithm that lacks your idea, performs nearly as well.
If, as we suspect, the practical maximum classification accuracy for the Irvine datasets is roughly 87%, and the average performance of 1R* is roughly 84%, what does this mean for research on classification algorithms? Two interpretations come to mind: Perhaps the Irvine datasets are typical of the world's classification tasks, in which case the range of performance between the simplest and the most sophisticated algorithms is only two or three points (this interpretation is discussed further in chapter 9). Alternatively, C4 and other sophisticated algorithms might be at ceiling on the Irvine datasets, and the range of performance will not widen until more challenging datasets are introduced.