Execution traces are often expensive to collect; consequently, much of the effort required for dependency detection is spent collecting them. We expect the results of dependency detection to vary with the number of execution traces collected: both the total number of patterns (i.e., possible combinations of different types of precursors and failures) and the ratios of the patterns (i.e., the ratio of the counts in the first column to the counts in the second column) in a contingency table influence the result of the G-test. To determine how the size and number of execution traces influence the G-test, we must answer two questions. First, how does the value of G change as the number of patterns in the execution traces increases? Second, how does the value of G change as the precursor-to-failure co-occurrence (i.e., the ratio of the upper-right to upper-left cells in the contingency table) diverges from the rest of the execution traces (i.e., the ratio of the lower-left to lower-right cells)? The first question addresses the sensitivity of the test to the size of the execution traces; the second addresses its sensitivity to noise: how large a difference is required to detect a dependency?
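To make the discussion concrete, the G statistic for a 2x2 contingency table can be sketched as follows. This is a minimal illustration of the standard log-likelihood-ratio formula, G = 2 * sum(O * ln(O / E)), not the paper's implementation; the function name and example counts are our own.

```python
import math

def g_statistic(table):
    """G-test statistic for a 2x2 contingency table of observed counts.

    table: [[a, b], [c, d]], where the first row holds the precursor
    counts (with and without the failure) and the second row holds the
    counts for the rest of the execution traces.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    g = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            if observed == 0:
                continue  # treat 0 * ln(0/E) as 0 by convention
            # Expected count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / total
            g += observed * math.log(observed / expected)
    return 2.0 * g

# When the row ratios differ (precursor co-occurs with failure more
# often than the rest of the traces), G is large; when the ratios
# match, G is zero and no dependency is detected.
g_skewed = g_statistic([[40, 10], [10, 40]])
g_uniform = g_statistic([[25, 25], [25, 25]])
```

Varying the cell counts in this sketch mirrors the two questions above: scaling all cells probes sensitivity to trace size, while shifting only the first-row ratio probes sensitivity to noise.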