Event detection analysis is a data-model comparison technique in which all observational and numerical values are converted into a binary yes-no categorization of being in or out of "event state." Common metrics within event detection analysis include several skill scores, specifically those of Heidke, Peirce (true skill statistic), Clayton, and Gilbert (equitable threat score). All of these skill scores use the general skill score formula, comparing a metric score for the new model against that of a reference model. Moreover, all of them use, to some degree, the same "expected random matrix" as the reference model. This matrix reshuffles the two number sets of observed and modeled events, randomizing when events occur. These reference values are, therefore, constructed from the new model's own results and depend on its performance; that is, they are not calculated relative to an independent reference model. It is shown that, for a given metric score (holding proportion correct or critical success index constant), these skill scores take a range of possible values; conversely, identical skill scores can result from a range of underlying metric scores. It is recommended that these named skill scores be retired in favor of one of the presented alternatives. One reference model option uses the observed events in place of the new modeled events, while the other uses a 50-50 "coin flip" option (i.e., truly random chance). These new skill score formulas map one-to-one with the underlying metric values and are, therefore, appropriate for inter-model comparison or intra-model assessment.
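To make the dependence concrete, the sketch below computes the Heidke skill score via the general skill score formula, SS = (PC - PC_ref) / (1 - PC_ref), where PC is proportion correct and PC_ref comes from the "expected random" reshuffling of the marginal event counts; it also shows a fixed 50-50 "coin flip" reference for comparison. This is an illustrative implementation under standard 2x2 contingency-table definitions (hits, false alarms, misses, correct negatives); the function names and the exact form of the paper's proposed alternatives are assumptions, not taken from this abstract.

```python
def proportion_correct(hits, false_alarms, misses, correct_negatives):
    """Fraction of yes-no categorizations the model got right."""
    n = hits + false_alarms + misses + correct_negatives
    return (hits + correct_negatives) / n

def random_reference_pc(hits, false_alarms, misses, correct_negatives):
    """Expected proportion correct of the 'expected random matrix':
    forecasts and observations independently reshuffled while keeping
    their marginal totals. Note this depends on the model's own
    forecast totals, so the reference is not model-independent."""
    n = hits + false_alarms + misses + correct_negatives
    forecast_yes = hits + false_alarms
    forecast_no = misses + correct_negatives
    obs_yes = hits + misses
    obs_no = false_alarms + correct_negatives
    return (forecast_yes * obs_yes + forecast_no * obs_no) / n**2

def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """General skill score formula with the reshuffled-random reference."""
    pc = proportion_correct(hits, false_alarms, misses, correct_negatives)
    pc_ref = random_reference_pc(hits, false_alarms, misses, correct_negatives)
    return (pc - pc_ref) / (1.0 - pc_ref)

def coin_flip_skill_score(hits, false_alarms, misses, correct_negatives):
    """Alternative reference: a fixed 50-50 coin flip (PC_ref = 0.5),
    which maps one-to-one with proportion correct."""
    pc = proportion_correct(hits, false_alarms, misses, correct_negatives)
    return (pc - 0.5) / (1.0 - 0.5)
```

Because `random_reference_pc` varies with the model's own forecast totals, two models with identical proportion correct can receive different Heidke skill scores, whereas the coin-flip version preserves the metric's ordering across models.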