Statistical Analysis
Vscan assessments made by the outpatient providers were compared with
those made by the blinded expert echocardiography readers. Discordance
was defined as disagreement between the two readers on the diagnosis of
a significant abnormality. Kappa coefficients (κ) were calculated as the
degree of agreement between the two readers; p-values <0.05 were
considered significant.
Kappa coefficients above 0.90 were considered to indicate identical or
nearly identical agreement, 0.80-0.90 a strong level of agreement,
0.60-0.79 moderate agreement, and 0.40-0.59 weak agreement.
The study was powered on the basis of kappa agreement for two variables:
(i) accuracy of valve disease severity grading (none, mild,
moderate/severe) and (ii) accuracy of ejection fraction grading (none,
mild, moderate/severe), assuming that patients will be equally
distributed across these categories, with one-third falling into each.
Power analyses were carried out for agreement levels (alternative
hypothesis) of 0.9 and 0.8 compared with baseline (null hypothesis)
levels of 0.6 and 0.7, at sample sizes of 100 to 500 in increments of
50, with a two-sided alpha of 0.05. Because of the large volume of
echocardiograms performed at Abbott Northwestern Hospital, a sample size
of 350 was selected; this achieves a power of 0.91 to detect a true
kappa value of 0.80 (alternative hypothesis) in a test of H0: kappa =
0.70.
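As a rough, non-authoritative cross-check of the reported power figure, the sketch below estimates power by Monte Carlo simulation rather than by the analytic method the investigators presumably used; the generative model (a second rater who copies the reference grade with probability equal to the target kappa) and the equal one-third category prevalences are assumptions made for illustration.

```python
# Illustrative Monte Carlo check of power at n = 350 for true kappa 0.80
# versus H0: kappa = 0.70. The generative model below is an assumption,
# not the protocol's power calculation.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
K = 3          # three grades, assumed equally likely (one-third each)
N = 350        # planned sample size
ALPHA = 0.05   # two-sided
SIMS = 2000    # simulated studies per hypothesis

def simulate_kappa(true_kappa: float, n: int) -> float:
    """Simulate paired ratings whose expected Cohen's kappa is true_kappa.

    Rater A grades uniformly at random; rater B copies A with probability
    true_kappa and otherwise grades uniformly at random. With equal
    marginals this yields an expected kappa equal to true_kappa.
    """
    a = rng.integers(0, K, size=n)
    copy = rng.random(n) < true_kappa
    b = np.where(copy, a, rng.integers(0, K, size=n))
    return cohen_kappa_score(a, b)

# Sampling distribution of kappa-hat under H0 (kappa = 0.70) gives the
# two-sided rejection threshold; power is the fraction of draws under the
# alternative (kappa = 0.80) that exceed the upper threshold.
null_draws = np.array([simulate_kappa(0.70, N) for _ in range(SIMS)])
alt_draws = np.array([simulate_kappa(0.80, N) for _ in range(SIMS)])
upper_cut = np.quantile(null_draws, 1 - ALPHA / 2)
power = np.mean(alt_draws > upper_cut)
print(f"estimated power at n={N}: {power:.2f}")
```

Under these assumptions the simulated power can be compared with the stated 0.91; any discrepancy would reflect the simplified generative model used here rather than the protocol's own calculation.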