Vision-Language-Action (VLA) models condition robot actions on natural language, yet their sensitivity to instruction wording has not been characterised. This letter evaluates OpenVLA-7B on three manipulation tasks, comparing the action differences induced by synonymous rephrasing (e.g., "put" vs. "place" vs. "set") against those induced by specificity variation (brief vs. step-by-step instructions). Across 5 scenes per task with balanced comparisons (n = 50 pairs each), phrasing sensitivity is task-dependent: one task shows significantly larger phrasing than specificity differences (1.6×, p = 0.018), one shows no difference (p = 0.957), and one trends in the opposite direction (p = 0.092). In aggregate, phrasing and specificity produce comparable action differences (p = 0.395), and both exceed the stochastic noise floor by 2-4×. These results indicate that VLA instruction sensitivity is real but task-specific, and that deployment robustness cannot be inferred from single-task evaluation.
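The per-task comparisons described above can be sketched as a paired test on per-pair action-difference magnitudes. The following is a minimal illustration, not the paper's actual analysis: the test (a sign-flip permutation test), the effect sizes, and the data below are all hypothetical stand-ins for the unstated statistical procedure.

```python
import random
import statistics

def paired_permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided sign-flip permutation test on the mean paired difference.

    a, b: per-pair action-difference magnitudes (e.g., distance between
    predicted action vectors) under phrasing vs. specificity variation.
    """
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = statistics.mean(diffs)
    extreme = 0
    for _ in range(n_perm):
        # Under the null (no condition effect), each paired difference
        # is equally likely to have either sign.
        perm = statistics.mean(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(perm) >= abs(observed):
            extreme += 1
    return (extreme + 1) / (n_perm + 1)

# Hypothetical per-pair magnitudes for one task (n = 50 pairs each).
rng = random.Random(1)
phrasing = [abs(rng.gauss(0.16, 0.05)) for _ in range(50)]
specificity = [abs(rng.gauss(0.10, 0.05)) for _ in range(50)]
p = paired_permutation_test(phrasing, specificity)
print(f"p = {p:.4f}")
```

With a genuine difference between conditions, as simulated here, the permutation p-value falls well below 0.05; when the conditions are matched, it stays large, mirroring the mixed per-task outcomes reported above.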