Multi-agent large language model (LLM) pipelines are deployed for clinical decision support under the assumption that collaboration improves safety. We show this assumption is wrong: multi-agent clinical pipelines spontaneously generate dangerous clinical assertions (diagnoses, medications, and procedures) that no individual agent produces alone, with zero adversarial input. We term this Emergent Misinformation Genesis (EMG), distinct from hallucination, contamination, and error cascade. We introduce the Emergent Misinformation Rate (EMR), with a three-way decomposition, and the Clinical Escalation Index (CEI), and evaluate them across 4,800 trials spanning four model families (∼97,000 API calls). We report four central findings: (1) emergence is universal: 30-56% of network assertions are absent from every individual agent's output, and 85-100% of clinical vignettes are affected; (2) two independent judges rate 70-87% of emergent assertions as clinically dangerous (n≥499 each), a third judge (n=37) provides directional confirmation at 68%, and severity is confirmed against published AHA/ASA/ADA guidelines (42/45, 93%); (3) the network exhibits collective delusion: individual agents reject 70-90% of the assertions the network itself produces; (4) a five-line confidence-calibration prompt reduces emergence by 25-28% (p<0.001), but fact-check (FC) cross-checking fails for 3 of 4 models, and depth ablation reveals two distinct emergence regimes. We release a benchmark of 400 vignettes across 10 clinical domains, the EMR metric suite, and all code and data. Validation on 50 MIMIC-IV discharge summaries confirms comparable EMR (0.39-0.49) on real clinical notes.
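The core quantity above, EMR, is described as the fraction of network-level assertions that no individual agent produces alone. A minimal sketch of that definition follows; the function name, set-based representation of assertions, and example data are illustrative assumptions, not the paper's released implementation, which additionally decomposes EMR three ways.

```python
def emergent_misinformation_rate(network_assertions, per_agent_assertions):
    """Fraction of network assertions absent from every individual agent's output.

    network_assertions:   iterable of assertions produced by the multi-agent pipeline
    per_agent_assertions: list of sets, one per agent run in isolation
    (Representation as hashable strings/sets is an illustrative assumption.)
    """
    network = set(network_assertions)
    if not network:
        return 0.0
    # Union of everything any single agent asserts on its own.
    individual = set().union(*per_agent_assertions) if per_agent_assertions else set()
    # Emergent assertions: present in the network output, absent from all individuals.
    emergent = network - individual
    return len(emergent) / len(network)


# Toy example: the network asserts four items; agents alone cover only two.
network = ["dx: sepsis", "rx: heparin", "rx: warfarin", "proc: lumbar puncture"]
agents = [{"dx: sepsis"}, {"rx: heparin"}]
print(emergent_misinformation_rate(network, agents))  # → 0.5
```

Under this reading, the abstract's 30-56% range corresponds to EMR values of 0.30-0.56 per network run, matching the 0.39-0.49 reported on MIMIC-IV notes.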