Krithika Anil et al.

Large language model (LLM)-based generative AI offers the promise of responsive health information for patients and carers, but presents ethical and legal challenges when used outside clinical oversight. This mini-review maps these considerations for patients accessing information on long-term conditions in non-clinical settings. Following PRISMA guidelines, we searched databases including MEDLINE and EMBASE between April and May 2025. The review synthesizes 24 cross-sectional studies of patient-facing LLMs, excluding clinician-support tools. Results indicate that LLMs perform well on general topics but struggle with specialized information, often generating overly complex responses with unreliable citations. Ethical concerns centre on inaccuracy, insufficient empathy, and the potential exacerbation of health inequalities, while analyses of legal challenges focus mainly on liability and consent. We conclude that current technical limitations, together with regulatory gaps regarding device classification and safety obligations, could pose risks to patients. Consequently, stakeholders must establish clear accountability frameworks, and LLMs should, for now, function only as supplementary tools rather than replacements for expert clinical advice. Further research into agent-based LLM architectures, in which specialized LLM agents collaborate to verify information, reason symbolically, and interface with patient health records under strict data governance, may address the current limitations of LLMs.