Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support