Today's Artificial Intelligence (AI) systems are at the epicenter of security threats across industries. Attackers seek to turn AI systems against organizations and society, intending to cause harm at multiple levels. Security, along with the explainability of AI, is a cornerstone of Digital Trust and Trustworthy AI. AI-based systems introduce new attack surfaces, which adversaries can exploit to construct attacks against vulnerabilities. Mission-critical systems using AI must therefore address the crucial problems of AI Security (AISec) and Explainable AI (XAI). In this early version of the discussion paper, we propose the new sub-field of Explainable AI Security (XAISec) at the intersection of AISec, XAI, and Explainable Security (XSec) for mission-critical systems. We propose that XAISec aim to explain the workings of AI security (justification of attacks and transparency about defenses) at an appropriate level, considering multiple aspects. XAISec is a niche, multidisciplinary greenfield whose need we ascertained and validated through informal interviews. We invite constructive criticism, collaboration, and contributions to jump-start the sub-field. We believe that with XAISec as an integral part of AI, AI can impact millions of lives across the globe, enabling smarter, sustainable, and evolutionary transformations.