The adoption of Deep Neural Networks (DNNs) such as CNNs, RNNs, GANs, LSTMs, and LLMs in critical domains such as healthcare, governance, misinformation detection, and hate speech detection has created an unprecedented demand for models that are both high-performing and interpretable. Despite their success, the opaque nature of DNNs undermines user trust and raises ethical concerns, especially in sensitive applications. The need for Explainable Artificial Intelligence (XAI) methods has never been more pressing, and both post-hoc and self-explaining approaches hold promise in addressing this challenge. The IJCNN 2025 Special Session on Explainable Deep Neural Networks for Responsible AI: Post-Hoc and Self-Explaining Approaches (DeepXplain 2025) aims to bring together researchers and practitioners to explore innovative methodologies that enhance the interpretability of DNNs while maintaining their predictive accuracy. Specific areas of focus include post-hoc techniques (e.g., SHAP, LIME, Grad-CAM), self-explaining neural network architectures (e.g., feature saliency, attention mechanisms, prototype networks, and SENNs (Self-Explaining Neural Networks)), and interdisciplinary evaluations of AI systems in terms of fairness, trust, and social impact.
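For readers less familiar with the post-hoc family, the following is a minimal, illustrative Grad-CAM sketch in PyTorch, not a reference implementation; it assumes a torchvision ResNet-18 with its `layer4` block as the target layer and a random tensor standing in for a preprocessed image:

```python
# Minimal Grad-CAM sketch (post-hoc attribution for a CNN classifier).
# Assumptions: torchvision ResNet-18; "layer4" as the target conv block.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Capture activations and gradients at the last convolutional block.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()  # backprop the top-class score

# Grad-CAM: average gradients spatially to weight each channel, then take
# a ReLU over the weighted sum of activation maps and upsample to the input.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The resulting `cam` tensor is a per-pixel relevance map that can be overlaid on the input image; methods such as SHAP and LIME pursue the same goal through perturbation- and game-theory-based attribution rather than gradients.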
In this special session, we aim to foster interdisciplinary collaboration, promote the ethical design of AI systems, and encourage the development of benchmarks and datasets for explainability research. Our goal is to advance both post-hoc and intrinsic interpretability approaches, bridging the gap between the high performance of deep neural networks and their transparency. By doing so, we seek to enhance human trust in these models and mitigate the risks of negative social impacts. Topics of interest include, but are not limited to:
The list above is by no means exhaustive, as the aim is to foster debate around all aspects of integrating explainability into deep neural networks.
Papers should be formatted according to the IJCNN 2025 formatting guidelines and submitted as a single PDF file. We welcome submissions across the full spectrum of theoretical and practical work, including research ideas, methods, tools, simulations, applications or demos, practical evaluations, position papers, and surveys. All papers will be peer-reviewed in a double-blind process and assessed on their novelty, technical quality, potential impact, clarity, and reproducibility (when applicable). Special Session submissions will be handled through EDAS; the submission link is https://edas.info/N31614. Please select the corresponding special session title (Explainable Deep Neural Networks for Responsible AI: Post-Hoc and Self-Explaining Approaches (DeepXplain 2025)) from the list of research topics in the submission system.
Note: All deadlines are AoE (Anywhere on Earth).
Accepted papers will appear on the Special Session website and will be included in the IJCNN conference proceedings.