(Open)Special Issue on Frontiers in Robotics and AI(SCI)

Journal: Frontiers in Robotics and AI

Special Issue: Trustworthy and Explainable Machine Learning for Human-Aligned Robotics and Intelligent Systems

Editors: Xingsi Xue, Pei-Wei Tsai, Jianhui Lv, Jeng-Shyang Pan

Submission Deadline: 25 April 2026

Website: https://www.frontiersin.org/research-topics/74642/trustworthy-and-explainable-machine-learning-for-human-aligned-robotics-and-intelligent-systems

Scope

Background

With the rapid advancement of robotics, intelligent sensing, and autonomous decision-making, robotic and AI systems are increasingly influencing human life in domains such as healthcare, manufacturing, transportation, and assistive technologies. Originally designed to automate repetitive tasks, these systems have evolved into autonomous, adaptive, and collaborative platforms that leverage machine learning (ML) to perform perception, reasoning, and decision-making in dynamic real-world environments. Such capabilities are reshaping the way humans interact with machines and enabling new forms of cooperation in both industrial and everyday contexts.

As ML-driven robotics continues to expand, new challenges have emerged in terms of trust, transparency, and human alignment. Although AI integration has enhanced autonomy, efficiency, and responsiveness, it has also raised concerns regarding reliability, interpretability, and ethical deployment. Current approaches such as deep learning, reinforcement learning, and transfer learning achieve impressive performance but often operate as opaque black boxes, limiting the ability of users to understand, predict, or validate system behavior. In safety-critical applications like autonomous vehicles, surgical robotics, and human-robot collaboration, these limitations present significant risks to acceptance, accountability, and regulatory compliance.

In response, this Special Issue focuses on advancing trustworthy and explainable ML techniques that ensure robotic and intelligent systems are effective, transparent, robust, and aligned with human values. Contributions are encouraged that bridge theory and practice, spanning algorithmic design, system-level integration, and real-world applications, with particular emphasis on human-centered, safety-critical, and dynamic environments. Interdisciplinary research that combines machine learning, robotics, control, human-robot interaction, and system engineering is especially welcome. Topics of interest include, but are not limited to:

(1) Robust and reliable ML algorithms for robotic autonomy in uncertain and dynamic environments

(2) Interpretable and explainable ML for robotics, autonomous systems, and human-robot collaboration

(3) Human-in-the-loop learning and feedback-driven adaptation in intelligent robotic systems

(4) Safety assurance, reliability assessment, and self-diagnosis of ML models in robotic applications

(5) Lightweight and resource-aware interpretable ML techniques for embedded robotic platforms

(6) Fail-safe control, anomaly detection, and recovery strategies for autonomous and assistive robots

(7) Transparent decision-making and explainable recommendation models in human-robot interaction

(8) Case studies demonstrating trust, accountability, and usability improvements in real-world deployments

(9) Benchmarking frameworks and metrics for evaluating reliability, interpretability, and fairness in ML for robotics and intelligent systems

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

Brief Research Report

Data Report

Editorial

FAIR² Data

General Commentary

Hypothesis and Theory

Methods

Mini Review

Opinion

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee, charged to authors, institutions, or funders.