Explainable AI for Interpretable Robot Decision-Making

Authors

  • Prof. Lui

Abstract

In robotics, decision-making is a central capability of autonomous systems. Ensuring that robots make transparent and interpretable decisions is of paramount importance, particularly in applications where human-robot collaboration, safety, and trust are essential. This paper examines explainable AI (XAI) techniques as a means to enhance the interpretability of robot decision-making processes. It surveys methods such as rule-based systems, model-agnostic interpretability tools, and explainable machine learning, all aimed at making robots' decisions more comprehensible to humans.
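To make the idea of a model-agnostic interpretability tool concrete, the following is a minimal sketch of perturbation-based feature attribution, in the spirit of the LIME-style approaches the abstract mentions. The decision model, feature names, and weights here are hypothetical stand-ins invented for illustration, not the paper's actual method.

```python
def decision_score(features):
    """Hypothetical black-box model: higher score means the robot stops.
    The weights are illustrative assumptions, not from the paper."""
    weights = {"obstacle_distance": -0.8, "speed": 0.5, "human_proximity": 1.2}
    return sum(weights[name] * value for name, value in features.items())

def explain(model, features, baseline=0.0):
    """Model-agnostic, occlusion-style attribution: perturb each feature
    to a baseline value and record how much the model's score changes."""
    full_score = model(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        attributions[name] = full_score - model(perturbed)
    return attributions

# Example sensor reading for one decision step (values are made up).
reading = {"obstacle_distance": 0.2, "speed": 1.0, "human_proximity": 0.9}
print(explain(decision_score, reading))
```

Because the sketch only calls the model as a function, the same `explain` routine works unchanged on any scorer, which is the defining property of model-agnostic tools.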

References

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., & Giannotti, F. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1-42.

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144).

Carvalho, C. R., Freire, M. M., & Santos, M. F. (2019). A survey on explainability in autonomous robotics. Robotics and Autonomous Systems, 118, 1-16.

Published

2019-11-05

Section

Articles

How to Cite

Explainable AI for Interpretable Robot Decision-Making. (2019). International Numeric Journal of Machine Learning and Robots, 3(3). https://injmr.com/index.php/fewfewf/article/view/3
