Explainable AI (XAI) is artificial intelligence that is programmed to describe its purpose, rationale and decision-making process in a way that can be understood by the average person. XAI is often discussed in relation to deep learning and plays an important role in FAT ML (fairness, accountability and transparency in machine learning). XAI provides general information about how an AI program makes a decision by disclosing:

- The program's strengths and weaknesses.
- The specific criteria the program uses to arrive at a decision (see the sketch after this list).
- Why a program makes a particular decision as opposed to alternatives.
- The level of trust that's appropriate for various types of decisions.
- What types of errors the program is prone to.
- How errors can be corrected.
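To make the second point concrete, here is a minimal sketch of one common XAI approach: using an inherently interpretable model, in this case a shallow decision tree built with scikit-learn, whose learned decision rules can be printed and read directly. The dataset is synthetic, and the feature names (income, debt_ratio, age, tenure) are hypothetical stand-ins chosen only for illustration, not a reference to any real system.

```python
# A minimal sketch of explainability via an interpretable model.
# The data and feature names are hypothetical illustrations.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real dataset (e.g., a loan-approval table).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "age", "tenure"]  # hypothetical labels

# A shallow tree keeps the decision logic small enough to inspect.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# "The specific criteria the program uses": the learned split rules.
print(export_text(model, feature_names=feature_names))

# Relative influence of each input on the model's decisions overall.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

Printing the learned rules and the feature importances speaks to two of the disclosures above: the specific criteria behind a decision and the relative weight of each input. For black-box models that cannot be inspected this way, post-hoc techniques such as surrogate models or local explanation methods serve a similar purpose.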
An important goal of XAI is to provide algorithmic accountability. Until recently, AI systems have essentially been black boxes. Even if the inputs and outputs are known, the algorithms used to arrive at a decision are often proprietary or not easily understood, even when the inner workings of the program are open source and freely available. As artificial intelligence becomes increasingly prevalent, it is becoming more important than ever to disclose how issues of bias and trust are being addressed. The EU's General Data Protection Regulation (GDPR), for example, includes a right to explanation clause.