An entire research field is working towards describing the rationale behind AI decision-making: explainable AI (XAI). Momentum in the field is growing as AI systems demonstrate performance and capabilities far beyond those of previous technologies, yet encounter hurdles of practicality and legal compliance.
XAI will prove especially valuable in financial services, where the low signal-to-noise ratio typical of financial data demands a strong feedback loop between user and machine. AI solutions that leave no room for human feedback to guide outputs may never be adopted, losing out to traditional approaches that rely on domain expertise and experience honed over many years. Regulation raises the stakes further by preventing AI-powered products that are not auditable from even entering the market.
Our new discussion paper offers a primer on XAI and argues that applying it successfully means taking a user-centric approach that begins at the start of solution development. As we explore, designing for explainability requires evaluating the transparency needs of an AI system and accounting for them at every stage, from the first steps of building a solution through to system rollout.
Fill out the form to download our new discussion paper on explainable AI.