Applied Machine Learning Explainability Techniques: Make ML models explainable and trustworthy for practical applications using LIME, SHAP, and more

Packt Publishing Ltd · Ebook · 304 pages

About this ebook

Leverage top XAI frameworks to explain your machine learning models with ease and discover best practices and guidelines to build scalable explainable ML systems


Key Features

• Explore various explainability methods for designing robust and scalable explainable ML systems

• Use XAI frameworks such as LIME and SHAP to make ML models explainable and solve practical problems

• Design user-centric explainable ML systems using guidelines provided for industrial applications


Book Description

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy, and promotes AI adoption for industrial and research use cases.

Applied Machine Learning Explainability Techniques offers a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll gain the practical experience needed to apply XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users.

By the end of this ML book, you'll be equipped with best practices for the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing the key pain points you encounter.


What you will learn

• Explore various explanation methods and their evaluation criteria

• Learn model explanation methods for structured and unstructured data

• Apply data-centric XAI for practical problem-solving

• Gain hands-on exposure to LIME, SHAP, TCAV, DALEX, ALIBI, DiCE, and other frameworks (see the brief SHAP sketch after this list)

• Discover industrial best practices for explainable ML systems

• Use user-centric XAI to bring AI closer to non-technical end users

• Address open challenges in XAI using the recommended guidelines
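As a flavor of the kind of hands-on exposure the book covers, here is a minimal sketch of explaining a tabular model with SHAP. It is an illustrative example rather than code from the book: the dataset, model, and plot choices are assumptions made only for this snippet.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple regressor on a toy tabular dataset (illustrative choice)
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize per-feature contributions to the model's predictions
shap.summary_plot(shap_values, X)

TreeExplainer is used here because it is fast for tree ensembles; the same pattern applies to other model types via shap.KernelExplainer or the unified shap.Explainer interface.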


Who this book is for

This book is for scientists, researchers, engineers, architects, and managers who are actively engaged in machine learning and related fields. Anyone interested in problem-solving using AI will benefit from it, and AI/ML practitioners working with data science, ML, and DL will be able to put their knowledge to work with this practical guide. Foundational knowledge of Python, ML, DL, and data science is recommended. The book is ideal if you're a data or AI scientist, AI/ML engineer, AI/ML product manager, AI product owner, AI/ML researcher, or UX and HCI researcher.

About the author

Aditya Bhattacharya is an Explainable AI researcher at KU Leuven on a mission to bring AI closer to end users. Previously, he worked as the AI Lead and a data scientist at West Pharmaceuticals. He has around 6 years of experience in data science, machine learning, IoT, and software development, and has led more than 20 AI projects and programs democratizing AI practice at West and Microsoft. At West, he helped form the AI team and developed end-to-end solutions from scratch. He also has about 2 years of people management experience at West, where he led and managed a global team of 10+ members.
