Second International Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC 2019):
- Testing the robustness of attribution methods for convolutional neural networks in MRI-based Alzheimer's disease classification
- UBS: A Dimension-Agnostic Metric for Concept Vector Interpretability Applied to Radiomics
- Generation of Multimodal Justification Using Visual Word Constraint Model for Explainable Computer-Aided Diagnosis
- Incorporating Task-Specific Structural Knowledge into CNNs for Brain Midline Shift Detection
- Guideline-based Additive Explanation for Computer-Aided Diagnosis of Lung Nodules
- Deep neural network or dermatologist?
- Towards Interpretability of Segmentation Networks by Analyzing DeepDreams

9th International Workshop on Multimodal Learning for Clinical Decision Support (ML-CDS 2019):
- Towards Automatic Diagnosis from Multi-modal Medical Data
- Deep Learning based Multi-Modal Registration for Retinal Imaging
- Automated Enriched Medical Concept Generation for Chest X-ray Images
Description: The key idea of this book is that hinging hyperplanes, neural networks, and support vector machines can be transformed into fuzzy models, and that the interpretability of the resulting rule-based systems can be ensured by special model reduction and visualization techniques.
Author: Jorge Casillas; O. Cordón; Francisco Herrera Triguero Title: Interpretability Issues in Fuzzy Modeling ISBN: 3642057020 ISBN-13(EAN): 9783642057021 Publisher: Springer Rating: Price: 36570.00 RUB Availability: Available from supplier; delivered on order.
Description: Fuzzy modeling has become one of the most productive and successful outcomes of fuzzy logic. Research on the topic over the last two decades has focused mainly on exploiting the flexibility of fuzzy models to obtain the highest accuracy.
Author: Mishra Title: Explainable AI Recipes ISBN: 1484290283 ISBN-13(EAN): 9781484290286 Publisher: Springer Rating: Price: 4890.00 RUB Availability: Available from supplier; delivered on order.
Description: Understand how to use Explainable AI (XAI) libraries and build trust in AI and machine learning models. This book takes a problem-solution approach to explaining machine learning models and their algorithms. It starts with model interpretation for supervised learning with linear models, covering feature importance, partial dependence analysis, and influential data point analysis for both classification and regression models. Next, it explains supervised learning with non-linear models and state-of-the-art frameworks such as SHAP values/scores and LIME for local interpretation. Explainability for time series models is covered using LIME and SHAP, and natural language processing tasks such as text classification and sentiment analysis are explained with ELI5 and ALIBI. The book concludes with complex models such as neural networks and deep learning models for classification and regression, using the Captum framework to show feature attribution, neuron attribution, and activation attribution. After reading this book, you will understand AI and machine learning models and be able to put that knowledge into practice to bring more accuracy and transparency to your analyses.

What You Will Learn:
- Create code snippets and explain machine learning models using Python
- Leverage deep learning models using the latest code with agile implementations
- Build, train, and explain neural network models designed to scale
- Understand the different variants of neural network models

Who This Book Is For: AI engineers, data scientists, and software developers interested in XAI.
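For readers unfamiliar with the SHAP-based feature attribution mentioned in the description, the following minimal sketch (not taken from the book) illustrates the general workflow: train a tree model, compute per-feature SHAP attributions, and aggregate them into a global importance ranking. The dataset and model choices here are illustrative assumptions only.

```python
# Minimal SHAP feature-attribution sketch (illustrative; not from the book).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Assumed example data and model: a small regression dataset and a random forest.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-sample, per-feature SHAP attributions for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Global view: mean absolute SHAP value per feature approximates its importance.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")
```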