Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety

Download the book Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety

Price: 49,000 toman (in stock)

Original-language edition of the book Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety

The download link for Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety will be available after payment.
The book description is given in the details section below.


This is the original-language edition; it is not in Persian.




About the book Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety

Title: Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety
Title (Persian translation): شبکه‌های عصبی عمیق و داده‌ها برای رانندگی خودکار: استحکام، کمی‌سازی عدم قطعیت، و بینش نسبت به ایمنی
Series:
Authors:
Publisher: Springer
Year of publication: 2022
Number of pages: 434 [435]
ISBN: 3031012321, 9783031012327
Language: English
Format: PDF
File size: 12 MB



After the payment process is completed, the download link will be provided. If you register and log in to your account, you will be able to view the list of books you have purchased.



Table of contents:


Foreword
Preface
About This Book
Introduction
Contents
About the Editors

Safe AI—An Overview

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
1 Introduction 2 Dataset Optimization 2.1 Outlier/Anomaly Detection 2.2 Active Learning 2.3 Domains 2.4 Augmentation 2.5 Corner Case Detection 3 Robust Training 3.1 Hyperparameter Optimization 3.2 Modification of Loss 3.3 Domain Generalization 4 Adversarial Attacks 4.1 Adversarial Attacks and Defenses 4.2 More Realistic Attacks 5 Interpretability 5.1 Visual Analytics 5.2 Intermediate Representations 5.3 Pixel Attribution 5.4 Interpretable Proxies 6 Uncertainty 6.1 Generative Models 6.2 Monte Carlo Dropout 6.3 Bayesian Neural Networks 6.4 Uncertainty Metrics for DNNs in Frequentist Inference 6.5 Markov Random Fields 6.6 Confidence Calibration 7 Aggregation 7.1 Ensemble Methods 7.2 Temporal Consistency 8 Verification and Validation 8.1 Formal Testing 8.2 Black-Box Methods 9 Architecture 9.1 Building Blocks 9.2 Multi-task Networks 9.3 Neural Architecture Search 10 Model Compression 10.1 Pruning 10.2 Quantization 11 Discussion References

Recent Advances in Safe AI for Automated Driving

Does Redundancy in AI Perception Systems Help to Test for Super-Human Automated Driving Performance?
1 Introduction 2 How Much Data is Needed for Direct Statistical Evidence of Better-Than-Human Driving? 3 Measurement of Failure Probabilities 3.1 Statistical Evidence for Low Failure Probability 3.2 Test Data for Redundant Systems 3.3 Data Requirements for Statistical Evidence of Low Correlation 4 Correlation Between Errors of Neural Networks in Computer Vision 4.1 Estimation of Reliability for Dependent Subsystems 4.2 Numerical Experiments 5 Conclusion and Outlook References

Analysis and Comparison of Datasets by Leveraging Data Distributions in Latent Spaces
1 Introduction 2 Related Works 3 Building Blocks of Our Approach 3.1 Variational Autoencoder DNN 3.2 Methods for Novelty Detection 3.3 Dataset Comparison Approach 4 Experimental Setup 4.1 Datasets 4.2 Experiments 4.3 VAE Hyperparameters 4.4 Training 5 Experimental Results and Discussion 5.1 Results 5.2 Discussion 6 Conclusions References

Optimized Data Synthesis for DNN Training and Validation by Sensor Artifact Simulation
1 Introduction 2 Related Works 3 Methods 3.1 Sensor Simulation 3.2 Dataset Divergence Measure 3.3 Datasets 4 Results and Discussion 4.1 Sensor Parameter Extraction 4.2 Sensor Artifact Optimization Experiment 4.3 EMD Cross-evaluation 5 Conclusions References

Improved DNN Robustness by Multi-task Training with an Auxiliary Self-Supervised Task
1 Introduction 2 Related Works 3 Multi-task Training Approach 4 Evaluation Setup 5 Experimental Results and Discussion 6 Conclusions References

Improving Transferability of Generated Universal Adversarial Perturbations for Image Classification and Segmentation
1 Introduction 2 Related Works 2.1 Environment Perception for Automated Vehicles 2.2 Adversarial Attacks 3 Method 3.1 Mathematical Notations 3.2 Method Introduction and Initial Analysis 3.3 Non-targeted Perturbations 3.4 Targeted Perturbations 4 Experiments on Image Classification 4.1 Experimental Setup 4.2 Non-Targeted Universal Perturbations 4.3 Targeted Universal Perturbations 5 Experiments on Semantic Segmentation 5.1 Experimental Setup 5.2 Non-targeted Universal Perturbations 5.3 Targeted Universal Perturbations 6 Conclusions References

Invertible Neural Networks for Understanding Semantics of Invariances of CNN Representations
1 Introduction 2 Background 3 Method 3.1 Recovering the Invariances of Deep Models 3.2 Interpreting Representations and Their Invariances 3.3 Implementation Details 4 Experiments 4.1 Comparison to Existing Methods 4.2 Understanding Models 4.3 Effects of Data Shifts on Models 4.4 Modifying Representations 4.5 Evaluation Details References

Confidence Calibration for Object Detection and Segmentation
1 Introduction 2 Related Works 3 Calibration Definition and Evaluation 3.1 Definitions of Calibration 3.2 Measuring Miscalibration 4 Position-Dependent Confidence Calibration 4.1 Histogram Binning 4.2 Scaling Methods 5 Experimental Evaluation and Discussion 5.1 Object Detection 5.2 Instance Segmentation 5.3 Semantic Segmentation References

Uncertainty Quantification for Object Detection: Output- and Gradient-Based Approaches
1 Introduction 2 Related Work 3 Methods 3.1 Uncertainty Quantification Protocols 3.2 Deep Object Detection Frameworks 3.3 Output-Based Uncertainty: MetaDetect 3.4 Gradient-Based Uncertainty for Object Detection 4 Experimental Setup 4.1 Databases, Models, and Metrics 4.2 Implementation Details 4.3 Experimental Setup and Results References

Detecting and Learning the Unknown in Semantic Segmentation
1 Introduction 2 Anomaly Detection Using Information and Entropy 3 Related Works 3.1 Anomaly Detection in Semantic Segmentation 3.2 Incremental Learning in Semantic Segmentation 4 Anomaly Segmentation 4.1 Methods 4.2 Evaluation and Comparison of Anomaly Segmentation Methods 4.3 Combining Entropy Maximization and Meta Classification 5 Discovering and Learning Novel Classes 5.1 Unsupervised Identification and Segmentation of a Novel Class 5.2 Class-Incremental Learning 5.3 Experiments and Evaluation 5.4 Outlook on Improving Unsupervised Learning of Novel Classes References

Evaluating Mixture-of-Experts Architectures for Network Aggregation
1 Introduction 2 Related Works 3 Methods 3.1 MoE Architecture 3.2 Disagreements Within an MoE 4 Experiment Setup 4.1 Datasets and Metrics 4.2 Topologies 4.3 Training 5 Experimental Results and Discussion 5.1 Architectural Choices 5.2 MoE Performance 5.3 Disagreement Analysis References

Safety Assurance of Machine Learning for Perception Functions
1 Introduction 2 Deriving Acceptance Criteria for ML Functions 2.1 Definition of Risk According to Current Safety Standards 2.2 Definition of Safety Acceptance Criteria for the ML Function 3 Understanding the Contribution of Safety Evidence 3.1 A Causal Model of Machine Learning Insufficiencies 3.2 Categories of Safety Evidence 4 Evidence Workstreams—Empowering Experts from Safety Engineering and ML to Produce Measures and Evidence 5 Examples for Evidence 5.1 Evidence from Confirmation of Residual Failure Rates 5.2 Evidence from Evaluation of Insufficiencies 5.3 Evidence from Design-Time Controls 5.4 Evidence from Operation-Time Controls 6 Combining Safety Evidence in the Assurance Case 7 Conclusions References

A Variational Deep Synthesis Approach for Perception Validation
1 Introduction 2 Related Work 3 Concept and Overview 4 VALERIE: Computational Deep Validation 4.1 Scene Generator 4.2 Validation Parameter Space (VPS) 4.3 Computation of Synthetic Data 4.4 Sensor Simulation 4.5 Computation and Evaluation of Perceptional Functions 4.6 Controller 4.7 Computational Aspects and System Scalability 5 Evaluation Results and Discussion 6 Outlook and Conclusions References

The Good and the Bad: Using Neuron Coverage as a DNN Validation Technique
1 Introduction 2 Related Works 3 Granularity and Novelty 4 Formal Analysis of Achievable Coverage 4.1 Description of the Task 4.2 Experimental Setup 4.3 Experimental Results 4.4 Discussion 5 Conclusion References

Joint Optimization for DNN Model Compression and Corruption Robustness
1 Introduction 2 Related Works 3 HCRC: A Systematic Approach 3.1 Preliminaries on Semantic Segmentation 3.2 Robustness Objective 3.3 Compression Objective 3.4 HCRC Core Method 4 Experimental Setup 4.1 Datasets and Semantic Segmentation Network 4.2 Image Corruptions 4.3 Metrics 4.4 Training Framework 5 Experimental Results and Discussion 5.1 Ablation Studies 5.2 Comparison With Reference Baselines 6 Conclusions References

About the book (original-language description):


This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence.

Environment perception for highly automated driving heavily employs deep neural networks, which face many challenges. How much data do we need for training and testing? How can we use synthetic data to save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: how do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem, particularly for DNNs employed in automated driving: what are useful validation techniques, and what about safety?
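One of the uncertainty techniques listed in the book's table of contents, Monte Carlo Dropout, addresses the question of knowing when a network is unsure: dropout is kept active at inference time, and the spread across repeated stochastic forward passes serves as an uncertainty signal. The following is only a minimal illustrative sketch with a toy one-layer network, not code from the book; all names and the network itself are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, drop_p=0.5):
    """One stochastic forward pass of a toy one-layer net with dropout."""
    mask = rng.random(w.shape) >= drop_p      # randomly drop weights
    h = x @ (w * mask) / (1.0 - drop_p)       # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-h))           # sigmoid "confidence"

def mc_dropout_predict(x, w, n_samples=100):
    """Mean prediction and its spread over repeated stochastic passes."""
    samples = np.stack([forward(x, w) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

w = rng.normal(size=(4, 1))                   # toy weights
x = rng.normal(size=(1, 4))                   # one toy input
mean, std = mc_dropout_predict(x, w)
# A large std signals that the network is uncertain about this input.
```

With a real trained network, the same idea applies by leaving the dropout layers in training mode at test time and aggregating several forward passes per input.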

This book unites the views from both academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at the core of it. This book is unique: In its first part, an extended survey of all the relevant aspects is provided. The second part contains the detailed technical elaboration of the various questions mentioned above.



