The Cambridge Handbook Of Responsible Artificial Intelligence: Interdisciplinary Perspectives

Download the book The Cambridge Handbook Of Responsible Artificial Intelligence: Interdisciplinary Perspectives

Price: 49,000 Toman (in stock)

The Cambridge handbook of responsible artificial intelligence: interdisciplinary perspectives (original-language edition)

The download will become available after payment is completed.
A description of the book is given in the Details section below.


This is the original edition; the book is not in Persian.




Details about the book The Cambridge Handbook Of Responsible Artificial Intelligence: Interdisciplinary Perspectives

Title: The Cambridge Handbook Of Responsible Artificial Intelligence: Interdisciplinary Perspectives
Edition: 1st
Persian title: The Cambridge Responsible AI Handbook: Interdisciplinary Perspectives
Series: Cambridge Law Handbooks
Authors:
Publisher: Cambridge University Press
Publication year: 2022
Pages: 528
ISBN: 1009207865, 9781009207898
Language: English
Format: PDF
File size: 5 MB



After the payment process is completed, the download link for the book will be provided. If you register and log in to your account, you will be able to view the list of books you have purchased.


Table of contents:


Cover; Half-title; Title page; Copyright information; Contents; List of Figures; List of Contributors; Acknowledgements

Introduction: Outline (Part I: Foundations; Part II: Approaches to AI Governance; Part III: Responsible AI Liability Schemes; Part IV: Fairness and Non-Discrimination; Part V: Responsible Data Governance; Part VI: Responsible Corporate Governance of AI Systems; Part VII: Responsible AI in Healthcare and Neurotechnology; Part VIII: Responsible AI for Security Applications and in Armed Conflict)

Part I: Foundations of Responsible AI

1. Artificial Intelligence: Key Technologies and Opportunities
   I. Introduction; II. Machine Learning (1. Key Technology Big Data; 2. Key Technology Deep Learning); III. Robotics (1. Key Technology Navigation; 2. Key Technology Autonomous Manipulation); IV. Current and Future Fields of Application and Challenges

2. Automating Supervision of AI Delegates
   I. AIs As Tools, Agents, or Delegates; II. Aligned Delegates; III. The Necessity of Human Supervision; IV. Beyond Human Supervision; V. AI Supervision As a Cooperative Endeavour

3. Artificial Moral Agents: Conceptual Issues and Ethical Controversy
   I. Artificial Morality and Machine Ethics; II. Some Examples for Artificial Moral Agents; III. Classification of Artificial Moral Agents; IV. Artificial Systems As Functional Moral Agents; V. Approaches to Moral Implementation: Top-Down, Bottom-Up, and Hybrid; VI. Ethical Controversy about Artificial Moral Agents (1. Are Artificial Moral Agents Inevitable?; 2. Are Artificial Moral Agents Reducing Ethics to Safety?; 3. Can Artificial Moral Agents Increase Trust in AI?; 4. Do Artificial Moral Agents Prevent Immoral Use by Design?; 5. Are Artificial Moral Agents Better than Humans?; 6. Does Reasonable Pluralism in Ethics Speak against Artificial Moral Agents?; 7. Do Artificial Moral Agents Threaten Our Personal Bonds?; 8. Which Impact Does Artificial Morality Have on Ethical Theory?; 9. Is It Wrong to Delegate Moral Decision-Making to Artificial Moral Agents?; 10. Who Is Responsible for the Decisions of Artificial Moral Agents?); VII. Conclusion: Guidelines for Machine Ethics

4. Risk Imposition by Artificial Agents: The Moral Proxy Problem
   I. Introduction; II. The Moral Proxy Problem; III. The Low-Level Challenge to Risk Neutrality in Artificial Agent Design; IV. Risk Aversion and the High-Level Agential Perspective; V. Back to the Moral Proxy Problem; VI. Conclusion

5. Artificial Intelligence and Its Integration into the Human Lifeworld
   I. Introduction; II. The Object and the Subject Conception of AI; III. Why the Comparison of Human and Artificial Intelligence Is Misleading; IV. The Multiple Roles of the Evaluator in the Turing Test; V. Hidden in Plain Sight: The Contribution of the Evaluator; VI. The Overlooked Lifeworld

Part II: Current and Future Approaches to AI Governance

6. Artificial Intelligence and the Past, Present, and Future of Democracy
   I. Introduction: How AI Is Political; II. Democracy and Technology (1. Early Democracy and the Materiality of Small-Scale Collective Choice; 2. Modern Democracy and the Materiality of Large-Scale Collective Choice; 3. Democracy and Technology: Natural Allies?); III. Democracy, AI, and the Grand Narratives of Techno-Skepticism (1. Lewis Mumford and the Megamachine; 2. Martin Heidegger and the World As Gestell; 3. Herbert Marcuse and the Power of Entertainment Technology; 4. Jacques Ellul and Technological Determinism); IV. The Grand Democratic AI Utopia; V. AI and Democracy: Possibilities and Challenges for the Digital Century (1. Public Spheres; 2. Political Power; 3. Economic Power); VI. Conclusion

7. The New Regulation of the European Union on Artificial Intelligence: Fuzzy Ethics Diffuse into Domestic Law and Sideline International Law
   I. Introduction; II. The Creation of Ethical Norms on AI; III. Diffusion of Ethical Norms into Domestic Law: The New Regulation of the European Union on AI; IV. International Law Sidelined; V. Conclusion and Outlook

8. Fostering the Common Good: An Adaptive Approach Regulating High-Risk AI-Driven Products and Services
   I. Introduction; II. Key Notions and Concepts; III. Drawbacks of Current Regulatory Approaches of High-Risk AI Products and Services; IV. Specific Lacunae and Shortcomings of Current AI Regulation (1. EU Regulation of AI-Driven Medical Devices; 2. National Regulation of Semi-Autonomous Cars; 3. General AI Rules and Principles: International Soft Law and the Draft EU AI Regulation (a. International Regulation? International Soft Law!; b. Draft EU AI Regulation); 4. Interim Conclusion); V. A New Approach: Adaptive Regulation of AI-Driven High-Risk Products and Services (1. A New Approach; 2. Key Elements of Adaptive Regulation of AI High-Risk Products and Services; 3. Advantages of Adaptive Regulation (a. Flexibility; b. Risk Sensitiveness; c. Potential Universality and Possible Regionalization; d. Monitoring of Risks; e. Democratic Legitimacy and Expert Commissions; f. No Insurance Market Dependency); 4. Challenges of an Adaptive Regulation Approach for AI-Driven High-Risk Products (a. No Financial Means?; b. Ambiguity and Overregulation?; c. Too Early to Regulate?; d. No Independent Experts?; e. Unacceptable Joint Liability of Companies?)); VI. Determining the Regulatory Capital; VII. Dissent and Expert Commission; VIII. Summary

9. China's Normative Systems for Responsible AI: From Soft Law to Hard Law
   I. Introduction; II. The Multiple Exploration of Responsible AI (1. The Impact of AI Applications Is Regarded As a Revolution; 2. The Social Consensus Established by Soft Law; 3. The Ambition toward a Comprehensive Legal Framework); III. The Legally Binding Method to Achieve Responsible AI (1. Responsible AI Based on Data Governance; 2. Responsible AI Based on Algorithm Governance; 3. Responsible AI Based on Platform Governance; 4. Responsible AI under Specific Scenarios); IV. Conclusion

10. Towards a Global Artificial Intelligence Charter
   I. Introduction; II. The 'Race-to-the-Bottom' Problem; III. Prevention of an AI Arms Race; IV. A Moratorium on Synthetic Phenomenology; V. Dangers to Social Cohesion; VI. Research Ethics; VII. Meta-Governance and the Pacing Gap; VIII. Conclusion

11. Intellectual Debt: With Great Power Comes Great Ignorance

Part III: Responsible AI Liability Schemes

12. Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks
   I. Introduction; II. Dimensions of AI and Corresponding Risks Posed (1. Traditional (Physical) Safety Risks; 2. Fundamental Rights Risks; 3. Overlaps and In-Between Categories (a. Cybersecurity and Similar New Safety Risks; b. Pure Economic Risks)); III. AI As a Challenge to Existing Liability Regimes (1. Classification of Liability Regimes (a. Fault Liability; b. Non-Compliance Liability; c. Defect and Mal-Performance Liability; d. Strict Liability); 2. Challenges Posed by AI (a. Liability for the Materialisation of Safety Risks ((i) 'Complexity', 'Openness', and 'Vulnerability' of Digital Ecosystems; (ii) 'Autonomy' and 'Opacity'; (iii) Strict and Vicarious Liability as Possible Responses); b. Liability for the Materialisation of Fundamental Rights Risks)); IV. The Emerging Landscape of AI Safety Legislation (1. The Proposed Machinery Regulation (a. General Aims and Objectives; b. Qualification As High-Risk Machinery; c. Essential Health and Safety Requirements); 2. The Proposed Artificial Intelligence Act (a. General Aims and Objectives; b. The Risk-Based Approach ((i) Prohibited AI Practices; (ii) High-Risk AI Systems; (iii) AI Systems Subject to Specific Transparency Obligations); c. Legal Requirements and Conformity Assessment for High-Risk AI Systems)); V. The Emerging Landscape of AI Liability Legislation (1. The European Parliament's Proposal for a Regulation on AI Liability (a. Strict Operator Liability for High-Risk AI Systems; b. Enhanced Fault Liability for Other AI Systems; c. Liability for Physical and Certain Immaterial Harm); 2. Can the EP Proposal be Linked to the AIA Proposal? (a. Can an AI Liability Regulation Refer to the AIA List of 'High-Risk' Systems?; b. Can the AIA Keep Liability for Immaterial Harm within Reasonable Boundaries?)); VI. Possible Pillars of Future AI Liability Law (1. Product Liability for AI (a. Traditional Safety Risks; b. Product Liability for Products Falling Short of 'Fundamental Rights Safety'?); 2. Strict Operator Liability for 'High-Physical-Risk' Devices (a. Why AI Liability Law Needs to be More Selective than AI Safety Law; b. Differentiating 'High-Risk' and 'High-Physical-Risk-As-Such'; c. Avoiding Inconsistencies with Regard to Human-Driven Devices); 3. Vicarious Operator Liability (a. The 'Accountability Gap' that Exists in a Variety of Contexts; b. Statutory or Contractual Duty on the Part of the Principal; c. A Harmonised Regime of Vicarious Liability); 4. Non-Compliance and Fault Liability); VII. Conclusions

13. Forward to the Past: A Critical Evaluation of the European Approach to Artificial Intelligence in Private International Law
   I. Introduction; II. The Current European Framework (1. The Goals of PIL Harmonisation; 2. The Subject of Liability; 3. Non-Contractual Obligations: The Rome II Regulation (a. Scope; b. The General Rule (Article 4 Rome II); c. The Rule on Product Liability (Article 5 Rome II); d. Special Rules in EU Law (Article 27 Rome II)); 4. Contractual Obligations: The Rome I Regulation (a. Scope; b. Choice of Law (Article 3 Rome I); c. Objective Rules (Articles 4 to 8 Rome I); d. Special Rules in EU Law (Article 23 Rome I))); III. The Draft Regulation of the European Parliament (1. Territorial Scope; 2. The Law Applicable to High Risk Systems; 3. The Law Applicable to Other Systems; 4. Personal Scope); IV. Evaluation; V. Summary and Outlook

Part IV: Fairness and Nondiscrimination in AI Systems

14. Differences That Make a Difference: Computational Profiling and Fairness to Individuals
   I. Introduction; II. Discrimination (1. Suspect Grounds; 2. Human Rights; 3. Social Identity or Social Practice; 4. Disparate Burdens); III. Profiling (1. Statistical Discrimination (a. Spurious Data; b. Fallacious Reasoning); 2. Procedural Fairness; 3. Measuring Fairness); IV. Conclusion

15. Discriminatory AI and the Law: Legal Standards for Algorithmic Profiling
   I. Introduction; II. Legal Framework for Profiling and Decision-Making (1. Profiling; 2. Decision-Making (a. Anti-Discrimination Law; b. Data Protection Law); 3. Data Protection and Anti-Discrimination Law); III. Causes for Discrimination (1. Preferences and Statistical Correlations; 2. Technological and Methodological Factors (a. Sampling Bias; b. Labelling Bias; c. Feature Selection Bias; d. Error Rates)); IV. Justifying Direct and Indirect Forms of Discriminatory AI: Normative and Technological Standards (1. Proportionality Framework (a. Proportionality as a Standard for Equality and Anti-Discrimination; b. Three Steps: Suitability, Necessity, Appropriateness); 2. General Considerations Concerning Statistical Discrimination/Group Profiling (a. Different Harms: Decision Harm, Error Harm, Attribution Harm; b. Alternative Means: Profiling Granularity and Information Gathering); 3. Methodology of Automated Profiling: A Right to Reasonable Inferences (a. Explicit and Implicit Methodology Standards; b. Technical and Legal Elements of Profiling Methodology); 4. Direct and Indirect Discrimination (a. Justifying Differential Treatment; b. Justifying Detrimental Impact)); V. Conclusion

Part V: Responsible Data Governance

16. Artificial Intelligence and the Right to Data Protection
   I. Traditional Concept of the Right to Data Protection; II. The Intransparency Challenge of AI; III. The Alternative Model: A No-Right Thesis; IV. The Implication for the Legal Perspective on AI (1. Refocusing on Substantive Liberty and Equality Interests; 2. The Threshold of Everyday Digital Life Risks; 3. A Systemic Perspective); V. Conclusion

17. Artificial Intelligence as a Challenge for Data Protection Law: And Vice Versa
   I. Introduction; II. AI and Principles Relating to the Processing of Data (1. Transparency; 2. Automated Decisions/Right to Explanation; 3. Purpose Limitation/Change of Purpose; 4. Data Minimisation/Storage Limitation; 5. Accuracy/Integrity and Confidentiality; 6. Lawfulness/Fairness (a. Consent; b. Withdrawal of Consent; c. Balancing of Interests; d. Special Categories of Personal Data); 7. Intermediate Conclusion); III. Compliance Strategies (de lege lata) (1. Personal Reference (a. Anonymisation; b. Pseudonymisation; c. Synthetic Data); 2. Responsibility; 3. Privacy by Default/Privacy by Design; 4. Data Protection Impact Assessment; 5. Self-Regulation); IV. Legal Policy Perspectives (de lege ferenda) (1. Substantive-Law Considerations; 2. Conflicts between Data Protection Jurisdictions; 3. Private Power); V. Summary and Outlook

18. Data Governance and Trust: Lessons from South Korean Experiences Coping with COVID-19
   I. Introduction; II. Legal Frameworks Enabling Extensive Use of Technology-Based Contact Tracing (1. Consent Principle under Data Protection Laws; 2. Legal Basis for Centralized Contact Tracing; 3. Legal Basis for QR Code Tracking; 4. Legal Basis for the Disclosure of the Routes of Confirmed Cases; 5. Legal Basis for Quarantine Monitoring); III. Role of Technology in Korea's Response to COVID-19 (1. Use of Smart City Technology for Contact Tracing; 2. Use of QR Codes for Tracking Visitors to High-Risk Premises; 3. Public Disclosure of the Routes of Confirmed Cases; 4. Use of GPS Tracking Technology and Geographic Information System (GIS) for Quarantine Monitoring); IV. Flow of Data (1. Centralized Contact Tracing; 2. QR Code Tracking; 3. Public Disclosure of the Routes of Confirmed Cases; 4. Quarantine Monitoring); V. Data Flow and Data Governance (1. Centralized Contact Tracing (including QR Code Tracking); 2. Public Disclosure of the Routes of Confirmed Cases; 3. Quarantine Monitoring; 4. Data Governance); VI. Looking Ahead

Part VI: Responsible Corporate Governance of AI Systems

19. From Corporate Governance to Algorithm Governance: Artificial Intelligence as a Challenge for Corporations and Their Executives
   I. Introduction; II. Algorithms As Directors; III. Management Board (1. Legal Framework; 2. AI-Related Duties (a. General Responsibilities; b. Delegation of Responsibility; c. Data Governance; d. Management Liability; e. Composition of the Management Board); 3. Business Judgement Rule (a. Adequate Information; b. Benefit of the Company; c. Freedom from Conflicts of Interest)); IV. Supervisory Board (1. Use of AI by the Supervisory Board Itself; 2. Monitoring of the Use of AI by the Management Board); V. Conclusion

20. Autonomization and Antitrust: On the Construal of the Cartel Prohibition in the Light of Algorithmic Collusion
   I. Introduction; II. Algorithmic Collusion As a Phenomenon on Markets; III. On the Scope of the Cartel Prohibition and Its Traditional Construal; IV. Approaches for Closing Legal Gaps (1. On the Idea of Personifying Algorithms; 2. On the Idea of a Prohibition of Tacit Collusion; 3. Harmful Informational Signals As Point of Reference for Cartel Conduct (a. Conceptualization; b. Possible Objections)); V. Conclusion

21. Artificial Intelligence in Financial Services: New Risks and the Need for More Regulation?
   I. Introduction; II. AI As a New General Purpose Technology; III. Robo-Finance: From Automation to the Wide-Spread Use of AI Applications; IV. A Short Overview of Regulation in the Financial Services Industry; V. New Risk Categories in Robo-Finance Stemming from AI?; VI. Responsibility Frameworks As a Solution for Managing AI Risks in Financial Services?; VII. Standardization, High-Risk AI Applications, and the New EU AI Regulation; VIII. Conclusion

Part VII: Responsible AI Healthcare and Neurotechnology Governance

22. Medical AI: Key Elements at the International Level
   I. Introduction; II. Application Areas of AI in Medicine Addressed by International Guidelines So Far (1. Anamnesis and Diagnostic Findings; 2. Diagnosis; 3. Information, Education, and Consent; 4. Treatment and Aftercare; 5. Documentation and Issuing of Certificates; 6. Data Protection; 7. Interim Conclusion); III. International Guidance for the Field of AI Application during Medical Treatment (1. International Organizations and Their Soft-Law Guidance; 2. The World Medical Association; 3. Effect of International Measures in National Law (a. Soft Law; b. Incorporation of WMA Measures into Professional Law)); IV. Conclusion: Necessity of Regulation by the World Medical Association

23. 'Hey Siri, How Am I Doing?': Legal Challenges for Artificial Intelligence Alter Egos in Healthcare
   I. Introduction; II. AI Alter Egos in Healthcare: Concepts and Functions (1. Individual Health Data Storage and Management; 2. Individual Medical Diagnostics; 3. Interface for Collective Analysis and Evaluation of Big Health Data); III. Key Elements of the Legal Framework and Legal Challenges (1. European Data Protection Law (a. Limitation of Data Processing: Data Protection-Friendly and Secure Design; b. Securing a Self-Determined Lifestyle and Protection from Processing-Specific Errors through Transparency); 2. European Medical Devices Regulation (a. Classifying AI Alter Ego Functions in Terms of the Medical Devices Regulation; b. Objectives and Requirements Stipulated in the MDR)); IV. Conclusion

24. 'Neurorights': A Human Rights-Based Approach for Governing Neurotechnologies
   I. Introduction; II. Mental Privacy (1. The Mental Realm: The Spectre of Dualism, Freedom of Thought and Related Issues; 2. Privacy of Data and Information: Ownership, Authorship, Interest, and Responsibility; 3. Mental Privacy: Protecting Data and Information about the Human Brain and Associated Mental Phenomena); III. Mental Integrity through the Lens of Vulnerability Ethics; IV. Neurorights: Legal Innovation or New Wine in Leaky Bottles? (1. The Current Debate on the Conceptual and Normative Foundations and the Legal Scope of Neurorights; 2. New Conceptual Aspects: Mental Privacy and Mental Integrity As Anthropological Goods; 3. Making Neurorights Actionable and Justiciable: A Human Rights-Based Approach); V. Summary and Conclusions

25. AI-Supported Brain-Computer Interfaces and the Emergence of 'Cyberbilities'
   I. Introduction; II. From Capabilities to Cyberbilities (1. Capabilities; 2. Agency and Human-Machine Interactions (a. The 'Standard View': Compensating Causality and Interactivity; b. Reframing Causality; c. Reframing Interactivity)); III. Hybrid Agency As the Foundation of Cyberbilities (1. Distributed Agency and Hybrid Agency; 2. Cyberbilities As Neurotechnological Capabilities); IV. Cyberbilities and the Responsible Development of Neurotechnology (1. Introducing a List of Cyberbilities; 2. Remarks on the Responsible Development of Neurotechnology); V. Discussion and Closing Remarks

Part VIII: Responsible AI for Security Applications and in Armed Conflict

26. Artificial Intelligence, Law, and National Security
   I. Introduction: Knowledge Is Power; II. Cyberspace and AI; III. Catastrophic Risk: Doomsday Machines; IV. Autonomous Weapons Systems; V. Existing Military Capabilities; VI. Reconnaissance; VII. Foreign Relations; VIII. Economics; IX. Conclusion

27. Morally Repugnant Weaponry? Ethical Responses to the Prospect of Autonomous Weapons
   I. Introduction; II. What Is an Autonomous Weapon?; III. Programmed to Kill: Three Ethical Responses (1. Responsibility Gaps; 2. Dignity; 3. Human and Artificial Agency); IV. Three Emerging Ethical Problems with AWS; V. Conclusion

28. On 'Responsible AI' in War: Exploring Preconditions for Respecting International Law in Armed Conflict
   I. Introduction; II. Framing (1. Definitional Parameters; 2. Diversity of Applications; 3. International Debates on 'Emerging Technologies in the Area of Lethal Autonomous Weapons Systems'; 4. Technical Opacity Coupled with Military Secrecy); III. Overview of International Law Applicable to Armed Conflict; IV. Preconditions Arguably Necessary to Respect International Law (1. Preconditions Concerning Respect for International Law by Human Agents Acting on Behalf of a State or an International Organization; 2. Preconditions Concerning Non-Involved Humans and Entities Related to Respect for International Law by a State or an International Organization; 3. Preconditions Concerning Respect for the ICC Statute); V. Conclusion



