Intelligent Internet of Things Networks

Download the book Intelligent Internet of Things Networks

60,000 Toman (in stock)

Intelligent Internet of Things Networks, original-language edition

The download link for the book becomes available after payment.
The book description is given in the Details section below, where you can review the particulars.


If the author is Iranian, the book cannot be downloaded and the payment will be refunded.

This book is the original edition and is not in Persian.




About the book Intelligent Internet of Things Networks

Book title: Intelligent Internet of Things Networks
Title translated into Persian: شبکه های اینترنت اشیاء هوشمند
Series: Wireless Networks
Authors:
Publisher: Springer
Publication year: 2023
Number of pages: 413
ISBN: 3031269861, 9783031269868
Language: English
Format: PDF
File size: 14 MB



After the payment process is completed, the download link for the book will be provided. If you register and sign in to your account, you will be able to view the list of books you have purchased.


Table of contents:


Preface
Contents
Acronyms
1 Introduction
1.1 Background
1.2 Overview of Internet of Things Network and Machine Learning
1.2.1 Internet of Things Architecture
1.2.2 Internet of Things Network Technologies
1.2.3 Emerging Network Technologies for IoT
1.2.4 Machine Learning Technologies
1.3 Related Research and Development
1.3.1 IEEE CIIoT
1.3.2 Aeris
1.3.3 Google GCP
1.3.4 3GPP RedCap
1.3.5 SwRI Intelligent Networks and IoT
1.3.6 IBM Smart City
1.4 Organization of This Book
1.5 Summary
References
2 Intelligent Internet of Things Networking Architecture
2.1 In-network Intelligence Control: A Self-driving Networking Architecture
2.1.1 In-network Functionality
2.1.1.1 Programmable Network Hardware
2.1.1.2 In-network Functionality
2.1.2 In-network Intelligent Control
2.1.2.1 Hybrid In-network Intelligence Architecture
2.1.2.2 In-network Load Balance
2.1.3 In-network Congestion Control
2.1.4 In-network DDoS Detection
2.1.4.1 What Functions Should Be Implemented Inside the Network
2.2 Summary
References
3 Intelligent IoT Network Awareness
3.1 Capsule Network Assisted IoT Traffic Classification Mechanism
3.1.1 Methodology
3.1.1.1 Brief Introduction of Dataset
3.1.1.2 Data Classification
3.1.2 The Capsule Network Architecture
3.1.2.1 Capsule Network
3.1.3 Experiments and Result Analysis
3.1.3.1 Experimental Environment
3.1.3.2 Evaluation Metrics
3.1.3.3 Experimental Result Analysis
3.1.3.4 Preliminary Experiment
3.1.3.5 Main Experiment
3.2 Hybrid Intrusion Detection System Relying on Machine Learning
3.2.1 Traditional Machine Learning Aided Detection Methods
3.2.2 Deep Learning Aided Detection Methods
3.2.3 The Hybrid IDS Architecture
3.3 Identification of Encrypted Traffic Through Attention Mechanism Based LSTM
3.3.1 Methodology
3.3.1.1 Dataset
3.3.1.2 Data Preprocessing
3.3.2 Attention Based LSTM and HAN Architecture
3.3.2.1 Attention Based LSTM
3.3.2.2 HAN Architecture
3.3.2.3 Output Layer and Objective Function
3.3.3 Experiments and Result Analysis
3.3.3.1 Experimental Environment
3.3.3.2 Evaluation Metrics
3.4 Distributed Variational Bayes-Based In-network Security
3.4.1 System Model and Problem Formulation
3.4.2 Hybrid Variational Bayes Algorithm
3.4.2.1 In-network Security Architecture
3.4.2.2 Hybrid Variational Bayes Algorithm
3.4.2.3 Knowledge Sharing
3.4.2.4 Complexity Analysis
3.4.3 Experiment Results and Analysis
3.4.3.1 Experimental Setup and Data Preprocessing
3.4.3.2 Baseline Algorithm
3.4.3.3 Performance Analysis
3.4.3.4 Characteristic Analysis
3.4.4 Simulation on Mininet
3.4.4.1 Network Topology
3.4.4.2 Experiment Results
3.5 Summary
References
4 Intelligent Traffic Control
4.1 QMIX Aided Routing in Social-Based Delay-Tolerant Networks
4.1.1 System Model
4.1.1.1 Network Model
4.1.1.2 Social Attribute Definitions and Problem Formulation
4.1.1.3 Evaluation Metrics
4.1.2 Community Detection
4.1.3 Dec-POMDP Model and Cooperative MARL Protocol
4.1.3.1 Cooperative Markov Game
4.1.3.2 Dec-POMDP Model
4.1.3.3 Cooperative Multi-agent Reinforcement Learning
4.1.3.4 Complexity Analysis
4.1.4 Experiments and Simulation Results
4.1.4.1 Experiment Configuration
4.1.4.2 Training Performance
4.1.4.3 Performance of the Real-Life Datasets
4.2 A Learning-Based Approach to Intra-domain QoS Routing
4.2.1 The Basic Routing Based on Node Vectors
4.2.1.1 The Principle of Algorithm
4.2.1.2 Training of the Vectors
4.2.1.3 The Auxiliary Algorithm
4.2.1.4 Complexity Analysis
4.2.2 The Constrained Routing Based on Node Vectors
4.2.3 Extension to the Multicast Scenario
4.2.4 Experiments and Simulation Results
4.2.4.1 Simulations on the Basic Routing
4.2.4.2 The Demo of Shortest Path Routing
4.2.4.3 Simulations on the Constrained Routing
4.2.4.4 Experiments on the Throughput of Routing
4.3 Artificial Intelligence Empowered QoS-oriented Network Association
4.3.1 System Model
4.3.1.1 Data Transmission Framework in SDN
4.3.1.2 Network Model
4.3.1.3 Traffic Model
4.3.1.4 M/M/C/N Queueing Model
4.3.2 QoS Routing with Resource Allocation
4.3.2.1 Problem Formulation
4.3.2.2 Optimality Conditions
4.3.2.3 QoS Routing Strategy with Resource Allocation
4.3.2.4 Computational Complexity Analysis
4.3.3 Deep Reinforcement Learning for QoS-oriented Routing
4.3.3.1 Deep Reinforcement Learning Framework
4.3.3.2 DDPG Model for QoS-Oriented Routing
4.3.3.3 DDPG Aided QoS-Oriented Routing
4.3.3.4 Computational Complexity Analysis
4.3.4 Experiments and Simulation Results
4.3.4.1 Database
4.3.4.2 QoS Routing with Resource Allocation
4.3.4.3 Adaptive Routing Strategy
4.3.4.4 Expansion Experiment
4.4 Machine Learning Aided Load Balance Routing Scheme
4.4.1 System Model
4.4.1.1 Packets Detection in the Dataplane
4.4.1.2 Routing Scheme
4.4.1.3 Metrics
4.4.1.4 PCA-Based Feature Extraction
4.4.2 Network Modeling
4.4.2.1 Input and Output Design
4.4.2.2 Initialization Phase
4.4.2.3 Training Phase
4.4.2.4 Running Phase
4.4.3 Routing Based on Queue Utilization
4.4.3.1 The Representation and Update of Queue Utilization
4.4.3.2 The Preprocessing of Metric Data
4.4.3.3 The Selection Mechanism of Next Hop
4.4.3.4 Time Complexity of the Algorithm
4.4.4 Experiments and Simulation Results
4.4.4.1 Datasets
4.4.4.2 Experiments Settings
4.4.4.3 Training Results
4.4.4.4 Performance Evaluation
4.5 Summary
References
5 Intelligent Resource Scheduling
5.1 Transfer Reinforcement Learning aided Network Slicing Optimization
5.1.1 System Model
5.1.1.1 Network Slicing Architecture
5.1.1.2 System Model
5.1.1.3 Problem Formulation
5.1.2 Transfer Multi-agent Reinforcement Learning
5.1.2.1 Markov Decision Process
5.1.2.2 Deep Deterministic Policy Gradient
5.1.2.3 Transfer Reinforcement Learning
5.1.3 Experiments and Simulation Results
5.1.3.1 Simulation Settings
5.1.3.2 Convergence Analysis
5.1.3.3 Performance Analysis
5.1.3.4 Slice Performance Analysis
5.2 Reinforcement Learning-Based Continuous-Decision Virtual Network Embedding
5.2.1 System Model and Evaluation Metrics
5.2.1.1 System Model
5.2.1.2 Evaluation Metrics
5.2.2 Embedding Algorithm
5.2.2.1 Seq2seq Model
5.2.2.2 Information Extraction
5.2.2.3 Markov Decision Process
5.2.2.4 Policy Gradient
5.2.2.5 Training and Testing
5.2.2.6 Computational Complexity
5.2.3 Experiments and Simulation Results
5.2.3.1 Experiment Setup
5.2.3.2 Training Performance
5.2.3.3 Simulation Result
5.3 Multi-agent Reinforcement Learning Aided Service Function Chain
5.3.1 System Model and Problem Formulation
5.3.1.1 System Model
5.3.1.2 Multi-user Competition Game Model
5.3.1.3 Problem Formulation
5.3.2 Multi-Agent Reinforcement Learning
5.3.2.1 The Hybrid Control Framework
5.3.2.2 Markov Game Model
5.3.2.3 Multi-agent Reinforcement Learning Approach
5.3.3 Experiments and Simulation Results
5.3.3.1 Simulation Setup
5.3.3.2 Convergence Evaluation
5.3.3.3 Performance Evaluation
5.4 Summary
References
6 Mobile Edge Computing Enabled Intelligent IoT
6.1 Auction Design for Edge Computation Offloading
6.1.1 Architecture of SDN-Based Ultra Dense Networks
6.1.2 System Model
6.1.2.1 SBS Edge Clouds' Transmission Rate
6.1.2.2 MBS Edge Cloud's Cooperative and Competitive Modes
6.1.2.3 Second-Price Auction Design
6.1.2.4 Auction Outcomes
6.1.3 SBSs' Equilibrium Bidding Strategies
6.1.3.1 Definition of the Symmetric Bayesian Nash Equilibrium
6.1.3.2 Equilibrium for R ∈ [r_min, r_max)
6.1.3.3 Equilibrium for R ∈ [0, ((N-1+σ_SBS)/N)·r_min]
6.1.3.4 Equilibrium for R ∈ (((N-1+σ_SBS)/N)·r_min, r_min)
6.1.3.5 Equilibrium for R ∈ [r_max, +∞)
6.1.4 MBS Edge Cloud's Expected Utility Analysis
6.1.4.1 Definition of MBS Edge Cloud's Expected Utility
6.1.4.2 MBS Edge Cloud's Optimal Expected Utility
6.1.4.3 MBS Edge Cloud's Optimal Offloading Rate
6.1.5 Experiments and Simulation Results
6.1.5.1 Uniqueness of r̃_x(t)
6.1.5.2 Impact on Offloading Rate R*
6.1.5.3 Expected Utility of MBS Edge Cloud
6.1.5.4 Utility Analysis of SBS Edge Cloud
6.2 Edge Intelligence-Driven Offloading and Resource Allocation
6.2.1 System Model
6.2.1.1 Computation Offloading Mode
6.2.1.2 Local Computing Mode
6.2.1.3 Problem Formulation
6.2.2 System Optimization
6.2.2.1 Deep Reinforcement Learning Framework
6.2.2.2 Offloading Policy Generation
6.2.2.3 Network Parameters Update
6.2.3 Experiments and Simulation Results
6.2.3.1 Experimental Settings
6.2.3.2 Convergent Performance Analysis
6.2.3.3 3AUS Parameter Interval
6.2.3.4 System Performance
6.3 Multi-Agent Driven Resource Allocation for DEN
6.3.1 System Model
6.3.1.1 Single Edge Network for DENs
6.3.1.2 Multiple Edge Networks for DENs
6.3.2 Algorithm Design in Single Edge Network
6.3.2.1 Offloading Decision Generation
6.3.2.2 Optimal Local Execution Overhead
6.3.2.3 Optimal Edge Cloud Execution Overhead
6.3.2.4 Training Methods
6.3.3 Algorithm Design in Multiple Edge Networks
6.3.3.1 DRL Background Knowledge
6.3.3.2 MADDPG Algorithm Framework
6.3.4 Experiments and Simulation Results
6.3.4.1 Single Edge Network Scene
6.3.4.2 Multiple Edge Networks Scene
6.4 Summary
References
7 Blockchain-Enabled Intelligent IoT
7.1 Cloud Mining Pool-Aided Blockchain-Enabled IoT
7.1.1 System Model
7.1.1.1 Cloud Mining Pool-Aided BCoT
7.1.1.2 System Model
7.1.2 Evolutionary Game Formulation
7.1.2.1 Replicator Dynamics of Pool Selection
7.1.2.2 Evolutionary Equilibrium and Stability Analysis
7.1.2.3 Two Mining Pool Study
7.1.2.4 Delay in Replicator Dynamics
7.1.2.5 Evolutionary Game-Based Pool Selection Algorithm
7.1.3 Distributed Reinforcement Learning Approach
7.1.3.1 The Multi-agent System
7.1.3.2 Policy Generation
7.1.4 Performance Evaluation
7.1.4.1 Evolution Analysis
7.1.4.2 Evolution Analysis with Different Pooling Strategies
7.1.4.3 Evolutionary Game-Based Pool Selection Algorithm
7.1.4.4 Impact of Delay in Strategy Adaptation
7.1.5 WoLF-PHC Based Pool Selection
7.1.5.1 Convergence Performance of WoLF-PHC
7.1.5.2 The Reward vs. Pooling Strategies
7.1.5.3 The Reward vs. The Number of Miners
7.2 Resource Trading in Blockchain-Based IIoT
7.2.1 Industrial IoT DAO Platform
7.2.1.1 DAO Platform Assisted by Cloud/Fog Computing
7.2.1.2 Problem Formulation
7.2.1.3 Game Analysis
7.2.2 Multi-agent Reinforcement Learning
7.2.2.1 The Multi-agent System
7.2.2.2 Policy Generation
7.2.3 Experiments and Simulation Results
7.2.3.1 Convergence Performance of WoLF-PHC
7.2.3.2 The Number of Miners vs. Service Demand
7.2.3.3 The Number of Miners vs. Price
7.3 Summary
References
8 Conclusions and Future Challenges
8.1 Conclusions
8.2 Future Challenges
8.2.1 IoT Standards and Unified Architecture
8.2.2 Adopting Emerging Naming and Addressing in IoT
8.2.3 Privacy and Security Issues in IoT
8.2.4 Quality of Service in IoT
References
Index



