About the book Natural Computing Algorithms
Title: Natural Computing Algorithms
Title translated into Persian: الگوریتم های محاسبات طبیعی
Series: Natural Computing Series
Authors: Anthony Brabazon, Michael O’Neill, Seán McGarraghy
Publisher: Springer
Year of publication: 2015
Number of pages: 554
ISBN: 9783662436301, 2015951433
Language: English
Format: PDF
File size: 8 MB
Table of contents:
Preface
Acknowledgment
Contents
1 Introduction
1.1 Natural Computing Algorithms: An Overview
1.1.1 Biologically Inspired Algorithms
Populational Perspective
Dispersion and Diversity
Communication
Robustness
Adaptiveness
1.1.2 Families of Naturally Inspired Algorithms
Evolutionary Computing
Social Computing
Neurocomputing
Immunocomputing
Developmental and Grammatical Computing
1.1.3 Physically Inspired Algorithms
Quantum Inspired Algorithms
1.1.4 Plant Inspired Algorithms
1.1.5 Chemically Inspired Algorithms
1.1.6 A Unified Family of Algorithms
1.1.7 How Much Natural Inspiration?
1.2 Structure of the Book
Part I Evolutionary Computing
2 Introduction to Evolutionary Computing
2.1 Evolutionary Algorithms
Evolutionary Computation in Computer Science
3 Genetic Algorithm
3.1 Canonical Genetic Algorithm
3.1.1 A Simple GA Example
3.2 Design Choices in Implementing a GA
3.3 Choosing a Representation
3.3.1 Genotype to Phenotype Mapping
3.3.2 Genotype Encodings
3.3.3 Representation Choice and the Generation of Diversity
3.4 Initialising the Population
3.5 Measuring Fitness
Estimating Fitness
3.6 Generating Diversity
3.6.1 Selection Strategy
Fitness Proportionate Selection
Ordinal Selection
3.6.2 Mutation and Crossover
Binary Genotypes
Real-Valued Genotypes
3.6.3 Replacement Strategy
3.7 Choosing Parameter Values
3.8 Summary
4 Extending the Genetic Algorithm
4.1 Dynamic Environments
4.1.1 Strategies for Dynamic Environments
4.1.2 Diversity
Diversity Generation if Change Is Detected
Diversity Maintenance During Run
Memory
Measurement of Performance
Summary
4.2 Structured Population GAs
Distributed EA
Cellular EA
4.3 Constrained Optimisation
Penalty Functions
Repair Operators
Tailored Diversity-Generation Operators
4.4 Multiobjective Optimisation
Approaches to Multiobjective Optimisation
Multiobjective Optimisation with a GA
4.5 Memetic Algorithms
4.6 Linkage Learning
Messy GA
Competent GA
4.7 Estimation of Distribution Algorithms
4.7.1 Population-Based Incremental Learning
4.7.2 Univariate Marginal Distribution Algorithm
Continuous Univariate Marginal Distribution Algorithm
4.7.3 Compact Genetic Algorithm
Shortcomings of Univariate EDAs
4.7.4 Bayesian Optimisation Algorithm
The Algorithm
Learning a Bayesian Network
4.8 Summary
5 Evolution Strategies and Evolutionary Programming
5.1 The Canonical ES Algorithm
5.1.3 Mutation in ES
5.1.4 Adaptation of the Strategy Parameters
The 1/5 Success Rule
Self-adaptive Mutation Step Size
Single-Step-Size Mutation
Step-Size Mutation
5.1.5 Recombination
Covariance Matrix Adaptation Evolution Strategies
5.2 Evolutionary Programming
Numerical Optimisation with EP
5.3 Summary
6 Differential Evolution
6.1 Canonical Differential Evolution Algorithm
Mutation Operator
Trial Vector
Selection
Example of DE
Parameters in DE
6.2 Extending the Canonical DE Algorithm
6.2.1 Selection of the Base Vector
6.2.2 Number of Vector Differences
6.2.3 Alternative Crossover Rules
6.2.4 Other DE Variants
6.3 Discrete DE
Angle-Modulated DE
6.4 Summary
7 Genetic Programming
7.1 Genetic Programming
Representation in GP
Open-Ended Nature of Evolution in GP
7.1.1 GP Algorithm
7.1.2 Function and Terminal Sets
Generating Numerical Values
Incorporating More Complex Structures
7.1.3 Initialisation Strategy
Full Method
Grow Method
7.1.4 Diversity-Generation in GP
Crossover
Mutation
7.2 Bloat in GP
7.3 More Complex GP Architectures
7.3.1 Functions
Why Use ADFs?
7.3.2 ADF Mutation and Crossover
Mutation
Crossover
7.3.3 Memory
7.3.4 Looping
7.3.5 Recursion
7.4 GP Variants
7.4.1 Linear and Graph GP
7.4.2 Strongly Typed GP
7.4.3 Grammar-Based GP
7.5 Semantics and GP
7.6 Summary
Part II Social Computing
8 Particle Swarm Algorithms
8.1 Social Search
8.2 Particle Swarm Optimisation Algorithm
Synchronous vs. Asynchronous Updates
8.2.1 Velocity Update
Decomposing the Velocity Update Equation
8.2.2 Velocity Control
Velocity Clamping
Momentum Weight
Constriction Coefficient Version of PSO
8.2.3 Neighbourhood Structure
Information Flow in Neighbourhood Structures
8.3 Comparing PSO and Evolutionary Algorithms
8.4 Maintaining Diversity in PSO
Premature Convergence
Dynamic Environments
Multiple Solutions
8.4.1 Simple Approaches to Maintaining Diversity
8.4.2 Predator–Prey PSO
8.4.3 Charged Particle Swarm
8.4.4 Multiple Swarms
8.4.5 Speciation-Based PSO
8.5 Hybrid PSO Algorithms
Multiple Algorithm Hybrids
Blended Hybrids
8.6 Discrete PSO
8.6.1 BinPSO
8.6.2 Angle-Modulated PSO
8.7 Evolving a PSO Algorithm
8.8 Summary
9 Ant Algorithms
9.1 A Taxonomy of Ant Algorithms
9.2 Ant Foraging Behaviours
9.3 Ant Algorithms for Discrete Optimisation
9.3.1 Graph Structure
Pheromone Matrix as a History
9.3.2 Ant System
Pheromone Initialisation
Constructing Protosolutions
Updating Pheromone Trails
Which Solutions Participate in the Update Process?
9.3.3 MAX-MIN Ant System
9.3.4 Ant Colony System
Construction Rule in ACS
Pheromone Update in ACS
Local Update in ACS
9.3.5 Ant Multitour Systems
9.3.6 Dynamic Optimisation
9.4 Ant Algorithms for Continuous Optimisation
Applying the CACS Algorithm
CACS Algorithm and EDAs
9.5 Multiple Ant Colonies
Parallel Implementation
Migration Frequency
Migration Structure
Colony Birth and Death
9.6 Hybrid Ant Foraging Algorithms
9.7 Ant-Inspired Clustering Algorithms
9.7.1 Deneubourg Model
9.7.2 Lumer and Faieta Model
Classification Using Ant Clustering Algorithms
9.7.3 Critiquing Ant Clustering
9.8 Classification with Ant Algorithms
AntMiner Algorithm
9.9 Evolving an Ant Algorithm
9.10 Summary
10 Other Foraging Algorithms
10.1 Honeybee Dance Language
10.2 Honeybee Foraging
10.2.1 The Honeybee Recruitment Dance
10.3 Designing a Honeybee Foraging Optimisation Algorithm
10.3.1 Bee System Algorithm
10.3.2 Artificial Bee Colony Algorithm
10.3.3 Honeybee Foraging and Dynamic Environments
Critiquing the Algorithms
10.4 Bee Nest Site Selection
10.4.1 Bee Nest Site Selection Optimisation Algorithm
10.5 Honeybee Mating Optimisation Algorithm
10.6 Summary
11 Bacterial Foraging Algorithms
11.1 Bacterial Behaviours
11.1.1 Quorum Sensing
11.1.2 Sporulation
11.1.3 Mobility
11.2 Chemotaxis in E. coli Bacteria
11.3 Bacterial Foraging Optimisation Algorithm
11.3.1 Basic Chemotaxis Model
11.3.2 Chemotaxis Model with Social Communication
Initialisation of the Algorithm
Notation Used
Chemotaxis Loop
Reproduction Cycle
Elimination-Dispersal Events
Parameter Values for the BFOA
11.4 Dynamic Environments
11.5 Classification Using a Bacterial Foraging Metaphor
11.6 Summary
12 Other Social Algorithms
12.1 Glow Worm Algorithm
Sensor Range
Luminescence Update
Location Update
Local Decision Range Update
Comparison with Other Swarm Algorithms
12.2 Bat Algorithm
12.2.1 Bat Vocalisations
12.2.2 Algorithm
Generate New Solution
Local Search
Parameters
12.2.3 Discussion
12.3 Fish School Algorithm
12.3.1 Fish School Search
Individual Movement
Feeding
Collective-Instinctive Movement
Collective-Volitive Movement
12.3.2 Summary
12.4 Locusts
12.4.1 Locust Swarm Algorithm
12.5 Summary
Part III Neurocomputing
13 Neural Networks for Supervised Learning
13.1 Biological Inspiration for Neural Networks
13.2 Artificial Neural Networks
13.2.1 Neural Network Architectures
13.3 Structure of Supervised Neural Networks
13.3.1 Activation and Transfer Functions
13.3.2 Universal Approximators
13.4 The Multilayer Perceptron
13.4.1 MLP Transfer Function
13.4.2 MLP Activation Function
13.4.3 The MLP Projection Construction and Response Regions
13.4.4 Relationship of MLPs to Regression Models
13.4.5 Training an MLP
The Backpropagation Algorithm
13.4.6 Overtraining
13.4.7 Practical Issues in Modelling with and Training MLPs
Measure of Error
Parameters for the Backpropagation Algorithm
Selecting Network Structure
Multiple Hidden Layers
Data Quality and Predictive Ability
13.4.8 Stacking MLPs
13.4.9 Recurrent Networks
13.5 Radial Basis Function Networks
13.5.1 Kernel Functions
13.5.2 Radial Basis Functions
13.5.3 Intuition Behind Radial Basis Function Networks
13.5.4 Properties of Radial Basis Function Networks
13.5.5 Training Radial Basis Function Networks
13.5.6 Developing a Radial Basis Function Network
13.6 Support Vector Machines
13.6.1 SVM Method
13.6.2 Issues in Applications of SVM
13.7 Summary
14 Neural Networks for Unsupervised Learning
14.1 Self-organising Maps
14.2 SOM Algorithm
14.3 Implementing a SOM Algorithm
Initialisation of Weight Vectors
Topology of the Mapping Layer
Distance Measure
Neighbourhood Function
Learning Rate
Forming Clusters
14.4 Classification with SOMs
Learning Vector Quantisation
14.5 Self-organising Swarm
14.6 SOSwarm and SOM
SOM-PSA Hybrids
14.7 Adaptive Resonance Theory
14.7.1 Unsupervised Learning for ART
Structure of ART Neural Network
Working of ART Neural Network
Choice of Vigilance Parameter
ART Training
14.7.2 Supervised Learning for ARTs
14.7.3 Weaknesses of ART Approaches
14.8 Summary
15 Neuroevolution
15.1 Direct Encodings
15.1.1 Evolving Weight Vectors
15.1.2 Evolving the Selection of Inputs
15.1.3 Evolving the Connection Structure
Algorithm 15.1:
15.1.4 Other Hybrid MLP Algorithms
15.1.5 Problems with Direct Encodings
Noisy Fitness
Designing Efficient Crossover Mechanisms
Over-elaborate MLP Structures
15.2 NEAT
Algorithm 15.2:
15.2.1 Representation in NEAT
15.2.2 Diversity Generation in NEAT
Crossover in NEAT
Mutation in NEAT
15.2.3 Speciation
Fitness-Sharing
15.2.4 Incremental Evolution
15.3 Indirect Encodings
15.4 Other Hybrid Neural Algorithms
15.5 Summary
Part IV Immunocomputing
16 Artificial Immune Systems
16.1 The Natural Immune System
16.1.1 Components of the Natural Immune System
16.1.2 Innate Immune System
Plant Immune Systems
16.1.3 Adaptive Immune System
B Cells and T Cells
T Cell-Dependent Humoral Immune Response
T Cell Tolerogenesis
Immune System Memory
16.1.4 Danger Theory
16.1.5 Immune Network Theory
16.1.6 Optimal Immune Defence
16.2 Artificial Immune Algorithms
16.3 Negative Selection Algorithm
Canonical Real-Valued Algorithm
Efficiency of the Algorithm
General Issues in Negative Selection
16.4 Dendritic Cell Algorithm
Operationalising the Algorithm
Summary
16.5 Clonal Expansion and Selection Inspired Algorithms
16.5.1 CLONALG Algorithm
16.5.2 B Cell Algorithm
16.5.3 Real-Valued Clonal Selection Algorithm
Parallels with Evolutionary Algorithms
16.5.4 Artificial Immune Recognition System
AIRS Algorithm
Normalisation and Initialisation
Antigen Training
Competition for Limited Resources
Memory Cell Selection
Summary
16.6 Immune Programming
16.7 Summary
Part V Developmental and Grammatical Computing
17 An Introduction to Developmental and Grammatical Computing
17.1 Developmental Computing
17.2 Grammatical Computing
17.3 What Is a Grammar?
17.3.1 Types of Grammar
Regular Grammars
Context-Free Grammars
Context-Sensitive Grammars
Unrestricted Grammars
17.3.2 Formal Grammar Notation
17.4 Grammatical Inference
17.5 Lindenmayer Systems
17.6 Summary
18 Grammar-Based and Developmental Genetic Programming
18.1 Grammar-Guided Genetic Programming
18.1.1 Other Grammar-Based Approaches to GP
18.2 Developmental GP
18.2.1 Genetic L-System Programming
18.2.2 Binary GP
18.2.3 Cellular Encoding
18.2.4 Analog Circuits
18.2.5 Other Developmental Approaches to GP
18.3 Summary
19 Grammatical Evolution
19.1 A Primer on Gene Expression
The Translation Grammar
19.2 Extending the Biological Analogy to GE
19.3 Example GE Mapping
19.4 Search Engine
19.4.1 Genome Encoding
19.4.2 Mutation and Crossover Search Operators
19.4.3 Modularity
19.4.4 Search Algorithm
Grammatical Swarm
Grammatical Differential Evolution
19.5 Genotype–Phenotype Map
19.6 Grammars
19.7 Summary
20 Tree-Adjoining Grammars and Genetic Programming
20.1 Tree-Adjoining Grammars
20.2 TAG3P
20.3 Developmental TAG3P
20.4 TAGE
20.5 Summary
21 Genetic Regulatory Networks
21.1 Artificial Gene Regulatory Model for Genetic Programming
21.1.1 Model Output
21.2 Differential Gene Expression
21.3 Artificial GRN for Image Compression
21.4 Summary
Part VI Physical Computing
22 An Introduction to Physically Inspired Computing
22.1 A Brief Physics Primer
22.1.1 A Rough Taxonomy of Modern Physics
22.2 Classical Mechanics
22.2.1 Energy and Momentum
22.2.2 The Hamiltonian
22.3 Thermodynamics
22.3.1 Statistical Mechanics
22.3.2 Ergodicity
22.4 Quantum Mechanics
22.4.1 Observation in Quantum Mechanics
22.4.2 Entanglement and Decoherence
22.4.3 Noncommuting Operators
22.4.4 Tunnelling
22.4.5 Quantum Statistical Mechanics
22.5 Quantum Computing
22.5.1 Two-State Systems and Qubits
22.5.2 Digital Quantum Computers
22.5.3 Quantum Information
22.5.4 Adiabatic Quantum Computation
22.6 Annealing and Spin Glasses
22.6.1 Ising Spin Glasses
22.6.2 Quantum Spin Glasses
22.7 Summary
23 Physically Inspired Computing Algorithms
23.1 Simulated Annealing
23.1.1 Search and Neighbourhoods
23.1.2 Acceptance of ‘Bad’ Moves
23.1.3 Parameterisation of SA
23.1.4 Extensions of SA
23.1.5 Concluding Remarks
23.2 Simulated Quantum Annealing
23.2.1 Implementation of SQA
23.2.2 SQA Application to TSP-Type Problems
23.3 Constrained Molecular Dynamics Algorithm
23.4 Physical Field Inspired Algorithms
23.4.1 Central Force Optimisation
23.4.2 Gravitational Search Algorithm and Variants
The Binary Gravitational Search Algorithm (BGSA)
23.4.3 Differences Among Physical Field-Inspired Algorithms
23.5 Extremal Optimisation Algorithm
23.6 Summary
24 Quantum Inspired Evolutionary Algorithms
24.1 Qubit Representation
24.2 Quantum Inspired Evolutionary Algorithms (QIEAs)
24.3 Binary-Valued QIEA
24.3.1 Diversity Generation in Binary QIEA
Quantum Gates
Quantum Mutation
24.4 Real-Valued QIEA
24.4.1 Initialising the Quantum Population
24.4.2 Observing the Quantum Chromosomes
24.4.3 Crossover Mechanism
24.4.4 Updating the Quantum Chromosomes
24.5 QIEAs and EDAs
24.6 Other Quantum Hybrid Algorithms
Quantum Binary PSO
The Algorithm
24.7 Summary
Part VII Other Paradigms
25 Plant-Inspired Algorithms
25.1 Plant Behaviours
25.2 Foraging
25.2.1 Plant Movement and Foraging
Phototropism
Geotropism
Hydrotropism
Thigmotropism
25.2.2 Root Foraging
25.2.3 Predatory Plants
Dodder Plant
Venus Flytrap
25.3 Plant-Level Coordination
Plant Neurobiology
25.4 A Taxonomy of Plant-Inspired Algorithms
25.5 Plant Propagation Algorithms
25.5.1 Invasive Weed Optimisation Algorithm
Seed Production
Seed Dispersal
Competition for Resources
Performance of the Algorithm
25.5.2 Paddy Field Algorithm
Sowing
Selection
Seeding
Pollination
Dispersion
25.5.3 Strawberry Plant Algorithm
25.6 Plant Growth Simulation Algorithm
25.6.1 The Algorithm
25.6.2 Variants on the Plant Growth Simulation Algorithm
25.7 Root-Swarm Behaviour
25.7.1 Modelling Root Growth in Real Plants
25.7.2 Applying the Root-Swarm Metaphor for Optimisation
25.8 Summary
26 Chemically Inspired Algorithms
26.1 A Brief Chemistry Primer
26.2 Chemically Inspired Algorithms
26.2.1 Chemical Reaction Optimisation (CRO)
26.2.2 Artificial Chemical Reaction Optimisation Algorithm (ACROA)
26.3 The CRO Algorithm
26.3.1 Potential and Kinetic Energy and the Buffer
26.3.2 Types of Collision and Reaction
26.3.3 The High-Level CRO Algorithm
26.3.4 On-wall Ineffective Collision
26.3.5 Decomposition
26.3.6 Intermolecular Ineffective Collision
26.3.7 Synthesis
26.4 Applications of CRO
26.5 Discussion of CRO
26.5.1 Potential Future Avenues for Research
26.6 Summary
Part VIII The Future of Natural Computing Algorithms
27 Looking Ahead
27.1 Open Issues
27.1.1 Hybrid Algorithms
27.1.2 The Power and the Dangers of Metaphor
27.1.3 Benchmarks and Scalability
27.1.4 Usability and Parameter-Free Algorithms
27.1.5 Simulation and Knowledge Discovery
27.2 Concluding Remarks
References
Index