Neural networks

a comprehensive foundation

2nd ed.
This book represents the most comprehensive treatment available of neural networks from an engineering perspective. Thorough, well-organized, and completely up to date, it examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.

Written in a concise and fluid manner by a foremost engineering textbook author to make the material more accessible, this book is ideal for professional engineers and graduate students entering this exciting field. Computer experiments, problems, worked examples, a bibliography, photographs, and illustrations reinforce key concepts.

Publish Date
1999
Publisher
Prentice Hall
Language
English
Pages
842


Edition Availability

Neural networks: a comprehensive foundation
1999, Prentice Hall
in English - 2nd ed.

Neural networks: a comprehensive foundation
1994, Macmillan, Maxwell Macmillan Canada, Maxwell Macmillan International
in English


Book Details


Table of Contents

Preface
Page xii
Acknowledgments
Page xv
Abbreviations and Symbols
Page xvii
1. Introduction
Page 1
1.1. What Is a Neural Network?
Page 1
1.2. Human Brain
Page 6
1.3. Models of a Neuron
Page 10
1.4. Neural Networks Viewed as Directed Graphs
Page 15
1.5. Feedback
Page 18
1.6. Network Architectures
Page 21
1.7. Knowledge Representation
Page 23
1.8. Artificial Intelligence and Neural Networks
Page 34
1.9. Historical Notes
Page 38
Notes and References
Page 45
Problems
Page 45
2. Learning Processes
Page 50
2.1. Introduction
Page 50
2.2. Error-Correction Learning
Page 51
2.3. Memory-Based Learning
Page 53
2.4. Hebbian Learning
Page 55
2.5. Competitive Learning
Page 58
2.6. Boltzmann Learning
Page 60
2.7. Credit Assignment Problem
Page 62
2.8. Learning with a Teacher
Page 63
2.9. Learning without a Teacher
Page 64
2.10. Learning Tasks
Page 66
2.11. Memory
Page 75
2.12. Adaptation
Page 83
2.13. Statistical Nature of the Learning Process
Page 84
2.14. Statistical Learning Theory
Page 89
2.15. Probably Approximately Correct Model of Learning
Page 102
2.16. Summary and Discussion
Page 105
Notes and References
Page 106
Problems
Page 111
3. Single Layer Perceptrons
Page 117
3.1. Introduction
Page 117
3.2. Adaptive Filtering Problem
Page 118
3.3. Unconstrained Optimization Techniques
Page 121
3.4. Linear Least-Squares Filters
Page 126
3.5. Least-Mean-Square Algorithm
Page 128
3.6. Learning Curves
Page 133
3.7. Learning Rate Annealing Techniques
Page 134
3.8. Perceptron
Page 135
3.9. Perceptron Convergence Theorem
Page 137
3.10. Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
Page 143
3.11. Summary and Discussion
Page 148
Notes and References
Page 150
Problems
Page 151
4. Multilayer Perceptrons
Page 156
4.1. Introduction
Page 156
4.2. Some Preliminaries
Page 159
4.3. Back-Propagation Algorithm
Page 161
4.4. Summary of the Back-Propagation Algorithm
Page 173
4.5. XOR Problem
Page 175
4.6. Heuristics for Making the Back-Propagation Algorithm Perform Better
Page 178
4.7. Output Representation and Decision Rule
Page 184
4.8. Computer Experiment
Page 187
4.9. Feature Detection
Page 199
4.10. Back-Propagation and Differentiation
Page 202
4.11. Hessian Matrix
Page 204
4.12. Generalization
Page 205
4.13. Approximations of Functions
Page 208
4.14. Cross-Validation
Page 213
4.15. Network Pruning Techniques
Page 218
4.16. Virtues and Limitations of Back-Propagation Learning
Page 226
4.17. Accelerated Convergence of Back-Propagation Learning
Page 233
4.18. Supervised Learning Viewed as an Optimization Problem
Page 234
4.19. Convolutional Networks
Page 245
4.20. Summary and Discussion
Page 247
Notes and References
Page 248
Problems
Page 252
5. Radial-Basis Function Networks
Page 256
5.1. Introduction
Page 256
5.2. Cover's Theorem on the Separability of Patterns
Page 257
5.3. Interpolation Problem
Page 262
5.4. Supervised Learning as an Ill-Posed Hypersurface Reconstruction Problem
Page 265
5.5. Regularization Theory
Page 267
5.6. Regularization Networks
Page 277
5.7. Generalized Radial-Basis Function Networks
Page 278
5.8. XOR Problem (Revisited)
Page 282
5.9. Estimation of the Regularization Parameter
Page 284
5.10. Approximation Properties of RBF Networks
Page 290
5.11. Comparison of RBF Networks and Multilayer Perceptrons
Page 293
5.12. Kernel Regression and Its Relation to RBF Networks
Page 294
5.13. Learning Strategies
Page 298
5.14. Computer Experiment
Page 305
5.15. Summary and Discussion
Page 308
Notes and References
Page 308
Problems
Page 312
6. Support Vector Machines
Page 318
6.1. Introduction
Page 318
6.2. Optimal Hyperplane for Linearly Separable Patterns
Page 319
6.3. Optimal Hyperplane for Nonseparable Patterns
Page 326
6.4. How to Build a Support Vector Machine for Pattern Recognition
Page 329
6.5. Example: XOR Problem (Revisited)
Page 335
6.6. Computer Experiment
Page 337
6.7. ε-Insensitive Loss Function
Page 339
6.8. Support Vector Machines for Nonlinear Regression
Page 340
6.9. Summary and Discussion
Page 343
Notes and References
Page 347
Problems
Page 348
7. Committee Machines
Page 351
7.1. Introduction
Page 351
7.2. Ensemble Averaging
Page 353
7.3. Computer Experiment I
Page 355
7.4. Boosting
Page 357
7.5. Computer Experiment II
Page 364
7.6. Associative Gaussian Mixture Model
Page 366
7.7. Hierarchical Mixture of Experts Model
Page 372
7.8. Model Selection Using a Standard Decision Tree
Page 374
7.9. A Priori and a Posteriori Probabilities
Page 377
7.10. Maximum Likelihood Estimation
Page 378
7.11. Learning Strategies for the HME Model
Page 380
7.12. EM Algorithm
Page 382
7.13. Application of the EM Algorithm to the HME Model
Page 383
7.14. Summary and Discussion
Page 386
Notes and References
Page 387
Problems
Page 389
8. Principal Components Analysis
Page 392
8.1. Introduction
Page 392
8.2. Some Intuitive Principles of Self-Organization
Page 393
8.3. Principal Components Analysis
Page 396
8.4. Hebbian-Based Maximum Eigenfilter
Page 404
8.5. Hebbian-Based Principal Components Analysis
Page 413
8.6. Computer Experiment: Image Coding
Page 419
8.7. Adaptive Principal Components Analysis Using Lateral Inhibition
Page 422
8.8. Two Classes of PCA Algorithms
Page 430
8.9. Batch and Adaptive Methods of Computation
Page 430
8.10. Kernel-Based Principal Components Analysis
Page 432
8.11. Summary and Discussion
Page 437
Notes and References
Page 439
Problems
Page 440
9. Self-Organizing Maps
Page 443
9.1. Introduction
Page 443
9.2. Two Basic Feature-Mapping Models
Page 444
9.3. Self-Organizing Map
Page 446
9.4. Summary of the SOM Algorithm
Page 453
9.5. Properties of the Feature Map
Page 454
9.6. Computer Simulations
Page 461
9.7. Learning Vector Quantization
Page 466
9.8. Computer Experiment: Adaptive Pattern Classification
Page 468
9.9. Hierarchical Vector Quantization
Page 470
9.10. Contextual Maps
Page 474
9.11. Summary and Discussion
Page 476
Notes and References
Page 477
Problems
Page 479
10. Information-Theoretic Models
Page 484
10.1. Introduction
Page 484
10.2. Entropy
Page 485
10.3. Maximum Entropy Principle
Page 490
10.4. Mutual Information
Page 492
10.5. Kullback-Leibler Divergence
Page 495
10.6. Mutual Information as an Objective Function to Be Optimized
Page 498
10.7. Maximum Mutual Information Principle
Page 499
10.8. Infomax and Redundancy Reduction
Page 503
10.9. Spatially Coherent Features
Page 506
10.10. Spatially Incoherent Features
Page 508
10.11. Independent Components Analysis
Page 510
10.12. Computer Experiment
Page 523
10.13. Maximum Likelihood Estimation
Page 525
10.14. Maximum Entropy Method
Page 529
10.15. Summary and Discussion
Page 533
Notes and References
Page 535
Problems
Page 541
11. Stochastic Machines and Their Approximates Rooted in Statistical Mechanics
Page 545
11.1. Introduction
Page 545
11.2. Statistical Mechanics
Page 546
11.3. Markov Chains
Page 548
11.4. Metropolis Algorithm
Page 556
11.5. Simulated Annealing
Page 558
11.6. Gibbs Sampling
Page 561
11.7. Boltzmann Machine
Page 562
11.8. Sigmoid Belief Networks
Page 569
11.9. Helmholtz Machine
Page 574
11.10. Mean-Field Theory
Page 576
11.11. Deterministic Boltzmann Machine
Page 578
11.12. Deterministic Sigmoid Belief Networks
Page 579
11.13. Deterministic Annealing
Page 586
11.14. Summary and Discussion
Page 592
Notes and References
Page 594
Problems
Page 597
12. Neurodynamic Programming
Page 603
12.1. Introduction
Page 603
12.2. Markovian Decision Processes
Page 604
12.3. Bellman's Optimality Criterion
Page 607
12.4. Policy Iteration
Page 610
12.5. Value Iteration
Page 612
12.6. Neurodynamic Programming
Page 617
12.7. Approximate Policy Iteration
Page 618
12.8. Q-Learning
Page 622
12.9. Computer Experiment
Page 627
12.10. Summary and Discussion
Page 629
Notes and References
Page 631
Problems
Page 632
13. Temporal Processing Using Feedforward Networks
Page 635
13.1. Introduction
Page 635
13.2. Short-Term Memory Structures
Page 636
13.3. Network Architectures for Temporal Processing
Page 640
13.4. Focused Time Lagged Feedforward Networks
Page 643
13.5. Computer Experiment
Page 645
13.6. Universal Myopic Mapping Theorem
Page 646
13.7. Spatio-Temporal Models of a Neuron
Page 648
13.8. Distributed Time Lagged Feedforward Networks
Page 651
13.9. Temporal Back-Propagation Algorithm
Page 652
13.10. Summary and Discussion
Page 659
Notes and References
Page 660
Problems
Page 660
14. Neurodynamics
Page 664
14.1. Introduction
Page 664
14.2. Dynamical Systems
Page 666
14.3. Stability of Equilibrium States
Page 669
14.4. Attractors
Page 674
14.5. Neurodynamical Models
Page 676
14.6. Manipulation of Attractors as a Recurrent Network Paradigm
Page 680
14.7. Hopfield Models
Page 680
14.8. Computer Experiment I
Page 696
14.9. Cohen-Grossberg Theorem
Page 701
14.10. Brain-State-in-a-Box Model
Page 703
14.11. Computer Experiment II
Page 709
14.12. Strange Attractors and Chaos
Page 709
14.13. Dynamic Reconstruction of a Chaotic Process
Page 714
14.14. Computer Experiment III
Page 718
14.15. Summary and Discussion
Page 722
Notes and References
Page 725
Problems
Page 727
15. Dynamically Driven Recurrent Networks
Page 732
15.1. Introduction
Page 732
15.2. Recurrent Network Architectures
Page 733
15.3. State-Space Model
Page 739
15.4. Nonlinear Autoregressive with Exogenous Inputs Model
Page 746
15.5. Computational Power of Recurrent Networks
Page 747
15.6. Learning Algorithms
Page 750
15.7. Back-Propagation Through Time
Page 751
15.8. Real-Time Recurrent Learning
Page 756
15.9. Kalman Filters
Page 762
15.10. Decoupled Extended Kalman Filters
Page 765
15.11. Computer Experiment
Page 770
15.12. Vanishing Gradients in Recurrent Networks
Page 773
15.13. System Identification
Page 776
15.14. Model-Reference Adaptive Control
Page 780
15.15. Summary and Discussion
Page 782
Notes and References
Page 783
Problems
Page 785
Epilogue
Page 790
Bibliography
Page 796
Index
Page 837

Edition Notes

Includes bibliographical references (p. 796-836) and index.

Published in
Upper Saddle River, N.J.

Classifications

Dewey Decimal Class
006.3/2
Library of Congress
QA76.87 .H39 1999

The Physical Object

Pagination
xxi, 842 p.
Number of pages
842

Edition Identifiers

Open Library
OL347870M
ISBN 10
0132733501
LCCN
98007011
OCLC/WorldCat
38908586
LibraryThing
294250
Goodreads
391008

Work Identifiers

Work ID
OL1824445W
