Structure Level Adaptation for Artificial Neural Networks

The Springer International Series in Engineering and Computer Science

Book 133
Springer Science & Business Media

Contents (excerpt)

  3.2 Function Level Adaptation
  3.3 Parameter Level Adaptation
  3.4 Structure Level Adaptation
    3.4.1 Neuron Generation
    3.4.2 Neuron Annihilation
  3.5 Implementation
  3.6 An Illustrative Example
  3.7 Summary

4 Competitive Signal Clustering Networks
  4.1 Introduction
  4.2 Basic Structure
  4.3 Function Level Adaptation
  4.4 Parameter Level Adaptation
  4.5 Structure Level Adaptation
    4.5.1 Neuron Generation Process
    4.5.2 Neuron Annihilation and Coalition Process
    4.5.3 Structural Relation Adjustment
  4.6 Implementation
  4.7 Simulation Results
  4.8 Summary

5 Application Example: An Adaptive Neural Network Source Coder
  5.1 Introduction
  5.2 Vector Quantization Problem
  5.3 VQ Using Neural Network Paradigms
    5.3.1 Basic Properties
    5.3.2 Fast Codebook Search Procedure
    5.3.3 Path Coding Method
    5.3.4 Performance Comparison
    5.3.5 Adaptive SPAN Coder/Decoder
  5.4 Summary

6 Conclusions
  6.1 Contributions
  6.2 Recommendations

A Mathematical Background
  A.1 Kolmogorov's Theorem
  A.2 Networks with One Hidden Layer are Sufficient

B Fluctuated Distortion Measure
  B.1 Measure Construction
  B.2 The Relation Between Fluctuation and Error

C SPAN Convergence Theory
  C.1 Asymptotic Value of Wi
  C.2 Energy Function

Additional Information

Publisher: Springer Science & Business Media
Published on: Dec 6, 2012
Pages: 212
ISBN: 9781461539544
Language: English
Genres:
  Computers / Information Technology
  Computers / Intelligence (AI) & Semantics
Content Protection: This content is DRM protected.

Reading information

Smartphones and Tablets

Install the Google Play Books app for Android and iPad/iPhone. It syncs automatically with your account and allows you to read online or offline wherever you are.

Laptops and Computers

You can read books purchased on Google Play using your computer's web browser.

eReaders and other devices

To read on e-ink devices like the Sony eReader or Barnes & Noble Nook, you'll need to download a file and transfer it to your device. Please follow the detailed Help center instructions to transfer the files to supported eReaders.
A manipulator, or 'robot', consists of a series of bodies (links) connected by joints to form a spatial mechanism. Usually the links are connected serially to form an open chain. The joints are either revolute (rotary) or prismatic (telescopic), various combinations of the two giving a wide variety of possible configurations. Motive power is provided by pneumatic, hydraulic or electrical actuation of the joints. The robot arm is distinguished from other active spatial mechanisms by its reprogrammability. Therefore, the controller is integral to any description of the arm. In contrast with many other controlled processes (e.g. batch reactors), it is possible to model the dynamics of a manipulator very accurately. Unfortunately, for practical arm designs, the resulting models are complex and a considerable amount of research effort has gone into improving their numerical efficiency with a view to real-time solution [32,41,51,61,77,87,91]. In recent years, improvements in electric motor technology coupled with new designs, such as direct-drive arms, have led to a rapid increase in the speed and load-carrying capabilities of manipulators. However, this has meant that the flexibility of the nominally rigid links has become increasingly significant. Present-generation manipulators are limited to a load-carrying capacity of typically 5-10% of their own weight by the requirement of rigidity. For example, the Cincinnati-Milacron T3R3 robot weighs more than 1800 kg but has a maximum payload capacity of 23 kg.
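To make the link-and-joint picture concrete, here is a minimal Python sketch (not taken from the book) of forward kinematics for a planar arm with two revolute joints: given the joint angles, it computes where the end effector sits. The function name and link lengths are illustrative assumptions.

```python
# Minimal sketch: forward kinematics of a planar two-link arm with
# revolute joints. Link lengths l1, l2 are hypothetical defaults.
import math

def forward_kinematics(q1: float, q2: float, l1: float = 1.0, l2: float = 0.7):
    """Return the (x, y) position of the end effector.

    q1 is the first joint angle (radians, from the x-axis); q2 is the
    second joint angle, measured relative to the first link.
    """
    # Position of the elbow (end of link 1).
    elbow_x = l1 * math.cos(q1)
    elbow_y = l1 * math.sin(q1)
    # The second joint angle is relative to link 1, so the absolute
    # orientation of link 2 is q1 + q2.
    x = elbow_x + l2 * math.cos(q1 + q2)
    y = elbow_y + l2 * math.sin(q1 + q2)
    return x, y

if __name__ == "__main__":
    # Fully extended along the x-axis: expect (l1 + l2, 0).
    print(forward_kinematics(0.0, 0.0))          # (1.7, 0.0)
    # Elbow bent 90 degrees.
    print(forward_kinematics(0.0, math.pi / 2))  # (1.0, 0.7)
```

Composing rotations and translations in the same way in three dimensions, and allowing prismatic as well as revolute joints, gives the wide variety of serial-chain configurations described above.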
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence. This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution, the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model, which can learn the whole conditional probability distribution. Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5.
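As an illustration of the maximum likelihood idea sketched above, the following Python snippet computes the negative log-likelihood of targets under a one-dimensional Gaussian mixture whose parameters are produced by a network, in the spirit of a mixture density model. This is a hedged sketch under stated assumptions, not the book's implementation; all names (mixture_nll, logits, mu, log_sigma) are illustrative.

```python
# Minimal sketch: maximum likelihood objective for a network whose outputs
# parameterise the conditional density p(t | x) as a Gaussian mixture.
import numpy as np

def mixture_nll(logits, mu, log_sigma, targets):
    """Mean negative log-likelihood of targets under a 1-D Gaussian mixture.

    logits, mu, log_sigma: arrays of shape (batch, n_components), the raw
    network outputs for mixing coefficients, means, and log std deviations.
    targets: array of shape (batch,), the observed values t.
    """
    # Log-softmax over components gives valid mixing coefficients
    # (pi_k >= 0, summing to 1); subtract the max for numerical stability.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_pi = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    sigma = np.exp(log_sigma)
    t = targets[:, None]
    # Log density of each Gaussian component evaluated at the target.
    log_comp = (-0.5 * np.log(2 * np.pi) - log_sigma
                - 0.5 * ((t - mu) / sigma) ** 2)
    # log p(t | x) = logsumexp_k(log pi_k + log N_k), stably computed.
    log_mix = log_pi + log_comp
    m = log_mix.max(axis=1, keepdims=True)
    log_p = m[:, 0] + np.log(np.exp(log_mix - m).sum(axis=1))
    return -log_p.mean()

# Example: a two-component mixture evaluated on a tiny random batch.
rng = np.random.default_rng(0)
batch, k = 4, 2
print(mixture_nll(rng.normal(size=(batch, k)),
                  rng.normal(size=(batch, k)),
                  rng.normal(size=(batch, k)) * 0.1,
                  rng.normal(size=batch)))
```

Minimising this quantity with respect to the network outputs fits the whole conditional density p(t | x) rather than only its mean, which is what allows skewed and multimodal conditional distributions to be captured.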