Functional Networks
Functional networks were recently introduced in the book:
Introduction to Functional Networks With Applications. A Neural Based Paradigm. E. Castillo, A. Cobo, J. M. Gutiérrez, and E. Pruneda.
as well as in several papers devoted to specific applications.
Some Mathematica programs implementing the various algorithms and methodologies presented in this book are available for both version 2.2 (FN22.ma) and version 3.0 (FN30.nb). Download the file and save it in text format; it is then ready to be opened with Mathematica.
For a full understanding of these programs some knowledge of Mathematica is needed.
Some Java Applets implementing the various algorithms and methodologies presented in this book are also available.
A brief introduction to functional networks
Artificial neural networks have been recognized as a powerful tool for learning and reproducing systems in various fields of application. Neural networks are inspired by the behavior of the brain and consist of one or several layers of neurons, or computing units, connected by links. Each artificial neuron receives input values from the input layer or from the neurons in the previous layer. It then computes a scalar output by applying a given scalar function (the activation function), assumed to be the same for all neurons, to a linear combination of the received inputs. One of the main properties of neural networks is their ability to learn from data. This learning process is achieved by changing the network architecture (links) and the connection weights according to the given information. To this end, several well-known learning methods, such as the backpropagation algorithm, have been proposed in the literature.
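The neuron computation described above can be sketched in a few lines of Java. This is a minimal illustration of the classical model, not code from the book; the class and method names are my own, and the sigmoid is chosen only as a typical example of a fixed activation function.

```java
// Sketch of a classical artificial neuron: a fixed activation function
// (here the sigmoid) applied to a weighted sum of the inputs plus a bias.
class Neuron {
    // Sigmoid activation, the same fixed function for every neuron.
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Output of one neuron: activation of the linear combination of inputs.
    static double output(double[] inputs, double[] weights, double bias) {
        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += weights[i] * inputs[i];
        }
        return sigmoid(sum);
    }

    public static void main(String[] args) {
        // Weighted sum: 0.1 + 0.5*1.0 + (-0.3)*2.0 = 0.0, so sigmoid gives 0.5.
        double y = output(new double[]{1.0, 2.0}, new double[]{0.5, -0.3}, 0.1);
        System.out.println(y); // prints 0.5
    }
}
```

Learning, in this classical setting, means adjusting only the weights and the bias; the activation function itself never changes, which is precisely the restriction functional networks remove.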
However, despite its wide diffusion and extensive application in several domains, the neural network paradigm is quite restrictive: it does not give a satisfactory solution to many practical problems, and it can be improved in several directions. In this book we deal with functional networks, a recently introduced extension of neural networks in which the activation functions are unknown functions from a given family, to be estimated during the learning process. Functional networks extend neural networks by allowing the neural functions to be true multiargument and multivariate functions, and to be different and learnable rather than fixed. In addition, functional networks allow neuron outputs to be connected, forcing them to coincide. This leads to functional equations or systems of functional equations, which impose compatibility conditions on the neural functions.
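The key idea above, that the neuron function itself is estimated from data, can be sketched as follows. This is a hedged illustration rather than the book's algorithm: the neuron function f is assumed to lie in the family of quadratic polynomials f(x) = c0 + c1*x + c2*x^2, and its coefficients are estimated by least squares, solving the 3x3 normal equations by Gaussian elimination. The class and method names are invented for this example.

```java
// Sketch of a learnable neuron function: instead of a fixed activation,
// the function is picked from a parametric family (here quadratics)
// and its coefficients are fitted to data by least squares.
class FunctionalNeuron {
    final double[] c = new double[3]; // learned coefficients c0, c1, c2

    // Fit the neuron function to samples (x[i], y[i]) via the normal equations.
    void fit(double[] x, double[] y) {
        double[][] A = new double[3][3];
        double[] b = new double[3];
        for (int i = 0; i < x.length; i++) {
            double[] phi = {1.0, x[i], x[i] * x[i]}; // basis {1, x, x^2}
            for (int r = 0; r < 3; r++) {
                b[r] += phi[r] * y[i];
                for (int s = 0; s < 3; s++) A[r][s] += phi[r] * phi[s];
            }
        }
        // Gaussian elimination with partial pivoting on the 3x3 system A c = b.
        for (int k = 0; k < 3; k++) {
            int p = k;
            for (int r = k + 1; r < 3; r++)
                if (Math.abs(A[r][k]) > Math.abs(A[p][k])) p = r;
            double[] tr = A[k]; A[k] = A[p]; A[p] = tr;
            double tb = b[k]; b[k] = b[p]; b[p] = tb;
            for (int r = k + 1; r < 3; r++) {
                double m = A[r][k] / A[k][k];
                for (int s = k; s < 3; s++) A[r][s] -= m * A[k][s];
                b[r] -= m * b[k];
            }
        }
        for (int r = 2; r >= 0; r--) { // back substitution
            double s = b[r];
            for (int j = r + 1; j < 3; j++) s -= A[r][j] * c[j];
            c[r] = s / A[r][r];
        }
    }

    // Evaluate the learned neuron function.
    double apply(double x) {
        return c[0] + c[1] * x + c[2] * x * x;
    }

    public static void main(String[] args) {
        // Noise-free samples of f(x) = 1 + 2x + 3x^2; the fit recovers it.
        double[] xs = {-2, -1, 0, 1, 2};
        double[] ys = new double[xs.length];
        for (int i = 0; i < xs.length; i++)
            ys[i] = 1 + 2 * xs[i] + 3 * xs[i] * xs[i];
        FunctionalNeuron f = new FunctionalNeuron();
        f.fit(xs, ys);
        System.out.println(f.apply(3.0)); // close to 1 + 6 + 27 = 34
    }
}
```

With exact quadratic data the least-squares fit recovers the coefficients, so the learned function extrapolates correctly; with noisy data it would return the best quadratic approximation in the family.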
Unlike neural networks, which are black boxes, functional networks can reproduce physical or engineering properties that lead, in a very natural way, to the corresponding network. Thus, the initial functional network can arise directly from the problem under consideration. Moreover, the constraints imposed by the functional equations allow a simplified functional network structure to be derived, which normally transforms the complex initial neural functions into much simpler ones. In this book we do not deal with neural networks, which are well covered in many other books. Instead, we present functional networks as an alternative, and show that functional network architectures can be efficiently applied to solve many interesting practical problems.
Last update: August 7, 1998