Notable network architectures include the following “families”:
Figure: Typical CNN architecture
Figure: The long short-term memory (LSTM) unit
Convolutional network – These networks learn features that are invariant to shifts in time or space, which makes them especially suitable for image processing.
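A minimal NumPy sketch of this shift invariance (all names and values are illustrative): a small edge-detecting filter is slid across an image, and it produces the same response to the edge in every row, no matter where the edge appears.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image (cross-correlation, no padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image with a vertical edge between columns 1 and 2.
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# A 1x2 horizontal-difference filter: responds wherever brightness jumps.
edge_kernel = np.array([[1.0, -1.0]])

response = conv2d(image, edge_kernel)
# Every row of `response` is identical: the same filter detects the edge
# regardless of its vertical position - the invariance the text describes.
```

Because the one filter is reused at every position, the network needs far fewer parameters than a fully connected layer over the same image.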
Recurrent neural network
Recurrent neural network – These networks are designed to process sequential information, that is, to take into account the order in which the phenomenon being analyzed appears. They include a memory element, so that the network can take history into account. This makes them especially suitable for tasks such as reading handwriting, where the order of the letters is essential, or turning audio into text. A notable variant is the bidirectional recurrent neural network. Because such a network receives an input segment of a given length, it can consider the sequence in its normal order, using the past to predict what the next letter is “expected” to be, while simultaneously scanning the data in the opposite direction (e.g., from the end of a word back to its start). In addition, these networks can take into account contexts of varying length: when deciphering handwriting, for example, they can weigh both the letters immediately adjacent to the one being deciphered and letters or word fragments farther away.
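The “memory element” can be sketched as a hidden state that is updated at every step; because each state depends on the previous one, the order of the inputs matters. A minimal NumPy sketch (weights are random and purely illustrative):

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    # The new hidden state mixes the current input with the accumulated history.
    return np.tanh(W_h @ h + W_x @ x + b)

rng = np.random.default_rng(0)
hidden, n_in = 4, 3
W_h = rng.normal(scale=0.5, size=(hidden, hidden))  # history-to-state weights
W_x = rng.normal(scale=0.5, size=(hidden, n_in))    # input-to-state weights
b = np.zeros(hidden)

sequence = [rng.normal(size=n_in) for _ in range(5)]

# Forward pass: h carries the past along the sequence.
h = np.zeros(hidden)
for x in sequence:
    h = rnn_step(h, x, W_h, W_x, b)

# The same tokens in reverse order yield a different final state:
# the network is sensitive to the "order of appearance".
h_rev = np.zeros(hidden)
for x in reversed(sequence):
    h_rev = rnn_step(h_rev, x, W_h, W_x, b)
```

A bidirectional RNN runs one such pass forward and another backward over the same input and combines the two states, which is how it can “look” at the data in both directions at once.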
Autoencoder
Autoencoder – These networks are built so that the size of the hidden layers first decreases and then grows back, usually to the original size; that is, the architecture includes a “bottleneck”. The part of the network from the input layer to the bottleneck acts as an “encoder” of the information, while the part from the bottleneck to the output layer acts as a “decoder”. Such networks have many uses. For example, a network can be trained on many Van Gogh paintings so that the encoder learns to “understand” what is depicted in an image, while the decoder learns to take that encoded description and turn it into a painting in the style of Van Gogh. Once trained, such a network can receive a painting or photograph, “understand” its content, and then “unfold” the encoded information so that the result is the original image rendered in the style of Van Gogh. The same method makes it possible, for instance, to convert daytime photographs into images that appear to have been taken at night.
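The bottleneck idea can be sketched numerically. Below is a minimal linear autoencoder in NumPy (a toy illustration, not a real image model): 8-dimensional data that actually lies in a 2-dimensional subspace is squeezed through a 2-unit bottleneck, and a few plain gradient-descent steps reduce the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 8-dim points that secretly lie in a 2-dim subspace,
# so a 2-unit bottleneck can represent them well.
basis = rng.normal(size=(8, 2))
data = (basis @ rng.normal(size=(2, 200))).T   # 200 samples, 8 features

W_enc = rng.normal(scale=0.1, size=(2, 8))     # encoder: 8 -> 2 (bottleneck)
W_dec = rng.normal(scale=0.1, size=(8, 2))     # decoder: 2 -> 8

def loss(X):
    X_hat = (W_dec @ (W_enc @ X.T)).T          # encode, then decode
    return np.mean((X - X_hat) ** 2)           # reconstruction error

lr = 0.01
initial = loss(data)
for _ in range(500):
    Z = W_enc @ data.T                         # codes, shape (2, 200)
    X_hat = W_dec @ Z                          # reconstructions, shape (8, 200)
    err = X_hat - data.T
    # Gradients of the squared reconstruction error w.r.t. each weight matrix.
    W_dec -= lr * (err @ Z.T) / len(data)
    W_enc -= lr * (W_dec.T @ err @ data) / len(data)
final = loss(data)
# `final` is smaller than `initial`: the network learned to compress
# and reconstruct the data through the bottleneck.
```

Real autoencoders add nonlinear activations and deeper stacks, but the encoder/bottleneck/decoder division is exactly the one shown here.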
Generative adversarial network
Generative adversarial network – Networks that combine two separate architectures. One network is a “generator”: it learns to produce, on its own, instances of the entity of interest – say, a picture of a flower. The second network is a “judge” (the discriminator), which tries to tell the generator’s products apart from real images of flowers. If the judge succeeds in distinguishing the generated images from the real ones, the generator is improved; if the judge fails to distinguish them, the judge is improved – and so on. Such networks can produce diverse entities that are remarkably similar to reality. As of 2019, such networks have demonstrated the ability to produce portraits of non-existent people, as well as birds, flowers, sounds, and so on. The architecture was first developed and published by Ian Goodfellow in 2014.
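A minimal one-dimensional sketch of this alternating game (NumPy; the affine generator, logistic judge, and all hyperparameters are illustrative assumptions, not Goodfellow’s original setup): the judge is trained to score real samples high and fakes low, while the generator is trained to raise the judge’s score on its fakes.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: scalar samples clustered around 3.
real = rng.normal(loc=3.0, scale=1.0, size=64)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, initially producing samples near 0
w, c = 0.1, 0.0   # judge D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(200):
    z = rng.normal(size=64)
    fake = a * z + b
    # Judge step: push D(real) toward 1 and D(fake) toward 0
    # (gradient of the binary cross-entropy loss).
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w -= lr * (-np.mean((1 - p_real) * real) + np.mean(p_fake * fake))
    c -= lr * (-np.mean(1 - p_real) + np.mean(p_fake))
    # Generator step: make the judge call the fakes real
    # (gradient of the non-saturating loss -log D(g(z))).
    p_fake = sigmoid(w * fake + c)
    a -= lr * -np.mean((1 - p_fake) * w * z)
    b -= lr * -np.mean((1 - p_fake) * w)

# The judge learns w > 0 (real samples sit higher), and that signal drags the
# generator's offset b upward, toward the real data around 3.
```

Even in this toy setting the tension the text describes is visible: each judge update makes the generator’s task harder, and each generator update makes the judge’s task harder.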