Interesting facts about neural networks: the history of their creation and use in the modern world

by Evan Mcbride

Do you use Siri on your iPhone? Just to let you know, it is a neural network! What about Yandex Station? It is a neural network, too. The technology behind facial recognition in CCTV cameras is also a neural network. Artificial intelligence is everywhere, even when it is small or hidden inside everyday services.

How do neural networks work? Is it true that they have been around for centuries? We will share the history of neural networks and describe the areas in which they are developing today.

The history of neural networks: from Aristotle's time to the present day

Questions about the nature of thought, which artificial intelligence has been tackling for some 80 years now, have preoccupied philosophers since time immemorial. Arguably, the preconditions for neural networks emerged long before the current era, when ancient philosophers began to ask whether correct conclusions could be drawn by following formal rules, and where knowledge comes from. Aristotle, who formulated the principles of rational reasoning, can be called the first researcher of the future field: he proposed a system of syllogisms by which valid reasoning leads to a correct answer. Later thinkers returned to these questions: Thomas Hobbes drew an analogy between reasoning and calculation, René Descartes examined the differences between mind and matter, and Francis Bacon studied the sources of knowledge. The Italian mathematician Gerolamo Cardano laid early groundwork for probability theory, to whose development the renowned Blaise Pascal also hugely contributed. Later still, George Boole analysed the logic of propositions in detail; his system became known as Boolean algebra.

Despite these early approaches to artificial intelligence, the term “neural network” only appeared in the middle of the 20th century, as scientists began to study the brain, which proved to contain billions of interconnected neurons. The first computational prototype of a neural network appeared in 1943. McCulloch-Pitts neurons, named after their creators, could “learn” through relatively simple parameter tuning, which led scientists to believe that, with proper development, a neural network could display all the hallmarks of intelligence. Two different approaches to developing neural networks then took shape: the first focuses on modelling the human brain, and the second on building artificial intelligence in its own right. In 1949, the “self-learning theory” appeared, suggesting that artificial intelligence, like real intelligence, might spontaneously learn to perform a task without any third-party intervention. A network capable of such spontaneous learning was built five years later.
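
The McCulloch-Pitts model mentioned above is simple enough to sketch in a few lines: a unit fires when the weighted sum of its inputs reaches a threshold. The weights and threshold below are illustrative values (not from the original paper) chosen so that the unit computes a logical AND of its two inputs:

```python
# A McCulloch-Pitts style neuron: weighted inputs compared against a
# threshold. "Tuning the parameters" means choosing the weights and
# the threshold.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the neuron acts as an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron((a, b), (1, 1), 2))
```

Changing the weights or the threshold makes the same unit compute a different function (for example, threshold 1 gives OR), which is the sense in which these neurons could be "tuned".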

Scientists did not stop there. In 1957, the first models of how the brain perceives information saw the light of day, and learning neural networks were already running on them. In 1961, the first working system emerged that could recognise letters written on cards; the cards had to be held up to the “eyes” of the device, which resembled a film camera.

However, widespread interest in neural networks began to wane after Minsky and Papert published their paper in 1969. They showed that a single-layer network could not implement the “exclusive or” function on two inputs, and that the computers of the time lacked the power to train networks on significant amounts of information. The first problem was solved in 1975, and the second in the early 1980s, with the advent of more powerful computers. In 1982, networks appeared in which information could flow between neurons in both directions. It was a real breakthrough: previously, information could only be transmitted in one direction. Five years later, the public was shown prototypes of neural networks that did not destroy previously learned data: new information was used to improve their responses rather than overwrite them. In 2006, scientists proposed several alternative procedures for unsupervised training of neural networks. Research dedicated to new discoveries in artificial intelligence is now released every year: neural networks are not just being studied but are actively incorporated into various products.
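
Minsky and Papert's “exclusive or” objection is easy to demonstrate: no single threshold unit can compute XOR, but adding one hidden layer fixes it. The hand-picked weights below are illustrative, not from any historical system; the two hidden units compute OR and NAND, and the output unit ANDs them together, which yields XOR:

```python
# XOR with a two-layer threshold network. A single threshold unit
# cannot separate XOR's outputs with one line, but two layers can.

def unit(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def xor(a, b):
    h_or   = unit((a, b), (1, 1), 1)       # fires if a OR b
    h_nand = unit((a, b), (-1, -1), -1)    # fires unless a AND b
    return unit((h_or, h_nand), (1, 1), 2)  # fires if both hidden units fire

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 1 0 -> 1, 0 1 -> 1, 1 1 -> 0
```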

How does a neural network work?

As in our brains, the role of neurons is played by simple processors that exchange signals, though there are far fewer of them than in a human brain. A classic neural network consists of three layers: a sensory layer (receives information from the outside), an associative layer (processes the information and forms associations) and a reactive layer (produces the finished result). The network learns from information about tasks, associations, and results.
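
The three layers described above can be sketched as a minimal forward pass. All the weights and biases here are arbitrary example numbers; a real network would learn them from data:

```python
# A minimal three-layer forward pass: sensory (input) vector,
# associative (hidden) layer, reactive (output) layer.
import math

def sigmoid(x):
    """Squash a signal into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias, then sigmoid."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

inputs = [0.5, 0.9]                                              # sensory
hidden = layer(inputs, [[0.8, -0.4], [0.3, 0.7]], [0.1, -0.2])   # associative
output = layer(hidden, [[1.2, -0.6]], [0.05])                    # reactive
print(output)  # a single score between 0 and 1
```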

A neural network easily navigates “information noise”, selecting what is important. It can also keep working, with only a slight slowdown, even if it loses one element or another. However, unless it is a simple, long-trained algorithm such as voice search, the result of its calculations still requires human verification: a neural network's answers are not always accurate. This is why neural networks are used in Siri but are not trusted to solve complex mathematical equations on their own.

Types of neural networks

Neural networks are commonly classified by the direction in which signals travel through them: for example, they can be unidirectional, bidirectional, or recurrent.

  • Unidirectional

The signal passes in one direction: from the input layer, which receives information, to the final layer, which makes the decision. For example, when we look at a picture and recognise a cat in it, we do not send a “signal back” to the image. Unidirectional artificial intelligence works in the same way.

  • Bidirectional

As the name suggests, these networks can both receive information and send a response, which could be a certain action. Most modern robots are bidirectional: they can process data and offer one or more responses. However, they are not always able to learn.

  • Recurrent

These neural networks can remember the results of previous analysis: part of the output is fed back into the network, so earlier steps influence later ones. For example, when processing a sentence word by word, such a network retains the context of the words it has already seen and uses that knowledge the next time it faces the same task.
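
This feedback loop can be sketched with a textbook simple-recurrent step, in which the hidden state from the previous step is mixed back in with each new input. All weights are illustrative constants:

```python
# A minimal recurrent step: the previous hidden state feeds back into
# the computation, so earlier inputs influence later outputs.
import math

def rnn_step(x, h_prev, w_in=0.9, w_rec=0.5, bias=0.0):
    """New hidden state from the current input and the previous state."""
    return math.tanh(w_in * x + w_rec * h_prev + bias)

h = 0.0
for t, x in enumerate([1.0, 0.0, 0.0]):
    h = rnn_step(x, h)
    print(f"step {t}: hidden state = {h:.3f}")
# The state stays non-zero after the first input, even though the
# later inputs are zero: the network "remembers" what it saw earlier.
```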

Where are neural networks used?

  • Event forecasting. By comparing historical and current data, a neural network can predict the most likely scenarios for how events will develop. This is why artificial intelligence is frequently used to study the stock market.
  • Classification of objects. Artificial intelligence analyses an object and determines whether it matches specified parameters. This ability is actively used by banks evaluating potential borrowers and by employers screening CVs.
  • Recognition. Neural networks compare photos, videos, texts, and images across different content. This is used, for example, to find particular people in surveillance-camera footage. It is also present in Yandex Pictures, Google Photos, and your smartphone, which identifies your friends in pictures based on the other photos in your gallery.
  • Search. Search engines use artificial intelligence to improve their search results and increase the relevance of their responses.
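
The “classification of objects” use case can be illustrated with a toy perceptron that learns to separate two classes of points. The data and features below are invented for illustration (think “income” and “existing debt” for a loan decision), not taken from any real scoring model:

```python
# A perceptron learning a yes/no classification from labelled examples.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge the weights after each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# label 1 = approve (high income, low debt), 0 = decline
samples = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9), (0.1, 0.8)]
labels  = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)

def classify(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([classify(s) for s in samples])  # -> [1, 1, 0, 0]
```

Because the two classes here are linearly separable, the perceptron converges in a few epochs; real classification tasks use far more features and deeper networks, but the principle is the same.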

Which companies are developing neural networks?

Active development is already under way at Google (which has several flagship artificial-intelligence products), Microsoft (which created an entire laboratory for the purpose), Facebook (which works in a similar way to Microsoft), and Baidu (which has founded its own institute for the study of neural networks). Google employees often leave for companies developing artificial intelligence that can expertly recognise faces in a crowd, even under masks or heavy makeup.

Products with neural networks are not made only by global companies. For example, the Russian company Yandex lets users generate images from descriptive phrases or continue a story from its first sentence.

Where could neural networks be used in the near future?

We will not dwell on the professions that artificial intelligence may make obsolete. It is far more interesting to look at the business sectors that neural networks can help.

  • The agricultural sector

Neural networks are now being introduced into agriculture, for example in agricultural machinery: robots can help control harvesters and decide when to weed or fertilise.

  • The medical sector

Neural networks can ease scientists' workload by studying similar information for them. Artificial intelligence also opens up opportunities for modelling cells in digital form, allowing some biological experiments to be conducted virtually.

  • The marketing sector

Neural networks can digest vast amounts of information, telling analysts and marketers which audience segments are buying and how they feel about their purchases. Artificial intelligence also analyses users' content consumption, helping marketers decide when to change their strategy for delivering messages to their audience.

  • The online commerce sector

Artificial intelligence analyses user behaviour on different pages of a website, giving the UX professionals who work on the site an opportunity to improve the user experience and guide visitors towards a purchase.

Neural networks are a genuine technology of the future, capable of bringing us closer to a wonderful new world within decades. The technology evolves daily, with ever-better algorithms for recognising input and shaping responses. Artificial intelligence is highly valued by consumers and is finding its way into every other gadget, if not more often. We are closely following the news from the world of neural networks, so do not forget to visit our website frequently to avoid missing the latest discoveries!

Evan Mcbride

Hitecher staff writer, high tech and science enthusiast. His work includes news about gadgets, articles on important fundamental discoveries, as well as breakdowns of problems faced by companies today. Evan has his own editorial column on Hitecher.
