The central processing unit is often called the "brain" of the computer. But what do computers and humans really have in common, and how far does the analogy go?

The brain is the organ that coordinates and regulates all the vital functions of the body and controls behavior. All our thoughts, feelings, sensations, desires and movements are tied to its work, and if it stops functioning, a person falls into a vegetative state: the ability to act, to feel, or to react to external influences is lost.

Computer model of the brain

The University of Manchester is building the first computer of a new type, whose design mimics the structure of the human brain, the BBC reports. The model will cost 1 million pounds.

A computer built on biological principles should, according to Professor Steve Furber, be remarkably stable in operation. "Our brain continues to function despite the constant failure of the neurons that make up nervous tissue," says Furber. "This property is of great interest to designers who are interested in making computers more reliable."

Brain interfaces

In order to lift a glass several feet with mental energy alone, wizards had to practice for several hours a day.
Otherwise, the principle of leverage could easily squeeze the brain through the ears.

Terry Pratchett, The Color of Magic

Obviously, the crowning achievement of the human-machine interface would be the ability to control a machine by thought alone, and feeding data straight into the brain would be the pinnacle of what virtual reality can achieve. The idea is not new and has featured in the most varied science fiction for many years: virtually all of cyberpunk, with its direct connections to cyberdecks and biosofts; control of machinery through a standard brain connector (for example, in Samuel Delany's novel "Nova"); and plenty of other intriguing variations. But fiction is fiction; what is happening in the real world?

It turns out that development of brain interfaces (BCI or BMI: brain-computer interface and brain-machine interface) is in full swing, although few people know about it. The successes are, of course, still very far from what science fiction novels describe, but they are quite noticeable. Work on brain and nerve interfaces is currently carried out mainly as part of creating prostheses and devices to make life easier for partially or completely paralyzed people. All projects can be loosely divided into interfaces for input (restoring or replacing damaged sensory organs) and for output (controlling prostheses and other devices).

In all cases of direct data input, an operation is required to implant electrodes into the brain or nerves. For output, external sensors taking an electroencephalogram (EEG) can suffice. However, the EEG is a rather unreliable tool, since the skull greatly attenuates brain currents, and only very coarse, generalized information can be obtained. With implanted electrodes, data can be taken directly from the relevant brain centers (for example, the motor centers). But such an operation is a serious matter, so for now these experiments are conducted only on animals.

In fact, humanity has long had such a "single" computer. According to Wired magazine co-founder Kevin Kelly, the millions of Internet-connected PCs, cell phones, PDAs and other digital devices can be considered components of a single computer. Its CPU is the combined processors of all connected devices, its hard disk the hard disks and flash drives around the world, and its RAM the total memory of all those computers. Every second, this computer processes a volume of data equal to all the information contained in the Library of Congress, and its operating system is the World Wide Web.

Instead of the synapses of nerve cells, it uses functionally similar hyperlinks. Both are responsible for creating associations between nodal points. Each unit of thought, such as an idea, grows as more and more connections to other thoughts arise. The same holds in the network: the more links point to a resource (a nodal point), the more significant it is for the Computer as a whole. Moreover, the number of hyperlinks on the World Wide Web is approaching the number of synapses in the human brain. Kelly estimates that by 2040 the planetary computer will have computing power commensurate with the collective power of the brains of all 7 billion people who will inhabit the Earth by then.
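
To make the analogy concrete, here is a toy sketch (my illustration, not Kelly's methodology): if hyperlinks play the role of synapses, a node's significance can be crudely scored by counting the links that point to it.

```python
# Toy illustration: rank each node of a tiny hypothetical web by the
# number of inbound links, the crude "significance" measure described above.
from collections import Counter

# Edges point from the linking page to the linked page (all names invented).
links = [
    ("blog",  "wiki"),
    ("news",  "wiki"),
    ("forum", "wiki"),
    ("wiki",  "news"),
    ("blog",  "news"),
]

inbound = Counter(target for _, target in links)
for node, count in inbound.most_common():
    print(f"{node}: {count} inbound links")
# "wiki" collects the most inbound links, so by this crude measure it is
# the most significant node for the network as a whole.
```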

But what about the human brain itself? A long-outdated biological mechanism. Our gray matter runs at the speed of the very first Pentium processor, the 1993 model: the brain operates at a frequency of about 70 MHz. Moreover, the brain works on an analog principle, so no direct comparison with digital data processing is possible. This is the main difference between a synapse and a hyperlink: a synapse, reacting to its environment and to incoming information, subtly changes the organism, which never has two identical states; a hyperlink, by contrast, is always the same, otherwise problems begin.

Nevertheless, one cannot help admitting that our brain far surpasses in efficiency any artificial system created by people. In some completely mysterious way, all the brain's gigantic computing abilities fit inside a cranium, weigh a little over a kilogram, and require only 20 watts of energy to function. Compare that with the 377 billion watts that, by rough estimates, the Single Computer currently consumes. That, incidentally, is as much as 5% of the world's electricity production.
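
A quick back-of-the-envelope check, using only the article's own figures:

```python
# Comparing the article's two power figures: ~20 W for a brain,
# ~377 billion watts for the "Single Computer".
brain_watts = 20
single_computer_watts = 377e9

ratio = single_computer_watts / brain_watts
print(f"the Single Computer draws ~{ratio:.1e} times the brain's power")
# ~1.9e+10: roughly twenty billion brains' worth of power for one network.
```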

Such monstrous power consumption alone will never allow the Single Computer to come even close to the human brain in terms of efficiency. Even in 2040, when the computing power of computers becomes sky-high, their power consumption will only have continued to grow.

The last century saw the greatest leap in the development of mankind. Having traveled the difficult path from the primer to the Internet, people have still not solved the main riddle that has tormented great minds for more than a hundred years: how does the human brain work, and what is it capable of?

To this day this organ remains the most poorly studied, and yet it is what made man what he is now: the highest stage of evolution. The brain, still keeping its secrets and mysteries, continues to determine a person's activity and consciousness at every stage of life. No modern scientist has yet unraveled everything it is capable of. That is why so many myths and unsubstantiated hypotheses have gathered around one of the main organs of our body, and why the hidden potential of the human brain remains to be explored, even as its known abilities already go beyond established ideas about how it works.


Photo: Pixabay/geralt

The structure of the brain

This organ consists of a huge number of connections that create stable interaction between cells and processes. Scientists estimate that if these connections were laid out in a straight line, they would stretch eight times the distance from the Earth to the Moon.

The mass fraction of this organ in total body weight is no more than 2%, and its weight varies between 1,019 and 1,960 grams. From a person's birth to their last breath it works without interruption, and it therefore needs to absorb 21% of all the oxygen entering the human body. Scientists have sketched an approximate picture of how the brain assimilates information: its memory can hold from 3 to 100 terabytes, while the memory of a modern computer currently tops out at around 20 terabytes.

The most common myths about the human biological computer

The brain's neural tissue dies off during the life of the organism, and new tissue is not formed. This is a fallacy, the absurdity of which was demonstrated by Elizabeth Gould. Nervous tissue and neurons are constantly renewed, and new connections replace the dead ones. Studies have confirmed that around the foci of cells destroyed by a stroke, the human body is able to "build up" new material.

Humans use only 5-10% of the brain; all its other capacities lie idle. Some scientists explained this by saying that nature, having created such a complex and developed mechanism, devised a protective system for it, shielding the organ from excessive load. This is wrong. It is reliably known that the brain is 100% involved in any human activity; it is just that during any given action its individual parts take turns responding.

Superpowers: what can the human mind surprise us with?

Some people who show no outward signs of incredible abilities may truly have them. They do not appear in everyone, but scientists say that regular, intensive brain training can develop superpowers, though the secret of what "selects" the people entitled to be called geniuses has not yet been revealed. Some can competently extract themselves from difficult situations; others sense approaching danger on a subconscious level. More interesting from the point of view of science, though, are the following superpowers:

  • The ability to perform mathematical operations of any complexity without a calculator or calculations on paper;
  • The ability to create works of genius;
  • Photographic memory;
  • Speed reading;
  • Psychic abilities.

Amazing cases revealing the unique abilities of the human brain

Throughout human history, a large number of stories have accumulated confirming that the human brain can harbor hidden abilities, adapt to changing situations, and shift certain functions from a damaged area to a healthy one.

Sonar vision. This ability usually develops after loss of sight. Daniel Kish mastered the echolocation technique inherent in bats: the sounds he makes, such as clicking his tongue or snapping his fingers, let him walk without a cane.

Mnemonics is a technique that allows one to perceive and remember almost any amount of information, whatever its nature. Many people master it in adulthood; for the American Kim Peek it was an innate gift.

The gift of foresight. Some people claim to be able to see the future. This has never been conclusively proven, but history knows many people whom such an ability made famous throughout the world.

Phenomena that the human brain is capable of

Carlos Rodriguez lost more than 59% of his brain at the age of 14 in an accident, yet he still lives a completely normal life.

Yakov Tsiperovich, after clinical death and a week in a coma, stopped sleeping, eats little and does not age. Three decades have passed since then, and he is still young.

Phineas Gage was horrifically injured in the middle of the 19th century: a thick iron rod went through his head, taking a good part of his brain with it. Medicine in those years was not sufficiently advanced, and the doctors predicted his imminent death. Yet the man not only did not die, but retained his memory and clarity of mind.

The human brain, like the body, needs constant exercise: specially designed training programs, as well as reading books, solving puzzles and logic problems. One should also not forget to supply this organ with nutrients. But in the end only constant training allows the brain to keep developing and increasing its capabilities.

Imagine an experimental nanodrug that can link the minds of different people. Imagine a group of enterprising neuroscientists and engineers discovering a new use for this drug: running an operating system right inside the brain. People would then be able to communicate telepathically through a mental chat, and even manipulate other people's bodies by commandeering their brains. This is the plot of the science fiction novel Nexus by Ramez Naam, yet the future of the technology it describes no longer seems so far away.

How to connect your brain to a tablet and help paralyzed patients communicate

For patient T6, 2014 was the happiest year of her life. It was the year she was able to control a Nexus tablet computer using signals from her own brain, and to be transported, almost literally, from the 1980s era of disk operating systems (DOS) into the new age of Android OS.

T6 is a 50-year-old woman suffering from amyotrophic lateral sclerosis, also known as Lou Gehrig's disease, which causes progressive damage to motor neurons and paralysis throughout the body. T6 is paralyzed almost completely from the neck down. Until 2014, she could not interact with the outside world at all.

Paralysis can also result from spinal cord damage, stroke, or neurodegenerative diseases that block the ability to speak, write, and generally communicate with others in any way.

The era of brain-machine interfaces blossomed two decades ago with the creation of assistive devices to help such patients. The results were fantastic: eye-tracking and head-tracking made it possible to follow eye movements and use them to control a mouse cursor on the computer screen. Sometimes the user could even click a link by fixing their gaze on one point of the screen, a technique based on fixation duration known as "dwell time."
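
For illustration, here is a minimal sketch of how dwell-time clicking can work. The thresholds and the function are hypothetical, not taken from any particular product:

```python
# Hypothetical dwell-time clicking: if the gaze stays within a small radius
# of one point for long enough, register it as a click.
import math

DWELL_SECONDS = 1.0   # how long the gaze must hold still
RADIUS_PX = 30        # how much jitter is tolerated

def detect_click(samples):
    """samples: list of (t_seconds, x_px, y_px) gaze points, in time order."""
    anchor = None  # (t, x, y) where the current fixation started
    for t, x, y in samples:
        if anchor is None or math.hypot(x - anchor[1], y - anchor[2]) > RADIUS_PX:
            anchor = (t, x, y)             # gaze moved: restart the fixation
        elif t - anchor[0] >= DWELL_SECONDS:
            return (anchor[1], anchor[2])  # held long enough: click here
    return None

gaze = [(i * 0.1, 400, 300) for i in range(15)]  # 1.5 s of steady gaze
print(detect_click(gaze))  # -> (400, 300)
```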

However, eye-tracking systems were hard on the user's eyes and too expensive. Then neural prosthetics arrived, removing the intermediary sense organ and letting the brain communicate directly with the computer. A microchip is implanted in the patient's brain, and the neural signals associated with a desire or intention are decoded by complex algorithms in real time and used to control a cursor on the computer interface.

Two years earlier, patient T6 had had a 100-channel electrode array implanted in the region on the left side of her brain responsible for movement. In parallel, the Stanford lab was working on a prototype prosthesis that would let paralyzed people type words on a specially designed keyboard just by thinking about them. The device worked as follows: the electrodes embedded in the brain recorded the patient's brain activity as she looked at the desired letter on the screen and passed this information to the neuroprosthesis, which interpreted the signals and turned them into continuous control of the cursor and clicks on the screen.
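
The actual decoding pipeline is far more sophisticated than this, but a toy stand-in for the idea (hypothetical weights, simulated firing rates) might look like the following:

```python
# Toy decoder: map a vector of firing rates from a 100-channel array to a
# 2-D cursor velocity with a fixed linear model. The weights stand in for
# a model fit beforehand on calibration data, where the patient imagined
# moving toward known targets.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 100

W = rng.normal(scale=0.05, size=(2, n_channels))   # (vx, vy) per unit rate

def decode_velocity(firing_rates):
    """firing_rates: spikes/second on each electrode -> (vx, vy) in px/s."""
    return W @ firing_rates

rates = rng.poisson(lam=20, size=n_channels)       # one time bin of activity
vx, vy = decode_velocity(rates)
print(f"cursor velocity: ({vx:+.1f}, {vy:+.1f}) px/s")
```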

However, this process was extremely slow. It became clear that the end result should be a device that works without a direct physical connection to the computer through the electrodes. The interface itself also had to look more appealing than in the 1980s. The BrainGate clinical team behind this research realized that their point-and-click system was equivalent to tapping a finger on a touch screen. And since most of us use touch tablets every day, the market for them is huge; it was enough simply to pick one and buy it.

Paralyzed patient T6 was able to "tap" on the screen of a Nexus 9 tablet. The neuroprosthesis communicated with the tablet via Bluetooth, in effect acting as a wireless mouse.

The team is now working on extending the life of the implant, as well as on systems for other movements, such as select-and-drag and multi-touch gestures. In addition, BrainGate plans to extend the program to other operating systems.

Computer chip made from living brain cells

A few years ago, researchers in Germany and Japan managed to simulate a single second of the activity of 1 percent of the human brain. Even that required the computing power of one of the most powerful supercomputers in the world.

But the human brain is still the most powerful, lowest-power, most efficient computer there is. What if you could use its power to drive the machines of future generations?

As wild as it sounds, neuroscientist Osh Agabi launched the Koniku project to achieve exactly this goal. He has created a prototype 64-neuron silicon chip. The first application of the development is a drone that can "smell" explosives.

Bees have some of the most sensitive olfactory abilities; in fact, they even navigate through space by smell. Agabi has created a drone that rivals a bee's ability to recognize and interpret odors. It can be used not only for military purposes such as bomb detection, but also for surveying farmland, oil refineries, and anywhere else where health and safety can be gauged by smell.

During development, Agabi and his team tackled three main problems: structuring neurons the way they are structured in the brain, reading and writing information to each individual neuron, and creating a stable environment for them.

Induced pluripotent stem cell technology, in which a mature cell (a skin cell, for example) is genetically reprogrammed back into a stem cell, allows virtually any cell to be turned into a neuron. But like any electronic component, living neurons need a suitable habitat.

The neurons were therefore placed in environmentally controlled shells that regulate temperature and pH inside and supply the cells with power. Such a shell also makes it possible to control how the neurons interact with each other.
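
Koniku's actual enclosure design is not described here, but the control problem itself is classic. A minimal, purely illustrative hysteresis loop for holding temperature near a set-point could look like this:

```python
# Purely illustrative control loop (not Koniku's design): keep a neuron
# enclosure near a temperature set-point with simple on/off heating.
SETPOINT_C = 37.0
TOLERANCE_C = 0.5

def heater_command(temp_c, heater_on):
    """Hysteresis control: turn on when too cold, off when too warm."""
    if temp_c < SETPOINT_C - TOLERANCE_C:
        return True
    if temp_c > SETPOINT_C + TOLERANCE_C:
        return False
    return heater_on  # inside the dead band: keep the current state

state = False
for reading in [35.8, 36.4, 36.9, 37.3, 37.7, 37.2]:
    state = heater_command(reading, state)
    print(f"{reading:.1f} C -> heater {'ON' if state else 'off'}")
```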

Electrodes under the shell make it possible to read information from the neurons or write to them. Agabi describes the process as follows:

"We coat the electrodes with DNA and enriched proteins, which encourages the neurons to form close artificial connections with these conductors. We can then read information from the neurons or, conversely, send information to them in the same way, or through light or chemical processes."

Agabi believes the future of technology lies in unlocking the capabilities of so-called wetware: the human brain working in concert with machine processes.

“There are no practical limits to how big we can make our future devices or how differently we can model the brain. Biology is the only frontier."

Koniku's future plans include the development of chips:

  • with 500 neurons, which will control a driverless car;
  • with 10,000 neurons, which will be able to process and recognize images the way the human eye does;
  • with 100,000 neurons, which will create a robot with multi-sensory input that is practically indistinguishable from a human in its perceptual properties;
  • with a million neurons, which will give us a computer that thinks for itself.

Memory chip embedded in the brain

Every year, hundreds of millions of people experience difficulties caused by memory loss. The reasons are varied: the brain damage that plagues veterans and football players, the strokes and Alzheimer's disease that strike in old age, or simply the brain aging that awaits us all. Dr. Theodore Berger, a biomedical engineer at the University of Southern California funded by the Defense Advanced Research Projects Agency (DARPA), is testing a memory-enhancing implant that mimics the hippocampus's signal processing when neurons fail to form new long-term memories.

For the device to work, scientists must understand how memory works. The hippocampus is the area of the brain responsible for transforming short-term memories into long-term ones. How does it do this? And can its activity be simulated within a computer chip?

"Essentially, memory is a series of electrical impulses that occur over time and are generated by a certain number of neurons," Berger explains. "This is very important, because it means we can reduce the process to a mathematical equation and cast it within the framework of a computational process."

So neuroscientists began to decode the flow of information inside the hippocampus. The key to the deciphering was a strong electrical signal that travels from an area of the organ called CA3, the "entrance" of the hippocampus, to CA1, the "exit" node. This signal is attenuated in people with memory disorders.

“If we could recreate it using a chip, we would restore or even increase the amount of memory,” says Berger.

But this decoding path is difficult to trace, since neurons behave non-linearly, and any seemingly insignificant factor involved in the process can lead to completely different results. However, mathematics and programming do not stand still, and together they can now build extremely complex computational structures with many unknowns and many "outputs."

To begin with, the scientists taught rats to press one lever or another to get a treat. While the rats memorized the task and the memory was being converted into a long-term one, the researchers carefully recorded all the transformations in neuronal activity, then created a computer chip from the resulting mathematical model. Next, they injected the rats with a substance that temporarily disrupted their ability to remember, and implanted the chip in their brains. The device acted on the "output" region CA1, and suddenly the scientists found that the rats' memory of how to obtain the treat had returned.

The next tests were carried out on monkeys. This time the scientists focused on the prefrontal cortex, which receives and modulates memories coming from the hippocampus. The animals were shown a series of images, some of which repeated. Having recorded the activity of neurons at the moments the animals recognized a repeated picture, the researchers built a mathematical model and a chip based on it. After the monkeys' prefrontal cortex function was suppressed with cocaine, the scientists were again able to restore memory.

When the experiments moved to humans, Berger selected 12 volunteers with epilepsy who already had electrodes implanted in their brains to trace the source of their seizures. Repeated seizures destroy key parts of the hippocampus needed to form long-term memories, and the same implanted electrodes that are used to study brain activity during seizures could, in principle, be used to restore memory.

As in the previous experiments, a person-specific "memory code" was recorded that could then predict the pattern of activity in CA1 cells from data stored or generated in CA3. Compared with "real" brain activity, such a chip works with an accuracy of about 80%.
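
Berger's real model is a nonlinear multi-input, multi-output system fit to recorded spike trains; the following cartoon only illustrates the shape of the task (predict CA1 activity from CA3 activity, then score the accuracy) on synthetic data:

```python
# Cartoon of the CA3 -> CA1 prediction task, on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_ca3, n_ca1, n_trials = 32, 16, 2000

# Synthetic "ground truth": each CA1 cell fires when a weighted sum of CA3
# inputs crosses zero, plus noise the fitted model cannot capture.
W_true = rng.normal(size=(n_ca1, n_ca3))
ca3 = rng.integers(0, 2, size=(n_trials, n_ca3))
drive = ca3 @ W_true.T + rng.normal(scale=2.0, size=(n_trials, n_ca1))
ca1 = (drive > 0).astype(int)

# Fit a linear predictor on the first half of the trials, test on the rest.
train, test = slice(0, 1000), slice(1000, 2000)
W_fit, *_ = np.linalg.lstsq(ca3[train], ca1[train] * 2.0 - 1.0, rcond=None)
pred = (ca3[test] @ W_fit > 0).astype(int)

print(f"per-cell prediction accuracy: {(pred == ca1[test]).mean():.0%}")
# Typically lands in the 80-90% range on this synthetic data.
```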

It is too early to talk about concrete results from the human experiments. Unlike the motor cortex, where each section is responsible for a specific organ, the hippocampus is organized chaotically. It is also too early to say whether such an implant could restore memory to those whose "exit" region of the hippocampus is damaged.

Generalizing the algorithm for such a chip remains a problem, since the experimental prototype was created from the individual data of specific patients. What if the memory code is different for everyone, depending on the type of incoming data it receives? Berger counters that the brain is also constrained by its own biophysics:

"There are only so many ways in which electrical signals in the hippocampus can be processed, which, despite its multitude, is nevertheless limited and finite,” says the scientist.

The central idea of the works of the famous Ray Kurzweil is artificial intelligence, which will eventually come to dominate every area of people's lives. In his book The Evolution of the Mind, Kurzweil explores the endless possibilities of reverse-engineering the human brain.

In the same article, Turing reported another surprising discovery, concerning unsolvable problems. Unsolvable problems are those that are well described by a single solution (which can be shown to exist) but that, as can also be shown, cannot be solved by any Turing machine, that is, by any machine at all. The existence of such problems fundamentally contradicts the dogma, formed by the beginning of the 20th century, that all problems that can be formulated are solvable. Turing showed that there are no fewer unsolvable problems than solvable ones. In 1931, Kurt Gödel had reached a similar conclusion with his incompleteness theorem. This leaves us in a strange situation: we can formulate a problem and prove that it has a unique solution, yet know that we will never be able to find that solution.

Turing showed that computers operate on a very simple mechanism. Because a Turing machine (and therefore any computer) can determine its next action based on results it has previously computed, it is able to make decisions and to build hierarchical information structures of any complexity.

In 1939, Turing designed an electromechanical calculator, the Bombe, to help decipher messages encrypted by the Germans on the Enigma coding machine. By 1943, a team of engineers with Turing's involvement had completed the Colossus, sometimes called the first computer in history, which allowed the Allies to decipher messages created by a more sophisticated version of Enigma. The Bombe and Colossus were designed for a single task and could not be reprogrammed, but they performed their function brilliantly. It is believed that partly thanks to them the Allies were able to anticipate German tactics throughout the war, and that the Royal Air Force was able, in the Battle of Britain, to defeat a Luftwaffe that outnumbered it three to one.

It was on this foundation that John von Neumann created the architecture of the modern computer, which reflects the third of the four most important ideas of information theory. In the nearly seventy years since then, the core of this machine, called the "von Neumann machine," has remained virtually unchanged, whether in the microcontroller of your washing machine or in the largest supercomputer. In an article published on June 30, 1945, entitled "First Draft of a Report on the EDVAC," von Neumann set out the main ideas that have guided the development of computer science ever since. The von Neumann machine has a central processor, where arithmetic and logical operations are performed; a memory module that stores programs and data; mass storage; a program counter; and input/output channels. Although the article was intended for internal use within the project, it became a bible for the creators of computers. This is how a simple routine report can sometimes change the world.

The Turing machine was not designed for practical purposes. Turing's theorems were not concerned with the efficiency of problem solving; rather, they described the range of problems that could in theory be solved by a computer. Von Neumann's goal, by contrast, was to create the concept of a real computer. His model replaced Turing's one-bit system with a multi-bit word (usually a multiple of eight bits). A Turing machine has a sequential memory tape, so programs spend a very long time moving the tape back and forth to write and retrieve intermediate results. In the von Neumann system, by contrast, memory is random-access, so any needed data can be retrieved immediately.
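
To make the tape-shuffling point concrete, here is a minimal Turing machine interpreter; the sample rule set increments a binary number, with the head starting on its rightmost digit:

```python
# A minimal Turing machine interpreter. The sample rules propagate a
# carry leftward to increment a binary number.
def run_tm(tape, head, state, rules, blank="_"):
    cells = dict(enumerate(tape))
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": +1}[move]   # one step along the tape at a time
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# (state, symbol) -> (write, move, next_state)
rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # ran off the left edge: new digit
}

print(run_tm("1011", head=3, state="carry", rules=rules))  # -> 1100 (11 + 1)
```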

One of von Neumann's key ideas is the concept of the stored program, which he developed a decade before the computer was built. Its essence is that the program is stored in the same random-access memory module as the data (and often even in the same block of memory). This allows the computer to be reprogrammed for different problems, and allows self-modifying code (when program memory is writable), which makes recursion possible. Until that time, almost all computers, including Colossus, had been built for specific problems. The concept of the stored program allowed the computer to become a truly universal machine, corresponding to Turing's idea of the universality of machine computing.

Another important property of the von Neumann machine is that each instruction contains an operation code, defining an arithmetic or logical operation, and the address of an operand in the computer's memory.
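
A toy interpreter makes both points at once: program and data share one memory, and every instruction is an opcode plus an operand address. The instruction set here is invented for illustration, not any historical machine's:

```python
# A toy stored-program machine: one shared memory holds both instructions
# and data; each instruction is an (opcode, operand-address) pair.
def run(memory, pc=0, acc=0):
    while True:
        op, addr = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "JNZ":                  # a conditional jump
            pc = addr if acc != 0 else pc
        elif op == "HALT":
            return memory

# The program occupies cells 0-3; its data sits alongside it in cells 5-7.
memory = {
    0: ("LOAD", 5), 1: ("ADD", 6), 2: ("STORE", 7), 3: ("HALT", 0),
    5: 2, 6: 3, 7: 0,
}
print(run(memory)[7])  # -> 5, the sum of cells 5 and 6
```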

Von Neumann's concept of computer architecture was embodied in the EDVAC project, on which he worked with J. Presper Eckert and John Mauchly. The EDVAC itself only began to function in 1951, by which time other stored-program computers already existed, such as the Manchester Small-Scale Experimental Machine, ENIAC, EDSAC, and BINAC, all of them influenced by von Neumann's paper and built with contributions from Eckert and Mauchly. Von Neumann was also involved in some of these machines, including the latest version of ENIAC, which used the stored-program principle.

The von Neumann computer had several predecessors, but none of them, with one unexpected exception, can be called a true von Neumann machine. In 1944, Howard Aiken released the Mark I, which could be reprogrammed to an extent but did not use a stored program: the machine read instructions from punched tape and executed them immediately. It also had no conditional jumps.

In 1941, the German scientist Konrad Zuse (1910-1995) created the Z-3 computer. It also read its program from tape (in this case, punched film) and also lacked conditional jumps. Interestingly, Zuse received financial support from the German Aircraft Research Institute, which used the computer to study wing flutter. However, Zuse's proposal to fund the replacement of its relays with vacuum tubes was rejected by the Nazi government, which considered the development of computing "of no military importance." This, it seems to me, influenced the outcome of the war to a certain extent.

In fact, von Neumann had one brilliant predecessor, and he lived a hundred years earlier. The English mathematician and inventor Charles Babbage (1791-1871) described his Analytical Engine in 1837, based on the same principles as the von Neumann computer and using a stored program punched on the cards of jacquard weaving machines. The machine's random-access memory held 1,000 words of 50 decimal digits each (equivalent to about 21 kilobytes). Each instruction contained an opcode and an operand number, just as in modern computer languages. The system included conditional jumps and loops, so it was a true von Neumann machine. Completely mechanical, it apparently exceeded both the design and the organizational capabilities of Babbage himself: he built parts of the machine but never got it to work.
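
The "about 21 kilobytes" figure is easy to verify from the stated store size:

```python
# Verifying the "about 21 kilobytes" figure for Babbage's store:
# 1,000 words of 50 decimal digits each.
import math

bits_per_decimal_digit = math.log2(10)          # ~3.32 bits per digit
total_bits = 1000 * 50 * bits_per_decimal_digit
print(f"{total_bits / 8:,.0f} bytes = {total_bits / 8 / 1000:.1f} kB")
# -> 20,762 bytes = 20.8 kB, i.e. about 21 kB.
```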

It is not known for certain whether the pioneers of 20th-century computing, including von Neumann, were aware of Babbage's work.

However, the creation of Babbage's machine marked the beginning of programming. The English writer Ada Byron (1815-1852), Countess of Lovelace and the only legitimate child of the poet Lord Byron, became the world's first computer programmer. She wrote programs for Babbage's Analytical Engine and debugged them in her head (since the computer itself never worked), a practice programmers today call table checking. She translated an article by the Italian mathematician Luigi Menabrea on the Analytical Engine, adding substantial comments of her own and noting that "the Analytical Engine weaves algebraic patterns just as the jacquard loom weaves flowers and leaves." She may have been the first to mention the possibility of artificial intelligence, though she concluded that the Analytical Engine "is not capable of inventing anything on its own."

Babbage's ideas seem amazing given the era in which he lived and worked. However, by the middle of the twentieth century these ideas had been practically forgotten (and were rediscovered only later). It was von Neumann who devised and formulated the key principles of the computer in its modern form, and it is no accident that the von Neumann machine is still considered the principal model of a computer. But let us not forget that the von Neumann machine constantly exchanges data between its individual modules and within those modules, so it could not have been created without Shannon's theorems and the methods he proposed for the reliable transmission and storage of digital information.

All of this brings us to the fourth important idea, which overcomes Ada Byron's conclusion about a computer's inability to think creatively: finding the key algorithms used by the brain and then applying them to turn a computer into a brain. Alan Turing formulated this problem in the article "Computing Machinery and Intelligence," published in 1950, which describes the now widely known Turing test for determining whether an AI has reached human-level intelligence.

In 1956, von Neumann began preparing a series of lectures for the prestigious Silliman Lectures at Yale University. The scientist was already ill with cancer and could neither deliver the lectures nor even finish the manuscript from which they were to be given. Nevertheless, this unfinished work is a brilliant prediction of what I personally regard as the most difficult and important project in the history of mankind. After the scientist's death, in 1958, the manuscript was published under the title "The Computer and the Brain." It so happened that the last work of one of the most brilliant mathematicians of the last century, one of the founders of computer technology, was devoted to the analysis of thinking. This was the first serious study of the human brain from the perspective of a mathematician and computer scientist; before von Neumann, computer technology and neuroscience were two separate islands with no bridge between them.

Von Neumann begins by describing the similarities and differences between the computer and the human brain. Given the era in which it was written, the account is surprisingly accurate. He notes that the output signal of a neuron is digital: the axon either fires or remains at rest. At the time, it was far from obvious that the output processing could be done in an analog way. Signal processing in the dendrites leading into a neuron and in the neuron's body is analog, and von Neumann described this as a weighted sum of input signals compared against a threshold value.
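
That description is essentially the classic threshold neuron (the McCulloch-Pitts model), and it fits in a few lines of code: the weighted sum is the analog part, the threshold comparison produces the digital output.

```python
# Von Neumann's picture of the neuron: analog weighted summation of
# inputs, digital fire / don't-fire output.
def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))  # analog stage
    return 1 if activation >= threshold else 0                # digital stage

weights = [0.6, 0.9, 0.5]  # three dendritic inputs with synaptic strengths
print(neuron([1, 0, 1], weights, threshold=1.0))  # 0.6 + 0.5 = 1.1 -> fires
print(neuron([0, 1, 0], weights, threshold=1.0))  # 0.9 < 1.0 -> stays silent
```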

This model of neuron function led to the development of connectionism and to the use of this principle to create both hardware designs and computer programs. (As I described in the previous chapter, the first such system, a program for the IBM 704, was created by Frank Rosenblatt of Cornell University in 1957, just after the manuscript of von Neumann's lectures became available.) We now have more sophisticated models of how neurons combine input signals, but the general idea of analog signal processing via changing concentrations of neurotransmitters still holds.

Building on the universality of computation, von Neumann concluded that even given the seemingly radical difference between the architecture and structural units of the brain and those of the computer, we can use a von Neumann machine to simulate the processes occurring in the brain. The converse postulate, however, is not true, since the brain is not a von Neumann machine and has no stored program (although we can simulate the operation of a very simple Turing machine in our heads). The brain's algorithms, its methods of functioning, are determined by its structure. Von Neumann rightly concluded that neurons can learn appropriate patterns from their input signals. What was not known in his time is that learning also occurs through the making and breaking of connections between neurons.

Von Neumann also pointed out that neurons process information very slowly, at about a hundred calculations per second, but that the brain compensates by processing information in many neurons simultaneously. This is another obvious but very important observation. He claimed that all 10^10 neurons in the brain (an estimate that is also quite accurate: by today's reckoning there are between 10^10 and 10^11) process signals at the same time. Moreover, all their contacts (on average 10^3 to 10^4 per neuron) are computed simultaneously.

Given the primitive state of neuroscience at the time, von Neumann's assessments and descriptions of neuronal function are remarkably accurate. However, I cannot agree with one aspect of his work: his idea of the brain's memory capacity. He assumed that the brain remembers every signal for life. Estimating the average human lifespan at 60 years, or approximately 2 × 10^9 seconds, with each neuron receiving about 14 signals per second (actually three orders of magnitude below the true value) and 10^10 neurons in the brain, he arrived at a memory capacity of about 10^20 bits. As I wrote above, we remember only a small part of our thoughts and experiences, and even those memories are stored not as low-level, bit-by-bit information (as in a video) but as sequences of higher-order images.
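
Both of von Neumann's estimates are easy to reproduce from the numbers quoted above:

```python
# Reproducing von Neumann's estimates from the figures quoted above.
neurons = 1e10                  # his neuron count (today: 10^10 to 10^11)
contacts_per_neuron = 1e3       # he used 10^3 to 10^4
rate_hz = 100                   # ~100 calculations per second per neuron

throughput = neurons * contacts_per_neuron * rate_hz
print(f"parallel throughput: ~{throughput:.0e} contact-updates/second")

lifetime_s = 2e9                # ~60 years in seconds
signals_per_second = 14         # his per-neuron signal estimate
capacity = lifetime_s * signals_per_second * neurons
print(f"memory capacity: ~{capacity:.0e} bits")
# ~3e+20 bits, i.e. on the order of the 10^20 figure in the text.
```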

As von Neumann describes each mechanism of brain function, he simultaneously demonstrates how a modern computer could perform the same function, despite the apparent difference between brain and computer. The brain's analog mechanisms can be modeled with digital ones, since digital calculations can model analog values to any desired accuracy (and the accuracy of analog information transmission in the brain is quite low). The massive parallelism of brain function can also be simulated, given the significant superiority of computers in serial computation speed (a superiority that has grown even greater since von Neumann's time). In addition, computers can process signals in parallel by using parallel von Neumann machines: this is how modern supercomputers work.

Given people's ability to make quick decisions at such low neuronal speeds, von Neumann concluded that brain functions cannot use long sequential algorithms. When a third baseman fields the ball and decides to throw to first base rather than second, he makes that decision in a fraction of a second, during which each neuron barely has time to complete a few firing cycles. Von Neumann reached the logical conclusion that the brain's remarkable ability comes from its 100 billion neurons all processing information at the same time. As I noted above, the visual cortex makes complex inferences in just three or four neuronal firing cycles.

It is the brain's considerable plasticity that allows us to learn. A computer, however, is far more plastic still: its methods can be completely changed by changing its software. Thus a computer can mimic the brain, but the converse is not true.

When von Neumann compared the massively parallel capacity of the brain with the few computers of his day, it seemed clear that the brain had far greater memory and speed. Today, the first supercomputer has already been built that, by the most conservative estimates, meets the functional requirements for simulating the human brain (about 10^16 operations per second). (In my opinion, computers of this capacity will cost about $1,000 in the early 2020s.) In terms of memory we have come even further. Von Neumann's work appeared at the very beginning of the computer era, yet the scientist was confident that at some point we would be able to create computers and programs capable of mimicking the human brain; that is why he prepared his lectures.

Von Neumann was deeply convinced of the acceleration of progress and of its significant future impact on people's lives. A year after his death, in 1957, fellow mathematician Stan Ulam quoted von Neumann as saying in the early 1950s that "the ever-accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the human race beyond which human affairs, as we know them today, could not continue." This is the first known use of the word "singularity" in the context of the technological progress of mankind.

Von Neumann's most important insight lies in the similarity he found between the computer and the brain. Note that part of human intelligence is emotional intelligence. If von Neumann's conjecture is correct, and if one accepts my assertion that a non-biological system that satisfactorily reproduces the intelligence (emotional and otherwise) of a living person possesses consciousness (see the next chapter), then one must conclude that there is a clear similarity between a computer (with the correct software) and conscious thought. So, was von Neumann right?

Most modern computers are entirely digital machines, whereas the human brain uses both digital and analog methods. But analog methods are easily reproduced digitally to any desired accuracy. The American computer scientist Carver Mead (born 1934) showed that the brain's analog techniques can be directly replicated in silicon, and implemented this in so-called neuromorphic chips. Mead demonstrated that this approach can be thousands of times more efficient than digital simulation of analog methods. When it comes to encoding the neocortex's redundant algorithms, it may make sense to use Mead's idea. An IBM research team led by Dharmendra Modha is using chips that mimic neurons and their connections, including their ability to form new contacts. One of the chips, called SyNAPSE, directly models 256 neurons and roughly a quarter of a million synaptic connections. The goal of the project is to simulate a neocortex of 10 billion neurons and 100 trillion contacts (equivalent to the human brain) while using only one kilowatt of energy.

More than fifty years ago, von Neumann observed that processes in the brain are extremely slow but massively parallel. Modern digital circuits operate at least 10 million times faster than the brain's electrochemical switches; on the other hand, all 300 million pattern-recognition modules of the cerebral cortex are active simultaneously, and a quadrillion inter-neuron contacts can be activated at the same time. To build computers that can adequately mimic the human brain, then, we need an appropriate amount of memory and computational performance. There is no need to copy the brain's architecture directly; that would be a very inefficient and inflexible method.

What should such computers look like? Many research projects aim to model the hierarchical learning and pattern recognition that occur in the neocortex. I do similar research myself using hierarchical hidden Markov models. I estimate that modeling one recognition cycle in one recognition module of the biological neocortex takes about 3,000 calculations; most simulations use far fewer. If we assume the brain performs about 10^2 (100) recognition cycles per second, we get 3 × 10^5 (300,000) calculations per second per recognition module. Multiplying by my estimate of the total number of recognition modules, 3 × 10^8 (300 million), gives about 10^14 (100 trillion) calculations per second. I give roughly the same value in The Singularity Is Near, where I predict that functional simulation of the brain requires between 10^14 and 10^16 calculations per second. Hans Moravec, extrapolating from the initial visual processing performed across the whole brain, estimates 10^14 calculations per second, which agrees with my calculations.
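
The arithmetic behind these estimates is straightforward:

```python
# The arithmetic behind the ~10^14 calculations-per-second estimate.
calcs_per_cycle = 3_000      # calculations per recognition cycle per module
cycles_per_second = 100      # ~10^2 recognition cycles per second
modules = 3e8                # ~300 million recognition modules

per_module = calcs_per_cycle * cycles_per_second   # 3e5 calc/s per module
total = per_module * modules
print(f"{per_module:.0e} calc/s per module -> {total:.0e} calc/s total")
# 3e+05 per module, 9e+13 in total, i.e. on the order of 10^14.
```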

Standard modern machines can reach about 10^10 calculations per second, and their performance can be significantly increased with cloud resources. The fastest supercomputer, Japan's K computer, has already reached 10^16 calculations per second. Given the massive redundancy of the neocortex's algorithms, good results can be achieved with neuromorphic chips such as those of the SyNAPSE technology.

In terms of memory requirements, we need about 30 bits (approximately 4 bytes) for each contact with one of the 300 million recognizers. If an average of eight signals feed each recognizer, that is 32 bytes per recognizer. Adding one byte of weight for each input signal brings us to 40 bytes, and another 32 bytes for downstream contacts gives 72 bytes. Note that uplinks and downlinks actually produce far more than eight signals each, but many recognizers share a common, highly branched system of connections. For example, hundreds of recognition modules may be involved in recognizing the letter "p," and thousands of next-level recognizers are involved in recognizing words and phrases containing "p." But each module responsible for recognizing "p" does not repeat the tree of connections feeding every level of word and phrase recognition; all of these modules share one tree of connections.

The same is true for downstream signals: the module responsible for recognizing the word "apple" will tell all thousand lower-level modules responsible for recognizing "e" that an "e" is expected once "a," "p," "p," and "l" have been recognized. This tree of connections is not repeated for every word or phrase recognizer that needs to inform lower-level modules that the pattern "e" is expected; it is one shared tree. For this reason, an average estimate of eight upstream and eight downstream signals per recognizer is quite reasonable, and even raising it would not change the final result much.

So, with 3 × 10^8 (300 million) recognition modules and 72 bytes of memory for each, the total amount of memory comes to about 2 × 10^10 (20 billion) bytes. This is a very modest value, possessed by ordinary modern computers.
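
Step by step, the byte count works out as follows:

```python
# The per-module byte count and the total, as estimated above.
bytes_per_contact = 4      # ~30 bits, rounded up to 4 bytes
upstream = 8 * bytes_per_contact      # eight upstream contacts -> 32 bytes
weights = 8 * 1                       # one byte of weight per input -> 8
downstream = 8 * bytes_per_contact    # eight downstream contacts -> 32
per_module = upstream + weights + downstream   # 72 bytes per module

modules = 3e8
print(f"{per_module} bytes/module, ~{per_module * modules:.0e} bytes total")
# 72 bytes/module, ~2e+10 bytes: about 20 GB.
```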

We performed all these calculations to get approximate estimates of the parameters. Given that digital circuits are about 10 million times faster than the neural networks of the biological cortex, we do not need to reproduce the brain's massive parallelism: quite moderate parallel processing (compared with the trillions-fold parallelism of the brain) will suffice. Thus the required computational parameters are entirely achievable. The brain's ability to rewire its neurons (remember that dendrites are constantly creating new synapses) can also be mimicked in software, since computer programs are far more plastic than biological systems, which, as we have seen, are impressive but have limits.

The redundancy the brain uses to obtain invariant results can certainly be reproduced in a computer version. The mathematical principles for optimizing such self-organizing hierarchical learning systems are quite clear. The brain's organization is far from optimal, but it does not need to be optimal: it needs only to be good enough to make it possible to create tools that compensate for its own limitations.

Another limitation of the neocortex is that it has no mechanism for eliminating, or even evaluating, conflicting data; this partly explains the very common illogicality of human reasoning. To address this problem we have a very weak ability called critical thinking, which people use far less often than they should. A computer neocortex could include a process that identifies conflicting data for subsequent revision.

It is important to note that building an entire brain region is easier than building a single neuron. As already mentioned, models at higher levels of a hierarchy are often simplified (there is an analogy with computers here). To understand how a transistor works, you need to understand the physics of semiconductor materials in detail, and the functions of a single real transistor are described by complex equations. Yet a digital circuit that multiplies two numbers, though it contains hundreds of transistors, can be modeled with one or two formulas, and an entire computer of billions of transistors can be modeled through a set of instructions and register descriptions on a few pages of text with a handful of formulas. Programs for operating systems, language compilers or assemblers are quite complex, but modeling a particular program (for example, a speech recognition program based on hierarchical hidden Markov models) also reduces to a few pages of formulas. Nowhere in such programs will you find a detailed description of the physical properties of semiconductors, or even of the computer's architecture.

A similar principle holds for brain modeling. A particular neocortical recognition module that detects certain invariant visual patterns (such as faces), performs audio frequency filtering (limiting the input signal to a certain range of frequencies), or estimates the temporal proximity of two events can be described in far fewer specific details than the actual physical and chemical interactions that control the functions of the neurotransmitters, ion channels and other elements of the neurons involved in transmitting a nerve impulse. While all of these details need to be carefully considered before moving to the next level of complexity, much can be simplified when modeling the operating principles of the brain.
