Differentiate between Artificial Intelligence and Human Intelligence.

AI is designed to create machines that can imitate human behavior and perform human-like actions, whereas human intelligence adapts to new environments by using a combination of various cognitive processes. Today, AI generally refers to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention.”


Artificial intelligence (AI) is the ability of a digital computer or a computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.

What is intelligence?


All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence.

What is the difference?

Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of Sphex—must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

                                                 


Learning and Examples of AI:

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution.
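
As a minimal sketch of this idea (the position labels, the move list and the is_mate check below are hypothetical stand-ins for a real chess engine), trial and error plus a lookup table is enough to show the rote-learning behaviour:

```python
import random

# Toy stand-ins: in a real program these would come from a chess engine.
# Positions are just labels, and MATING_MOVE is the hidden answer key that
# the trial-and-error search has to discover.
LEGAL_MOVES = ["Qh5#", "Nf7", "Bc4", "Rd8#", "Ke2"]
MATING_MOVE = {"position-A": "Qh5#", "position-B": "Rd8#"}

rote_memory = {}  # position -> previously discovered solution

def is_mate(position, move):
    return MATING_MOVE.get(position) == move

def solve_mate_in_one(position):
    # Rote learning: if this exact position has been seen before, recall the answer.
    if position in rote_memory:
        return rote_memory[position]
    # Otherwise, trial and error: try moves at random until mate is found.
    while True:
        move = random.choice(LEGAL_MOVES)
        if is_mate(position, move):
            rote_memory[position] = move  # store the solution with the position
            return move

print(solve_mate_in_one("position-A"))  # found by trial and error
print(solve_mate_in_one("position-A"))  # recalled instantly from rote memory
```

The second call returns immediately because the position is already stored, which is all rote learning amounts to: recall, without any understanding of why the move works.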

This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously had been presented with jumped, whereas a program that is able to generalize can learn the “add ed” rule and so form the past tense of jump based on experience with similar verbs.
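
The contrast can be sketched in a few lines. In this illustration the “add ed” rule is hard-coded rather than actually induced from data, and the verb lists are made up for the example:

```python
# Rote learner: only knows the forms it has been shown explicitly.
rote_past = {"walk": "walked", "play": "played", "climb": "climbed"}

def rote_past_tense(verb):
    return rote_past.get(verb)  # None for anything unseen, e.g. "jump"

# Generalizing learner: applies the "add ed" rule extracted from
# examples of similar regular verbs to any new regular verb.
def generalized_past_tense(verb):
    return verb + "ed"

print(rote_past_tense("jump"))         # None - "jumped" was never presented
print(generalized_past_tense("jump"))  # "jumped" - formed by the learned rule
```

A real learner would still have to discover the rule from the examples, but the point stands: the rote table fails on anything it has not seen, while the rule extends to every new regular verb.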

Learn from a Story:

One approach is to compare the two hemispheres of our brain to a deep learning engine for the instinctive part and a traditional (procedural) computing engine for the reasoning part. Such a hypothesis was even popularized in Dan Brown’s novel “Origin” back in 2017.

At the same time as I was reading this book, my life experience was building a strong belief about AI: our learning process, and in turn how we react to unknown situations, is very close to the behaviour of a deep-learning-powered computer program, and not necessarily the smartest one.

Learning to fly is quite similar to learning to drive. You repeat actions until they become a natural behaviour, a reflex. You repeat the pattern again and again and again. Take-off, 300 ft flaps up, pump off, 500 ft turn in climb at climb speed, 1,000 ft reduce to pattern-hold speed, enter downwind, pump on, flaps down, landing lights on, turn base and reduce speed for a 400 ft/min negative vario (a smooth descent rate), turn final, full flaps, final approach speed.

What you don’t realise while learning this is what’s happening inside your brain. Each action you perform, each view of your instrument panel, each view of the runway is an input to your cognitive system. You learn sequences of actions which are themselves triggered by a succession of events. But wait, there is more to learn.

Later you might have the chance to fly a high-performance plane with a retractable gear and a constant speed propeller. Things get a bit more complex. Take-off, positive climb checked, gear up, flaps up, pump off, reduce throttle, reduce prop speed, turn in climb, 1,000 ft reduce throttle to cruise pressure, set prop speed to cruise, turn downwind, pump on, reduce throttle, set propeller speed to go-around, wait for flaps speed, flaps down, set throttle to pattern-hold pressure, check speed, gear down, check gear down (all three lights green), landing lights on, open cowl flaps if equipped to limit engine temperature in case of a go-around, turn base, reduce throttle, turn final, check propeller speed, throttle, pump, gear, lights…

OK, that looks pretty obvious. After all, the plane has a retractable gear, so you just have to drop it before landing if you aren’t stupid. After all, the plane has a constant speed propeller, so you should set a small pitch (like first gear on a car) so that you have plenty of power available in case of a problem requiring you to cancel the landing and climb back to pattern altitude. After all, the plane has flaps to reduce the approach speed, so you should set them for a proper landing where you won’t need more than the runway length because of excess speed. After all, the plane has landing lights to be seen (among other things), and they should be on so that planes waiting to enter the runway can see you better.


Pretty obvious, isn’t it? It is not rocket science. There aren’t that many systems, and they all have a clearly defined function. Obviously, who would not understand that the landing gear is for… landing?

In fact, nothing is obvious here. Something is totally wrong in this reasoning. And guess what? A condition is missing. And this condition is written above.

A few lines above, the condition is given: you just have to drop the gear if you’re not stupid.

I’m sorry to have to say that, but the problem is the following: you are stupid. At least I am. Just not in all situations.

Flying or driving is not reasoning, just pure learning. Our brain is much too slow to allow reasoning in high-workload situations. All the preparatory actions occur downwind, when the plane is flying parallel to the runway before turning twice and descending to align with it. They are usually executed while the plane is beside the runway, so over a distance of about 1,000 metres (3,000 feet). At 85 mph that is a bit more than 25 seconds. There are 7 actions to execute and to monitor, so 14 individual actions or checks. That leaves barely 2 seconds per action.
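
A quick back-of-the-envelope check of that time budget, using the figures quoted above (only the unit conversion is added here):

```python
# Downwind time budget, with the figures from the text.
leg_length_m = 1000                    # distance flown beside the runway
speed_mph = 85                         # pattern-hold speed
speed_ms = speed_mph * 1609.34 / 3600  # mph -> metres per second, about 38 m/s

leg_seconds = leg_length_m / speed_ms    # roughly 26 seconds alongside the runway
actions_and_checks = 7 * 2               # 7 actions, each executed and monitored
print(leg_seconds / actions_and_checks)  # just under 2 seconds per action or check
```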


How long does it take your brain to build and execute an algorithm, to recall an item from memory, and to make sure it is in line with what you have learned? Much more than 2 seconds. The time available is far shorter than what the conscious, reasoning part of our brain can manage. The only solution is to use the deep learning part of it, with the same strengths and weaknesses as any deep learning algorithm.

As a matter of fact, any good flight instructor is able to put any pilot into a situation that will make his learning algorithm fail, resulting in a gear-up landing. There is even a popular saying that there are only two kinds of pilots flying retractable planes: those who have had a gear-up landing, and those who will.

Let’s repeat the downwind section of the pattern with such an instructor:

  • Instructor: pull down one notch of flaps
  • Pilot (thinking): I should slow down the plane first, why is he telling me to drop the flaps? I’m too fast, let’s check. Yes, I’m too fast, let’s slow down the plane
  • Instructor (yelling): Pump on before reduction
  • Pilot (thinking): ok, easy, pump on, and let’s anticipate and execute the other actions
  • Instructor: Plane at 2 o’clock, converging!
  • Pilot (looking around, thinking): no plane…
  • Instructor: that was wrong information from the tower
  • Pilot: …
  • Instructor: low cloud in front of you, larger plane wanting to line up on the runway, shorten the pattern, hurry up! Don’t forget the landing lights, they must see us!
  • Pilot: …
  • Instructor: Turn base, turn base, reduce throttle, turn at 500ft, watch your vario, check fuel pressure, needle is wrong
  • Pilot (thinking): ok landing light is on, they will see us
  • Instructor: Turn final, watch the plane, watch fuel pressure, signal your position, watch ultralight heading 370, might overshoot his final turn
  • Pilot: … ok speed controlled, engine fine, pressure fine
  • Instructor: signal your position!
  • Pilot: Killfields tower from F-DEEP, final 23
  • Instructor (with a smile): watch your flare, you will need a perfect one
  • Pilot: ???
  • Instructor: Killfields tower from F-DEEP, go-around 23 following a gear-up landing exercise
  • Pilot: ???
  • Instructor: you just crashed the plane, didn’t you hear the gear-up alert signal?     

Here is what just happened: the instructor generated events that forced the pilot to interrupt his sequence of actions, and he also overloaded the pilot with information and instructions. The interruptions broke the automatic, unconscious, deep-learning reaction, because the chain of events had no similarity to the ones learned, breaking the search path to the reference case stored in the brain. The overload prevented the brain from switching to a reasoning mode, forcing the use of the learned pattern… which did not work, for the reason just mentioned.

This is called a mental tunnel. The actions described here are slightly different from reality, but the principle is always the same. One might say that in real life there is no instructor giving false information. That might be true, but reality can also be worse, with wrong information coming from the tower (anybody can fail), from the instruments, or both. I have personally experienced such a mental tunnel, where only an external event could reboot my brain.

Situations of mental tunnel are well known in aviation and are often part of the chain of causes that leads to major crashes. As a computer scientist, this kind of situation made me understand how stupid we can become, and how close our learning mechanisms are to deep learning systems. Any false or unknown input can even put us in a loop where the brain has no solution anymore. The only way to exit such a situation is a “reset” from outside help. Our brain can totally fail to find a solution, just like a deep learning algorithm placed in a situation where it cannot converge.
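
The analogy with an algorithm that cannot converge can be made concrete with a toy example of my own, unrelated to aviation: a single perceptron asked to learn XOR keeps looping without ever finding a consistent answer, because no straight line separates the two classes.

```python
# A single perceptron trained on XOR: the data are not linearly separable,
# so the update loop keeps cycling and never reaches a state with zero errors.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table
w, b = [0.0, 0.0], 0.0

for epoch in range(1000):
    errors = 0
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        if err:
            errors += 1
            w = [w[0] + 0.1 * err * x1, w[1] + 0.1 * err * x2]
            b += 0.1 * err
    if errors == 0:
        print("converged at epoch", epoch)
        break
else:
    print("never converged:", errors, "errors still left after 1000 epochs")
```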

Deep learning stacks up multiple layers of learning algorithms that are purely based on data. There is no reasoning such as “the plane needs to have the gear down for landing, so I should drop the gear”. It works more like “I am in downwind, my speed is 85 mph, I have done this and that, I drop the gear, I land without crashing: this is fine”, or “I am in downwind, my speed is 85 mph, I have done this and that, I do not drop the gear, I crash the plane: this is not fine”. The several layers work on different levels of information and produce series of micro-predictions in the form of a success percentage. Just like humans, insufficient training will lead to a bad decision, and just like humans, proper training will lead to a good decision. But unlike a pilot, a deep learning system can be trained on a huge set of data representing not only the situations one pilot may experience in his or her life, but the situations of many pilots and instructors. Unlike humans, deep learning is able to learn from others’ mistakes.
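
As a rough illustration of this outcome-driven way of learning, here is a sketch that uses a single linear unit instead of a true multi-layer network, trained on a handful of made-up flight states; the features and labels are purely illustrative, not real flight data:

```python
# Each state is (in_downwind, speed_ok, gear_down); the label records whether
# the landing ended well. A tiny perceptron learns the association purely from
# these outcomes, with no symbolic rule "the gear must be down to land".
examples = [
    ((1, 1, 1), 1),  # checks done, gear down  -> good landing: "this is fine"
    ((1, 1, 0), 0),  # gear forgotten          -> gear-up landing: "not fine"
    ((1, 0, 1), 1),
    ((1, 0, 0), 0),
]

weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.1

def predict(state):
    s = bias + sum(w * x for w, x in zip(weights, state))
    return 1 if s > 0 else 0

# Training adjusts the weights whenever a recorded outcome is mispredicted.
for _ in range(50):
    for state, outcome in examples:
        error = outcome - predict(state)
        bias += rate * error
        weights = [w + rate * error * x for w, x in zip(weights, state)]

print(predict((1, 1, 0)))  # 0 -> predicted crash: the gear is still up
print(predict((1, 1, 1)))  # 1 -> predicted good landing
```

Nothing in this code says that the gear has to be down in order to land; the association emerges only from the recorded outcomes, which is exactly the point made above.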


Methods and goals in AI:

AI research follows two distinct, and to some extent competing, methods: the symbolic (or “top-down”) approach and the connectionist (or “bottom-up”) approach. The top-down approach seeks to replicate intelligence by analyzing cognition independently of the biological structure of the brain, in terms of the processing of symbols (whence the symbolic label). The bottom-up approach, on the other hand, involves creating artificial neural networks in imitation of the brain’s structure (whence the connectionist label).


To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by “tuning” the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.) In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions. Simply put, neural activities are the basis of the bottom-up approach, while symbolic descriptions are the basis of the top-down approach.
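
The contrast can be sketched with tiny 3×3 “letters”. The bitmaps, the geometric rules and the miniature network below are illustrative toys, not a real character recognizer:

```python
# 3x3 bitmaps for two letters (1 = ink, 0 = blank), flattened row by row.
LETTERS = {
    "T": (1, 1, 1,
          0, 1, 0,
          0, 1, 0),
    "L": (1, 0, 0,
          1, 0, 0,
          1, 1, 1),
}

# Top-down (symbolic): compare the input with geometric descriptions.
def recognize_top_down(bitmap):
    has_top_bar = bitmap[0:3] == (1, 1, 1)
    has_bottom_bar = bitmap[6:9] == (1, 1, 1)
    if has_top_bar and not has_bottom_bar:
        return "T"
    if has_bottom_bar and not has_top_bar:
        return "L"
    return None

# Bottom-up (connectionist): one linear unit per letter, tuned on examples.
weights = {name: [0.0] * 9 for name in LETTERS}

def score(name, bitmap):
    return sum(w * x for w, x in zip(weights[name], bitmap))

for _ in range(20):  # "tuning": present the letters repeatedly
    for target, bitmap in LETTERS.items():
        for name in LETTERS:
            desired = 1.0 if name == target else 0.0
            error = desired - score(name, bitmap)
            weights[name] = [w + 0.1 * error * x
                             for w, x in zip(weights[name], bitmap)]

def recognize_bottom_up(bitmap):
    return max(LETTERS, key=lambda name: score(name, bitmap))

print(recognize_top_down(LETTERS["T"]), recognize_bottom_up(LETTERS["T"]))  # T T
print(recognize_top_down(LETTERS["L"]), recognize_bottom_up(LETTERS["L"]))  # L L
```

The top-down recognizer works from explicit descriptions (a bar across the top means T, a bar across the bottom means L), while the bottom-up recognizer knows nothing about bars: it only adjusts its weights until each letter excites the right unit.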


In The Fundamentals of Learning (1932), Edward Thorndike, a psychologist at Columbia University, New York City, first suggested that human learning consists of some unknown property of connections between neurons in the brain. In The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University, Montreal, Canada, suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections. The notion of weighted connections is described in a later section, Connectionism.
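
Hebb’s suggestion boils down to a very simple update rule, sketched here with purely illustrative numbers: the weight of a connection grows whenever the two neurons it joins fire together.

```python
learning_rate = 0.5
weight = 0.0  # strength of the connection between neuron A and neuron B

# Observed activity: (A fired, B fired), with 1 = fired and 0 = silent.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for a_fired, b_fired in activity:
    # Hebbian update: the weight increases only when both neurons fire together.
    weight += learning_rate * a_fired * b_fired

print(weight)  # 1.5 -> three co-activations have strengthened the connection
```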
