Artificial Intelligence – There Is A Lot More Than Meets The Eye

As you might imagine, crunching through enormous datasets to extract patterns requires a LOT of computer processing power. In the 1960s, researchers simply did not have machines powerful enough to accomplish it, which is why that boom failed. In the 1980s the computers were powerful enough, but researchers discovered that machines only learn effectively when the amount of data being fed into them is big enough, and they were not able to source large enough volumes of data to feed the machines.

Then came the internet. Not only did it solve the computing problem through the innovations of cloud computing – which essentially let us access as much processing power as we need at the touch of a button – but people on the internet are now generating more data each day than was created in the entire previous history of the planet. The volume of data being produced on a constant basis is absolutely mind-boggling.

What this means for machine learning is significant: we now have ample data to actually start training our machines. Think of the number of photos on Facebook and you start to realise why their facial recognition technology is so accurate. There is now no major barrier (that we currently know of) preventing A.I. from achieving its potential. We are only just starting to work out what we can do with it.

What happens when the computers start to think for themselves? There is a famous scene from the movie 2001: A Space Odyssey where Dave, the main character, is slowly disabling the artificial intelligence mainframe (called “Hal”) after the latter has malfunctioned and decided to attempt to kill all of the humans on the space station it was meant to be running. Hal, the A.I., protests Dave’s actions and eerily proclaims that it is afraid of dying.

This movie illustrates one of the big fears surrounding A.I. in general, namely what will happen once the computers begin to think for themselves rather than being controlled by humans. The fear is understandable: we are already using machine learning constructs called neural networks, whose structures are modelled on the neurons in the human brain. With neural nets, data is fed in and then processed through a vastly complex network of interconnected points that build connections between concepts, in much the same way as associative human memory does. This means that computers are slowly starting to build up a library of not just patterns, but also concepts, which ultimately leads to the basic foundations of understanding rather than just recognition.
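To make that idea concrete, here is a minimal sketch of a tiny neural network in plain Python and NumPy. Everything in it – the patterns, the layer sizes, the learning rate – is illustrative rather than anything Facebook or Google actually run; it just shows how weighted connections between "points" gradually strengthen until input patterns become associated with the right answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four toy 3-bit input patterns; the "correct answer" is simply the first bit.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [0], [1], [1]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weighted connections, initialised randomly.
w1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=(4, 1))

for _ in range(5000):
    # Forward pass: signals flow through the network of connections.
    hidden = sigmoid(X @ w1)
    output = sigmoid(hidden @ w2)

    # Backward pass: nudge each connection to reduce the error,
    # strengthening the links that associate pattern with answer.
    err_out = (y - output) * output * (1 - output)
    err_hid = (err_out @ w2.T) * hidden * (1 - hidden)
    w2 += hidden.T @ err_out * 0.5
    w1 += X.T @ err_hid * 0.5

print(output.round(2))  # converges towards [[0], [0], [1], [1]]
```

After a few thousand passes the network has not been told any rule; it has simply built up associations strong enough to reproduce the pattern, which is the seed of the "recognition versus understanding" distinction above.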

Imagine you are looking at a picture of somebody’s face. When you see the photo, several things occur in your mind: first, you recognise that it is a human face. Next, you might recognise that it is male or female, old or young, black or white, etc. You will also make a quick decision about whether you recognise the face, though sometimes the recognition requires deeper thinking, depending on how often you have come across this particular face (the experience of recognising a person but not knowing straight away from where). All of this happens almost instantly, and computers are already able to do all of it too, at almost the same speed. For example, Facebook can not only identify faces, but can also tell you who the face belongs to, if said person is also on Facebook. Google has technology that can identify the race, age and other characteristics of a person based on just a photo of their face. We have come a long way since the 1950s.
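The very first of those steps – a computer deciding "this is a human face" – can be sketched with the widely used OpenCV library. This is a minimal example, assuming the opencv-python package is installed; "photo.jpg" is a placeholder path, and the classic Haar-cascade detector shown here is far simpler than the deep networks Facebook or Google use, but the shape of the task is the same.

```python
import cv2

# Load OpenCV's bundled pre-trained frontal-face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")  # placeholder filename
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, width, height) box around a candidate face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"Found {len(faces)} face(s)")
cv2.imwrite("faces_marked.jpg", image)
```

Identifying *whose* face it is, or estimating age and other characteristics, layers further trained models on top of this detection step.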

But true A.I. – what is referred to as Artificial General Intelligence (AGI), where the machine is as advanced as a human brain – is still a long way off. Machines can recognise faces, but they still don’t really know what a face is. For example, you could look at a human face and infer a lot of things drawn from a hugely complicated mesh of different memories, learnings and feelings. You could look at a picture of a woman and guess that she is a mother, which might make you think that she is selfless, or indeed the exact opposite, depending on your own experiences of mothers and motherhood. A man might look at the same photo and find the woman attractive, which may lead him to make positive assumptions about her personality (confirmation bias again), or conversely notice that she resembles a crazy ex-girlfriend, which will irrationally make him feel negatively towards the woman. These richly varied but often illogical thoughts and experiences are what drive humans to the various behaviours – good and bad – that characterise our race. Desperation often leads to innovation, fear leads to aggression, and so on.

For computers to truly be dangerous, they would need some of these emotional compulsions, but this is a very rich, complex and multi-layered tapestry of different concepts that is very difficult to train a computer on, no matter how advanced neural networks may be. We will get there some day, but there is plenty of time to make sure that when computers do achieve AGI, we will still be able to switch them off if required.

In the meantime, the advances being made are finding more and more useful applications in the human world: driverless cars, instant translations, A.I. mobile phone assistants, websites that design themselves! All of these advancements are intended to make our lives better, and so we should not be afraid of, but rather excited about, our artificially intelligent future.