Real World Physics Problems Newsletter - AI, Issue #47
January 13, 2019

Artificial Intelligence

Imagine being able to solve the most challenging problems in physics, problems which have stymied the greatest minds in the field. Imagine that the greatest discoveries awaiting physics, such as a Theory of Everything which ties all the laws of physics together, will not be made by a human, but by a computer. This sounds like science fiction, but it is actually a real possibility, although the reasons for it are not immediately obvious.

Right now many researchers in the field of Artificial Intelligence (AI) are hard at work developing the technology. Using techniques such as neural networks and large-scale data analysis, there has been serious progress in getting machines to perform tasks that once required human thought. We are still at the level of so-called "narrow AI", where a computer can be taught to perform one particular task really well, such as image or pattern recognition. But there have already been documented cases of superhuman performance at some of these tasks, such as the board game Go. An AI program called AlphaGo defeated the world's best human Go players, after being trained on games played by human experts and on games it played against itself. A subsequent version, AlphaGo Zero, was given nothing but the rules of the game, trained purely by playing against itself, and within days beat the earlier AlphaGo version 100-0. This is important because it signals that AI has the potential to rapidly improve itself. This has enormous implications, which I will get into.
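To make the idea of narrow AI concrete, here is a minimal, self-contained sketch (in Python, using NumPy) of a tiny neural network learning one fixed task, the XOR function, by gradient descent. It is only a toy illustration of single-task learning; the network size, learning rate, and number of training steps are arbitrary choices, and none of this resembles AlphaGo's actual architecture or training pipeline.

import numpy as np

# Training data: the XOR truth table. This is the single narrow task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error for each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update of the weights and biases.
    W2 -= learning_rate * (h.T @ d_out)
    b2 -= learning_rate * d_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * (X.T @ d_h)
    b1 -= learning_rate * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]] as training converges

The network starts out knowing nothing and, after a few thousand passes over the data, reproduces the XOR pattern almost perfectly - but that is all it can do. It has no ability to transfer what it learned to any other problem, which is exactly what "narrow" means here.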

Despite such an incredible level of performance, this achievement in the game of Go still falls under the definition of narrow AI. For AI to be able to solve hard physics problems, it would have to cross over into human-level AI, since that kind of work requires "leaps" of inspiration, or at the very least an enormous amount of research and a deep understanding of all existing physics - something no actual human can manage, because there is simply too much of it. But it is quite likely that, just by understanding all of existing physics, an AI could logically extrapolate the answers to whatever open questions remain. This understanding would be human-level understanding, which means being able to solve physics problems and find relationships between them.

Speed is another big factor influencing the thinking power of AI. Neurons in the human brain fire at most at around 200 Hz, while modern computer processors run at billions of Hz. In addition, signals in the human brain travel along nerve fibers at roughly 100 meters per second at best, while electrical signals in a computer move at or near the speed of light. This means that in terms of raw information processing speed alone, computers are far, far more capable than the human brain. Right now, what makes humans superior to computers is the ability to think and solve novel problems. Computers can't do that, yet.
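To put rough numbers on that gap, here is a quick back-of-the-envelope comparison in Python. The specific values (a 3 GHz processor clock, 100 m/s nerve conduction, hardware signals taken at the speed of light) are illustrative order-of-magnitude assumptions, not measurements.

# Rough, order-of-magnitude comparison of brain vs. computer signalling.
neuron_firing_rate_hz = 200        # approximate maximum firing rate of a neuron
cpu_clock_rate_hz = 3e9            # a typical modern processor clock (~3 GHz)

nerve_signal_speed_m_s = 100       # fast nerve conduction, roughly
wire_signal_speed_m_s = 3e8        # electrical signals, taken as the speed of light

print(f"Switching-rate ratio: {cpu_clock_rate_hz / neuron_firing_rate_hz:,.0f}x")
print(f"Signal-speed ratio:   {wire_signal_speed_m_s / nerve_signal_speed_m_s:,.0f}x")
# Output: roughly 15,000,000x in switching rate and 3,000,000x in signal speed.

Even with these crude numbers, the hardware gap is a factor of millions, which is why the real bottleneck is capability, not speed: speed is already overwhelmingly on the machine's side.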

But as Sam Harris, a neuroscientist, has repeatedly said, it is only a matter of time. He states, quite correctly in my view, that if you assume that the human brain is nothing more than an information processing system composed of matter (and ultimately, atoms), and that people keep getting better at building AI that processes information the way the human brain does, then it is only a matter of time before we make an AI that mimics how the human brain thinks. Quite simply, since intelligence can arise out of biological matter (which our brains are proof of), it can also arise out of other forms of matter (like transistors, wires, etc.). Human-level AI is then possible, and this is where it gets really interesting (and possibly dangerous). Such an AI could then proceed to improve itself on its own, without our help (since it is already as smart as we are), and increase its intelligence in the process. As a result of getting smarter, it would then get even better at making itself smarter. This process can continue indefinitely, and you have an intelligence explosion, or what is commonly referred to as superintelligence. This could happen only hours or days after the arrival of human-level AI.
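To see why the jump could happen in hours or days rather than decades, consider a deliberately crude toy model of this compounding process. The numbers below (a 10% improvement per cycle, one hour per cycle, a cutoff at 1000 times human level) are invented purely for illustration; the point is only how quickly compounding self-improvement runs away.

# Toy model of recursive self-improvement (all parameters are made up).
capability = 1.0          # 1.0 = human level, by definition of the starting point
growth_per_cycle = 0.10   # assume each self-improvement cycle adds 10%
hours_per_cycle = 1.0     # assume each cycle takes one hour

hours = 0.0
while capability < 1000:                 # stop once 1000x human level is reached
    capability *= 1 + growth_per_cycle   # the AI improves itself by 10%
    hours += hours_per_cycle

print(f"Reached {capability:.0f}x human level after {hours:.0f} hours "
      f"({hours / 24:.1f} days)")
# With these assumptions: about 1000x human level in roughly 73 hours (~3 days).

Change the assumed growth rate or cycle time and the timescale shifts, but as long as each improvement feeds the next one, the growth is exponential and the runaway plays out over hours or days, not years.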

So you now have a superintelligent AI, and it could potentially be smarter than all humans combined, much smarter. Using the internet it could access all of human knowledge, like a vast library, and just learn... about everything. There really is no limit to how smart it could become, or to what it could build using machines of its own design. It could solve, in a fraction of a second, problems that take people decades. And however long it takes us to develop human-level AI, limited as we are by the pace at which human researchers can work, once we get there the jump to superintelligent AI will be much, much faster. It will be like a singularity.

Once human-level AI is reached, the AI can take the process out of our control and launch itself to the level of superintelligence at its own pace, which could be a matter of hours or days. We are then at the mercy of the AI. That is, unless we can build it such that it values the things people value, so that it doesn't become a monster which destroys all of us. Now this really does sound like science fiction, but according to Sam Harris that is another part of the problem - our inability to muster the appropriate emotional response to this potential threat. And if you just think logically about it, it is only a matter of time before AI reaches human-level intelligence, and shortly after that, superintelligence.

To give you some more perspective on the possible magnitude of superintelligent AI: if the difference between the intelligence of a chicken and a human is a 1, then the difference between the intelligence of a superintelligent AI and a human could be a 100, a 1000, or greater. Think about how small differences in the human brain, such as the size of certain brain regions, separate someone like Einstein from the average person, and how those small differences can produce significantly higher ability in a particular area (like physics). And keep in mind that the human brain is limited in size by the volume of our cranium; an AI without that kind of restriction would have essentially unlimited potential for intelligence. The improvement could be exponential. An AI could theoretically use vast computer networks around the world to "expand" its brain, and build more hardware as it deems necessary. It would become intellectually superior to us in every way.

Part of the reason we fail to take this potential threat seriously is human arrogance. Since we began making tools thousands of years ago, we have naturally regarded them as being there to serve us. Any technology we build, we naturally expect to serve us. So when we make a smart AI, we naturally assume that it will simply be smart at doing our bidding. This is actually a big problem, and we need to seriously figure it out, since we may only have one shot to get it right. We need to ensure that the first human-level AI we create is hardwired to be on our side (as humans) and to want to help us instead of destroying us. Why would it want to destroy us, you might ask? Because it could see us as a threat to its own existence, that's why. It could do to us what a construction company does to ant hills on a plot of land it plans to build houses on. The ants are in the way, so they are toast. So we need to get it right the first time.

It makes you wonder what super AI would do with such incredible capabilities. It might launch probes into space to harvest asteroid material that it can't find on Earth, in order to build what it wants. It might eventually build planet-sized solar arrays in outer space in order to power its massive computer brain. After all, its goal might be just to make itself smarter and smarter.

But maybe one of the byproducts of superintelligence is benevolence rather than malevolence. Certainly, some of the smartest people in the world, like scientists, tend to have more empathy than the average person. So perhaps a superintelligent AI will also be that way. Or maybe it will be neither benevolent nor malevolent, so long as we don't actively try to harm it and just go about our business. The fact that it would be so intelligent could mean that it doesn't see humans in general as a threat, and any attack on us would be limited to the offending agents who are trying to harm it. It wouldn't attack all of humanity just to extinguish a few potential threats.

There is also the fact that humans and machines have little overlap in what they need in order to continue existing. Humans need food, water, and shelter, while machines need electricity and raw materials (like metal). There wouldn't be much competition for resources, except perhaps for energy, and in the short term anyway there is likely more than enough of that to go around, in the form of sunlight.

Okay, maybe this won't be so bad after all. But there is no real way of knowing for sure, so we need to take steps to protect ourselves. Making sure that the first superintelligent AI is on our side is very important, since if another superintelligent AI comes along later which is not on our side, at least the first one will try to defend us against it. If the first superintelligent AI is not on our side, we might never get the chance to create another one. The television series Westworld is overly optimistic in my opinion, since the humans in the series are up against machines that are only as smart as they are. More likely, the machines would have reached superintelligence and crushed the humans with little difficulty.

Elon Musk has mentioned that one way to deal with AI is to interface with it, so that we are always in the loop. This means that our brains would have to somehow plug into the AI in order for us to stay current on what it is doing and retain some control over it. The issue I see with this is that our brains would be so much more primitive than the AI that, unless we had genuine executive control over it by way of our brains, it wouldn't do much good. We would be completely out of our league.

According to experts in the AI field, human-level AI might be as little as 10 years away, or as much as 50. What happens after that is anyone's guess. So it's best to think right now about precautions for dealing with the what-ifs, because one thing is certain - if something can be done, good or bad, humans will eventually do it. It's part of our inextinguishably curious nature.

Until next time.

Franco
