For those unfamiliar with deep learning, in its basic form it is the way computers use algorithms to digest and analyze the data provided to them in order to make decisions. It is the engine behind much of today's artificial intelligence, and there is no doubt that many people see this as the future of computing – for better or worse. But a number of people actually see AI becoming a thing of the past within a few years, as crazy as that might sound.
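For readers who want something concrete, here is a minimal sketch of that idea: a single artificial neuron adjusting its weights from example data until it can make a decision. Everything in it – the toy OR dataset, the learning rate, the number of passes – is an illustrative assumption, not a real deep learning system.

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1), read as a confidence."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: learn the logical OR of two inputs from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, learned from the data
lr = 0.5                   # learning rate (an arbitrary illustrative choice)

for _ in range(5000):
    for (x1, x2), target in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        err = pred - target        # how far the decision was from the answer
        w1 -= lr * err * x1        # nudge each weight to reduce the error
        w2 -= lr * err * x2
        b -= lr * err

for (x1, x2), target in data:
    decision = round(sigmoid(w1 * x1 + w2 * x2 + b))
    print((x1, x2), "->", decision)
```

Deep learning stacks many layers of such neurons, but the core loop – predict, measure the error, adjust – is the same "digesting data to form decisions" described above.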
One reason is the history of the field, which shows that roughly every ten years there is a shift in focus among both scientists and engineers – odd as it may seem, the field has been reinventing itself for more than fifty years. One measure of such a shift is the number of research papers devoted to a subject, a number that has been decreasing over the last few years. In fact, many of the AI routines in use today have been around for decades and are merely being refined and retested to fit a particular application. What the public sees as a product may not interest researchers and scientists very much.
There are several candidates that could replace AI. One is older technology research that was shelved in favor of approaches like AI but could be revived to set a new direction in future applications of technology. Most people know, but tend to forget, that research is driven mainly by interest, so yesterday's afterthought has the potential to become tomorrow's new gadget.
Another possibility, one increasingly discussed even among non-techies, is quantum computing. It is only beginning to be explored in the sense of being made available to the general public, and most of that focus has been on PCs able to harness its awesome power. There is a general consensus that quantum computing will drastically change the face of our everyday world, much as the smartphone changed cultures around the globe. But development is a slow process because the territory is largely uncharted: programs will first need a viable and stable operating system, followed by applications that can become a part of everyday life.
There is also an argument for developing new kinds of neural networks as an alternative to today's deep learning, and those arguments rest largely on two general areas: problems that are complex and problems that are chaotic. In layperson's terms, the greater the complexity of a problem, the greater the need for a solution beyond deep learning. There is little doubt that the very existence of computers has made the world a far more complex place, as can be seen in how countries struggle to deal with the many problems of AI and machine learning. If you want to make the case that complexity breeds more complexity, there is plenty of evidence to support the claim.
As for the chaos problem, it is obvious that cybersecurity and other constantly changing challenges to existing computer systems need to be addressed. This involves more than complexity: the focus is on unpredictability rather than statistical number crunching.
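The unpredictability point can be sketched with a classic toy example, the logistic map. This is an illustration chosen for this article, not something from it: two runs of a simple, fully deterministic rule, started with initial values that differ by one part in ten billion, soon stop resembling each other – which is why averaging and curve-fitting get you nowhere with chaotic systems.

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x), chaotic at r = 4."""
    return r * x * (1 - x)

def trajectory(x0, steps):
    """Iterate the map `steps` times starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_map(xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # differ by one part in ten billion

# Early on the two runs are indistinguishable; by step 50 the tiny initial
# difference has been amplified until the trajectories are unrelated.
print("step 10 gap:", abs(a[10] - b[10]))
print("step 50 gap:", abs(a[50] - b[50]))
```

No amount of statistical number crunching over past values predicts such a system far ahead; the only way to know its state is to run it, with perfect knowledge of the start.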
The underlying problem in all of this is computing power, not only in the physical machine and CPU but in the actual amount of energy required to run the programs. If you own a smartphone, you know how often you complain about battery life, and if you own a laptop, you know how much heat even a comparatively modest Intel i5 processor generates. Multiply this by a factor of 100 or even 1,000 and the problems begin to mount.
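That "multiply by 100 or 1,000" can be made concrete with a back-of-envelope calculation. All the figures below are illustrative assumptions – a rough 65 W power draw for a desktop-class i5 and an assumed $0.15 per kWh electricity price – not measurements from the article.

```python
# Back-of-envelope sketch of the power problem described above.
LAPTOP_CPU_WATTS = 65        # rough power draw assumed for an Intel i5
SCALE_FACTORS = [100, 1000]  # the "multiply by 100 or even 1,000" in the text
HOURS_PER_DAY = 24
KWH_PRICE_USD = 0.15         # assumed electricity price per kWh

for scale in SCALE_FACTORS:
    watts = LAPTOP_CPU_WATTS * scale
    kwh_per_day = watts / 1000 * HOURS_PER_DAY
    cost_per_day = kwh_per_day * KWH_PRICE_USD
    print(f"x{scale}: {watts / 1000:.1f} kW draw, "
          f"{kwh_per_day:.0f} kWh/day, ~${cost_per_day:.0f}/day in power")
```

Even at the low end of the range, a single workload draws kilowatts around the clock – before counting the cooling needed to carry that heat away, which is a power bill of its own.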
It seems inevitable that the onward march of technology will bring the frighteningly popular applications of AI to a halt and replace them with something else. Few people know what has been set aside in research labs in order to pursue the current AI technology. But this time around, the developers and politicians involved need to understand what they are getting into. Otherwise our greatest fears may turn out to be a mere whimper compared with what awaits humanity, and Terminator may come to seem like our best friend.