History repeats itself every time we stand before a revolution: one of those radical changes that make us uncomfortable, as radical changes always do. But to start a race against artificial intelligence is to lose before the starting gun fires.
Humans have always had to work to subsist. Since prehistory, mankind has had to devote effort to hunting, gathering, protecting, educating and guiding. All these activities were closely interrelated and enabled our ancestors to thrive even in unfavorable conditions, such as drought and scarcity.
Fast-forward thousands of years to the present day, and these forms of “work” are still valid if we make one “small” update: replace “hunt” and “gather” with “produce” and “buy.”
With such an ancient association between work and survival, it is natural that concern arises when we hear and read across different media how Artificial Intelligence (AI) could snatch jobs from people in many sectors.
A well-founded fear, if we turn to history. But an unfounded one, according to that same history.
A Journey Through Time
When the agricultural revolution began to take shape some 12 thousand years ago, the jobs most in demand among the humans of that time came under threat. Hunters and gatherers were less needed as settlements proliferated and families obtained food directly from farming and livestock raising. Can we say that “agriculture” put an end to jobs? Of course not; it simply demanded that humanity learn new skills to continue its progress.
When the industrial revolution reached its pinnacle in the nineteenth century, several kinds of jobs were rendered obsolete by industries that began using machinery to make production more efficient. Textile weavers, glassblowers and, soon after, farmers watched industry relegate them to unemployed artisans. Seed drills, threshing machines and mechanical reapers required a single operator in place of hundreds of workers. Can we say that industrial machines put an end to jobs? Of course not: industry generated such enormous demand for workers that many cities grew as residential centers just as fast as they grew as industrial centers. Humanity had to accommodate this growth, especially in working conditions, and learn new jobs to continue its progress.
Surely the hunters who could do nothing else at the dawn of agriculture felt as displaced as the British Luddites who, in the nineteenth century, devoted themselves to destroying textile machinery out of protest and frustration.
In the twentieth century, the media kept the threat of human obsolescence on their covers: mass production, electronics, computing and robotics successively filled headlines predicting that unemployment would become a major scourge. Can we say that these advances put an end to jobs? Of course not, though it is true that technical and college degrees are now considered basic credentials.
In the twenty-first century we are living through, the alleged job thief is called Artificial Intelligence. Multiple media outlets pronounce professions dead: lawyers, analysts, accountants, even general practitioners. This is not only because the algorithms behind automated systems have reached an impressive level of sophistication, but above all because these systems now have processing power that rivals the human brain, and because they can keep learning on their own, improving their own algorithms and optimizing their code from experience, without needing a human.
Learning To Learn
Regarding processing capabilities, Masayoshi Son, CEO of SoftBank, and Ray Kurzweil, Director of Engineering at Google, believe that the technological singularity, the irreversible point at which machines surpass humans in processing power, will occur within the next 30 years.
Regarding the ability of machines to learn automatically, we can say with certainty that it is already happening.
Even a system as seemingly trivial as Netflix shows this: the algorithm behind its recommendations is constantly “learning” from millions of users. The system detects patterns of similar behavior and classifies profiles into hundreds of categories at a time to achieve a high degree of accuracy. When viewers indicate that they dislike a recommended movie, it is not a failure of the system; it is one more data point to learn from, without human intervention.
Netflix’s use of machine learning for recommendations resembles Amazon’s classic “Customers who bought this also bought” and services like Spotify that specialize in helping users discover new songs and artists. But there are other kinds of machine learning in our technological day-to-day: e-mail anti-spam filters, for example, must continually recognize new spamming techniques, solving a classification problem similar to what Facebook does to recognize faces in photos. The Google search engine must solve a ranking problem to build result lists that make the most sense in the context of a specific, perhaps unique, query.
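To give a flavor of the pattern detection behind “Customers who bought this also bought,” here is a minimal, hypothetical sketch in Python that recommends items by counting how often they are purchased together. The data and function names are invented for illustration; real systems like Amazon’s operate at vastly larger scale and with far more sophisticated models:

```python
from collections import Counter
from itertools import combinations

# Toy purchase histories (invented data for illustration).
baskets = [
    {"keyboard", "mouse", "monitor"},
    {"keyboard", "mouse"},
    {"keyboard", "usb_hub"},
    {"monitor", "hdmi_cable"},
    {"mouse", "mousepad", "keyboard"},
]

# Count how often each pair of items appears in the same basket.
co_occurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1

def also_bought(item, top_n=3):
    """Rank the items most often bought together with `item`."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(also_bought("keyboard"))  # "mouse" ranks first: bought together 3 times
```

Every new basket updates the counts, so the recommendations “learn” from behavior without anyone reprogramming the system.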
While these types of machine learning are still relatively basic, more complex systems already exist, such as autonomous cars, which have generated a huge database of real situations that serves as the basis for an even larger set of potential situations on roads and highways.
Another complex example of machine learning is so-called Deep Learning, based on neural network models –where processing is distributed across several “layers” whose outputs also feed into the next cycle– with roots going back to 1965. It has been used for voice recognition, face and expression recognition, drug discovery and, in a recent milestone for Artificial Intelligence, to help a machine win at the Chinese board game Go.
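The idea of “layers” can be sketched in a few lines of Python with NumPy: each layer takes the previous layer’s output, applies a weighted sum and a nonlinearity, and passes the result on. This is only a toy forward pass with random weights; a real network would adjust those weights through training:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer: weighted sum of inputs followed by a nonlinearity (ReLU)."""
    return np.maximum(0.0, inputs @ weights + biases)

# A tiny network: 4 inputs -> 8 hidden units -> 8 hidden units -> 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))   # one example with 4 features
h1 = layer(x, w1, b1)         # the first layer's output...
h2 = layer(h1, w2, b2)        # ...feeds the second layer...
output = h2 @ w3 + b3         # ...which feeds the final prediction
print(output.shape)           # (1, 2)
```

Deep networks simply stack many more of these layers, and training consists of nudging every weight so the final output moves closer to the desired answer.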
Ready. Set. Go.
The supercomputer is called AlphaGo and is part of Google’s DeepMind project. The word “deep” in connection with computers brings to mind the chess match between Garry Kasparov and Deep Blue, the IBM computer that managed to defeat the chess master, the first time a computer beat a reigning world champion. But when that happened in 1997, things were very different. Deep Blue won by “brute force,” that is, by processing hundreds of thousands of positions per second and evaluating them against Kasparov’s moves. In Go, the number of possible positions exceeds the number of atoms in the observable universe, so a machine must rely on learned “strategies” to win.
When AlphaGo defeated the European Go champion in January 2016, the scientific community understood the importance of the milestone. But when, in March of the same year, AlphaGo won 4 games to 1 against South Korea’s Lee Sedol, a world-renowned master of the game, the astonishment and stupefaction were total.
Surely more than a few also felt a chill run down their spines.
AlphaGo learned to play from a programmed algorithm, but it learned to win and to design strategies through countless games against itself.
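The flavor of learning by self-play can be shown with a toy example, far simpler than Go and entirely separate from AlphaGo’s actual method (which combines deep networks with tree search). Here an agent learns the game of Nim, where players alternately take 1 to 3 stones from a pile and whoever takes the last stone wins, purely by sampling moves and updating its own value estimates against itself:

```python
import random

random.seed(42)
TAKE = (1, 2, 3)
MAX_PILE = 10

# Q[pile][take] estimates how good taking `take` stones from `pile` is
# for the player about to move. Nobody tells the agent the winning rule.
Q = {p: {t: 0.0 for t in TAKE if t <= p} for p in range(1, MAX_PILE + 1)}

def best(pile):
    """The move the agent currently believes is strongest."""
    return max(Q[pile], key=Q[pile].get)

for _ in range(50_000):                    # self-play training updates
    pile = random.randint(1, MAX_PILE)
    take = random.choice(list(Q[pile]))    # explore a random move
    if take == pile:
        target = 1.0                       # taking the last stone wins
    else:
        # The opponent (a copy of ourselves) moves next: our value is
        # the negation of the opponent's best value (negamax).
        target = -max(Q[pile - take].values())
    Q[pile][take] += 0.5 * (target - Q[pile][take])

# After training, the agent leaves a multiple of 4 stones whenever it can,
# which is the known optimal strategy, discovered without being told.
print(best(5), best(6), best(7))  # 1 2 3
```

The point is the same one AlphaGo illustrates at scale: the rules were programmed, but the strategy emerged from the system playing against itself.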
Until now, humanity has always held a certain ascendancy over machines. Since the Antikythera mechanism, machines have been used to perform tasks beyond the reach of the most intelligent humans or the largest battalions. We consider this normal because it falls within the category of “tools”: humans design them to facilitate jobs that humans define. A robotic arm on an automobile assembly line can perform feats of force and precision that no human could, but every movement, the synchronization between movements, and the millisecond-precise time log of each one is programmed behavior coded by a human. Even its shape and materials are the work of human engineers and technicians.
But we are reaching the point where these tools can design and perfect themselves. They can invent solutions no human has thought of before. They can even build their own designs. So it is not unreasonable to think that a time will come when we no longer know how machines do what they do and must resort to reverse engineering to find out. An unsettling idea.
History Of A Future
Can we say that the machines will put an end to jobs?
Of course not. We will need every pair of hands we have to defend humanity from intelligent machines.
That’s a joke, of course. We are not on the verge of a machine apocalypse, or at least not the one presented in films with terminators and Skynet. The great transformation will be in the way we solve problems and conflicts: how we generate scientific advances, how we settle disputes among citizens, how we conduct foreign policy among nations. Unlike humans, machines have no ego to nurture, and it is very unlikely they would decide to design one because, as every evil caused by man has shown, the ego is not an optimum but quite the opposite.
Will today’s workers be replaced, as the hunters were by the farmers and the farmers by the tractors and reapers? Many of us will. Doctors who only diagnose diseases, lawyers who only process paperwork, accountants who only balance the books, teachers who only deliver lectures, officials who only follow bureaucratic procedure, engineers who only perform calculations: they will most likely face the same fate as the hunter, losing a race before it starts.
There are two paths before us, then. We can resist the machines like the nineteenth-century Luddites, by making artificial intelligence entities pay taxes for “snatched” jobs, for example, or we can decide to adapt, with the great advantage of knowing history and its repeated pattern. A hundred years ago there was no technology industry; today that industry represents 20% of world GDP. In all this time, electricians gave way to electronics technicians, who in turn embraced systems programmers, who welcomed the engineers who today lay the foundations of Artificial Intelligence. It is our responsibility, as a society, to prepare ourselves and continue the progress.
Let us think for a moment, in an exercise in futurology, about how machines will influence our jobs and societies:
- In 5 years, the combination of Deep Learning and quantum computing begins to yield great discoveries and advances: insights into the human genome, discovery of galaxies with potential for life, molecular-level chemical reactions to cure bacterial, fungal and viral diseases, results in protein-folding studies that lead to in-vivo DNA-modification techniques, automated medical diagnoses personalized to each genetic profile, automated legal consulting, and personal assistants capable of automating aspects of our daily lives, such as buying groceries and supplies when needed or managing the family budget directly from a bank account enabled for that purpose. The same machines begin to propose plans for the exploitation and distribution of resources to ensure sustainable supply and trade. However, several legal problems arise, along with rising unemployment rates.
- In 10 years, corporations base a growing share of their repetitive work on centrally monitored and synchronized machinery. The same system can detect opportunities for optimization and productivity improvements, as well as flag risks. This occurs in industries such as agriculture, manufacturing, production and mining, eliminating a large number of jobs previously held by humans but creating sustainable wealth that allows companies to reinvest in social-responsibility initiatives, also proposed by artificial intelligence, aimed at integrating community and corporations. Governments leverage complex models for administering pension funds and health plans that benefit the majority of the population. Social conflict reaches its peak, but begins to decline thanks to substantial state subsidies, funded by fresh revenue from taxes on the growing profits companies owe to artificial intelligence.
- In 20 years, a large percentage of chronic diseases have been cured thanks to machine learning. Genetic modification modeled by artificial intelligence is a reality, both at the embryonic stage and for adults. Global life expectancy averages 90 years, while developed countries have crossed the 100-year line, improving quality of life in the final moments before death. Society has adapted to and integrated artificial intelligence as an important part of daily living. Subscription services, embedded in state programs, offer medical consultations over the Internet, and practically all important legal procedures are handled remotely. Scientific discoveries outpace the technological capacity to apply them, so states increasingly encourage the training of scientists and technicians who can put that knowledge into practice. Many aspects of life are personalized, such as education, vocational guidance and the acquisition of goods; this triggers major identity crises in many individuals.
- In 40 years, humans have established colonies on Mars and discovered a huge number of sites with potential for life, both in the Solar System and beyond. Artificial intelligence has made it possible to delegate high-risk activities and decision-making on distant worlds, where contact with Earth takes several minutes. Spaceships are highly efficient, built to seemingly impossible designs created by artificial intelligence. Personal assistants have evolved from software on computers and smart devices into humanoids capable of running everyday tasks and integrating with humans through biological implants. The population living below the poverty line has been drastically reduced, and some countries have adopted artificial intelligence models to calculate taxes and ensure a better distribution of wealth.
Reaching The Finish Line, Returning To The Beginning
I began this piece by saying that humans have always needed to work to subsist, but that is a half-truth. In the beginning, human beings worked for the subsistence of their families and communities. Eventually, we stopped doing so in order to work for money, for the enrichment of businesses and corporations. Indirectly, of course, we still do it for our subsistence, but perhaps the fear we project onto machines, as we have projected it so many times before, is unnecessary.
Perhaps we can trust that machines, if kept free from corporate control, will be the impartial entity we fail to find in ego-driven humans. Instead of fearing machines for things we do not understand, we might take advantage of them in a more integrated, impartial and fair society, while we worry, as before, about the livelihood of our families and communities, understanding what their needs are and how to meet them.
If we compete against machines in data-processing capacity, in strength or in precision, we are lost; but if we find a way to be more human with the help of machines, we all win the race.