Technology Update: Artificial Intelligence and the Fourth Industrial Revolution

[Image: Artificial Intelligence Concept Art]

Automation is an increasingly common feature of everyday life. Self-service airport check-in, self-service checkouts, and even checkout-less stores offer speed and convenience, and consumers are growing comfortable with these interactions throughout their daily journeys. As we look to the future, the implications of increased automation – enabled by artificial intelligence (AI) – could be far-reaching.

Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, which has been at the center of global affairs for over four decades, is convinced that we are at the beginning of a revolution that is fundamentally changing the way we live, work, and relate to one another: The Fourth Industrial Revolution.

In May 2018, the Future of Humanity Institute at Oxford University released a study claiming that AI will outperform humans in many activities within the next ten years, such as translating languages and driving a truck. The researchers predict that by 2049, AI will produce a best-selling book, and by 2053 it will be more effective than a human at performing surgery. They believe there is a 50 percent chance of AI outperforming humans in all tasks within 45 years, and of AI automating all human jobs within 120 years. They state that “advances in AI will have massive social consequences. AI will transform modern life by reshaping transportation, health, science, finance, and the military.”

So, what is AI, and how are these advances possible? Broadly, AI is a branch of computer science that aims to create intelligent machines that work and react like humans. The activities computers with artificial intelligence are designed for include speech recognition, learning, planning, and problem-solving.

When developing AI, a computer is given a vast quantity of real data and allowed to learn. The highly specialized programming lies in how the learning software is designed; its data processing is based loosely on structures in the brain, an approach known as “deep learning.” And of course, data is increasingly available through the likes of Google, Facebook, and Amazon.
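The idea of “giving a computer data and letting it learn” can be illustrated with a toy example. The sketch below (illustrative only, not drawn from the article) trains a single artificial neuron to discover a hidden rule from examples; deep learning scales this same principle up to networks of millions of such units.

```python
# A single artificial neuron learns a hidden rule (y = 2 * x) from examples.
# This is the core loop that deep learning scales up: predict, measure the
# error, and nudge the model's parameters to reduce it (gradient descent).

# Training data: input/output pairs where the hidden rule is y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

weight = 0.0          # the neuron starts out knowing nothing
learning_rate = 0.01  # how big each correction step is

for epoch in range(1000):
    for x, target in data:
        prediction = weight * x
        error = prediction - target
        # Nudge the weight in the direction that reduces squared error.
        weight -= learning_rate * error * x

print(round(weight, 2))  # the neuron has recovered the rule: ~2.0
```

No one writes the rule “multiply by 2” into the program; the weight converges to it purely from the examples, which is what distinguishes learned behavior from explicitly programmed behavior.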

Leo Benedictus, writing in The Guardian newspaper, explains machine learning by observing AI company DeepMind’s computer learning to play the Atari game Breakout:

“DeepMind’s Breakout player knew nothing [about Breakout]… It was not programmed with instructions on how the game works; it wasn’t even told how to use the controls. All it had was the image on the screen and the command to try to get as many points as possible.

“At first, the paddle lets the ball drop into oblivion, knowing no better. Eventually, just mucking about, it knocks the ball back, destroys a brick and gets a point, so it recognizes this and does it more often. After two hours’ practice, or about 300 games, it has become seriously good, better than you or I will ever be. Then, after about 600 games, things get spooky. The algorithm starts aiming at the same spot, over and over, in order to burrow through the bricks into the space behind. Once there, as any Breakout player knows, the ball will bounce around for a while, gathering free points. It’s a good strategy that the computer came up with on its own.”1
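The trial-and-error learning Benedictus describes can be sketched in miniature. The toy below (an illustrative sketch, not DeepMind’s actual code) shows an agent that tries actions, observes the points it earns, and gradually settles on the highest-scoring action, much as the Breakout player settled on aiming at one spot.

```python
import random

# Toy trial-and-error learner: repeatedly try actions, observe noisy
# rewards, and drift toward the action that scores best.
# (Illustrative sketch; the names and numbers here are invented.)

random.seed(0)
actions = ["left", "stay", "right"]
# Hidden payoffs unknown to the agent: "right" happens to be best,
# like the burrowing strategy in Breakout.
true_reward = {"left": 0.1, "stay": 0.3, "right": 0.9}

value = {a: 0.0 for a in actions}  # the agent's estimate of each action

for step in range(2000):
    # Mostly exploit the best-known action; occasionally explore at random.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value[a])
    reward = true_reward[action] + random.gauss(0, 0.1)  # noisy points
    # Move the estimate a small step toward the observed reward.
    value[action] += 0.05 * (reward - value[action])

print(max(actions, key=lambda a: value[a]))  # the strategy it discovered
```

As in the Breakout example, nothing tells the agent which action is best; the winning strategy emerges purely from the reward signal.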

Advances in computing power mean that today’s AI devices are small enough to carry in your pocket. A smartphone can recognize photographs, identify songs, translate voice to text, and even translate into other languages. We are seeing the technology giants investing in AI: in 2014, Google bought DeepMind for more than $500M, IBM is now working on deep learning, and Facebook has launched its own project on intelligent chatbots.

IBM claims that its Project Debater is “the first AI system that can debate humans on complex topics. Project Debater digests massive texts, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. Eventually, Project Debater will help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity.”2 Project Debater’s skills were unveiled in its first-ever, live public debate in San Francisco in June 2018. In the machine vs. human event, each side delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary. The Guardian reported that the AI won the audience vote for “most persuasive” in one of the night’s two arguments. “Project Debater represents a huge leap forward in AI’s potential ability to aid in human decision-making,” said computer scientist Chris Reed of the University of Dundee in the United Kingdom.

The possible uses for AI are seemingly endless, and some seem more intuitive than others. In 2016, a program manager called Josh Newlan developed a program called “Say What?” that listens to his conference calls, and when his name is mentioned, it automatically sends him an alert and a transcript of the last minute’s dialogue. To give him time to digest this without an awkward silence, the program waits for 15 seconds and then plays a recording of Newlan saying “Sorry, I didn’t realize my microphone was on mute.”3
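The logic of a tool like “Say What?” is simple to sketch. The code below is a hedged approximation, not Newlan’s actual program: the speech-to-text and audio-playback pieces are stubbed out, and all function names are hypothetical.

```python
# Sketch of the "Say What?" idea: watch a stream of transcribed speech,
# and when a keyword (the user's name) appears, capture the recent context
# and buy time. Speech recognition and audio playback are stubbed out;
# every name here is hypothetical, not taken from Newlan's program.

NAME = "Josh"
recent_lines = []  # rolling buffer approximating the last minute of dialogue

def play_stall_recording():
    # Stand-in for playing the pre-recorded excuse after a 15-second pause.
    return "Sorry, I didn't realize my microphone was on mute."

def handle_line(line):
    """Process one transcribed line; return an alert if the name came up."""
    recent_lines.append(line)
    if len(recent_lines) > 20:  # keep roughly a minute of dialogue
        recent_lines.pop(0)
    if NAME in line:
        transcript = " ".join(recent_lines)
        # The real tool would now wait 15 seconds, then play the recording:
        # time.sleep(15); play_stall_recording()
        return f"Alert: your name was mentioned. Context: {transcript}"
    return None

alert = None
for spoken in ["Let's review the numbers.", "Josh, any thoughts on this?"]:
    alert = handle_line(spoken) or alert

print(alert)
```

The AI in the real system sits in the stubbed-out speech-recognition step; the rest is plumbing, which is typical of how AI capabilities get embedded into everyday tools.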

However, widespread adoption and uptake of AI depends not only on killer solutions, but also on humankind’s readiness to accept AI in the workplace.

According to a study conducted by Oracle and Future Workplace entitled AI at Work, people are ready to take instructions from robots at work. In the study of 1,320 U.S. HR leaders and employees, employees believe that AI will improve operational efficiencies (59 percent), enable faster decision-making (50 percent), significantly reduce cost (45 percent), enable better customer experiences (40 percent), and improve the employee experience (37 percent). In fact, 93 percent of employees in the study said they would trust orders from a robot.4

However, the study also identified a large gap between the way people are using AI at home and at work. While 70 percent of people are using some form of AI in their personal life, only 6 percent of HR professionals are actively deploying AI, and only 24 percent of employees are currently using some form of AI at work.

The study identified that despite its clear potential to improve business performance, there are many barriers holding back AI in the workplace:

  • A large majority (90 percent) of HR leaders are concerned that they will not be able to adjust to the rapid adoption of AI as part of their job, and that they are not currently empowered to address an emerging AI skills gap in their organization.
  • HR leaders and employees identified cost, failure of technology, and security risks as the other major barriers to AI adoption.

What will happen next?

IGT’s research partner and trendspotting agency, Foresight Factory, believes that within 10 years, jobs in certain sectors with repetitive and predictable tasks will be largely replaced by automated alternatives, and those previously employed in such jobs will be forced to re-skill. They predict that AI-powered assistance will be present in the home and in personal devices, helping automate daily tasks – from routine purchases to appliance repairs. Changes in the employment landscape and new opportunities will see consumers widening and constantly updating their skill sets, a process that will become ever easier with augmented reality (AR) and virtual reality (VR) learning.

Foresight Factory has this advice for businesses contemplating the Fourth Industrial Revolution:

  • Explore how automation and AI can enhance the shopper journey and make choices on the consumer’s behalf to create lasting B2C connections.
  • Support employees in self-improvement and retraining or upskilling in the face of potential “techno-unemployment.”

The World Economic Forum says that this Fourth Industrial Revolution is fundamentally different from previous industrial revolutions. It is characterized by a range of new technologies that are fusing the physical, digital, and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human. In his book The Fourth Industrial Revolution, Klaus Schwab calls for leaders and citizens to “together shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people.”

Schwab’s reminder that these technologies are tools made by people for people also suggests caution about predictions that AI will outperform humans in all tasks. Consider that, so far, no one fully understands the process of human creativity, which raises the question of how we could teach creativity to a machine.

Furthermore, better, faster, easier, and cheaper ways of doing work could have significant social consequences. If technology is designed to serve mankind, then perhaps its role will become that of human helper in the workplace, or of provider where human workers are in short supply. The expanding 65+ demographic will need some form of care as they age, and elder-care experts foresee a shortage of human caregivers to meet this growing demand.5 Robots may be able to meet this emerging need, or assist human caregivers with tasks such as lifting, taking on the role of supporter rather than challenger to best serve the people who created them.


1 Source:
2 Source:
3 Source:
4 Source:
5 Source:

For the latest blog updates, follow IGT Lottery:

Follow us on LinkedIn | Follow us on Facebook
