Humanity's impact and influence on our planet are undeniable.
We have constructed cities that sprawl for miles and built skyscrapers that pierce the heavens. We have tunneled through mountains, redirected rivers and created new land masses. Roads and power lines crisscross the ground, while airplanes and satellites crowd the atmosphere and beyond. With our dominion over Earth secure, we have even set our sights on conquering the solar system.
It took us several million years to reach this point. But progress has snowballed since the first industrial revolution of the late 18th century. In less than 250 years, we catapulted from horse-drawn carts to self-driving cars; from navigating by the stars to relying on voice-activated GPS instructions; from penning letters to loved ones to having awkward conversations with Siri.
The Internet, above all, has shaped more aspects of society across civilizations than any single invention of the past. The ability to instantaneously communicate, share and consume information – be it cat videos or scientific research – has amplified the pace of technological breakthroughs. As Gordon Moore, the co-founder of Intel and Fairchild Semiconductor, observed in 1965 (and refined in 1975), the number of transistors on a chip doubles roughly every two years, bringing corresponding gains in processing power - a rate emblematic of broader discovery and innovation.
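Moore's observation amounts to simple exponential growth. A minimal sketch of the arithmetic, assuming a two-year doubling period (the starting count and time span below are illustrative, not figures from the text):

```python
# Illustrative only: exponential growth implied by a two-year doubling period.
# The initial count and year span are hypothetical examples.

def transistor_count(initial: int, years: float, doubling_period: float = 2.0) -> float:
    """Project a chip's transistor count, assuming it doubles every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# A hypothetical chip with 2,000 transistors, projected 40 years ahead
# (40 years = 20 doublings, i.e. a factor of 2**20 = 1,048,576):
projected = transistor_count(2_000, 40)
print(f"{projected:,.0f}")  # → 2,097,152,000
```

Twenty doublings turn thousands of transistors into billions, which is why even a modest-sounding doubling period compounds into staggering growth.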
We have entered the fourth industrial revolution, an era that will be defined and driven by extreme automation and ubiquitous connectivity.
As with the three industrial revolutions before it, the changes born of this period will irrevocably alter the course of our future and the way we interact with technology and each other. But in this new digital age, there will be one development so profound and seismic that it will rupture the Earth's long-held human-centric status quo - the birth and transcendence of artificial intelligence (AI).
Mathematician and cryptographer Alan Turing's speculation in 1951 that machines could one day think, and even outstrip human intelligence, has been a key guiding principle for data scientists and business leaders interested in AI. Until recently, AI was only a concept - a "one day" conversation that came alive in science-fiction novels and movies. The idea of an artificial being with humanlike consciousness can be traced from tales of mechanical men in the Middle Ages and Mary Shelley's Frankenstein to, more recently, author and scientist Isaac Asimov's three laws of robotics.
The earliest emotion-relevant work in AI dates back to the 1970s, when cognitive science came of age as a discipline, inspired in part by Allen Newell and Herbert A. Simon's 1972 book "Human Problem Solving." That same year, Kenneth Colby created one of the first programs of emotional AI: a computer system called PARRY that simulated a patient with paranoid schizophrenia in conversation with a human.
But as the fourth industrial revolution has taken shape in the 21st century, so has AI. Around 2,000 start-ups globally now have AI as a core part of their business model. And with headline-grabbing news, like Google's AlphaGo defeating the Go world champion or Baidu's personal assistant Duer taking orders at KFC restaurants in China, the foundation has been set for progress to avalanche in the years to come.
What's all the fuss about?
Artificial intelligence can be understood as a set of tools and programs that makes software "smarter" in ways that might lead an outside observer to think the output was generated by a human.
In the simplest terms, AI leverages self-learning systems, using tools such as data mining, pattern recognition and natural language processing. It operates in a manner similar to how the human brain works on everyday tasks like common-sense reasoning, forming opinions or social behavior.
The main business advantage of AI over human intelligence is its high scalability, which results in significant cost savings. Other benefits include AI's consistency and rule-based operation, which eventually reduce errors (of both omission and commission), its longevity coupled with continuous improvement, and its ability to document processes - just a few of the reasons AI is drawing such wide interest.
AI is divided broadly into three stages: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial super intelligence (ASI).
The first stage, ANI, as the name suggests, is limited in scope, with intelligence restricted to a single functional area; in ability, ANI is roughly on par with an infant. The second stage, AGI, is more advanced: it spans multiple fields, such as reasoning, problem solving and abstract thinking, and is mostly on par with an adult. ASI is the final stage of the intelligence explosion, in which AI surpasses human intelligence across all fields.
The transition from the first to the second stage has taken a long time (see chart), but we believe we are now on the cusp of completing it - reaching AGI, in which the intelligence of machines equals that of humans. This is by no means a small achievement.
Although still embryonic relative to its full lifespan, AI's potential has captivated the minds not just of scientists and philosophers but also of politicians and business leaders. The reason is simple: AI will become a massive sector that unleashes a torrent of financial opportunities and provides industry captains, in both government and business, with unparalleled technological power.
Whatever form AI takes, its journey will be fraught with ethical quandaries and met, often simultaneously, with fear and celebration. Some will worry about job redundancies, privacy and control, while others will herald the next step in human greatness. Regardless of your stance, AI will undoubtedly change us and our world in many ways, so it is paramount to be prepared for what lies ahead.
Read on to time-travel to the future and learn how AI will develop and impact our lives.