Community Partner Jim Gibson on the current and future state of AI and ML
communitynowmagazine (August 2020, Volume 3, Issue 1)
In my book, Tip of the Spear: Our Species and Technology at a Crossroads, I outlined three laws of disruption that – at a macro, “species” level – provide a useful framework for thinking about technology’s continuous march into the future. In this article, I will use these three laws to provide a more “micro” view of the current and future state of Artificial Intelligence (AI) and Machine Learning (ML).
Before we begin, I will summarize my bias by quoting one of my favourite researchers and writers in the field, Nick Bostrom.
“Artificial Intelligence is the last invention of mankind.”
There is a lot to unpack in that single sentence. But suffice to say it expresses the level of importance and relevance of the topic – at least in my mind! Let’s bring that forward using the three Laws and see where things stand.
Three Laws of Disruption: A Review
• The slope of the technology disruption curve is dramatically increasing.
• The technology “genie” never goes back in the bottle.
• Our linear systems of human organization are unprepared for sustained exponential change.
As a reminder, I noted in the book that our future will be decided – soon – by how well we cope with and address these forces, which are fundamentally at odds with one another. Individually, these three observations that I have elevated to “laws” are, at first blush, mostly self-evident. Taken together, however, they present a set of circumstances that connects the human animal across all of time. The difference today is that the velocity of change collides head-on with the near-impossible struggle to absorb change across complex and intractable societies and institutions.
Law 1: Slope of the Curve
The underpinning of most technological change – Moore’s Law – is also at the very centre of what’s happening in AI/ML. The staggering capacity increases in processor speeds and computer architecture design are enabling even the simplest AI algorithms to ingest more data, traverse more logic, and store more output per unit of time. It is an exponential curve of increasing capability that shows no sign of slowing. In AI, better algorithms are important, but raw horsepower rules. The first law of disruption clearly tells us that things are changing – and fast!
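The doubling at the heart of Moore’s Law is easy to see in a few lines of code. This is an illustrative sketch of my own, not from the book: the two-year doubling period and the 1971-era starting transistor count are assumptions chosen only to show the shape of the curve.

```python
def moores_law(start_count: int, years: int, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward, assuming it doubles every
    `doubling_period` years (the classic Moore's Law rule of thumb)."""
    return int(start_count * 2 ** (years / doubling_period))

# Starting from roughly 2,300 transistors (the Intel 4004, 1971),
# fifty years of doubling every two years lands in the tens of billions --
# the same order of magnitude as today's largest chips.
print(moores_law(2_300, 50))  # 77,175,193,600
```

Twenty-five doublings turn a few thousand transistors into tens of billions – which is exactly why exponential curves so consistently outrun our intuition.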
Because the laws of physics are beginning to limit the continued exponential improvement of silicon-based processors, much speculation surrounds the arrival of purpose-built quantum computers for AI – both for executing algorithms and for training specific AIs – that could exponentially advance the state of the art. Other developments include the recent partial release from OpenAI – the not-for-profit that recently became a for-profit AI organization – of the Generative Pretrained Transformer 3, commonly known by its abbreviated form GPT-3. GPT-3 allows for huge advances in natural language queries and translation, as well as the ability to create free-form, context-rich text, code and scripting that could significantly increase the breadth of applications and depth of machine “intelligence.” It has the potential to be a very powerful – and potentially dangerous – new form of AI capability and is being watched very carefully.
Conclusion: Computing power is at the heart of increasingly capable AIs. The progressive march of Moore’s Law – in computation costs and capacity, along with similar exponential curves in supporting technologies such as storage – will continue for the foreseeable future. Artificial General Intelligence – or AGI, the holy grail of AI researchers – is for most a matter of when, not if, and, given the current technology curve, is predicted to arrive in 20 or so years.
Law 2: The Genie and the Bottle
There are many researchers and writers – myself included – who have raised fundamental concerns about the release of general-purpose AI at the level of GPT-3 and beyond because – once released – we can never return it to the proverbial bottle. Simply stated: never in the history of our intelligent species have we invented or discovered something new and then decided, as a species, that it was not moral, useful or valuable to someone and subsequently discarded it. Never.
It is simply not in our fundamental nature, and I won’t argue that it should be. This story is ancient and archetypal: Eve in the Garden of Eden eating the apple of knowledge; Prometheus, creator of man, stealing fire for humans. Away from the myths, our recorded history begins with the discovery and taming of fire. The biggest challenge is that as AIs become more prevalent, powerful and prescient, the possibility of unintended consequences rises, biases of all forms – inherent, intended or unintended – get amplified, and decisions and actions with real human impact will follow.
Conclusion: The future of AI is a complex ethical issue. Whether you agree or disagree with the likely impact of AIs released generally on the human population, we can all agree that the time to have the conversation is before, not after, because the genie will never go back in the bottle.
Law 3: Linear Systems
The third law looks at systemic reaction to the exponential advances of technology. Most of our largest complex systems struggle hugely and are often pulled into change only when the costs of staying static become so high that they cannot be ignored. Big shocks that can “jar” slow-moving systems and bureaucracies do happen, often in dramatic fashion. COVID-19 is one such example. AI has been slow to be adopted in some of these larger “linear” systems because the capability has often not matched the hype, the data are not in a form that makes adoption accessible and cost-effective, and the talent and skills are not widely available.
But this is rapidly changing as capabilities increase and widespread use starts to become part of our everyday lives.
In the field of Health Care, for example – currently in the spotlight of a global pandemic – changes are evolving in the way experts use AI to better understand patients, diseases and care. In the medical profession, artificial intelligence is often called augmented intelligence – using it to pave the way to a new kind of relationship between the human expert, data and algorithms. While threatening to some, the promise is huge. Similarly, energy, smart cities, food and other fundamental areas of our lives are all undergoing similar shifts as augmented intelligence helps us understand and interact more deeply with complex adaptive systems. Slowly at first, then suddenly – just like the exponential curves that drive the underlying technology of the AI promise.
The three laws of disruption are a useful way of looking at the state of Artificial Intelligence and remind us of the challenges and opportunities AI represents. Coming back to our Nick Bostrom quote, we have before us an existential opportunity and challenge all at the same time. It is not a time to be naïve or complacent. If we get AI right, we will indeed have created the machines that build other machines – technology that will be able to model, adapt to and engage with the complex adaptive systems of our natural and invented worlds. If we get it right, these machines will be designed, advanced and managed by the best of what makes us human, not the worst.
Fundamentally, the future of AI makes us ask the core human question: “If we can, does it mean we should?” I believe the arc of humanity bends toward the good. But the question of today is: will that arc bend fast enough to keep pace with the speed of the fundamental changes represented by AI?
Humans still have the monopoly on evil. The problem we face today is not the Terminator. The problems today are bad humans – the evil that exists in this imperfect world – using technology invented in the free world to undermine the very foundation of the free world. That’s a real problem.