
Symbolism Versus Connectionism In AI: Is There A Third Way?


The debate between two divergent schools of thought about the underlying architecture of AI systems is beginning to prompt a new question: Why not both?

Artificial intelligence (AI) is a concept that has moved relatively quickly in the public consciousness from a vague science-fiction notion to a familiar tool that affects our everyday lives in a myriad of ways. From service calls to smartphones, AI-based systems are ubiquitous. They impact individuals and industries in ways both profound and mundane. To many of us, AI tech hasn’t just ceased to be a novelty; it has largely faded into the background. In most applications, AI functionality passes unnoticed, and few indeed are those who understand how AI systems actually work. How are their increasingly precise and sophisticated predictions and decisions made?

This question isn’t an abstraction or a casual inquiry. It’s an essential prerequisite for deciding how we want critical decisions about our health and well-being to be made — possibly for a very long time to come.

To understand why the “how” behind AI functionality is so important, we first have to appreciate the fact that there have historically been two very different approaches to AI. The first is symbolism, which deals with symbols and their semantics. Many early AI advances used this symbolic approach, striving to create smart systems by explicitly modeling relationships and using symbols and programs to convey meaning. But it soon became clear that one weakness of these semantic networks and this “top-down” approach was that true learning was relatively limited.
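
To make the symbolic style concrete, here is a minimal sketch of a toy semantic network in Python. The facts, relation names and query functions are illustrative inventions rather than any particular system: a designer writes the knowledge down explicitly, and every answer can be traced back to the facts and rules that produced it.

    # A toy semantic network: the "top-down" symbolic style described above.
    # All facts, relation names and query functions here are illustrative
    # examples, not drawn from the article or any specific product.

    # Facts are (subject, relation, object) triples chosen by a human designer.
    facts = {
        ("canary", "is_a", "bird"),
        ("bird", "is_a", "animal"),
        ("bird", "can", "fly"),
        ("canary", "color", "yellow"),
    }

    def is_a_chain(subject, target):
        """Follow 'is_a' links to test whether subject is a kind of target."""
        if subject == target:
            return True
        parents = {o for (s, r, o) in facts if s == subject and r == "is_a"}
        return any(is_a_chain(parent, target) for parent in parents)

    def can(subject, ability):
        """An entity inherits abilities from every category it belongs to."""
        return any(
            r == "can" and o == ability and is_a_chain(subject, s)
            for (s, r, o) in facts
        )

    print(is_a_chain("canary", "animal"))  # True: canary -> bird -> animal
    print(can("canary", "fly"))            # True: inherited from 'bird'

The strength and the weakness are the same thing: the reasoning is fully inspectable, but the system knows nothing a person did not encode by hand.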

Many more recent AI-based systems take the opposite, bottom-up approach: connectionism. Connectionism is best known through its most successful technique, deep learning with deep neural networks, and it is the architecture behind the vast majority of machine-learning systems.

While the comparison is an imperfect one, it might be helpful to think of the distinction between symbolism-based AI and connectionism as similar to the difference between the mind and the brain. While the line between mind and brain has long been a source of debate in everything from religion to cognitive science, we generally recognize the mind as an expression of our thinking consciousness — the origins of thought, emotion and abstract logic.

The brain, on the other hand, is the extraordinary network of neural connections and electrical impulses that makes thought possible. In contrast with symbolic AI, which strives to start with the higher-level concepts of the mind, connectionism essentially mimics the brain, creating adaptive networks that can “learn” and recognize patterns from vast amounts of data. Connectionists hypothesize that with a sufficiently sophisticated network and enough data, we can achieve the equivalent of higher-level AI functionality: something akin to true thinking.
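
By way of contrast, here is a minimal connectionist sketch: a tiny neural network, written with NumPy, that learns the XOR pattern purely from examples. The network size, learning rate and number of training steps are illustrative assumptions; the point is that the "knowledge" ends up spread across numeric connection weights rather than in any human-readable rule.

    # A minimal connectionist sketch: a small neural network learns a pattern
    # (XOR) from examples alone, with no hand-coded rules. The layer sizes,
    # learning rate and step count are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training examples: inputs and the XOR pattern the network must discover.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # The "knowledge" lives in these randomly initialized connection weights.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(5000):
        # Forward pass: activity flows through the network of connections.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: nudge every connection to reduce the prediction error.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ grad_out
        b2 -= lr * grad_out.sum(axis=0)
        W1 -= lr * X.T @ grad_h
        b1 -= lr * grad_h.sum(axis=0)

    print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]

After training, the network usually reproduces the pattern well, but inspecting the weight matrices tells a human observer essentially nothing about why a particular answer was given.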

Connectionism has led to remarkable breakthroughs in everything from predicting financial markets to powering autonomous vehicles. But AI experts are also coming to terms with the fact that connectionism has its own limitations. Even with exponential increases in computing power and increasingly vast quantities of data, the improvement curve for predictive power is leveling off, suggesting that there is a cap to how far connectionism can take us. Even more concerning, however, is the fact that connectionism lacks something many consider absolutely essential: explainability.

The nature of connectionism-based systems is that, for all their power and performance, they are logically opaque. It is almost always impossible to understand why a given decision was made. And in the absence of any identifiable or verifiable train of logic, we are left with systems making potentially catastrophic decisions that are challenging to understand, extremely difficult to correct and impossible to trust. For a society that needs AI to be grounded in some shared framework of ethics or values, transparency is critically important, both for accountability and for the ongoing refinement of these systems.

It’s important because the stakes are so high. As AI technology becomes more entwined with our lives and livelihoods, AI systems are making decisions about loans, powering facial recognition technology, piloting driverless cars and impacting fields like health care — and even military and law enforcement applications. AI decision-making can quite literally be a matter of life and death. Questions are being asked about whether we should let AI systems that lack transparency make decisions or take actions with such potentially drastic consequences. Even some of the original creators of deep learning technology are expressing skepticism and highlighting the need for a new way forward.

The good news is that there is growing momentum behind a third wave of AI advancement. The scientific community is increasingly looking for ways to get the best of both worlds: to design AI systems that combine the power of connectionism with the explainability of symbolism. As a July 2019 article from Forbes suggests, “There is no need to throw out the deep learning baby with the explainability bath water.” There is optimism that the ability of AI systems to “explain” how conclusions are reached — and to allow for inspection and traceability of their actions — will not only improve trust and reliability but also allow for changes and upgrades that significantly enhance performance.

Those seeking to capitalize on the potential of third-wave AI technology should:

  • Think critically and be selective and intentional about the AI solutions they use.
  • Improve their AI literacy, recognizing that “connectionism” and “symbolism” are no longer commonly used terms today, and take advantage of deep learning and neural network techniques that offer better explainability (even without true symbolic logic).
  • Identify and utilize symbolic solutions that include techniques like knowledge graphs, semantic networks and decision trees.
  • Prioritize hybrid solutions when feasible, ideally combining the best of both approaches; a minimal sketch of what such a pairing can look like follows this list. Although not (yet) widespread, hybrid solutions are not particularly difficult to find for those who make an effort to seek them out.
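
For readers who want a feel for what such a pairing can look like, here is a minimal hybrid sketch in Python. The applicant fields, rules, thresholds and the stand-in scoring function are all invented for illustration; in a real system, a trained neural network would supply the score that the symbolic rules consume and explain.

    # A minimal hybrid sketch: a learned model supplies a score, and explicit
    # symbolic rules turn that score into a decision with a human-readable
    # explanation. The rules, thresholds, applicant fields and the stand-in
    # scoring function are invented for illustration only.

    def learned_risk_score(applicant):
        """Stand-in for a trained neural network's risk estimate in [0, 1]."""
        return 0.35  # fixed placeholder value

    RULES = [
        ("income must be documented", lambda a: a["income_documented"]),
        ("learned risk score must be below 0.5", lambda a: learned_risk_score(a) < 0.5),
        ("no defaults in the last 24 months", lambda a: a["recent_defaults"] == 0),
    ]

    def decide(applicant):
        """Apply every rule and report exactly which ones passed or failed."""
        trace = [(name, check(applicant)) for name, check in RULES]
        approved = all(passed for _, passed in trace)
        return approved, trace

    applicant = {"income_documented": True, "recent_defaults": 0}
    approved, trace = decide(applicant)
    print("approved" if approved else "declined")
    for name, passed in trace:
        print("PASS " if passed else "FAIL ", name)

The learned component contributes predictive power; the rule layer contributes a decision trace that can be audited, challenged and corrected.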

It is important that the pursuit of explainable AI systems not become an esoteric and poorly understood niche in the technology landscape. Ultimately, this isn’t about metrics or algorithms; it’s about core priorities that will shape systems and societies for decades and possibly generations to come.

*Fabio Caversan is the Artificial Intelligence Research & Development Director for Stefanini NA.
