Understanding of and by Deep Knowledge

How knowledge constructs can transform AI from surface correlation to comprehension of the world

What knowledge makes you intelligent? What are the constructs used by your cognition to understand the world, interpret new experiences, and make thoughtful choices? Defining a framework that articulates the kinds of knowledge that enable understanding and higher cognition for humans or artificial intelligence (AI) will facilitate a structured discussion on ways to effectively materialize these constructs and chart a path to more intelligent machines.

Knowledge constructs that allow an AI system to organize its view of the world, comprehend meaning, and demonstrate understanding of events and tasks will likely be at the center of higher levels of machine intelligence. Machine cognition will expand beyond data to be anchored in knowledge constructs including dimensions such as descriptive knowledge, models of the world dynamics, and provenance, among others.

When studying language, we distinguish between form and meaning: form refers to the symbols — the surface expressions — used to express meaning. Each form has a particular meaning in a particular context, and forms can have different meanings in different contexts. As summarized in an article by Schölkopf, Bengio et al., “the majority of current successes of machine learning boil down to large scale pattern recognition on suitably collected independent and identically distributed (i.i.d.) data.” Systems ingest observable elements such as text characters, vocal signals and image pixels, and establish patterns and stochastic correlations, yielding outstanding results for recognition-based tasks.

There is growing agreement that algorithms must go beyond surface correlations into meaning and understanding to achieve a higher level of machine intelligence. This categorical shift will enable what is referred to as System 2 deep learning, third-wave AI, or broad generalization/flexible AI. As I outlined in the blog The Rise of Cognitive AI, this next level of machine intelligence requires deep constructs of knowledge that can transform AI from surface correlation to comprehension of the world, representing abstractions, relations, learned experiences, insights, models and other types of structured information.

John Launchbury of DARPA identifies abstraction (i.e., creating new meaning) and reasoning (planning and deciding) as the aspects of AI that will see transformational improvement in the third wave of AI. The third wave itself is characterized by contextual adaptation, where systems construct contextual explanatory models for classes of real-world phenomena. The framework presented here offers a perspective on how knowledge constructs will facilitate such a leap.

Two of the knowledge dimensions reflect a view of the world — the descriptive dimension with its conceptual abstractions of what is in the world, and the dynamic models of the real world and its phenomena. Stories add the human capacity to comprehend and communicate complex narratives that build on shared beliefs and mythologies. Context and source attribution as well as value and priorities are meta-knowledge dimensions that provide a condition-based overlay of validity and knowledge-about-knowledge. Finally, concept references are the structural underpinning, binding across dimensions, modalities and references. Together, these six dimensions of knowledge could bring additional depth beyond correlation of events by assuming underlying concepts that are persistent and can explain and predict past and future events, allow for planning and intervention, and consider counterfactual realities — hence the use of the term ‘deep knowledge.’

Articulating and characterizing the kinds of knowledge constructs necessary for machine intelligence can contribute to identifying the best way to implement them to bring about the next level of machine intelligence. The goal of this blog is to establish the fundamental classes of knowledge constructs deemed relevant for the development of the next level of AI cognitive capabilities.

Dimensions of Knowledge in Support of Higher Intelligence

For AI systems, implementing knowledge constructs observed in human comprehension and communication can provide substantial value to intelligence. That value grows substantially when all knowledge types are supported and combined.

1. Descriptive knowledge: Hierarchy, taxonomies and property inheritance

Descriptive knowledge (i.e., conceptual, propositional or declarative knowledge) describes things, events, their attributes and their relationships to each other. The notion of deep descriptive knowledge expands on this definition, assuming the use (as appropriate) of hierarchical layering of classes or concepts. This category of knowledge can include facts and systems of record. The facts and information relevant for specific use cases and environments can be organized, utilized and updated as hierarchical knowledge.

The underlying ontology used in individual AI systems can be seeded with task-relevant classes and entities from curated systems (e.g., the OpenCyc ontology or the AMR named entity types). It should be extensible with neural network/machine learning technologies — the acquisition of new knowledge will contribute new entities, relations and classes.
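As a minimal illustration of how such hierarchical descriptive knowledge with property inheritance might be organized, consider the sketch below. The Taxonomy class, its class names and its properties are purely illustrative assumptions for this post, not part of OpenCyc, AMR or any specific ontology.

```python
# A minimal sketch of hierarchical descriptive knowledge with property
# inheritance; class and property names are illustrative only.

class Taxonomy:
    """Stores is-a links and per-class properties; properties inherit upward."""

    def __init__(self):
        self.parent = {}      # class -> parent class
        self.properties = {}  # class -> {property: value}

    def add_class(self, name, parent=None, **props):
        self.parent[name] = parent
        self.properties[name] = dict(props)

    def lookup(self, name, prop):
        # Walk up the is-a chain until the property is found (inheritance).
        current = name
        while current is not None:
            if prop in self.properties.get(current, {}):
                return self.properties[current][prop]
            current = self.parent.get(current)
        return None


kb = Taxonomy()
kb.add_class("animal", alive=True)
kb.add_class("mammal", parent="animal", warm_blooded=True, legs=4)
kb.add_class("dog", parent="mammal", sound="bark")

print(kb.lookup("dog", "warm_blooded"))  # True  -- inherited from 'mammal'
print(kb.lookup("dog", "sound"))         # 'bark' -- local property
```

Acquiring new knowledge then amounts to adding classes, entities and properties to such a structure, while existing reasoning over the hierarchy continues to apply.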

2. Models of the world

Models of phenomena in the world enable AI systems to understand situations, interpret inputs/events, predict potential future outcomes and take action. These models are abstractions/generalizations and can be divided into formal models and approximate (informal) real-world models. They allow for the use of variables and for application to specific instances, and they enable symbol manipulation over a particular instance or a more generalized class.

Examples of formal models include logic, mathematics/algebra and physics. In contrast to formal models, real-world models are usually empirical, experimental and sometimes messy. They include both physical as well as psychological and sociological models. Procedural models (‘know-how’) are included in this class.

Causality models are a prime example of the types of models that can help progress AI systems to the next level of machine intelligence. When context changes, statistics of the past can be effectively applied to the present to predict futures only if they are integrated with a knowledge model such as causality, an understanding of the context that governs the causes in play, and the ability to consider counterfactuals. These models help in understanding situations or events in terms of the conditions and likely factors that produced them. Causal reasoning is an indispensable component of human thought that can be formalized toward achieving human-level machine intelligence.
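To make the distinction between observation and intervention concrete, here is a toy structural causal model sketch. The variables, mechanisms and probabilities are invented for illustration and are not drawn from the cited work; the point is that a correlation-only system cannot tell the two queries apart.

```python
# A toy structural causal model: rain -> sprinkler -> wet grass.
# All probabilities are made up for the example.
import random

def sample(do_sprinkler=None):
    """Sample one world; do_sprinkler overrides the sprinkler mechanism (an intervention)."""
    rain = random.random() < 0.2
    sprinkler = (random.random() < 0.05) if rain else (random.random() < 0.6)
    if do_sprinkler is not None:          # do(sprinkler = value)
        sprinkler = do_sprinkler
    wet = rain or (sprinkler and random.random() < 0.9)
    return rain, sprinkler, wet

def estimate_wet(n=100_000, **kwargs):
    samples = [sample(**kwargs) for _ in range(n)]
    return sum(wet for _, _, wet in samples) / n

# Observational vs. interventional query: pattern recognition alone conflates the two.
print("P(wet)                 =", round(estimate_wet(), 3))
print("P(wet | do(sprinkler)) =", round(estimate_wet(do_sprinkler=True), 3))
```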

3. Stories and Scripts

Stories form a key part of the culture and world view of individuals and societies, as the historian Yuval Harari has argued. The notion of stories is necessary to fully understand and interpret human behavior and communication. Stories are complex and may include multiple events and a variety of information within a connective narrative. They are not just collections of facts and events; they contain crucial information that helps develop understanding and generalizations beyond the presented data. Unlike models of the world, which are expected to provide an operational representation of the world and how one can interact with it, stories can be historical, referential or spiritual. Stories can represent values and experiences that inform people’s beliefs and actions. Examples include religious or national stories, mythology, and shared stories at any level of groups of people.

4. Context and source attribution

Context can be defined as a frame that surrounds an event or other information and provides resources for its appropriate interpretation. It can be seen as an overlaid knowledge structure, modulating the knowledge it encloses. Context can be persistent or transient.

· Persistent context can be long lasting (as in knowledge that is captured from a western vs. eastern philosophy perspective) or it can change over time based on material new learning. It does not change per task.

· Transient context is relevant where particular local context is important. Words are interpreted in the local context of their surrounding sentence or paragraph. Regions of interest in an image are commonly interpreted in the context of the overall image or video.

The combination of the persistent and transient context can provide the complete setting to interpret and operationalize knowledge.
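As an illustration, the following sketch uses a hypothetical, deliberately simplistic data layout to show a transient context overriding a persistent one when the same surface form is interpreted; none of the names are taken from an existing system.

```python
# A minimal sketch of persistent vs. transient context modulating interpretation.
PERSISTENT_CONTEXT = {"domain": "finance"}   # long-lived, rarely changes

SENSES = {
    ("bank", "finance"): "financial institution",
    ("bank", "geography"): "side of a river",
}

def interpret(word, transient_context):
    # The transient (local) context overrides the persistent default when present.
    domain = transient_context.get("domain", PERSISTENT_CONTEXT["domain"])
    return SENSES.get((word, domain), "unknown sense")

print(interpret("bank", {}))                       # financial institution
print(interpret("bank", {"domain": "geography"}))  # side of a river
```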

Another related aspect of knowledge is data provenance (aka data lineage), which includes the data’s origin, what happens to it and where it moves over time. An AI system cannot assume that all ingested information is correct or trustworthy, especially in what has been dubbed the post-truth era. Associating information with its sources might be necessary for establishing credibility, certifiability and traceability.
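One possible, simplified way to attach provenance to individual assertions is sketched below; the field names, sources and trust scores are assumptions made for the example rather than a prescribed schema.

```python
# A sketch of provenance-aware assertions: each fact carries its origin,
# ingestion time and a trust level instead of being taken at face value.
from dataclasses import dataclass

@dataclass
class Assertion:
    subject: str
    predicate: str
    value: str
    source: str          # where the fact came from
    retrieved: str       # when it was ingested
    trust: float         # 0.0 (untrusted) .. 1.0 (certified)

facts = [
    Assertion("dog", "class", "mammal",
              source="curated ontology", retrieved="2021-04-01", trust=0.95),
    Assertion("dog", "average_lifespan", "13 years",
              source="web forum", retrieved="2021-04-02", trust=0.4),
]

# Reasoning can require a minimum trust level, or trace an answer back to its sources.
credible = [f for f in facts if f.trust >= 0.8]
print([(f.predicate, f.value, f.source) for f in credible])
```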

5. Value and Priorities (including goodness/threats and ethics)

All aspects of knowledge (e.g., an object, concept or procedure) can have an associated value across the judgment spectrum — from utmost goodness to greatest evil. It can be assumed that the evolution of human intelligence included the pursuit of rewards and the avoidance of risks (get lunch; avoid being lunch). This risk/reward association is tightly coupled with the knowledge of things. The potential for gain vs. loss has a utilitarian value; there is also an ethics-based value for the entities or potential future states being considered. This can reflect ethical values that assign “goodness” based not on potential tangible rewards or threats, but rather on an underlying belief of what is right.

Value and priorities are meta-knowledge reflecting the subjective assertions of the AI system about relevant aspects of knowledge, actions and outcomes. This meta-knowledge establishes the foundation for accountability and should be carefully addressed by those responsible for the particular AI system. When AI systems interact with humans and make choices that affect the humans’ well-being, the underlying value and prioritization system matters.
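A minimal sketch of how such a value overlay might shape choices is shown below, assuming a hypothetical list of candidate actions; the design choice it illustrates is that the ethics-based value acts as a constraint rather than as just another term added to the utilitarian reward.

```python
# An illustrative value overlay: actions are ranked by utilitarian reward,
# but only after an ethics-based filter is applied. All values are invented.
candidate_actions = [
    {"name": "recommend risky product", "reward": 0.9, "ethical": False},
    {"name": "recommend safe product",  "reward": 0.6, "ethical": True},
    {"name": "recommend nothing",       "reward": 0.1, "ethical": True},
]

permissible = [a for a in candidate_actions if a["ethical"]]   # hard constraint
best = max(permissible, key=lambda a: a["reward"])             # utilitarian ranking
print(best["name"])  # 'recommend safe product'
```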

6. Concept References: disambiguated, unified and cross-modal

Knowledge is based on concepts. For example, “dog” is an abstraction — a concept that has multiple names (e.g., in various languages), some visual characteristics, a sound association and so on. However, the underlying concept /dog/ is unique, regardless of its manifestations and usages. It is mapped to the English word “dog” as well as to the French word “chien.” It is also the likely source of a barking sound.

A Concept Reference (or ConceptRef for short) is the identifier and set of references to all things related to a given concept. The ConceptRefs by themselves don’t actually include any of the knowledge — the knowledge resides in the dimensions described above. ConceptRefs are the linchpins of a multi-dimensional knowledge base (KB), as they amalgamate all appearances of the concept.

Wikidata is an excellent example of a KB that centrally stores structured data. In Wikidata, items represent all the things in human knowledge, including topics, concepts, and objects. Wikidata’s items are similar to the definition of ConceptRef in this framework — with one key difference. In Wikidata, the term “item” refers to both the identifier and the information about it. ConceptRefs are just the identifiers with pointers to the KB; the information about the concept is populated in the various views described in previous sections (such as the descriptive or procedural knowledge associated with a concept).
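The following sketch shows one possible shape of a ConceptRef as described above — an identifier plus pointers into the knowledge dimensions, holding no knowledge of its own. The identifiers, labels and pointer keys are all illustrative assumptions.

```python
# A sketch of a ConceptRef: an identifier plus references across dimensions
# and modalities; the knowledge itself lives elsewhere in the KB.
from dataclasses import dataclass, field

@dataclass
class ConceptRef:
    concept_id: str                               # unique identifier for the concept
    labels: dict = field(default_factory=dict)    # language -> surface form
    pointers: dict = field(default_factory=dict)  # dimension/modality -> KB keys

dog = ConceptRef(
    concept_id="concept:dog",
    labels={"en": "dog", "fr": "chien"},
    pointers={
        "descriptive": ["taxonomy/mammal/dog"],   # entry in the descriptive KB
        "models": ["behavior/barking"],           # associated world-model entries
        "audio": ["sounds/bark.wav"],             # cross-modal references
        "visual": ["images/dog"],
    },
)

# Resolving any surface form ("dog", "chien", a bark, a picture) to the same
# ConceptRef lets the system amalgamate everything known about the concept.
print(dog.concept_id, dog.labels["fr"], dog.pointers["descriptive"])
```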

Commonsense knowledge

Commonsense knowledge consists of implicit information — the broad (and broadly shared) set of unwritten assumptions that humans automatically apply to make sense of the world. Applying commonsense to situations is essential for understanding and higher cognition. In this framework, commonsense knowledge is considered a subset of each of the above six knowledge types.

The Relationship Between Understanding and Knowledge Types

Understanding is the foundation of intelligence. The impending transition to higher machine intelligence has ignited a discussion on ‘understanding’. Yoshua Bengio characterized human-level AI understanding as follows: capturing causality and how the world works; understanding abstract actions and how to use them to control, reason and plan, even in novel scenarios; explaining what happened (inference, credit assignment); and out-of-distribution generalization.

Consider the following knowledge-centric definition of understanding: the ability to create a world view expressed with rich knowledge representation; the ability to acquire and interpret new information to enhance this world view; and the ability to effectively reason, decide and explain over existing knowledge and new information.

Four functions are a prerequisite for this view of understanding: representing rich knowledge, acquiring new knowledge, linking the instances of knowledge across entities and relations, and reasoning over knowledge. Naturally, understanding is not a binary property but instead varies by type and degree. At the center of this view is the nature of knowledge and its representation — the expressivity of knowledge constructs and models can facilitate a categorical difference in the ability to understand and reason.

Imagine all the people [and machines]

As Albert Einstein observed, “The true sign of intelligence is not knowledge but imagination.” To truly understand, machine intelligence must go beyond knowledge of data, facts and stories. Imagination is necessary to reconstruct, discover and invent a model of the universe behind the observable attributes and events. From an AI system’s perspective, imagination is achieved through creative reasoning: performing inductive, deductive or abductive reasoning and generating novel outcomes not strictly prescribed by previous experiences and input-to-output correlations.

Knowledge representation and reasoning is a well-established field of AI that addresses the representation of information about the world so that a computer system can solve complex tasks. Knowledge and reasoning are not necessarily distinct; rather, they represent a spectrum from the known to the inferred (which can become a new known). Machine understanding will be achieved through the capacity to construct knowledge, complemented by advanced associated reasoning (e.g., probabilistic and plausible reasoning, abductive reasoning, analogical reasoning, default reasoning).
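As a small illustration of that spectrum, the forward-chaining sketch below (with a single invented rule over invented facts) derives new facts and adds them back to the knowledge base, where they become new “knowns” available for further reasoning.

```python
# A minimal forward-chaining sketch: inferred facts are added back to the KB.
facts = {("socrates", "is_a", "human")}
rules = [
    # If X is_a human, then X has_property mortal (rule content is illustrative).
    (lambda f: f[1] == "is_a" and f[2] == "human",
     lambda f: (f[0], "has_property", "mortal")),
]

changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        for fact in list(facts):
            if condition(fact):
                new_fact = conclusion(fact)
                if new_fact not in facts:
                    facts.add(new_fact)   # the inference becomes a new known
                    changed = True

print(facts)
```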

Neuro-symbolic AI Built on Deep Knowledge Foundations

My appreciation for a more cognitive, knowledge-based approach to AI emerged in the mid 2010s while working on deep learning hardware and software solutions at Intel. Gary Marcus’ excellent paper The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence offers a similar perspective.

In the journey to make AI more effective, accountable, and productive in support of people, the goal is to make AI systems more robust, while also driving them to the next level of cognition and comprehension. Great strides have been made in manipulating data, recognizing patterns, and finding the most fleeting of correlations. But it’s still necessary to consider the types of knowledge that will equip an AI system with the capabilities to model and understand the world it operates in.

The time has come for a dialogue on the kinds of knowledge that are required for a more cognitive AI. The types of constructs for knowledge representation discussed in this blog may be implementable in various forms, and the resulting systems will vary in the way neural network capabilities and symbolic knowledge (in whatever representation) achieve the goals of the next phase of AI. Subsequent blogs will address the different dimensions of knowledge introduced in this framework in more detail. As we establish a deeper understanding of the types of knowledge constructs needed for higher cognition, we can proceed to build on this deep knowledge to enable machine comprehension of the world.

References

Fillmore, Charles. “Form and Meaning in Language”. CSLI Publications, 2002. https://web.stanford.edu/group/cslipublications/cslipublications/site/1575862867.shtml (accessed on 05/04/2021)

Schölkopf, B. et al., “Towards Causal Representation Learning”. https://arxiv.org/abs/2102.11107

Bengio, Yoshua. “From System 1 Deep Learning to System 2 Deep Learning”. NeurIPS 2019, https://slideslive.com/38922304/from-system-1-deep-learning-to-system-2-deep-learning

Launchbury, John. “A DARPA Perspective on Artificial Intelligence”. DARPA, https://www.darpa.mil/attachments/AIFull.pdf (accessed on 23 March 2021)

Chollet, Francois. “What is the Future of Artificial Intelligence?”, https://www.youtube.com/watch?v=GpWLZUbPhr0

Singer, Gadi. “The Rise of Cognitive AI”. Towards Data Science, April 2021. https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc

Forbus, Kenneth. “A Brief Introduction to the OpenCyc Ontology”. https://www.qrg.northwestern.edu/nextkb/IntroOpenCycOnt.pdf (accessed on 05/04/2021)

Newell, Allen. “Physical Symbol Systems”, Cognitive Science, April 1980. https://onlinelibrary.wiley.com/doi/abs/10.1207/s15516709cog0402_2

Pearl, Judea. “The Book of Why”. New York, Basic Books, May 2018. http://bayes.cs.ucla.edu/WHY/

Harari, Yuval Noah. “Sapiens, A Brief History of Humankind”. Harvill Secker, 2014.

De Jong, Ton and Ferguson-Hessler, Monica. “Types and Qualities of Knowledge”. Educational Psychologist, 1996.

Pavlus, John. “Common Sense Comes Closer to Computers”. Quanta Magazine, April 2020. https://www.quantamagazine.org/common-sense-comes-to-computers-20200430/

Marcus, Gary. “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence”. Robust AI, February 2020. https://arxiv.org/ftp/arxiv/papers/2002/2002.06177.pdf