The 21st Century Renaissance — The Education Century

Robert Hacker
49 min read · Sep 28, 2020

“Cognition, Complexity and Computation”

Credit: UNESCO

“A society must assume that it is stable, but the artist must know, and he must let us know, that there is nothing stable under heaven.” James Baldwin

Note: Specific recommendations are in bold throughout the article.

Introduction

As we settle into the 21st century, society faces two pressing issues:

1. Increasing and unacceptable levels of wealth inequality;

2. Environmental apocalypse

I have written about these issues before — here, here and here.

Two important scientific questions, which have troubled man for over two thousand years, also remain open:

1. How does consciousness arise?

2. What are the origins of life?

In the 20th century we developed some new tools to help us address these scientific questions. Most notable amongst the tools were:

1. Quantum Mechanics

2. Information Theory

3. Complexity Theory

While these tools pushed forward our understanding of science, I think they also made important contributions to epistemology, cognition and problem solving. I think this body of scientific thought also spawned advances in our understanding of creativity and invention, most notably the Krebs Creativity Cycle.

Equally important, I think seven concepts emerged as important tools to change our perspective on how we think about reality, knowledge and learning. Those concepts are:

1. Uncertainty — Risk

2. Pattern Recognition

3. Explore — Exploit

4. Assumption

5. Analysis — Synthesis

6. Abstract — Tangible

7. Asymmetry of Information

These concepts help to better prepare us for a future shaped by science, technology and education.

I use two frameworks to consider the outlook for the next 30–50 years, taking into consideration the social, economic, political and environmental issues:

1. Nature — Design — Technology

2. Cognition — Complexity — Computation

The first framework is Fritjof Capra’s holistic approach to systems thinking, where we must recognize that humans are always first just a part of the greater whole of nature. This view strongly encourages a reexamination of priorities, a multidisciplinary perspective and putting environmental sustainability ahead of shareholders, stakeholders and politics.

The second framework is mine. I think it is necessary to balance the first framework with a more explicit recognition of the importance of humanity. Science must always be balanced by considerations of humanity. In the 20th century computerization disrupted and transformed many traditional industries. The HBS mantra of productivity improvement, cost savings and revenue growth was made much easier by the Computer Age. In the 21st century, what the World Economic Forum calls the 4th Industrial Revolution, I think the regulated industries — healthcare, education, construction — will be disrupted by the AI-IOT-Big Data[1] technology paradigm operating in real time. If we thought of government as an industry, I think it will also be disrupted if not transformed. Critical to this 21st century will be the redefinition of education in terms of purpose, curriculum and delivery. Education needs to be redefined in order for the average citizen to have some chance to understand the new science, technology and applied mathematics that will dominate this century. This outlook is explained in the more detailed discussion of the second Framework below.

This Essay is organized first with a brief introduction to some science, including quantum mechanics, information theory, complexity theory and the Krebs Creativity Cycle (The Tools). Next I discuss how this science spawns seven Concepts which now influence our views of reality and learning. The two Frameworks for the 21st century are then explained and I conclude with some more general thoughts on the future and education.

Two points to note:

- Almost every major topic in this article is worthy of a book or one hundred books. In using the essay form, I necessarily have to summarize, and the summaries selected hopefully increase readability and understanding. Completeness of topic presentation is sacrificed in this approach. For example, in quantum mechanics, I see no improvement to understanding the major points of the essay by talking about black holes, so that topic is omitted.

- Quantum mechanics and information theory are arguably the two most important scientific discoveries of the 20th century. Complexity theory is advancing rapidly and may be the most important body of scientific thought in the 21st century if the physicist Stephen Hawking’s forecast is right. The Krebs Creativity Cycle is not of a level of importance comparable to these tools, but I include it in the tools section for convenience.

The Tools

Quantum Mechanics

While many can claim to have shaped science, perhaps no one has been more important in the history of science than René Descartes. For almost four hundred years, his macroscopic approach shaped science and much of our thinking in general. This Cartesian approach was turned upside down by the investigations that shaped quantum mechanics and introduced a microscopic approach in physics. Quantum mechanics research included many of the luminaries of 19th and 20th century physics, including James Clerk Maxwell (laws of thermodynamics), Ludwig Boltzmann (statistical mechanics) and Albert Einstein (photoelectric effect). Maxwell’s work laid the groundwork for the Laws of Thermodynamics and the all-important second law, which gave us entropy. Entropy shows us that the universe constantly tends toward uncertainty and disorder and that determinism is a flawed model for thinking about reality. It also explains why every living thing acts to reduce uncertainty. These principles underlie many of the concepts in the next Section. Boltzmann’s statistical mechanics showed that the statistics of microstates of a system determine the physical state of a system. “Boltzmann provided an alternative interpretation of entropy in his kinetic theory of gases as a measure of the potential disorder in a system. This definition no longer emphasizes energy dissipated through work but the number of unobservable configurations (microstates) of a system”[2] in terms of interacting particles. This use of statistics to explain interacting particles is at the heart of quantum mechanics. These sub-atomic particles, such as protons and electrons, are part of what came to be called particle physics and demonstrate how quantum mechanics shifted the focus of science from the macro to the micro or sub-atomic level of a reality described by statistics.
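
To make Boltzmann’s idea concrete, here is a toy sketch of my own (not drawn from the physics literature): count the microstates W of 100 coin flips and compute the Boltzmann entropy S = k ln W. The disordered “half heads” macrostate has vastly more microstates than the perfectly ordered one, which is why disorder dominates.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

def microstates(n_coins: int, n_heads: int) -> int:
    # number of unobservable configurations (microstates) with exactly n_heads heads
    return math.comb(n_coins, n_heads)

n = 100
for heads in (0, 25, 50):
    W = microstates(n, heads)
    S = k_B * math.log(W) if W > 1 else 0.0
    print(f"{heads:3d} heads: W = {W:.3e}, S = {S:.3e} J/K")
# the 50-heads macrostate has ~1e29 microstates, dwarfing the ordered 0-heads state (W = 1)
```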

Information Theory

Claude Shannon went on to adapt entropy to explain his Information Theory. Shannon used entropy to describe information shared through a channel. Shannon Entropy measures the uncertainty of information: the greater the uncertainty, the greater the information value. As information becomes more certain it becomes “signal”, which serves to fulfill a purpose. As information becomes more uncertain, it becomes more random, demonstrates more micro-states, entropy increases and we call it “noise”. The initial value of Shannon’s work was that it brought attention to compression and transmission in a channel, which focused attention on how to increase transmission speeds and the amount of information transmitted. This led to the realization that any manmade wave could be digitized and transmitted over a channel, which birthed digital media. More importantly, framing problems in terms of information theory became a powerful transdisciplinary methodology that was used to explain living organisms, complexity theory, neuroscience, physics and on and on. The usefulness of Information Theory explains why some people believe that this theory is a greater contribution to knowledge than Einstein’s work on relativity. I believe that quantum mechanics will be as transdisciplinary as Information Theory, but we still need to develop the tools to make more use of it. Information theory was popularized just as the computer was invented and popularized. As Brian Arthur points out, technologies appear to solve the then current problems. Enhanced computing hardware and new algorithms are first required before we can fully use quantum mechanics in applications such as cyber-security and the next generation of the Internet — the Quantum Internet.
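
As a concrete illustration, with probabilities I made up for the example, the sketch below computes Shannon entropy (H = -Σ p log2 p) for a near-certain “signal-like” source and a uniform “noise-like” source, along with the surprisal of individual outcomes: rare events carry more information.

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), in bits; zero-probability outcomes contribute nothing
    return -sum(p * math.log2(p) for p in probs if p > 0)

signal_like = [0.97, 0.01, 0.01, 0.01]   # near-certain source: little uncertainty
noise_like  = [0.25, 0.25, 0.25, 0.25]   # uniform source: maximum uncertainty

print(f"signal-like source: {shannon_entropy(signal_like):.3f} bits")  # ~0.24 bits
print(f"noise-like source:  {shannon_entropy(noise_like):.3f} bits")   # 2.00 bits

# surprisal of a single outcome: the rarer the event, the more information it carries
print(f"surprisal of a p=0.97 event: {-math.log2(0.97):.3f} bits")  # ~0.04
print(f"surprisal of a p=0.01 event: {-math.log2(0.01):.3f} bits")  # ~6.64
```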

Complexity

Brian Arthur was an economics professor at Stanford and a founder of the Santa Fe Institute, perhaps the leading research organization in the world on complexity theory. Complexity theory explores the concepts that explain all systems, whether natural or manmade. While some trace complexity back to Goethe, I see it as a 20th century movement based on the critical writings of Arthur Eddington, Herbert Simon, Murray Gell-Mann and David Krakauer. To quote Arthur on complexity (the examples in the parentheticals are my own), “I think there are five characteristics of CAS [complex adaptive systems] that are important and agreed upon by almost everyone.

1-Emergence — Unexpected outcomes arise that are not explainable or predictable from the characteristics of the agents and [sub]systems (boiling water turns to gas)
2-Hierarchical — Nested [sub]systems are an integral part of a larger system with no dictated instruction from outside the system (the stomach within the digestive system)
3-Non-linear — Agents have random or chaotic behavior (unpredictable and uncertain)
4-Adaptive — Agents learn or evolve based on information or the related feedback
5-Self-organizing — Agents and [sub]systems are autonomous to achieve purpose”

These principles explain all natural and manmade systems, from slime mold and ants to stock markets, corporations and cities. The non-linear quality of these systems explains why probabilistic, chaotic and fractal-like behavior is seen in all CAS. This type of behavior by multiple agents explains long-tail events such as pandemics and stock market crashes. You might now start to understand the popularity of biomimicry. If reality is nonlinear in all CAS, then manmade systems are nonlinear. The largest and oldest collection of solutions to problems in nonlinear systems is nature. Therefore, we could take natural solutions to problems and apply them to manmade problems and systems — biomimicry.
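
One way to see the non-linear, chaotic behavior described above is the logistic map, a standard textbook example rather than anything specific to Arthur or the Santa Fe Institute. In the sketch below, two trajectories that begin almost identically diverge completely within a few dozen steps.

```python
def logistic_map(x0: float, r: float = 4.0, steps: int = 40):
    # x_{n+1} = r * x_n * (1 - x_n): a simple deterministic rule with chaotic behavior
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.200000)
b = logistic_map(0.200001)   # nearly identical starting point

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}")
# by roughly step 30 the two trajectories bear no resemblance to one another
```

This sensitivity to initial conditions is one reason long-tail events in markets and pandemics resist prediction.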

Krebs Creativity Cycle

The hierarchical nature of complex systems provides a useful approach to creativity and is discussed further in Concepts under Analysis-Synthesis. The Krebs Creativity Cycle, shown below, was developed at the MIT Media Lab originally by John Maeda and Rich Gold and then advanced by Neri Oxman[3]. There are four modalities of creativity — Art, Science, Engineering and Design.

Credit: MIT Media Lab

Art changes perception, which generates new information. Science takes information and turns it into knowledge. Engineering takes science and gives it purpose. Design takes engineering and gives it utility. Art and science take the abstract and make it tangible. Engineering and design increase utility. (More on abstract and tangible in the next Section on Concepts.) The power of the Krebs Creativity Cycle is that it formalizes the approach to creativity by defining the four approaches. It also shows the natural affinity between art-science-engineering-design. The Renaissance and modern science were launched by a change in art and the introduction of perspective into painting. I believe that we are on the verge of the 2nd Renaissance and a new way to visualize big data will be the catalyst. I do not know whether it will be an Excel type product like “super” Tableau or a virtual reality (VR) walk through big data, but that visualization technology is coming and could be the catalyst for another Renaissance.

The Concepts

Concept I Uncertainty — Risk

“What we observe is not nature itself, but nature exposed to our method of questioning.” –Werner Heisenberg

If we look at the world simply, there are two types of physical entities. One group is acted on by energy, such as rocks: energy heats the rock. These objects we call inanimate; they do not reproduce. The other group processes its own energy, such as trees, animals and humans, which allows them to process information, adapt and reproduce. Processing information permits more efficient energy processing even though it requires energy.[4] Therefore, if we wish to understand the human brain, we need to understand energy management and information processing. This analysis requires a simple, fundamental understanding of perception, quantum theory and entropy.

A human being is constantly bombarded by sensory data. To deal with the volume of data in an energy-efficient way, scientists now believe that the human brain is designed to reject most of the sensory data and capture only data that reduces uncertainty. This capture we call perception. How the uncertainty is reduced is a theme throughout this essay. The reduction of uncertainty through this selection process indicates that the data processed is a probabilistic snapshot of the moment. For example, studies show that when a human is threatened — a tiger appears out of the jungle — the brain shuts down almost all analytical processing and focuses almost completely on perception, because focused perception increases the likelihood of finding the best escape route. Chances are the tiger is also in perception mode, given its need for food to create energy. You are probably the tiger’s lunch, but the human brain did the best it could.

Otto Schmitt was a biophysicist remembered for founding the fields of biomedical engineering and what became biomimicry. Biomimicry is the study of manmade outcomes modeled after natural processes. To understand the uncertainty of perception we need look no further than quantum theory, which explains physics at the particle or micro level. Newton described much of physics at the macro level and Einstein was perhaps the most notable contributor to what became quantum theory and physics at the micro level. “Briefly put, quantum theory offers a probabilistic account of how fundamental particles behave. For example, the theory describes a particle’s motion by modeling a distribution of possible paths it may take, rather than a single, deterministic path that it will take. The question this raises is how these multiple possible paths become the single path we actually observe. The most intuitive explanation, at least from a classical view, is that — like the statistical model of water molecules in a balloon — the probabilistic account of a particle’s path does not describe how the world really is, but is only an approximation for some underlying process whose mechanics are not directly observable.”[5]

Perception as described here, the statistical model, is a theoretical manmade construct; neuroscientists still have a long way to go to understand perception empirically. Quantum theory is the natural process which is fundamental to understanding all reality. If we accept quantum theory, we can apply biomimicry to realize that reality is based on the probability we find in quantum theory. So, at the most fundamental level of science, our perceptions are completely stochastic. The challenge in this logic is that almost no one thinks of their perception in statistical terms. Fortunately, we have information entropy.

In classical thermodynamics, entropy is the uncertainty in the universe and is constantly increasing. Uncertainty refers to the number of possible states, for an electron, a water molecule or even a manmade system such as a stock market or a messy bedroom. As stated above, Claude Shannon applied the concept of entropy to information. He defined information as the probability of a signal and not the meaning. Information is the variable signal that correlates with an outcome. The alternative, noise, is random interference without purpose or objective. This distinction between signal and noise we return to in the Concept on Assumption.

Information entropy formed the basis for Information Theory, which has become a transdisciplinary tool. Information Theory provided the theoretical basis for all digital information wherein waves were converted to 0/1 bits, a term Shannon coined. In other words, Shannon’s Information Theory launched the Digital Age of all digital media. Shannon’s work also contributed to applying information processing as a biological concept to explain adaptation by living organisms and inspired the mathematics that gives us so much understanding of the networks that link adaptive agents. Shannon’s concept of noise has also been applied to understand mutation. If genetics transfers information correctly, the signal produces the expected result. If genetics transfers noise, then the organism has a mutation which may or may not survive and be passed on.

At many levels we can now see that a human is just a stochastic outcome of an information processor. Our perception, our genetics, all of the various systems that are our makeup are random, non-linear processes. A more careful examination of these systems shows that there is a hierarchical logic to the systems. Cells make up organs that make up systems such as the digestive system that combine with other systems (reproductive, cardiovascular, etc.) to make up the human. However, the systems alone do not explain things such as consciousness. Where the whole is greater than the sum of the parts, we call this emergence. Emergence is a characteristic of every system where the components do not sum to the whole. Emergent systems are non-linear and random, non-linear by mathematical definition and random because of quantum theory. These systems, which explain all natural and manmade systems, are studied in complexity theory.

To translate all this physics to the practical, let’s look at uncertainty and the related concept of risk from the economist’s viewpoint. In 1921, then University of Chicago Professor Frank Knight published one of the most famous books in economics, “Risk, Uncertainty and Profit”. Knight defined uncertainty as a situation in which you lack the data to determine a probability distribution. In Shannon’s terms, without a determination of probability, all you have is “noise”. If one has sufficient data to determine a probability distribution, Shannon would say one has information, and Knight said these were situations with “risk”. In summary, if you had data to determine a probability distribution, you could measure risk. Without the probability, you had uncertainty. If we were to frame our epistemology and teaching to begin here — with probability, risk and uncertainty — would we not bring a more measured and thoughtful approach to risk, encourage the study of statistics at a much younger age and hopefully communicate that uncertainty is merely the absence of data?

Steven Pinker’s quote comes to mind:

“The … ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order.”

Pinker’s beneficial order is Shannon’s signal and Knight’s measured risk. All three scholars embrace the random, unpredictable nature of life. Bassem Hassan, a neurobiologist at the Paris Brain Institute, takes it a step further when he says we are just the combination of the stochastic events in our brain.[6] We will come back to this issue in the Concept of Assumption.

“The point here is that we do not actually “see” with our eyes but with our brain. And we have learned that in turn by becoming able to see how the brain operates. What we see with the eyes, it turns out, is less like a photograph than it is like a rapidly drawn sketch. Seeing the world is not about how we see but about what we make of what we see. We put together an understanding of the world that makes sense from what we already know or think we know.” — Nicholas Mirzoeff, How to See the World

Concept II Pattern Recognition

“A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas” — G.H. Hardy[7]

In the last Concept we talked about the capture of sensory data as perception. Cognitive psychology further defines perception as “the organization and explanation of sense information, and the process of acquiring the meanings of sense information.”[8] In order to achieve this organization humans use pattern recognition. “The human brain has evolved to recognize patterns, perhaps more than any other single function. Our brain is weak at processing logic, remembering facts, and making calculations, but pattern recognition is its deep core capability”.[9]

“What the mind is doing when it “recognizes” an image is not matching it against a database of static images. There is no such database in the brain. Instead, it is reconstructing that image on the fly, drawing on many conceptual levels, mixing and matching thousands of patterns at many levels of abstraction to see which ones fit the electric signals coming in through the retina… According to this model, recursively stepping through hierarchical lists of patterns constitutes the language of human thought.”[10] The outcome of the pattern recognition is the abstraction which provides the meaning.

Pattern recognition is not a single tool, approach or process. Research shows that it includes six techniques to analyze perception (and information), and frequently these techniques are combined. Why are multiple techniques applied? To reduce uncertainty and probably to provide redundancy. Put another way, we are looking for meaning. There is little value in recognizing a tail. There is much more value when we realize the tail is attached to a lion and not a mouse. (The tiger was busy having lunch.)

The six main theories of pattern recognition are template matching, prototype-matching, feature analysis, recognition-by-components theory, bottom-up and top-down processing, and Fourier analysis. What I note in this description of pattern recognition is that it is a nearly perfect toolkit to use in complex systems. The complexity of all natural systems is based on a component approach to create systems that are organized in hierarchies which eventually produce emergent properties. Component theory, bottom-up processing and prototype matching all sound strikingly similar to the vocabulary and behavior of complex systems. These concepts also explain creativity, which we discuss further in the Concept Analysis — Synthesis.
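
To make one of these techniques concrete, the toy sketch below (my own example) implements template matching as a sliding dot-product over a one-dimensional signal; the highest score marks where the stored pattern best fits the incoming data.

```python
import numpy as np

# a made-up incoming "signal" and a stored pattern (template) we hope to find in it
signal = np.array([0, 1, 4, 9, 4, 1, 0, 0, 1, 4, 9, 4, 1, 0], dtype=float)
template = np.array([1, 4, 9, 4, 1], dtype=float)

# slide the template across the signal and score each alignment with a dot product
scores = [float(np.dot(signal[i:i + len(template)], template))
          for i in range(len(signal) - len(template) + 1)]
best = int(np.argmax(scores))
print(f"best match starts at index {best}")  # expect index 1 (a second peak sits at 8)
```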

One of the most stunning examples of pattern recognition is mathematics. Perhaps we can attribute to Galileo the idea that “math is the science of patterns”. This analysis from the website Magnificent Math looks at many areas of math as patterns:

§ Arithmetic and Number Theory: The Pattern of Counting

§ Geometry: The Pattern of Shapes

§ Calculus: The Pattern of Motion

§ Logic: The Pattern of Reasoning

§ Probability: The Pattern of Chance

§ Topology: The Pattern of Closeness and Position

§ Fractal Geometry: Patterns of Self Similarity Found in the Natural World

Some would say that math helps us to find the patterns in the world. When we think of this powerful tool for pattern recognition, we realize that math is applied in a three-step approach (sketched in code after the list below):

1. Identify the appropriate set of rules from mathematics (calculus, logic, topology…)

2. Confirm the applicable rule(s) (equations, theories…) for the specific problem

3. Manipulate the rules and symbols to derive a predictive result
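
A toy illustration of the three steps, using made-up numbers and the sympy library: choose calculus as the relevant “pattern of motion”, apply the rule that velocity is the derivative of position, and manipulate the symbols to derive a prediction.

```python
import sympy as sp

t = sp.symbols('t')
position = 5 * t**2               # Step 1: the problem fits calculus, the "pattern of motion"
velocity = sp.diff(position, t)   # Step 2: the applicable rule, velocity = d(position)/dt
print(velocity)                   # 10*t
print(velocity.subs(t, 3))        # Step 3: manipulate the symbols to predict the value at t = 3: 30
```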

What math produces from pattern recognition are the abstractions: the concepts, theories, formulas and eventually models that enable us to make predictions and thereby reduce uncertainty. The review of the different types of math to finalize Step 1 demonstrates the value of math for pattern recognition and abstraction. The versatility of math may also explain why it is so often used in science to explain the theories (the abstraction) and clarify empirical validation. Richard Feynman, the Nobel Laureate physicist, explained it well.

“The method of guessing the equations seems to be a pretty effective way of guessing new laws. This shows again that mathematics is a deep way of expressing nature and attempts to express nature in philosophical principles or in seat-of-the-pants mechanical feelings is not an efficient way.”[11]

We return to abstract ideas in the Concept Abstract — Tangible.

If we look at AI and Machine Learning (ML) in particular, one of the most used algorithms precedes computing by one hundred and fifty years and is called linear regression. Basically, linear regression estimates the coefficients for the variables in an equation that explains a problem, in order to make accurate predictions. This fundamental algorithm of linear regression is now just computerized pattern recognition. In fact, much of Machine Learning and Natural Language Processing is just pattern recognition to derive new findings, the abstraction. Deep learning, neural networks, genetic algorithms and reinforcement learning, some of the most well-known approaches, are all different forms of pattern recognition. As artificial intelligence and computation advance in the 21st century, both in terms of algorithms and computing equipment, pattern recognition and abstraction will become even more valuable. We should formally teach the tools of pattern recognition and abstraction as part of an education and an introduction to computation.
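
As a minimal sketch of this point, with invented data, the example below fits a line by ordinary least squares using numpy; the “pattern” the algorithm recognizes is simply the pair of coefficients that best explains the observed points.

```python
import numpy as np

# made-up observations: y is roughly 3*x + 2 plus noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + rng.normal(scale=1.0, size=x.size)

# ordinary least squares: solve for the slope and intercept that best fit the data
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"recognized pattern: y ~ {slope:.2f} * x + {intercept:.2f}")
print(f"prediction at x = 12: {slope * 12 + intercept:.2f}")
```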

Concept III Explore — Exploit

“This interplay between exploration, by which new solutions are tested, and exploitation, by which the best solutions are multiplied and spread, is characteristic not only of evolution via natural selection but also of the way people, companies, and other institutions must allocate their time and effort to survive and thrive in an economy — which is to say that business and markets are shaped by many of the same evolutionary processes that shape the natural world.” — Simon Levin and Andrew Lo[12]

All living things are adaptive. They make better use of their resources through modifications of their behavior. These modifications are initiated by information processing, “how systems get information, how they incorporate that information in models of their surroundings, and how they make decisions on the basis of those models”.[13] Rich interconnections enable diverse components to interact in multi-causal, regenerative feedback loops. Iteration, the repeated interaction of diverse components following relatively simple rules and definable constraints, is a very important property of complex adaptive systems. These simple rules are called fractals. “Evolutionary forces carve out the most effective processes for a species [and] then repeat that process — thus, nature is commonly, organically generative. Fractals, the crux of fractal geometry, are infinitely complex [and] detailed patterns that are self-similar across different scales; they’re mathematical objects created by recursions of functions in the complex space.”[14] Fractals are generative or regenerative because they use the previous result as input to determine the next outcome and by this process they are iterative. While mechanistic in appearance, this behavior is still adaptive and a form of information processing. It is through this information processing that we more deeply understand pattern recognition. “This cyclical pattern of iteration within dynamic networks is where the novel forms, behaviors and properties emerge.”[15] The iteration and adaptation in behavior is the basis for the fundamental framework of action, which is called “explore and exploit”.

Exploration can also be explained by a principle in Shannon’s Information Theory — the occurrence of a highly likely outcome does not provide much information, whereas a highly unlikely outcome provides a great deal of information to an observer. The iteration continues until the emergence of order, which is the formation of the abstraction and the realization of knowledge. The realization of the abstraction triggers the possible transition to exploitation. Marvin Minsky, the renowned MIT researcher in AI and cognitive psychology, helps us to see this result and better tie pattern recognition and abstraction together.

“Abstraction in its main sense is a conceptual process where general rules and concepts are derived from the usage and classification of specific examples, literal (“real” or “concrete”) signifiers, first principles, or other methods… An “abstraction” is the outcome of this process — a concept that acts as a common noun for all subordinate concepts, and connects any related concepts as a group, field, or category.”

Perhaps an example will make the point clear. In an ant colony a small number of ants are assigned responsibility each morning to find food for the whole colony. This assignment or sharing of responsibility is probably the most energy-efficient approach for the colony as a whole. The assigned ants go out individually to explore, try many different paths to find food because they need to find sufficient food for the entire colony, and then return to the colony leaving a chemical trail behind as a road map for the others. The point at which the ant starts leaving a chemical trail is the point where exploration stops because the ant has achieved the “general rules and concepts” of abstraction — finding the food — and the exploitation can begin. The ant uses a very simple information processor for its exploration and abstraction determination. Entrepreneurs use the same process as ants, albeit with a more advanced processing capability. The good entrepreneur, as capital-efficiently as possible, explores the market and the possible customer segments to secure sufficient sales to achieve product-market fit. Product-market fit is the abstraction that signals to the entrepreneur to pour in the money, scale the business and achieve a large, sustainable business. A closer examination reveals that Thomas Kuhn’s scientific revolutions, Carlota Perez’s industrial revolutions and all natural systems follow the same model as the ant — explore-exploit.
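
One standard formalization of this trade-off, borrowed from reinforcement learning rather than from the ant colony itself, is the epsilon-greedy multi-armed bandit sketched below with made-up payoff probabilities: most of the time exploit the best-known option, occasionally explore a random one.

```python
import random

random.seed(42)
true_payoffs = [0.3, 0.5, 0.7]      # hidden "food sources"; the agent must discover the best one
estimates = [0.0, 0.0, 0.0]
pulls = [0, 0, 0]
epsilon = 0.1                        # fraction of time spent exploring

for _ in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore: try something at random
    else:
        arm = max(range(3), key=lambda i: estimates[i])  # exploit: best known option
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running average of observed payoffs
print("estimated payoffs:", [round(e, 2) for e in estimates])
print("times chosen:     ", pulls)   # the best option should dominate
```

The exploring fraction plays the role of the scout ants; the exploiting majority follows the chemical trail.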

To summarize, I turn to David Krakauer, evolutionary theorist and President of Santa Fe Institute (SFI):

Since total information represents the trade-off between rule-based and random behavior, it decreases monotonically as long as addition of extra rules to describe regularities of a system is more than compensated for by a decrease in the system’s apparent randomness. This property of total information means that a learning process that minimizes total information is in a sense optimal: arrival at a minimum of total information implies that one has obtained a complete description of the predictable features of a system and expressed this description in the most compact form.

The point to take away from the entire discussion of explore-exploit is that it is natural to be exploring and learning, that this learning takes place iteratively and that minimizing total information is “in a sense optimal” in order to have real understanding. This leads us to realize that we should be teaching iterative process, rather than punishing mistakes or grading responses, and this iteration — trial and error — is the natural way to learn (as Piaget taught us). More facilitated learning and less teaching would encourage this behavior in students.

Concept IV Assumption

“Whether you can observe a thing or not depends on the theory which you use. It is theory which decides what can be observed.” Albert Einstein

MIT Sloan is the business school at MIT, where I taught an IAP course for seven years every January. A new online course there is titled, “Questions Are the Answer: A Breakthrough Approach to Creative Problem Solving, Innovation, and Change”. Sounds exciting! They illustrate the focus of the course, with several examples, including the story of Salesforce:

“Marc Benioff launched Salesforce with a query, “Why are we still loading and upgrading software when we have the internet?” that created cloud-based software services.”

While MIT may think questions are the most effective means to create innovation, I think a more powerful technique is to focus on assumptions.

Karl Popper, a renowned 20th century philosopher of science, said, “We must distinguish between truth, which is objective and absolute, and certainty, which is subjective.” To define the terms further, “assumptions are factors that we believe to be true, although these factors are not confirmed to be true”. A premise is an assertion of fact yet to be proved.

Aristotle first presented the system of logic and reasoning. He taught us that an argument should be built on premises which, followed logically, lead step by step to a conclusion, and he taught us the accepted steps of reasoning. Frequently the premises are untested (hypotheses) or unproven — assumptions. Occasionally, the premises are true. The challenge comes in realizing the distinctions because the logical reasoning alone proves nothing. So rather than asking a question, we lay out the logic of an argument and test or substitute the assumptions or truths in the premises. And remember, there could always be hidden premises — enthymemes.

To understand the usefulness of assumptions, let’s look at an example. Suppose we want to launch a new medical diagnostic device. What are the key assumptions to be tested?

1. Does the device accurately screen for the disease? (problem)

2. Will doctors change their method of practice to include the new device? (solution)

3. Will FDA and Medicare approve the use of the device? (scale)

What the example illustrates is that to solve any problem, the solution always involves understanding the key assumptions about the problem, solution and scaling (in order to have social impact). If you cannot articulate these three key assumptions, you do not understand the problem and will never solve it. Frequently, the analysis of the three key assumptions leads to other key assumptions. To paraphrase Marc Andreessen, a well-known venture capitalist, entrepreneurship is really just validating assumptions. Solving any problem or scaling any solution really boils down to validating assumptions. Good thing we learned about the iterative approach of Explore-Exploit.

An important example of assumptions is the first principles of science and engineering. First principles are “a fundamental fact or conclusion that you know is true, deconstructing it down to its core elements, and working up from there… In even simpler terms, it’s a fact or premise or conclusion that is the only conclusion, regardless of your perspective.”[16] First principles are truths that have been mathematically and empirically validated. However, by the nature of science and the choice of perspective — macro (Newton) or micro (quantum mechanics) — the truths can change. This was Einstein’s point in the quote at the beginning of this Concept.

When you establish a new first principle in science or engineering, the results are life-changing. Einstein redefining time and space and Maxwell’s work on electromagnetism come to mind. As the Nobel Laureate physicist Richard Feynman put it, “a hundred years from now, the American Civil War would pale into provincial insignificance compared to that other development from the 1860s — the crafting of the basic equations of electromagnetism by James Clerk Maxwell. The former led to a new social contract for the United States; the latter underpins all of modern civilization — including politics, war and peace.”

This power of new first principles explains the powerful allure of research and discovery. The National Science Foundation explains transformative research as follows:

“Transformative research involves ideas, discoveries, or tools that radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education. Such research challenges current understanding or provides pathways to new frontiers.”

When we upend current first principles or assumptions and push out the frontiers of science, we are transformative. When we apply computational biology to biopharma and healthcare, we upset the time-tested methodology for validating a hypothesis and replace it with a combined mathematical and empirical approach that produces a truth directly. However, given the general uncertainty of all information, we should be slow to declare a truth with this method when it comes to biopharma, healthcare and the implications for human life.

Another way to think about an assumption is to consider it as “expected value”. When we discussed the generative fractal patterns in iteration, we saw the need to reduce uncertainty. One technique to reduce uncertainty is to use expected values. According to Oxford Languages, an expected value is “a predicted value of a variable, calculated as the sum of all possible values each multiplied by the probability of its occurrence”. This definition of expected value looks strikingly similar to Knight’s definition of risk, and therein lies my point. Assumptions can also be used to identify risk. When Andreessen talks about validating assumptions, he is really talking about de-risking the entrepreneurial venture.
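
A quick numeric sketch of the expected-value definition quoted above, using entirely invented probabilities: each possible value is weighted by its probability and the results are summed.

```python
# invented scenario: projected first-year revenue under three assumed outcomes
outcomes = {          # value: probability
    100_000: 0.5,     # base case
    250_000: 0.3,     # optimistic case
    0:       0.2,     # the venture fails
}

expected_value = sum(value * p for value, p in outcomes.items())
print(f"expected value: ${expected_value:,.0f}")   # $125,000
```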

An example from sizing an opportunity — whether it be social, commercial or research — illustrates another point about assumptions and risk. We can size an opportunity or market from the top down or the bottom up. Professionals almost always work bottom-up. That means they might start with the size of the opportunity in Miami-Dade County, then add counties to get a Florida total and then do other states the same way to get to a three-year estimate for a market. If they estimate one county wrong, it has a negligible effect because the approach is additive. If they had started with the total opportunity for the United States, assumed the users involved nationally, then assumed users for each of three states including Florida, we would have five chances to use the wrong assumption and produce a result of little value. This is the multiplicative fallacy. The inter-relationship of the assumptions makes the top-down approach a powerful tool for being very wrong. Use an additive approach to reduce the risk of assumption.
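
The compounding of assumption error can be shown numerically. In the sketch below, with entirely invented numbers, each of five chained top-down assumptions is 20 percent too high, while the bottom-up estimate adds five county figures that are each 20 percent too high; the multiplicative path lands far further from the truth.

```python
true_value = 1_000_000          # the "real" market size, in dollars

# top-down: five chained assumptions, each overestimated by 20%
top_down = float(true_value)
for _ in range(5):
    top_down *= 1.20            # errors multiply through each assumption
print(f"top-down estimate:  {top_down:,.0f}  ({top_down / true_value - 1:+.0%} off)")

# bottom-up: five county estimates of 200,000 each, each overestimated by 20%
bottom_up = sum(200_000 * 1.20 for _ in range(5))   # errors simply add
print(f"bottom-up estimate: {bottom_up:,.0f}  ({bottom_up / true_value - 1:+.0%} off)")
# the multiplicative path is roughly 149% too high; the additive path only 20%
```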

As my last point on assumptions, let’s return to complexity. Complex systems have units, agents or organisms that have dynamic, nonlinear, unpredictable behaviors. There are three levels of agents or units that are specific to each system — 1) the organism level, 2) the behavior level (inter-organism) and 3) the ecosystem level.[17] Now here is the quiz question — how do you define the organism or organisms in such a 3-tier ecosystem? One might quickly answer that organisms have two features — the ability to reproduce and the ability to process information. Good answer, but not necessarily fully responsive. Do you think of yourself simply as an information processor? Probably not. There are spatio-temporal boundaries that we apply to identify agents or units. This approach comes from the pattern recognition we use constantly to reduce uncertainty. The pattern recognition is defining an “object” spatio-temporally. However, this is easier said than done. The best definition for a boundary that I have found, ignoring chemistry definitions which are not practical, is A or not A. It is me or it is not. It is a beetle or it is part of the beetle’s environment.

What this exercise makes clear, again, is the importance of assumption in definition. How do we define the agent or unit in the problem or opportunity? Dr. Fauci, the now famous face of the federal COVID-19 response, illustrates the point about assumption and definition. Fauci cautions that we must not consider the COVID-19 pandemic over when new cases decline. We also have to consider devastating long-term health effects, and he argues that we must address the long-neglected field of post-viral illness. It is long neglected because of an assumption about the boundary of the disease, or how we define the disease. A perfect example of the totality of spatio-temporal factors.

Over 2,300 years ago the great polymath Aristotle developed formal logic, which provided the foundation for symbolic logic, mathematics, computer science and deductive reasoning. Fundamental to all these fields are the concepts of premise and assumption. To properly understand the powerful tools of the 21st century — mathematics, computer science, AI — we should all formally study symbolic logic. Such an approach would improve deductive reasoning, the de-risking of assumptions and definition.
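
As a small taste of what formal study of symbolic logic buys, here is a sketch using the sympy library (one of several ways to do this): encode the premises, then check whether the conclusion can fail while the premises hold, which is the test of a valid argument.

```python
from sympy import symbols
from sympy.logic.boolalg import Implies, And, Not
from sympy.logic.inference import satisfiable

P, Q, R = symbols('P Q R')   # P: "it is raining", Q: "the ground is wet", R: "the game is cancelled"

premises = And(Implies(P, Q),    # premise 1: if it is raining, the ground is wet
               Implies(Q, R),    # premise 2: if the ground is wet, the game is cancelled
               P)                # premise 3: it is raining
conclusion = R                   # claimed conclusion: the game is cancelled

# the argument is valid iff "premises true AND conclusion false" has no satisfying assignment
counterexample = satisfiable(And(premises, Not(conclusion)))
print("valid argument" if counterexample is False else f"invalid, counterexample: {counterexample}")
```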

“Today I escaped from the crush of circumstances, or better put, I threw them out, for the crush wasn’t from outside me but in my own assumptions.” — Marcus Aurelius

Concept V Analysis — Synthesis

“Technical skill is mastery of complexity, while creativity is mastery of simplicity.” Christopher Zeeman

Herbert Simon was a notable 20th-century polymath, comparable in my opinion to John von Neumann. Simon was a Nobel Prize winner in economics, one of the early inventors of AI, an early writer in the 1960s on complexity, a leading thinker on decision making and the father of modern design theory and design research. Simon’s book, “The Sciences of the Artificial”[18] (1969), defined design and presented the very useful framework — Analysis-Synthesis. This thinking was based in part on an earlier article by Simon (1962), “The Architecture of Complexity”, wherein he discusses “the search for common properties among diverse kinds of complex systems”. Analysis is the study of natural systems and synthesis is the study of artificial or designed systems, either of which could be complex for Simon. For Simon and most researchers, complex systems are hierarchical — “the complex system being composed of subsystems that, in turn, have their own subsystems, and so on”[19]. It is in this understanding of hierarchy that we draw our insights about Analysis-Synthesis.

Galileo is perhaps the father of the Scientific Revolution, which began in the mid-16th century. His conjecture that math was not divine but merely abstract provided the foundation for macroscopic, top-down physics. Descartes’ writing and Newton’s science cemented in place this macro perspective in science for about four hundred years, until the 1930s and the acceptance of quantum mechanics. This macro perspective approach to science was so popular partly because it framed reality as mechanistic, deterministic and predictable. Secondly, the top-down approach mirrored the hierarchical systems of nature. Whether it be ants, trees or humans, it was easy to identify the units (trees), the sub-systems (leaves) and the components (stems). This component approach worked well with the universal laws being discovered, such as Newton’s gravity. These laws provided a top-down causality to explain the behavior of components at each level of the hierarchy in a system. This order from top-down causality could be explained analytically by mathematics and therefore was logical and rigorous. This analytical, component approach came to be called “reductionism”. Quantum mechanics reversed this approach to science to a microscopic, bottom-up strategy and helped to usher in Simon’s Synthesis.

Design and the more technical engineering were problem-solving techniques for Simon. Design is a search through an environment for the components that produce the best solution for the desired outcome. The “search through the environment” … to “produce the best solution” is the Synthesis. Simon’s interest in AI was to provide enhanced computational approaches to improve the discovery and selection of components to optimize the result. Simon’s Nobel Prize-winning work on “bounded rationality” demonstrated that, with the information available, a decision maker can at best produce a satisfactory (“satisficing”) result rather than a true maximum.

Simon’s approach closely mirrors the theory of complexity. Simon explains design:

“A natural science is a body of knowledge about some class of things (objects or phenomena) in the world: about the characteristics and properties that they have; about how they behave and interact with each other … For when we have explained the wonderful, unmasked the hidden pattern, a new wonder arises at how complexity was woven out of simplicity. The aesthetics of natural science and mathematics is at one with the aesthetics of music and painting; both inhere in the discovery of a partially concealed pattern.”

“Woven out of simplicity” is the component, bottom-up, synthetic view of design for Simon. (The “partially concealed pattern” for Simon is the abstract in Abstract-Tangible that is discussed in the next Concept.) This foundation in complexity is made ever more clear in the following passage from Simon’s “The Sciences of the Artificial”:

“An artifact can be thought of as a meeting point, an “interface” in today’s terms, between an “inner” environment, the substance and organization of the artifact itself, and an “outer” environment, the surroundings in which it operates. The outer environment determines the conditions for goal attainment. If the inner system is properly designed, it will be adapted to the outer environment, so that its behavior will be determined in large part by the behavior of the latter, exactly as in the case of “economic man.”

While much of Simon’s writing closely follows the vocabulary of the natural sciences, it is obvious that design and engineering are artificial, manmade pursuits. Andy Haldane, Chief Economist of the Bank of England, makes this point clear in advocating for “resolution” beginning at the micro-level:

“Our economies, like our politics, are local. Like the seashore, the more you magnify an economy, the greater its richness, complexity, self-similarity. Like our bodies, understanding our economic health means taking readings at many resolutions. It means understanding the moving body parts, and their interactions, in microscopic detail. It calls for new data, at a higher frequency and higher resolution, and new ways of stitching it together. It means making micro-to-macro a reality.”[20]

Simon’s writing on design defined modern design. However, it should be remembered that for Simon design applies to the creation of artefacts, compositions, processes and methods. We do not design the complex natural systems from which the micro to macro, hierarchical logic is derived. Such complex systems are non-linear and stochastic and the best we can do is to capture these features in computation. This is the reason why it is so hard to predict behavior in complex manmade systems, such as stock markets and cities.

The power of Simon’s approach to bottom up, synthetic design is demonstrated by two facts. The selection of components to create the design is consistent with Einstein’s definition of creativity, “combinatorial play”, which I find to be the best way to think about creativity. Creativity produces the alternative designs from the components. Descartes and Poincaré both point out that invention is the insightful, intuitive selection from the alternatives.

Perhaps more important, Simon’s component approach is consistent with the particle approach of quantum mechanics, which revolutionized the approach to science in the 20th century. Quantum mechanics adopts a bottom up, component hierarchy beginning with stochastic particle behavior to explain reality. Simon’s interest in applying computational approaches to design even allows for the statistical causality and probabilistic approach fundamental to quantum mechanics.

The lesson we should take from Analysis-Synthesis is that the determinism of reductionism in science has been replaced by quantum mechanics and complexity, both of which are deeply grounded in probability. We should recognize the hierarchical nature of natural systems, manmade systems and design, and teach statistical concepts widely as a fundamental part of human knowledge.

“A complex system that works is invariably found to have evolved from a simple system that worked … A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” — Gall’s Law (1975)

“Most of the powerful ideas are in the designs of the combinations.” — Alan Kay

Concept VI Abstract — Tangible

“Scientific knowledge can be defined in terms of a duality between mathematics (mind) and observation (phenomena). More precisely, it both requires and provides a specifically defined link between mind and phenomena.” Edward R. Dougherty[21]

I have never understood art or literary fiction, and I am only a bit better with music. Finding Schopenhauer’s insight, to paraphrase, that “art makes the emotions tangible”, however, was one of the happier days of my life. It gave me a gateway into this rich part of culture I had never understood. However, rather than becoming a fanatical student of the arts, I became a motivated investigator of the relationship between abstract and tangible. Then I came across a fact — man first demonstrated abstract thought about forty thousand years ago with the first cave drawings. Just as Schopenhauer made clear to me. The third fact that cemented my fascination with abstract-tangible (and led to this writing) was the realization that entrepreneurship fails if it does not affect the tangible. To this longtime professor of entrepreneurship, it was fascinating to put entrepreneurship in these terms. Entrepreneurship has to impact the tangible — time, space, matter and energy. Much of the challenge in mental health and social problems, for example, comes about because the solutions are not tangible — they never change people's behavior!

If we return to the Krebs Creativity Cycle, I think Schopenhauer showed us the abstract-tangible relationship in Art. Now I would like to discuss Science. Depending on the science, the current scientific method is either theory>empirical validation or theory>mathematical validation>empirical validation. For example, biology used to be theory>empirical validation. Today, increasingly the method is theory>computational modeling>empirical validation. In physics Einstein’s Theory of General Relativity shows us the method. First theory, then mathematical validation (and article) and then confirmation by Eddington in 1919 using a total solar eclipse. The theory and mathematical validation are the use of abstraction and the empirical validation in the scientific method shows us the tangible outcome.

If we return to the idea of pattern recognition, we encounter the concept of the abstract, and the uncertainty reduction is the quest for an understanding of the tangible. The perceptions are compared to the existing patterns, the archive of patterns, which are abstractions. We process through pattern recognition to formulate a view of the tangible. According to the eminent computer scientist and complexity theorist John Holland, when we apply induction to the pattern recognition the result is emergence. This induction can also be considered an example of the selection (the intuition or inventive choosing) from the creative alternatives.

Also, it should be noted that abstraction in pattern recognition is a hierarchical concept. For example, in the process of pattern recognition we can begin with the abstraction “living things”, then we have animals, then we have jaguars, then we have cute baby jaguars. This hierarchy provides a type of context which saves energy and increases the speed to resolution in the pattern recognition. This approach works in any ontology, whether it be natural or manmade, which shows the versatility and utility of pattern recognition and abstraction.
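
A toy sketch of my own showing how a hierarchical ontology narrows pattern recognition: each level rules out most of the remaining candidates, so only a short chain of cheap comparisons is needed to reach the target abstraction.

```python
# a tiny hand-built ontology: each node lists its children
ontology = {
    "living things": ["plants", "animals"],
    "animals": ["birds", "mammals"],
    "mammals": ["jaguars", "whales"],
    "jaguars": ["cute baby jaguars"],
}

def path_to(target: str, node: str = "living things", trail=None):
    # depth-first walk that records the chain of abstractions leading to the target
    trail = (trail or []) + [node]
    if node == target:
        return trail
    for child in ontology.get(node, []):
        found = path_to(target, child, trail)
        if found:
            return found
    return None

print(" > ".join(path_to("cute baby jaguars")))
# living things > animals > mammals > jaguars > cute baby jaguars
```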

Another interesting way to think about abstraction comes from the Berkeley architect and design theorist Christopher Alexander. One summary of his book, Notes on the Synthesis of Form, describes his approach this way:

“Design isn’t mystical or intuitive, Alexander wrote. It’s more like math: as a mathematician does when calculating the seventh root of a fifty-digit number, a designer should simply write a problem down and break it into smaller problems. Then those problems can be reorganized into sets and subsets and patterns, which point to the right solution.”

In contrast to Simon’s synthesis approach to design, Alexander begins with an abstraction — the problem — and then systematically decomposes it into smaller and smaller abstract problems. I love the idea of thinking of design as math, and I think Simon would have agreed.

As the discussion of design makes clear, the abstract-tangible framework starts with the abstract problem and creates the tangible solution from the analysis of components. Many have taught us the power of design, such as Steve Jobs, Milton Glaser and David Kelley, to name a few. In this 21st century, the design of algorithms will be a hot new field. We need to teach all students design, the power of problem-solving from the abstract — tangible perspective and the use of components as an approach to synthetic design.

“Creativity constrained by logic and a set of axioms dictates how ideas can be manipulated and combined to reveal unshakable truths.” — Brian Greene, “Until the End of Time”

Concept VII Asymmetry of Information

“Designing for inclusion starts with recognizing exclusion.”[22] Kat Holmes, Designer

Information asymmetry deals with decision making where one party has better information than another. In economics it has been applied to explain market failures. For example, the continuing prioritization of shareholders ahead of the environment is a failure of financial markets to lower valuations for the errant. I first came across asymmetry of information reading the Nobel Economist Michael Spence. In his book, “The Next Convergence: The Future of Economic Growth in a Multispeed World”, he demonstrates the link between information theory and economics; he and others won Nobel Prizes in Economics for their work on information asymmetry. Of course, this is the same information theory that Claude Shannon proposed. Basically, Spence says that poverty is caused by an asymmetry of information. The poor lack information, which allows others to take advantage of them, thereby keeping them poor. This approach to understanding social problems — asymmetry of information — is a powerful analytical tool. For example, did you know that seventy percent of poor people do not know that banks loan money? Perhaps now you know why so many minority businesses are sole proprietorships. They do not know where to find expansion capital. This fact also explains why microfinance has been so successful around the world.

Staying with economics, asymmetry of information also explains entrepreneurship. According to Israel Kirzner, a leading academic writer on entrepreneurship, the entrepreneur sees opportunities that others do not see. In this case, the asymmetry is positive. In poverty the asymmetry is negative. I find it fascinating that poverty and entrepreneurship can fundamentally be explained by the same framework of asymmetry of information. But, do not tell anybody — this is a secret. Alternatively, if you want to solve a social problem, first find the negative asymmetry of information. For information theory to bring insight to poverty and social problems demonstrates to me the versatility of this transdisciplinary framework.

As we look at the 21st century and the increasing amounts of information, now measured in exabytes (one billion gigabytes), we realize that access to information becomes even more important lest one suffer from an asymmetry of information. Complicating the problem is that we no longer just need access to information but also the tools to process information at this scale. We must provide public access to tools similar to Google Cloud Platform (GCP) or AWS[23] as a fundamental right comparable to freedom of speech. We also need to train every citizen in basic computational skills and tools so they can process, analyze and predict from this growing information and data. Failure to do so will jeopardize democracy by further increasing the asymmetry of information.

Frameworks

“Your job in a world of intelligent machines is to keep making sure they do what you want, both at the input (setting the goals) and at the output (checking that you got what you asked for).”

— Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine…

Fritjof Capra’s framework of nature-design-technology, the first of the two forward-looking frameworks, advocates for a systems thinking approach where the whole is considered as the emergent qualities derived from the complexity of the components. As Capra explains, “Systems thinking means a shift of perception from material objects and structures to the nonmaterial processes and patterns of organization that represent the very essence of life.” What concerns Capra is that the very essence of life is now in doubt as we continue to ignore the natural environment, put shareholder returns ahead of all other social benefits and live for further abundance with no recognition of Arthur Eddington’s default natural state of scarcity.

Inherent in the “essence of life” are the material components of nature. These components form systems, which in turn create larger systems in a hierarchical manner repeated throughout nature. This hierarchical structure should remind you of Simon’s synthesis and helps to explain why design is now shaping biology as an engineering science. How can biology be understood in terms of design or engineering? While evolution may have dictated the original organization of components such as cells or organelles, we now have tools such as CRISPR to rearrange them and AI to propose the most effective new combinations. Now we have biological engineering, computational biology and synthetic biology, all essentially the same thing, computation, with the label depending only on your academic training. These new fields bring the concepts of engineering and design to biology through the application of AI. If you find this idea far-fetched, this history lesson from Scientific American makes a compelling case.

‘In fact, we have been building and designing tools to control, augment, replace or enhance biology as long as humanity itself has existed — whether it’s taming the jungle to build habitable villages; halting and containing infection; making advanced prosthetics for people who lost their limbs; making synthetic drugs to replace defective parts; or now, even creating functionality that nature never had. We can do this because we empirically learned the properties of those materials, then iterated, designed, and built new structures with them. There is no reason why we should not continue be able to do so for our medicines and bodies.’

Carnegie Mellon University describes the current areas of research in computational biology[24] as follows (a short illustrative sketch appears after the list):

· Analysis of protein and nucleic acid structure and function

· Gene and protein sequence

· Evolutionary genomics and proteomics

· Population genomics

· Regulatory and metabolic networks

· Biomedical image analysis and modeling

· Gene-disease associations, and

· Development and spread of disease
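To make one of these research areas a little more concrete, here is a small, self-contained sketch of the most elementary kind of sequence analysis: computing GC content and counting mismatches between two short DNA strings. The sequences are made up for illustration and bear no relation to real genes; actual research uses far more sophisticated statistical and machine learning methods.

```python
def gc_content(seq: str) -> float:
    """Fraction of bases that are G or C, a basic property of a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def hamming_distance(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    return sum(x != y for x, y in zip(a, b))

# Two invented DNA fragments, for illustration only.
seq1 = "ATGGCGTACGTTAGC"
seq2 = "ATGGCGTACCTTAGG"

print(f"GC content of seq1: {gc_content(seq1):.2f}")
print(f"GC content of seq2: {gc_content(seq2):.2f}")
print(f"Mismatches between them: {hamming_distance(seq1, seq2)}")
```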

All of this discussion of biology and computation should make clear an important point: computation may enable us to make enormous strides in understanding life and the human mind. With life, remember to reframe the problem. The problem is not to live forever; the problem is “not to die”, and computation is helping us advance rapidly toward that goal. With the mind, I see computation being applied to neuroscience through image analysis and psychology. The work of researchers at my university alone convinces me that within five years we will capture data on smart watches, analyze it with AI/ML and use the findings to deliver just-in-time interventions to help the autistic, protect children from the psychological effects of family conflict and address depression.
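As a purely illustrative sketch of that capture-analyze-intervene loop, the code below simulates smartwatch heart-rate readings and flags readings that drift far above a rolling baseline. The data, the 30-minute window and the 20 bpm threshold are all invented; a real just-in-time intervention system would rest on validated clinical models and devices, not this toy rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Capture: simulated heart-rate samples (beats per minute), one per minute.
heart_rate = rng.normal(72, 4, 120)
heart_rate[90:100] += 35           # inject a hypothetical stress episode

# Analyze: compare each reading to a rolling baseline of the previous 30 minutes.
window = 30
alerts = []
for t in range(window, len(heart_rate)):
    baseline = heart_rate[t - window:t].mean()
    if heart_rate[t] - baseline > 20:      # toy threshold, not clinically validated
        alerts.append(t)

# Intervene: in a real system this would trigger a notification or check-in.
print(f"Minutes flagged for a just-in-time check-in: {alerts}")
```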

What design will become in the 21st century is the means to launch the second form of cognition, based on AI. This new “design” will reshape how we think and redefine research by offering an alternative approach to Piaget’s seminal findings on the process that explains the creation of new ideas. Piaget documented a process: passion-discovery-creativity-invention. Passion is Freud’s fundamental notion of energy. Discovery is the exploration phase we see in all natural systems. Creativity is Einstein’s play of combining the components into what Linus Pauling called the ideas. Invention is what the famous French mathematician Henri Poincaré first called the insight, intuition or “picking” among the ideas, before the chosen idea is made tangible as an invention. (For those interested in entrepreneurship and innovation, the renowned economist Joseph Schumpeter taught us that innovation is invention commercialized in the market, and I complete the steps by simply defining entrepreneurship as innovation at scale. Voila!)

Design will help us to shape the initiating passion into the questions to be explored, setting aside centuries of status quo and creating the value that will continue to give human cognition its invaluable perspective. If AI keeps advancing, it may well take over the discovery and creativity phases and even the invention in some fields like biology. The “battle” will be whether humans or AI designs the algorithms to shape and direct the discovery-creativity-invention steps in the process. I am betting on the humans.

One technology that will accelerate these changes is the emergence of cyber-physical systems. These systems link devices inside the body to the means to export their data to another device, perhaps in real time, which can then be reported to the patient, a doctor or a medical technician. For example, imagine your pacemaker is internally connected to a port under your arm. Your cell phone has an app that automatically collects the data through the port and transfers it to the cell phone screen and the doctor’s office. This type of data collection is what is driving telemedicine (the reporting to the doctor) and digital health (the diagnosis and opportunity for clinical intervention by the doctor or their staff).

These cyber-physical systems are one example of what is more generally referred to as IoT (the Internet of Things). In a typical IoT system, sensors are positioned to capture real-time information on any device or status one wishes to monitor. These are the systems that produce what people call “big data” because of the volumes of data captured in real time, which we no longer measure in terabytes or even petabytes but now describe in exabytes. I think the combination of big data, IoT and AI will be the first technology paradigm of the 4th Industrial Revolution, although I personally think we need better tools, such as quantum computing, and better database infrastructure to fully realize the potential benefits of the data.
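As a minimal, hypothetical illustration of that IoT pattern, the sketch below has a simulated sensor emit timestamped readings that a collector aggregates, much as a phone app or cloud service might before forwarding a summary to a clinician. The device name, reading values and structure are all invented for illustration.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    timestamp: float
    value: float        # e.g. a reported heart rate

def sensor(device_id: str, n: int):
    """Simulate a cyber-physical device emitting periodic readings."""
    for _ in range(n):
        yield Reading(device_id, time.time(), random.gauss(70, 5))

def collector(readings):
    """Aggregate readings as a phone app or cloud service might."""
    values = [r.value for r in readings]
    return {"count": len(values),
            "mean": sum(values) / len(values),
            "max": max(values)}

summary = collector(list(sensor("pacemaker-01", 10)))
print(summary)   # in practice this summary would be forwarded to the clinician
```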

In Capra’s view new emphasis needs to be given to natural systems, complexity, information networks and patterns of organization, leading to a novel kind of “systemic” thinking about the whole. However, I think we need the second framework — cognition-complexity-computation — to bring attention to the critical intellectual issues for humanity. Capra by training is a physicist and spent much of his career at Berkeley doing research in biology. It is not surprising that his solution to the imminent environmental problems would be heavily weighted toward the sciences. However, we know that great science requires great humanity and that is why I added the second framework.

When I think about humanity I tend to frame the problem in terms of government and individual empowerment, not far from the complexity view of community and agents. My view of government is indebted to Spinoza, Hobbes and Locke. My view of individual empowerment is pure Hayek and the Austrian School of Economics. Government’s role is to provide the minimum services required for the individual to flourish. Individual empowerment enables each person to achieve their individual social, economic and political goals and benefits. (Game theory, behavioral economics and some legal frameworks explain why self-interest does not run wild.) In order for individual empowerment to flourish one needs to satisfy four conditions:

1. Self-esteem

2. Education

3. Social Inclusion

4. Access to information

In satisfying these four conditions we introduce humanity. One might also realize that I have framed four important social issues of the 21st century.[25] Self-esteem might be rephrased as wellness or mindfulness. Education is fundamental to uplifting anyone and provides the skills to use information (condition 4). Social inclusion is the positive phrasing for addressing discrimination, whether based on class, economics, gender, race or religion. Access to information we have discussed at length already, but here I draw attention to “access” and the need for more low-cost satellite Internet access or alternative means of access. As I said earlier, overcoming the asymmetry of information is the first step in understanding any social problem.

To improve self-esteem and create a society with social inclusion, I think the path forward is through better education. I have spent the last fifteen years working in education, teaching at three universities (FIU, MIT and UMiami), running an education project at the MIT Media Lab (One Laptop per Child, “OLPC”) and teaching the Goldman Sachs 10KSB national program at Babson and the local Miami program at MDC. The MIT Media Lab was the transformative experience for me, working with some of the world’s leading researchers on self-directed, computer-based child learning. This approach to learning is perfectly consistent with individual empowerment and, of course, with the self-directed agents in complex systems. However, I think the most important issue in education in the 21st century will turn out to be curriculum: did we prepare students to understand the issues created by the new technologies, including the related ethical and legal questions?

Modern schools were designed in the early 1800s to prepare workers for factory jobs. The basic curriculum was crafted to serve that purpose, before the harnessing of electricity and the advent of modern medicine, computers, the Internet, quantum mechanics, information theory or complexity theory. Perhaps education has been partially updated for electricity, medicine and computers, but how many high school students (or PhDs) can speak knowledgeably about James Clerk Maxwell, Marie Curie or Claude Shannon? As we approach what could be a second Renaissance, I think we need to update education for the last two hundred years of science, engineering, math and statistics. There are two important reasons to do so.

1. For the first time in history, government officials and decision makers have little real understanding of the latest technology. It was not too hard to understand electricity and light bulbs, but there is no sensory experience to help one understand computational biology or quantum cybersecurity.

2. The advances in science, engineering and technology will raise legal and moral issues which cannot be anticipated and understood without a much higher level of learning in the general population and especially the leaders of industry, foundations and universities. From the simpler issues of ownership of personal data and health records to the more challenging issues of genetic engineering, the path is perilous and uncharted. In fact, our complete ineptitude with the environment would suggest we stop all work that can change DNA in any living organism anywhere. Facing patient death, such an approach looks extreme. Facing human extinction, maybe we should pause and reconsider.

Edward Frenkel, the Berkeley math professor, explains well why we also need more math to understand all of this science.

“As Galileo famously said, “The laws of Nature are written in the language of mathematics.” Math is a way to describe reality and figure out how the world works, a universal language that has become the gold standard of truth. In our world, increasingly driven by science and technology, mathematics is becoming, ever more, the source of power, wealth, and progress. Hence those who are fluent in this new language will be on the cutting edge of progress.”[26]

And, of course, one cannot do the important work of designing the algorithms and the models and simulations without applied mathematics. If this type of design is to remain the province of man we need to increase the learning of math and statistics. AI is already making progress on designing its own algorithms.

This math is also required as we push out the frontiers of computer science and information management. Quantum computing, graph databases and topological data analysis, to name a few fields, all require even more advanced math. Frenkel makes another interesting point about the relationship between mathematics and information management.

“One of the key functions of mathematics is the ordering of information. This is what distinguishes the brush strokes of Van Gogh from a mere blob of paint. With the advent of 3D printing, the reality we are used to is undergoing a radical transformation: everything is migrating from the sphere of physical objects to the sphere of information and data. We will soon be able to convert information into matter on demand by using 3D printers just as easily as we now convert a PDF file into a book or an MP3 file into a piece of music. In this brave new world, the role of mathematics will become even more central: as the way to organize and order information, and as the means to facilitate the conversion of information into physical reality.” (abstract to tangible, again — author’s note)

We need to realize that physical objects store information, a principle closely tied to entropy. An apple is insurance against starvation. Modern science, with its AI/ML-driven computational examination of the components of nature at the most micro level, is enabling us to unpack this information and explore nature at a whole new level.

All of this applied mathematics, combined with AI, is changing what the corporate consulting firm BCG calls the “rate of learning”. BCG actually sees the rate of learning as a competitive advantage for a corporation. The “exponentially expanding amount of available data” requires “ultra-high throughput analysis” in the “engineering design-build-test-learn cycle”. Successful innovation will now be even more dependent on speed and timing. Managing ultra-high throughput will also be a requirement for the successful individual, who will need to manage the discovery-creativity-invention part of the innovation process in new ways. It will no longer be human versus human; it will be the human plus the best available ultra-high throughput infrastructure that shapes the competition. Research universities should take note.

This ultra-high throughput concept also applies to how we develop students from a young age. Perhaps we need to teach Google search the same way we teach reading. The validity, or truth, of information is becoming more important, and teaching approaches to validating an author or a source is now critical, particularly as much of the popular press has abandoned its role of objective reporting. Tools for bibliography hacking and keyword checking of articles, such as genei.io, are becoming increasingly popular, which suggests that in the twelve years I have been tracking information curation we are finally making some progress. Next, someone will design the Zoom of note management and I can retire my Evernote and Keep apps. Such practical tools for learning may not be consistent with the theory of education, but we should remember what Einstein said: “Never memorize something that you can look up.” Finding the line between what must be memorized and what can be looked up will become an important point in education as information search and curation advance, all the more so as we recognize the increasing importance of mathematics and the need to advance mathematics education beyond the memorization of techniques.

It is well documented that very successful people read at least two to three hours per day, the same way they exercise and eat right. We need to teach the discipline of lifelong daily reading as self-directed learning at a young age (10–11), starting with a child’s passion. The first subject is not so important, because over time a second and third domain will be adopted. This concentration is a natural tendency caused by chunking, a process whereby the brain naturally integrates new information with old. For a new paradigm of self-directed learning to emerge, we must also reposition education from the current backward-looking, historical perspective to a futurist perspective of natural exploration, discovery and creativity. This reorientation will also help students appreciate the predictive approaches of computation and, in turn, the predictive value of information.

Another skill that needs to be taught is decision making. I first came across decision making reading the Nobel laureates Herbert Simon and Elinor Ostrom. However, neither of them ever became as popular as another Nobel laureate, Daniel Kahneman (Thinking, Fast and Slow). I find the teaching of decision making to be languishing, except for the recent efforts to introduce mental models. Mental models are frameworks that can be reused to make better decisions, and there are hundreds of them. Personally, I find mental models extremely useful on an almost daily basis. My three favorites are: 1) “think like a physicist” and start with the solution, then work back to the starting point; 2) understand the key assumptions in the problem; 3) understand the asymmetry of information in the problem.

Mental models have been promoted by two legendary billionaires, Charlie Munger and Ray Dalio. Munger is of course Warren Buffett’s partner and Dalio is the founder of the successful hedge fund Bridgewater. The popular blog Farnam Street has also popularized mental models for years. My undergraduate students love mental models, and I teach them in a course titled “Thinking, Design and Impossible Problems”. It is my most popular course. We should start teaching mental models, or perhaps symbolic logic, with students at about 10–11 years old.

A place where the market is ahead of traditional education is in the multiple formats for courses (workshops, badges, courses and degrees), all offered by non-traditional providers. Three factors explain the popularity: 1) remote format; 2) flexible scheduling that offers the same subject in four hours, two days, six weeks or a full semester; 3) better quality curriculum and teachers. This last point is where I think we will see the biggest impact. If Harvard or Stanford offers an online Bachelor’s degree priced to compete with the state university system, I think we are in for a disruption to higher education. Another interesting idea is the Master’s in data science for financial engineering offered by WorldQuant[27]. WorldQuant is a quantitative hedge fund that offers a free Master’s in order to develop the firm’s new trading strategies. I think high school students with good math training could skip their undergraduate degree, avoid repeating courses two and three times, and start their education at WorldQuant. Similarly, five Fortune 500 companies, say in Minneapolis, could band together, find a source of accreditation, and offer a data science master’s to the 100 brightest minority high school students in Minnesota. Now we are being disruptive and increasing social inclusion by helping minority students prepare for the most attractive positions in the 21st century. Maybe pick out the students in 10th grade to ensure they are properly prepared. There are many other ways to creatively address curriculum. We just need the desire and foresight.

Conclusion

“Behind the cotton wool is hidden a pattern… the whole world is a work of art… there is no Shakespeare… no Beethoven… no God; we are the words; we are the music; we are the thing itself.” — Virginia Woolf.

Education today was largely shaped by the thinking of the late 18th and early 19th century. Knowledge at that time was still in large part shaped by the thinking of Aristotle, Newton and Descartes. At best, we could say that math, physics, the scientific method and formal logic were all in their earliest modern stages of development. A curriculum to enable creativity, vision, and innovation needs to recognize the last two hundred years of advances in science, technology and math and prepare students accordingly to be productive. If we take these advances and combine them with the better understanding of cognitive psychology that they facilitated, we have the foundation for a better future and the survival of mankind.

If I were to look for a positive closing idea, I would place my faith in the geniuses: the Newtons, the Einsteins, the Herbert Simons, the Marvin Minskys. No one forecast the nature or the scale of the impact of their ideas, and therein lies my hope for humanity. Hopefully, I have shown that I believe in the Individual, a Renaissance and Computation to successfully and fairly shape our future. As we come to better understand Complex Adaptive Systems, we will realize that the individual in a self-organizing network is the means to ensure the survival of man. The individual only needs to be given the proper tools, and that is where the status quo no longer serves us and where education and learning need to be redefined.

“We have this argument at the Santa Fe Institute a lot. Some people will say, “Well, at the end of the day it’s all math.” And I just don’t believe that. I believe that science sits at the intersection of these three things — the data, the discussions and the math. It is that triangulation — that’s what science is. And true understanding, if there is such a thing, comes only when we can do the translation between these three ways of representing the world.”

— Jessica Flack SFI

The opinions expressed in this article are my personal opinions and do not represent the views of any organization with whom I am affiliated or employed.

[1] AI-Artificial Intelligence, IOT-Internet of Things

[2] https://arxiv.org/pdf/1412.2447.pdf

[3] https://jods.mitpress.mit.edu/pub/ageofentanglement/release/1

[4] Reason & Goodwin, 1999

[5] https://www.thenewatlantis.com/publications/einstein-in-athens

[6] https://www.quantamagazine.org/nature-versus-nurture-add-noise-to-the-debate-20200323/

[7] Hardy, G. H. A Mathematician’s Apology, 1992

[8] https://fortelabs.co/blog/a-pattern-recognition-theory-of-mind/

[9] https://fortelabs.co/blog/a-pattern-recognition-theory-of-mind/

[10] https://fortelabs.co/blog/a-pattern-recognition-theory-of-mind/

[11] https://link.medium.com/o4SleHWnG3

[12] Simon Levin, Princeton University, and Andrew Lo, Massachusetts Institute of Technology, Christian Science Monitor, August 22, 2016

[13] Murray Gell-Mann at SFI. https://arstechnica.com/science/2019/12/loonshots-and-phase-transitions-are-the-key-to-innovation-physicist-argues/

[14] https://medium.com/cantors-paradise/fractal-geometry-9e516a5b244b

[15] Reason & Goodwin, 1999

[16] https://www.inc.com/jeff-haden/why-innovators-like-elon-musk-jeff-bezos-embrace-this-ancient-problem-solving-technique.html?cid=sf01002

[17] https://www.re-thinkingthefuture.com/fresh-perspectives/a1285-understanding-biomimicry-the-three-levels-of-mimicry/

[18] Simon, HA (1972), “The Sciences of the Artificial”, MIT Press

[19] Simon, HA (1962), “The Architecture of Complexity”, Proceedings of the American Philosophical Society, Vol. 106, №6

[20] Haldane, A (2019), https://www.bis.org/review/r190508d.pdf

[21] https://americanaffairsjournal.org/author/edward-r-dougherty/

[22] https://katholmesdesign.com/blog/tag/exclusion

[23] The Snowflake IPO September 2020 that valued the company at $70 billion at the end of the first day of trading shows the importance of cloud computing.

[24] http://www.cbd.cmu.edu/about-us/what-is-computational-biology/

[25] I actually realized that my four-part theory of individual matched up with the social issues of the 21st century while writing this article. I now think the four requirements for individual empowerment are timeless, which is why they are reframed in each new era.

[26] Frenkel, E, Love and Math: The Heart of Hidden Reality, Basic Books, 2013

[27] https://wqu.org/


Robert Hacker

Director StartUP FIU-commercializing research. Entrepreneurship Professor FIU, Ex IAP Instructor MIT. Ex CFO One Laptop per Child. Built billion dollar company