Escaping Descartes: AI, Complexity and a New Scientific Paradigm

Robert Hacker
Oct 3, 2023

“We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.” — E.O. Wilson, Consilience (1998)

Much recent attention and publicity have been paid to the newest versions of artificial intelligence (AI), products such as ChatGPT. These Large Language Models will have a noticeable effect on how we work, similar to the impact of Excel or of a new user interface such as voice. However, I think the more important breakthroughs in AI will redefine much of physical, natural and medical science and accelerate our understanding of the new science of complexity. I think the advances in complexity science will have a profound effect in shaping the 21st century, and explaining that effect is the purpose of this article. We must begin, however, by going back to the 17th century and one of the fathers of modern science, René Descartes.

Descartes shaped modern science by introducing an empirical, macroscopic view of reality, a top-down approach whereby we start with what we observe and then analytically reduce it to its component parts. This top-down “reductionist” approach was quite intuitive and was accepted as the scientific method until the late 19th and early 20th centuries, when quantum physics was discovered. Quantum physics showed us that reality is built up from subatomic particles, which combine into atoms, molecules and eventually cells, organs, systems (e.g., the digestive system) and living animals such as humans. Albert Einstein described this kind of creativity as “combinatorial play”, the combining of existing components in new ways. Nobel laureate economist Herbert Simon labeled the process “synthesis”, an important concept for the future of science, especially once AI emerged in its current form.

After the discovery of quantum physics, the next important scientific event was the building of the atomic bomb to end WWII. One challenge in building the bomb was determining whether the nuclear reaction would stop or run away and eventually consume the Earth. To validate an acceptable outcome, John von Neumann and his associates helped develop and apply the first electronic computers to model the problem. This approach may have been the first use of a “digital twin”, and it marks the beginning of what came to be called cybernetics. Cybernetics might be simply defined as the communication and control of information in a natural or manmade system. Norbert Wiener at MIT formally defined cybernetics in 1948, and that thinking supported Claude Shannon’s Information Theory, the development of Systems Thinking and one of the key premises of modern computing: that any system can be modeled as information. Shannon’s information theory was one of the greatest discoveries in history and “launched the digital age”. “Shannon showed that the information describing a given system reflects the degree of order in the system” and the relationship between information and entropy.[1] Shannon also showed how, with the right coding, information could be received with near-certainty at any distance, over any natural or manmade channel.
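To make the relationship between information and entropy concrete, here is a minimal sketch (my own illustration in Python, not drawn from Shannon’s paper or from reference [1]) of Shannon entropy for a probability distribution:

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin flip
print(shannon_entropy([1/8] * 8))   # 3.0 bits: more disorder, more information per observation
print(shannon_entropy([1.0]))       # -0.0, i.e. zero bits: perfect order carries no new information
```

The more ordered the system, the less information each observation of it carries, which is the link between information and entropy that the article points to.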

Systems Thinking, originally called “system dynamics”, was developed by another MIT professor, Jay Forrester, to provide a holistic approach to societal problems. A system has the following six characteristics (a rough code sketch follows the list):

1. A Boundary is semi-permeable and defines the scale of the system (community, state, planet, etc.)

2. Inputs of matter, energy and information pass through the boundary and are processed by the system

3. Elements are physical, natural and manmade components of the system

4. Agents are elements that process the inputs to achieve the system purpose, are adaptive and can reproduce or replicate

5. Outputs are the intended purpose and unintended negative or positive consequences of the process (or system)

6. Feedback Loops route certain outputs to be reused in the process (autocatalysis) and are a significant factor in explaining the uncertainty of a system
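As a rough sketch of how these six parts fit together (my own illustration in Python, not Forrester’s notation; the agents and the feedback fraction are made up):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class System:
    """Toy model of the six characteristics above (illustrative only)."""
    boundary: str                                    # 1. defines the system's scale, e.g. "community"
    elements: List[str] = field(default_factory=list)  # 3. physical, natural and manmade parts
    agents: List[Callable[[float], float]] = field(default_factory=list)  # 4. process the inputs
    outputs: List[float] = field(default_factory=list)                    # 5. intended and unintended results

    def step(self, inputs: float) -> float:
        """2. Inputs cross the boundary and are processed; 6. part of the output feeds back."""
        signal = inputs
        for agent in self.agents:
            signal = agent(signal)       # agents transform matter/energy/information
        self.outputs.append(signal)
        return 0.1 * signal              # feedback loop: a fraction of output is reused

# Usage: a trivial two-agent system with a weak feedback loop.
city = System(boundary="community", elements=["roads", "people"],
              agents=[lambda x: x * 2, lambda x: x + 1])
feedback = 0.0
for _ in range(3):
    feedback = city.step(10 + feedback)
print(city.outputs)
```

Even in this toy form, the feedback loop is what makes the output at each step depend on the system’s own history, which is why feedback is singled out above as a source of uncertainty.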

In 1971 Forrester was invited by the Club of Rome to develop a worldwide model of the environment that came to be called WORLD1. This study was the first to predict life-altering environmental consequences beginning around 2050, a forecast that has been re-modeled by many since and still stands as the best prediction. From Wiener’s original work studying biological systems, combined with Shannon’s giant leap forward in understanding the concept of information and Forrester’s holistic view of systems, came the concept of complexity science. We will return to complexity science, but first we need to roll forward from von Neumann’s WWII computer to the birth of artificial intelligence and its subsequent advances.

By 1952 IBM had advanced the development of “mainframe” computers using vacuum tubes, though it provided no software for the machines. Nevertheless, the machines quickly caught on for their improvements to corporate efficiency and cost savings. IBM’s success spawned the minicomputers of companies such as DEC and then microcomputers such as the MITS Altair. The famous Moore’s Law underpinned these advances, and the pattern of ever faster, cheaper and smaller computers was established. (Moore’s Law predicted that the density of transistors per chip would double every 18–24 months.)
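To put that doubling rate in perspective, a quick back-of-the-envelope calculation (my own illustration) shows how fast it compounds:

```python
# Moore's Law arithmetic: doubling transistor density roughly every two years
# compounds to about a 1,000x increase over twenty years.
years, doubling_period_years = 20, 2
growth = 2 ** (years / doubling_period_years)
print(f"{growth:.0f}x more transistors per chip after {years} years")  # 1024x
```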

With the advances in computing, a group of mostly college professors came together to explore the idea that computers could model the results of human thought, or more specifically, that computers could manipulate numbers and symbols in a way similar to human thought. In 1956, at a summer workshop at Dartmouth, a group of luminaries came together that included two future Nobel Laureates, four Turing Award winners and several other noteworthy thinkers.[2] The workshop was funded by a $7,500 grant from the Rockefeller Foundation. One Rockefeller official described the proposal as too visionary even for the Rockefeller Foundation, but that is not surprising for probably the greatest collection of thinkers since the famous 1927 Solvay Conference.[3] For the next forty years, in fits and starts, AI advanced on the back of better computers, ever-larger datasets, better algorithms and some new mathematics, such as applications of graph theory and chaos theory.

One of the great beneficiaries of the advances in AI was the emerging field of genomics. Originally, Watson and Crick’s 1953 discovery of the double-helix structure of DNA was pursued through bespoke lab work trying to unravel an unheard-of number of possibilities. With the improvements in AI, DNA sequencing and much of biology were transformed into a more manageable “information science”.

As the 21st century began, the combination of computing and AI increasingly became the tool for the discovery of new science, engineering and medicine. AI has now reached the point where it is an integral part of many fields, including biochemistry, materials science and pharmaceutical development. AI became such a powerful tool because it can deal with subatomic- and microscopic-level data in the enormous quantities that modern science requires to produce validated results. With the introduction of synthetic or “artificially generated” data to train the algorithms without exposing personal information, scientists even discovered new components, such as biomarkers, that had not been part of the science before. The National Institutes of Health (NIH) describes the change in science beautifully as “transdisciplinary, translational, and network-centric”, and in part we crossed this frontier because scientists had the perfect tool: AI.[4]
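As a purely illustrative sketch of that workflow (the synthetic dataset, model choice and feature names below are my assumptions, not any lab’s actual pipeline), one can train a model on artificially generated data and rank candidate “biomarkers” by importance:

```python
# Toy sketch: train a model on synthetic data, then rank candidate features,
# loosely analogous to surfacing candidate biomarkers for follow-up study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# A synthetic, anonymous dataset: 1,000 "patients", 20 measurements,
# of which only 4 actually carry signal about the outcome.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=4,
                           random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank measurements by learned importance; in a real study the top-ranked
# ones would become candidate biomarkers to validate in the lab.
ranked = sorted(enumerate(model.feature_importances_), key=lambda t: -t[1])
print([f"feature_{i}" for i, _ in ranked[:4]])
```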

This view of science as “transdisciplinary, translational, and network-centric” was consistent with the development of a new science, complexity science, in the second half of the 20th century. Cybernetics popularized the idea of the system. Systems thinking showed us a way to think holistically about any natural or manmade system. The remaining piece was to discover and document the fundamental principles that apply to both natural and manmade systems, which is the focus of complexity science. When we try to understand these principles of complexity, we need to see them as the means to deal with the problems of the 21st century. The important problems of the 21st century (the environment, healthcare, energy) sit at the intersection of natural and manmade systems, and we need tools that can provide solutions that work at both the microscopic level of nature and the macroscopic level of manmade systems.

Complexity economist W. Brian Arthur, retired from Stanford, lists five fundamental characteristics of every complex system.

1. Emergence — Unexpected outcomes arise that are not explainable or predictable from the characteristics of the agents and [sub]systems

2. Hierarchical — Nested [sub]systems are an integral part of a larger system with no dictated instruction from outside the system

3. Non-linear — Agents have random or chaotic behavior (unpredictable and uncertain)

4. Adaptive — Agents learn or evolve based on information or the related feedback

5. Self-organizing — Agents and [sub]systems are autonomous to achieve purpose

(I reorder the fundamentals of complexity using the mnemonic “SHANE”.) An integral part of Arthur’s definition is the recognition that every complex system is multifactorial, dynamic and nonlinear. By the nature of complex systems, applied mathematics and AI are required to understand the systems, see the complex patterns and capture the sometimes chaotic behavior, as the toy model below suggests.
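To make “SHANE” a little more tangible, here is a toy agent-based model (my own sketch, not Arthur’s formalism): each agent follows a trivial local rule with no central controller, yet ordered regions emerge from a random start, hinting at self-organization, adaptation and emergence.

```python
# Minimal 1D "voter model" sketch: locally interacting agents produce an
# emergent global pattern that no individual agent plans or controls.
import random

random.seed(1)
N, STEPS = 60, 20000
agents = [random.choice("01") for _ in range(N)]   # random, disordered start
print("before:", "".join(agents))

for _ in range(STEPS):
    i = random.randrange(N)
    neighbor = (i + random.choice([-1, 1])) % N    # ring of agents, no hierarchy imposed from outside
    agents[i] = agents[neighbor]                   # adaptive rule: copy one neighbor's state

print("after: ", "".join(agents))                  # large ordered blocks emerge (self-organization)
```

The interesting point is that the emergent blocks cannot be read off from the rule followed by any single agent, which is exactly the sense in which complex systems resist top-down, reductionist analysis.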

This convergence of complexity, applied mathematics and AI is well illustrated by the National Renewable Energy Lab (NREL), one of the U.S. government’s seventeen national laboratories. These laboratories are mandated to explore and research at the frontiers of science and engineering. Using a “combination of high performance computing, computational science and applied mathematics”, the NREL works to “describe, model, simulate, solve, explore, and optimize complex systems, whether those systems are interacting atoms, systems of chemical reactions, or engineered systems describing the electric grid.”[5] “The NREL uses AI and ML to derive insights, make predictions, and explain future system states by learning from data. Our models enable us to address — in a computationally efficient way — issues that are emergent over multiple scales or unusual spaces.”[6]
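The pattern NREL describes, learning a fast predictive model from expensive simulation data, can be sketched generically as follows (the toy physics and the model choice are my assumptions, not NREL’s code):

```python
# Generic "surrogate model" sketch: learn from simulation data, then predict
# new system states far faster than re-running the simulation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Pretend each row is one expensive simulation run: inputs are system
# parameters, the target is some emergent quantity of interest (made up here).
X = rng.uniform(0, 1, size=(500, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 500)

surrogate = GradientBoostingRegressor().fit(X[:400], y[:400])

# Evaluate on held-out runs: a good fit means the surrogate can stand in
# for the simulator when exploring or optimizing the system.
print("held-out R^2:", round(surrogate.score(X[400:], y[400:]), 3))
```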

Conclusion

In simple terms, we can now take a problem beginning at the atomic scale, identify its multifactorial characteristics, adjust for uncertain interactions over multiple scales and produce new scientific insights that benefit humanity in tangible ways, improving human life, the environment and society. Much of this complexity is understood through advances in AI. It all sounds so simple when these concepts are explained by the NREL, the Santa Fe Institute or the Oxford Centre for Complex Systems. The issue that concerns me sits at the convergence of this new science, mathematics and technology. More precisely, are we preparing enough people, properly, to develop and apply this potentially transformative science and technology to solve the critical problems of the environment, develop economical renewable energy and provide healthcare for people likely to live to 100 or 150 years of age?

When I think about the current state of affairs, I always go back to this quote from the founders of the Santa Fe Institute:

“It has been the great triumph of the sciences to find consistent means of studying natural phenomena hidden by both space and time — overcoming the limits of cognition and material culture. The scientific method is the portmanteau of instruments, formalisms, and experimental practices that succeed in discovering basic mechanisms despite the limitations of individual intelligence.”[7]

We are at another period in human history where we are about to overcome the limits of cognition and material culture. As the National Academies Press described it recently, “as the amount of data available grows beyond what any person can study, AI can be useful in its power to identify patterns in data. The patterns will be new, the tools will be new, the knowledge will be new. We will rely less on the limits of human cognition and more on tools we cannot fully understand”.[8]

The science, math and computation are advancing, probably on a scale not seen in human history. We need to prepare a new generation of students, government officials and corporate executives to use these tools wisely and effectively for the benefit of society. I believe today we are ill-equipped and unprepared. The biggest problem of the 21st century may be integrating the technology, the education system and complexity science to find timely solutions.

“But begetting information is not easy. Our universe struggles to do so. Our ability to beget information, and to produce the items, infrastructures, and institutions we associate with prosperity, requires us to battle the steady march toward disorder that characterizes our universe…” –Cesar Hidalgo, Why Information Grows

“Nature is the source of all knowledge.” — Leonardo da Vinci

References

[1] https://nautil.us/the-math-of-living-things-9812/

[2] https://en.wikipedia.org/wiki/Dartmouth_workshop

[3] https://www.cantorsparadise.com/the-1927-solvay-conference-and-the-most-iconic-physics-photograph-ad37f227e2e0

[4] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3940421/

[5] https://www.nrel.gov/computational-science/complex-systems-simulation-optimization.html

[6] https://www.nrel.gov/computational-science/artificial-intelligence.html

[7] David Krakauer, Murray Gell-Mann, Kenneth Arrow, W. Brian Arthur, John H. Holland, Richard Lewontin, et al., Worlds Hidden in Plain Sight (Santa Fe Institute Press)

[8] https://nap.nationalacademies.org/catalog/27241/artificial-intelligence-to-assist-mathematical-reasoning-proceedings-of-a-workshop


Robert Hacker

Director, StartUP FIU (commercializing research). Entrepreneurship professor at FIU; former IAP instructor at MIT; former CFO of One Laptop per Child. Built a billion-dollar company.