The Asymmetry of Knowledge: How AI Unveils the Invisible Realms of Science

Robert Hacker
Jun 7, 2024


A recent article got me thinking about the relationship between information, artificial intelligence (AI) and the nature of reality. As an undergraduate I studied philosophy, and as I investigate the application of AI, I find myself returning to it more and more. Philosophy at the most basic level has four fields of study:

· Metaphysics — the nature of reality

· Epistemology — the theory of knowledge

· Ethics — the study of moral behavior

· Aesthetics — the appreciation of art and beauty

Metaphysics and epistemology are particularly relevant to AI, although some would argue that ethical concerns are the biggest risk.

Lord Kelvin, whose work helped establish the second law of thermodynamics, said that “mathematics is the only true metaphysics”, introducing a valuable level of precision to metaphysics and, along with Ludwig Boltzmann, setting the stage for quantum physics. In 1989 John Wheeler of Princeton published a famous paper arguing that information is a fundamental descriptor of reality. This paper popularized the phrase “it from bit” and elevated information to the same status as matter and energy. Then along came AI, and in my opinion the study of metaphysics went through another major revision.

While 20th-century science significantly advanced our understanding of quantum physics and the different types of subatomic particles, biology and chemistry also made major advances in building up our understanding of the hierarchical relationships among atoms, molecules, cells, organs, organisms and ecosystems. Much of the limitation in using this information came from our evolutionary processes of cognition and learning. It is widely held that 70–80 percent of human learning happens through vision; if we cannot see something, we are slow to learn and understand it. What AI did was make the invisible visible using computational methods, what some call “data-driven discovery”. AI visualized this scientific information through algorithms, models and simulations, just as Lord Kelvin would have anticipated.

In other words, AI eliminated a major asymmetry of information. The concept of asymmetry of information, “what we do not know”, is so useful that the 2001 Nobel Prize in economics was awarded to Michael Spence, Joseph Stiglitz and George Akerlof for their work on it. In science, many advances have come about when we changed the scale of the information available, when we eliminated an asymmetry of information. The telescope, the microscope and the X-ray are examples, as is the entire research program into subatomic particles.

The task now is to identify which additional asymmetries of information are holding back advances in knowledge, and in science in particular. An example makes the opportunity clear. Evidence is mounting, for example, that lung cancer patients carry a particular bacterium in the stomach, which suggests a link to the development of the cancer. Setting aside all the other factors that need to be considered to meet research standards, this linkage would not have been identified without machine learning (ML). Another case is AlphaFold’s revolutionary use of deep neural networks to learn the patterns and relationships between protein sequences and their 3D structures. Some analysts estimate that AlphaFold saved as much as 800 years of research.

What these examples show me is that we are really just at the beginning of studying matter, energy and information at different scales and levels of complexity. The limitation is no longer what we know, but rather how we shape research to use ML to explore what we do not know. And, an important point: while all the examples here relate to biology and medicine, the same opportunities exist to redefine chemistry, materials science and environmental solutions, to name a few of the more obvious areas.

These advanced applications of ML, with more arriving almost weekly, will put extreme pressure on government regulators. mRNA vaccines were deemed experimental until COVID-19 changed priorities; now these vaccines are being studied to prevent cancer. And prevention is the direction this new technology is taking medicine. No longer satisfied with cures or accurate diagnosis, the emerging standard across a wide range of diseases is prevention…at the individual level. How will the regulators adapt to this “precision” medicine? This level of exploratory, game-changing science reminds me of entrepreneurship, where the model is iterate, fail and test again until a viable solution is found. Government regulators are not renowned for their ability to test, fail and retry, but as the new science and AI are popularized, the public will expect regulators to adapt their three-phase trials with thousands of test patients to reflect the improved research results from ML. This approach of faster approvals and more limited testing will also extend to synthetic materials, environmentally advanced solutions and new chemical compounds. Such advances, I think, will lower concerns about the safety of AI.

The National Science Foundation (NSF), responsible for U.S. research strategy in science, engineering and education, recommends that every graduate student study AI, multidisciplinary approaches and entrepreneurship to prepare them for 21st century careers. Maybe we should put the government regulators through the same training.


Robert Hacker

Director, StartUP FIU (commercializing research). Entrepreneurship Professor, FIU. Former IAP Instructor, MIT. Former CFO, One Laptop per Child. Built a billion-dollar company.