Robert Hacker
26 min read · Sep 22, 2022


A Strategy for Academic Research in the 21st Century

— Guidance for Early Career Researchers

“The greatest danger for most of us is not that our aim is too high and we miss it, but that it is too low and we reach it.” — Michelangelo

“While understanding Earth’s interconnected processes is intrinsically interesting and of inherent scientific value, these efforts are made urgent by the need to understand how the Earth can continue to sustain civilization and the planet’s biodiversity.”

–A Vision for NSF Earth Sciences 2020–2030: Earth in Time (2020) NAP

Introduction

The Bayh-Dole Act of 1980 redefined research in the U.S. by granting universities the rights to inventions derived from federally funded research and requiring them to make reasonable efforts to commercialize that research in the private sector. The Act fueled the Third Industrial Revolution (3IR, 1973)[1] and new technologies in computing, telecommunications and media.

The four Industrial Revolutions to date are shown below.

The Four Industrial Revolutions [2]

According to the scholar Carlota Perez, an industrial revolution involves an "overarching process of transformation [that] takes place thanks to the gradual construction of a new techno-economic paradigm, a shared common sense model of best technical and organizational practice for the use of that set of pervasive technologies, which provides a generalized quantum jump in productivity and quality".[3] Simply put, an industrial revolution occurs when a single new technology, such as AI, initiates a transformative model (for example, combining cloud computing and the Internet of Things (IoT)) that solves multiple problems in multiple industries.

According to the World Economic Forum (WEF), in 2016 we began the Fourth Industrial Revolution (4IR). As in the 3IR, academic researchers will again need to choose the new science, engineering and applications to address many imminent challenges from climate change and the economic scale of China to renewable energy and population health. This article presents an approach for how Ph.D. candidates and early career academics can pick the fields that represent the greatest likelihood for theoretical breakthroughs, social impact and commercial scaling.

First, I outline the opportunities to use Transformative Platforms as the basis for research in any field. Next, I offer some established paradigms that can guide researchers to new problem-solving and engineering approaches. Finally, I highlight the five existential problems of the 21st century, which perhaps offer attractive opportunities for government agency and private sector (venture capitalists and corporate venture capitalists) research funding. I conclude with a policy level definition of a research agenda.

Transformative Platforms

The National Science Foundation (NSF) defines transformative research as: "…ideas, discoveries, or tools that radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education".[4] This quote from the book "A Mind at Play" about the work of the legendary mathematician, scientist and engineer Claude Shannon illustrates "transformative" well:

“But before Shannon, there was precious little sense of information as an idea, a measurable quantity, an object fitted out for hard science. Before Shannon, information was a telegram, a photograph, a paragraph, a song. After Shannon, information was entirely abstracted into bits.”

We will return to Shannon’s work shortly, but first let me define what I mean by a platform. Platform has become a popular term in computer and data science, but here I use it to mean “a method (process, tool) with multiple, readily accessible uses”[5]. A platform is by definition most likely to be used in a multidisciplinary approach to a problem. The semiconductor is an excellent example of what I mean by a platform with wide-ranging applications in CPUs, modems, memory, etc.

I think that there are three transformative platforms today, and that an early career researcher could take any one of the three, apply it to a new field or application, develop tangible results and build a lifetime of productive work around it. These transformative platforms are 1) Quantum Mechanics, 2) Information Theory and 3) Artificial Intelligence. We define each one and its respective research opportunities in the next three sections of the article. Let's begin with Quantum Mechanics to understand a transformative platform and future opportunities in research.

Descartes, Simon and Quantum Mechanics

The history of science and mathematics provides much guidance to students, Ph.D. candidates and early career researchers on promising fields for new research and commercialization. This approach draws in part on the historical patterns in science and technology identified by Nikolai Kondratiev[6] (technology cycles), Thomas Kuhn (scientific revolutions and paradigms) and Carlota Perez (industrial revolutions). Modern science begins in the early 17th century[7] with what some call the First Scientific Revolution, which paved the way for the Enlightenment. Much of modern mathematics was developed at the same time, beginning with Newton's and Leibniz's work on calculus, but in this article we will discuss the mathematics only where it helps the story about science.

René Descartes, philosopher, mathematician and scientist, can be considered the father of modern science, although many also cite his contemporary Galileo.[8] Descartes "offered a new vision of the natural world that continues to shape our thought today: a world of matter possessing a few fundamental properties and interacting according to a few universal laws."[9] Heavily influenced by the view that God is objective reality, Descartes crafted a macroscopic, top-down method with a hierarchical view of reality, frequently described as analytical or reductionist. For Descartes:

"Method consists entirely in the order and arrangement of those things upon which the power of the mind is to be concentrated in order to discover some truth. And we will follow this method exactly if we reduce complex and obscure propositions step by step to simpler ones and then try to advance by the same gradual process from the intuitive understanding of the very simplest to the knowledge of all the rest".[10]

Descartes' macroscopic reductionism shaped science until the discovery of the principles of quantum mechanics in the late 19th and early 20th centuries. Those principles were developed through the contributions of a long line of physics and mathematics luminaries: Boltzmann, Faraday, Maxwell, Schrödinger, Heisenberg, Planck, Einstein, Bohr and von Neumann, to name a few. Simply put, quantum mechanics showed that reality could best be understood as wave-like particles behaving according to probabilities.[11] Effectively, the macroscopic view of Descartes that shapes our thinking and culture was replaced by microscopic (sub-atomic) particles of energy behaving stochastically.

This focus on particles, or components, to explain reality triggered a transition in science to a bottom-up synthesis approach that Nobel Laureate Herbert Simon called "synthetic" creation. Simon also initiated the concept of formal research in design. He proposed that design (problem-solving) and creativity begin with the simplest fundamental components and from there assemble and add new components, just as in physics, until a solution to a problem is achieved. John Holland, the noted complexity scientist, described this method as the "building blocks" approach.[12]

In addition to being consistent with quantum mechanics and design, this bottom-up reasoning (and creative process) led modern physicists to see the "bit", a binary choice, as the basis of the physical universe. "John Wheeler (1988) expressed this idea as "it-from-bit", and implied that the basis of the physical universe — the "it" of an atom or subatomic particle — is not matter, nor energy, but a bit of information. Consequently, the entire universe should be seen as a cosmic processor of information [or computational]."[13] Effectively, quantum mechanics set the stage for both digital computation and artificial intelligence (AI). Quanta Magazine makes the point well:

“All of this is part of a much larger shift in the very scope of science, from studying what is to what could be. In the 20th century, scientists sought out the building blocks of reality: the molecules, atoms and elementary particles out of which all matter is made; the cells, proteins and genes that make life possible; the bits, algorithms and networks that form the foundation of information and intelligence, both human and artificial. This century, instead, we will begin to explore all there is to be made with these building blocks.”[14]

At the more practical level, if that phrase can be combined in a sentence with quantum mechanics, we now see several trends that could be of interest to a researcher. According to The Quantum Insider, investment in quantum technology startups was over US$2 billion in 2021 (and remains strong in 2022), comparable to funding for open source companies. As a rule of thumb, investors in the startup space are looking for returns within ten years, which suggests that the commercialization of quantum technology is perhaps imminent. In much the same way that AI is almost universally applicable regardless of the domain, quantum technology has widespread applications that are only increasing. Quantum technology is shaping computing, cybersecurity, information theory, telecommunications, medical imaging, GPS, lasers and solar cells, to name a few domains. We also now have the fields of quantum chemistry and quantum biology.

Claude Shannon and Information Theory

The transition from matter and energy as the basis of the physical universe to the exploration of computation and information was accelerated by Claude Shannon’s work in Information Theory at Bell Labs (1948). Shannon’s work had a profound effect on science, engineering and mathematics and we continue to explore new frontiers of science using information theory. The history of information theory can be traced back to Carnot’s original work on thermodynamics, Lord Kelvin’s development of the second law of thermodynamics (entropy) and the contemporaneous work in statistical mechanics of Boltzmann. Some say physics has now been formalized as the study of energy, entropy and information.[15] Some even describe Shannon’s work as comparable to the contribution of Einstein. “Fortune magazine [1953] gushingly describes the field [information theory] as more crucial to ‘man’s progress in peace, and security in war’ than Einstein’s nuclear physics.”[16]

Simply put, information theory defined the smallest amount of information required to accurately transmit a message. Shannon provided "a mathematical guide for the system's engineers, a blueprint for how to move data around with optimal efficiency".[17] He maintained that all communications systems could be thought of in the same way, regardless of whether they involved a lunchroom conversation, a postmarked letter, a phone call, or a radio or telephone transmission; messages all followed the same fairly simple pattern.
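To make the idea concrete, here is a minimal sketch (my own illustration, not Shannon's derivation) of information entropy, the quantity Shannon used to measure the minimum average number of bits per symbol needed to encode a message:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits, based on the message's symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A message dominated by one repeated symbol carries little information per symbol...
print(round(shannon_entropy("aaaaaaab"), 2))             # ~0.54 bits/symbol
# ...while more varied text requires more bits per symbol to encode.
print(round(shannon_entropy("the quick brown fox"), 2))  # noticeably higher
```

Lossless compression schemes are, in effect, engineered to approach this bound.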

Shannon's insight about information, now labeled "information entropy", has been applied widely in fields including statistical inference, cryptography, neurobiology, perception, linguistics, evolution, molecular dynamics, quantum computing, black holes, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection and art creation.[18] Information theory, combined with von Neumann's theoretical work in computer design and the semiconductor physics of William B. Shockley, John Bardeen and Walter H. Brattain (who shared the 1956 Nobel Prize in Physics for their work at Bell Labs), is an example of what Thomas Kuhn called a "scientific revolution". This "second" scientific revolution launched the Digital Age, and new advances and applications of information theory continue 70+ years later.[19]

Even the social sciences have adopted information theory. In 2001, George A. Akerlof, A. Michael Spence, and Joseph E. Stiglitz won the [Nobel] prize “for their analyses of markets with asymmetric information.”[20] Michael Spence[21] went on to show that poverty is caused by a “negative” asymmetry of information and Israel Kirzner[22] showed that entrepreneurship is caused by a “positive” asymmetry of information. The concept of information asymmetry is fundamental to applied information theory and deserving of more attention in research in the social sciences.

Artificial Intelligence and “Engineered Discovery”

This platform based on Information Theory launched the Digital Age and led to the invention of artificial intelligence. In 1956, a group including Herbert Simon, Marvin Minsky, Claude Shannon, John Holland and John McCarthy came together at Dartmouth College for a summer research project on artificial intelligence, funded by a grant from the Rockefeller Foundation.[23] That project launched the first research in artificial intelligence (AI). In the years that followed, much time and money was mistakenly focused on increasing the computing power applied to AI. Only at the beginning of the 21st century did researchers realize that the bottleneck was data, or more precisely the lack of it. W. Brian Arthur, the renowned complexity economist based at the Santa Fe Institute (and a retired Stanford professor), argues that tools emerge to solve the problems of their times. AI could not prove useful until Google, the Internet and database technology could generate, capture and organize data in large quantities. The availability of large datasets then permitted AI to advance from data analytics to predictive analytics, prescriptive analytics and now cognitive analytics, where the hope is that AI can come close to "thinking" like a human.

Kondratiev and Schumpeter have shown that human history has gone through three phases: Transforming Material, Transforming Energy and Transforming Information.[24]

Since the late 18th century and the first Industrial Revolution (1IR), energy and information have been the factors that triggered new science, with the related technologies following. According to Arthur, technology appears to solve the problems of the times. As we continue with "transforming information" in the 4IR, the question one might ask is why AI has appeared and become so effective. What is the problem AI solves? My answer is that AI is the technology platform we needed to process the breakthroughs in biology at "industrial scale". One might say that the 19th century was the century of chemistry and thermodynamics and the 20th century was devoted to quantum mechanics and physics. The 21st century is the century of biology, life sciences and biotech. Why? Because biology is fundamentally a science of building blocks made of information components. Freeman Dyson, the noted researcher at the Institute for Advanced Study in Princeton, said that "the origin of life is the origin of an information-processing system".[25] All living systems adapt by processing inputs of information. AI gives us the computing power to understand biological complexity and synthesize solutions to problems such as disease.

Biology research has had a long history, dating back to the Greeks, Avicenna and Darwin. Modern biology was launched in 1944 when three researchers (Oswald Avery, Colin Munro MacLeod and Maclyn McCarty) determined that DNA holds the gene's information. Rosalind Franklin, James D. Watson and Francis Crick then demonstrated the molecular structure of DNA.[22] Together, these discoveries established the central dogma of molecular biology.[26]

With the recognition that DNA was a form of information and that molecular biology was just a natural system of components, biology research was now able to take full advantage of both information theory and artificial intelligence. Information theory has been used in evolution to probe one of the most profound questions in science, the origin of life (OOL), by researchers including Santa Fe Institute (SFI) External Professor Sara Walker and SFI President David Krakauer.[27] With the fundamental structure of biology shown to be building blocks of information, or molecules, E. O. Wilson at Harvard pioneered the field of computational biology. Starting with Wilson's first applications of AI, biology has now transitioned from computational biology to synthetic biology.

Synthetic biology is the design of new components and systems, such as microbial genomes. Many knowledgeable sources, including Nobel Prize winner Jennifer Doudna,[28] believe we are at the frontier of engineered biology or industrial biology. The well-known venture capital firm Andreessen Horowitz explains the point well.

“We are at the beginning of a new era, where biology has shifted from empirical science to an engineering discipline. After a millennium of using man-made approaches for controlling or manipulating biology, we have finally begun using nature’s own machinery — through biological engineering — to design, scale, and transform biology.”[29]

Advances in machine learning (ML) support not only genetics and biology research but also medical science, chemical synthesis, the development of new pharmaceuticals and ML-based diagnostic imaging techniques, to name a few fields. The component nature of biology, in the form of molecules, is the perfect input for a Simon-like synthetic process managed by ML and algorithms.

Quanta Magazine, quoted earlier, summarizes well what we have covered so far: a shift in the very scope of science "from studying what is to what could be", from cataloguing the building blocks of reality to exploring all there is to be made with them.[30]

For me, this component nature of reality just reinforces the multidisciplinary nature of great research and science, as different disciplines facilitate the examination of combinations of building blocks.

AI and Engineered Biology are also Paradigms, which I will now discuss.

Paradigms

If we combine the best ideas of Kuhn[31], Perez[32] and Giovanni Dosi[33], we would define a [technological] paradigm as a transformative technology of its time that provides the basis for innovative solutions to problems in a wide range of industries. We should perhaps add a footnote to the definition to acknowledge Schumpeter and his critical insight that a solution is just an invention until it succeeds in the market and thereby becomes an innovation. Innovation happens when the technological, economic and cultural factors all coincide to create value for a significant number of paying customers.

Carlota Perez provides more detail on paradigms, as described below.

“The construction of a techno-economic paradigm occurs simultaneously in three main areas of practice and perception:

1. In the dynamics of the relative cost structure of inputs to production where new low and decreasing cost elements appear and become the most attractive choice for profitable innovation and investment.

2. In the perceived spaces for innovation, where entrepreneurial opportunities are increasingly mapped for the further development of the new technologies or for using them advantageously in the existing sectors.

3. In the organizational criteria and principles, where practice keeps showing the superior performance of particular methods and structures when it comes to taking advantage of the power of the new technologies for maximum efficiency and profits

In all three areas, the emergence of the paradigm depends on the rhythm of diffusion of the revolutionary products, technologies and infrastructures in self-reinforcing feedback loops. At first the impact is localized and minor, with time it is widespread and all-encompassing. The changes occur in the economy and in the territory, in behaviors and in ideas. The paradigm and its new common sense criteria become ingrained and act as inductors and filters for the pursuit of technical, organizational and strategic innovations as well as for business and consumer decisions. The process is self-reinforcing as the further propagation and adoption of the new technologies confirm in practice the wisdom of the sharing.”[34]

Semiconductors and Bluetooth technology are historic examples of paradigms. The application of AI (applied AI) in almost every field of science, engineering and medical science demonstrates AI as a paradigm. Engineered biology, combining gene sequencing, AI and synthetic chemistry (the synthesis of manmade molecules), is another paradigm. [The reader and the early career researcher should note that paradigms, transformative platforms and multidisciplinary approaches are increasingly replacing lab drudgery with modern tools that allow the scientist to focus more on the creative parts of science. The siloed approach of many universities will perhaps collapse under the multidisciplinary pressure on researchers.]

The ever-popular Blockchain is a paradigm candidate, but by its nature I do not think it will be integrated into scientific and engineering research, so I omit it here. The roots of Blockchain are more cultural than grounded in natural science and technology. Nevertheless, I think that Blockchain and cryptocurrency have much potential as the foundational concepts for new forms of commercialization, and Augmented Reality and Virtual Reality (AR/VR) may also achieve commercial success later in the century. Web 3.0 might combine Blockchain, cryptocurrency and AR/VR to demonstrate the next version of the Internet.

Now let's explore three new paradigms that have already established themselves in the 21st century: 1) cyber-physical systems and nanotechnology, 2) robotics, automation and autonomous vehicles, and 3) AI, cloud computing and the Internet of Things (IoT).

Cyber-physical Systems (CPS) and Nanotechnology

I like the way the NSF defines cyber-physical systems. Its definition makes the paradigm nature of the technology obvious, as shown below.

“Just as the Internet transformed the way people interact with information, cyber-physical systems are transforming the way people interact with engineered systems. Cyber-physical systems integrate sensing, computation, control and networking into physical objects and infrastructure, connecting them to the Internet and to each other.”[35]

Engineered systems date back at least to the early Romans. As information technology has advanced since Shannon, it has been incorporated into CPS. Today the challenge is to make the best use of AI, ML and computation in real time within the CPS. We are no longer satisfied simply to collect data; we now want the data to inform operating systems and improve effectiveness and efficiency. This convergence of CPS and computation drives the NSF support for research in CPS, as described below.

“Core [CPS] research areas of the program include control, data analytics, and machine learning including real-time learning for control, autonomy, design, Internet of Things (IoT), mixed initiatives including human-in- or human-on-the-loop, networking, privacy, real-time systems, safety, security, and verification. By abstracting from the particulars of specific systems and application domains, the CPS program seeks to reveal cross-cutting, fundamental scientific and engineering principles that underpin the integration of cyber and physical elements across all application domains.”
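To ground the "control" and "real-time" language in the NSF description, here is a minimal, hypothetical sketch of the sense-compute-actuate feedback loop at the core of any cyber-physical system; the sensor and actuator functions are stand-ins for real hardware interfaces:

```python
import random
import time

# Hypothetical device interfaces; a real CPS reads physical sensors and drives actuators over a network.
def read_room_temperature_c() -> float:
    """Pretend sensor: returns the current room temperature in Celsius."""
    return random.uniform(17.0, 25.0)

def set_heater_power(fraction: float) -> None:
    """Pretend actuator: 0.0 = off, 1.0 = full power."""
    print(f"heater power -> {fraction:.2f}")

SETPOINT_C = 21.0   # desired temperature
GAIN = 0.25         # proportional controller gain (illustrative value)

def control_step() -> None:
    # The basic CPS feedback loop: sense -> compute -> actuate
    temperature = read_room_temperature_c()
    error = SETPOINT_C - temperature
    command = min(max(GAIN * error, 0.0), 1.0)   # clamp to the actuator's range
    set_heater_power(command)

if __name__ == "__main__":
    for _ in range(3):       # a real system runs this on a fixed real-time schedule
        control_step()
        time.sleep(0.1)
```

In a deployed CPS, the fixed gain would typically be replaced by a learned or adaptive controller of the kind the NSF program describes, running under real-time, safety and security constraints.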

Like AI and ML, the range of CPS applications and domains could include every part of our lives by 2050, from agriculture and water management to smart cities, energy management, healthcare and medicine. Today, when I mention healthcare and medicine you probably think of telemedicine and digital health. The next frontier in medicine is to combine CPS and nanotechnology.

Nanotechnology is science and engineering at the nanometer scale, an atomic scale of measurement; more precisely, 1 inch = 25.4 million nanometers. What we see here is perhaps the lowest level at which we could apply Simon's synthesis. Nanotechnology also promotes model building and "digital twins" at a new level of detail and, eventually, understanding. Otherwise unmodelled domains can now be explored, data captured and new understanding derived. Biology and evolution work at the nanoscale, producing such notable "components" as capillaries and hemoglobin. Natural materials and fundamental natural chemical processes, such as photosynthesis, further our understanding when investigated at the nanometer level.

Synthetic biology produces artificial components that behave identically to the human body's disease-fighting systems. One of the biggest challenges is delivering the components to the proper location in the body. CPS might provide the required accuracy, with documented specificity and repeatability sufficient to reliably heal people and warrant commercialization. Once this level of multidisciplinary success is achieved, we might apply the technology to preventive medicine.

Robotics, Automation and Autonomous Vehicles (AV)

When artificial intelligence was invented, John McCarthy said that the objective was “to do what humans used to do”. Much of that philosophy is captured in this section of the article. First, let me cite Techopedia.com and define some terms.

Automation — the creation and application of technologies to produce and deliver goods and services with minimal human intervention

Robotics — the engineering and operation of machines that can autonomously or semi-autonomously perform physical tasks in lieu of a human

Autonomous Vehicles (AV) — autopilot-directed drones, vessels and vehicles using various onboard technologies and sensors

Automation was a principal factor in the first two industrial revolutions, beginning at the end of the 18th century. Automation provided the means to use material, labor and energy more effectively and efficiently. (The first industrial wealth creation model was based on land, labor and capital. This has been gradually replaced by a new wealth creation model of information and capital, starting in the 3IR in the late 1960s and early 1970s.)

The first computer-automated robots were commercialized in the late 1960s and early 1970s. Their success largely followed advances in computer technology and the physical downsizing of computers. Today robots serve four principal applications: 1) hazardous environments, 2) war fighting, 3) advanced automation and 4) human substitution. Much of the current U.S. Department of Defense strategy involves modernizing the military to use the technologies of this paradigm.

Today AVs come in three types: 1) cars, 2) boats and 3) drones. A fourth type will probably emerge for space exploration and colonization. The goal is to replace the human in the operation of all three types of vehicles. Cars probably face the most challenges, drones are the furthest advanced (Afghanistan, Iraq) and boats may still struggle to successfully integrate real-time data on submerged natural and manmade obstructions.

All of the machines and devices in this paradigm of Automation, Robots and AVs are also part of the next paradigm as devices in the Internet of Things (IoT), which is where we turn now. Actually, all three paradigms described in the article are interrelated as simple systems of data collection, storage, ML and analysis.

AI, Cloud Computing and Internet of Things (IoT) (in combination a “data platform”)

We defined AI earlier and hopefully have shown the power of applied AI to process and analyze data to draw insights not easily available through empirical research approaches. To understand Cloud Computing and the related areas of Fog Computing and Edge Computing I rely on the researchers at CB Insights for definitions:

Cloud computing enables companies to store and process data (among other computing tasks) outside of their own physical hardware, across a network of remote servers.

Fog computing refers to the network connections between edge devices and the cloud, i.e., the extension of the cloud to the edge of a network.

Edge computing is computational processing done at or near the “edge” of a network, i.e., at the device level.

Cloud computing began in 2006 when Amazon introduced Amazon Web Services (AWS). Originally intended to provide data storage and computing capacity, today AWS and other leading cloud providers such as Google and Microsoft host private company software and offer a full range of AI/ML services for ETL (extract, transform and load) and algorithmic analysis. Spurred on by the increasing popularity of new data collection devices in AVs, CPS and telemedicine, these original cloud providers have now expanded to edge computing. Edge computing offers two major advantages: 1) improved real-time data processing and analysis, and 2) better security for local data, which can be anonymized before transmission and remote storage in the cloud.
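As an illustration of the second advantage, here is a minimal, hypothetical sketch of anonymizing a record on an edge device before it is transmitted to the cloud; the field names and device are invented for the example:

```python
import hashlib
import json

# Hypothetical edge-device record; the fields are illustrative, not a specific product's schema.
reading = {
    "device_id": "monitor-station-17",
    "patient_name": "Jane Doe",          # sensitive field captured locally
    "heart_rate_bpm": 72,
    "timestamp": "2022-09-22T14:03:00Z",
}

def anonymize_at_edge(record: dict) -> dict:
    """Strip or hash identifying fields before the record leaves the device."""
    clean = dict(record)
    clean.pop("patient_name", None)                # drop direct identifiers
    clean["device_id"] = hashlib.sha256(           # replace the device ID with a stable pseudonym
        record["device_id"].encode()
    ).hexdigest()[:12]
    return clean

payload = json.dumps(anonymize_at_edge(reading))
print(payload)   # only this anonymized payload would be sent to the cloud for storage and ML analysis
```

Only the anonymized payload leaves the device, so the cloud side can still aggregate and analyze the data without ever receiving the direct identifiers.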

Oracle describes IoT as “the network of physical objects — “things” — that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet. These devices range from ordinary household objects to sophisticated industrial tools.”[36] International Data Corporation (IDC) estimates that there will be 41.6 billion IoT devices in 2025, capable of generating 79.4 zettabytes (ZB) of data.[37]

I prefer to think of this paradigm of AI, Cloud Computing and IoT as an example of a "data platform", a fully integrated system to capture raw data, process it, analyze it and draw out synthetic solutions to complex and urgent problems. Moderna was able to develop its COVID-19 vaccine so quickly because it had a large, relevant dataset and access to AWS and Amazon's cloud-based applications, tools and data storage. The AWS Research Cloud Program[38] served as a multidisciplinary partner and reduced the time and setup friction so the researchers could focus on the science and vaccine development. Moderna reportedly identified a viable COVID vaccine candidate in three days. (The fact that it took months to be approved and made available to people prompted an enhanced federal agency focus on translation, getting the research to the market to help people more quickly. For those who say the approvals were hasty, perhaps we need more public education on the new technologies and their application.)

What Moderna made clear to the world, and NIH described[39] as early as 2008, is:

“the increased emphasis on science that is transdisciplinary, translational, and network-centric reflects a recognition that much, if not most, disease causation is multifactorial, dynamic, and nonlinear.”

What Moderna also made clear is that we now have the tools to deal with the “multifactorial, dynamic, and nonlinear” world that quantum mechanics predicted.

The utility of this Paradigm in the COVID case also raised awareness about the need for more tools that can speed up the translational process, moving research to market. For example, accurately projecting laboratory-scale results to industrial production levels is still a problem. Improved use of bioreactor technology or microfluidic chips might be other examples. Another example might be “creating platforms for the robust biosynthesis of a variety of molecules without having to perform extensive tinkering to obtain commercially useful yields”.[40] Developing such tools is another area that early career researchers could pursue. However, in these fields the universities and regulators must be prepared to adjust their risk profiles and recognize that returns may come over longer time frames.

Moderna makes clear the power of this Paradigm, which also points to a bright future for advances in chemistry, materials science, food science, nanotechnology, bio-waste, environmental science and agriculture. All of these sciences offer the opportunity for synthetic combination of fundamental components. This description from the consulting firm BCG makes clear, for example, the future of agriculture and the important role of data, datasets and platforms.

“In light of evolving technologies, reevaluate your value added from first principles. To take just one example: imagine smart agriculture as a stack. Cheap, meshed sensors measure the temperature, humidity, and acidity of the soil; active repeaters embedded in agricultural machinery or in cell phone apps capture, aggregate, and relay the data; data services combine this local data with aggregate models of weather and crop prices; other services tap into their APIs to optimize planting, irrigation, fertilizing, and harvesting. Farmers collect the data, share in the aggregation and pattern recognition [ML], and follow prescriptions [ML] that give them a better yield on their crops. Such an ecosystem creates social and private value in both developed and developing economies.”[41]
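To make the BCG "stack" more tangible, here is a minimal, hypothetical sketch of the data flow it describes, from raw sensor readings to an ML-style prescription; the field names, readings and irrigation rule are invented stand-ins for real sensor feeds and trained models:

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

# Hypothetical field-sensor reading; a real stack would ingest these via IoT repeaters or APIs.
@dataclass
class SoilReading:
    field_id: str
    temperature_c: float
    humidity_pct: float
    ph: float

def aggregate(readings: List[SoilReading]) -> dict:
    """Aggregation layer: summarize raw sensor data for one field."""
    return {
        "field_id": readings[0].field_id,
        "avg_temperature_c": mean(r.temperature_c for r in readings),
        "avg_humidity_pct": mean(r.humidity_pct for r in readings),
        "avg_ph": mean(r.ph for r in readings),
    }

def prescribe_irrigation(summary: dict, rain_forecast_mm: float) -> str:
    """Toy prescription rule standing in for an ML model trained on weather and yield data."""
    if summary["avg_humidity_pct"] < 25 and rain_forecast_mm < 5:
        return "irrigate 10 mm tonight"
    return "no irrigation needed"

readings = [
    SoilReading("field-7", 24.1, 22.0, 6.4),
    SoilReading("field-7", 25.3, 21.5, 6.6),
]
summary = aggregate(readings)
print(prescribe_irrigation(summary, rain_forecast_mm=2.0))
```

In a production stack, the aggregation layer would ingest data through the repeaters and APIs BCG mentions, and the toy rule would be replaced by models trained on weather, price and yield data.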

The multiple sciences and technologies brought to bear in the BCG example, and the importance of the platform ("stack"), data capture and data analysis, show us the future of scientific research in the natural and social sciences and what early career researchers need to be trained for. The NSF[42], describing its convergence grant programs, quotes the National Academies to make the same point.

"Convergence is an approach to problem-solving that cuts across disciplinary boundaries. It integrates knowledge, tools, and ways of thinking across disciplinary boundaries in STEM fields to form a comprehensive synthetic framework for tackling scientific and societal challenges that exist at the interfaces of multiple fields." (National Academies Press, 2014)

With such powerful tools available, now let’s turn to the five existential problems of the 21st Century.

The Problems to Focus On

To this point, we have largely talked about the tools and technology that the early career researcher could use in a multidisciplinary, data-centric, synthetic approach. But what are the problems to be solved, rather than the science-for-science's-sake approach that characterized much of scientific research up until the end of WWII? The first point about the nature of the problems I draw from the illustrious history of Bell Labs, one of the greatest research centers in history. The story of Bell Labs is told beautifully in Jon Gertner's book, The Idea Factory: Bell Labs and the Great Age of American Innovation. Bell Labs' philosophy can be simplified to three principles:

  1. Work on “messy” problems
  2. Basic research precedes applied research, which is the basis for development and then manufacturing
  3. “One policy, one system, universal service”

"Messy" problems are what we called "impossible" problems at the MIT Media Lab; Safi Bahcall calls them "loonshots" and many social scientists refer to "wicked" problems. As the Cornell Policy Review explains it, "wicked problems result from the mismatch between how real-world systems work and how we think they work."[43] These are the type of fundamental, theoretical problems that a young researcher can build a career around.

Bell Labs' messy problem was to create a nationwide network to carry telephone voice calls to every home and office. Fundamental research on this problem resulted in Claude Shannon's Information Theory and Richard Hamming's error-correcting codes. Both accomplishments perhaps dwarf the nationwide telephone network in terms of contributions to science, engineering and humanity. Both Shannon and Hamming did fundamental research that was then applied to launch the Digital Age (Shannon) and modern computing (Hamming). Outcomes of this magnitude quickly led to development (prototyping) and then to equipment manufacturing.

What I find interesting is that "one policy, one system, universal service", the overarching Bell principle, looks today like the principle that explains, for example, Salesforce, Shopify, Benchling and Exscientia. (The last two companies use data platforms for AI-based engineered discovery.) Perhaps the Bell principle is a fundamental principle for any information service. Maybe that's why software as a service (SaaS) companies create so much value and are so popular with venture capitalists and stock market investors. I digress.

Returning to the problems of the 21st century, Brian Arthur would suggest we have the tools to solve them. That is the way technology evolves — to solve the problems of the times. Perhaps the availability of the tools explains why we have almost a consensus on the problems. In 2018 the UN[44] identified five urgent problems facing mankind. WEF, McKinsey, Schmidt Foundation and many other organizations, think tanks, fund managers, NGOs and non-profit foundations routinely mention these same five problems:

1. Food

2. Water

3. Healthcare

4. Energy

5. Climate Change

Each problem is critical to the continued existence of humanity, which satisfies the Bell Labs "messy" test. As we advance in understanding the existential nature of these problems, we realize that they are not only multidisciplinary but heavily influenced by social, economic and political practices. The field of Population Health, for example, exemplifies the need to merge life sciences, medicine and social sciences to solve the problem(s) for under-resourced communities. In integrating the social sciences and the natural sciences on these urgent problems, I see many opportunities to apply Information Theory and natural language processing (NLP). Nobel Laureate Michael Spence[45] showed us that poverty is caused by an asymmetry of information. This lens of asymmetry, I think, helps reframe any social problem and bring a new perspective to research approaches and the resulting understanding. I am also starting to see NLP, another tool from AI, applied in fields such as psychology and education; large datasets of written or spoken text and interviews lend themselves to methodical examination using NLP, as the sketch below illustrates. Given the long history of quantitative methods in the social sciences, these domains lend themselves to more frequent use of AI.
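As a toy illustration of NLP on interview data (my own sketch, with an invented three-transcript corpus), the snippet below uses simple TF-IDF weighting to surface the terms that distinguish each transcript; real studies would use far larger corpora and richer models:

```python
import math
import re
from collections import Counter

# Toy corpus standing in for interview transcripts; real studies would load thousands of documents.
transcripts = {
    "student_01": "I worry about tuition and housing costs every semester.",
    "student_02": "The tutoring program improved my grades and my confidence.",
    "student_03": "Housing costs force me to work two jobs while studying.",
}

def tokenize(text: str) -> list:
    """Lowercase the text and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Term frequency per transcript and document frequency across the corpus
tf = {doc: Counter(tokenize(text)) for doc, text in transcripts.items()}
df = Counter(word for counts in tf.values() for word in counts)
n_docs = len(transcripts)

def tfidf(doc: str, word: str) -> float:
    """Weight words that are frequent in one transcript but rare across the corpus."""
    return tf[doc][word] * math.log(n_docs / df[word])

for doc in transcripts:
    top_terms = sorted(tf[doc], key=lambda w: tfidf(doc, w), reverse=True)[:3]
    print(doc, "->", top_terms)
```

Even this simple weighting hints at how themes such as housing costs or tutoring could be surfaced systematically across thousands of transcripts.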

Conclusions

If we look at Herbert Simon's research agenda[46] in the 1960s, at the phase transition preceding the 3IR, we see a good start on a current research agenda:

● mathematical

● behavioral-functional

● problem-centered

● interdisciplinary

If we were to update the research agenda today, at the start of the 4IR, the National Science Foundation might include:

● Innovative — transformative (theoretical and not iterative)

● Social Impact — translational (must positively impact people)

● Scalable — impact a lot of people (the benefits of capitalism)

My definition of a research agenda would include:

● AI

● Data Science

● Multidisciplinary

● Translational

● Humanistic

The first four terms are hopefully self-explanatory at this point. “Humanistic” is my way to express the need for us to do better than just bring research-based solutions to market (translational) and economically profit from them. We need to focus the science and engineering research on the five existential problems of the 21st century — water, food, healthcare, energy, climate change. As technology networks the world and complexity increases, these five problems will naturally tend to merge with each other. This trend will only add to the annual number of “black swans” and increase the challenges for humans to survive.

Perhaps referencing human survival is overly dramatic, perhaps not, but do we have sufficient motivation to address the problems? The technology is present in the computational methods, the increasing use of datasets and the application of synthetic approaches. We need only the discipline to tackle the right problems.[47] Neither science nor culture has yet given us the discipline to effectively solve the problems… but we have the tools. Buckminster Fuller says it much better than I:

“We are called to be the architects of the future, not its victims.”

"The power of scientific knowledge is not in its ability to solve specific problems but rather in that it forces us to change our intuitive frameworks and adopt new perspectives from which new transformative solutions become not only possible but often relatively easy."[48] (Quanta Magazine)

To the early career researcher, I quote the Chinese proverb: May you live in interesting times.

[1] Kondratiev wave — Wikiwand

[2] Image credit: PngFind, "IoT 4th Industrial Revolution"

[3] “FINANCE AND TECHNICAL CHANGE: A NEO-SCHUMPETERIAN PERSPECTIVE” Carlota Perez

[4] Learn About Transformative Research | Beta site for NSF

[5] Are mRNA vaccines really a “miracle cure”? — Prospect Magazine

[6] Kondratiev wave — Wikiwand

[7] Bartholomew, James R. (2003). “Asia”. In Heilbron, John L. (ed.). The Oxford Companion to the History of Modern Science. New York: Oxford University Press. pp. 51–55.

[8] The Birth of Modern Science: Galileo and Descartes, a lecture by Ricardo Nirenberg. Fall 1996, the University at Albany, Project Renaissance.

[9] René Descartes (Stanford Encyclopedia of Philosophy)

[10] René Descartes, "Rules for the Direction of the Mind"

[11] Introduction to quantum mechanics

[12] John H. Holland, Complexity: a Very Short Introduction

[13] Information: what do you mean? | Research Gate

[14] Contemplating the End of Physics | Quanta Magazine

[15] https://philpapers.org/archive/CAPPAI-6.pdf

[16] Information Theory

[17] Information Theory

[18] Information theory — Wikipedia

[19] In 1948, Shannon published his paper “A Mathematical Theory of Communication” in the Bell Systems Technical Journal.

[20] 5 Nobel Prize-Winning Economic Theories You Should Know About

[21] Michael Spence, The Next Convergence, Farrar, Straus and Giroux

[22] Israel Kirzner, Austrian Subjectivism and the Emergence of Entrepreneurship, 2015

[23] A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence

[24] Digital technology and social change: the digital transformation of society from a historical perspective — PMC

[25] Physics and information theory give a glimpse of life’s origins | Aeon Essays

[26] History of genetics — Wikipedia

[27] Aeon: Origin story | Santa Fe Institute

[28] Recognizing the Next CRISPR-Level Tech for Biology | Future

[29] Biology is Eating the World: A Manifesto | Andreessen Horowitz

[30] Contemplating the End of Physics | Quanta Magazine

[31] Thomas Kuhn (Stanford Encyclopedia of Philosophy)

[32] Technological revolutions and techno-economic paradigms

[33] Technological paradigms and technological trajectories

[34] Technological revolutions and techno-economic paradigms

[35] Cyber-Physical Systems | National Science Foundation

[36] What Is the Internet of Things (IoT)?

[37] Internet of Things and data placement | Edge to Core and the Internet of Things | Dell Technologies Info Hub.

[38] AWS RCP: Research Cloud Program — Amazon Web Services (AWS)

[39] Systems Thinking to Improve the Public’s Health — PMC

[40] Schmidt Futures

[41] Borges’ Map: Navigating a World of Digital Disruption

[42] Gen-4 Engineering Research Centers (ERC) (nsf20553) | NSF

[43] Learning Systems Thinking at the Graduate Level

[44] Secretary-General’s remarks on Climate Change [as delivered]

[45] The Next Convergence

[46] Hunter Crowther-Heyck, “Patrons of the Revolution: Ideas and Institutions in Postwar Behavioral Science,” Isis 97, no. 3 (September 2006): 431.

[47] There are some problems and technologies, such as quantum computing, which I have not mentioned in the article. These topics may very well define the 5th IR and are very worthwhile.

[48] How Computationally Complex Is a Single Neuron? | Quanta Magazine


Robert Hacker

Director StartUP FIU-commercializing research. Entrepreneurship Professor FIU, Ex IAP Instructor MIT. Ex CFO One Laptop per Child. Built billion dollar company