The Fourth Industrial Revolution, in which machines start to perform physical and mechanical tasks as well as cognitive ones such as object identification and translation, has many implications for libraries. Recent breakthroughs in machine learning are spreading into fields once seen as exclusively human, such as news writing and driving.
This significant change in the capacity of machines is what the phrase “Fourth Industrial Revolution” is trying to capture. It is distinct from the changes created by the previous digital revolution, which was characterized by the development of semiconductors in the 1960s, mainframes in the 1970s, personal computers in the 1980s, and the internet in the 1990s.
In his 2016 book, The Fourth Industrial Revolution, Klaus Schwab noted that what differentiates the Fourth Industrial Revolution from the previous digital revolution is not just novel technologies, but their fusion. He observed that this fusion of advanced technologies creates new and interesting interactions across the physical, the digital, and the biological worlds. Naturally, the impact of those interactions on our society is fundamental and far-reaching.
That the newest technologies are blurring the lines between the physical, the digital, and the biological domains is a fascinating observation. These technologies are disrupting existing industries and transforming today’s production, management, and governance systems in their entirety.
If recent advances in digital technologies are truly taking us beyond the digital revolution, that is reason enough for libraries to take heed of the Fourth Industrial Revolution. What exactly does it mean that the lines between the physical, the digital, and the biological spheres are getting blurred? How will that relate to the future of libraries? What kind of libraries would be appropriate for a world in which the digital blurs and mixes into the physical and the biological, and vice versa?
Virtual and Augmented Reality
Driving the blurring of the physical sphere with the digital are the technologies of virtual, augmented, and mixed reality. Many of us remember Google Glass, which was released in 2013. Some libraries purchased Google Glass units and lent them to library patrons for hands-on experiences. Google Glass was withdrawn from the consumer market in 2015, partially due to privacy-related concerns.
But the use of Google Glass continued in manufacturing. Currently, the Enterprise Edition, the second generation of Google Glass, is available to corporate users only. It is being used at several companies, such as Boeing, GE, and DHL, to increase the productivity and efficiency of factory workers. For example, employees of the farm equipment manufacturer AGCO get reminders through their Google Glass headsets about the series of tasks they need to perform while assembling a tractor engine. They can locate and access information related to the assembly of parts. They can scan the serial number of a part and bring up a manual, photos, or videos with Google Glass while working on the assembly itself. AGCO reported that the addition of Google Glass made quality checks 20% faster and helped in the training of new employees on the job (Tasnim Shamma, “Google Glass Didn’t Disappear: You Can Find It on the Factory Floor,” All Tech Considered, NPR, March 18, 2017; npr.org/sections/alltechconsidered/2017/03/18/514299682/google-glass-didnt-disappear-you-can-find-it-on-the-factory-floor).
Google Lens is a newer augmented reality (AR) product. It is a camera-based AR technology that started supporting the camera app on Android smartphones in 2017 and is now available in the Google app for mobile devices. Powered by machine learning, it performs real-time translation and object identification; scans and translates texts; and identifies popular landmarks and common objects. Using Google Lens, people can translate a restaurant menu into another language on the fly. They can also simply point their smartphone camera at a popular landmark to find out its hours and historical facts, or identify flowers, trees, and other common objects.
These examples show how the technologies of AR are starting to infuse digital information into the physical world. The early development of the AR cloud, through the Open AR Cloud Association (openarcloud.org), is also speeding up this process of the marriage of the digital and the physical worlds. The AR cloud is a real-time machine-readable 3D map of the world. Its purpose is to serve as a kind of shared spatial screen that enables multi-user engagement and collaboration in the AR environment. With the AR cloud, AR applications can overlay relevant contextual information onto people, objects, and locations, thereby making people’s experience of their physical surroundings as much digital as physical. Due to this, the AR cloud is regarded as an important future software infrastructure in computing, particularly spatial computing.
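To make the idea of the AR cloud as a shared spatial layer more concrete, here is a minimal Python sketch of how an AR application might look up digital annotations anchored to physical coordinates. The anchor data, the library-themed notes, and the radius query are all hypothetical illustrations, not part of any actual AR cloud API.

```python
import math

# Hypothetical sketch of an AR-cloud-style spatial lookup: digital
# annotations ("anchors") keyed to physical 3D coordinates (in meters).
# All names and data here are invented for illustration.
anchors = [
    {"pos": (0.0, 0.0, 0.0), "note": "Library entrance: open 8am-10pm"},
    {"pos": (5.0, 0.0, 2.0), "note": "Special collections reading room"},
    {"pos": (5.2, 0.1, 2.1), "note": "Exhibit: history of the printing press"},
]

def annotations_near(pos, radius):
    """Return the notes of every anchor within `radius` meters of `pos`."""
    return [a["note"] for a in anchors if math.dist(pos, a["pos"]) <= radius]
```

A headset reporting its position near the reading room would retrieve the two nearby annotations at once; a production system would replace this linear scan with a spatial index and persistent anchors shared across users.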
When physical objects and surroundings are represented digitally for communication and manipulation, the physical and the digital worlds are fully enmeshed with each other. When people can interact with these two worlds as if they were one, people’s activities, both online and offline, will take on a completely new character. This is what “spatial computing” means.
Big Data and the Internet of Things
Another technology that is accelerating the meshing of the physical and the digital is Big Data. Big Data is defined as “high-volume, high-velocity and high-variety information assets that demand cost-effective and innovative forms of information processing that enable enhanced insight, decision-making, and process-automation” (gartner.com/en/information-technology/glossary/big-data). The tools and technologies for storing, retrieving, and analyzing today’s high-volume, high-velocity, and high-variety data are essential components of Big Data.
The Internet of Things (IoT) is important because it generates a large volume of machine-to-machine data. The IoT is the network of uniquely identifiable things, that is, physical objects digitally represented on the internet through sensors and actuators. The network of those sensors and systems captures, reports, and communicates data about their environments and their own performances. The so-called “smart” things also interact with their environments through actuators. A smartwatch, a smart thermostat, a Fitbit, and an Amazon Echo smart speaker are examples of such IoT devices.
What makes Big Data different from just more data is its ability to apply sophisticated algorithms and powerful computers to large datasets and reveal correlations and insights that were previously inaccessible through conventional data warehousing or business intelligence tools. It is at this point that machine learning enters the scene as a powerful means to discover the patterns of correlations.
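As a toy illustration of this kind of pattern discovery, the following Python sketch computes the Pearson correlation between two small, made-up sensor series. Real Big Data pipelines surface the same kind of relationship across billions of records, with machine learning models that go far beyond a single correlation coefficient.

```python
# Made-up daily readings from two hypothetical building sensors.
temperature = [18.2, 19.1, 20.5, 21.0, 22.4, 23.1, 24.0]  # degrees C
energy_use  = [310, 305, 296, 290, 281, 276, 270]          # kWh

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strongly negative value: warmer days coincide with lower energy use.
r = pearson(temperature, energy_use)
```

The point of the sketch is the workflow, not the arithmetic: data streams in from sensors, an algorithm quantifies a relationship, and the finding feeds a decision or an automated action.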
The IoT infrastructure will be built slowly over many years. But the fully realized IoT can eventually connect all physical objects in the world and allow us to detect, track, and control them digitally through their online representations. Those connected physical objects will also be able to communicate with one another to perform more sophisticated and complex tasks. This type of machine-to-machine communication and cooperation will significantly increase the degree of automation in the real world.
The more physical objects are brought into the IoT network, the more digital data the IoT network will generate. And this massive amount of data will fuel the development of more accurate machine learning algorithms. The Big Data phenomenon is likely to continue, since the IoT will necessitate such massive amounts of data to be collected, stored, retrieved, analyzed, and acted upon on an ongoing basis. The quickly advancing Big Data tools and technologies will make the world’s IoT infrastructure more robust and complete.
Just like virtual and augmented reality (VR/AR), the IoT aims to create a digital layer integrated with our physical world, thus accelerating the merging of the physical and the digital worlds. In the mature stages of the IoT and VR/AR, things in the world will be as much digital as physical in the way they interact with us.
Synthetic Biology and 3D Bio-Printing
Synthetic biology and 3D bio-printing technologies are transforming biological processes into digital ones. Today’s digital computer is an electronic device that stores, retrieves, and processes data. A computer program is made up of a set of instructions that the computer hardware performs as specific operations. These operations all boil down to manipulating bits, the smallest units of digital data in a computer, namely 0s and 1s. The goal of synthetic biologists is to make a biochemical process—such as gene synthesis or protein production—digital, that is, more like computer programming.
Genes in a cell, which are functional sequences of DNA, contain the encoded instructions for building a particular type of protein. Ribosomes read those instructions and build the specified protein. In this sense, genes are analogous to programs, ribosomes to production machines, and cells to factories equipped with molecular machinery that produces chemicals. Synthetic biology studies how to synthesize genes in order to program cells to produce proteins and other products. It aims to build biological parts, devices, sensors, and chemical factories, which can then be used to make pharmaceuticals, renewable chemicals, biofuels, food, and so on.
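The gene-as-program analogy can be made concrete in a few lines of Python. The sketch below “translates” a DNA coding sequence one three-letter codon at a time, much as a ribosome reads messenger RNA. Only a handful of codons from the standard genetic code are included, so this is an illustration of the principle, not a bioinformatics tool.

```python
# A tiny subset of the standard genetic code: DNA codon -> amino acid.
CODON_TABLE = {
    "ATG": "Met",   # methionine, also the start codon
    "TTT": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TAA": "STOP",  # stop codon: end of the protein-coding region
}

def translate(dna):
    """Read a coding sequence three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein
```

For example, `translate("ATGTTTGGCAAATAA")` yields the amino acid chain `["Met", "Phe", "Gly", "Lys"]`: a program (the gene) executed by a molecular machine (the ribosome) to produce output (the protein).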
The first organism with a fully synthetic genome, JCVI-syn1.0, was created in 2010 by the American biotechnologist J. Craig Venter and his team. The DNA code of a replica of the genome of the cattle bacterium Mycoplasma mycoides was written on a computer, assembled in a test tube, and then inserted into a hollowed-out shell of a different bacterium. The genome assembly process required stitching DNA segments together into a complete 1.08 million base-pair synthetic genome and propagating it as a single yeast artificial chromosome. The synthetic genome then encoded all the proteins required for life.1 This means the DNA “software” built its own “hardware.” In 2019, another team of scientists at the Medical Research Council Laboratory of Molecular Biology, a research institute in Britain, succeeded in synthesizing the complete genome of E. coli, named Syn61.
DNA sequencing and DNA synthesis are crucial in synthetic biology for two reasons. DNA sequencing allows synthetic biologists to read the instructions of how to construct a biological part, and DNA synthesis enables them to write new genetic information by replicating, modifying, and creating genes. The drop in the cost of DNA sequencing and DNA synthesis will facilitate and accelerate developments in synthetic biology. Microorganisms are small and require only a tiny amount of energy to function. Consequently, the ability to program cells and biological processes to produce specific outputs with precision through synthetic biology can usher in a truly new era of manufacturing.
Synthetic biology is not limited to synthesizing DNA molecules and proteins. Today’s researchers are also using 3D bio-printing technology to build whole cells, tissues, and even organs. This brings biology even closer to the digital realm. 3D printing is an additive manufacturing technology: it creates a physical 3D object layer by layer. A 3D bio-printer uses bio-ink, which is organic living material, while a common 3D printer uses a thermoplastic filament or resin.
In 2016, regenerative medical scientists at Wake Forest Baptist Medical Center succeeded in bio-printing living tissue structures using a specialized 3D bio-printer. Since then, researchers have been able to bio-print ear, bone, and muscle structures that mature into functional tissue and develop a system of blood vessels when implanted in animals.
The vision of synthetic biology is to repurpose living cells as substrates for general computation. This vision has so far manifested itself in genetic circuit designs, which attempt to implement Boolean logic gates, digital memory, oscillators, and other circuits from electrical engineering. Biological circuits and parts are not yet sufficiently modular or scalable. But synthetic biology holds a key to the potential future where electronics and biology become fungible, and matter becomes programmable.
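To illustrate what it means for a genetic circuit to implement a logic gate, here is a Python sketch of a genetic NOT gate modeled with a repression Hill function, a standard device in synthetic biology modeling: as the repressor input rises, expression of the output gene falls. All parameter values are illustrative, not measurements of any real circuit.

```python
def not_gate(repressor, k=1.0, n=2, max_output=100.0):
    """Repression Hill function: high repressor input -> low gene output.

    k is the repressor level at half-maximal repression; n is the Hill
    coefficient controlling how switch-like the response is.
    """
    return max_output / (1.0 + (repressor / k) ** n)

def to_logic(level, threshold=50.0):
    """Digitize an analog expression level into a Boolean signal."""
    return level > threshold

# Truth-table behavior of the NOT gate:
#   little repressor   -> gene strongly expressed -> logical True
#   abundant repressor -> gene repressed          -> logical False
gene_on = to_logic(not_gate(0.1))
gene_off = to_logic(not_gate(10.0))
```

Composing such gates is how genetic circuit designers attempt larger logic, although, as noted above, biological parts are not yet as modular or scalable as their electronic counterparts.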
When this happens, the function of a mechanical sensor, for example, may be performed by bacteria, and those bacteria may function in connection with electronics and computers. Living organisms can be used to produce materials and may serve as an interface for everyday electronics. In such a future, living organisms and non-organic matter will become fungible in function. In combination, developments in computational design, additive manufacturing, materials engineering, and synthetic biology can create the true merging of the physical, the digital, and the biological worlds.
Digital Blurring and Libraries
So far, I have described how today’s technologies are blurring the lines between the physical, the digital, and the biological. But how does that relate to the future of libraries?
When all physical objects and surroundings are given a digital overlay, the library’s spaces and collections will also be given digital representations. In a world where the physical surroundings are persistently enmeshed with digital information, library patrons will expect to interact with the digital representations of the collections even while they are physically in the library building. In such cases, libraries will want to consider what kinds of digital information should be overlaid onto their physical collections and spaces.
Furthermore, continuing advances in virtual and augmented reality technologies will provide interesting opportunities for libraries to not only present their collections in new ways but also deliver completely new types of experiences. This may also make the strict division of the physical, the digitized, and the born-digital collections less important in the future.
In addition, the blending of the physical and digital worlds may alter the nature of the library’s spaces and services. What kind of blended experience should libraries aim to create for library patrons who are visiting the physical library if the physical and digital spaces are tied together? In what ways would libraries take advantage of spatial computing when it becomes available? Where would libraries deploy biosensors if they become available?
Although these may sound like parts of a sci-fi story now, they will play an important role in charting out new directions for library collections, services, and spaces that are appropriate and appealing in the world where the digital, the physical, and the biological converge. And when such convergence is reached, an important question will be how libraries enable people to experience knowledge and information, not simply access it.
Endnote
1. Sleator, Roy D., “The Story of Mycoplasma mycoides JCVI-Syn1.0,” Bioengineered Bugs, vol. 1, no. 4 (2010): 229–30, doi.org/10.4161/bbug.1.4.12465; Gibson, Daniel G., John I. Glass, Carole Lartigue, Vladimir N. Noskov, et al., “Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome,” Science, vol. 329, no. 5987 (July 2010): 52–56, doi.org/10.1126/science.1190719.
Bohyun Kim is the Associate University Librarian for Library Information Technology at the University of Michigan Library.