Getting the Job Done
In order to achieve this new level of computing, cognitive systems must be the following:
Adaptive. They must learn as information changes and as goals and requirements evolve. They must resolve ambiguity and tolerate unpredictability. They must be engineered to feed on dynamic data in real time or near-real time.
Interactive. They must interact easily with users so that those users can define their needs comfortably. They may also interact with other processors, devices, and cloud services, as well as with people. It is still too early for a set of standard expectations for interaction design to have emerged, but one hopes there will always be a way to drill down to find out why the system returned the information it did. Depending on the application, the system should alert the user to data that is missing and suggest how to find it. This is already true in Watson for Healthcare applications that suggest the doctor order tests to improve the certainty of the diagnosis.
Iterative and stateful. They must help the user define a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They must “remember” previous interactions in a process and return information that is suitable for the specific application at that point in time.
Contextual. They must understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task, and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided). A minimal code sketch of the stateful, contextual behavior described in the last two items follows.
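To make the iterative, stateful, and contextual requirements concrete, here is a minimal sketch of a session object that remembers prior interactions and attaches the user's context to each new request. The names (CognitiveSession, user_profile, ask) and the structure are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveSession:
    """Hypothetical session object: iterative, stateful, and contextual."""
    user_profile: dict                            # contextual elements (role, location, domain)
    history: list = field(default_factory=list)   # remembered interactions ("stateful")

    def ask(self, question: str) -> dict:
        # Bundle the new question with remembered state and the user's context,
        # so downstream processing can tailor its answer to this point in the task.
        request = {
            "question": question,
            "context": self.user_profile,
            "prior_turns": list(self.history),
        }
        self.history.append(question)  # later questions can build on this one
        return request

# Usage: each question carries the profile and everything asked before it.
session = CognitiveSession(user_profile={"role": "clinician", "location": "Boston"})
session.ask("What tests would confirm this diagnosis?")
print(session.ask("Which of those are covered by the patient's insurance?"))
```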
Beyond these principles, cognitive computing systems can be extended to include additional tools and technologies. They may build on existing information systems and add domain- or task-specific interfaces and tools as required. Cognitive systems are a superset of information tools. They include aspects of search, business intelligence, artificial intelligence and machine learning, Big Data, text analytics, voting algorithms, logic, process control, and many more. One can think of them as the inevitable culmination of computer evolution, combining what we have learned about information management, access, collaboration, visualization, analysis, and decision support. Many of today’s applications (e.g., search, ecommerce, e-discovery) exhibit some of these features, but it is rare to find all of them fully integrated and interactive. Our comprehension of how to design really good information interaction is in its infancy, but it is getting better by leaps and bounds. Learning systems accelerate this pace.
Cognitive System Processes and Technologies
Cognitive systems are complex by definition. The more they can imitate how the brain takes in and makes sense of information, and then decides to act on it, the better they are. The brain is, by design, an information aggregator. It takes in information from all the senses and stores it. It applies frameworks for understanding this information: What is this? Is it like anything else I know? When did it happen (sequence)? What else was happening at the same time? Are these related in any way? What were the possible causes? What were the outcomes? It constantly analyzes what it takes in. Does this add to or change what I know? Is it important? Do I need to do anything about it? How soon?
The diagram below shows the basic processes and components in a cognitive system. Note that, unlike today’s information-gathering systems, this one is iterative, not linear. The red arrow shows that feedback—the new information, the user interactions—updates each one of these processes so that the system adapts and learns based on its “experience.”
1. Ingestion Loop. Takes in information in any format from any source. Provides APIs and connectors to collect the data. Normalizes it and extracts entities, relationships, and other attributes such as location, time, and sentiment. Stores the results in the Information Hub in a variety of knowledge representations, usually triples; storage may be in search indices, relational databases, triple stores, or graph databases. Provides interaction feedback to the learning engine. (A minimal triple-store sketch follows this list.)
2. Dialog Loop: Problem and Query Definition. Defines the information need interactively. Expands query. Proposes additional hypotheses.
3. Cognitive Processor: Probabilistic Matching and Learning Engine. Explores multiple hypotheses, gathers evidence for each, and weighs strength of evidence. Merges results and presents them in ranked order according to a set of criteria designed for the specific cognitive application.
4. Context Filters. Uses contextual clues (user profile, task, location, history, stage in the task, etc.) to filter output from the matching engine. May add additional criteria, such as utility or urgency, to the ranking. Delivers ranked results. (Steps 3 and 4 are sketched in code after this list.)
5. Exploration Loop. Presents results in interactive form for iterative exploration. Results may be in the form of text or spoken answers, graphs, charts, maps, images, sound, or a combination thereof. Designed to be exploratory and conversational. Gathers user interactions to feed back to learning engine.
6. Feedback. Uses feedback and new information to adjust the weights for each triple. This means that an information quest undertaken today will not return the same results as the identical request made last month. The system should know that the searcher has already received certain information, has chosen some of it to pursue further, and that those new directions should be given more weight. Machine learning systems function best if they learn from use as well as from constant streams of information. Use tells the system what is interesting, what works, and what doesn’t. Feedback loops add users’ interactions and choices to the cognitive processor to enrich the knowledge base. (The weight update is sketched after this list.)
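As an illustration of step 1, here is a minimal sketch of storing extracted relationships as weighted subject-predicate-object triples. The in-memory dictionary stands in for the Information Hub; in practice, as noted above, storage would be a search index, relational database, triple store, or graph database, and the triples would come from NLP extraction over the ingested documents. All names here are illustrative assumptions.

```python
from collections import defaultdict

# Toy Information Hub: triples keyed by (subject, predicate, object), each carrying a weight
# that the feedback step can later adjust.
information_hub = defaultdict(lambda: 0.0)

def ingest(subject: str, predicate: str, obj: str, weight: float = 1.0) -> None:
    """Normalize one extracted relationship and store it as a weighted triple."""
    triple = (subject.strip().lower(), predicate.strip().lower(), obj.strip().lower())
    information_hub[triple] = max(information_hub[triple], weight)

# In a real system these relationships would be extracted automatically from ingested text.
ingest("aspirin", "treats", "headache")
ingest("Aspirin", "interacts_with", "warfarin")
print(dict(information_hub))
```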
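Steps 3 and 4 can be sketched as scoring candidate hypotheses by the combined strength of their evidence and then re-ranking the results with contextual criteria such as urgency. The scoring rule and the urgency boost below are invented for illustration; a production system would use far richer models.

```python
def score_hypotheses(hypotheses: dict) -> list:
    """Weigh the evidence gathered for each hypothesis and rank the candidates."""
    scored = [(name, sum(evidence)) for name, evidence in hypotheses.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def apply_context(ranked: list, context: dict) -> list:
    """Adjust scores with contextual criteria (here, a simple urgency boost)."""
    boost = 1.5 if context.get("urgent") else 1.0
    return [(name, round(score * boost, 2)) for name, score in ranked]

# Each hypothesis carries the strengths of the evidence gathered for it by the processor.
candidates = {
    "migraine": [0.6, 0.7, 0.4],
    "tension headache": [0.5, 0.3],
}
ranked = score_hypotheses(candidates)
print(apply_context(ranked, context={"urgent": True, "role": "clinician"}))
```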
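Finally, step 6 can be illustrated as a small weight update: the triples behind results the user chose to pursue are reinforced, which is why the same request made next month ranks differently. The learning rate and the additive update are illustrative assumptions only.

```python
def apply_feedback(information_hub: dict, pursued: list, learning_rate: float = 0.1) -> None:
    """Reinforce the triples the user followed up on, so the system learns from use."""
    for triple in pursued:
        if triple in information_hub:
            information_hub[triple] += learning_rate  # new directions get more weight

# After a session, boost the triples behind the results the user explored further.
hub = {
    ("aspirin", "treats", "headache"): 1.0,
    ("aspirin", "interacts_with", "warfarin"): 1.0,
}
apply_feedback(hub, pursued=[("aspirin", "interacts_with", "warfarin")])
print(hub)  # the pursued triple now outweighs the other
```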
Why Now?
New inventions arrive constantly. To disrupt a market, however, we need a confluence of market demand, innovation, and the ferment of experimentation that fosters innovative thinking: in other words, a business reason for adopting the invention. We believe that these forces have converged to drive the development and adoption of cognitive computing. Today’s change agents include both positive and negative business pressures:
- Information has become a business focal point. Agile organizations realize they must know what is happening now, and they must be able to analyze it and predict which actions to take if they are to compete effectively.
- Big Data has become both a threat and an advantage. No person can stay abreast of all the pertinent information that streams in daily. Tools to manage and mine Big Data are a necessity.
- ROI: In the past decade, it has become apparent that investments in analytics and in cognitive and Big Data applications pay for themselves.
- IT complexity has become a problem. Data is scattered and incompatible, and organizations need more intelligent systems to monitor data streams, merge the information, and alert them to changes, threats, or opportunities.
- Cloud and open source software foster standards for data exchange and storage, enabling software to discover patterns and answers across large collections of information.
- Pervasive analytics have become commonplace tools to analyze, understand, and predict trends, patterns, or actions across all types of structured and unstructured information.
- User expectations for easier, more interactive systems have increased, fueled by frustration with difficult-to-use, poorly integrated systems and by more comfortable personal devices. Users seek better, more integrated access to information and expect usable conversational systems that understand human language.
- A move from deterministic to probabilistic systems is under way. Real-world problems and human questions rarely have a single neat answer; there are always trade-offs, pros, and cons. We need systems that can handle changing needs and dynamic situations.
- The upshot is that highly integrated systems are gradually replacing separate information silos, or are being built on top of existing systems, to give organizations a simpler, more complete view of what they know.
The Market for Cognitive Computing
Cognitive systems are so new that it is difficult to predict the size or shape of the market. The market is unknown because systems that address ambiguous, dynamic situations are not yet widely available. We believe, however, that we are already seeing some evolution along typical market lines: cognitive tools, cognitive platforms, and cognitive applications. Cognitive systems are also beginning to differentiate themselves by the problems that they address. Each application we see can be placed on the following continua, based on the accuracy demanded of the output.
For instance, investors may want to get a 15-second jump on the market. They are looking for gross trends in stock trading; they need the information fast but are not too worried about perfection, so outliers are fine. At the other end of the spectrum, medical diagnosis must be as accurate as we can make it, with a fair amount of leeway built in for the doctor to explore the information, drill down, and consult with the patient before making a life-or-death decision.
| Less Accurate | More Accurate |
| --- | --- |
| Trend and pattern detection | Question answering |
| Language patterns (syntax) | Deep NLP (semantic) |
| Analysis | Inferences and recommendations |
| General applicability | Domain knowledge |
| Faster to deploy | Slower to deploy; demands domain customization |
As cognitive computing evolves, we predict a variety of highly integrated intelligent systems will emerge to address specific types of problems. Cognitive features are already becoming part of most user-facing software, making the market even more difficult to delineate. We expect technology evolution rather than revolution, paralleling what has happened recently in the search market. As open source search software has replaced proprietary search in enterprise search products, search market revenue has been deeply affected, but this has not been noticeable to those using search products. Instead, more up-to-date, standards-based, and affordable technology has replaced proprietary technology, all while looking the same.
These new cognitive systems will have some features in common. They will all be probabilistic, with a machine learning component. They will be proficient to varying degrees in understanding and conversing in human language(s). They will build and maintain a knowledge base, with tools to extract, normalize, and analyze data from multiple sources and in multiple formats. They will be highly integrated and complex, with technologies that interact with each other to retrieve, compare, infer, reason, and calculate importance or priorities. Above all, they will be interactive and stateful so that a user can carry on a conversation with them.
We believe that, as cognitive features come to be expected, any system will have a cognitive component if it is designed for a task that does the following:
- Is information-rich, particularly in unstructured information
- Is user-facing
- Needs the agility of dynamic, continuous, informed decision making
- Will improve outcomes by delivering hyper-targeted information in context to the right user at the right time
Today, there are only a handful of true cognitive systems available. The best known—IBM’s Watson—is a platform. Its partners, such as CognitiveScale, Welltok, LifeLearn Sofie, WayBlazer, Enlyton, Pathway Genomics, A9, and Cognitoys, are building domain-specific applications on that platform. As we go to press, customers in banking and pharmaceuticals, among others, are experimenting with these technologies, and smaller, nimble companies such as Customer-Matrix or Saffron have developed domain-specific applications for their own platforms. In the past 6 months, cognitive systems have become faster to deploy and more affordable. We believe the market is taking off.