
Question:

About a decade ago Geoff Colvin, a long-time editor at Fortune magazine and a respected commentator on economics and information technology, agreed to play a special game of Jeopardy. The occasion was the annual convention of the National Retail Federation in New York, and Colvin’s opponents were a woman named Vicki and an empty podium with the name tag “Watson.” Watson’s sponsors at IBM wanted to show retailers how smart Watson is. “I wasn’t expecting this to go well,” recalls Colvin, who knew that Watson had already defeated Jeopardy’s two greatest champions. As it turned out, it was even worse than he had expected. “I don’t remember the score,” says Colvin, “but at the end of our one round I had been shellacked.” 

Obviously, Watson isn’t your average Jeopardy savant. It’s a cognitive computing system that can handle complex problems involving ambiguity and uncertainty and draw inferences from data in a way that mimics the human brain. In short, it can deal with the kinds of problems faced by real people. Watson, explains Colvin, “is not connected to the Internet. It’s a freestanding machine just like me, relying only on what it knows. . . . So let’s confront reality: Watson is smarter than I am.”

Watson is also smarter than anyone who’s ever been on Jeopardy, but it’s not going to replace human game-show contestants any time soon. Watson, however, has quite an impressive skill set beyond its game-playing prowess. For example, it has a lot to offer to medical science. At the University of Texas, Watson is employed by the MD Anderson Cancer Center’s “Moon Shots” program, whose stated goal is the elimination of cancer. This version of Watson, says IBM’s John Kelly, is already “dramatically faster” than the one that was introduced on Jeopardy—about three times as fast. 

Already, reports Kelly, “Watson has ingested a large portion of the world’s medical information” and it’s currently “in the final stages of learning the details of cancer.” Then what? “Then Watson has to be trained,” explains Kelly. Here’s how it works: Watson is presented with complex health care problems where the treatment and outcome are known. So you literally have Watson try to determine the best diagnosis or therapy. And then you look to see whether that was the proper outcome. You do this several times, and the learning engines in Watson begin to make connections between pieces of information. The system learns patterns, it learns outcomes, it learns what sources to trust.
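The training cycle Kelly outlines is essentially supervised learning: show the system cases whose outcome is already known, let it make a recommendation, and use the known answer to strengthen or weaken its internal associations. The sketch below is a deliberately minimal Python illustration of that loop, not IBM’s actual pipeline; the findings, therapies, and weighting scheme are all invented for the example.

```python
from collections import defaultdict

# Hypothetical labeled cases: (observed findings, therapy known to have worked).
training_cases = [
    ({"marker_a", "fatigue"},       "therapy_x"),
    ({"marker_a", "low_platelets"}, "therapy_x"),
    ({"marker_b", "fatigue"},       "therapy_y"),
    ({"marker_b", "night_sweats"},  "therapy_y"),
]

# "Training": strengthen the association between each finding and the therapy
# that produced the known good outcome.
weights = defaultdict(float)
for findings, known_therapy in training_cases:
    for finding in findings:
        weights[(finding, known_therapy)] += 1.0

def recommend(findings):
    """Score every therapy seen in training by its learned associations with these findings."""
    therapies = {therapy for _, therapy in weights}
    scores = {t: sum(weights[(f, t)] for f in findings) for t in therapies}
    return max(scores, key=scores.get)

# "Then you look to see whether that was the proper outcome": check a case whose answer is known.
test_findings, expected = {"marker_a", "low_platelets"}, "therapy_x"
prediction = recommend(test_findings)
print(prediction, "- correct" if prediction == expected else "- incorrect")
```

Repeating this check over many held-out cases is what lets the system, in Kelly’s words, learn patterns, learn outcomes, and learn which sources to trust.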

Working with Watson, doctors at Anderson, who are especially interested in leukemia, have made significant headway in their efforts to understand and treat the disease. Watson’s role in this process has been twofold: 

1. Expanding capacity: It helps to make sense out of so-called big data—the mountain of text, images, and statistics that, according to Kelly, “is so large that traditional databases and query systems can’t deal with it.” Moreover, says Kelly, big data is “unstructured” and flows “at incredible speeds. . . . With big data, we’re not always looking for precise answers; we’re looking for information that will help us make decisions.” 

2. Increasing speed: Kelly also points out that “Watson can do in seconds what would take people years.” The system can, for example, process 2000 GB of information—the equivalent of four million books—per second. When it comes to making sense out of the enormous amount of data concerning the genetic factors in cancer, says Kelly, “Watson is like big data on steroids.” 
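The “four million books per second” comparison in point 2 is easy to sanity-check. Assuming an average book is roughly 500 KB of plain text (an assumption, not a figure from the case), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of "2,000 GB per second = four million books per second".
gigabytes_per_second = 2000
bytes_per_second = gigabytes_per_second * 10**9      # decimal gigabytes
avg_book_bytes = 500 * 10**3                         # ~500 KB per book (assumed)
books_per_second = bytes_per_second / avg_book_bytes
print(f"{books_per_second:,.0f} books per second")   # prints 4,000,000
```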

Clearly, however, Watson is not replacing “knowledge workers” (doctors) at the Anderson Center. Rather, it’s being used to support their knowledge work. In this respect, argues Thomas H. Davenport, a widely recognized specialist in knowledge management, Watson is confirming “one of the great clichés of cognitive business technology—that it should be used not to replace knowledge workers, but rather to augment them.” On the one hand, even Davenport admits that some jobs have been lost to cognitive technology. In the field of financial services, for instance, many “lower-level” decision makers—loan and insurance-policy originators, credit-fraud detectors—have been replaced by automated systems. At the same time, however, Davenport observes that “experts” typically retain the jobs that call for “reviewing and refining the rules and algorithms [generated by] automated decision systems.”

Likewise, human data analysts can create only a few statistical models per week, while machines can churn out a couple of thousand. Even so, observes Davenport, “there are still hundreds of thousands of jobs open for quantitative analysts and big data specialists.” Why? “Even though machine learning systems can do a lot of the grunt work,” suggests Davenport, “data modeling is complex enough that humans still have to train the systems in the first place and check on them occasionally to see if they’re making sense.” 
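Davenport’s point about machines churning out thousands of models while humans “check on them to see if they’re making sense” can be made concrete with a toy example. The sketch below uses invented data and a deliberately crude one-rule model family; it automates the enumeration and scoring of a couple of thousand candidate models and leaves only a handful for human review.

```python
import itertools
import random

random.seed(0)

# Toy labeled data (invented): two numeric features per record and a 0/1 outcome.
data = [((random.random(), random.random()), random.randint(0, 1)) for _ in range(200)]

def accuracy(feature_idx, threshold):
    """Score a one-rule model that predicts 1 whenever the chosen feature exceeds the threshold."""
    hits = sum((x[feature_idx] > threshold) == bool(y) for x, y in data)
    return hits / len(data)

# The machine's "grunt work": enumerate and score a couple of thousand candidate rules.
candidates = itertools.product(range(2), [t / 1000 for t in range(1000)])
ranked = sorted(candidates, key=lambda c: accuracy(*c), reverse=True)

# The human's job: sanity-check only the handful of models that survive the automated search.
for feature_idx, threshold in ranked[:5]:
    print(f"predict 1 when feature {feature_idx} > {threshold:.3f}: "
          f"accuracy {accuracy(feature_idx, threshold):.2f}")
```

The machine does the exhaustive search; deciding whether the winning rules reflect anything real, rather than noise in the data, is exactly the judgment call Davenport says still belongs to the analyst.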

Colvin, however, isn’t sure that these trends will hold true for much longer. Two years after he competed against Watson, Colvin reported that “Watson is [now] 240 percent faster. I am not.” He adds that by 2034—when Watson will probably be an antiquated curiosity—its successors will be another 32 times more powerful. “For over two centuries,” admits Colvin, “practically every advance in technology has sparked worries that it would destroy jobs, and it did. . . . But it also created even more new jobs, and the improved technology made those jobs more productive and higher paying. . . . Technology has lifted living standards spectacularly.”

Today, however, Colvin is among many experts who question the assumption that the newest generations of technologies will conform to the same pattern. “Until a few years ago,” acknowledges former Treasury Secretary Larry Summers, “I didn’t think [technological job loss] was a very complicated subject. I’m not so completely certain now.” Microsoft founder Bill Gates, on the other hand, is not quite so ambivalent: “Twenty years from now,” predicts Gates, “labor demand for lots of skill sets will be substantially lower. I don’t think people have that in their mental model.” 

According to Colvin, today’s technology already reflects a different pattern in job displacement: It’s “advancing steadily into both ends of the spectrum” occupied by knowledge workers, replacing both low- and high-level positions and “threatening workers who thought they didn’t have to worry.” Take lawyers, for instance. In the legal-discovery process of gathering information for a trial, computers are already performing the document-sorting work that can otherwise require small armies of attorneys. They can scan legal literature for precedents much more thoroughly and will soon be able to identify relevant matters of law without human help. Before long, says Colvin, they “will move nearer to the heart of what lawyers do” by offering better advice on such critical decisions as whether to sue or settle or go to trial.
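The document-sorting step Colvin mentions can be pictured as a simple relevance ranking. The sketch below is illustrative only, not how any real e-discovery system works; the documents and issue terms are invented. It just shows how a machine can triage a collection so that attorneys review the likeliest-relevant material first.

```python
# Invented documents and issue terms, purely for illustration.
documents = {
    "email_001": "re: supplier contract termination and breach of warranty terms",
    "email_002": "lunch order for the quarterly offsite next friday",
    "memo_017":  "analysis of warranty claims and potential breach exposure",
}
issue_terms = {"contract", "breach", "warranty", "termination"}

def relevance(text):
    """Count how many of the case's issue terms appear in the document."""
    return len(issue_terms & set(text.lower().split()))

# Triage: sort the collection so the likeliest-relevant documents reach reviewers first.
for doc_id, text in sorted(documents.items(), key=lambda item: relevance(item[1]), reverse=True):
    print(doc_id, relevance(text))
```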

So what appears to be the long-term fate of high-end knowledge workers? Davenport thinks that the picture is “still unclear,” but he suggests that, in order to be on the safe side, would-be knowledge workers should consider reversing the cliché about technology as a means of augmenting human activity: “If there is any overall lesson” to be learned from current trends, “it is to make sure you are capable of augmenting an automated system. If the decisions and actions that you make at work are remarkably similar to those made by a computer, that computer will probably be taking your paycheck before long.”


Case Questions 

1. Consider the definition of knowledge workers in the text: “workers whose contributions to an organization are based on what they know.” In what sense might just about any employee qualify as a “knowledge worker”? For example, what qualifies as “knowledge” in an organization’s operational activities (i.e., in the work of creating its products and services)? What’s the advantage to an organization of regarding all employees as knowledge workers? 

2. Review the sections in Chapter 4 titled “Decision-Making Defined” and “Decision-Making Conditions.” Why are computers, especially cognitive computing systems, so effective in assisting the decision-making process? In particular, how can they increase the likelihood of good decisions under conditions of risk and uncertainty? 

3. Think of a few jobs in which the application of “human-relationship skills” is important—even absolutely necessary. Explain why these jobs require more than just decision-making skills. How about you? Does the job that you want require good human-relationship skills? Do your human-relationship skills need some improvement? What sorts of things can you do to improve them? 
