One of the most pressing issues for business this decade is how to make knowledge workers more productive. In a 2010 article called “The Productivity Imperative,” management consulting firm McKinsey & Company calls out the need for increased knowledge worker productivity as one of the five global forces shaping the future of business and society. In its article, the firm points out that while demand for knowledge workers is continuing to grow, the supply isn’t.

This was validated in the Corporate Executive Board report on the “Future of Corporate IT,” which identifies the rise of the knowledge worker, and the need for IT to focus on tools for knowledge worker productivity, as the number one trend that will change corporate IT over the next five years. The report states, “More than half the opportunities for IT enablement are at the customer interface or involve business intelligence or collaboration” (versus process automation). These activities are unstructured and dynamic, requiring participants to make decisions and redefine processes based on the situation at hand, their experience and the available information.

For example, in the U.S., 85 percent of the new jobs created in the past decade required complex knowledge skills: analyzing information, problem solving, rendering judgment and thinking creatively. The problem is that for the most part, the only tools IT provides for those tasks are email and documents (and maybe wikis and instant messaging). What would happen if we could magically increase the speed with which this type of work could be done? What would be the ramifications for business? What would be the competitive benefits?

A company that could speed up these critical knowledge worker decision processes would have a killer advantage over its competitors. So what is needed to make this a reality?

A simpler version of this problem can be found in performance problem resolution for complex distributed computer systems. Today’s computer systems are highly distributed, with so many components interacting in unanticipated ways and so many transactions flowing through them that performance problems are a bear to solve.

Finding the root cause of distributed application performance problems is difficult, even for systems far less complex (and less dynamic) in their interactions than human processes. Not surprisingly, the first step in solving such problems is having the right data about the actual transaction flow. The premise is that if you collect all the information about how a transaction progresses along its path (both as it happens and as a historical view), solving a performance problem becomes much easier. The intuition is simple: the best way to fix a performance problem is to understand its root cause, and the best way to find a root cause is to see the problem while it is happening. If you don’t see it in vivo, it becomes a detective task: you see only the symptoms of the original problem and must derive the root cause from them.

It’s similar to the TV show CSI, in which smart people unravel a mystery from its symptoms and effects without the benefit of an eyewitness. The reason the problem is so tricky (and the shows so interesting) is that the players are studying the aftereffects of an event and trying to ascertain the root cause: this mark means the injury was caused by that action. Contrast that with an eyewitness report: I saw X pick up the knife and stab Y. The latter would make for a really short, effective investigation (and a really boring show).

The same is true for performance problem resolution in computer systems. With the right eyewitness data, analyzing and fixing the problem is, in many cases, a piece of cake. Without it, it is a bear.

Performance problem resolution technology has gone through some interesting phases. First came sophisticated analysis techniques to help solve the problem after the fact (the CSI approach); now we are seeing technology and approaches focused on collecting better, more effective data in the first place (the eyewitness approach). Examining only symptoms makes root-cause analysis expensive, while the eyewitness approach is cheaper and much more effective.
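
To make the eyewitness approach concrete, here is a minimal sketch, in Python, of what collecting transaction-flow data might look like. The names (Tracer, the component labels, the record and slowest_step methods) are illustrative inventions, not the API of any particular monitoring product:

```python
import time
import uuid

class Tracer:
    """Records an 'eyewitness' account of a transaction as it happens."""

    def __init__(self):
        self.events = []  # the historical view: every step, in order

    def record(self, transaction_id, component, operation, duration_ms):
        # Each event captures who did what, when, and for how long.
        self.events.append({
            "transaction_id": transaction_id,
            "component": component,
            "operation": operation,
            "duration_ms": duration_ms,
            "timestamp": time.time(),
        })

    def slowest_step(self, transaction_id):
        # With the full flow recorded, root-cause analysis is a lookup,
        # not a detective task: just ask which step took the longest.
        steps = [e for e in self.events if e["transaction_id"] == transaction_id]
        return max(steps, key=lambda e: e["duration_ms"])

tracer = Tracer()
txn = str(uuid.uuid4())

# Simulate one transaction flowing through three components.
tracer.record(txn, "web-frontend", "handle_request", 12)
tracer.record(txn, "order-service", "validate_order", 8)
tracer.record(txn, "database", "commit_order", 430)  # the culprit

print(tracer.slowest_step(txn)["component"])  # -> database
```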

The same holds true for knowledge processes. If we want to improve knowledge worker processes, the first thing we need to do is collect the right data: not after-the-fact, symptomatic, biased data, but true data about how the process is actually performed. Only then can we be effective at resolving the problems and speeding up the processes. If we could view knowledge worker decision processes as they are executed, then most of the difficulty involved with speeding up such a process (or fixing a broken one) would go away. It would be simple to see where any given decision process stands at any given time and to understand whether it is healthy or not.
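
Applied to knowledge work, the same idea might look like the following minimal sketch: an event log that records each step of a decision process as it happens, so that status and health become simple queries rather than investigations. The class and field names here are hypothetical:

```python
from datetime import datetime, timedelta

class ProcessLog:
    """An as-it-happens record of a knowledge worker decision process."""

    def __init__(self, goal, deadline):
        self.goal = goal
        self.deadline = deadline
        self.steps = []

    def record(self, participant, action, detail=""):
        # Capture who did what, when, as the process actually unfolds.
        self.steps.append((datetime.now(), participant, action, detail))

    def status(self):
        # Because the flow was recorded as it happened, "where does this
        # process stand?" is a lookup rather than a round of emails.
        last = self.steps[-1] if self.steps else None
        healthy = datetime.now() <= self.deadline
        return {"last_step": last, "healthy": healthy}

log = ProcessLog("Approve Q3 vendor contract",
                 deadline=datetime.now() + timedelta(days=5))
log.record("alice", "requested legal review", "sent draft to legal")
log.record("bob", "flagged liability clause")
print(log.status()["healthy"])  # -> True (deadline not yet passed)
```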

Another aspect of knowledge work is that the “what” of the work is known, while the “how” is often discovered as the work progresses. This is in contrast to structured work, in which every detail can be defined and optimized before the work starts. It is important to point out that knowledge work and structured work are different types of work, and the tools needed for each are very different. Many end-to-end processes mix the two types of tasks, and even knowledge workers aren’t always doing knowledge work, just as production workers aren’t involved only in structured work. The main attributes of knowledge work are listed below (a sketch of how such a work instance might be modeled follows the list):

  • Knowledge work consists mainly of interactions between human participants, such as collaboration and negotiation.
  • Content is an integral part of knowledge work; it is both consumed and produced as part of the work process.
  • The participants control the work process and change it on a case-by-case basis, adjusting the flow, the participants and the activities as the work unfolds.
  • Every work process instance has an owner.
  • Every work process instance has a goal, deadline and a defined work product.
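
Under these attributes, here is a minimal sketch of how a knowledge work instance might be modeled. The Case class and its fields are illustrative assumptions, not a reference to any particular case management product. The key design point is that the activity list is not fixed up front; participants reshape it mid-case:

```python
class Case:
    """A single knowledge work instance: owned, goal-directed, deadline-bound,
    with a flow its participants can reshape as the work unfolds."""

    def __init__(self, owner, goal, deadline, work_product):
        self.owner = owner
        self.goal = goal
        self.deadline = deadline
        self.work_product = work_product   # what "done" looks like
        self.participants = {owner}
        self.content = []                  # documents consumed and produced
        self.activities = []               # discovered as the work progresses

    def add_participant(self, person):
        self.participants.add(person)

    def add_activity(self, activity, position=None):
        # Participants control the flow: activities can be appended or
        # inserted anywhere, on a case-by-case basis.
        if position is None:
            self.activities.append(activity)
        else:
            self.activities.insert(position, activity)

case = Case("dana", "Win the Acme renewal", "2011-06-30", "Signed contract")
case.add_activity("Draft proposal")
case.add_activity("Negotiate terms")
case.add_participant("legal-team")
case.add_activity("Legal review of indemnity clause", position=1)  # flow changed mid-case
print(case.activities)
```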

So, given the imperative of increasing knowledge worker productivity, what trends will we see in knowledge work in the next year?

Process and document tool vendors will start extending their suites to encompass knowledge work. “Adaptive” and “dynamic case management” will be the buzzwords used by the process folks, while “system of engagement” is the term preferred by the document management community. Terms like “unstructured process” and “unpredictable process” will become a standard part of the process vocabulary.

There will be continuing attempts to replace email as the preferred tool for knowledge work, but they will all fail. The tools that succeed will embrace email, rather than fight it.

Simplicity and user experience will take center stage in knowledge work management tools. Checklists, guidelines and best practices will take the place of rigorous modeling. Tracking, monitoring and participant empowerment will take the place of centralized control.

The industrial revolution and the tools and techniques it spawned increased the productivity of structured work and propelled economies forward at an unprecedented pace over the last 200 years. The pace of economic improvement over the next 200 years will depend on our ability to bring the same kind of productivity gains to knowledge work.