
Assessing Corporate Research: Restructuring at Xerox

by John Seely Brown

Talk given at the
National Research Council Workshop
on Assessing Corporate Research Restructuring
Washington, D.C. April 10, 1995

I must confess that this is the first time I have been on a panel where I am the fourth speaker and feel like I represent a start-up company.  It is usually the other way around.  But after IBM, AT&T and Ford, I represent the little new guy — Xerox.  We did not get into this issue of restructuring corporate research because of budget issues.  Eight years ago, there was a book published called “Fumbling the Future.”  It made us realize that we needed to get our act together and that maybe our game should be to stay ahead of the game, and to do our own restructuring and our own measurements before somebody asked.  So, Mark Myers, myself, and others have been looking at these issues for some time, and I am going to have a little bit different take on it than the prior speakers.

Organization of Research at Xerox

I would like to begin with our organizational context.  It helps to illustrate some of the interesting issues in the management of research and technology.

We have eight business divisions and a group called Corporate Research and Technology headed by Mark Myers.  The Corporate Research and Technology group comprises a small strategy and innovation group, three technology platform centers, and three fundamental research centers.  I call this out initially because the performance measurements of the technology platform centers are going to be somewhat different from the performance measurements of the more basic or fundamental research centers.

The second thing worth noting is that we have only a partially centralized R&D group.  The whole Corporate Research and Technology operation represents about 25 percent of the R&D dollars inside Xerox, which means that 75 percent of all of development is done in the business divisions.  Why did we not push everything into the business divisions, or why did we move from a place that was almost all centralized to spreading a lot more of it out in the business divisions?

One of the interesting things that makes this decentralization work is that we have a Technology Decision Making Board which meets once a month.  The eight business division presidents, the heads of the technology platform centers, and the heads of research all sit on this board.  The chairman is Mark Myers who is a member of the Corporate Office.  We are not allowed to send substitutes, which means the business division presidents must learn to traffic in technology issues and, equally important, the heads of research must learn how to have interesting, useful conversations with the business division presidents.  Mark has perfected the processes of running this Technology Decision Making Board over the last couple of years.  This board is now a very successful mediating device where we really do put our cards on the table.  That is a big change.  In the past, we hid a lot.  Now, we are beginning to realize the value of exposing ourselves more.  If there happen to be conflicts that cannot be resolved here, Mark can bring the issue to the Corporate Office.

Part of the purpose of still having some centralized R&D is to create some synergy among the business divisions.  If you break a company up into business divisions, in what way is the whole more than the sum of the parts?  One of the ways is by having a small centralized research group create common technology platforms that can be used by multiple business divisions.  So, one of our measures is the number of technology platforms we create and how many of these get shared across the business divisions.  If you can show real shareability, then you have a synergy factor that justifies some centralization.  Also, by creating platforms that are shared by several business divisions, you get an implicit corporate architecture across the various products based on that platform and this may be the best way of getting an architecture accepted in a corporation.  So, there are some subtle cultural or social issues that come out of this measure.

What I want to do is to go through a set of shifts that we have been going through over the last three to five years.  These are summarized in Figure 1.

Figure 1

I want to talk a little bit about the theory underlying some of these shifts, but before I do that, I need to warn you that what we are really talking about (and I think everyone here, in his or her gut, knows it) is basically a cultural shift.  It is not just a cultural shift of research; it is also a cultural shift of the corporate world at large.  So the question is, how do you bring about cultural changes in the research community?  That is done partly by measures, but it also is done by basically changing the language, by reframing some of the classical distinctions of research and technology transfer.

What I am going to talk about is how we have been changing our discourse, and some of the techniques for doing so, which turn out to be as important as the measures themselves.  In fact, although measures are important, we feel that if our success is going to turn on measures alone, we will have already lost the game!  Measures without having established the credibility of the research organization are not going to change the perception of our credibility.

So perhaps the real question is how do you establish that credibility.  Then against that backdrop, measures have quite a different tone and purpose — indeed, they then become a tool we can use for ourselves for self improvement as much as a “monitoring” tool used by others.  So, what I want to do is to go through how some of these shifts change how we see ourselves.

The first is the recognition that our job as researchers is not just to create the invention, but also to share some of the responsibility for making it into an innovation.  By “innovation,” we mean invention implemented.  Perhaps one of the reasons why top-level management often comes out of research is that we quickly learn there is often as much creativity in the innovation part of an invention as in creating the invention in the first place.  Why is that important?  Because it changes the arrogance factor in researchers.  As researchers, we need to learn that we are not the only creative people and that, in fact, the amount of creativity it takes to unfreeze the corporate mind and get a radical new idea accepted often requires as much out-of-the-box thinking as did the original invention.  Although this is an obvious business goal, it actually starts to change how we, as researchers, view ourselves.

The second shift — and this has been talked about a lot this morning — is the sense of moving research from being isolated from business to being much more deeply aware of business issues.  Although we are not trying to turn researchers into general managers per se, we do want our researchers to be participants in a broader set of strategic conversations and to be able to hold their own in these conversations.

A third shift for us was to view corporate research and technology, especially corporate research in this case, as not just creating new technologies, but also taking on the tasks of creating new businesses and new business models.  The latter may seem very bizarre.  However, as we heard this morning, people are confused about what the Internet may become as a set of business opportunities, and we will probably remain confused because we do not yet have the right business models.  They have to be invented, which is another aspect of innovation.

In this rapidly changing world, not only is it the case that we as researchers have to be prepared to have multiple careers, but the corporation itself has to be prepared to have multiple new core competencies emerge.  Who is going to determine what those core competencies are?  Who is going to nurture them and who is going to make them catch on?  It is going to be research.  So, corporate research needs to be responsible for the renewal of a technology capability as opposed to just the technologies themselves (as are our universities).  To actually grow those core competencies you have to infiltrate the structure of the organization and not just sit in research.

Another shift has to do with creating our own destabilizing events; that is, how do we completely transform our industry or our market? The reason I bring this up is because Jim McGroddy was talking about one of their measures having to do with the business division's satisfaction with research.  It is very important to realize, however, that if you are in the process of creative destruction, a particular business division may not be very happy with you, especially if you are doing a great job.

Thus, satisfying your customers may be fairly tricky, because if it is not going to be research that is leading to creative destruction, who is it?  You cannot just ask business divisions how happy they are with you.  Obviously, there is a way around that problem because, for example, you also can measure how many new business divisions you have created and how rapidly they are growing, etc.

The fifth shift, which is one of the hardest cultural transformations, is how do we move from obscuring responsibility to embracing it?  It is all too easy for us in the research center to talk about those idiots out there who couldn’t adopt a new idea, and we can very easily become very cynical about just how closed-minded everybody else is.  It is an interesting move when you say, hey, folks, the responsibility stops here.  If we cannot get our ideas accepted, are we not an endangered species, and whose fault is it really?  It is a collective fault.  How do you turn this into a useful reflective experience?

The final shift is how do we think of ourselves as moving from, if you like, an ivory tower to an ivory basement?  What do I mean by that?  I think one of the things we have learned is that the classical distinctions of basic versus applied research really harm our ability to have useful dialogues with each other and with the business divisions.

Figure 2 presents a framework that we have been pursuing for some time, which we call pioneering research; pioneering research meaning doing both radical and grounded research.  Here we go back to the etymology of the word “radical,” meaning going to the root of the problem, as well as the popular meaning of “radical.”  Radical research means to us that you go to the root of the problem and then are willing to reframe the problem when necessary.

Figure 2

“Grounded” means that you have deeply marinated in the class of real world problems you are trying to crack. This is not mission-oriented research.  It is completely different than mission-oriented research. Grounded research is going to the root of a class of problems in which you know deeply and intuitively why those problems are important and you can be fluent in explaining why they are important. We have identified three dimensions of research having to do with the context, style, and kinds of knowledge that come from it. We no longer think of a particular research project as being applied or basic but rather of it being a trajectory in this three dimensional space. The dimension that I am going to focus on here is the one called context — the context of open loop research versus grounded research.

At the founding of NSF, almost 50 years ago, there was a sense that the scientific community unquestionably produced a huge bang for the research buck, perhaps because so many of the researchers then being funded had come through the war and had for years marinated in a class of real world problems, but during the war had never had the opportunity to step back and go to the root of those problems.  When NSF started, most of the researchers were, I believe, deeply grounded but then through time have become more and more disconnected from a grounding context, so that we have almost moved into an open loop situation, where basically we are working on problems that just our colleagues believe are important.  That may be too strong a statement, but it is at least an interesting conjecture to think about.

One last comment on this is that if you think about this notion of pioneering research, there is a slight paradox.  I am suggesting that you start out very connected to a set of real world problems, then pull back and become disconnected as you go to the root of those problems, and then reconnect when you have cracked the fundamental issue that you are going after.  The reason I think this is important is that a corporate research organization must have the freedom to disconnect as well as connect and this is only likely when research is separated from the daily crises of creating products in a business division.

Let me mention another change having to do with technology transfer.  As a caricature we have all understood that research used to invent something,  “throw it over the transom” and expect these poor developers to do something with it.  This, of course, is a recipe for everything except success!  What we have been looking at — and partly as a function of our Technology Decision Making Board — is both the formal processes as well as a set of informal processes for managing technology transfer.  It is the coupling of the formal with the informal that, I think, is the secret.  If you leave everything up to informal processes, then you must rely on measurements or charisma to prove your worth.

If you are trying to do everything formally, however, you get cookie cutter recipes that do not do justice to a lot of the individuality of each situation.  Ideally, you’d like to keep the formal processes elegantly minimal and have them structure the conversations of the informal processes.  A major part of our formal process concerns contracting.  It is one form of measurement that was created for several reasons — probably the major one being that our CEO asked how he should know how much to spend on research.  Is there any way one can tell from an a priori  argument how much should be spent on research?

We had to answer, of course, that we did not really know.  In fact, we now have a small research project trying to answer that.  We are trying to look at research as one of the critical elements in enhancing the adaptability of the organization in a rapidly changing world.

As an underlying metaphor, you might think about research as providing the genetic variance in a species.  If you do that, you can use some mathematics from population ecology to show how increasing the genetic variance helps you adapt in a more rapidly changing landscape.  This is the form of argument we are working on, but it is at best a form of an argument.
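To make the metaphor concrete, here is a toy moving-optimum model of my own devising (it is an illustration of the population-ecology argument, not the mathematics Xerox actually used): a population's trait distribution chases a fitness optimum that drifts each generation, and the mutation spread stands in for the "genetic variance" that research supplies. With little variance the population falls hopelessly behind the changing landscape; with more variance it tracks the optimum and stays fit.

```python
import numpy as np

def mean_fitness(mutation_sd, drift=0.1, generations=200, pop=500, seed=0):
    """Average population fitness over the last 50 generations while the
    fitness optimum drifts at a constant rate per generation."""
    rng = np.random.default_rng(seed)
    traits = np.zeros(pop)
    history = []
    for g in range(generations):
        optimum = drift * g                                  # environment keeps moving
        fitness = np.exp(-(traits - optimum) ** 2) + 1e-12   # floor avoids 0/0 below
        history.append(fitness.mean())
        p = fitness / fitness.sum()
        parents = rng.choice(pop, size=pop, p=p)             # selection
        traits = traits[parents] + rng.normal(0.0, mutation_sd, pop)  # variation
    return float(np.mean(history[-50:]))

low  = mean_fitness(mutation_sd=0.02)   # little variance: population falls behind
high = mean_fitness(mutation_sd=0.30)   # more variance: tracks the moving optimum
print(f"low-variance mean fitness:  {low:.3f}")
print(f"high-variance mean fitness: {high:.3f}")
```

The gap between the two runs is the "synergy" of variance in a changing landscape; in a static landscape (drift=0) the low-variance population would do fine, which is the sense in which the argument only bites when the world is changing rapidly.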

A more empirical approach is as follows: we have in place a contracting mechanism in which the company allocates a certain amount of money for the business divisions to contract with CR&T over a multi-year time frame.  This is an attempt to put in over time some kind of process to see what the alignments really are, what the collective judgments are, and so on.  The business divisions are also allowed to go outside the company to buy their own technology and again, if they keep doing that rather than use ours, that says something to the Corporate Office.

Finally, let me say in all this discussion about change that there is a certain irony in all of this.  We as researchers want to be at the cutting edge, but we are still deeply conservative.  Several times in Jim McGroddy’s presentation, he talked about the joy of researchers engaging in deep market connection, but then he kept talking about their insecurity.  If you think about it, we aren’t worth anything as researchers if our intuitions aren’t honed.  And honing intuitions takes time.  So, there is, I think, a good reason to expect that the research community should be fairly conservative.

Assessing Research at Xerox

Measures of research productivity are decidedly non-trivial, even before asking how research connects to creating business value.  Table 1 shows the results of a recent survey, a ranking of physical sciences research institutions over a ten-year period by the impact of their publications, measured as the total number of citations divided by the number of papers.  This is a well-accepted metric, and it turns out that under it, two groups you might not think of as having the world’s best physics research come out on top, namely the Institute for Advanced Study and Xerox.

Top Ten U.S. Research Institutions in the Physical Sciences, 1981-1991

Ranked by Citation Impact

Rank  Name                                       Papers   Citations   Citation Impact
 1    Institute for Advanced Study, Princeton     1,462      25,538        17.47
 2    Xerox Corporation                           1,619      26,516        16.38
 3    AT&T Corporation                           10,340     169,031        16.35
 4    Harvard University                          7,049     110,760        15.71
 5    Princeton University                        5,593      85,423        15.27
 6    University of California, Santa Cruz        1,541      22,963        14.90
 7    IBM Corporation                             8,929     127,092        14.23
 8    University of California, Santa Barbara     4,583      64,744        14.13
 9    Caltech (including Jet Propulsion Lab)      9,160     128,919        14.07
10    University of Chicago                       4,781      65,203        13.64

Source: ISI’s Science Indicators Database, 1981-91

Table 1

The difference between Xerox and AT&T is indistinguishable really, but it is strange that both of us come out above all the universities.  How could that be?  If you think about it a moment, you realize that researchers in industry do not have to publish to be promoted.  We tend to write a lot fewer papers and only when we have something that we really believe is worth publishing do we bother to publish it, so it’s not surprising those get cited more.  Hence, even well established metrics can have interesting side effects.  Nevertheless, we are increasingly convinced that the classical measures of research having to do with papers, citations, and citation impacts have some value but they are far from the whole story. Patents are at least as important a measure because they become a critical part of how we create cross licenses.
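As a quick check on the metric itself, the citation-impact column in Table 1 is just citations divided by papers. Recomputing a few rows (numbers taken from the table above) shows how a small, selective publication base can outrank a far more prolific one:

```python
# Citation impact = total citations / total papers, per Table 1.
rows = {
    "Inst. for Advanced Study": (1_462, 25_538),
    "Xerox Corporation":        (1_619, 26_516),
    "AT&T Corporation":         (10_340, 169_031),
}
for name, (papers, citations) in rows.items():
    print(f"{name}: {citations / papers:.2f}")
```

Xerox, with fewer than a sixth of AT&T's papers, lands within 0.03 of its impact score, which is exactly the "we publish less, but only what we believe in" effect described in the text.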

We also are starting to use some classical measures of business productivity.  For example, we are doing a lot of work now on qualified technology options.  There is one caveat to that: most qualified-options theories assume the option outcomes follow a normal distribution.  With “increasing returns” economics, based on the notion of lock-in, you probably should shift to a different distribution, one with a very long tail to it.
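The long-tail caveat can be made concrete with a small Monte Carlo sketch of my own (the distributions and numbers are illustrative assumptions, not Xerox's actual valuation model): value a simple option-like payoff, max(outcome − strike, 0), under a thin-tailed and a long-tailed outcome distribution that share the same mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n, strike = 1_000_000, 2.0

# Thin-tailed world: outcomes ~ Normal(mean 1.0, sd 0.5).
normal_outcomes = rng.normal(1.0, 0.5, n)
# Long-tailed world: Pareto (Lomax, shape 2.5) outcomes rescaled to mean 1.0,
# mimicking the few huge lock-in winners of increasing-returns markets.
pareto_outcomes = rng.pareto(2.5, n) * 1.5

normal_value = np.maximum(normal_outcomes - strike, 0).mean()
pareto_value = np.maximum(pareto_outcomes - strike, 0).mean()
print(f"option value, thin tail: {normal_value:.4f}")
print(f"option value, long tail: {pareto_value:.4f}")
```

Although both worlds have the same average outcome, the long-tailed one makes the option worth far more, because nearly all of its value sits in the rare, enormous wins; assuming normality would badly undervalue that kind of research portfolio.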

Traceable research contributions in new products are another measure, but they will always be arguable.  There are an infinite number of ways to double count and, again, if we do not have independent credibility, these measures will fail to convince the unconvinced.

So, measures and credibility have to go hand in hand in this particularly murky world. Perhaps equally important is that we as researchers are going to have to stop kidding ourselves about the significance of our own work. I remember when I brought all the researchers of one of our labs together to discuss value creation in research, one young researcher, quite frustrated, asked, “John, what does it take to convince you that my work is really important?”

I said, “That is the wrong question.  Indeed, ask what does it take to convince yourself that your research will really pay off?” I said that we are all skilled at writing those last paragraphs in grants that explain why our project is so critical but those paragraphs are mostly bull. How do you move from that to a place where you really believe that you have made the right judgment call yourself, especially if you are willing to go beyond just looking at your community for concurrence?

Again, I stress that establishing credibility is at least as important as the measures themselves.  We as researchers must start to take a more active role in shaping the discourse of how research pays off.

Perhaps one of the most critical things is how we as representatives of the different research communities get our own act together so we can speak with a clear, single voice.  If we do not do that, we are going to be torn to shreds by the competing desires of the business divisions.  What we have done is create a laboratory managers’ laboratory: a laboratory for laboratory managers to reflect on their own practices.  There is no leader.  I happen to be just a member of that group, although I am their boss.  What we are trying to do is leverage the diversity of the different laboratory managers in order to help us triangulate on better measures, on things that we have screwed up, and on things we did not quite see right.  We hold weekly two-hour sessions and quarterly off-site meetings.

Related to this, John McTague from Ford made the comment that communications is one of the biggest problems in a large corporation.  It is also true in little start-ups like Xerox.  Is it not curious that both of us talk about communication as a major barrier in making our corporations more effective yet neither of us funds basic research into how to create shared understanding across the organization?   Hmmm . . . .

I will conclude with five suggestions for the National Science Foundation:

  • Encourage the scientific communities to become as conversant in issues of commerce as they are in national defense.
  • Get the various scientific communities to set their own priorities with an eye to the above.
  • Consider a new kind of partnership between universities and industry for creating overlapping communities of practice:
    • Industry with universities (refreshing skills)
    • Universities with industry (grounding of research)
  • Turn the issue of measurements into an ongoing research topic for the various research communities, to improve understanding of them.
  • Change the funding discourse from one of rights to one of privileges.

General Discussion

DR. BABA: Thank you, John, a wonderful presentation that I really enjoyed. I especially liked your comments about researchers being connected, then disconnecting for a time, and being reconnected again. I wanted to pursue that with a question for you. I am a program manager at the National Science Foundation.  Many of my colleagues view the research community as their customers; they are really trying to serve the research community. That is where the ideas come from and the real fundamental knowledge is developed. So you cannot really think about the NSF without thinking about those research communities in the universities, their position in society, and how connected they are.  Are they isolated from the rest of society, or are they connected, so that they have that problem-specific knowledge and they have a deep understanding?  Have you at Xerox thought about how to position researchers in universities in a state of connection with industry and then also have them connected with Xerox?

DR. BROWN: First of all, you might even reframe your initial question in terms of NSF being the impedance matcher to the real customer — the taxpayer.  So you help researchers get connected to the customer, and then you provide multiple kinds of resources, as I provide resources to my researchers, not just money, but also an ability to help them use their ideas to have a greater impact on the world.  Our researchers do not think of us doing that, but we help amplify the impact of their ideas.  NSF could think of itself in that way too.

You have obviously gotten my point about the language we have used.  We have flipped the issue around with amazing success.  Instead of asking our fundamental researchers, what have you done for the company, we asked them how have you leveraged the company to advance your own ideas?  As soon as a researcher starts thinking about that, he or she then realizes that these other corporate people are allies and they ought to start learning their language.  What are their problems and what fundamental research ideas might underlie them?

Going back to mathematics, one of our most theoretically gifted researchers is right now in Rochester working with our engineers in finding new ways to build scheduling software for a critical part of our copiers.  On his own he found this connection and the problems are causing him to extend his constraint algebras into dealing with temporal logics in beautiful new ways,  and he’s turning out incredibly good theoretical work.  He is getting fundamentally new ideas, plus the corporation is winning.  That is the sense of excitement that is possible if you can flip this whole question around.

Universities do not do very good software engineering, nor do corporations.  But university researchers do not even know what a software engineering task is, by and large, because they have not trafficked in real problems of building and maintaining large, commercial systems.  And yet the people who are doing software engineering do not know what a theoretically elegant idea is in software design.  We ought to be able to bring these two communities of practice together and have a win-win situation.

DR. LINEBERGER: A related question to all of that is, in many ways, what you have described is a mapping process, where you have talents, problems, and the ways in which cross-fertilization takes place. A related issue that faces the National Science Foundation, and obviously faces you in the restructuring of your laboratories, is what should be in this pool that you are mapping?  If, for example, one has to give more or less emphasis to various areas, how do you decide that specialization in Nubian mummies is something that is terribly important to the future of the corporation or to the National Science Foundation? That is an important question because, in effect, if at some level the skills that one cares about are not in this pool that you are thinking of mapping, the idea never comes to you.

DR. FROSCH: To address this problem, you can make a list of the business problems you want to solve and ask what kind of knowledge is necessary to solve them.  Two things happen.  One is that you discover the set of answers is much richer than people normally expect.  The second is that you discover it is hierarchically organized; that is to say, mathematics does not appear as the competitor of combustion chemistry.  For instance, you need combustion chemistry to do an engine.  In order to do combustion chemistry, you need mathematics, and so on and so forth.  You can show it as a matrix, or a whole logic tree of things you would like, but you may never find out about Nubian mummies that way.

In fact, there is probably no way to find out about Nubian mummies, except increasing the collision cross-section. You need an inelastic collision cross-section of researchers for business problems. That is what is being discussed here. And the other measure is what is the delayed emission rate of the solution? It is exactly like a physical problem. So there is something to be said just for increasing the collision cross-sections of apparently unrelated ideas.

DR. LINEBERGER: One has to find a boundary. It is easy once one ascertains that, but, in fact, it is not an easy question.

DR. BROWN: Let me give you an idea of how we probe the boundaries.  One of the things that we have found incredibly profitable is having a very aggressive summer intern program.  The primary purpose of bringing the interns in is to challenge us.  The first day they come I meet with them, and I also meet with each one during the summer.  I tell them that their job is to hassle us, and to point out what we aren’t seeing out there.  This is different from the classical notion of bringing in scientific advisory boards, who already “know” what is (or was) important.  The intern program has helped shift some of the religion in PARC.  We used to have our own private programming language.  After about the fifth year of being beaten up by the graduate students coming in, the researchers themselves said maybe we were doing something wrong.  We finally threw it out.

DR. McTAGUE: That is another point.  At Ford, we do not say, “I am an engineer making an engine, and I want to make sure that the laboratory has somebody working on Nubian mummies.”  What we do is make sure that the person who is working on Nubian mummies can convince himself or herself that there is a reason for being at Ford.  We ask for self-certification.

DR. FROSCH: There are all sorts of devices. You can purposely hire a small fraction of people from unlikely specializations, and just throw them in the swimming pool and see what happens. What always happens is they find some problem that somebody was working on and could not solve, or they invent problems. Frequently, they are very important problems.

DR. BROWN: I think another key part of it is how much of our time is spent challenging background assumptions. How much is set aside at NSF for people who want to challenge fundamentally background assumptions? Do you set aside money to probe the fringes, especially cross-disciplinary fringes?  The peer review system is not going to do it.

DR. GOLDSTEIN: That is something that is more at the intersection of NSF problems than of industry ones. Everyone says one of the trends in industry is going more outside the industry labs for research. When industry goes to universities for research, it is sort of acting like NSF, albeit for a different purpose. What kinds of things are you funding in the universities?  How do you evaluate whether it is what you need?  Do you do something different when you are going outside your labs than when you are staying inside?

DR. BROWN: We do not provide that much funding to universities.  We have several institutes, like at Cornell, where we jointly work with the computer science department there.

One of the most surprising high-payoff arrangements is that we will take a professor at a university and fund that person full time as a Xerox employee while he or she is still a full-time professor.  Now, the catch is that they only get to keep four months of the salary for themselves, with the other eight months of funding going to anything they want: funding students they could not ordinarily fund, or paying students to come with them to PARC in the summer.  But they get to make the judgment call.  They know that at the end of five or six years, we are going to step back and ask whether this has paid off or not.  But there is no peer review.  It puts the judgment really back on them.

DR. McTAGUE: We have several ways that we fund things in universities. One of them is a central fund. Actually, I have a central fund that solicits proposals that have to be cosponsored by somebody at Ford, not necessarily in terms of working together, but somebody in the operations somewhere, or somebody in research, or somebody in advanced engineering saying I like the ideas that this person is proposing to work on, and I am going to be the person who acts as an interface with this individual, and my name is attached to this in that sense. We have a group that evaluates these proposals worldwide that involves people in research and people in all of the technical organizations. We have a concern that this supports work that I call “too obviously relevant.” So there is a separate set-aside of money for riskier, less obvious stuff that I personally make the decisions on. Then we have other things. A local plant may be interacting with somebody who is working on manufacturing engineering at a nearby university. They fund that directly. Sometimes we have a hard time finding out what is going on. We do not try to be rigid. We do not try to have a structure inside the company where there is a check point that you have to go through that can stifle creativity. We have got lots of different ways of funding things that go on in universities, not the least of which is bringing people in for the summer, having graduate students do their theses in our lab and having professors spend sabbaticals there.

DR. BROWN: Creating overlapping communities.

DR. McTAGUE: And actually, on the other end of the extreme, getting back to education, we also have high school teachers and students spend the summer in the research lab. You have got to do a full spectrum of things.