
"The next big change will come from new ways of programming"


Craig Mundie works closely with Microsoft chairman Bill Gates and has assumed responsibility for the company's research and incubation efforts. Mundie previously held the position of Microsoft chief technical officer. He joined Microsoft in 1992 to create and run the Consumer Platforms Division, which was responsible for developing non-PC platforms and service offerings such as Windows CE, software for handheld, pocket, and auto PCs, and telephony products. Mundie also championed the Trustworthy Computing Initiative at Microsoft. He was in India recently, and the DATAQUEST team caught up with him for a look at what Microsoft's research is focusing on, and what lies ahead.


What are the areas you focus on? I spend a lot of my time thinking about issues in, say, a 3-15 year time horizon, and on where to focus and coordinate action in the next one or two product release cycles.



Some of the biggest changes in the architecture of computing systems and how we program them will probably arrive in the next three to five years. This is the result of a long-term trend. In the physics of microprocessors, we're leaning more and more towards the multi-core computer, and that is the harbinger of a more aggressive movement toward heterogeneous, high core count chips and single-chip systems. These will have to be programmed in a different way in order to get real benefits from them. Many of the things that have allowed fairly easy evolution in software for the last 30-40 years are suddenly going to break, in terms of needing to get more parallelism out of high-scale applications. That will only come through new ways of programming the machine. So, a lot of our research for the last four or five years, and a lot more of our development activity, is now moving in that direction.
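The shift Mundie describes, from serial code that rides clock-speed gains to programs restructured for many cores, can be illustrated with a minimal sketch. The workload and names here are illustrative only, not anything from the interview:

```python
# Minimal sketch: the same CPU-bound task run serially, then spread
# across cores. The workload (summing squares) is illustrative only.
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(n):
    """CPU-bound work: sum of squares of 0..n-1."""
    return sum(i * i for i in range(n))

def serial(tasks):
    # One core does everything, one task after another.
    return [sum_of_squares(n) for n in tasks]

def parallel(tasks):
    # A process pool spreads the same tasks across available cores;
    # note the program had to be restructured into independent units
    # of work before the extra cores could help at all.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(sum_of_squares, tasks))

if __name__ == "__main__":
    tasks = [200_000, 300_000, 400_000]
    assert serial(tasks) == parallel(tasks)
```

The point of the sketch is the restructuring, not the pool: once clock speeds stop rising, only code decomposed into independent units benefits from additional transistors.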

Another area that has been a big focus of mine since I came to Microsoft 14 years ago is lifestyle computing. Game consoles, automotive computing, and mobile computing in handsets are now big phenomena, and Microsoft is evolving a more balanced portfolio in the way that we address these things. Even in the workplace, there is a requirement now for cohesion in the way that we address the mobile market for information, communication, and data access, alongside traditional desktops and laptops. That environment has evolved, which brings up the next topic, one that is very relevant today. Enterprise networks face security threats, and so a great deal of emphasis has been placed over the last five years on securing enterprise computing. Now, employees might want access to any application from their cell phone, from their home computer, and from wherever they happen to be. We actually have to contemplate a fairly dramatic change in the nature of the enterprise network. That will really serve two purposes: one is the creation of an environment where you can have anywhere access for your employees; the other, coupled with the evolution of identity systems, is to allow real-time collaboration across the enterprise boundary.

All of these trends represent probably the biggest collective set of changes that the industry will have to address in 10 or 15 years.


Do you think that 3-15 years is a manageable timeframe for growing

technology?



It is and it isn't. Certainly, as the time horizon stretches, your ability to predict accurately how things will evolve gets fuzzier. When we moved from DOS to Windows, or when we went from the introduction of word processors to Word, the process took five to nine years before those things actually became possible. When we did Windows NT, which became the basis of XP and now Vista, that project started about 15 years ago. Fourteen years ago, we started work on Windows CE, the first cell phone, the watch, the first auto computer, the first interactive television system; all of those things, which we produced the first versions of when I was running those groups in the late 1990s, are only now going into very large-scale deployment.

The time from when you can clearly see the technical direction to the time it reaches strong deployment is actually much longer than most people think. Somebody has to lay the foundation on which you can build the future houses, and that falls to people in the company who think about the general direction of the industry, who think about the natural evolution of technology, and who are willing to do incubation that will explore, from either a technical or business point of view, where things are likely to emerge.

Are there specific things which come to your mind when you are looking at that time? The fundamental re-architecting of the microprocessor, I think, is the biggest change to come to the world of CPUs in 30 years. That will be a major transition for the semiconductor industry and for all the people who have to program new things and build major systems. It has occupied a lot of my time for the last five years already. I started the first formal work at Microsoft to prepare for this evolution, even before we knew how Microsoft would choose to make this change. It was possible for me, more than five years ago, to know that this change would come, simply because you could look at the physics and predict that at some point you would run out of clock speed but not out of transistor scale. You could actually say, "I know that they are going to hit this wall," and the only way forward is going into the other space. We began to work in that space before the wall actually appeared. Now, I think that was a good investment. Clearly, the world of devices will evolve in quite interesting ways; the Pocket PCs and Smartphones that we have today are more powerful than many of the desktops of only five years ago.


This technology will continue to evolve quite dramatically, and that will create a lot of change in the way people interact. One change that's going to happen is in the modality of interaction with machines, towards something more like 'talking'. We are contextually aware; our gaze matters, nonverbal cues matter. We haven't even got spoken-word interaction to be very good yet. We will see machine vision and speech synthesis and recognition soon. We have been working in every one of the areas I mentioned for 5-10 years already in the research group.

So, after 15 years or so, we might have a product from Microsoft where two computers could talk to each other like two human beings?



Yes, although I am more interested in having it easy for the human being to

talk to the computer. Computers can talk to each other pretty effectively within

the domain of discourse of the machine.

It's like speaking to the computer?



Yes, software could transcribe everything. We could feed data into Excel spreadsheets a hundred times faster. For most people at their desk that would not make much of a difference; it's already very fast. Yet, if you intersect this power with awareness of what I want to do, or maybe the ability to anticipate things that I would like to have done, and make it more like a great human assistant, the utility of the machine might be a quantum leap higher.


For the global software industry, the challenge will be that if we are going to see this nonlinear increase in the capability of the hardware, how do we harness it, and what tasks do we put it to? Many times, I talk to people and they think that most of the problems in computing have been solved. The reality is that very few of them have been solved, in terms of what will make these machines more useful.

This is very interesting. The impression you're giving is that, perhaps, users don't need such fast machines; they need the applications. Are you saying that users' expectations are likely to come down in the future?



No. It is always the case, if you go and ask people what they want, that they can either only talk about it at such a high level of abstraction that it's hard to implement, or they will try to extrapolate from the things they know well. But the real breakthrough is in the middle zone, where these things provide a pleasant surprise to the consuming public when they are afforded the opportunity to get them. Prior to their availability, people never would have asked for them or predicted that they would require them.


Computing, as we know it today, has already transformed global society. And yet, my belief is that many of the challenges global society will face will ultimately only be solved through the use of advanced information technology. So, whether it is disease or education or climate, the problem is ultimately going to be grasped somehow through the use of very sophisticated technology.

Today, all of us in this industry have made a very fine business by selling only to the billion richest people on the planet. There are another five and a half billion people who are not using these things at all. If we could just bring what we have today down the rest of the way into society globally, we would clearly make a big difference. There are many opportunities in both dimensions: taking what we already know, and finding ways to use the advances to make it generally more available. And there are classes of problems, whether in the technical domain or just usability, which will be addressed by conceptually different ways of writing software.

How much of your work is involved in working out the actual product map?



The research work in the company in many ways leads us to new businesses, new products, or new features, and we've got a fairly good process now to flow these things from research activity into the business groups. The individual business groups also do some incubation and advanced development work, where they think about things that will be in the product line and the roadmap beyond the releases they are currently working on. In the zone beyond the next release, you can get contributions in three different ways. You can get a contribution from the product group itself, you can get a contribution from the research activity, or you can get strategic input, from me or Bill (Gates) or Ray (Ozzie) or other quarters of the company at the management level, saying we would like to move in a particular direction. When I talk about my work as being strategy, policy, incubation, and research, any of those things, with the exception of policy, could affect the company's roadmap on a time horizon that is several years or more in the future.


You also license out some of that research; there was some work in graphics that you licensed out to Softedge in Ireland. It started a year ago. How do you pick a project to license out?



For the most part, the licensing we do today is fairly narrow. When we find an intersection between a company that wants to work in an area and a particular research technology that may not have direct applicability for Microsoft, or that we may not get around to using for some time, we have expressed the willingness to consider licensing it. Broadly, we are willing to license the majority of things in our IP portfolio, but we don't aggressively go out and try to seek licensees for all of it.

Would you license technologies that are used in your products? 



Occasionally. Generally, we try not to license applications that we think would be directly competitive, because that creates an unnecessary degree of conflict. But often, we invent some technology whose main use may be in a particular product line of Microsoft, but which has alternative applications that we deem to be outside our line of business. So, we might find both an internal use and a non-competing external use.

Vista has been an enormous, complex piece of software which has been

delayed. Open source has an alternative model which is much faster, though it

might have its disadvantages. What do you feel is the future, as far as the next

versions of Office and Vista are concerned?



Let me talk more broadly about this complexity problem. Microsoft has been working consistently for quite a few years to balance our advancement in the tools necessary to manage the complexity at each level as the system has grown. To some extent, the complexity tends to grow in a nonlinear way. Historically, our ability, and other people's ability, to deal with that complexity through the use of tools and training only grows linearly. It's a question of whether those curves cross over or not, but clearly, it has become more and more challenging. The challenge is to build large-scale systems in a way that meets people's requirements for trustworthiness and reliability, providing security, privacy and availability. When I talk about the need to construct systems in a different way, I don't think that the issue at hand is whether you happen to be using a community process or not. Look at security as one example. A few years ago, everybody said community-developed software is going to be inherently more secure. We doubted that, because the class of security flaws evolving today requires experts to analyze code; a casual observer, if presented with the code, will not know that it is susceptible to flaws. History has now shown that to be true; many flaws are emerging, whether it's an Apple system or a Linux system or even other application-level systems. We find there are many other contributors to susceptibility.

Advertisment

So, the problem is that no matter who is writing it and what process you are using to manage it, software is still too much of an art form and too little of an engineering process. Almost every other form of engineering has gradually gone through this evolution. That is why you can build big buildings and bridges and skyscrapers, things that don't fall over, even though not everybody knows everything about the building process. We don't have that level of composition yet in software. So, my personal belief is that the next big change has to come through the introduction of programming methodology that will guarantee composability, and that will allow us to reason mechanically about different aspects of large-scale software systems in a way that is not possible today. We have seen in Vista an outgrowth of our security efforts and trustworthy computing; we have begun to move our development process in this direction.
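One way to picture the kind of composability and mechanical reasoning Mundie is calling for is design by contract, where each component states what it requires and what it guarantees, and a checker verifies those promises at every boundary. The sketch below is illustrative only; the `contract` decorator and example functions are hypothetical, not a Microsoft technology:

```python
# Minimal sketch of design-by-contract checking. The `contract`
# decorator and the example components are hypothetical illustrations.
import functools

def contract(pre, post):
    """Wrap a function so its precondition and postcondition are
    checked mechanically on every call."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            assert post(result), f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return decorate

@contract(pre=lambda xs: all(x >= 0 for x in xs), post=lambda r: r >= 0)
def total(xs):
    # Component A: promises a non-negative result for non-negative input.
    return sum(xs)

@contract(pre=lambda n: n >= 0, post=lambda r: r >= 1)
def scale(n):
    # Component B: safe to compose after A, because A's postcondition
    # (result >= 0) implies B's precondition (n >= 0).
    return n + 1

assert scale(total([1, 2, 3])) == 7
```

Because A's guarantee implies B's requirement, the composition can be justified mechanically rather than by inspection; a violation surfaces at the component boundary instead of deep inside the composed system.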

You have enormous challenges every time you launch your product and in

getting people to upgrade. Does that call for a model shift in the way you

distribute software? 



Yes and no. When Microsoft was offering software that met the requirements of the time on a local-execution basis, the model we had was fine. Today, we say that almost every product we have is software plus service. That is very different from saying software and service. The more we've thought about it, the more certain we are that we will reach a steady state where you have local software running in conjunction with global services. That would be true whether the network had intermittent connectivity or persistent connectivity. The reason is largely along the lines I was just talking about: as the power of the client computer increases, the class of things I want it to do for me may move in a direction that makes it less and less likely that you could centralize that function, due either to the computational part or the communication part. I see these trends coming: the computational capability, when put on the desktop, will allow a quantum change in what I can expect from the local device. That, in fact, can then work in conjunction with the integrated services in the cloud. Then, you get a compelling solution.
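The software-plus-services shape Mundie describes, local computation that keeps working offline, paired with a global service when connectivity allows, can be sketched minimally. All names here are illustrative; no real Microsoft service or API is assumed:

```python
# Minimal sketch of "software plus services": rich local state that
# works offline, synchronized to a service when connected. The class
# and the dict standing in for the cloud service are illustrative.

class LocalClient:
    def __init__(self):
        self.documents = {}   # full local state: usable offline
        self.pending = []     # edits not yet pushed to the service

    def edit(self, doc_id, text):
        # All computation happens locally, at local-device speed.
        self.documents[doc_id] = text
        self.pending.append((doc_id, text))

    def sync(self, service, connected):
        # The service complements rather than replaces local work;
        # with intermittent connectivity, sync simply waits.
        if not connected:
            return 0
        pushed = len(self.pending)
        for doc_id, text in self.pending:
            service[doc_id] = text
        self.pending.clear()
        return pushed

client = LocalClient()
cloud = {}                                 # stand-in for a global service
client.edit("report", "draft 1")
client.sync(cloud, connected=False)        # offline: edit stays local
client.sync(cloud, connected=True)         # online: edit reaches the cloud
assert cloud["report"] == "draft 1"
```

The design point is that neither side is optional: the client is fully functional without the network, and the service adds reach and durability when the network is there.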

My own belief is that we do have to make it easier to deploy our software, but that deployment will be in service of getting into the configuration where sophisticated software runs local to the device and works in conjunction with Internet-based, web-scale services or local enterprise-related services. Today, the whole process of developing, downloading and installing is too heavyweight. A lot of the company's engineering effort is moving to make that more streamlined. If you look at the enterprise, where we can do this because we know what the physical infrastructure and the network infrastructure are, we are already moving to more automation in updating and management. If you look at the things that services like One Care are starting to offer consumers in their homes, it's like having a rental IT department for your house.

There can be many issues of deployment and security, antivirus and other things, all coming from a service that is provided. Over time, we will see a set of programming capabilities that are optimized for this lightweight, component-wise updating or installation of capabilities. But that will be a fairly gradual transition.

Microsoft is already engaged in a global campaign against piracy. From the technology standpoint, what role does technology play in addressing it?



It is a big business issue for us in many countries. For many years, there has been a fine balancing act: restricting the piracy level without putting a large burden on the bulk of the users. Technology has evolved to the point, particularly because of network connectivity, where we can do more to validate which versions were legitimately acquired and which were not, imposing very little burden on legitimate users and a slightly higher penalty on people who use pirated versions of the product.

But in general, we are very concerned now, in high-piracy environments, about issues like security. We know that in many countries a lot of pirated software comes pre-built into the box, with spyware and other things already hooking their fingers into the software. Many people are beginning to realize that if they have paid for software, they want some assurance that they got a genuine version and that they don't have a lot of these problems. So, we are trying to put mechanisms in place to provide those assurances over time. But, at the end of the day, it is more or less about the value proposition that the company has.

Another big effort, here in India, has been to evolve our business model through the product. Many people would say that part of the reason for piracy is that the company and the industry have not found a good way to deal with the affordability question. So, a lot of our effort is going into trying to make offers that allow these products, or the business model of the product, to deal more directly with the affordability question, on the assumption that people say, "I'd rather have the comfort of buying authentic products, but you have to put them in a configuration where I can afford to do it." Things like 'pay as you go' technology, where people can pay for as much as they use, the way they pay for their cell phone using a prepaid card, are an example. We have done engineering that builds a delivery capability to support that alternative business model.

We are working hard to deal with these things holistically, and not just through enforcement action. For the most part, we would like to have enforcement action targeted only at the large-scale commercial pirates. We hope to divert those who are into casual or low-level piracy through offers that make the products more affordable and attractive.

There are people who have moved into IT in the last few years and don't

know what DOS is. Is there a possibility that ten years later, people won't know

what Windows is? You moved from DOS to Windows. As something new comes up, you

might call it Sky or Doors. 



I don't know what will happen to the brand, but because the brand has now expanded into service components (mobile phone, product, desktop or service), the brand may well survive longer than the particular capability that people know of today.

When people moved from DOS to Windows, what they really gained was the move from a character-mode interface to a graphical user interface, and that was so compelling that people left the character-mode interface behind completely. If there were a user interface change that significant in the way people interact with the machine, they would leave behind the point-and-click model of computer interaction. The only high-bandwidth input and output system for people is the visual system, so screens are going to continue to be very important even when we have alternative ways of interacting with the computer. But whether we call it Windows, or as you said, Doors or Sky or something else, I don't know.

We will see the introduction of profound changes in the way people interact with computer systems; whether that will be powerful enough for people a decade from now to say, "I don't even know what it was like to work in Windows," I don't know. I do believe that 10 or 15 years from now, most people's interaction will not be with something that they call a computer. It could be a phone or a television or a game console or their car. You will not address most of the computers in your life as computers. We will still talk about laptop computers and desktop computers, but they will be in the minority in 10 or 15 years.

Zia Askari, Ibrahim Ahmad

and Prasanto K Roy




maildqindia@cybermedia.co.in
