Before I introduce him, I would like to remind you that computing
is deeply embedded in the culture of this campus, and we are proud
to say we have maintained our edge. Our students, faculty and staff
enjoy more than 47,000 network connections by which we connect to
the world, and people connect to us. More than 1 million times a
week, people log on to the online catalogue of the University of
Illinois Library, which is third only to Harvard and Yale in
size of its collection. And this campus is a giant in research and
development in science and engineering. We have more than 80
centers, labs, and institutes where important, life-altering work is
underway. Among them is the widely known National Center for
Supercomputing Applications, which is helping to build the future of
high performance cyber infrastructure. And this new office here at
the far edge of the campus is the Beckman Institute for Science and
Technology, where 600 researchers collaborate. And finally, I would be
remiss not to mention the investments in R&D that have brought us
to the happy place of having two of our faculty members win Nobel
Prizes.
(Applause.)
As you know, they are Paul Lauterbur, who was awarded the Nobel
Prize in Medicine for his ground breaking work on the MRI, and Tony
Leggett, Nobel Prize winner for pioneering theoretical work in
understanding superfluidity.
But that's enough about us, it's time that we move on to our
guests this evening. You are here to see Bill Gates, the Chairman
and Chief Software Architect of Microsoft Corporation. As you know,
Microsoft is the worldwide leader in software, services, and
Internet technology for personal and business computing. Last year's
revenues topped $32 billion, and the company employed 55,000 people
in 85 countries. And Mr. Gates is an iconic figure in contemporary
computing.
While attending Harvard, Bill Gates and his childhood friend Paul
Allen started Microsoft, and launched a revolution. The fledgling
company was more interesting than the classroom for Bill Gates, so
he dropped out in his junior year. In his case, it was clearly a
great decision. He not only built a company, but more importantly he
built a vision. Both were built on the idea that the computer would
be a valuable tool on every office desk, in every home, and that
software was key. The penetration of personal computing in our
businesses, our offices, our public libraries, on the train or on
the plane, and in our homes is astonishing, and truly reflects
Bill Gates' view that if the software is right, they will come.
Bill Gates also is an author of two books. One of them, Business
at the Speed of Thought, is available in 60 nations and 25
languages. It shows how computer technology can solve business
problems in fundamentally new ways. By the way, the proceeds of both
books are donated to nonprofits that support the use of technology
in education and skill development.
Since he is a man on the edge, it makes sense that Bill Gates
also has invested in biotechnology, one of the most exciting
frontiers in science, and you probably have heard that he and his
wife Melinda have endowed a foundation with $24 billion. Their
generosity extends to global health, technology for public libraries
that serve low income neighborhoods in the U.S. and Canada, and a
variety of other community and special projects. He's an avid
reader, a golfer, and a bridge player. He is a household name, a
visionary, a philanthropist, and tonight he is our guest. So please
join me in giving an Illinois welcome to William H. "Bill"
Gates.
(Applause.)
BILL GATES: Thank you. It's great to be here this evening.
They told me I couldn't come too early in the morning or the
computer science students wouldn't be up to hear what I had to
say.
I want to share some of the exciting things that are going to
happen in computer science, and how that's going to change the world
in a pretty profound way. Computer science has done a lot over these
last 25 years, but I would say that the most exciting years are the
years ahead, and there are amazing opportunities for all of you in
contributing to that.
It's great to be here in this particular location. The University
of Illinois has a great history of contributing to engineering and
the sciences, and in fact this is the university from which Microsoft
hires more computer science graduates than from any other university
in the entire world.
(Applause.)
I'm always a tiny bit embarrassed speaking to university groups,
because I, myself, am a dropout, but I'm not here to spread the word
about becoming a dropout. In fact, quite the opposite. I'm going to
talk a little bit about how computing got to where we are today. In
the early days of computing the machines were very big, and although
there were visionaries like Vannevar Bush, who as long ago as 1945
wrote about the Memex machine, most people thought of computers as
tools of large organizations. Certainly when I was in high school the
computer was a very daunting thing; people talked about taking those
punch cards you get in the mail and putting staples in them so you
could defeat that evil machine that was always sending you bills
that didn't seem to be correct. And nobody thought of it as a tool
of empowerment.
It really took an amazing breakthrough in chip technology, the
idea of putting at first thousands, and eventually millions, and in
the future billions, of transistors on a single chip to get this
idea that computers could become a tool for the individual. I think
it's fair to say that personal computers have become the most
empowering tool we've ever created. They're tools of communication,
they're tools of creativity, and they can be shaped by their user.
New applications are coming out all the time. Now, there are a few key
elements that allowed that to happen. From a software point of view,
one of the problems in computing was that the machines from every
different manufacturer were incompatible. IBM made different
machines than Digital Equipment, which were different than NCR or
Wang, or UNIVAC, or all the big computer companies of the 1960s and
1970s.
One of the unique things that Microsoft, myself and Paul Allen,
had in mind was that we wanted to have a software layer that would
hide the hardware differences and allow people to invest in
software applications, knowing that they could be used across all
those machines. In fact, the goal was to create this virtuous cycle
that, as more applications became available, people would buy these
machines, and, as more people bought them, the economies of scale
would allow the prices to come down, creating a thriving personal
computer and software business. That was our dream, and that was the
thing that got me to leave university and start the company. And
it's a dream that to some degree came true. Today 600 million people
get up every day and have personal computers that they use in a very
rich way.
There were a lot of milestones in that progression. The very
first machine, the Altair, was a kit computer that could only light
up the lights; it was a miracle if you could even program it to do
that much. Then there was a generation of computers like the
Commodore 64, the Apple II, the TRS-80, and Microsoft wrote the
software which was inside those machines. It was actually a
programming language called BASIC that let you get in and play
around a little bit with the graphics, and write applications.
A major step took place as we moved to larger memory machines in
the early '80s, the so-called IBM personal computers with MS-DOS.
That machine by today's standards is unbelievably primitive, slow,
very limited storage, but it really created the path for this
virtuous cycle to take place. It was in the early 1990s that we
moved up to graphical machines. This was an approach, of course,
that was pioneered at Xerox's Palo Alto Research Center. And then
Apple, with both their Lisa and Macintosh, got behind it. We got
behind it, putting Windows software on top of the PC hardware. It's
hard to remember now, but when that was done it was considered a
crazy thing. People thought the graphical interface was slow and hard
to program to. And of course, today we take that completely for
granted.
The late 1990s were another step change in how we think of these
machines, because that's when they all began to be connected
together. The standards of the Internet, the pioneering work done
here on the browser as a way of visualizing the information across
the entire Internet. Those things created a phenomenon that was
quite unbelievable, and a phenomenon that created almost a Gold Rush
atmosphere. The number of start ups, as we look back on it, was
pretty wild. The valuations of companies that had no business model
were pretty wild. But, in a sense, that hyper-investment and that
attention all accelerated the installations of the connections, and
getting people aware that there was something pretty phenomenal
going on here.
Today I think we very much take it for granted. Certainly when I
want to look up what's new in some area of science or medicine, or I
want to look up something about history, I just take it for granted
that I can go and type in a few simple search terms, and immediately
be connected up with the information that comes from the very best
experts in the world. So we've come a long way.
In fact, the original Microsoft vision of a personal computer in
every home, and on every desk, we've gotten onto a trajectory that's
going to get us there. But the systems we have today are not the
ultimate device. They're not as reliable as we need. They're not as
secure as we need. They're not as easy to use as we need. In fact,
we have a technology that we call Watson that lets us monitor, if
people are willing to send us the reports, when you get error
conditions on PCs. Maybe some of you have seen that dialogue that
comes up and says, do you want to send this report in, and that
gives us a very statistical view of what drivers, what applications,
what's going on in terms of that user experience. So it's one source
of data that says to us that we have a long way to go to achieve the
vision of the personal computer that's as easy as it should be.
At the same time, people are far more ambitious about what
they're doing with these machines. We have a whole new area called
social computing, the idea of being able to reach out, connect with
friends and meet new people, and the new ways that's taking place. We
have new
forms of communication, so-called blogging, and Wikis that are
drawing people in to participate in new ways. In the area of
entertainment this idea that you can play games with your friends,
have massive multiplayer games, not just play but also talk to them,
in some cases see them, those things are being bootstrapped now, and
eventually we'll just take those for granted.
One of the things that helps us drive forward is that hardware
advance. The chip advance, as predicted by Moore's Law, says
you'll have a doubling in power every two years. And that has held
true for these last 25 years. And it looks like it will hold true
for the next 10 to 15 years. Actually, mapping that increase in
transistors into computer performance turns out to be a very tough
problem. As we get more transistors and very high bandwidth, we're
still limited by the actual delay in these systems at every
level of the hierarchy. It is very much a limiting factor, and
there are a lot of clever things we're going to have to do on this.
But certainly we'll have a lot of transistors.
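A minimal sketch, in Python, of the compounding that doubling implies; the starting transistor count is just an assumed round number, purely for illustration:

    # Sketch of the doubling described above: power doubles every two years.
    def transistors(start_count, years, doubling_period=2.0):
        """Project a transistor count forward, doubling every doubling_period years."""
        return start_count * 2 ** (years / doubling_period)

    # Example: a chip with an assumed 50 million transistors today, 15 years out.
    print(f"{transistors(50_000_000, 15):,.0f}")   # roughly 9 billion transistors

Run over the 10 to 15 years mentioned here, the same doubling assumption takes a chip from tens of millions of transistors to billions, which is the "lot of transistors" the following paragraphs count on.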
The graphics processing units, the CPUs, all of these things are
becoming phenomenally effective. We have 64-bit computing that will
give us an address space that will last us quite a long time, moving
up from the 32-bit address space.
And when we think of storage, the limitations of the past, where
you could literally type enough to fill up a hard disk, simply don't
apply now. In fact, on the hard disks that you'll have by the end of
this decade, you'll be able to store thousands of movies, tens of
thousands of photos, and everything you type over your entire
lifetime, all on that single storage device.
That third path, storage, is going up even faster than chip
performance, doubling every 14 months or so, and the people who make
those drives are literally coming to software companies and saying,
what are we going to do with all this storage? What kind of
applications, what things can you create that would take advantage
of that?
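Some back-of-the-envelope arithmetic behind those capacity claims; every size here is an assumption chosen only to make the ratios visible:

    # Rough capacity budget for an assumed end-of-decade consumer drive.
    TB = 1_000_000_000_000            # bytes in a terabyte (decimal)
    disk = 4 * TB                     # assumed drive size
    movie = 1_500_000_000             # ~1.5 GB per compressed movie (assumed)
    photo = 3_000_000                 # ~3 MB per photo (assumed)
    lifetime_typing = 1_000_000_000   # ~1 GB covers a lifetime of typed text (assumed)

    print(disk // movie)              # thousands of movies
    print(disk // photo)              # over a million photos
    print(lifetime_typing / disk)     # typing uses a tiny fraction of the disk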
Screen technology is another very key factor. We eventually need
screens that have unbelievably high resolution. There's no reason
that things like magazines and newspapers should be delivered in
paper form. The cost, the inability to search, to annotate, the fact
that it's not completely up to date, all those things are much
superior in digital form. But our systems still require batteries,
they're still fairly heavy, and the resolution is still improving.
Within a few years, though, we'll be at a crossover point where most
consumption of media will move to that pure digital form, partly
because of these low-cost LCD screens. A 20-inch LCD, which used to
be a $2,000 thing, is coming down, and will be at a $400 or $500
price point within three to four years.
And so, we have to think about how we take all that display space
and resolution and use it on behalf of the user. And so you have to
be fairly adaptive, because the display space you'll have at your
desktop will be much greater than what you'll have as you're moving
around. The tablet-type machine that you carry simply won't have
that same display surface, although at some point we may get screens
that literally go back to the papyrus where you can unroll them, and
then we can get back to having really big screens anywhere that we
go.
And then the graphics processors, those are achieving a level of
performance that will let us provide high-definition realism as part
of a serious software activity, or just as part of the
communications or game playing. The next generation of video games
will be thought of as high-definition devices, including realistic
scenes that are already pretty good on today's PlayStation 2 or
Xbox. There's more than an order-of-magnitude improvement that
comes in that generation, and it is therefore at a level of realism
that will draw people in, and allow for game genres that really
haven't made sense to this point.
All of these things will be connected with very high performance
wireless networks. You're experimenting with this in the Siebel
Center, I know, but things like ultra wideband will provide hundreds
of megabits of connection. And so the idea that you have to physically
connect the computer up to the display will seem very antiquated; you
will connect up to the display simply over that wireless
connection.
And various new approaches, like WiMAX, will let us deliver
wireless data in a very low-cost way without building a lot of
infrastructure. That's fundamentally important to get computing out
into all countries, where you can't afford to run fiber optics, or
DSL, or cable-modem type infrastructure into all the residences. But
these wireless technologies, taking advantage of the semiconductor
advance in using the spectrum, will give us essentially infinite
spectrum to those homes at very, very low cost. And so that's a
breakthrough that we're just taking for granted and designing into
the assumptions we have about the software.
There will be devices of all sizes. The screen that's up on the
wall in a meeting room or in the living room in the house, that's
your largest way of interacting. You do that at a distance. I
mentioned the desktop, I mentioned the Tablet. Of course, the
pocket-sized devices are getting far more powerful as well and the
idea that your digital wallet, GPS locator and games and personal
information will be there, together with your communications
functionality, we'll just take that absolutely for granted.
We've even moved to a device size somewhat smaller than that.
We've actually come out with a watch, which I have on here. This is
called the SPOT watch. And what it does is receive a data
signal over the FM network; it's a data-sideband approach. And so as
I just look at my watch, not only do I see the time, but I see my
calendar that's kept up to date, I see news, I see weather, stock
prices. I get instant messages from people that I've authorized to
send me information right there on my wrist. For sports games, you
can see while they're in progress who's on base, what's going on, and
then get the report on anything that you're interested in.
And the chip that's in here, which is an ARM microprocessor, has
10 times the performance and 10 times the memory of the original IBM
personal computer. And so we can literally download programs into
this device over the FM channel. We take what are called CLR
programs and send them to this thing, and so we can always create new
channels, new ways of gathering information, and it's ubiquitous and
secure.
And so scaling on to all these devices and getting them to work
together, so your information shows up where you want it and you don't
have to manually sync these things or think about the information
mismatches, those are big challenges and those are software
challenges.
In fact, software is where the action is. I admit to some bias in
this, but I think even objectively the hardware people are doing
their jobs, they are going to give us the miracle opportunities, but
will it be usable, will it be secure, will it be fun and exciting
and approachable? That is purely something that the software
industry needs to deliver on.
Let*s look at different domains where software can help us be
more effective. First, let*s take people at work. People at work
overwhelmingly are what we call information workers, designing new
products, dealing with customer service, forecasting demand, buying
and selling. In developed economies, those are the kinds of jobs
that the vast majority of people hold.
And competition exists in terms of how effectively you do those
jobs. Do you design a new model properly? Do you anticipate the
demand? Do you understand the least cost way of getting something
done? Do you see where your quality problems are? And the insights
into those things can be provided through software.
The lack of visibility people have today into what's going on and
into all the information about a business is really quite
unbelievable, and they don't yet have the expectation that they
should be able to look at all those transactions, data mine them,
and navigate the latest information.
But software can change that. Visualization techniques, modeling
techniques, even things that you might think of as mundane, saying
that, hey, when you have a meeting, let's make that meeting 20
percent more efficient, let's allow people who aren't physically
present to participate in a very rich way. When you have a phone
call, why can't you just connect your screen up to their screen so
that, instead of talking about a budget or a plan or whatever the
information is, you can sit there and edit it together?
The very mechanism of capitalism, finding buyers and sellers,
there was a lot of hype in the late 1990s about how that would
change and become friction free, but, in fact, the software
infrastructure was not present. The idea of having software anywhere
on the planet being able to find other relevant software and
exchange very complex information, we didn't have the protocols,
standards and tools to make that work.
So as we connected things up to the Internet, we connected them
up with a presentation standard, HTML. But the idea of arbitrary
software exchanging rich information, no matter what the application
is (take buying and selling as a good example), we don't have that
today. And with the challenges of security and other things, that's
not an easy thing, but it is being built.
These are called the Web services standards, and they're
fundamental to letting information be exchanged in a rich way. They
fulfill a dream of computer science that has existed for a long
time, dreams about exchanging heterogeneous information, and the
advances in XML are finally solving those very tough problems.
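A minimal sketch of the kind of self-describing XML document those standards carry; the element names and values are hypothetical, just to show that any program on any platform can parse the same message the same way:

    import xml.etree.ElementTree as ET

    order_xml = """
    <purchaseOrder>
      <buyer>Example Retailer</buyer>
      <item sku="12345" quantity="200"/>
      <deliverBy>2004-11-01</deliverBy>
    </purchaseOrder>
    """

    order = ET.fromstring(order_xml)        # any XML-aware software can do this step
    item = order.find("item")
    print(order.findtext("buyer"), item.get("sku"), item.get("quantity"))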
And so within the next year, as that foundation gets into place,
a lot of those dreams of the late 1990s will become a reality. The
cost of a transaction, the cost of finding who can sell you the
product that is absolutely the most suitable, of checking their
reputation, and of checking the state of that transaction, all of
those things will move to being digital. That hasn't happened yet,
but with the software advance it absolutely will take place.
People waste a lot of time on various communications modalities.
Today, software doesn't know which calls or e-mails are important to
you. We've all been in meetings where people's cell phones ring.
We've all gone to our e-mail and found lots of unusual, unwanted
e-mail that wastes our time. I have been offered many university
degrees in that spam e-mail. (Laughter.) I don't know if they're
targeting me or if other people are being offered those as well. The
most interesting one said that for dollars a month they would
pay all my legal bills. (Laughter, applause.) That one, I know they
probably didn't mean to come to me. (Laughter.)
Another good story about that: just this weekend my wife and I
were sleeping in a little bit. Our 7-year-old came in and woke us up
and said, "You've got to come, you've got to come." And we said,
"No, no, no, it's still 7 o'clock, why don't you go back and keep
doing what you were doing?" And she said, "Well, I was using the
computer and it's amazing." And I said, "Well, keep using it."
(Laughter.) And she said, "No, no, no, we won, we won money, Dad."
(Laughter.) And I didn't want to say something flip, like, "Hey, we
don't need more money." (Laughter, applause.) So I got up and, of
course, it was one of those come-on type things, and there's my
7-year-old who thinks she's won some amazing contest, and I'm trying
to explain to her that it's just somebody trying to get her to go to
that website, and all that.
So we have a lot of work to do to have the computer model our
interests: what is worth interrupting us for in the various contexts
we're in during the day, what kind of e-mails we should see no
matter what's going on, and what should only be brought to our
attention as we go home.
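One entirely hypothetical sketch of what such a model could look like: score an incoming message against who sent it and what the user is doing, and only interrupt when the score clears a bar. The contact list, keywords, and weights are all invented for illustration; a real system would learn them per user.

    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        subject: str

    CLOSE_CONTACTS = {"melinda", "boss", "assistant"}    # assumed
    DEADLINE_WORDS = {"urgent", "deadline", "today"}     # assumed

    def importance(msg, in_meeting):
        score = 0.0
        if msg.sender.lower() in CLOSE_CONTACTS:
            score += 0.6
        if any(w in msg.subject.lower() for w in DEADLINE_WORDS):
            score += 0.3
        if in_meeting:                # raise the bar while the user is busy
            score -= 0.4
        return score

    msg = Message("boss", "Budget deadline today")
    print("interrupt now" if importance(msg, in_meeting=True) > 0.2 else "hold until later")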
How do we organize our tasks? Think about all the different
things you want to get done; the computer is not very good at
helping us organize those things or notifying us about deadlines.
Literally, take phone calls today. If you call somebody and
they're not available, if you can prove who you are through some
caller-ID-type mechanism, and if you're a person who works with that
other person, their software ought to negotiate with yours, looking
at your schedules, to find exactly the best time for you to meet or
be in touch with each other. The idea of phone tag or busy signals
and those things should really become a thing of the past. But we
need a software model. We need something that's adaptive, that
learns, that has the right authentication built underneath.
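A minimal sketch, on invented data, of two calendars negotiating: each side exposes the hours it has free, and the software proposes the earliest slot both have open.

    def free_hours(busy, day_start=9, day_end=17):
        """Hours of the working day (24-hour clock) not already booked."""
        return {h for h in range(day_start, day_end) if h not in busy}

    caller_busy = {9, 10, 13}        # caller's existing appointments (hour of day)
    callee_busy = {9, 11, 12, 14}    # callee's existing appointments

    common = sorted(free_hours(caller_busy) & free_hours(callee_busy))
    print(f"Proposed call time: {common[0]}:00" if common else "No common slot today")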
And we have far too many communications things: e-mail, phone, and
even among phones we have our phone at home and the portable
phone. The fact that we have to remember phone numbers and update
those things, and that instant messaging is a world of its own, all
of those things really have to come together and help people and
make people far more productive.
In terms of things that people do at home, we are at the
beginning of a revolution in terms of people being in control,
control of when they want to watch a TV show. The digital video
recorder is getting people addicted to the idea that it's up to
them to decide when they want to watch. People are getting addicted
to the idea that, in terms of their music, they can organize their
collection and have different playlists, and that they can have a
portable device that they take with them that lets them play that
music.
We're even getting to the point now where we can take videos and
put those on a portable device.
This is a little device called the Portable Media Center. You can
see the basic size of it and that shows what comes up on the screen.
You connect this to your PC over a wireless or a USB cable and you
can take whatever TV shows you recorded, your movies, your pictures
and all of those things can be downloaded onto this hard disk. It's
a 40-gig hard disk, which, of course, is becoming unbelievably
inexpensive, and then relative to a music player the only extra
expense is just having this LCD screen, and that too is becoming
quite inexpensive.
And so this is a different way of thinking about consuming media,
putting the person in control, having it wherever you want it,
having your lifetime collection easy for you to get at and work
with.
And as people have all this different media, we need to make it
easy for them to navigate around in this information.
I've just got two quick little demos, ideas coming out of
Microsoft Research, that give a sense of how we think visualization
can be made a lot better than it is today. The first screen I've got
here is to help you look at a set of movies or a movie collection.
At the center we have a particular movie, "Blade Runner," and
you can see that off on the side here it shows things that are
related in some way, like everything that's directed by Ridley
Scott. I can go in and cycle through at any speed, see
those different things, and I can pick one of those and say, OK, put
that at the center and then go look it up in the database, get me the
information and tell me who the actors are. So here are all the
Anthony Hopkins movies, here are all the Julianne Moore movies. I
can pivot there. And so this idea of going back and forth between
these different things becomes a fairly straightforward thing.
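The demo itself isn't public code, but the pivot idea can be sketched with a tiny in-memory catalogue; the titles and credits below are only illustrative:

    movies = {
        "Blade Runner": {"director": "Ridley Scott", "actors": ["Harrison Ford"]},
        "Gladiator": {"director": "Ridley Scott", "actors": ["Russell Crowe"]},
        "The Silence of the Lambs": {"director": "Jonathan Demme", "actors": ["Anthony Hopkins"]},
        "Hannibal": {"director": "Ridley Scott", "actors": ["Anthony Hopkins", "Julianne Moore"]},
    }

    def related(center):
        d = movies[center]["director"]
        cast = set(movies[center]["actors"])
        return {
            "same director": [t for t, m in movies.items() if m["director"] == d and t != center],
            "shared actor": [t for t, m in movies.items() if cast & set(m["actors"]) and t != center],
        }

    print(related("Blade Runner"))   # pivot on the movie at the center
    print(related("Hannibal"))       # re-center and pivot again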
Another example is dealing with lots of photos. This is a case
where it's going to be so easy to take photos that you're going to
have thousands and thousands. In fact, one of the researchers at
Microsoft Research goes around with what she calls a little photo
button; it notices transitions during the day and takes a few
hundred photos. And so she doesn't even have to think about actually
clicking a camera; at the end of the day she just gets all these
interesting photos that she can decide whether she wants to share
with people, or keep as memories of her activities or things she's
doing with kids or friends. It's there at no effort at all.
Well, you're going to get a lot of these photos, and what do you
do with them? Well, this is a research project called Media Frame to
start to suggest that we can have user interfaces that make this
practical.
So you see we have a bunch of images here, hundreds, and we can
hover over different ones of them. Some of these actually aren't
photos, they're movies. It's our belief that more and more you won't
think of photos by themselves and movies by themselves, but rather
of still images, motion video, and all of the audio that you capture
either at that time or that you can easily add later on, and that
you'll want to organize all of these things together.
Now, sometimes what you want to do is put various keywords on
these things, and you can see here we've done that a little bit. So
let's take one, let's go in and look at the things that relate to
Thanksgiving.
I still have a fair number of photos here, so I can go in and use
a software algorithm that shows me which are the ones that have
faces in them, and those get highlighted, or which are the ones that
are indoors, and you can see it's automatically able to tell which
those are and highlight those.
And so we have recognition software that actually did the
orientation. It found and notified me of all the slides that were
coming in mis-rotated; it did that without my having to spend time
scanning through those things, because it can see these different
photos.
And, in fact, if I take the photos of those faces and I tell it
who somebody is, if I make an association with my contact list, then
in the future it will be able to do that recognition and do that
categorization in a very automatic way.
We have the idea of finding similar images. Actually, let me go
back into that and try the similarities. When it decides whether
images are similar, it's actually looking at what's inside them. And
so I can take this image and say, okay, what else is similar to
that? If I relax the constraint, eventually everything is similar,
but at this rating it's just these particular images. And so
intelligent analysis is part of how we'll be able to deal with these
things.
If we go back and see the whole set again, we can also try out a
different view where we're using 3D. And here what it does is
organize them by time. Of course, the camera is storing lots of
metadata with these photos. It has a clock in it, so it knows when
each photo was taken, and I can just switch and change that X-axis
and break it down into different groups. And as I select a group of
photos, I can use these tags, add tags, or change tags on a whole
set, all at once.
And so this is just an idea that we ought to be able to make it
reasonable to play around with lots of different photos and media
clips and make navigating through those things a very, very simple
activity.
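A minimal sketch of those two navigation moves, filtering by a keyword and grouping along a time axis; the metadata fields and sample photos are hypothetical:

    from collections import defaultdict
    from datetime import datetime

    photos = [
        {"file": "img_001.jpg", "taken": datetime(2004, 11, 25, 12, 5), "tags": {"thanksgiving", "family"}},
        {"file": "img_002.jpg", "taken": datetime(2004, 11, 25, 18, 40), "tags": {"thanksgiving", "indoors"}},
        {"file": "img_003.jpg", "taken": datetime(2004, 12, 24, 9, 15), "tags": {"holiday"}},
    ]

    def with_tag(tag):
        return [p["file"] for p in photos if tag in p["tags"]]

    def group_by_month(items):
        groups = defaultdict(list)
        for p in items:
            groups[p["taken"].strftime("%Y-%m")].append(p["file"])
        return dict(groups)

    print(with_tag("thanksgiving"))   # like clicking the Thanksgiving keyword
    print(group_by_month(photos))     # the time axis: photos bucketed by month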
Well, the wellspring that really drives software forward is
research, and research is done both at universities and in
commercial organizations. And, in fact, the United States is
dramatically the leader in both aspects of this. The best
universities doing this work are overwhelmingly here in the United
States, and there's a real symbiosis in the relationship between the
universities and the companies trying to build these things into
products, whether they're startups or larger companies; very much a
virtuous cycle of sharing ideas, helping research get funded, and
creating jobs. It has worked in a really fantastic way.
Microsoft is a big believer in investing in R&D. Our R&D
budget at $6.8 billion is substantially the largest of any
technology company. And it's kind of amazing to me; when I grew up,
I always thought IBM was the big company, and actually in terms of
employees they are the biggest. They still have 330,000 -- I
shouldn't say still -- employees, because they've taken an approach
that's more based on services and doing different things than we do.
We're very focused on building software products, but to do that
it's got to be about R&D, and R&D that looks well out into
the future and takes on the very toughest problems.
There are some good examples of collaborations here at the
University of Illinois. The Gaia.Net distributed OS is one where
some of our devices and software components can come in, and I'm
sure we'll learn a lot from what's going on there.
The experimentation in the Siebel Center builds on a lot of
different kinds of software, including some of the Conference XP
things we've done there, and we're very excited to see what can come
out of that.
Some of these research problems are very tough problems. A good
example of that is what we call Trustworthy Computing. In fact, when
I met with the faculty earlier, I was very pleased to hear this is
going to be a major focus of bringing together a lot of research
ideas about security and reliability into an institute that looks at
it in a very broad way.
When the Internet was first designed, it was designed assuming
that different parts of the network could be malfunctioning, that
they might be broken or literally that they might be bombed, but
there was not an assumption that there would be malicious actors on
the network, and so there's no authentication of the From and To
addresses. In SMTP mail, there's no authentication of who that mail
is coming from. Many of the software systems are built around
passwords, which are truly a weak link in terms of being written
down, used on less secure systems, or being very guessable.
And so what we've ended up with is a situation that's very
fragile. Any software bug can result in what's called an escalation
of privilege, and then hijacking a system to either flood the
network with traffic or to send lots of e-mail out that appears to
come from that person or various kinds of attack methods that are
taking place.
There is no doubt that for computer science to fulfill its role
in helping business and helping entertainment, we've got to make
this network secure and reliable, and we have to be able to make
privacy guarantees to people in terms of how information is dealt
with on this network.
And there's a lot of invention taking place here. This has been
our biggest area of R&D investment for many years now. It was
about three years ago that we really pushed this up to the top of
the list and really brought in a lot of additional expertise.
Some of the issues are very simple to solve: moving to smart
cards instead of passwords; keeping software up to date so that when
there are problems they don't sit there waiting for people to do
exploits; and having firewalls so you partition the systems off and
you don't just look at what type of remote call is being made but
also at who's making it. The transition to IPsec and IPv6 will
help us with this.
There are new programming methodologies around interpretive
systems, like our Common Language Runtime, the CLR, that help you
define the privileges of a piece of software, so you're not just
doing exactly what that user is privileged to do but rather saying
what's appropriate for that software.
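One way to picture that, as a hypothetical sketch rather than the CLR's actual API: each component carries its own grant set, and an operation is refused unless that component, not just the user, was granted the permission.

    COMPONENT_GRANTS = {
        "photo_viewer": {"read_pictures"},           # assumed grants, for illustration
        "mail_plugin": {"read_mail", "send_mail"},
    }

    def demand(component, permission):
        """Refuse the operation unless this component was granted the permission."""
        if permission not in COMPONENT_GRANTS.get(component, set()):
            raise PermissionError(f"{component} may not {permission}")

    demand("photo_viewer", "read_pictures")      # allowed
    try:
        demand("photo_viewer", "send_mail")      # not granted to this component
    except PermissionError as e:
        print(e)                                 # refused, even for an administrator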
Some of the newer techniques have biological inspirations:
monitoring systems, having a way of looking at them, and seeing
when their behavior becomes abnormal, both at a system level and at
a network level. So that's a very exciting area.
Another big area of investment is what you might broadly call
natural interface. The keyboard is okay, the keyboard is going to be
around for a long time, but it would be far more natural if we could
use ink and speech as a way of getting information into these
systems.
These are tough problems. They've been worked on for a long time.
Ink is somewhat easier than speech, partly because users have a very
explicit model of what readable handwriting is and what it's not.
And so even as people start to use our Tablet PC that's got ink
built in, they find themselves, say with an E versus a C, being a
little more careful after they've had recognition errors, to loop
the E and not loop the C, and so you get more and more accuracy.
And so these handwriting systems are really coming into the
mainstream. The cost of the digitizer is extremely low, and with the
way that software is adapting to it, we'll take it for granted that
every portable PC is a Tablet-type PC within the next two to four
years.
Speech has been a little tougher. It's one that we are investing
in very, very heavily, but users have no explicit model of speech.
In fact, when the speech systems start to make errors, users'
tendency is not only to get irritated but to talk louder, and
whatever model the system has of their speech becomes less and less
capable as they get slightly more irritated at it. And the fact that
there's no predictability, and that the system makes errors that
every other thing you've ever spoken to, which are humans, would
never make, is kind of frustrating.
And so we have to get the accuracy levels to be extremely high.
There are great advances here, not just driven by the extra power we
have, but by modeling: going through, for example, all of the user's
e-mail and understanding the corpus of words that are typical in
their discourse (we're using that both in mail and in speech
capability), having deeper language models, and having better
microphone-type systems.
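A minimal sketch of that personalization idea: build word statistics from a user's own mail and use them to prefer the candidate this user actually writes when the recognizer is torn between similar-sounding words. The sample corpus is invented.

    from collections import Counter

    user_mail = [
        "the siebel center demo is thursday",
        "send the siebel center agenda to the dean",
    ]

    counts = Counter(word for message in user_mail for word in message.split())
    total = sum(counts.values())

    def prefer(candidates):
        """Pick the acoustically confusable candidate this user actually uses."""
        return max(candidates, key=lambda w: counts[w] / total)

    print(prefer(["siebel", "seagull"]))   # -> "siebel", given this user's corpus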
One thing that's fascinating is the difference between humans and
computers in a noise-free environment -- well, if you take the best
case, a noise-free, context-free environment where you're just
doing random words for a human and a computer, the computer is not
that bad; the difference is very modest. Where the human gets the
wild advantage is that the human has context, a sense of what the
speaker might say next based on what's going on and what they know
about the subject. And humans are dramatically better at doing noise
elimination, and this is a case where the signal people and the
speech people are coming together now to get a sense of, okay, how
does the human audio system do this?
Like most things related to human capabilities, our appreciation
for how good the human system is just gets higher and higher as we
try and create the equivalent on a digital basis.
The ultimate natural capability is the idea of artificial
intelligence, and there is less research on this today than when I
left school 25 years ago, but there is some very good research going
on. Bayesian systems are a type of system that attempts to model
non-linear activities, and there are many similar approaches that
are becoming ripe and can be applied in interesting ways.
We're starting out with some very simple things. The only AI
product that actually sells today is this vacuum cleaner that goes
around, so that gives you a sense that we're really at the low level
there, down on the rug trying to find our way around.
The next generation will be using AI algorithms in games. If you
play a computer opponent today, after you've done that for two or
three days, that computer opponent becomes somewhat predictable, and
the range of skills is either too high or too low. With an AI
machine built in there, we'll be able to make that richer and
richer; in fact, we can learn from users how they play, gather that
information centrally, and reprogram the AI machines down on those
different systems.
One fascinating trend is that all of the sciences are becoming
very data driven. Take a science like astronomy. Jim Gray, who's one
of our researchers, realized that if you want to propose a theory
about astronomy, you need to look into all the different databases
that are out there, and yet these databases were not connected in a
way that let you perform very rich queries and try to see what's the
density of a star system like this, or are there any cases where
these two kinds of things are near to each other.
And so he led a project taking very advanced software technology,
Web services, and built, together with a lot of collaborators,
what's called the National Virtual Observatory. And so no longer is
astronomy just sort of being there at 3 in the morning with your
eyes to the lens when a supernova explodes; rather, it's doing
sophisticated data mining, looking at and forming theories about the
information that's been recorded over all time in this very large
virtual database that's been created there.
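The essence of that federated query can be sketched in a few lines: ask several independent archives the same question and cross-match the merged answers. The archive contents and the crude proximity test are invented for illustration.

    archives = {
        "survey_a": [{"object": "NGC 1", "ra": 2.1, "dec": 27.7}],
        "survey_b": [{"object": "QSO 7", "ra": 2.2, "dec": 27.8},
                     {"object": "NGC 9", "ra": 150.0, "dec": -3.2}],
    }

    def near(a, b, tol=0.5):
        """Very crude proximity test on sky coordinates, good enough for a sketch."""
        return abs(a["ra"] - b["ra"]) < tol and abs(a["dec"] - b["dec"]) < tol

    # One query, run across every archive, then cross-matched for close pairs.
    objects = [rec for records in archives.values() for rec in records]
    pairs = [(x["object"], y["object"]) for i, x in enumerate(objects)
             for y in objects[i + 1:] if near(x, y)]
    print(pairs)   # -> [('NGC 1', 'QSO 7')]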
That same sort of thing is necessary across all the different
sciences, biology being probably one of the most interesting and
challenging. And so the interplay between people who have computer
science backgrounds in data mining and modeling and networking, and
what they'll be able to bring to advancing biology in these next
several decades, will be quite phenomenal.
I think it's really biology and computer science that are
changing the world in a dramatic way. Other fields, they're great,
but they are not changing the world. They're not empowering people.
They're not making the advances like these are. And it's actually at
the intersection of these two fields where perhaps some of the most
interesting activity is taking place.
Certainly nature has come up with a learning algorithm that we
have no understanding of, and as we, through various techniques, are
approaching that kind of capability, implementing that in software
will be a profound contribution.
While all this computer activity is so neat as a tool, one of the
big problems we get is the so-called digital divide: you have a lot
of people who have access in richer countries, but even there, not
everyone, and yet you'd like the tool to be everywhere. What's the
solution to that? Well, you can drive the cost down on the hardware
side and the software side; that's happening very effectively. You
can make sure there's lots of philanthropy and donations around it;
there's some good activity there. It's actually the communications
costs, the broadband connections, that are the most expensive part
of this, but even there the advances in wireless can solve the
problem.
One of the projects I have had the most fun with, which Microsoft
and my foundation did over the last six years, was to go out to
18,000 different libraries and put in a total of 50,000 computers
that are just sitting there, so that anyone who can reach a library
can get in, get out on the Internet, get that information and use
the latest software. And it's amazing to see how people come in and
use that.
There's a lot more to be done there in terms of the schools and in
terms of looking at this on a global basis, but it's a very
important goal, particularly if you see this as being almost like
reading literacy, with the same imperative for everyone to have
access.
Now, the tools of technology are changing global competition and
there is a lot of concern about this. The tools of technology are
making it possible for not only manufacturing-type jobs to be done
anywhere on the globe but actual services-type jobs, not just
programming, not just call centers, but design, architecture, any
type of work. If you have these rich collaborative interfaces that
the Internet and the rich software on top of it make possible, that
will let people compete for that work anywhere around the world.
And so we're going to go from a world where, historically, the
thing that best predicted your opportunity was whether you were
lucky enough to be in one of a very few countries, to a future where
the best predictor will be your level of education. If you have a
college education, no matter what country you're in, there will be
substantial opportunity because of the way these things connect
together.
Now, this is an interesting challenge for the United States. The
United States actually did its best work during the 1970s and 1980s,
and that was actually a period of great humility, of concern about
international trends. In fact, the great concern of that era was
that Japan had a better model, Japan was ahead of us, Japan was
going to own industry after industry and just wipe out the United
States -- including computing, which was going to move there. And
although that was completely overblown, completely wrong, and
underestimated the vitality of both the commercial and research side
in this country, it allowed us to really step back and examine what
our strengths were in driving forward, and that's why such amazing
work, I think, was done during that period.
Here we're going to have that same type of questioning as we're
seeing more global trade and all these different activities, as we
see particularly India and China stepping onto the world stage with
their university output and the energy and the innovation in those
countries, with a lot taking place. That will challenge the U.S. to
say, are we really able to keep our edge, are we really able to keep
ahead? And it's the investment in research, the value of
intellectual property, a lot of things that the U.S. is actually
pretty good at, that we just have to renew our commitment to.
So my view is that in the next 10 to 15 years computer science
really will be magical; the impact, whether you think of what it's
going to do for medicine, for education, or for worker productivity,
is really hard to exaggerate.
And I'm not saying this is going to happen in the next year or
two. Every year there will be some neat things as speech and ink and
all these things come along. But it's really the accretion of those
things, where people are used to the tool and the tool is super
secure, that creates this shift in how things are done. How will
education be done, how will that change? Well, that's one of those
great open questions.
The key element in doing this is having great people, and
Microsoft succeeds by having great people, universities succeed by
having great people and making sure that the re-investment in those
people takes place.
There's a little bit of concern that enrollments in computer
science are off from the peaks of years past, and looking at that,
particularly on a national basis, it says, OK, what aren't we doing
to show the opportunities that are here?
Another challenge, of course, is the lack of diversity; both
women and minorities in computer science are not nearly at the
levels that we'd like. Obviously we'd like those numbers to be 50
percent, fully diverse, and yet the numbers are much more at the 10
to 15 percent level, and there's a lot that needs to be done about
that.
I'm sure that this is a very multifaceted thing: showing the
opportunity, giving people an opportunity at a young age to see that
it's very interesting, and pointing out that these jobs aren't all
just hard-core coding-type jobs. There are plenty of those, those
are neat, I like them, but a lot of the jobs are much more about
having skill sets where you need to know computer science but also
understand usability and social factors and marketing and business,
and bring those things together. Those are a lot of the really great
jobs that are there.
On the minority opportunity front, I'm very pleased that I've
been able to sponsor what's called the Millennium Scholarship
Program. (Applause.) Here at this university there are 25 Millennium
Scholars, including some here tonight. It's a neat thing, and I hope
all of you will serve as role models and really encourage other
people to do the things you're doing, because I think that's a key
part of the path forward.
So the tough problems just take great people. Will we have truly
simple user interfaces and secure systems, and what direction will
this head in? There are a lot of unknowns that are going to make
this, in my view, by far the most interesting place to be involved.
And so I'm excited that many of you will go through a computer
science program and join a variety of companies, perhaps Microsoft,
perhaps some of the startups, and really make this a reality,
because this is the important stuff and the great stuff is all ahead
of us.
Thank you. (Applause.)