Remarks by Bill Gates, Chairman and Chief Software Architect, Microsoft Corporation
Harvard University, Cambridge, Massachusetts
February 26, 2004

BILL GATES: Thank you. It's great to be back at Harvard. I have a lot of fond memories of my time here. When I came here as a student, my friend Paul Allen decided he would come out, and every day he would say to me, hey, why don't you leave school and start Microsoft, because he'd seen the power of the microprocessor, and we'd talked about wanting to be in on the ground floor. The seminal event for me was when he was in Harvard Square and picked up a copy of Popular Electronics magazine that had the MITS Altair kit on the cover. And it was a clear day, I was up in my Radcliffe dorm, he brought that up there and said, look, it's going to happen without us, we've got to do it now. And so I said, OK, you're right. Let's get that BASIC out there, and that led to the creation of Microsoft.

Harvard has been an amazing institution, and as I talk tonight I think you'll see a lot of ways that, out into the future as computer science and computer technology move forward, Harvard can contribute a lot to that, not just purely in what we think of as computer science, but in a number of different areas.

Now, Microsoft has benefited from Harvard and its excellence more than just in my time here. Steve Ballmer, who is the CEO of the company, also was here; we met here. In fact, it was kind of a funny night, we went out and saw two movies, "A Clockwork Orange" and "Singin' in the Rain." And when we came back, Steve was saying to me how he thought the greatest impact you could have on the world was going to work for the government, because after all the government had a lot of resources, and using those effectively was maybe the way to make the greatest contribution. And that started a debate between Steve and me that went on for a number of years.
Steve graduated, went to work for Procter & Gamble, came to visit me a lot, and then started his first year at Stanford Business School. By then, Microsoft had about 35 people, and I really needed somebody to help me. I had been signing contracts left and right, and we were kind of overcommitted; in fact, the demand was so amazing. So, I said to Steve, really, I need you to come help out now. And so he, too, is a dropout, although he's a dropout from Stanford Business School. And so that's how that got things going. We have a lot of other great people who are in key roles at the company who came from here, Bill Veghte, Chris Capossela, and lots and lots more.

A big message you'll hear from me tonight is this opportunity for computer science, that it is almost ironic that today people are underestimating computer science more than ever before. During the late 1990s, there was that hype around the startups and the valuations, and people looked past and ignored some of the tough problems that had to be solved to get computing to be a fantastic tool. But actually during that time period, some phenomenal work was going on. And that's why we can say it's really this decade that some of the seminal advances that will utterly change not just computing, but how business is done, how education is done, how people communicate and entertain themselves, those things will move into the mainstream. And so what we have today we'll see as quite limited compared to what will happen over the next five or ten years.

Now, the starting of this industry was very humble indeed. That kit computer on the cover of that magazine had an 8080 microprocessor in it, and you had to buy it and assemble it yourself, and it came with 256 bytes of memory. And you paid extra to buy the 1K memory board. And, when I wrote the BASIC, I had to create a BASIC that could not only run in 4K bytes, but would leave room for the user's program and data in those 4K bytes.
And so, crafting this program so that there were no extra bytes and it was as tight as could be was a real hand-tinkering thing. And I always said to everybody else, if you can save a single byte out of this program, I'll pay you $20. In fact, nobody has yet collected on that. Still, today, I'd challenge you: if you can save a byte in 4K BASIC or 8K BASIC, that would be worth something.

And so memory was very scarce. The machines didn't perform well, there wasn't much in the way of peripherals, and yet that got it all started. There was a set of machines that came after that, the TRS-80, Apple II, Commodore PET, all of which had Microsoft BASIC as the software that ran on them. So, when you turned one on, essentially the operating system for the disk, and the tape, and the screen was this language interpreter that exposed the rich graphics and music capabilities of those systems.

The next milestone we got to was the move to what many called 16-bit computing, and that's where IBM came to us. It was a very strange thing: they came and said, we've been told that because it usually takes IBM five years to do something, we're supposed to find something we can do in two years, and that's the key thing about our project, and we picked doing a home computer. And so we said, wow, OK, we'd love to help you with that, but we want you to do something unique around 16-bit to make it something that really changes things, and three of the people on that project team and our team made it a very state-of-the-art machine.

And what we did with that machine is, we said, OK, now it's time for the software boundary to hide the hardware differences, to make it so all the applications can run on the different machines, whether they're from IBM, or HP, or Wang, or Digital Equipment, whatever company offers that machine. And this was a radical idea, and one that just hadn't been done before, because all the software was done by the hardware companies.
And yet, it was a very necessary step to create the virtuous cycle that we believed in. And that cycle is pretty simple: you need lots of neat applications that are very low cost, and to fund the development of those applications, you'd better be able to sell them in volume. And so you need a lot of machines out there that are completely compatible, and if you can get this going, the more of those machines get bought, the cheaper they'll get, because the components will get cheaper, so you'll have more of them, and then you'll have more applications, and so more people want to buy the machines. And that is something that in the early 1980s came to pass: a completely different structure for the computer industry, Intel at the chip level, Microsoft providing the basic software, and then thousands of companies providing applications and solutions on top of that. And it moved at a pace that was quite phenomenal, and now we just take for granted that that's what computing is all about.

It's very different from the computers of when I was in high school; those were really tools just of large organizations. The PC, even in the 1980s, became the best tool for creativity and empowerment that man had ever created. Today over 600 million people are using those machines, in different and very productive ways. And yet, I'm saying we've really just scratched the surface.

The next milestone after the IBM PC, with MS-DOS, was the arrival of the graphical interface, and this is an advance where some pioneering work at Xerox had been seen by Apple and Microsoft, and we both said, hey, let's go out and build machines that work that way. It's hard for people to appreciate that that was very controversial at the time. People thought, this is strange, it's hard to write software for this, it's very slow because you have to do more up on the screen, and what's this whole icon thing anyway?
I mean, do we need 29 fonts in a document, and all those kinds of fruit-loop little icons? So it was a period of about six or seven years that we were evangelizing that, getting the software developers to do work around it, and eventually it got to critical mass. Windows was a key part of that, and the launch of Windows 95 was sort of the celebration of the fact that now we had 32-bit computing, and the graphical interface was absolutely in the mainstream.

Soon after that came a change that started this Internet gold rush, and that was the idea that you could browse, and the connections were cheap. We had anticipated for many years that e-mail and online services would take off, but it never really happened. 3Com declared the Year of the Network again, and again, and again, and it was all just too hard, not at critical mass. But then, out of the university environment, a few dozen institutions, including Harvard, got connected up, and the standards of the Internet, the protocols, became the basis for full connectivity. And that was a phenomenal period. Netscape got started, and everybody thought that was an amazing thing; people thought that, and it was. They used to always say that they moved at Internet time, so whatever that was, we must have been moving at double Internet time to get something out that moved ahead and was a strongly reviewed and accepted solution there.

That got the industry into this period where people said, well, isn't everything going to change, won't the way people buy, do banking, insurance, set up travel, won't that all be done on the Internet? And the answer is, yes, it will. It takes time, it takes the software foundation, relative to security and protocols and information representation, that didn't exist in those years but is now emerging. So in some ways it was just the sense that it would be an overnight thing that was wrong.

Our industry has always benefited from, and been driven forward by, the phenomenal advance in hardware.
For the CPU itself, that was the thing that got Paul Allen and me to say, wow, computing will be cheap; the software is the missing element. And Moore's Law says you'll double the transistors on those processors every 18 to 24 months; that's held true for these last 25 years, and it appears, certainly for the next 10 and probably the next 15, that it will hold true. Now, we need very clever software that can take that increase in transistors and map it into an increase in performance. Those aren't quite as directly tied as you might hope and expect, particularly because memory latency is the thing that's holding us back. Even clock speed doubling doesn't mean a doubling in performance, because we are waiting most of the time for the memory hierarchy to bring data in. And you can improve clock speeds a lot faster than you can improve memory latency. But we will do the software to get that performance.

Other elements of the system are critical, too. The storage people, and they're better than the processor people, double storage capacity every 12 to 18 months. And so thinking of a disk that can not only store everything you type in your whole life, or everything you ever hear or see, but also the movies you watch and the photos you take, it's very realistic. Today it's a 40-gig disk, then 80, 160, 320, and pretty soon you're talking about a serious amount of storage, and that's part of the $400 personal computer. So it means that scenarios that deal with rich data types become very realistic.

A good example of this is, I'm sure many of you have experienced, portable media players that let you take your music wherever you want to go. This fall there will be a new class of devices that we call the Portable Media Center that lets you take not just music, but also movies and photos.
So you just connect this up with a USB cable, and everything that's on your PC, the TV shows you wanted to record with Media Center, anything that you've created, comes down onto this device, and then of course you can go and play it wherever you go. And so the demand to get media in digital form, to have that be flexible, to have that be available, whether it's lectures that you might want to catch up on or just fun TV shows, all of that will absolutely be there.

Screen technology is also very important. We need big screens, so you have a big field of view, like opening up an entire newspaper. I've got three big 22-inch displays on my desk, one on the left, one in the center, one on the right, and that's helping me understand that with that kind of display area we need improved ways of doing window management, and of remembering how you have things set up. It's very high DPI, and so the readability is amazing; that too is something that's very important to us. We want to move documents from paper onto the screen, where they can be searched, where they can have media, where you can take notes on them and share them with the people who would be interested in that. And so all the advantages of digital will come into play once we overcome our disadvantages. Disadvantages like: we've got this great Tablet device that came out a year ago, but it's not quite as thin as a paper tablet, it's not quite as light, and the battery doesn't last quite as long as a paper tablet's, since those don't have batteries. But the progress is there. That's something that we've been investing in for more than a decade, the ink recognition software and the hardware design that can get that into the mainstream, and that will come about. So screen technology, including eventually even screens that you can roll up or fold up, is something where we need to think, OK, what kind of software will be valuable to take advantage of that?
The next generation of video games, whether it's PCs, or Xbox, or the next-generation PlayStation, will be high-definition gaming. They will be very realistic games. And so it's not just the existing genres that we'll drive forward there; it's all of these social genres where people can talk and play together, and things that will appeal to people of all age groups brought into that entertainment scenario. The graphics processors in these devices are phenomenal. That's why we can do rendering in real time that would have taken rendering farms days to do in the past. And so you don't even think about it in terms of pixels, because the anti-aliasing, and shadows, and smoke, and fire, and all the effects that have been very tough now move to a level of realism.

All these devices will be connected over wireless. Wireless, Wi-Fi, hopefully, we don't know if it's in every building at Harvard, but if it's not, someday it will be. Houses, corporations, this is just something we'll take for granted: you carry the device anywhere you want to go, and it's connected up. There are new forms of wireless, like ultra-wideband, that provide hundreds of megabits of connectivity. So you don't need to connect the computer to the screen; it just connects through the wireless. You don't need to connect your disk up to the computer; the ultra-wideband just connects that up. Over long distances we have the idea of doing what we call mesh networking, which is a big software research project at Microsoft, and combining that with new wireless techniques like WiMAX lets you reach out and connect to the rural areas, where it's never economic to run terrestrial wired infrastructure, and get to this goal of everybody being connected up.

Now, we see all these different devices working together: the wall-sized screen, the desk-sized one, the Tablet, the pocket device that's not only the phone, but your GPS locator and your personal information management.
But there's even glanceable information on your wrist, à la Dick Tracy. We took a big step towards that with the shipment just a month ago of the device I'm wearing, called the SPOT watch. Now, what's in here is a radio receiver, and so it's getting the weather report and stock prices. What happens is you go to a PC and pick what kind of news you want, anything you care about, horoscopes, daily word problems, who you want to be able to send messages to you, how you want your calendar to show up, and as soon as you do that a message gets sent to the watch that tells it what information to present. When I leave work I look at the traffic on here to know which way to go home, because it's completely up to date. And that just means glanceability is part of the hierarchy of how all these devices need to work together.

Amazingly, the microprocessor in here, which we worked on with National Semiconductor and which is based on the ARM architecture, is ten times as powerful as the original PC. It's got ten times as much memory as the original PC. I could put 80 copies of BASIC into this little watch. In fact, it does have an interpreter; we download programs all the time. If we decide to present soccer games that are happening in a better way, or baseball, or any new idea we have, we just download it over this FM network that we use to connect up to the device. So it's actually the CLR, the .NET runtime, that's built into every one of these.

Now, where will software change things? A lot of the economy is people dealing with information; we usually call those information workers. That's very broad: if you're on the phone talking with customers, if you're purchasing things, if you're designing new products, if you're figuring out marketing campaigns, you are an information worker. And our proposition is that the way that you deal with information is way, way more inefficient than it should be.
The way that you are able to navigate through sales information, the way that you can look at quality data, the way you can find out the attitudes of customers and transfer those things. You often get these things on a piece of paper. So you get the sales data, but, say, it's bigger than you expect, or smaller; it's just a number on a piece of paper. What are you going to do, call someone up and say, I'm confused about this? What you should be able to do is just click on that, and see it by time, by products, by geographies, see what the currency effect was, see what happened before and after you did a special, take out a certain class of customers and see if that trend is different. And just sit there and click and navigate at a level of semantics that you understand, the way that you think, with all that information coming through in a simple way. This doesn't exist today. In fact, the information workers who should be demanding that just don't know that it's possible. They don't understand the visualization techniques that could be brought to bear on that.

Think about meetings. Meetings are this huge thing that clogs up your schedule, and yet anyone who goes to those meetings will say, no, that wasn't a perfectly effective use of my time: I didn't need to be there for part of that meeting, some things could have been sent out in advance, we could have coordinated it better. Some people didn't need to fly in, or if they didn't fly in, we were waiting; we didn't need to meet, because without their participation we couldn't make the right decision. And just taking meetings and making them 20 percent more effective with software and wireless and tablets is very doable, we've proved that; at least at that level it's very straightforward. That alone unleashes hundreds of billions of dollars of productivity into the economy, where people can make better decisions, save costs, all the things that really drive the economy forward.
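To make the click-and-navigate idea concrete, here is a minimal sketch, in plain Python with invented sales rows (not any Microsoft product): "see it by time, by products, by geographies" is just re-aggregating the same records along whichever dimension you pick.

```python
# Hypothetical sales records -- the data behind the single number
# that would otherwise arrive on a piece of paper.
sales = [
    {"quarter": "Q1", "product": "widgets", "region": "EU", "amount": 120},
    {"quarter": "Q1", "product": "gadgets", "region": "US", "amount": 200},
    {"quarter": "Q2", "product": "widgets", "region": "US", "amount": 150},
]

def pivot(rows, dimension):
    """Re-total the rows along one dimension ("quarter", "product",
    "region"); each click in the imagined UI is a call like this."""
    totals = {}
    for row in rows:
        totals[row[dimension]] = totals.get(row[dimension], 0) + row["amount"]
    return totals

print(pivot(sales, "quarter"))   # by time
print(pivot(sales, "product"))   # by product
```

A real system would pull the rows from a database and render charts, but the navigation itself is exactly this kind of re-aggregation.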
Take the way that you think about business applications. Today you have to write lots of code to change an application for the particular needs of one company; that's a terrible way to express those differences. Those differences should exist in a very visual form that's not code; it's just a different process. The way we collect cash at this company, the way we introduce products, the way we review defects, all of that should simply be in business terms, and as soon as you change those diagrams, the code that's needed should be automatically connected to those things.

Take buying and selling. People don't today have an automatic way to find all the sellers, check their reputations, engage in complex transactions with them, track the state of those things, and deal with exceptions, like someone sending you defective products. Today, it's a nightmare, because you negotiate with that person through e-mail and the phone, and yet the software in your company and their company gets completely confused about this exceptional event and how to deal with it. There's no coordination between the pieces. There is a foundational advance called XML Web services that is the infrastructure to make all of that possible, and we build these modeling layers on top of that. So, in the world of business, we can make the jobs more interesting, make people more effective.

Communications affects everybody, whether you're at work or at home. Right now, you know, communications is very splintered. You're blogging, you're IMing, you're e-mailing, you're using your mobile phone, you've got your wired phone, and for the whole notion of when should it ring, when should things get into your Inbox, the software is not working on your behalf to know your context, to know exactly what's important. Say you're busy and somebody important wants to contact you: your software should be able to schedule that, get it so that you're coordinated, and it happens without any overhead at all.
Spam is just sort of the extreme of your time not being used properly. Spam comes in big numbers because people have figured out they can send millions, or even billions, of pieces of e-mail for very low cost. So, if only one out of a million people goes and clicks that thing and buys a product, that's an economically positive event for those people, although it's bad for the economy: 999,999 people had some of their time taken away from them, and perhaps didn't even get to the mail that is important to them.

Some of this spam is pretty surprising and quite unusual. I'm not going to show you all the spam I get; some of that might not be appropriate, and I don't know why I get it. But this one is one of my favorites, because what's clear to me is that as soon as I get out of debt, there are going to be a lot of nice people that are going to be very friendly, and so this is definitely important for your social life. The next one seems to be a little bit more targeted. How many people need a college diploma? Not that many. (Laughter.) And then, finally, is one that really appeals very directly to a problem I deal with, and that is this whole legal cost thing. My shareholders definitely think I should follow up on this one. It's pretty interesting what comes in over the transom.

How can we solve that? We can solve it. This is not something that will plague us in the years ahead. By authenticating who sends the e-mail, like a Caller ID-type structure, by automatically passing in mail from people you know, and, for mail from strangers that you should pay attention to, by making it trivial for them to provide some sort of proof that it's not spam, we can get e-mail back to what it was. The forms of proof will include things like asking their machine to do 10 seconds' worth of computation. There are math functions that take 10 seconds to do, and thousandths of a second to check whether they were done; they're called puzzle functions.
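One workable puzzle function, sketched here as an assumption (a hashcash-style scheme, not necessarily the specific functions Microsoft proposed): the sender must brute-force a nonce whose hash has enough leading zero bits, which costs many hash evaluations, while the receiver verifies with a single hash.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            # find the highest set bit in the first nonzero byte
            for shift in range(7, -1, -1):
                if byte >> shift:
                    return bits + (7 - shift)
    return bits

def stamp(message: str, bits: int = 20) -> int:
    """Sender's side: search for a nonce. Expected cost grows as
    2**bits, so the difficulty can be tuned to take ~10 seconds."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= bits:
            return nonce
        nonce += 1

def verify(message: str, nonce: int, bits: int = 20) -> bool:
    """Receiver's side: one hash, thousandths of a second."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= bits
```

A legitimate sender pays the stamping cost once per message; a spammer sending millions of messages pays it millions of times, which is exactly the asymmetry described next.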
And so, when you receive the e-mail, you just check: was that done? So if you're sending a modest number of messages, there's no effect at all, it's just done in the background, but for that spam generator there's a gigantic economic cost that undermines that asymmetric model where very few people really want to get that spam. We can also use human-interactive proof, where you prove there's a person on the other end willing to do a short task, or monetary proof, where you show that you're willing to put some money at risk if the person who receives it says, yes, you've wasted my time, as opposed to it being mail from a stranger that you did want: your long-lost brother, somebody saying that your house is on fire, things you should pay attention to.

In the consumer realm, it's clear things are also going to change. Keeping memories of your friends, your kids, your events, that's not just photos, that's videos, that's the audio annotations that share why it was exciting and special to you; all of that should be archived and shareable and navigable in a very rich way. Remembering the movies you've seen, and sharing recommendations with other people, being able to do these things whenever you choose to do them, all of that we will take for granted, because at your home you will have a PC that lets you create and organize, and then project that through wireless onto any speaker or any screen that's in the house.

One of the challenges here, though, is that you're dealing with a large number of objects. You've got a lot of movies, and a lot of photos. And just real quickly I wanted to show you a couple of prototypes that Microsoft Research has done that suggest improved visualizations that will make these things easy to navigate.
The first one we'll bring up is about movies, and what you see is that we've got "Blade Runner" there in the middle, and what it's done when we selected that is it took the director, Ridley Scott, and put all his movies over here so we can go through and look at those, and pick, let's say, something here, and we bring that to the center, and it does the same thing of bringing the famous actress here, and so we can see her movies, and select through those. We just pivot through this, and of course we'd have the reviewers who we like, or what our friends that we trust have said, as part of this navigation process as well. So, we're making it easy and fun to go through these different dimensions without making it feel like you're writing a query against a database, which of course it actually is.

The second prototype is focused on photos, and so I can see I've got a ton of photos here that I can go through and look at. I also have movie clips here, so if I take one of these movie clips, let's see, I click on it, it just plays, and that's recorded. I have audio annotations on a lot of these different photos. That's why I'm keeping that one. And so, how would you like to navigate this? Well, partly you'd like to assign keywords; you know, I can look at all the different keywords I've applied here, and so say I've picked one about Thanksgiving: when I click on that, it does the selection, and if I double-click it, it just brings that up. But I want the software to help me with this. So, for example, the software should be able to tell every photo that has faces in it. You can see it goes and selects those when I go over that face thing. It should be able to tell which are indoors or outdoors. It even should be able to say, OK, if I take an image like that, show me anywhere that I've got something similar. So, I can relax the similarity criteria and get more. And as I make them more stringent, I get fewer of those.
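The relax-or-tighten behavior is easy to sketch. Assuming each photo carries a small feature vector produced by the recognizer (all the names and data below are invented for illustration), similarity search is just a distance threshold: raising the threshold relaxes the criteria and returns more photos, lowering it returns fewer.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Invented photo library: explicit keywords plus a stand-in
# feature vector for whatever the image recognizer extracts.
photos = [
    {"name": "turkey.jpg", "keywords": {"thanksgiving"}, "features": (0.9, 0.1)},
    {"name": "dinner.jpg", "keywords": {"thanksgiving"}, "features": (0.8, 0.2)},
    {"name": "beach.jpg",  "keywords": {"vacation"},     "features": (0.1, 0.9)},
]

def by_keyword(library, word):
    """The explicit-keyword selection from the demo."""
    return [p for p in library if word in p["keywords"]]

def similar_to(library, query, threshold):
    """'Show me anything similar': keep photos whose feature
    vectors lie within the threshold distance of the query's."""
    return [p for p in library
            if dist(p["features"], query["features"]) <= threshold]
```

The point of the prototype is that the user never sees this query; they just drag a similarity slider, which adjusts `threshold`.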
I get groupings based on similarity, where it's looking at the image, and it will actually use that recognition capability. In fact, for every one of these photos, ideally the camera records the time and the location as well, and so we can group things that way. We're also experimenting with where 3-D comes in. Today, most computer interfaces are still very flat. And so when we get into the 3-D mode, we can step through things this way, beginning to change the X axis, which is based on time, and get more groupings, more detail, and then we can select any set that we want, and say, OK, that's about that, let's put a keyword on that, which is a better way for me to navigate than remembering the exact time I grouped together all those things that are similar. All that gives you a sense that there are advances that will make the navigation of these things pretty straightforward and kind of fun. We'll have a lot of different items that we're dealing with.

The optimism I mentioned about software breakthroughs, the best way we demonstrate that at Microsoft is by our increased R&D spending. We're spending $6.8 billion a year this year. That's substantially the largest technology R&D budget, about 20 percent more than IBM, and more than double anyone else's. And, of course, IBM does a lot of hardware and physics things, and they work on five different operating systems, so it's a little bit different in character from what we're doing. One of the parts of this that has been phenomenal for us is the pure research group called Microsoft Research. And it really has allowed us to get to the forefront of these top issues, making sure that the advances we need really get done. For example, we're working with lots of universities; a good example is Harvard on this Center Network Application, which is a pretty neat thing. And that kind of collaboration is very important to us. Research in the U.S.
is ahead because of the great symbiosis between commercial labs doing neat new things and universities. Now, that commercial side in some ways is not as strong as we'd like it to be. Companies like Xerox and AT&T, which in the '70s and '80s were very big, are dramatically down. So even if Microsoft has increased, the net amount of that activity is less, and certainly that's an issue in terms of the U.S. really having this as a unique advantage.

One of those research topics that's vital, among the top priorities of the company, is all the issues around security: verifying whether code is correct, which involves writing it in new ways, scanning the code to check for specific kinds of defects, and actually being able to prove whether code works or not. When I left Harvard, this was a state-of-the-art problem, and Professor Cheatham and others were using a thing called ECL, Extensible Computer Language, and they could prove programs that were about 20 lines of code. Now, Microsoft's code is not 20 lines of code. Over the years, that's gotten up to hundreds of lines of code, but only in the last year, with some work, it was collaborative, but with a key breakthrough by Microsoft Research, are we now taking programs that are hundreds of thousands, even a million lines of code and being able to prove things about them. If we look at, say, a device driver, and we ask, does this device driver ever cause a fault, it either says, no, it doesn't, or it can actually prove it to you, showing you exactly the set of code paths that would lead to that result. And so you immediately understand what you have to do to make that better. Now, raising the level of abstraction, and having more modularity and contracts between the pieces, these are necessary steps to get security to work.
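In miniature, that kind of proof works by exhausting every reachable state of a model of the program. The sketch below is a toy illustration under that assumption, not Microsoft's actual driver-verification tool: it explores a tiny "driver" lock protocol and either reports that no fault is reachable or returns the exact sequence of actions that triggers one, matching the two outcomes described above.

```python
from collections import deque

def check(initial, transitions, is_bad):
    """Breadth-first search over all reachable states. Returns None
    if no bad state is reachable (a proof, for a finite state space),
    otherwise the shortest action path that leads to a fault."""
    seen = {initial}
    queue = deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        if is_bad(state):
            return path
        for action, nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

# Toy driver model: the state is how many times a spinlock is held.
def buggy(held):
    yield ("acquire", held + 1)          # bug: acquires even when held
    if held:
        yield ("release", held - 1)

def fixed(held):
    if held:
        yield ("release", held - 1)
    else:
        yield ("acquire", held + 1)      # only acquires when free

def double_acquire(held):
    return held > 1                      # the "fault" we ask about
```

Here `check(0, buggy, double_acquire)` returns the counterexample path, while `check(0, fixed, double_acquire)` returns None, i.e., a proof that the fault can never happen.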
The Internet was not designed with security in mind. It was designed so that over time it would heal if some of the nodes went down, but the time of that healing, and the fact that malicious players would be on that network, were not part of the original design. And so TCP/IP verifying the packets, SMTP knowing who is sending you information, the fact that passwords are used in so many of these systems, those are incredibly weak links. And so, to get security and reliability right, we need to do a lot better. We need to make sure that it's not easy to exploit these systems. A combination of firewalling things and updating systems will make a huge impact, and those will be used in a very widespread way even in the next year.

More profound is watching systems to see when their behavior is unusual. For example, if you look at the whole Internet and see that the level of traffic is way up, what are the types of traffic that have risen as a percentage of the profile, and can we drop those and let the other traffic have higher priority? If you look at a computer and see that a program that normally doesn't, say, update files is all of a sudden updating files, isn't that an unusual event that should be examined and thought about? So this whole active protection is one of the paths forward, and one that there's a lot of good invention taking place around.

Other inventions we need are things that have been thought about for a long time, but only now can we say that they will be solved in the not-too-distant future. Ink recognition I mentioned; this is one where we now have hundreds of thousands of devices out there, and we got it good enough that we have those users, and any time one of them is frustrated by the recognition, they send it back to us, and we make it better. We learn from that information. A lot of our approaches are very data driven.
We look at all the handwriting people do, and build rich Bayesian models, neural models, around that. That's the underlying technology that's used. Ink is moving into the mainstream, a little bit before speech. Part of the reason for that is that with ink you see your mistakes; you can read it too and say, OK, I wouldn't have recognized that either, it's OK. Whereas with speech, it's all subconscious, and so you never say to yourself that you misspoke or anything, because you have no idea what the correction function looks like, and it's very frustrating, it appears random. With ink, people actually change: if they are taking a C and closing the loop, and we think it's an E, over time they get better at that. Even subconsciously they get way better at it, and so the recognition rates improve quite a bit. In speech, that doesn't happen; you don't have conscious plasticity. And so we have to get an error rate that's pretty unbelievable. Just take speech and take random words, so there's no context, take a perfect microphone, and eliminate all noise: the difference between human and computer recognition is quite small. It's only as you relax those constraints that we see that humans are unbelievably good at using context, eliminating noise, and really getting a strong signal, and that gives them that huge advantage. And that's really informative, because now we're matching those things: we're matching the noise elimination through signal processing techniques, we use what are called array microphones to do that, with some very deep algorithms. We're using our natural language work to say, OK, how is this context handled in a very deep way, and make that work well. Another input modality we believe in is vision. We're already seeing on video games little cameras where you can sit there and swing the bat, and do various things. The cost of these cameras is way down.
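The data-driven Bayesian modeling mentioned above can be illustrated with a tiny naive-Bayes letter classifier. The stroke features and training samples here are entirely invented for the sketch; a real ink recognizer learns far richer models from millions of user-submitted corrections, but the underlying idea, pick the letter with the highest posterior probability given observed features, is the same.

```python
import math
from collections import Counter, defaultdict

# Invented training data: (letter, stroke features observed).
TRAIN = [
    ("C", ("open_loop", "curved")),
    ("C", ("open_loop", "curved")),
    ("E", ("closed_loop", "curved")),
    ("E", ("open_loop", "angular")),
]

prior = Counter(label for label, _ in TRAIN)
feat_counts = defaultdict(Counter)
for label, feats in TRAIN:
    feat_counts[label].update(feats)

VOCAB = {f for _, feats in TRAIN for f in feats}

def classify(feats, alpha=1.0):
    """Return the letter maximizing
    log P(letter) + sum over feats of log P(feat | letter),
    with add-one smoothing so unseen features don't zero out a class."""
    best, best_score = None, float("-inf")
    for label in prior:
        total = sum(feat_counts[label].values())
        score = math.log(prior[label] / len(TRAIN))
        for f in feats:
            score += math.log(
                (feat_counts[label][f] + alpha) / (total + alpha * len(VOCAB))
            )
        if score > best_score:
            best, best_score = label, score
    return best

print(classify(("open_loop", "curved")))  # C
```

Each frustrated-user correction sent back to the recognizer becomes another training pair, which is why the speech calls the approach data driven: the counts, and hence the probabilities, improve with every sample.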
So in meeting rooms, on your PC, in your video game we'll have very high-resolution cameras that take in information, and the computer will be able to understand it. The computer will know: are you looking at the screen, so maybe it should notify you of something, or are you looking at something else? Is there someone else in the room talking, if the voice is different? And it will use that to build a richer model of interaction. The holy grail of computer science is artificial intelligence, and this is the idea of learned behavior. And actually, if you take the field, there are fewer people working on it now than 20 years ago. We at Microsoft have a large group because we actually feel some real advances have taken place, and will take place, around Bayesian systems, statistical inference approaches that are very strong. We're applying this in many different ways. One is that we watch how games are played on Xbox Live, and we have our AI engine learn all the playing techniques. So you can pick any level of difficulty, and the machine will play like an idiot, or it will beat you hands down the same way that human players would beat you, because it has that learning base that it's been designed around. Machine translation is another one that we believe is really ready for prime time, to take lots of documents and have them made available in different languages. Now, computer science, I mentioned, will also be the set of tools that advance and really change the other sciences. Jim Gray, who works for us, got together with astronomers and said, look, each of these observatories has its own database, databases at different resolutions and frequencies, from different parts of the earth, and they're not normalized in any way so that somebody who wants to propose something in astronomy can just test out that idea and see: are there any pulsars like this, anywhere near anything else like this, and boom, get that answer back.
There's so much data that it's not about just staring into a telescope late at night and hoping you're there when some supernova blows up; it's about mining that rich data. So the frontiers of that science can only be advanced by having tools that are good at that. And already there are some amazing things that have come out of the work of taking that data and connecting it up through Web services. The same thing can be said for most of the hard sciences; biology is a particularly interesting one, and a tough one. The complexity and breadth of the data are daunting, but the value of understanding that data makes it a very, very exciting area. If you look at Harvard, this trend of computer science working with the other sciences is a very interesting opportunity, because Harvard, of course, is a leader in so many areas: health and medicine, where you've got an incredible group of people to work with, the Government school, the Business school, all of the other hard sciences. And so there should be more opportunities for Harvard to stitch those things together than at almost any other university. Now, as we charge forward in making these great tools, and having them work for everyone, we need to continue to look at the issue that these are so important that we want everyone to have access. This is often talked about as the Digital Divide, and it's a challenge. One of the things Microsoft does, of course, is make sure that high-volume computing, software and hardware, is more available and less expensive. The broadband communications costs are the most expensive piece, but here wireless and our mesh software we think will come in and even solve that piece, so that it won't be as much of a barrier as it is today. We did a program, actually my foundation working with Microsoft, to say, let's try and put computers in libraries. And we started this six years ago.
It was something we were worried about: would the librarians like it, would kids come in and use the machines for things that weren't considered all that educational? Would the machines break down, would people still go for the books? How would this be accepted? And by providing a lot of training, and really reaching out to librarians, this thing has been a phenomenal success. It's raised the traffic in the libraries, it's raised the number of books being checked out. It means that anybody who can reach a library has the latest software and is connected up to the Internet. The demands that came out of this were fascinating. We had to make Windows so you could just push a button and switch from Spanish to English, and boom, all the software would be switched. We had to have a button you could push and all the fonts would get bigger, so that older people coming in would find it easy to navigate and read the information. We had to make systems more robust, so in case somebody was messing around a little bit, the system state could be restored very easily and be exactly right. So now in a sense the U.S. project is complete. We've got 18,000 libraries that now have these 50,000 machines, and those are being used very heavily. There's more to do in the U.S., schools and community centers, and then the final frontier, the tough one, is making sure this happens on a global basis. Computing is making the world a smaller place. I'm always fascinated when I go to hospitals in Africa or high schools in India, and see that PCs are there, and people are connecting up to the Internet, getting that wealth of material, some of it only in English, but a wealth of material that is as good as what all of us here have access to. Some people are worried about this globalization, because it means that jobs can be done anywhere in the world. It means that if you have an education you can compete for jobs that other educated people are looking at as well.
And what this means for global productivity, for raising the level of wealth of these countries, for having better goods and services, is that it's a fantastic thing. It does mean that in the same way the U.S. during the 1980s had to think, OK, what is our edge, what makes us better, we need to do that now, and renew our commitment to those things. In the 1980s it was a concern about Japan, and it was overwhelming; people wrote books about how Japan's industrial system was just better, and the consumer electronics industry was gone, the car industry was going, the computer industry was next, and people were kind of depressed. And along with that, there was this resolve of saying, no, we're not going to do it the way they do it: we're going to keep our university research system, we're going to let it pursue lots of different paths, we're going to have this capital formation, and companies taking risks, and rewards for intellectual property that define our approach. And all those productivity benefits that came out of the '90s were based on work that was done during the 1980s. So it really is something where you can have lots of winners as this moves forward, including the U.S. staying in that strong position. In order to do that we need great people working on the important areas. And computer science, the sciences at large, all of that is very important. We're falling short a little bit in terms of getting diversity, lots of women and minorities, into these fields. I've done a little bit to help with that through a Foundation program called the Gates Millennium Scholars, and here at Harvard there are about 60 of these Gates Millennium Scholars that I hope will do well themselves and be role models who will really drive forward and make a change in this. So we need diversity and we need the excitement; we need people to understand these are jobs that are very interesting, and most of these jobs are very sociable.
If you want to just write code, actually that will be fine too, but most of them demand a broad range of skills. And the excitement of the kind of impact you can have doing this work rivals anything else, because the change is there, the breakthrough is there, so every day is fun, and then when you look back on the change that you drove, that's fun as well. And so I'm very excited to see this move forward at full speed, and I'm very excited to see how each of you can contribute to it. Thank you.