
            Remarks by Bill Gates, Chairman and Chief Software Architect, Microsoft Corporation    Cornell University, Ithaca, New York    February 25, 2004
            BILL GATES: Thank you. It's great to be here at Cornell. Cornell has made some fantastic contributions in many, many different areas, particularly engineering and computer science; it has a lot of cutting-edge laboratories, and we feel privileged that Microsoft has a strong relationship with a lot of the activity going on here. Certainly, Microsoft has benefited fantastically from hiring graduates of the schools here. Steve Sinofsky, who runs the Office Software Group at Microsoft, is a very key person who was mentioned, and his experience here helped get us really behind the focus on doing super Internet software.
            Another graduate, Mike Nash, runs our security activities. You can imagine that's a reasonably important job, and one that keeps him very busy. But we have people from many parts of the company -- the head of our human resources group, and many, many others -- and so we're pleased at the connection that we have.
            What I want to share today is some of my excitement about the 
            impact of breakthroughs in computer science, the impact that will 
            change the way we do business, change the way we learn, and change 
            the way we communicate and entertain ourselves. In many ways, this 
            is a field that people are now underestimating, even though in the late-'90s bubble period there was a lot of hype and a lot of talk about things happening overnight. In fact, what will happen in this decade goes beyond even those wild dreams, and it's happening because of advances and infrastructure changes -- primarily in software -- that we and others have been working very hard on. The impact will also extend to how science is done in many other areas. The tools of computer science, the models of computer science, will really change that work.
            And we got a slight glimpse during the 1990s of the impact of the extra productivity that came out of a modest set of software advances. Now we'll see a much greater productivity benefit, one that won't just affect the United States but will actually change the global market and, in fact, increase the effectiveness of global competition in delivering great products and services to everyone around the world.
            My history, as was said, has one unusual thing, which is that I am a college dropout. I'm here today advocating staying in college, finishing up, and really getting the great background that you can only get by finishing your schooling.
            The thing that convinced me that the time was right to start Microsoft, even though it wasn't super timely from my parents' point of view, was the arrival of the microprocessor-based system. It was very different from the computers that had come before. Computers before had been tools of large organizations, and people feared them because they would send out these funny bills, and you couldn't get them to correct their information. People even talked about folding and mutilating and stapling those computer cards that you'd get as part of your bill, just to fight back against what the computer was. So this idea that the PC would become the most empowering tool we've ever created -- the best tool for creativity, communications, and publishing, one that would really alter every one of those activities -- wasn't widely anticipated.
            My friend Paul Allen, who was the co-founder of Microsoft, saw the early Intel 8008 microprocessor, and by extrapolating out the Moore's Law prediction of exponential improvement, he saw that we'd have something very different. And he enlisted me and said, 'Hey, let's start a company to write the software for this machine.' Well, that idea was a very strange one, because, of course, all the software at that point was written by the hardware companies.
            All the machines were incompatible -- the IBM machines, the Digital Equipment machines, Univac, NCR -- utterly different, and the software, the operating systems and the tools, was written inside those companies. And so our idea was that we'd make all these machines compatible: we'd provide the software layer that ran on top of those machines and allowed people to write applications, selling a volume of applications proportional to all the machines in the marketplace. And that would create a virtuous cycle: the more applications were there, the more people would buy, which would increase the volume, which would reduce the prices, drop the cost, and draw more people in. So, starting with that 1970s kit machine, a cycle was begun.
            The early machines were not very capable. That Altair was a kit. All it did was flash the lights; there was no disk. The most advanced version had 8K of memory, 8,000 bytes of memory. And one of my great programming feats was writing a BASIC interpreter that could run in an 8K-byte environment, including floating point, string management, and storing people's programs.
            So you can imagine in product review meetings now, on days when people come in to me and say a piece of software is 8 megabytes, or 20 megabytes in size, I kind of shake my head and say, 'Wait a minute, how did it all get so big?' And maybe it shouldn't be so big -- that's definitely my initial reaction. But the scaling effect of this additional power led to the second generation -- the Apple II, the TRS-80, the Commodore 64, all of those running Microsoft BASIC built into the machine -- and then in 1981 to the IBM PC.
            In some ways we think of that as a pretty limited machine today, but in its time, with its increased memory capability, it was pretty exciting. And as PCs became more and more powerful, that led us to the graphical interface. That was something Xerox had really played around with in its Palo Alto research labs, but it was Apple and Microsoft that took that approach and created the commercial products that popularized it.
            It's probably not easy to remember that it was very controversial at the time: people thought it was too hard to write the programs, it was too slow, and it was just a frilly thing. Good old monospaced characters were all that people needed; even lowercase was considered somehow beneath 'real' programmers, who would never engage in using it. And we still see a little of that attitude in some old-time programmers' comments -- or lack of comments -- and the things they do.
            Then Windows came along. It became a phenomenon, and that was the start of our building productivity applications -- Word, Excel, PowerPoint -- and that's become a really phenomenal business, as people are able to interchange documents and as those products become richer and richer.
            The late '90s brought the wild period where the Internet, based on work in universities including Cornell and many others, exploded onto the scene. People realized every computer on the planet would be connected, and that really opened up people's eyes to what was possible. In many ways, the completion of those predictions will happen this decade, even though people are underestimating how it's all going to come together.
            Certainly, if we look at the PC today, as great as it is, it's clear there's a lot more to do. It's not that easy to use. You have to learn various commands in different programs, even for dealing with common ideas like lists of things -- lists of mail, lists of files, lists of music -- all very different. Way too many verbs, way too much user interface, way too difficult to move your information around, to search it, to have it on different devices. Way too complicated: it's all aggregated there together, and you can't connect it up to any display or any speaker that you want to. And so I'd say in some ways we're maybe a third of the way to achieving the original vision that Microsoft had, which was a vision about empowering everyone with this tool.
            Now, one of the great helps we have in moving forward is the continued improvement at the hardware level. For the microprocessor, Moore's Law, which has stayed true from the start of Microsoft to today, will certainly stay true for another 10 or 15 years. The one question there is whether we can map the increased transistors directly into performance, because parallelization is one of the tough problems in computer science, and we will need parallel techniques to continue mapping the increase in transistors to an increase in performance. We won't be able to use just a purely brute-force approach.
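A quick way to see why parallel techniques matter here is Amdahl's law, the standard model (not something from the talk itself) for how much of a growing transistor budget actually turns into speed when part of a program stays serial. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup on n_cores when only
    parallel_fraction of the work can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A program that is 80% parallelizable gains only about 4x from
# 16 cores: the serial 20% dominates, so brute-force core counts
# cannot substitute for better parallel algorithms.
print(round(amdahl_speedup(0.8, 16), 1))
```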
            Now, there are several areas improving even faster than the microprocessor. This is really hard to believe, but the capacity of the disks -- the magnetic storage -- on these systems doubles more like every 15 to 18 months, and what that means is that already you can type for an entire lifetime and not fill up that disk. And as the disks improve over the rest of this decade, you can take all the movies you want to watch and all the photos you've ever taken and store those as well, on something that's very, very low cost. So storage is no limitation in terms of getting to movies, and photos, and very, very rich things.
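That doubling rate compounds quickly. The 18-month figure is from the talk; the back-of-the-envelope arithmetic below is mine:

```python
# Capacity doubling every 18 months compounds to roughly 100x
# over a decade -- which is why "store every photo and movie"
# stops being a capacity question.
months = 10 * 12
doublings = months / 18          # about 6.7 doublings in ten years
growth = 2 ** doublings
print(f"about {growth:.0f}x in ten years")
```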
            The screen is another key element. We need large, very high-resolution screens. If we look at the impedance today between the world of paper -- where people are still reading periodicals, taking notes, and printing things out for meetings -- and the digital environment, one of the reasons people can't move over is that the readability just isn't as good. The screen is often in a fixed position, and the text doesn't look as good; that's a great inhibitor. We're committed to making it so that reading is far better off the screen, where you can search and annotate, share your thoughts with people, and have information that's very much up to date. But we have to match some of the paper characteristics that are making people live in a split world.
            Certainly, the Tablet computing device that comes with a pen and a digitizer is a great step in that direction. The readability there, and the idea of annotating things, all built in, is moving us from just having a keyboard to having a keyboard and a pen as ways of getting information into these systems. User interface experts need to think about these big screens and how we can take advantage of them. The way we did window management in the past was definitely influenced by having displays that were, say, 17 or 18 inches or less. Today I have three 20-inch screens, and although that's a fairly premium configuration today, it will come down to $700 or $800 in the years ahead and be a very typical thing that can drive productivity.
            The graphics processors in these systems are really phenomenal, and we're using them for entertainment software, but mapping them into things like seeing a representation of a bookstore, or a laid-out set of documents in a rich 3D way -- those are techniques yet to be invented. Clearly, 3D will move into the mainstream interface, because the power is there to do that.
            The wireless connections are exploding. Wi-Fi is one of those technologies that was really underestimated, but now it's becoming pervasive -- in the business setting, the university setting, the home setting. That means the information can be moved around. And that will be complemented at small distances by ultra-wideband, which has unbelievable bandwidth to connect up to your screen or your disk without their being part of the same device. WiMAX is a standard that's at an earlier stage than Wi-Fi, but it's a long-distance standard. It will take the cost of broadband connections and make them low enough that they'll become practical even in rural areas and in developing countries, where those connections are not very prevalent today.
            We see a world of many, many devices, and this I think fits in 
            with some of the vision and exploration going on at the university 
            here. We see the wall-sized device, the desktop, the Tablet, 
            pocket-sized device, even a wrist-sized device all working together. 
            And as soon as you indicate a preference about whether you want to 
            see certain sports scores, or be told if a flight arrival changes, 
            you should be notified on whatever device you happen to have with 
            you. The system across all the devices should have a sense of what you're doing -- that is, the context -- as well as a sense of what you care about. Is this e-mail worth alerting you about? Is this phone call something that should be scheduled to happen right after your meeting is done? Software should be working for you to make all of that happen.
            I mentioned moving from the pocket-sized all the way down to the wrist, and what I'm wearing is a product called the SPOT watch. We just came out with this about a month ago, and it's got a special microprocessor and radio chip set that we designed, built in. The microprocessor here is ten times more powerful than the original IBM PC's: it runs at 30 megahertz. It's got 640K of memory standard, so that's ten times as much as the original IBM PC.
            And so we download arbitrary programs that we can update, just using the CLR byte code, for things like watching sports of various kinds, looking at stock information, weather, messages, calendar -- anything you're interested in can show up here, whether it's as simple as a customized, personalized watch face or as rich as business information that might be interesting.
            And so, having information at a glance -- you don't need to get anything out of your pocket, you don't need to make a point-to-point connection, it's just a broadcast network that always has the information available -- we see that as fitting into the hierarchy of a world with many, many devices that are working on your behalf and delivering key scenarios.
            Now, what holds back having all this great hardware result in fantastic things, or result in great productivity? The answer is, it's all about software. I'm biased, but software is where the action is. Good software will fulfill these dreams, and we don't have everything we need there.
            Just think of various domains. Think of what we call an information worker -- somebody who as part of their job has to organize things, whether a salesperson, a purchasing person, or a product design person. The vast majority of the U.S. economy is made up of people who do some type of information work. And yet today, in the way they track what's going on -- the insights they get into their customers' attitudes, the way they can explore quality metrics, the way they can look at basic things like sales trends and data-mine through those by region, product type, and pricing approach -- they are literally starved for information. The data they get today is not good at all. Whatever data they get, they get on a piece of paper, and if they look at a number on there and say it's bigger than they expected, it's way too hard to dive in and ask, 'OK, why is that different?' If it's up on a screen, they ought to be able to pivot through it and have analysis -- business intelligence software -- help them find the explanation for what they're looking at.
            So we've got to make it live; we've got to take it to a much higher level in terms of how they talk about it, model it, and share it.
            Just look at meetings. Meetings are a huge part of people's schedules at work, and yet most people would tell you half the time in these meetings is not well spent. It's information that doesn't matter to them, that could have been sent out in advance, that doesn't get followed up on. And by using software to facilitate the meeting, record the meeting, and let people at a distance participate, there's a lot we can do to make things far, far more effective. The world of collaboration is just at its beginning.
            If we look at software customization, most businesses take application software and write lots of lines of code to customize it to their needs. That's very expensive, and it's the wrong level of abstraction. Instead, they should be taking a visual business process and laying it out with the events and things that really are different for them versus the other businesses in that industry. By doing it that way, as there are improvements in the basic underlying modules, you don't get a conflict between the customization work and the base improvements that take place. So we're at the wrong abstraction level, and modeling tools -- software modeling tools -- are what will close the gap there.
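The abstraction-level argument can be made concrete with a toy sketch (all names here are hypothetical, not any actual modeling tool): the customization lives as data describing the process, so upgrading the base modules never collides with company-specific code.

```python
# A generic process shipped with the application (hypothetical names).
BASE_PROCESS = ["receive_order", "check_credit", "ship", "invoice"]

def customize(base, insert_after, extra_step):
    """Insert a company-specific step into the process description.
    Because the customization is data, not patched code, improvements
    to the base modules don't conflict with it."""
    i = base.index(insert_after) + 1
    return base[:i] + [extra_step] + base[i:]

# One business differs only in needing an export check after credit.
print(customize(BASE_PROCESS, "check_credit", "export_license_check"))
```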
            Even the basic process of buying and selling is very inefficient today. Finding anyone who might sell you something; checking their reputation; checking the status of the order; dealing with something complex, like when you get an invoice that's wrong but you want to keep the goods if they'll adjust the price; and not getting tied up in the mismatch between ad hoc e-mail and phone calls versus your back-end software and that other company's back-end software that doesn't really understand those exceptions -- modeling these things very explicitly lets us track them, lets us manage people's time in an effective way, and really puts them in charge and makes it work the way that it should.
            Communications is another great example where things are clearly inefficient. Why do we have phone numbers? Why do we have many phone numbers? Why do we get phone calls that interrupt us when we're not interested?
            Why do we get e-mail that wastes our time? Our time is a valuable resource. Now, some of the e-mail we get is actually almost humorous. I've got a few examples of some that I've gotten recently. This one here is pretty exciting. (Laughter.) It turns out if you get out of debt, you get to meet people who are really friendly to you -- looks good.
            Another one I've gotten looks like it might be more targeted. (Laughter, applause.) I haven't responded to this, but I like that look with the diploma in hand.
            And finally, the one I probably am going to have to follow up on -- (laughter) -- is this legal thing. Whoever sent me that has got very good targeting software. It's something that would be very, very timely.
            So spam is wasting our time. It is a very serious problem. It can even be used to fool people into doing something they shouldn't do, or into ignoring messages from other people. So we've got to make e-mail authenticatable in terms of exactly who it came from. And we have to give people control methods, so that only e-mail from people they designate gets passed through, and e-mail from strangers is subject to various proof techniques that make sure it's appropriate and something they're interested in.
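The control method described here -- known senders pass, strangers must first clear some proof or challenge step -- can be sketched in a few lines (the function and policy names below are mine, not any actual product's):

```python
def route_mail(sender, allowlist, passed_proof):
    """Deliver mail from designated senders immediately; hold
    mail from strangers until they clear a proof/challenge step."""
    if sender in allowlist:
        return "deliver"
    return "deliver" if passed_proof else "hold-for-challenge"

trusted = {"alice@example.com"}
print(route_mail("alice@example.com", trusted, False))    # deliver
print(route_mail("stranger@example.net", trusted, False))  # hold-for-challenge
```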
            We have to take the mismatch of all these communications modalities -- instant messaging, e-mail, phone calls, wireless versus wired, blogs and the blog indexes you get -- and bring those together in a simpler way. You shouldn't have to join a game network and a social network and set up your e-mail and set up your personal Web site as very, very disparate things. And so this is a hot area; a lot of advances in communication will have a very profound effect in every realm of activity.
            For consumers, the move toward digital is really under way in big numbers. Digital photography is more popular now than film-based photography. And we don't have to think just about individual photos: we can think about taking a set of photos and having the computer, for example, pick the people who are smiling in each shot and create a collage, so you don't have to make a trade-off of who looks bad in this one or who looks good in that one. That should be automatic.
            Recording audio so you can really talk about how you felt about the event -- we have a thing called Photo Story that starts to say you shouldn't think of motion video, stills, and audio, and how you pull those together, as completely separate things. We need some unification there.
            The idea that your memories can be tracked for you and made easy to navigate -- I think people would find that of immense value. One of the people at Microsoft Research actually has a little camera-like device that she carries around throughout her workday, and it notices when she goes to different places or when people are laughing or talking loudly, and it just passively -- without being noticed -- takes photos. Over the course of the day there will be something like a hundred photos, and it would be great to have software that can sort out which ones are important: take the GPS data and the time data associated with them, take the information off her digital calendar, and do the annotations to make that work very well. Between what's on that calendar and the state of the art in face recognition, even pointing out who's who and knowing what's in the photo becomes very straightforward.
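The calendar-based annotation step sketched here reduces to a simple time-window join; a minimal version, with calendar entries invented purely for illustration:

```python
from datetime import datetime

# Hypothetical calendar entries: (start, end, title).
calendar = [
    (datetime(2004, 2, 25, 9, 0), datetime(2004, 2, 25, 10, 0), "staff meeting"),
    (datetime(2004, 2, 25, 12, 0), datetime(2004, 2, 25, 13, 0), "lunch with Alice"),
]

def annotate(photo_time, calendar):
    """Label a photo with the calendar event whose time window
    contains the photo's timestamp."""
    for start, end, title in calendar:
        if start <= photo_time <= end:
            return title
    return "unscheduled"

print(annotate(datetime(2004, 2, 25, 12, 30), calendar))  # lunch with Alice
```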
            Now, with all this proliferation of media types, we have to do better visualization and better interfaces. I'll quickly show you two prototypes being played around with in Microsoft Research that suggest this is something that can be handled and made quite attractive. The first one is very simple; it's called Media 3D. Say I'm interested in looking at different movies. What we do is take a movie and put it at the center, and then on the outside we take the actors and the director and put the films that relate to them.
            So if I go up here, I can rotate through and see what things Ridley Scott did, and if I see one of those, I pick it and bring it to the center; then it goes out to the database and brings up all the different clips. You can see now that Michael Douglas was in this one, so we can see the movies he is in; we can see Andy Garcia and various people. And this could also be annotated with whatever movie reviewers you trust, whatever top lists it's on, and what your friends whose advice you value thought about the thing -- bringing it all together in a very rich and easy-to-navigate interface.
            The other prototype I'll show you real quickly is called the Media Browser, and this is more about the photos and film clips I mentioned. What it does is load in -- in this case, I've got more than a thousand different images here -- and put a lot of them up on the screen in a kind of miniature format. I can see I've got quite a variety here. Some of these are actually film clips, so if I double-click on those, it will actually go and play the little film clip that I have there.
            So all of these are essentially in a database, but you don't really want a database-like interface for navigating these things. You want to be able to select them easily and put keywords on them easily.
            Well, let's go look at the photos that have been tagged 'Thanksgiving.' Now, it's likely that these were all taken in a similar time period and a similar area, so it was easy to group them. I can ask which of these photos have faces in them -- just by hovering over that, it selects those. I can ask which of them were taken indoors, and which were taken outdoors.
            And the recognition software that's helping with this is not perfect today, but it's very good, and it makes selection and navigation a lot easier. I can even ask, 'OK, which photos are similar?' So I'll take this bridge photo and select it, and then I'll say, 'OK, relax the constraint and show me what is like that,' and as I relax the constraint on similarity, I get more and more photos that look like it, so I can select those and tag them in any way that makes sense.
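"Relaxing the constraint" maps naturally onto a distance threshold: raise the threshold and more photos qualify. A toy sketch with invented feature vectors (a real system would use a learned image-similarity metric, not Euclidean distance on two numbers):

```python
import math

def distance(a, b):
    # Euclidean distance between toy feature vectors; stands in
    # for whatever similarity metric the real system computes.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_photos(query, photos, threshold):
    # Relaxing the constraint = raising the threshold,
    # so more photos qualify as "like" the query.
    return [p for p in photos if distance(query, p) <= threshold]

bridge = (1.0, 2.0)
photos = [(1.1, 2.0), (1.5, 2.5), (4.0, 0.0)]
print(len(similar_photos(bridge, photos, 0.2)))  # tight constraint: 1 match
print(len(similar_photos(bridge, photos, 1.0)))  # relaxed: 2 matches
```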
            If I go back to the top, where we had all those different photos, this is also a place where we're playing around with 3D, asking, 'OK, how could that help us group things in an interesting way?' When I have these stacked, of course, I can still go through them. I can also select a set and put one of these tags on, or I can actually say I want even more groups to show up there -- I just take this slider bar and change it, and it will regroup things with a little more granularity on the X axis, if that's how I want to do those groupings.
            And so I think you get a sense here that there's a lot we can do that's very different from the way we're navigating media today.
            One of the things that makes this compelling is that in the world of new devices, people will be in control of when they want to watch video and how they want to listen to their music. We're moving away from the world where you put in a CD and listen to those tracks -- and have to go get it physically -- or where you watch TV shows when they're scheduled to be watched.
            This device here that I'm holding is called a Portable Media Center. It will come out from Creative, Samsung, and quite a few different companies this fall, and it's got a nice color LCD and a 40-gigabyte disk. If you connect it up over USB, the TV shows you record, your film clips, and your photos are automatically brought down to it -- and, of course, music as well. So it's kind of a superset of the portable music players, but it's really getting the movie companies and the music companies to think, 'Hey, we really have to do a better job of making it easy to license this content and have flexible rights, so people can use it on many different players.' And so these, I think, will become very pervasive. Whether it's a kid watching a movie or somebody on a plane, the fact that you have exactly what you want whenever you want it really puts you in control in a different way.
            It's very interesting, in terms of licensing models and advertising models, what effect this is going to have on the media world, but things are moving pretty quickly, because that's what users are demanding.
            So I said that software advances are the key to this. We show our optimism and commitment to driving that forward through our R&D investment. We're spending $6.8 billion on R&D this year. That's substantially the largest of any technology company. And it's pretty focused, in a way: it's not on physics, it's not on biology, it's focused on software -- really focused on a single, unified architecture around Windows and XML and Web services, in a way that has coherency and doesn't treat things like management or security as being off to the side. They are at the core of how we do this design.
            A big part of our R&D, a very important part, is our pure research group. It's got sites in Cambridge, at our headquarters outside Seattle, and over in Beijing, China. And that's the group that tends to work most closely with the university here. In fact, we have some very notable relationships. The Cornell Theory Center is actually one of the biggest things we've supported, and it's been fantastic for us: it helps us understand how Windows can be used in high-performance computing, look at the different applications emerging there, and make sure our tools are very good for that. So that's been extremely fruitful.
            We've got Windows CE, which is our sort of mini Windows, being 
            used in some robots and vehicles to help out there, and we're very 
            excited about the work coming out of that.
            Our biggest R&D area is security. We call this broadly 
            Trustworthy Computing. There is no doubt that of all the dreams of 
            commerce and media and great things around the Internet and PCs, 
            really only one thing could stand in the way of their happening, and 
            that is if people perceive that the security, reliability and 
            privacy of their data just aren't assured.
            So that's why for the industry, this has become very much a top 
            issue. It reaches down into the very Internet protocols and how they 
            were originally designed. There was a certain robustness against 
            parts of the network being blown up or knocked out of service, but 
            even there, not with the right type of guarantees. There was no 
            design for authentication, for knowing that if somebody was 
            malicious on the network you could eliminate that traffic and not be 
            fooled by the things they were doing.
            In order to solve this problem, there is innovation at many 
            levels. Things like updating software and firewalls are very much 
            near-term solutions, and just the great progress taking place there 
            will make a huge difference in changing these things.
            But over time, the very way that we write the software, the fact 
            that we can verify properties of the software, that we write 
            essentially in a higher-level, tighter language that has contractual 
            guarantees between the modules, allows us to prove things out piece 
            by piece.
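            The idea of contractual guarantees between modules can be sketched 
            at a small scale. This is a minimal illustration in Python, not 
            Microsoft's actual verification tooling; the `contract` helper is 
            invented for the sketch. Each module boundary states a precondition 
            and a postcondition that a checker (or, in the static setting, a 
            prover) can discharge one module at a time.

```python
# Minimal design-by-contract sketch: the `contract` decorator is a
# hypothetical helper, not a real verification tool. It checks a
# precondition on the arguments and a postcondition on the result.

def contract(pre, post):
    """Wrap a function so its pre- and postconditions are checked."""
    def wrap(fn):
        def checked(*args):
            assert pre(*args), f"precondition violated in {fn.__name__}"
            result = fn(*args)
            assert post(result, *args), f"postcondition violated in {fn.__name__}"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def smallest(xs):
    """Return the minimum element; the contract pins down what that means."""
    best = xs[0]
    for x in xs[1:]:
        if x < best:
            best = x
    return best

print(smallest([3, 1, 2]))  # 1
```

            A static verifier proves the same pre/post obligations without 
            running the code, which is what makes module-by-module proof scale.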
            Proving that software is correct was something that was being 
            played around with 25 years ago when I left academic computer 
            science. And I was a little worried that I'd leave and just 
            overnight they'd have some breakthrough in it. That didn't happen. 
            In fact, it's only really now, with the collaboration between 
            Microsoft Research and a number of universities, that doing that for 
            large bodies of code appears to be a very practical thing, and a 
            tool that we'll use not only for our software but we'll provide 
            those tools to our customers as well. Getting up from hundreds of 
            lines of code to millions of lines of code is a very tough problem, 
            but on things like device drivers, that's already working very well 
            for us. 
            And so the security realm is a very hot area and one where we 
            think your breakthroughs and ours need to come together.
            Another huge area that I'm very excited about is moving towards a 
            more natural interface, moving so that you don't just have to use 
            the keyboard. I mentioned reading off the screen, I mentioned the 
            Tablet PC with ink. Ink recognition is a problem that a set of 
            companies took on about eight or nine years ago, coming out with 
            products that were called "pen computing" products. And they were 
            interesting; the demos, like all these things, worked very well. But 
            then when you figured out what the battery life was and the 
            recognition rate was and the clunkiness was and the parallax was, 
            there were just dozens of things that meant that that kind of burned 
            out.
            Well, we're very patient and so we kept our research on ink 
            recognition going full bore and finally about 15 months ago, came 
            out with this Tablet PC that's the first product based on that. 
            We'll have a major update of it this summer and the software gets a 
            lot better and the hardware is also evolving at a very rapid 
            rate.
            This idea of note taking is really catching on, the idea that 
            it's a small extra cost to get the pen in there. One of the reasons 
            that handwriting is a bit easier than speech recognition is that 
            your recognizing of text is a conscious activity, so if you see that 
            the way you're drawing the E looks too much like a C, even if you're 
            not explicitly doing it, you will loop that thing a little better 
            the next time. And if you look at when we make mistakes, you can 
            say, "Yeah, even I would have a hard time recognizing what was 
            written there."
            Now, with speech it's not as easy. Speech is another one that 
            will be solved, and will be solved for a broad range of applications 
            within this decade. We see it today for small vocabularies, but not 
            for dictation, not for really important things. You ought to be able 
            to just talk to your cell phone and navigate the information you 
            care about, and that certainly will become a reality.
            The thing that is holding us back from that: we can compare 
            computer recognition to human recognition when the words are 
            randomly chosen, there's no noise in the environment and the 
            microphone is perfect. If you take those three idealistic 
            assumptions, the difference between computers and humans is actually 
            very small. But then as you relax those things and go to real-world 
            microphones, lots of noise in the environment, and you allow there 
            to be context, where the human has a much deeper understanding of 
            the likelihood of words in a particular discourse, then in today's 
            computer systems you start to see a huge gap. And the gap is big 
            enough that even though people do start using these things, unless 
            they have repetitive stress injury or something that makes the 
            keyboard unattractive, they're not often long-term users. 
            But we think that's starting to change. We're seeing it 
            particularly in China and Japan with our latest software, which, of 
            course, are markets where the keyboard is not quite as effective 
            because you just have big character sets, thousands of characters. 
            And so, when you're using a keyboard there's a level of indirection 
            between those keystrokes and the alphabet. 
            We had a contest in China where we were able to beat the best 
            typist to get to a perfect set of input by starting with speech, and 
            so that's the kind of milestone that makes us very optimistic.
            Vision: these cameras are cheap; a $50 camera with a CCD array 
            has very good resolution. And the idea of seeing what's going on in 
            the meeting, taking viewpoints, seeing what was up on the 
            blackboard, being able to present that as a time sequence -- all 
            that takes is the camera.
            Understanding viewpoints and social cues, we can't just take 
            that raw video feed and send it out. That's not what people are 
            interested in. In fact, if you warp the room, you can actually make 
            it appear that everybody is a co-equal participant instead of the 
            kind of views that video conferencing has typically provided.
            Now, the ultimate in computer science advances is the field of 
            artificial intelligence. And here again, our respect for the human 
            equivalent, the natural equivalent, grows as this proves to be a 
            very tough problem. The actual products in the marketplace that use 
            AI are things like little vacuum cleaners that try and steer around 
            your rug. So we're right down there on the rug, trying to find our 
            way around in terms of applied AI.
            It won't stay that way. These Bayesian modeling systems and other 
            approaches, we're starting to use them in things like games, where 
            when you play with a computer opponent, because we've watched across 
            the network all these different playing styles and strategies, we 
            can make the computer as good or as bad as you want it to be, and 
            make it incredibly diverse in the way that it's interacting with 
            you. And so, the fundamental work at the Bayesian level and the 
            understanding systems, we see a lot of progress there.
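            The adjustable Bayesian opponent described here can be sketched in 
            a toy setting. This is an illustration of the general technique, 
            not the actual game AI: the computer maintains a posterior over the 
            player's next move, updated from observed play, and a skill knob 
            decides how often it plays the best response to that posterior.

```python
# Toy Bayesian opponent modeling for rock-paper-scissors (an invented
# example, not a Microsoft product): a Dirichlet(1,1,1) prior over the
# player's move distribution is updated by counting observed moves, and
# a `skill` parameter dials the opponent between random and optimal.
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class BayesianOpponent:
    def __init__(self, skill=1.0):
        # Start every count at 1: the Dirichlet(1,1,1) prior.
        self.counts = {m: 1 for m in BEATS}
        self.skill = skill  # 1.0 = always best response, 0.0 = random

    def observe(self, move):
        self.counts[move] += 1  # posterior update is just a count

    def predict(self):
        """Posterior probability of each move by the player."""
        total = sum(self.counts.values())
        return {m: c / total for m, c in self.counts.items()}

    def play(self):
        if random.random() < self.skill:
            probs = self.predict()
            likely = max(probs, key=probs.get)
            return BEATS[likely]  # counter the most probable move
        return random.choice(list(BEATS))

ai = BayesianOpponent(skill=1.0)
for _ in range(10):
    ai.observe("rock")  # the player keeps throwing rock
print(ai.play())  # paper
```

            Lowering `skill` makes the opponent deliberately imperfect, which 
            is how a single learned model can serve players of any ability.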
            Now, computer science will start to touch the other sciences in a 
            pretty deep way. The best concrete example of this is some work Jim 
            Gray, who works for us in Microsoft Research, did, collaborating 
            with a set of astronomers, including some here at Cornell. And the 
            idea was to say that astronomy had moved beyond the idea of just 
            staring at a lens late at night and being lucky enough to see a 
            supernova and writing up a paper about that. It's moved to where you 
            need to take the whole corpus of data of all the observations done 
            over time at various wavelengths and resolutions, and propose 
            theories about densities or distances or dark matter that are 
            consistent with that observed data.
            And this is not a classic database problem, because the 
            information is very disparate, and so coming up with a schema and 
            ways of navigating, creating these Web services, is very much a 
            state-of-the-art problem in which you have to involve domain 
            experts to decide what classification is interesting.
            But they've made enough progress on this that it's clear it's got 
            momentum, it's happening, and there will be essentially a logical 
            database that theoreticians in the field can sit there and pull 
            from to advance their work.
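            The "logical database" idea can be sketched with a deliberately 
            tiny, hypothetical schema; the table, object and survey names below 
            are invented for the example and are not the actual astronomy 
            archive. Observations from different instruments and wavelengths 
            land in one schema, so a theoretician can query the whole corpus at 
            once.

```python
# Minimal sketch of a unified observation schema (hypothetical names):
# disparate surveys share one table, and one query spans all of them.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE observation (
    object_id     TEXT,   -- catalog identifier (invented)
    wavelength_nm REAL,   -- band of the observation
    flux          REAL,   -- measured brightness
    survey        TEXT)""")
db.executemany("INSERT INTO observation VALUES (?, ?, ?, ?)", [
    ("OBJ-1", 550.0, 1.2, "optical-survey"),
    ("OBJ-1", 2.1e8, 0.4, "radio-survey"),
    ("OBJ-2", 550.0, 0.9, "optical-survey"),
])
# Which objects have been observed by more than one survey? A question
# no single instrument's archive could answer on its own.
rows = db.execute("""
    SELECT object_id, COUNT(DISTINCT survey) AS n
    FROM observation GROUP BY object_id HAVING n > 1
""").fetchall()
print(rows)  # [('OBJ-1', 2)]
```

            The hard part in practice is the schema design itself, which is why 
            domain experts have to be in the loop.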
            In other sciences, the amount of data is even greater, biology 
            being perhaps the most difficult, but also the one with the greatest 
            payoff. Certainly, that*s a field that, like computer science, will 
            be changing the world in some exciting ways, because advances are 
            coming along.
            And all of these rich visualization, modeling and data mining 
            techniques are very important. In fact, people with computer science 
            backgrounds I think will be very key to all the advances, because 
            systems-type thinking is very important. And, in fact, in my 
            dialogue with the faculty this afternoon, the emphasis on these 
            multi-disciplinary approaches and the excitement around that were 
            very impressive to me, because I think that's going to be critical 
            and allow for all the sciences to benefit from these tools.
            Now, our industry is delivering all this magic. The prices go 
            down, the number of people using it goes up in a lot of ways. But 
            what we've got is important enough, even in terms of today's systems 
            -- and more so as these systems become more effective -- that the 
            idea of making sure that everyone has access, the so-called digital 
            divide issue, is a very important one.
            Getting the prices down, that's part of it. Broadband costs are 
            actually the biggest inhibitor today, but, as I said, various peer 
            techniques using mesh software that we and others are working hard 
            on, combined with new modulation techniques, ought to really break 
            the bottleneck there and make that something that's very accessible 
            as well.
            Here in the United States, a combination of my foundation and 
            Microsoft, as the president mentioned, have done a project of 
            getting machines out into libraries. And at first when we piloted 
            this, we were a little concerned that kids would come in and maybe 
            not do the most wholesome things using the system, that the systems 
            would break, that the librarians wouldn't like them, a lot of 
            concerns. In fact, over a six-year period, with the right training, 
            involving lots and lots of people, this thing has been a phenomenal 
            success. In fact, across the nation's libraries there are now 
            50,000 new computers connected up to the Internet with the latest 
            software. And the librarians are seeing more traffic coming in to 
            actually check out books as well as use the computer, so it's 
            reinforced the role of the library as a focal point in the 
            community and a place that provides equity, so that the kid without 
            the machine at home -- if he can get to the library -- he's got 
            that leveling factor.
            Getting this technology into education is a major challenge and 
            one that I think is very, very important.
            Getting this technology into poorer countries, there are 
            particular problems, even things like power not being as available 
            in various rural villages. 
            As people think about this field globally, there's a lot of 
            concern now that not only have transportation systems enabled 
            manufacturing jobs to be done anywhere on the globe, but they've 
            enabled all jobs, including jobs that require a college education 
            or just jobs answering the phone, to be done in different places. 
            And this is going to create a lot of opportunity; it's going to 
            create a lot more effective goods and services. It's sort of free 
            trade brought to the next level.
            And I think for the U.S., I'd label it as more of an opportunity 
            than anything else. We need to strive to keep our edge, which is by 
            doing research. Certainly for Microsoft, the lion's share of the 
            work we do will continue to be here in the United States. We'll grow 
            outside the United States, but we're not cost-optimizing to do 
            Windows for five percent less; we're optimizing for having those 
            breakthroughs come 10 percent faster and the quality be that much 
            better. And delivering it to the most demanding market there is, 
            which is the market here in this country.
            In the 1980s, it was fascinating. There was all this angst about 
            Japan, and Japan taking over various industries. And some of the 
            humility and thinking that came out of that actually led to the 
            great work that we saw the benefit of in the '90s. So I'm hopeful as 
            we look at the fact that the Internet, software and hardware are 
            enabling global activity, we'll go back to basics and say, "No, we 
            don't want to close the door, but we want to make sure that we're 
            leading the way and that we've got our own unique contribution to 
            that picture."
            One of the great challenges is in education, to make sure that 
            the quality of education at all levels is super good and to make 
            sure that the entire population is participating in that. If we look 
            at the engineering and the sciences, there the progress is not as 
            strong as in some of the other professional fields.
            I've had a chance, through the Millennium Scholarship Program, to 
            support a lot of people, a lot of minorities in going into fields 
            that without that would have been more difficult for them to do. And 
            I was pretty impressed to see that here at Cornell there are over 50 
            Gates Millennium Scholars, so that's a real endorsement, that these 
            people who have the scholarship can literally go to any college in 
            the sense that it's all financed, and such a high number have chosen 
            to come here.
            Computer science, I'm saying very explicitly, is the most fun and 
            interesting field. In fact, if you think of other fields, they're 
            just not going to change like this. They're not going to take a 
            device that's blind and can't talk and can't do anything really, 
            it's so limited today, and over the course of just the next ten 
            years, tackle and solve many of these very, very tough problems. And 
            people who understand those things can really be the ones who 
            participate in the advances in the other fields.
            So it's fun stuff. The type of jobs that are available are quite 
            broad. I think we need to do more to get the word out about the 
            opportunities and the range of things that go on. And I'm excited to 
            see you all here. I think all of you have a chance to make 
            contributions to the breakthroughs that I talked about, and I look 
            forward to seeing what you're able to achieve.
            Thank you. (Applause.)
 
Cornell University is an American institution of higher education located in Ithaca, New York. It is one of the eight universities on the east coast of the United States that make up the traditional group known as the Ivy League. The University, formally chartered by the state of New York in 1865 and named in honor of the American businessman Ezra Cornell, offered degrees in the classical disciplines in addition to the studies in agriculture and engineering required of a university created by the state's land grant. Some of the institution's divisions are state-funded: the College of Agriculture and Life Sciences, the College of Human Ecology, the School of Industrial and Labor Relations and the College of Veterinary Medicine. The University's private units are the College of Architecture, Art and Planning, the College of Arts and Sciences, the College of Engineering, the School of Hotel Administration, the Law School and the Johnson Graduate School of Management. The Medical College and the Graduate School of Medical Sciences are located on the New York City campus. The University's 20 centers devoted to specialized study in the arts and sciences include seven jointly governed centers. The University library holds important collections on Southeast and East Asia and Latin America, as well as editions of Dante, Petrarch, William Wordsworth, James Joyce and George Bernard Shaw.