
Brainlike Computers, Learning From Experience / Viewing Where the Internet Goes

 

Kwabena Boahen holding a biologically inspired processor attached to a robotic arm in a laboratory at Stanford University. (Erin Lubin/The New York Times)

By JOHN MARKOFF, The New York Times, Published: December 28, 2013

PALO ALTO, Calif. — Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.


The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.

Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.

But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
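As a rough sketch of the unsupervised idea (and not of Google's actual system), the toy autoencoder below learns to compress and reconstruct unlabeled inputs; everything it picks up about the data comes from the data itself, with no label such as "cat" ever supplied. The layer sizes and the synthetic data are invented for illustration.

```python
# A minimal unsupervised-learning sketch: a tiny autoencoder trained only to
# reconstruct its inputs. No labels are used anywhere. Sizes and data are
# invented for illustration; this is not Google's system.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 16))           # 200 synthetic "images" of 16 pixels each

n_in, n_hidden = 16, 4              # compress 16 inputs into 4 learned features
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_in))
lr = 0.1

for epoch in range(500):
    H = np.tanh(X @ W1)             # encode: hidden features
    X_hat = H @ W2                  # decode: reconstruction of the input
    err = X_hat - X                 # the input is its own training target
    grad_W2 = (H.T @ err) / len(X)
    grad_H = (err @ W2.T) * (1 - H ** 2)
    grad_W1 = (X.T @ grad_H) / len(X)
    W2 -= lr * grad_W2              # gradient step on the decoder
    W1 -= lr * grad_W1              # gradient step on the encoder

print("mean reconstruction error:", float(np.mean(err ** 2)))
```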

In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.

The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.

“We have no clue,” he said. “I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.”

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.

The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
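The shuttling described above can be made concrete with a toy example. In the sketch below, a dictionary stands in for main memory, a local variable plays the role of the processor's short-term storage, and the result is written back once the programmed action finishes; the temperature readings are invented.

```python
# A toy illustration of the von Neumann pattern: fetch data from memory,
# operate on it step by step in the processor, store the result back.
# The temperature values are invented example data.
memory = {"temps": [14.2, 15.1, 13.8, 16.0], "mean_temp": None}

working_copy = memory["temps"]        # fetch: data moves toward the processor
accumulator = 0.0
for value in working_copy:            # execute the programmed instructions
    accumulator += value
result = accumulator / len(working_copy)

memory["mean_temp"] = result          # store: the result returns to main memory
print(memory["mean_temp"])
```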

The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
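A minimal software sketch of that spiking, weight-adjusting behavior follows. It illustrates the general idea only, not IBM's, Qualcomm's or Stanford's hardware: a single leaky integrate-and-fire neuron accumulates weighted input spikes, fires when its potential crosses a threshold, and strengthens the connections that were active when it fired (a simple Hebbian-style rule). All constants are arbitrary.

```python
# One leaky integrate-and-fire neuron with a Hebbian-style weight update.
# A sketch of the neuromorphic idea described above, not any vendor's chip;
# the threshold, leak and learning rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 8
weights = rng.random(n_inputs) * 0.2          # "learned" connection strengths
potential, threshold, leak, lr = 0.0, 1.0, 0.9, 0.05

for step in range(100):
    spikes_in = (rng.random(n_inputs) < 0.3).astype(float)    # incoming spikes
    potential = leak * potential + spikes_in @ weights        # integrate with leak
    if potential >= threshold:                                 # the neuron "spikes"
        print(f"spike at step {step}")
        weights = np.clip(weights + lr * spikes_in, 0.0, 1.0)  # strengthen active inputs
        potential = 0.0                                         # reset after the spike
```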

“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”

The new computers, which are still based on silicon chips, will not replace today’s computers, but will augment them, at least for now. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the giant centralized computers that make up the cloud. Modern computers already consist of a variety of coprocessors that perform specialized tasks, like producing graphics on your cellphone and converting visual, audio and other data for your laptop.

One great advantage of the new approach is its ability to tolerate glitches. Traditional computers are precise, but they cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks.

Traditional computers are also remarkably energy inefficient, especially when compared to actual brains, which the new neurons are built to mimic.

I.B.M. announced last year that it had built a supercomputer simulation of the brain that encompassed roughly 10 billion neurons — more than 10 percent of a human brain. It ran about 1,500 times more slowly than an actual brain. Further, it required several megawatts of power, compared with just 20 watts of power used by the biological brain.

Running the program, known as Compass, which attempts to simulate a brain, at the speed of a human brain would require a flow of electricity in a conventional computer that is equivalent to what is needed to power both San Francisco and New York, Dr. Modha said.
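As a back-of-the-envelope check (not Dr. Modha's own calculation), take "several megawatts" to be roughly 4 megawatts and assume, naively, that power scales linearly with simulation speed:

```latex
% Naive scaling estimate; the 4 MW figure and the linearity are assumptions.
P_{\text{real time}} \approx 1500 \times 4\,\text{MW} \approx 6\,\text{GW},
\qquad \text{compared with } P_{\text{brain}} \approx 20\,\text{W}.
```

A figure of a few gigawatts is on the order of the combined electricity demand of two very large cities, which is consistent with the comparison above.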

I.B.M. and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said that it is coming out in 2014 with a commercial version, which is expected to be used largely for further development. Moreover, many universities are now focused on this new style of computing. This fall the National Science Foundation financed the Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

The largest class on campus this fall at Stanford was a graduate-level machine-learning course covering both statistical and biological approaches, taught by the computer scientist Andrew Ng. More than 760 students enrolled. “That reflects the zeitgeist,” said Terry Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered early biologically inspired algorithms. “Everyone knows there is something big happening, and they’re trying to find out what it is.”

A version of this article appears in print on December 29, 2013, on page A1 of the New York edition with the headline: Brainlike Computers, Learning From Experience.

     

 

Viewing Where the Internet Goes

 

Jon Han

By JOHN MARKOFF, The New York Times, Published: December 30, 2013
Will 2014 be the year that the Internet is reined in?


Vinton Cerf, far left, and Robert Kahn, far right, pictured in 1997, worked out the conventions of the modern Internet in a hotel room in 1973. (Associated Press/JavaSoft; Court Mast)

When Edward J. Snowden, the disaffected National Security Agency contract employee, purloined tens of thousands of classified documents from computers around the world, his actions — and their still-reverberating consequences — heightened international pressure to control the network that has increasingly become the world’s stage. At issue is the technical principle that is the basis for the Internet, its “any-to-any” connectivity. That capability has defined the technology ever since Vinton Cerf and Robert Kahn sequestered themselves in the conference room of a Palo Alto, Calif., hotel in 1973, with the task of interconnecting computer networks for an elite group of scientists, engineers and military personnel.

The two men wound up developing a simple and universal set of rules for exchanging digital information — the conventions of the modern Internet. Despite many technological changes, their work prevails.

But while the Internet’s global capability to connect anyone with anything has affected every nook and cranny of modern life — with politics, education, espionage, war, civil liberties, entertainment, sex, science, finance and manufacturing all transformed — its growth increasingly presents paradoxes.

It was, for example, the Internet’s global reach that made classified documents available to Mr. Snowden — and made it so easy for him to distribute them to news organizations.

Yet the Internet also made possible widespread surveillance, a practice that alarmed Mr. Snowden and triggered his plan to steal and publicly release the information.

With the Snowden affair starkly highlighting the issues, the new year is likely to see renewed calls to change the way the Internet is governed. In particular, governments that do not favor the free flow of information, especially if it’s through a system designed by Americans, would like to see the Internet regulated in a way that would “Balkanize” it by preventing access to certain websites.

The debate right now involves two international organizations, usually known by their acronyms, with different views: Icann, the Internet Corporation for Assigned Names and Numbers, and the I.T.U., or International Telecommunication Union.

Icann, a nonprofit that oversees the Internet’s basic functions, like the assignment of names to websites, was established in 1998 by the United States government to create an international forum for “governing” the Internet. The United States continues to favor this group.

The I.T.U., created in 1865 as the International Telegraph Union, is the United Nations telecommunications regulatory agency. Nations like Brazil, China and Russia have been pressing the United States to switch governance of the Internet to this organization.

Dr. Cerf, 70, and Dr. Kahn, 75, have taken slightly different positions on the matter. Dr. Cerf, who was chairman of Icann from 2000 to 2007, has become known as an informal “Internet ambassador” and a strong proponent of an Internet that remains independent of state control. He has been one of the major supporters of the idea of “network neutrality” — the principle that Internet service providers should enable access to all content and applications, regardless of the source.

Dr. Kahn has made a determined effort to stay out of the network neutrality debate. Nevertheless, he has been more willing to work with the I.T.U., particularly in attempting to build support for a system, known as Digital Object Architecture, for tracking and authenticating all content distributed through the Internet.

Both men agreed to sit down, in separate interviews, to talk about their views on the Internet’s future. The interviews were edited and condensed.

The Internet Ambassador

After serving as a program manager at the Pentagon’s Defense Advanced Research Projects Agency, Vinton Cerf joined MCI Communications Corp., an early commercial Internet company that was purchased by Verizon in 2006, to lead the development of electronic mail systems for the Internet. In 2005, he became a vice president and “Internet evangelist” for Google. Last year he became the president of the Association for Computing Machinery, a leading international educational and scientific computing society.

Q. Edward Snowden’s actions have raised a new storm of controversy about the role of the Internet. Is it a significant new challenge to an open and global Internet?

A. The answer is no, I don’t think so. There are some similar analogues in history. The French historically copied every telex or every telegram that you sent, and they shared it with businesses in order to remain competitive. And when that finally became apparent, it didn’t shut down the telegraph system.

The Snowden revelations will increase interest in end-to-end cryptography for encrypting information both in transit and at rest. For many of us, including me, who believe that is an important capacity to have, this little crisis may be the trigger that induces people to spend time and energy learning how to use it.
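As a small illustration of protecting data at rest, the sketch below encrypts and decrypts a short message with an authenticated symmetric scheme. It assumes the third-party Python package cryptography is installed, and it is meant only to show how little code the basic step takes, not to recommend a particular design or key-management approach.

```python
# Encrypting data "at rest" with an authenticated symmetric scheme (Fernet).
# Assumes the third-party "cryptography" package is installed. Key management,
# the hard part in practice, is deliberately left out of this sketch.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in real use, store this key securely
f = Fernet(key)

token = f.encrypt(b"private notes")    # ciphertext that is safe to write to disk
print(token)
print(f.decrypt(token))                # only a holder of the key can read it back
```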

You’ve drawn the analogy to a road or highway system. That brings to mind the idea of requiring a driver’s license to use the Internet, which raises questions about responsibility and anonymity.

I still believe that anonymity is an important capacity, that people should have the ability to speak anonymously. It’s argued that people will be encouraged to say untrue things, harmful things, especially if they believe they are anonymous.

There is a tension there, because in some environments the only way you will be able to behave safely is to have some anonymity.

The other side of this coin is that I believe that strong authentication is necessary. We must support the entire spectrum here. In some cases you want whistle-blowing kinds of capacity that will protect anonymity. Some governments will not tolerate anonymity, and in our government it’s still an open question.

Vinton Cerf, now with Google, showed an interactive work at the company’s cultural hub in Paris. (Kenzo Tribouillard/Agence France-Presse — Getty Images)


Can the Internet be governed effectively?

I’m deliberately arguing that new institutions are not necessary.

How significant is the danger that the Internet will be balkanized, as critics of the I.T.U. fear?

Balkanization is too simple of a concept. There is an odd mix of permeability and impermeability in the Net. You won’t be able to communicate with everyone, and not every application will be accessible to everyone. We will be forced to lose the basic and simple notion that everyone should be able to communicate with everyone else.

I’m disappointed that the idyllic and utopian model of everyone being able to communicate with everyone else and do what they want to do will be — what is the right word? Inhibited is the wrong word, because it sounds too widespread — maybe variable is the best way of saying it. End-to-end connectivity will vary depending on location.

How has your original design weathered the test of time?

Everything has expanded by a factor of a million since we turned it on in 1973. The number of machines on the network, the speeds of the network, the kind of memory capacity that’s available, it’s all 10 to the sixth.

I would say that there aren’t too many systems that have been designed that can handle a millionfold scaling without completely collapsing. But that doesn’t mean that it will continue to work that way.
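For a sense of scale, the arithmetic below multiplies an assumed 1973-era link speed of 50 kilobits per second (an assumption for illustration, not a figure from Dr. Cerf) by the millionfold factor he describes.

```python
# Illustration of a factor-of-a-million scaling; the 1973 baseline is assumed.
link_speed_1973_bps = 50_000        # roughly a 50 kbit/s line of that era (assumed)
scaling = 10 ** 6
print(f"{link_speed_1973_bps * scaling / 1e9:.0f} Gbit/s")   # about 50 Gbit/s
```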

Is the I.T.U. and its effort to take over governance a threat to an open Internet?

People complained about my nasty comment. I said that these dinosaurs don’t know that they’re dead yet, because it takes so long for the signal to traverse their long necks to get to their pea-sized brains. Some people were insulted by that. I was pleased. It’s not at all clear to me that I.T.U.’s standards-making activities have kept up with need. The consequence of this is that they are less and less relevant.

Beyond the mobile Internet and the Internet of things, what else do you see on the horizon?

There are a couple of things. One of them is related to measurement and monitoring. It gives us the ability to see trends and to see things that we might not see if we under-sample. That, plus being able to see large aggregates of what we hope is sufficiently anonymized information, can help us reveal states that we might not otherwise see.

It is like being able to figure out flu trends. I think of it as a kind of sociological or a socioeconomic CT scan that is helping us to see the dynamics in the world in a way that we couldn’t otherwise see. And of course it leads to all kinds of worries about privacy and the like.

The Engineer

An official with Darpa from 1972 to 1985, Robert Kahn created the Corporation for National Research Initiatives, based in Reston, Va., in 1986. There he has focused on managing and distributing all of the world’s digital content — in effect, a nonproprietary Google. He has cooperated with the I.T.U. on the development of new network standards.

Q. The Snowden affair raises a paradox. The Internet made it relatively easy for him to do what he did, and at the same time it enabled the dramatic increase in surveillance that alarmed him. How do you sort that out?

A. I would push back on that a little bit. You could say oxygen made it possible for him to do that, because without it he wouldn’t be alive. Or his parents made it possible for him to do that.

Does the scandal imply anything about the future of the Internet more generally?

You can’t gaze in the crystal ball and see the future. What the Internet is going to be in the future is what society makes it. It will be what the businesses offer, it will be new products and services. It’s the new ideas that show up that nobody thought of before.

And looking farther down the road?

If you ask me what it’s going to look like in 100 years, I’m sure there are going to be some things that are similar. That is, everyone will say we know we need connectivity between computational devices. We all know that access to information is important, so what’s different? It is just the same as it was back then.

You can say the same thing about transportation. What’s new about transportation? Well, people still need to get from here to there, and sometimes it’s not safe. You can get there faster, but that’s just a parameter that’s changed.

Has the Snowden scandal changed the dynamics surrounding privacy and surveillance? How will it affect the debate?

There have always been ways in which people can access things, so instead of being able to log in because he had a key to this file, or this password or this firewall, he had a key to a physical room or a key to a safe.

Thievery of this sort is not new. The question is, did it change the scale of it? Probably. If it had been actual physical stuff, someone would have said, “What are you doing with these trailer trucks walking out the door?”

Is there a solution to challenges of privacy and security?

In the 1990s when I was on the National Internet Infrastructure Advisory Committee, Al Gore showed up as vice president, and he made an impassioned pitch for the Clipper chip [an early government surveillance system]. He said, “We need to be very aware of the needs of national security and law enforcement.” Even though the private sector was arguing for tight encryption, the federal government needed [to be able to conduct surveillance]. It never went anywhere, and it’s not anywhere today. I think it’s probably easier to solve the Israeli-Palestinian problem than it is to solve this.

Can the Internet be governed? What about the disputes between the different standards-setting bodies over control of the network?

No matter what you do, any country in the world is going to have the ability to set its own rules internally. Any country in the world can pull the plug. It’s not a question of technical issues, it’s not a question of right or wrong, it’s not a question of whether global Internet governance is right or wrong. It’s just with us.

I used to do the Icann [management] function myself with one 3-by-5 card in my pocket, and when I got two of them, I asked Jon Postel if he would take over. You have to put it in perspective. Now it’s a huge business, and it gets caught up in a few things.

Would it be possible to start over and build a new Internet to solve the problems the current Internet faces?

You can’t do a wholesale replacement. If you think there is too much spam today, tell me what your solution is for it, because if you design a clean slate Internet and you don’t have a solution for spam, you’re going to have spam on your clean slate Internet and you’re going to have an argument for yet another clean slate Internet because that one didn’t work. It’s like saying we have crime in society, so let’s blow up the planet and build a new one. There will probably be crime on the new planet.
