Cybernetics

In this age of DNA computers and artificial intelligence, information is becoming disembodied even as the "bodies" that once carried it vanish into virtuality. While some marvel at these changes, envisioning consciousness downloaded into a computer or humans "beamed" Star Trek-style, others view them with horror, seeing monsters brooding in the machines. In How We Became Posthuman, N. Katherine Hayles separates hype from fact, investigating the fate of embodiment in an information age.

Hayles relates three interwoven stories: how information lost its body, that is, how it came to be conceptualized as an entity separate from the material forms that carry it; the cultural and technological construction of the cyborg; and the dismantling of the liberal humanist "subject" in cybernetic discourse, along with the emergence of the "posthuman."

Ranging widely across the history of technology, cultural studies, and literary criticism, Hayles shows what had to be erased, forgotten, and elided to conceive of information as a disembodied entity. Thus she moves from the post-World War II Macy Conferences on cybernetics to the 1952 novel Limbo by cybernetics aficionado Bernard Wolfe; from the concept of self-making to Philip K. Dick's literary explorations of hallucination and reality; and from artificial life to postmodern novels exploring the implications of seeing humans as cybernetic systems.

Although becoming posthuman can be nightmarish, Hayles shows how it can also be liberating. From the birth of cybernetics to artificial life, How We Became Posthuman provides an indispensable account of how we arrived in our virtual age, and of where we might go from here.

How did cybernetics and information theory arise, and how did they come to dominate fields as diverse as engineering, biology, and the social sciences?

Winner of a Choice Outstanding Academic Title award (Choice, ACRL)

Cybernetics—the science of communication and control as it applies to machines and to humans—originates from efforts during World War II to build automatic antiaircraft systems. Following the war, this science extended beyond military needs to examine all systems that rely on information and feedback, from the level of the cell to that of society. In The Cybernetics Moment, Ronald R. Kline, a senior historian of technology, examines the intellectual and cultural history of cybernetics and information theory, whose language of “information,” “feedback,” and “control” transformed the idiom of the sciences, hastened the development of information technologies, and laid the conceptual foundation for what we now call the Information Age.

Kline argues that, for about twenty years after 1950, the growth of cybernetics and information theory and ever-more-powerful computers produced a utopian information narrative—an enthusiasm for information science that influenced natural scientists, social scientists, engineers, humanists, policymakers, public intellectuals, and journalists, all of whom struggled to come to grips with new relationships between humans and intelligent machines.

Kline traces the relationship between the invention of computers and communication systems and the rise, decline, and transformation of cybernetics by analyzing the lives and work of such notables as Norbert Wiener, Claude Shannon, Warren McCulloch, Margaret Mead, Gregory Bateson, and Herbert Simon. Ultimately, he reveals the crucial role played by the cybernetics moment—when cybernetics and information theory were seen as universal sciences—in setting the stage for our current preoccupation with information technologies.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, and singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; about indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; about Malthusian economics and dystopian evolution; and about artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
This is a book whose time has come again. The first edition (published by McGraw-Hill in 1964) was written in 1962, and it celebrated a number of approaches to developing an automata theory that could provide insights into the processing of information in brainlike machines, while making the material accessible to readers with no more than a college freshman's knowledge of mathematics. The book introduced many readers to aspects of cybernetics, the study of computation and control in animal and machine.

But by the mid-1960s, many workers had abandoned the integrated study of brains and machines to pursue artificial intelligence (AI) as an end in itself: the programming of computers to exhibit some aspects of human intelligence, with the emphasis on achieving some benchmark of performance rather than on capturing the mechanisms by which humans were themselves intelligent. Some workers tried to use concepts from AI to model human cognition with computer programs, but were so dominated by the metaphor "the mind is a computer" that many argued the mind must share with the computers of the 1960s the property of being serial, of executing a series of operations one at a time. As the 1960s became the 1970s, this trend continued. Meanwhile, experimental neuroscience saw an explosion of new data on the anatomy and physiology of neural circuitry, but little of this research placed these circuits in the context of overall behavior, and little was informed by theoretical concepts beyond feedback mechanisms and feature detectors.