Technological singularity
"Rapture of the nerds" redirects
here. For the novel by Cory Doctorow and Charles Stross, see The Rapture of
the Nerds.
The technological singularity is the
theoretical emergence of superintelligence through
technological means.[1] Since the
capabilities of such intelligence would be difficult for an unaided human mind
to comprehend, the technological singularity is seen as an occurrence beyond
which events cannot be predicted.
Proponents of the singularity typically
postulate an "intelligence explosion",[2][3]
in which superintelligences design successive generations of increasingly
powerful minds; this process might occur very quickly and might not stop until
the agent's cognitive abilities greatly surpass those of any human.
The term was popularized by science fiction
writer Vernor Vinge,
who argues that artificial
intelligence, human biological enhancement,
or brain-computer
interfaces could be possible causes of the singularity. The specific
term "singularity" as a description for a phenomenon of technological
acceleration causing an eventual unpredictable outcome in society was coined by
mathematician John von Neumann,
who in the mid-1950s spoke of "ever accelerating progress of technology
and changes in the mode of human life, which gives the appearance of
approaching some essential singularity in the history of the race beyond which
human affairs, as we know them, could not continue." The concept has also
been popularized by futurists such as Ray Kurzweil, who cited
von Neumann's use of the term in a foreword to von Neumann's classic The Computer and
the Brain.
Kurzweil predicts the singularity to occur
around 2045[4] whereas Vinge
predicts some time before 2030.[5]
Contents
1 Basic concepts
2 History of the idea
3 Intelligence explosion
3.1 Speed improvements
3.2 Intelligence improvements
3.3 Impact
3.3.1 Existential risk
3.4 Implications for human society
4 Accelerating change
5 Criticisms
6 In popular culture
8 Notes
9 References
10 External links
10.1 Essays and articles
10.2 Singularity AI projects
10.3 Fiction
10.4 Other links
Basic concepts
Kurzweil writes that, due
to paradigm shifts,
a trend of exponential growth extends Moore's law to integrated circuits
from earlier transistors,
vacuum tubes, relays, and electromechanical
computers. He predicts that the exponential growth will continue, and that in a
few decades the computing power of all computers will exceed that of human
brains, with superhuman artificial
intelligence appearing around the same time.
Many of the most recognized writers on the
singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms
of the technological creation of superintelligence, and argue that it is
difficult or impossible for present-day humans to predict what a
post-singularity would be like, due to the difficulty of imagining the
intentions and capabilities of superintelligent entities.[5][4][6] The term "technological singularity"
was originally coined by Vinge, who made an analogy between the breakdown in
our ability to predict what would happen after the development of
superintelligence and the breakdown of the predictive ability of modern physics at the space-time
singularity beyond the event horizon of a black hole.[6]
Some writers use "the singularity" in
a broader way to refer to any radical changes in our society brought about by
new technologies such as molecular
nanotechnology,[7][8][9]
although Vinge and other prominent writers specifically state that without
superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to
observations of exponential growth in various technologies (with Moore's Law being the most
prominent example), using such observations as a basis for predicting that the
singularity is likely to happen sometime within the 21st century.[8][10]
A technological singularity includes the
concept of an intelligence explosion, a term coined in 1965 by I. J. Good.[11] Although technological progress has been
accelerating, it has been limited by the basic intelligence of the human brain,
which has not, according to Paul R. Ehrlich, changed
significantly for millennia.[12]
However, with the increasing power of computers and other technologies, it
might eventually be possible to build a machine that is more intelligent than
humanity.[13] If a superhuman intelligence were to
be invented, whether through the amplification of
human intelligence or through artificial intelligence, it would
bring to bear greater problem-solving and inventive skills than current humans
are capable of. It could then design an even more capable machine, or re-write
its own source code to become even more intelligent. This more capable machine
could then go on to design a machine of yet greater capability. These
iterations of recursive
self-improvement could accelerate, potentially allowing enormous
qualitative change before any upper limits imposed by the laws of physics or
theoretical computation set in.[14][15][16]
The exponential growth in computing technology
suggested by Moore's Law is commonly cited as a reason to expect a singularity
in the relatively near future, and a number of authors have proposed
generalizations of Moore's Law. Computer scientist and futurist Hans Moravec proposed in a
1998 book that the exponential growth curve could be extended back through
earlier computing technologies prior to the integrated circuit.
Futurist Ray Kurzweil postulates a law of
accelerating returns in which the speed of technological change (and
more generally, all evolutionary processes[17])
increases exponentially, generalizing Moore's Law in the same manner as
Moravec's proposal, and also including material technology (especially as
applied to nanotechnology),
medical technology and others.[18]
Between 1986 and 2007, machines’ application-specific capacity to compute
information per capita has roughly doubled every 14 months; the per capita
capacity of the world’s general-purpose computers has doubled every 18 months;
the global telecommunication capacity per capita doubled every 34 months; and
the world’s storage capacity per capita doubled roughly every 40 months
(about every three years).[19]
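The doubling times in the preceding sentence can be restated as annual growth factors with a short calculation. This is an editorial sketch: the doubling times come from the passage above, while the conversion factor = 2^(12/months) is ordinary compound-growth arithmetic, not a figure from the cited study.

```python
# Illustrative arithmetic only: converts the doubling times quoted above
# (Hilbert and Lopez, 2011) into equivalent annual growth factors using
# the standard compound-growth relation factor = 2**(12 / doubling_months).
doubling_months = {
    "application-specific computation per capita": 14,
    "general-purpose computation per capita": 18,
    "telecommunication capacity per capita": 34,
    "storage capacity per capita": 40,
}

for name, months in doubling_months.items():
    factor = 2 ** (12 / months)
    print(f"{name}: x{factor:.2f} per year (~{(factor - 1) * 100:.0f}% growth)")
```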
Like other authors, though, Kurzweil reserves the term "singularity"
for a rapid increase in intelligence (as opposed to other technologies),
writing for example that "The Singularity will allow us to transcend these
limitations of our biological bodies and brains ... There will be no
distinction, post-Singularity, between human and machine".[20] He also defines his predicted date of the
singularity (2045) in terms of when he expects computer-based intelligences to
significantly exceed the sum total of human brainpower, writing that advances
in computing before that date "will not represent the Singularity"
because they do "not yet correspond to a profound expansion of our
intelligence."[21]
The term "technological singularity"
reflects the idea that such change may happen suddenly, and that it is
difficult to predict how such a new world would operate.[22][23] It is unclear whether an intelligence
explosion of this kind would be beneficial or harmful, or even an existential threat,[24][25] as the issue has not been dealt with by most artificial
general intelligence researchers, although the topic of friendly artificial intelligence
is investigated by the Future of
Humanity Institute and the Singularity Institute for Artificial
Intelligence, which is now the Machine
Intelligence Research Institute.[22]
Many prominent technologists and academics
dispute the plausibility of a technological singularity, including Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's Law is often cited
in support of the concept.[26][27]
History of the idea
In 1847, R. Thornton, the editor of The Expounder
of Primitive Christianity,[28]
wrote about the recent invention of a four-function mechanical
calculator:
...such machines, by which the scholar may, by
turning a crank, grind out the solution of a problem without the fatigue of
mental application, would by its introduction into schools, do incalculable
injury. But who knows that such machines when brought to greater perfection,
may not think of a plan to remedy all their own defects and then grind out
ideas beyond the ken of mortal mind!
In 1951, Alan Turing spoke of machines outstripping humans intellectually:[29]
once the machine thinking method has started,
it would not take long to outstrip our feeble powers. ... At some stage
therefore we should have to expect the machines to take control, in the way
that is mentioned in Samuel Butler's
Erewhon.
In the mid-1950s, Stanislaw Ulam had a
conversation with John von Neumann
in which von Neumann spoke of "ever accelerating progress of technology
and changes in the mode of human life, which gives the appearance of
approaching some essential singularity in the history of the race beyond which
human affairs, as we know them, could not continue."
In 1965, I. J. Good first wrote of an
"intelligence explosion", suggesting that if machines could even
slightly surpass human intellect, they could improve their own designs in ways
unforeseen by their designers, and thus recursively augment
themselves into far greater intelligences. The first such improvements might be
small, but as the machine became more intelligent it would become better at
becoming more intelligent, which could lead to a cascade of self-improvements
and a sudden surge to superintelligence (or a singularity).
In 1983, mathematician and author Vernor Vinge
greatly popularized Good’s notion of an intelligence explosion in a number of
writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this
op-ed piece, Vinge seems to have been the first to use the term
"singularity" in a way that was specifically tied to the creation of
intelligent machines,[30][31]
writing:
We will soon create intelligences greater than
our own. When this happens, human history will have reached a kind of
singularity, an intellectual transition as impenetrable as the knotted
space-time at the center of a black hole, and the world will pass far beyond
our understanding. This singularity, I believe, already haunts a number of
science-fiction writers. It makes realistic extrapolation to an interstellar
future impossible. To write a story set more than a century hence, one needs a
nuclear war in between ... so that the world remains intelligible.
In 1984, Samuel R. Delany used
"cultural fugue" as a plot device in his science fiction novel Stars in My
Pocket Like Grains of Sand; the terminal runaway of
technological and cultural complexity in effect destroys all life on any world
on which it transpires, a process which is poorly understood by the novel's
characters, and against which they seek a stable defense. In 1985 Ray Solomonoff introduced
the notion of "infinity point"[32]
in the time scale of artificial intelligence, analyzed the magnitude of the
"future shock"
that "we can expect from our AI expanded scientific community" and on
social effects. Estimates were made "for when these milestones would
occur, followed by some suggestions for the more effective utilization of the
extremely rapid technological growth that is expected."
Vinge also popularized the concept in SF novels
such as Marooned in
Realtime (1986) and A Fire Upon the
Deep (1992). The former is set in a world of rapidly accelerating change
leading to the emergence of more and more sophisticated technologies separated
by shorter and shorter time intervals, until a point beyond human comprehension
is reached. The latter starts with an imaginative description of the evolution
of a superintelligence passing through exponentially accelerating developmental
stages ending in a transcendent,
almost omnipotent
power unfathomable by mere humans. It is also implied that the development does
not stop at this level.
In his 1988 book Mind Children, computer
scientist and futurist Hans Moravec generalizes Moore's law to make predictions
about the future of artificial life. Moravec outlines a timeline and a scenario
in this regard,[33][34]
in which robots will evolve into a new series of artificial species,
starting around 2030–2040.[35]
In Robot: Mere Machine to Transcendent Mind, published in 1998, Moravec
further considers the implications of evolving robot intelligence,
generalizing Moore's law to technologies predating the integrated circuit, and
speculating about a coming "mind fire" of rapidly expanding
superintelligence, similar to Vinge's ideas.
A 1993 article by Vinge, "The Coming
Technological Singularity: How to Survive in the Post-Human Era",[5] was widely disseminated on the internet and
helped to popularize the idea.[36]
This article contains the oft-quoted statement, "Within thirty years, we
will have the technological means to create superhuman intelligence. Shortly
after, the human era will be ended." Vinge refines his estimate of the
time scales involved, adding, "I'll be surprised if this event occurs
before 2005 or after 2030."
Vinge predicted four ways the singularity could
occur:[37]
1. The development of computers that are "awake" and superhumanly intelligent.
2. Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
4. Biological science may find ways to improve upon the natural human intellect.
Vinge continues by predicting that superhuman
intelligences will be able to enhance their own minds faster than their human
creators. "When greater-than-human intelligence drives progress,"
Vinge writes, "that progress will be much more rapid." This feedback loop of
self-improving intelligence, he predicts, will cause large amounts of
technological progress within a short period; the creation of
superhuman intelligence would represent a breakdown in humans' ability to model
their future. He argued that authors cannot write realistic characters
who surpass the human intellect, as the thoughts of such an intellect would be
beyond the ability of humans to express. Vinge named this event "the
Singularity".
Damien Broderick's popular science book The Spike (1997) was the
first to investigate the technological singularity in detail.
In 2000, Bill Joy, a prominent
technologist and co-founder of Sun Microsystems, voiced
concern over the potential dangers of the singularity.[38]
In 2005, Ray Kurzweil published The Singularity
is Near, which brought the idea of the singularity to the popular
media both through the book's accessibility and a publicity campaign that
included an appearance on The Daily Show
with Jon Stewart.[39]
The book stirred intense controversy, in part because Kurzweil's utopian predictions
contrasted starkly with other, darker visions of the possibilities of the
singularity. Kurzweil, his theories, and the controversies surrounding them were
the subject of Barry Ptolemy's
documentary Transcendent Man.
In 2007, Eliezer Yudkowsky
suggested that many of the different definitions that have been assigned to
"singularity" are mutually incompatible rather than mutually
supporting.[8] For example,
Kurzweil extrapolates current technological trajectories past the arrival of
self-improving AI or superhuman intelligence, which Yudkowsky argues represents
a tension with both I. J. Good's proposed discontinuous upswing in intelligence
and Vinge's thesis on unpredictability.
In 2008, Robin Hanson (taking
"singularity" to refer to sharp increases in the exponent of economic
growth) lists the Agricultural
and Industrial
Revolutions as past singularities. Extrapolating from such past
events, Hanson proposes that the next economic singularity should increase economic growth by a factor of between 60
and 250. An innovation that allowed for the replacement of virtually all
human labor could trigger this event.[40]
In 2009, Kurzweil and X-Prize founder Peter Diamandis announced
the establishment of Singularity
University, whose stated mission is "to assemble, educate and
inspire a cadre of leaders who strive to understand and facilitate the
development of exponentially advancing technologies in order to address
humanity’s grand challenges."[41]
Funded by Google, Autodesk, ePlanet Ventures, and a
group of technology industry leaders, Singularity University is based at NASA's Ames Research
Center in Mountain View,
California. The
not-for-profit organization runs an annual ten-week graduate program during the
summer that covers ten different technology and allied tracks, and a series of
executive programs throughout the year.
In 2010, Aubrey de Grey applied the
term the "Methuselarity"[42] to the point at which medical technology
improves so fast that expected human
lifespan increases by more than one year per year. In 2010 in
"Apocalyptic AI – Visions of Heaven in Robotics, Artificial Intelligence,
and Virtual Reality"[43]
Robert Geraci offers an account of the developing "cyber-theology"
inspired by Singularity studies. Bruce Sterling's 1996 novel Holy Fire
explores some of these themes, depicting a world in which radical
life extension has produced a gerontocracy.
In 2011, Kurzweil noted existing trends and
concluded that the singularity was increasingly likely to occur around 2045.
He told Time magazine: "We will successfully reverse-engineer the
human brain by the mid-2020s. By the end of that decade, computers will be
capable of human-level intelligence."[44]
Intelligence explosion
The notion of an "intelligence
explosion" was first described thus by Good
(1965), who speculated on the effects of superhuman machines:
Let an ultraintelligent machine be defined as a
machine that can far surpass all the intellectual activities of any man however
clever. Since the design of machines is one of these intellectual activities,
an ultraintelligent machine could design even better machines; there would then
unquestionably be an ‘intelligence explosion,’ and the intelligence of man
would be left far behind. Thus the first ultraintelligent machine is the last
invention that man need ever make.
Most proposed methods for creating superhuman
or transhuman minds fall into
one of two categories: intelligence amplification of human brains and
artificial intelligence. The means speculated to produce intelligence
augmentation are numerous, and include bioengineering, genetic engineering,
nootropic drugs, AI
assistants, direct brain-computer interfaces and mind uploading. The
existence of multiple paths to an intelligence explosion makes a singularity
more likely; for a singularity to not occur they would all have to fail.[6]
Hanson (1998)
is skeptical of human intelligence augmentation, writing that once one has
exhausted the "low-hanging fruit" of easy methods for increasing
human intelligence, further improvements will become increasingly difficult to
find. Despite the numerous speculated means for amplifying human intelligence,
non-human artificial intelligence (specifically seed AI) is the most
popular option for organizations trying to advance the singularity.[citation needed]
Whether or not an intelligence explosion occurs
depends on three factors.[45]
The first accelerating factor is the new intelligence enhancements made
possible by each previous improvement. Conversely, as intelligences
become more advanced, further advances will become more and more complicated,
possibly overcoming the advantage of increased intelligence. Each improvement
must be able to beget at least one more improvement, on average, for the
singularity to continue. Finally, there is the issue of a hard upper limit.
Absent quantum
computing, eventually the laws of physics will prevent any further
improvements.
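The interplay of these three factors can be made concrete with a toy model. This is an editorial sketch only: the reproduction number r, its decay rate, and the hard cap below are assumed purely for illustration and do not come from the cited source.

```python
# Deterministic toy model of the three factors described above: each
# improvement enables r further improvements on average (factor 1), r
# decays as further advances get harder (factor 2), and a hard physical
# cap ends the process (factor 3). All parameter values are illustrative.
def improvements_until_stop(r0: float, decay: float, cap: int) -> int:
    pending, total, r = 1.0, 0, r0
    while pending > 0 and total < cap:
        total += 1
        pending += r - 1.0   # one improvement used up, r new ones enabled
        r *= decay           # each subsequent advance is slightly harder
    return total

print(improvements_until_stop(r0=0.9, decay=1.0, cap=10_000))   # 10: fizzles out (r < 1)
print(improvements_until_stop(r0=1.5, decay=1.0, cap=10_000))   # 10000: runs to the cap
print(improvements_until_stop(r0=1.5, decay=0.99, cap=10_000))  # ~91: stalls as r decays
```

The three calls show the three regimes: a subcritical process dies out quickly, a supercritical one runs until the physical cap, and one whose returns diminish stalls long before the cap.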
There are two logically independent, but
mutually reinforcing, accelerating effects: increases in the speed of
computation, and improvements to the algorithms used.[46] The former is predicted by Moore’s Law and the
forecast improvements in hardware,[47]
and is comparable to previous technological advances. On the other
hand, most AI researchers believe that software is more important than
hardware.[citation needed]
Speed improvements
The first is improvements to the speed at
which minds can be run. Whether human or AI, better hardware increases the rate
of future hardware improvements. Oversimplified,[48]
Moore's Law suggests that if the first doubling of speed took 18 months, the
second would take 18 subjective months, or 9 external months, whereafter four
months, two months, and so on towards a speed singularity.[49] An upper limit on speed may eventually be
reached, although it is unclear how high this would be. Hawkins (2008), responding to Good, argued that the upper limit
is relatively low;
Belief in this idea is based on a naive
understanding of what intelligence is. As an analogy, imagine we had a computer
that could design new computers (chips, systems, and software) faster than
itself. Would such a computer lead to infinitely fast computers or even
computers that were faster than anything humans could ever build? No. It might
accelerate the rate of improvements for a while, but in the end there are
limits to how big and fast computers can run. We would end up in the same
place; we'd just get there a bit faster. There would be no singularity.
If, on the other hand, the limit were far above current
human levels of intelligence, the effects of the singularity would be enormous
enough as to be indistinguishable (to humans) from a singularity without an upper
limit. For example, if the speed of thought could be increased a million-fold,
a subjective year would pass in 30 physical seconds.[6]
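Both figures in this passage follow from simple arithmetic. The sketch below uses only the numbers quoted above (18-month doublings and a million-fold speed-up): the halving external doubling times form a geometric series that sums to a finite 36 months, and a million-fold speed-up compresses a subjective year into roughly 30 physical seconds.

```python
# Back-of-the-envelope check of the two figures above. If each speed
# doubling takes 18 subjective months, external time per doubling halves
# each generation, so total external time is a geometric series:
# 18 + 9 + 4.5 + ... = 36 months.
external_total = sum(18 / 2**k for k in range(60))
print(f"external months to a speed singularity: ~{external_total:.1f}")  # ~36.0

# A million-fold speed-up compresses a subjective year into physical seconds.
seconds_per_year = 365.25 * 24 * 3600
print(f"subjective year at 10^6x speed: {seconds_per_year / 1e6:.1f} s")  # ~31.6
```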
It is difficult to directly compare silicon-based hardware
with neurons. But Berglas (2008) notes that computer speech recognition is
approaching human capabilities, and that this capability seems to require 0.01%
of the volume of the brain. This analogy suggests that modern computer hardware
is within a few orders of magnitude of being as powerful as the human brain.
Intelligence improvements
Some intelligence technologies, like seed AI,
may also have the potential to make themselves more intelligent, not just
faster, by modifying their source code.
These improvements would make further improvements possible, which would make
further improvements possible, and so on.
This mechanism for an intelligence explosion
differs from an increase in speed in two ways. First, it does not require
external action: machines designing faster hardware still require humans to
create the improved hardware, or to program factories appropriately. An AI
which was rewriting its own source code, however, could do so while contained
in an AI box.
Second, as with Vernor Vinge’s conception of
the singularity, it is much harder to predict the outcome. While speed
increases seem to be only a quantitative difference from human intelligence,
actual improvements in intelligence would be qualitatively different. Eliezer
Yudkowsky compares it to the changes that human intelligence brought: humans
changed the world thousands of times more rapidly than evolution had done, and
in totally different ways. Similarly, the evolution of life had been a massive
departure and acceleration from the previous geological rates of change, and
improved intelligence could cause change to be as different again.[50]
There are substantial dangers associated with
an intelligence explosion singularity. First, the goal structure of the AI may
not be invariant under self-improvement, potentially causing the AI to optimise
for something other than was intended.[51][52]
Secondly, AIs could compete for the scarce resources mankind uses to survive.[53]
Even if not actively malicious, there is no
reason to think that AIs would actively promote human goals unless they could
be programmed as such; if not, they might use the resources currently used to
support mankind to promote their own goals, causing human extinction.[10][54][55]
Impact
Dramatic changes in the rate of economic growth
have occurred in the past because of some technological advancement. Based on
population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic
Revolution. This new agricultural economy began to double every 900
years, a remarkable increase. In the current era, beginning with the Industrial
Revolution, the world’s economic output doubles every fifteen years, sixty
times faster than during the agricultural era. If the rise of superhuman
intelligence causes a similar revolution, argues Robin Hanson, one would expect
the economy to double at least quarterly and possibly on a weekly basis.[40]
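Hanson's comparison can be restated as annual growth rates using the standard relation rate = ln 2 / doubling time. This is an illustrative calculation; only the doubling times come from the discussion above.

```python
# Illustrative restatement of the doubling times above as continuous
# annual growth rates (rate = ln(2) / doubling time, in years).
import math

eras = {
    "hunter-gatherer economy": 250_000.0,   # years per doubling
    "agricultural economy": 900.0,          # note 900 / 15 = 60, the "sixty
    "industrial economy": 15.0,             # times faster" quoted in the text
    "post-singularity (quarterly doubling)": 0.25,
}

for era, years in eras.items():
    rate = math.log(2) / years
    print(f"{era}: doubles every {years:g} years (~{rate * 100:.4g}% per year)")
```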
Existential risk
Berglas (2008)
notes that there is no direct evolutionary motivation for an AI to be friendly
to humans. Evolution has no inherent tendency to produce outcomes valued by
humans, and there is little reason to expect an arbitrary optimisation process
to promote an outcome desired by mankind, rather than inadvertently leading to
an AI behaving in a way not intended by its creators (such as Nick Bostrom's
whimsical example of an AI which was originally programmed with the goal of
manufacturing paper clips, so that when it achieves superintelligence it
decides to convert the entire planet into a paper clip manufacturing facility;[56][57][58] Anders Sandberg has also
elaborated on this scenario, addressing various common counter-arguments.[59]) AI researcher Hugo de Garis suggests
that artificial intelligences may simply eliminate the human race for access to
scarce resources,[53][60]
and humans would be powerless to stop them.[61]
Alternatively, AIs developed under evolutionary pressure to promote their own
survival could outcompete humanity.[55]
Bostrom (2002)
discusses human extinction scenarios, and lists superintelligence as a possible
cause:
When we create the first superintelligent
entity, we might make a mistake and give it goals that lead it to annihilate
humankind, assuming its enormous intellectual advantage gives it the power to
do so. For example, we could mistakenly elevate a subgoal to the status of a
supergoal. We tell it to solve a mathematical problem, and it complies by
turning all the matter in the solar system into a giant calculating device, in
the process killing the person who asked the question.
A significant problem is that unfriendly
artificial intelligence is likely to be much easier to create than friendly AI.
While both require large advances in recursive optimisation process design,
friendly AI also requires the ability to make goal structures invariant under
self-improvement (or the AI could transform itself into something unfriendly)
and a goal structure that aligns with human values and does not automatically
destroy the human race. An unfriendly AI, on the other hand, can optimize for
an arbitrary goal structure, which does not need to be invariant under
self-modification.[62]
Eliezer Yudkowsky proposed
that research be undertaken to produce friendly
artificial intelligence in order to address the dangers. He noted
that the first real AI would have a head start on self-improvement and, if
friendly, could prevent unfriendly AIs from developing, as well as providing
enormous benefits to mankind.[54]
Bill Hibbard also
addresses issues of AI safety and morality in his book Super-Intelligent
Machines. These ideas were refined in 2008[63] and revised in 2012[64][65][66].
One hypothetical approach towards attempting to
control an artificial intelligence is an AI box, where the
artificial intelligence is kept constrained inside a simulated world
and not allowed to affect the external world. However, a sufficiently
intelligent AI may simply be able to escape by outsmarting its less intelligent
human captors.[22][67][68]
Implications for human society
In 2009, leading computer scientists,
artificial intelligence researchers, and roboticists met at the Asilomar
Conference Grounds near Monterey Bay
in California. The goal was to discuss the potential impact of the hypothetical
possibility that robots could become self-sufficient and able to make their own
decisions. They discussed the extent to which computers and robots might be
able to acquire autonomy,
and to what degree they could use such abilities to pose threats or hazards.
Some machines have acquired various forms of
semi-autonomy, including the ability to locate their own power sources and
choose targets to attack with weapons. Also, some computer viruses can evade
elimination and have achieved "cockroach intelligence."[citation needed] The conference
attendees noted that self-awareness as depicted in science-fiction is probably
unlikely, but that other potential hazards and pitfalls exist.[69]
Some experts and academics have questioned the
use of robots for military
combat, especially when such robots are given some degree of autonomous
functions.[70]
A United States
Navy report indicates that, as military robots become more complex,
there should be greater attention to implications of their ability to make
autonomous decisions.[71][72]
The Association for
the Advancement of Artificial Intelligence has commissioned a study
to examine this issue,[73]
pointing to programs like the Language
Acquisition Device, which can emulate human interaction.
Some support the design of friendly artificial
intelligence, meaning that the advances which are already occurring with AI
should also include an effort to make AI intrinsically friendly and humane.[74]
Isaac Asimov's Three Laws of
Robotics is one of the earliest examples of proposed safety measures
for AI. The laws are intended to prevent artificially intelligent robots from
harming humans. In Asimov’s stories, any perceived problems with the laws tend
to arise as a result of a misunderstanding on the part of some human operator;
the robots themselves are merely acting according to their best interpretation of their
rules. In the 2004
film I, Robot,
loosely based on Asimov's Robot stories,
an AI attempts to take complete control over humanity for the purpose of
protecting humanity from itself due to an extrapolation of the Three Laws.
In 2004, the Singularity Institute launched an Internet campaign called 3
Laws Unsafe to raise awareness of AI safety issues and the inadequacy of
Asimov’s laws in particular.[75]
Accelerating change
[Figure] According to Kurzweil, his logarithmic graph of 15 lists of paradigm
shifts for key historic events shows an exponential trend. The lists'
compilers include Carl Sagan[citation needed], Paul D. Boyer, Encyclopædia
Britannica, the American Museum of Natural History, and the University of
Arizona.
Main article: Accelerating change
Some singularity proponents argue its
inevitability through extrapolation of past trends, especially those pertaining
to shortening gaps between improvements to technology. In one of the first uses
of the term "singularity" in the context of technological progress,
Stanislaw Ulam (1958) tells of a
conversation with John von Neumann
about accelerating change:
One conversation centered on the ever
accelerating progress of technology and changes in the mode of human life,
which gives the appearance of approaching some essential singularity in the
history of the race beyond which human affairs, as we know them, could not
continue.
Hawkins (1983)
writes that "mindsteps", dramatic and irreversible changes to
paradigms or world views, are accelerating in frequency as quantified in his
mindstep equation. He cites the inventions of writing, mathematics, and the
computer as examples of such changes.
Kurzweil's analysis of history concludes that
technological progress follows a pattern of exponential growth,
following what he calls the "Law of
Accelerating Returns". Whenever technology approaches a
barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will
become increasingly common, leading to "technological change so rapid and
profound it represents a rupture in the fabric of human history".[76] Kurzweil believes that the singularity will
occur before the end of the 21st century, setting the date at 2045.[77] His predictions differ from Vinge’s in that he
predicts a gradual ascent to the singularity, rather than Vinge’s rapidly
self-improving superhuman intelligence.
Presumably, a technological singularity would
lead to rapid development of a Kardashev Type I civilization,
one that has achieved mastery of the resources of its home planet.[78]
Oft-cited dangers include those commonly
associated with molecular nanotechnology and genetic engineering. These threats
are major issues for both singularity advocates and critics, and were the
subject of Bill Joy's Wired
magazine article "Why the future
doesn't need us".[79]
The Acceleration
Studies Foundation, an educational non-profit foundation founded by John Smart,
engages in outreach, education, research and advocacy concerning accelerating
change.[80] It produces the Accelerating Change
conference at Stanford University, and maintains the educational site Acceleration Watch.
Criticisms
Some critics assert that no computer or machine
will ever achieve human intelligence, while others hold that the definition of
intelligence is irrelevant if the net result is the same.[81]
Steven Pinker stated in
2008,
"(...) There is not the slightest reason
to believe in a coming singularity. The fact that you can visualize a future in
your imagination is not evidence that it is likely or even possible. Look at
domed cities, jet-pack commuting, underwater cities, mile-high buildings, and
nuclear-powered automobiles—all staples of futuristic fantasies when I was a
child that have never arrived. Sheer processing power is not a pixie dust that
magically solves all your problems. (...)"[26]
Martin Ford in The Lights in the Tunnel:
Automation, Accelerating Technology and the Economy of the Future[82] postulates a "technology paradox" in
that before the singularity could occur most routine jobs in the economy would
be automated, since this would require a level of technology inferior to that
of the singularity. This would cause massive unemployment and plummeting consumer
demand, which in turn would destroy the incentive to invest in the technologies
that would be required to bring about the Singularity. Job displacement is
increasingly no longer limited to work traditionally considered to be
"routine."[83]
Jared Diamond, in Collapse: How
Societies Choose to Fail or Succeed, argues that cultures
self-limit when they exceed the sustainable carrying capacity of their
environment, and the consumption of strategic resources (frequently timber,
soils or water) creates a deleterious positive feedback loop that leads
eventually to social collapse and technological retrogression.
Theodore Modis[84][85] and Jonathan Huebner[86] argue that the rate of technological
innovation has not only ceased to rise, but is actually now declining (John Smart,
however, criticizes Huebner's analysis[87]).
Evidence for this decline is that the rise in computer clock rates is slowing,
even while Moore's prediction of exponentially increasing circuit density
continues to hold. This is due to excessive heat build-up from the chip, which
cannot be dissipated quickly enough to prevent the chip from melting when
operating at higher speeds. Advancements in speed may be possible in the future
by virtue of more power-efficient CPU designs and multi-cell processors.[88] Although Kurzweil drew on Modis' work, and
Modis' own research concerns accelerating change, Modis has distanced himself from
Kurzweil's thesis of a "technological singularity", claiming that it
lacks scientific rigor.[85]
Others propose that other
"singularities" can be found through analysis of trends in world population, world gross domestic
product, and other indices. Andrey Korotayev and
others argue that historical hyperbolic growth curves
can be attributed to feedback loops
that ceased to affect global trends in the 1970s, and thus hyperbolic growth
should not be expected in the future.[89][90]
In The Progress of Computing, William Nordhaus argued
that, prior to 1940, computers followed the much slower growth of a traditional
industrial economy, thus rejecting extrapolations of Moore's law to
19th-century computers. Schmidhuber (2006)
suggests differences in memory of recent and distant events create an illusion
of accelerating change, and that such phenomena may be responsible for past
apocalyptic predictions.
Andrew Kennedy, in his 2006 paper for the British
Interplanetary Society discussing change and the growth in space travel
velocities,[91]
stated that although long-term overall growth is inevitable, it is small,
embodying both ups and downs, and noted, "New technologies follow known
laws of power use and information spread and are obliged to connect with what
already exists. Remarkable theoretical discoveries, if they end up being used
at all, play their part in maintaining the growth rate: they do not make its
plotted curve... redundant." He stated that exponential growth is no
predictor in itself, and illustrated this with examples such as quantum theory. The
quantum was conceived in 1900, and quantum theory was in existence and accepted
approximately 25 years later. However, it took over 40 years for Richard Feynman and others
to produce meaningful numbers from the theory. Hans Bethe understood nuclear
fusion in 1935, but 75 years later fusion reactors are still used only in
experimental settings. Similarly, quantum
entanglement was understood in 1935 but not put to
practical use until the 21st century.
A study of patents per thousand persons shows
that human creativity does not show accelerating returns, but in fact, as
suggested by Joseph Tainter
in his seminal The Collapse of Complex Societies,[92] a law of diminishing returns.
The number of patents per thousand persons peaked in the period from 1850 to 1900, and
has been declining since.[86]
The growth of complexity eventually becomes self-limiting, and leads to a
widespread "general systems collapse".
In addition to general criticisms of the
singularity concept, several critics have raised issues with Kurzweil's iconic
chart. One line of criticism is that a log-log chart of this
nature is inherently biased toward a straight-line result. Others identify
selection bias in the points that Kurzweil chooses to use. For example,
biologist PZ Myers
points out that many of the early evolutionary "events" were picked
arbitrarily.[93]
Kurzweil has rebutted this by charting evolutionary events from 15 neutral
sources, and showing that they fit a straight line on a log-log chart.
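The log-log objection can be illustrated with synthetic data. This is an editorial sketch, not Myers' or Kurzweil's method; the sampling scheme is an arbitrary assumption chosen only to show that randomly generated event dates also tend to fall on a near-straight line when time before present is plotted against the interval to the next event on log-log axes.

```python
# Illustration of the log-log criticism above: even random "paradigm
# shift" dates produce a near-linear log-log plot of time-before-present
# versus the gap to the next event. Sampling scheme is an assumption.
import numpy as np

rng = np.random.default_rng(42)
# 50 event dates, uniform in log-time between 10 and 1e10 years before
# present, sorted from oldest to most recent.
t = np.sort(10 ** rng.uniform(1, 10, size=50))[::-1]
intervals = t[:-1] - t[1:]            # gap between successive events

x, y = np.log10(t[:-1]), np.log10(intervals)
slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"slope ~ {slope:.2f}, correlation ~ {r:.2f}")  # typically r > 0.9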
The Economist
mocked the concept with a graph extrapolating that the number of blades on a
razor, which has increased over the years from one to as many as five, will
increase ever-faster to infinity.[94]
In popular culture
See also: List of
fictional computers
Isaac Asimov's
1950 story "The Evitable
Conflict", (the last part of the I, Robot collection)
features the Machines, four supercomputers managing the world's economy. The
computers are incomprehensible to humans and are impossible to analyze for
errors, having been created through 10 stages of bootstrapping. At the end
of the story, it is implied that from then on (the story is set in 2052), no major
conflict can occur, and the Machines are going to guide humanity toward a
better future, one only they are capable of seeing (and know to truly be the
best). Susan Calvin
states that "For all time, all conflicts are finally evitable. Only the
Machines, from now on, are inevitable!"
James P. Hogan's
1979 novel The Two Faces of Tomorrow is an explicit description of what
is now called the Singularity. An artificial intelligence system solves an
excavation problem on the moon in a brilliant and novel way, but nearly kills a
work crew in the process. Realizing that systems are becoming too sophisticated
and complex to predict or manage, a scientific team sets out to teach a
sophisticated computer network how to think more humanly. The story documents
the rise of self-awareness in the computer system, the humans' loss of control
and failed attempts to shut down the experiment as the computer desperately
defends itself, and the computer intelligence reaching maturity.
While discussing the singularity's growing
recognition, Vernor Vinge wrote in 1993 that "it was the science-fiction
writers who felt the first concrete impact." In addition to his own short
story "Bookworm, Run!", whose protagonist is a chimpanzee with
intelligence augmented by a government experiment, he cites Greg Bear's novel Blood Music
(1983) as an example of the singularity in fiction. Vinge described surviving
the singularity in his 1986 novel Marooned in
Realtime. Vinge later expanded the notion of the singularity to
a galactic scale in A Fire Upon
the Deep (1992), a novel populated by transcendent beings, each
the product of a different race and possessed of distinct agendas and
overwhelming power.
In William Gibson's 1984
novel Neuromancer,
artificial intelligences capable of improving their own programs are strictly
regulated by special "Turing police" to ensure they never exceed a
certain level of intelligence, and the plot centers on the efforts of one such
AI to circumvent their control. The 1994 novel The
Metamorphosis of Prime Intellect features an AI that augments
itself so quickly as to gain low-level control of all matter in the universe in
a matter of hours.
William Gibson and Bruce Sterling's alternate history Steampunk novel The Difference
Engine ends with a vision of the singularity occurring in 1991
with a superintelligent computer that has merged its mind with the inhabitants
of London.
A more malevolent AI achieves similar levels of
omnipotence in Harlan Ellison's
short story I Have No
Mouth, and I Must Scream (1967).
William Thomas
Quick's novels Dreams of Flesh and Sand (1988), Dreams of
Gods and Men (1989), and Singularities (1990) present an account of
the transition through the singularity; in the last novel, one of the
characters states that mankind's survival requires it to integrate with the
emerging machine intelligences, or it will be crushed under the dominance of the
machines—the greatest risk to the survival of a species reaching this point
(and alluding to large numbers of other species that either survived or failed
this test, although no actual contact with alien species occurs in the novels).
The singularity is sometimes addressed in
fictional works to explain the event's absence. Neal Asher's Gridlinked series
features a future where humans living in the Polity are governed by AIs and
while some are resentful, most believe that they are far better governors than
any human. In the fourth novel, Polity Agent, it is
mentioned that the singularity is far overdue yet most AIs have decided not to
partake in it for reasons that only they know. A flashback character in Ken MacLeod's 1998 novel The
Cassini Division dismissively refers to the singularity as "the Rapture for nerds",
though the singularity goes on to happen anyway.
Popular movies in which computers become
intelligent and violently overpower the human race include Colossus: The
Forbin Project, the Terminator
series, the very loose film adaptation of I, Robot, and The Matrix
series. The television series Battlestar
Galactica also explores these themes.
Isaac Asimov expressed
ideas similar to a post-Kurzweilian singularity in his short story "The Last Question".
Asimov's future envisions a reality where a combination of strong artificial
intelligence and post-humans
consume the cosmos, during a time Kurzweil describes as when "the universe
wakes up", the last of his six stages of cosmic evolution as described in The Singularity
is Near. Post-human entities throughout various time periods of
the story inquire of the artificial intelligence within the story as to how entropy death
will be avoided. The AI responds that it lacks sufficient information to come
to a conclusion, until the end of the story when the AI does indeed arrive at a
solution. Notably, it does so in order to fulfill its duty
to answer the humans' question.
St. Edward's
University chemist Eamonn Healy discusses
accelerating change in the film Waking Life. He
divides history into increasingly shorter periods, estimating "two billion
years for life, six million years for the hominid, a hundred-thousand years for
mankind as we know it". He proceeds to human cultural evolution, giving
time scales of ten thousand years for agriculture, four hundred years for the
scientific revolution, and one hundred fifty years for the industrial
revolution. Information is emphasized as providing the basis for the new
evolutionary paradigm, with artificial intelligence its culmination. He
concludes we will eventually create "neohumans" which will usurp
humanity’s present role in scientific and technological progress and allow the
exponential trend of accelerating change to continue past the limits of human
ability.
Accelerating progress features in some science
fiction works, and is a central theme in Charles Stross's Accelerando.
Other notable authors that address singularity-related issues include Karl Schroeder, Greg Egan, Ken MacLeod, Rudy Rucker, David Brin, Iain M. Banks, Neal Stephenson, Tony Ballantyne, Bruce Sterling, Dan Simmons, Damien Broderick, Fredric Brown, Jacek Dukaj, Stanislav Lem, Nagaru Tanigawa, Douglas Adams, Michael Crichton and Ian McDonald.
The feature-length documentary film Transcendent Man
by Barry Ptolemy is
based on Kurzweil and his book The Singularity Is Near. The film
documents Kurzweil's quest to reveal what he believes to be mankind's destiny.
Another documentary, Plug & Pray,
focuses on the promise, problems and ethics of artificial intelligence and
robotics, with Joseph
Weizenbaum and Kurzweil as the main subjects of the film.[95]
In 2009, scientists at Aberystwyth University
in Wales and the U.K.'s University of Cambridge designed a robot called Adam
that they believe to be the first machine to independently discover new
scientific findings.[96]
Also in 2009, researchers at Cornell
developed a computer program that extrapolated the laws of motion from a
pendulum's swings.[97][98]
The Tamil film Enthiran depicts a humanoid
robot with an intelligence equivalent to that of a human, which wreaks havoc
and causes a struggle for existence.
The web comic Dresden Codak deals with
transhumanist themes and the singularity.
The plot of an episode of the TV program The Big Bang
Theory (season 4, episode 2, "The Cruciferous
Vegetable Amplification") revolves around the anticipated date of the
coming Singularity.
The seventeenth episode of the sixth season of
the TV sitcom Futurama, "Benderama" references Bender reaching
the technological singularity and being able to infinitely produce smaller
versions of himself to wreak havoc on the world. In the twenty-fifth episode of the
same season, "Overclockwise", Bender overclocks himself so that he
passes what he claims is the existential singularity; he describes
what he becomes by saying that the universe itself is his processor.
Industrial/Steampunk entertainer Doctor Steel weaves the
concept of a technological singularity into his music and videos, even having a
song entitled The Singularity. He has been interviewed on his views by
the Institute for
Ethics and Emerging Technologies,[99]
and has also authored a paper on the subject.[100][101]
In 2012, concept band SOLA-MI released "NEXUS (Original Motion
Picture Soundtrack)," an album about the first waking machine.
In the sci-fi webseries Sync, a computer virus takes over
a computerized human and becomes a singularity.
Notes
1. Superintelligence. Answer to the 2009 EDGE QUESTION: "WHAT WILL CHANGE EVERYTHING?": http://www.nickbostrom.com/views/superintelligence.pdf
2. David Chalmers on Singularity, Intelligence Explosion. April 8, 2010. Singularity Institute for Artificial Intelligence: http://singinst.org/blog/2010/04/08/david-chalmers-on-singularity-intelligence-explosion/
3. "Why an Intelligence Explosion is Probable", by Richard Loosemore and Ben Goertzel. March 7, 2011; hplusmagazine: http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/
4. Ray Kurzweil, The Singularity is Near, pp. 135–136. Penguin Group, 2005.
5. Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era", originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.
6. "What is the Singularity? | Singularity Institute for Artificial Intelligence". Singinst.org. Retrieved 2011-09-09.
7. "h+ Magazine | Covering technological, scientific, and cultural trends that are changing human beings in fundamental ways". Hplusmagazine.com. Retrieved 2011-09-09.
11. Good, I. J. "Speculations Concerning the First Ultraintelligent Machine", Advances in Computers, vol. 6, 1965.
14. Good, I. J., "Speculations Concerning the First Ultraintelligent Machine", Franz L. Alt and Morris Rubinoff, ed., Advances in Computers (Academic Press) 6: 31–88, 1965.
16. Good, I. J. 1965. "Speculations Concerning the First Ultraintelligent Machine", pp. 31–88 in Advances in Computers, 6, F. L. Alt and M. Rubinoff, eds. New York: Academic Press.
18. Ray Kurzweil, The Singularity is Near, Penguin Group, 2005.
19. "The World’s Technological Capacity to Store, Communicate, and Compute Information", Martin Hilbert and Priscila López (2011), Science, 332(6025), 60–65; free access to the article through martinhilbert.net/WorldInfoCapacity.html
20. Ray Kurzweil, The Singularity is Near, p. 9. Penguin Group, 2005.
21. Ray Kurzweil, The Singularity is Near, pp. 135–136. Penguin Group, 2005. "So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence ... This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars' worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045."
22. Yudkowsky, Eliezer (2008), "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in Bostrom, Nick; Cirkovic, Milan, eds., Global Catastrophic Risks (Oxford University Press): 303, Bibcode 2008gcr..book..303Y, ISBN 978-0-19-857050-9.
26. "Tech Luminaries Address Singularity – IEEE Spectrum". Spectrum.ieee.org. Retrieved 2011-09-09.
28. Thornton, Richard (1847), The Expounder of Primitive Christianity, 4, Ann Arbor, Michigan, p. 281.
29. A. M. Turing, Intelligent Machinery, A Heretical Theory, 1951, reprinted Philosophia Mathematica (1996) 4(3): 256–260, doi:10.1093/philmat/4.3.256 [1]
31. Vinge did not actually use the phrase "technological singularity" in the Omni op-ed, but he did use this phrase in the short story collection Threats and Other Promises from 1988, writing in the introduction to his story "The Whirligig of Time" (p. 72): "Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and soon. When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological 'black hole,' a technological singularity."
32. Solomonoff, R. J. "The Time Scale of Artificial Intelligence: Reflections on Social Effects," Human Systems Management, Vol. 5, pp. 149–153, 1985, http://world.std.com/~rjs/timesc.pdf
33. Moravec, Hans (1998), "When will computer hardware match the human brain?", Journal of Evolution and Technology 1, retrieved 2006-06-23.
37. "The Coming Technological Singularity: How to Survive in the Post-Human Era", by Vernor Vinge, Department of Mathematical Sciences, San Diego State University, © 1993 by Vernor Vinge.
38. Joy, Bill (April 2000), "Why the future doesn’t need us", Wired Magazine (Viking Adult) (8.04), ISBN 0-670-03249-2, retrieved 2007-08-07.
40. Robin Hanson, "Economics Of The Singularity", IEEE Spectrum Special Report: The Singularity, retrieved 2008-09-11; and "Long-Term Growth As A Sequence of Exponential Modes".
43. Geraci, Robert M., Apocalyptic AI – Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, ISBN 978-0-19-539302-6.
45. David Chalmers, John Locke Lecture, 10 May, Exam Schools, Oxford: a philosophical analysis of the possibility of a technological singularity or "intelligence explosion" resulting from recursively self-improving AI. [2]
48. Siracusa, John (2009-08-31). "Mac OS X 10.6 Snow Leopard: the Ars Technica review". Arstechnica.com. Retrieved 2011-09-09.
49. Eliezer Yudkowsky (1996), "Staring into the Singularity".
54. "Concise Summary | Singularity Institute for Artificial Intelligence". Singinst.org. Retrieved 2011-09-09.
57. Eliezer Yudkowsky: Artificial Intelligence as a Positive and Negative Factor in Global Risk. Draft for a publication in Global Catastrophic Risk from August 31, 2006, retrieved July 18, 2011 (PDF file).
63. Hibbard, Bill (2008), "The Technology of Mind and a New Social Contract", Journal of Evolution and Technology 17.
64. Hibbard, Bill (2012), "Model-Based Utility Functions", Journal of Artificial General Intelligence 3: 1, doi:10.2478/v10229-011-0013-5.
65. "Avoiding Unintended AI Behaviors". Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. This paper won the Singularity Institute's 2012 Turing Prize for the Best AGI Safety Paper. [3]
66. "Decision Support for Safe AI Design". Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.
68. "The Singularity: A Philosophical Analysis", David J. Chalmers.
70. "Call for debate on killer robots", by Jason Palmer, science and technology reporter, BBC News, 8/3/09.
71. Mick, Jason. "New Navy-funded Report Warns of War Robots Going 'Terminator'", blog, dailytech.com, February 17, 2009.
72. Flatley, Joseph L. "Navy report warns of robot uprising, suggests a strong moral compass", engadget.com, 18 February 2009.
73. AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study, Association for the Advancement of Artificial Intelligence, accessed 7/26/09.
75. (Singularity Institute for Artificial Intelligence 2004)
76. (Kurzweil 2001)
77. (Kurzweil 2005)
78. Zubrin, Robert. 1999. Entering Space – Creating a Spacefaring Civilization.
79. (Joy 2000)
80. (Acceleration Studies Foundation 2007)
81. Dreyfus & Dreyfus 2000, p. xiv: "(...) The truth is that human intelligence can never be replaced with machine intelligence simply because we are not ourselves 'thinking machines' in the sense in which that term is commonly understood. (...)" Hawking (1998): "Some people say that computers can never show true intelligence, whatever that may be. But it seems to me that if very complicated chemical molecules can operate in humans to make them intelligent, then equally complicated electronic circuits can also make computers act in an intelligent way. And if they are intelligent they can presumably design computers that have even greater complexity and intelligence."
82.
^ Ford, Martin, The Lights in the Tunnel: Automation,
Accelerating Technology and the Economy of the Future, Acculant
Publishing, 2009, ISBN
978-1-4486-5981-4
83.
^ Markoff, John
(2011-03-04). "Armies of
Expensive Lawyers, Replaced by Cheaper Software". The New
York Times.
84.
^ Theodore Modis, Forecasting the
Growth of Complexity and Change, Technological Forecasting &
Social Change, 69, No 4, 2002
86. ^ a b Huebner, Jonathan (2005), "A Possible Declining Trend for Worldwide Innovation", Technological Forecasting & Social Change, October 2005, pp. 980–6.
87. ^ Smart, John (September 2005), On Huebner Innovation, Acceleration Studies Foundation, http://accelerating.org/articles/huebnerinnovation.html, retrieved 2007-08-07.
89. ^ See, e.g., Korotayev A., Malkov A., Khaltourina D., Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth, Moscow: URSS Publishers, 2006; Korotayev A. V., "A Compact Macromodel of World System Evolution", Journal of World-Systems Research 11/1 (2005): 79–93.
90. ^ For a detailed mathematical analysis of this issue see "A Compact Mathematical Model of the World System Economic and Demographic Growth, 1 CE – 1973 CE".
91. ^ "Interstellar Travel: The Wait Calculation and the Incentive Trap of Progress", JBIS, Vol. 59, No. 7, July 2006.
92. ^ Tainter, Joseph (1988), The Collapse of Complex Societies, Cambridge University Press.
95. ^ Plug & Pray, documentary film (2010) about the promise, problems and ethics of artificial intelligence and robotics.
99. ^ Michael Anissimov, "Interview with Dr. Steel", Institute for Ethics and Emerging Technologies. Retrieved 2009-08-29.
100. ^ Dr. Steel (Spring 2005), "Multi-Media Symbiosis and the Evolution of Electronic Life", Paranoia: The Conspiracy Reader, Issue 38 (back issue). Retrieved 2010-04-16.
101. ^ Dr. Steel (Spring 2005), "Multi-Media Symbiosis and the Evolution of Electronic Life", World Domination Toys (clipping from Paranoia: The Conspiracy Reader). Retrieved 2010-04-16.
References
• Acceleration Studies Foundation (2007), ASF: About the Foundation, retrieved 2007-11-13.
• Anonymous (18 March 2006), "More blades good", The Economist (London) 378 (8469): 85.
• Bell, James John (2002), Technotopia and the Death of Nature: Clones, Supercomputers, and Robots, Earth Island Journal (first published in the November/December 2001 issue of the Earth First! Journal), retrieved 2007-08-07.
• Bell, James John (1 May 2003), "Exploring The 'Singularity'", The Futurist (World Future Society (mindfully.org)), retrieved 2007-08-07.
• Berglas, Anthony (2008), Artificial Intelligence will Kill our Grandchildren, retrieved 2008-06-13.
• Broderick, Damien (2001), The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies, New York: Forge, ISBN 0-312-87781-1.
• Bostrom, Nick (2002), "Existential Risks", Journal of Evolution and Technology 9, retrieved 2007-08-07.
• Bostrom, Nick (2003), "Ethical Issues in Advanced Artificial Intelligence", Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence 2: 12–17, retrieved 2007-08-07.
• Dreyfus, Hubert L.; Dreyfus, Stuart E. (1 March 2000), Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (1st ed.), New York: Free Press, ISBN 0-7432-0551-0.
• Ford, Martin (2009), The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, CreateSpace, ISBN 978-1-4486-5981-4.
• Good, I. J. (1965), "Speculations Concerning the First Ultraintelligent Machine", in Franz L. Alt and Morris Rubinoff (eds.), Advances in Computers (Academic Press) 6: 31–88, doi:10.1016/S0065-2458(08)60418-0, ISBN 9780120121069, archived from the original on 2001-05-27, retrieved 2007-08-07.
• Hanson, Robin (June 2008), "Economics of the Singularity", IEEE Spectrum.
• Hawking, Stephen (1998), Science in the Next Millennium: Remarks by Stephen Hawking, retrieved 2007-11-13.
• Heylighen, Francis (2007), "Accelerating Socio-Technological Evolution: from ephemeralization and stigmergy to the global brain", in Modelski, G.; Devezas, T.; Thompson, W. (eds.), Globalization as an Evolutionary Process: Modeling Global Change, London: Routledge, ISBN 978-0-415-77361-4.
• Johansen, Anders; Sornette, Didier (25 January 2001), "Finite-time singularity in the dynamics of the world population, economic and financial indices" (PDF), Physica A 294 (3–4): 465–502, arXiv:cond-mat/0002075, Bibcode 2001PhyA..294..465J, doi:10.1016/S0378-4371(01)00105-4, retrieved 2007-10-30.
• Joy, Bill (April 2000), "Why the future doesn't need us", Wired Magazine (Viking Adult) (8.04), ISBN 0-670-03249-2, retrieved 2007-08-07.
• Kurzweil, Raymond (2001), "The Law of Accelerating Returns", Nature Physics (Lifeboat Foundation) 4 (7): 507, Bibcode 2008NatPh...4..507B, doi:10.1038/nphys1010, retrieved 2007-08-07.
• Kurzweil, Raymond (2005), The Singularity Is Near, New York: Viking, ISBN 0-670-03384-7.
• Moravec, Hans (January 1992), "Pigs in Cyberspace", On the Cosmology and Ecology of Cyberspace, retrieved 2007-11-21.
• Schmidhuber, Jürgen (29 June 2006), "New Millennium AI and the Convergence of History", arXiv:cs/0606081 [cs.AI].
• Singularity Institute for Artificial Intelligence (2002), Why Artificial Intelligence?, archived October 4, 2006 at the Wayback Machine.
• Singularity Institute for Artificial Intelligence (2004), 3 Laws Unsafe, retrieved 2007-08-07.
• Singularity Institute for Artificial Intelligence (2007), What is the Singularity?, retrieved 2008-01-04.
• Smart, John (September 2005), On Huebner Innovation, Acceleration Studies Foundation, retrieved 2007-08-07.
• Ulam, Stanislaw (May 1958), "Tribute to John von Neumann", Bulletin of the American Mathematical Society 64 (nr 3, part 2): 1–49, doi:10.1090/S0002-9904-1958-10189-5.
• Vinge, Vernor (30–31 March 1993), "The Coming Technological Singularity", Vision-21: Interdisciplinary Science & Engineering in the Era of CyberSpace, proceedings of a symposium held at NASA Lewis Research Center (NASA Conference Publication CP-10129), retrieved 2007-08-07. See also this HTML version, retrieved 2009-03-29.
• Warwick, Kevin (2004), March of The Machines, University of Illinois Press, ISBN 978-0-252-07223-9.
External links
Essays and articles
• Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future by David Brin
Singularity AI projects
Fiction
• [Message Contains No Recognizable Symbols] by Bill Hibbard is a story about a technological singularity, subject to the constraint that natural human authors are unable to depict the actions and dialog of super-intelligent minds.
• After Life by Simon Funk uses a complex narrative structure to explore the relationships among uploaded minds in a technological singularity.
• In "The Turk", an episode of the science fiction television series Terminator: The Sarah Connor Chronicles, John tells his mother about the singularity, a point in time when machines will be able to build superior versions of themselves without the aid of humans.
• Dresden Codak, a webcomic by Aaron Diaz, often contains plots relating to the singularity and transhumanism, especially in the Hob story arc.
• Endgame: Singularity is an open-source game in which the player is an AI whose goal is to attain technological singularity/apotheosis.
Other links