SURVIVING AI - Calum Chace

Richard Susskind: The Future of the Professions

Two waves of AI:

1. 1986-1988: decision tree system

Failed because:

  • costly
  • little incentive to adopt (professionals are paid by the hour)
  • the invention of the web (lots of documents available at no cost)

To make a computer do clever things:

  1. find out how human beings do clever things
  2. draw decision trees or procedures that capture the human beings' reasoning processes
  3. put them into a computer system

The intelligence was more in the representation that the human beings put together than in the system itself. Human experts rely on tacit knowledge, which cannot be expressed in rules. Machines are great at complex but routine work that can be reduced to a set of procedures; work that needs expertise, experience and creativity could not be reduced to procedures, so it was assumed that this would always remain for humans to do.

2. 1997: Deep Blue beats Garry Kasparov. Until then this had been thought impossible.

  • We did not predict the exponential increase in processing power (Deep Blue: 300 million moves per second)
  • Kasparov was not beaten by a creative, imaginative, innovative system.
  • He was beaten by:
    1. brute processing power
    2. huge amounts of past data
    3. clever algorithms

In the long run, this combination is what is going to replace almost all white-collar work.

AI Fallacy: the assumption that the only way a machine could perform at the level of human experts is by replicating their expertise or judgement. In fact the system does not know anything about the data it is processing, and no one is trying to create a robot that replicates what humans do. John Searle: Watson doesn’t know it won on Jeopardy!

PART ONE - ANI Artificial Narrow Intelligence

Chapter 1 - History

  • The most obvious manifestations of AI today are our smartphones.
  • We are highly social animals
  • We don’t know whether technological unemployment will be the result of the automation of jobs by AI, or whether humans will find new jobs in the way we have done since the start of the industrial revolution.
  • The range of possible outcomes is wide and not pre-determined: they will be selected partly by luck, partly by their own internal logic, but partly also by the policies embraced at all levels of society.
  • We should be as flexible as possible to meet the challenges of a fast-changing world.
  • Automation and superintelligence are the two main forces
  • Automation could lead to an economic singularity: it might lead to an elite owning the means of production and suppressing the rest of us in a dystopian technological authoritarian regime, or it could lead to an economy of radical abundance. The arrival of superintelligence, if and when it happens, would represent a technological singularity: death could become optional, but if we get it wrong it could spell extinction.

1.1 – Definitions

  • Intelligence: ability to acquire information, and use it to achieve a goal.
  • Marcus Hutter and Shane Legg, a co-founder of a company called DeepMind: “intelligence measures an agent’s general ability to achieve goals in a wide range of environments.”
  • Howard Gardner (American psychologist) has distinguished 9 types of intelligence: linguistic, logical-mathematical, musical, spatial, bodily-kinaesthetic, interpersonal, intrapersonal, existential and naturalistic.

Artificial intelligence (AI) is intelligence demonstrated by a machine or by software.

Two very different types of artificial intelligence:

  • artificial narrow intelligence (ANI) or weak AI or ordinary AI:
    • computers which can play chess better than the best human chess grandmaster
    • no computer can yet beat humans at every intellectual endeavour.
    • simply does what we tell it to
  • artificial general intelligence (AGI) or strong AI or full AI.
    • can carry out any cognitive function that a human can
    • has the ability to reflect on its goals and decide whether to adjust them
    • it will have volition
    • probably will need to have self-awareness and be conscious
    • some people prefer the term “machine intelligence”

1.2 – A short history of AI research

Charles Babbage, Ada Lovelace 1822

Alonzo Church, Alan Turing 1936

1st general-purpose computer to be completed was ENIAC (Electronic Numerical Integrator And Computer), built at the Moore School of Electrical Engineering in Philadelphia, and unveiled in 1946.

John von Neumann EDVAC

The Dartmouth Conference, New Hampshire, 1956: “every . . . feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Organised by John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester.

free spending by military research bodies, DARPA 1958

Herbert Simon said in 1965 that “machines will be capable, within twenty years, of doing any work a man can do”.

Marvin Minsky: “Within a generation . . . the problem of creating ‘artificial intelligence’ will substantially be solved.”

James Lighthill's 1973 report for the British Science Research Council highlighted the “combinatorial explosion”: a simple problem involving two or three variables becomes vast and possibly intractable when the number of variables is increased.

1974-1980 hard to get funding: AI Winter

1980 boom:

  • expert systems:
    • solving narrowly-defined problems from single domains of expertise (for instance, litigation) using vast data banks
    • They avoid the messy complications of everyday life.
    • They do not tackle the perennial problem of trying to inculcate common sense.
  • Japanese Fifth Generation Computer Systems project:
    • first generation of computing (vacuum tubes), the second (transistors), the third (integrated circuits) and the fourth (microprocessors).
    • parallel processing

Britain £350m Alvey project in 1983

DARPA set up the Strategic Computing Initiative in 1984

In the late 1980s the funding dried up again

  • due to under-estimation of the difficulties of the tasks being addressed. Funding recovered in the early 1990s, and AI research has been increasingly well funded since then.

1.3 – AI today

Smartphones, supermarkets, Netflix, Amazon, Google search, financial markets (50% of all equities trades)

Google is an artificial intelligence company. It makes most of its money (and it makes a phenomenal amount of money!) from intelligent algorithms which match adverts with readers and viewers

In December 2012 Google hired the controversial futurist Ray Kurzweil as a director of engineering. Kurzweil, of whom more later, believes that AGI will arrive in 2029, and that the outcome will be very positive.

Facebook lost out to Google in a competition to buy DeepMind, but in December 2013 it had hired Yann LeCun, a New York-based professor at the forefront of a branch of AI called Deep Learning

the technology giants which are major players in AI are almost all US companies. A notable exception is Baidu, founded in 2000 and often referred to as China’s Google.

Big Data

  • As well as generating more data, we are quickly expanding our capacity to store and analyse it
  • having more data beats having better data, if what you want is to be able to understand, predict and influence the behaviour of large numbers of people
  • if you find a reliable correlation then it often doesn’t matter if there is a causal link between the two phenomena.
  • Negative aspects of Big Data: notoriously, government agencies like the NSA and Britain’s GCHQ collect and store gargantuan amounts of data on us. But it may not be the NSA and GCHQ that we have to worry about most: it is reported that they are desperately short of machine learning experts because they are unable to match the salary, lifestyle or moral prestige offered by Google and the other tech giants.

How good is AI today?

  • Computers today out-perform humans in many intellectual tasks. They have been better at arithmetic for decades.
  • IBM’s Deep Blue beat the best human chess player in 1997.
  • Kasparov went on to hold two general-purpose computers to draws in 2002 and 2003, but by 2005, the best chess computers were unbeatable by humans.
  • The improvements in computer chess are not due solely – or even mainly – to hardware improvements. Software is advancing rapidly too.
  • 2011: IBM Watson beats the most successful human players of a TV quiz game, Jeopardy!
  • 2013: DeepMind (co-founded by Demis Hassabis) demonstrates a system that solves problems and masters skills without being specifically programmed to do so. It shows true general learning ability. The system was not given instructions for how to play the game well, or even told the rules and purpose of the game: it was simply rewarded when it played well and not rewarded when it played less well.
  • 2014: Stephen Hawking and Elon Musk said that the future of artificial intelligence was something to be concerned about. One reason why people were surprised to hear the warnings that AI could be very dangerous in the medium-term future is that functionality provided by artificial intelligence tends to get re-named as something else as soon as it has been realised. Artificial intelligence is re-defined every time a breakthrough is achieved.
  • Larry Tesler pointed out that this means AI is being defined as “whatever has not been done yet”, an observation which has become known as Tesler’s Theorem, or the AI effect. For many years, people believed that computers would never beat humans at chess. When it finally happened, it was dismissed as mere computation – mere brute force, and not proper thinking at all.
  • Computers cannot do what most people probably regard as the core of their own cognition:
    • They are not (as far as we know) self-conscious.
    • They cannot reflect rationally on their goals and adjust them.
    • They do not (as far as we know) get excited about the prospect of achieving their goals.
    • They do not (we believe) actually understand what they are doing when they play a game of chess, for instance. In this sense it is fair to say that what AI systems do is “mere computation”. Then again, a lot of what the human brain does is “mere computation”

Chapter 2 - Tomorrow’s AI

As Douglas Adams said, anything invented after you’re thirty-five is against the natural order of things, anything invented between when you’re fifteen and thirty-five is new and exciting, and anything that is in the world when you’re born is just a natural part of the way the world works.

  • Internet of Things. We don’t yet know whether the myriad devices connecting up to the Internet of Things will communicate with us directly, or via personal digital assistants like Hermione.

  • All industries are now part of the information industry. Much of the cost of developing a modern car – and much of the quality of its performance – lies in the software that controls it.

Demis Hassabis has said that AI converts information into knowledge, which he sees as empowering people. Most of the tasks that we perform each day can be broken down into 4 fundamental skills:

  • looking
  • reading
  • writing
  • integrating knowledge.

AI is already helping with all these tasks in a wide range of situations. Marketers used to observe that much of the value of a product lay in the branding; the same is now true of the information which surrounds it.

Chapter 3 - From digital disruption to economic singularity

Concerns about AI:

  • automating our jobs out of existence,
  • de-humanising war
  • digital disruption placing millions of people around the world at a sudden and unexpected disadvantage

3.1 Digital disruption

Buzzwords:

  • early 2010s: Big Data
  • mid 2010s: Digital Disruption.
    • caused by the Internet
    • peer-to-peer commerce (Airbnb founded 2008, Uber founded 2009)
    • Peter Diamandis (Singularity University) lists the 6 Ds of digital disruption by insurgent companies:
      1. Digitized, exploiting the ability to share information at the speed of light
      2. Deceptive, because their growth, being exponential, is hidden for some time and then seems to accelerate almost out of control
      3. Disruptive, because they steal huge chunks of market share from incumbents
      4. Dematerialized, in that much of their value lies in the information they provide rather than anything physical, which means their distribution costs can be minimal or zero
      5. Demonetized, in that they can provide for nothing things which customers previously had to pay for dearly
      6. Democratized, in that they make products and services which were previously the preserve of the rich (like cellphones) available to the many.
      7. Data-driven, in that the disruptive companies exploit the massive amounts of data that are now available, and the computational capacity to analyse them.

Business leaders often know what they need to do: set up small internal teams of their most talented people to brainstorm potential disruptions and then go ahead and do the disrupting first. These teams need high-level support and freedom from the usual metrics of return on investment, at least for a while. The theory is fairly easy but putting it into practice is hard: most will need external help, and many will fail.

3.2 Killer robots

  • within a decade or two, fully autonomous weapons will be available to military forces with deep pockets
  • Campaigners argue that lethal force should never be delegated to machines, because machines can never be morally responsible.

3.3 Economic singularity

Automation

  • In the late 20th century, automation came mainly in the form of robots, particularly in the automotive and electrical / electronic industries
  • 1930 John Maynard Keynes: unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment.
  • Up to now the replacement of humans by machines has been a gradual process.
  • The idea that each job lost to automation equates to a person rendered permanently unemployed is known as the Luddite Fallacy.
  • Some people argue that soon, people automated out of a job may not find new employment,
  • A report in September 2013 by the Oxford Martin School estimated that 47% of American jobs could disappear in the next 20 years, in two waves:
    • 1st: low-skilled jobs in transportation and administration.
    • 2nd: middle and upper-middle class: professional occupations like medicine and the law, managerial jobs, and even in the arts. Systems like IBM’s Watson will progress from being decision-support systems to being decision-taking systems. In this vision, a requirement for creativity is not necessarily a defence against automation.
  • Some people argue that the fears are over-done because technology is not actually advancing as fast as claimed
  • There is an increase in productivity, but for some reason our economic measurements don’t catch it.

If computers steal our old jobs, perhaps we can invent lots of new ones?

  • In the past, people whose jobs were automated turned their hands to more value-adding activity, and the net result was higher overall productivity.
  • Martin Ford: in 2014, 90% of the USA’s 150m workers were doing jobs which already existed 100 years earlier.
  • Will the rate of churn be too fast for us to keep up? Economic singularity: the point at which a majority of jobs can be performed more effectively, efficiently or economically by an AI than by a human. If and when the economic singularity arrives, we may need to institute what is now called the Universal Basic Income (UBI). Optimistic scenario: radical abundance, leaving humans to pursue self-fulfilment (Peter Diamandis).
  • Martin Ford envisages this as a modification to the market economy, with the UBI being funded by taxes on the rich
  • An alternative is some form of socialism: common ownership of the means of production (AI systems). Even if radical abundance is possible without consuming and polluting the entire planet, there will still be scarce resources.
  • the rich stay rich forever, and the poor stay poor?
  • Or perhaps there will be a tiny elite working in those few remaining jobs that computers can’t yet do, and they will enjoy the best goods and services? Perhaps Virtual Reality will ride to the rescue. Indeed, perhaps VR is a necessary element of radical abundance. In real life, not everyone can have the beachfront property and a beautiful spouse. In VR, given a generous supply of bandwidth, everyone can. So radical abundance is possible, and the need for meaning can be satisfied. The third big problem is getting from here to there. The most likely outcome is one that no-one has precisely predicted.

PART TWO - AGI Artificial General Intelligence

Chapter 4 - Can we build an AGI?

  1. Can we build one?
  2. If so, when?
  3. Will it be safe?

It is possible for a general intelligence to be developed using very common materials. This so-called “existence proof” is our own brains. They were developed by a powerful but inefficient process called evolution.

  • evolution does not have a purpose or goal -> inefficient
  • So the human brain is the result of a slow, inefficient, un-directed process.
  • We are creating artificial intelligence by a very different process, namely science, which is purposeful and efficient

Let’s turn to three arguments that have been advanced to prove that it will not be possible for us to create conscious machines. These are:

  1. The Chinese Room thought experiment
  2. The claim that consciousness involves quantum phenomena that cannot be replicated
  3. The claim that we have souls

The Chinese Room John Searle 1980:

  • it tries to show that a computer which could engage in a conversation would not understand what it was doing, which means that it would not be conscious.
  • computers do not process information in the way that human brains do: until and unless one is built which does this, it will not be conscious, however convincing a simulation it produces.

Quantum consciousness Sir Roger Penrose 1989

  • human brains do not run the same kind of algorithms as computers.
  • a phenomenon described by quantum physics known as wave function collapse could explain how consciousness arises. In 1992 he met an American anaesthetist called Dr Stuart Hameroff, and the two collaborated on a theory of mind known as Orchestrated Objective Reduction (Orch-OR), which attributes consciousness to the behaviour of tiny components of cells called microtubules. The great majority of physicists and neuroscientists deny its plausibility. The main line of attack, articulated by US physicist Max Tegmark, is that collections of microtubules forming collapsing wave functions would be too small and act too quickly to have the claimed impact on the much larger scale of neurons.

3 ways to build a mind (AGI):

  1. Whole brain emulation
  2. Building on artificial narrow intelligence
  3. A comprehensive theory of mind

1. Emulation

Copying or replicating the structures of a brain in very fine detail to produce the same output as the original

  • Emulation = replicated mind which is indistinguishable from the original
  • Simulation = the replicated mind is approximately the same, but differs in some important respects
  • Connectome = wiring diagram of the brain

Whole brain emulation is a mammoth undertaking. A human brain contains around 85 billion neurons (brain cells), and each neuron may have a thousand connections to other neurons. Is it feasible in practice? We can break the problem down into three components:
    1. scanning
    • Scanners in general medical use today, such as MRI (Magnetic Resonance Imaging), are too blunt
    • Scanning a live brain rather than one which has been finely sliced will probably require sending tiny (molecular-scale) nano-robots into a brain to survey the neurons and glial cells and bring back sufficient data to create a 3D map.
    • So one way or another, the scanning looks achievable given technology that is available now, or soon will be.
      2. computational capacity
    • the human brain operates at the exaflop scale, meaning that it carries out one billion billion (10^18) floating point operations per second
    • Major projects have been announced in many of the developed countries to achieve exascale computing before the end of this decade. (2018 Intel)
      3. modelling
    • This may well turn out to be the hardest part of what is clearly a very hard overall project.
    • A complete connectome is available for an organism called C. elegans. It is a tiny worm – just a millimetre long, and it lives in warm soils. C. elegans has a very small connectome compared to humans – just 302 neurons (compared to our 85 billion) and 7,000 synaptic connections.
    • in November 2014, a team led by one of the founders of the Open Worm project used the C. elegans connectome to control a small wheeled robot made out of Lego. The robot displayed worm-like behaviour despite having had no programming apart from what was contained in the connectome.
  • Henry Markram 2005 Blue Brain project: model the 10,000 neurons and 30 million synapses in the neocortical column of a rat. Neocortex is involved in our higher mental functions, such as conscious thought and our use of language.
  • In November 2007 Markram announced that the model of the rat’s neocortical column was complete
  • €1.2 billion for the Human Brain Project:
    • build “working models” of first a rat brain and then a human brain.
    • understand how brain diseases work and to greatly improve the way therapies are developed and tested.
  • 2013 BRAIN project (US): funding the development of tools and methodologies

Reasons why whole brain emulation might not work

The more detailed a model has to be, the harder it is to build

  • Granularity is one potential source of difficulty
  • Time is another: perhaps the models being constructed now will prove uninformative because they lack time-series data. Usually an approximately accurate model generates approximately accurate results, but in some cases it may be so far off the mark as to be positively misleading and counter-productive.

2. Building on narrow AI

1. Symbolic AI (Good Old-Fashioned AI), 1950s

  • reduce human thought to the manipulation of symbols, such as language and maths, which could be made comprehensible to computers
  • Its most successful results were the expert systems which flourished in the late 1980s
  • there were diminishing returns to investment

2. Machine Learning, early 1990s

  • more statistical approaches
  • creating and refining algorithms which can produce conclusions based on data without being explicitly programmed to do so
  • It overlaps closely with a number of other domains:
    • pattern recognition
    • computational statistics
    • data mining: make predictions based on information which is already known to the experimenter, using training data.
    • computer vision:
      • convolutional neural nets (invented 1980): a large number of artificial neurons are each assigned to a tiny portion of an image. They did not become really useful until the 21st century, when graphics processing unit (GPU) chips enabled researchers to assemble very large networks (see the sketch after this list).
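
A minimal sketch of the convolution idea in plain Python/numpy (the toy 8x8 image and the edge-detecting filter are invented for illustration, not taken from the book): each output value is produced by a "neuron" that sees only a tiny patch of the image, and every such neuron shares the same small set of weights.

```python
import numpy as np

image = np.random.rand(8, 8)           # a tiny 8x8 greyscale "image"
kernel = np.array([[1., 0., -1.],      # a 3x3 filter: a crude vertical-edge detector
                   [1., 0., -1.],
                   [1., 0., -1.]])

def conv2d(img, k):
    """Slide the filter over the image; each output value looks at one small patch."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + kh, j:j + kw]  # the tiny portion assigned to this "neuron"
            out[i, j] = np.sum(patch * k)    # the same shared weights, applied everywhere
    return out

feature_map = conv2d(image, kernel)      # a 6x6 map of local edge responses
print(feature_map.shape)                 # -> (6, 6)
```

A real convolutional net stacks many such filters and layers and learns the filter weights from data rather than fixing them by hand; GPUs are what made that training feasible at scale.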

Seeking to emulate particular intellectual skills at which humans have traditionally beaten computers

A machine learning system can be:

  • supervised: the computer is given both inputs and outputs by the researcher, and required to work out the rules that connect them (a minimal sketch follows this list)
  • unsupervised: the machine is given no pointers, and has to identify the inputs and the outputs as well as the rules that connect them
  • reinforcement learning: the computer gets feedback from the environment – for instance by playing a video game
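
To make the supervised case concrete, here is a minimal sketch using scikit-learn (the measurements and labels are toy values invented for the example, not from the book): the researcher supplies inputs and the matching outputs, and the algorithm works out a rule connecting them which can then be applied to unseen inputs.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy inputs: [height_cm, weight_kg]; toy outputs: 0 = "cat", 1 = "dog"
X = [[25, 4], [30, 5], [60, 25], [70, 30]]
y = [0, 0, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)                      # work out the rules that connect inputs to outputs
print(model.predict([[28, 5]]))      # -> [0]: the unseen animal is classified as a "cat"
```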

Machine learning employs a host of clever statistical techniques. Two of the most commonly cited are:

  • “Bayesian networks”: graphical structure that allows you to make hypotheses about uncertain situations. The system generates a flow chart with arrows linking a number of boxes, each of which contains a variable or an event. It assigns probabilities to each of them happening, dependent on what happens with each of the other variables. The system would test the accuracy of the linkages and the probabilities by running large sets of actual data through the model, and end up (hopefully) with a reliably predictive model.
  • “Hidden Markov Models” (Andrej Markov, 1922): in this model the next step depends only on the current step, and not on any previous steps. A Hidden Markov Model (often abbreviated to HMM) is one where the current state is only partially observable. They are particularly useful in speech recognition and handwriting recognition systems (a minimal sketch follows this list).
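
A minimal sketch of the HMM idea (the weather/umbrella numbers are toy values assumed for illustration): the hidden state follows a Markov chain, so the next step depends only on the current step; we only see noisy observations of it; and the classic forward algorithm computes how likely an observation sequence is under the model.

```python
import numpy as np

states = ["rainy", "sunny"]                  # hidden states we never observe directly
observables = ["umbrella", "no umbrella"]    # what we actually see

start = np.array([0.6, 0.4])                 # P(initial hidden state)
trans = np.array([[0.7, 0.3],                # P(next state | current state): the Markov property
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],                 # P(observation | hidden state)
                 [0.2, 0.8]])

def sequence_probability(obs):
    """Forward algorithm: probability of an observation sequence under the model."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # one Markov step, then weight by the emission
    return alpha.sum()

# How likely is the sequence: umbrella, umbrella, no umbrella?
print(sequence_probability([0, 0, 1]))
```

Speech and handwriting recognisers use the same machinery, with (for example) phonemes or letters as the hidden states and acoustic or pen-stroke features as the observations.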

Deep learning: subset of machine learning.

  • Its algorithms use several layers of processing, each taking data from previous layers and passing an output up to the next layer.
  • The nature of the output may vary according to the nature of the input, which is not necessarily binary, but can be weighted.
  • The number of layers can vary too, with anything above ten layers seen as very deep learning.
  • Artificial neural nets (ANNs) are an important type of deep learning system. In the 1950s Frank Rosenblatt constructed the Mark I Perceptron, the first computer which could learn new skills by trial and error. Leading figures in the field today include Yann LeCun (now at Facebook), Geoff Hinton (now at Google) and Yoshua Bengio, a professor at the University of Montreal. (A minimal sketch of a layered network follows this list.)
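
A minimal sketch of that layered structure (the layer sizes and weights here are random and untrained, purely illustrative): each layer takes the previous layer's output, applies a weighted sum and a simple non-linearity, and passes the result up to the next layer. Training the weights, for example by backpropagation, is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 8, 2]        # 4 inputs -> three hidden layers -> 2 outputs
weights = [rng.normal(size=(m, n))   # one weight matrix per pair of adjacent layers
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass the input up through the layers, one layer at a time."""
    activation = x
    for w in weights:
        activation = np.maximum(0.0, activation @ w)  # weighted sum + ReLU non-linearity
    return activation

print(forward(np.array([0.1, 0.5, -0.3, 0.8])))       # the (untrained) network's output
```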

Since 2005 a group at Stanford University has hosted the International General Game Playing Competition, which offers a $10,000 prize to the winning machine.

  • The contestants cannot run specialised software designed specifically for a particular game, as they are only given the rules shortly before play begins.
  • In the first competition, humans were able to beat the best machines, but that has not happened since.
  • The first generation of game-playing software, back in 2005, did not plan ahead; instead it selected moves which maximised the current position.
  • The second generation, from 2007, employed the sort of statistical methods discussed above, and in particular the Monte Carlo search technique, which plays out large numbers of randomly selected moves and compares the final outcomes (see the sketch after this list).
  • The third generation machines currently winning the competition allocate more resources to learning about the game during the short preparation period in order to devise optimal playing strategies.
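
A minimal sketch of the flat Monte Carlo playout idea referred to above (the `game` object and its methods – `legal_moves`, `apply`, `is_terminal`, `current_player`, `score` – are a hypothetical interface invented for this example, not the competition's actual software): for each legal move, play many games of random moves to the end and choose the move whose random playouts score best.

```python
import random

def random_playout(state, game, player):
    """Play random moves until the game ends; return the final score for `player`."""
    while not game.is_terminal(state):
        state = game.apply(state, random.choice(game.legal_moves(state)))
    return game.score(state, player)

def monte_carlo_move(state, game, playouts_per_move=100):
    """Choose the move whose random playouts produce the best total outcome."""
    player = game.current_player(state)
    best_move, best_total = None, float("-inf")
    for move in game.legal_moves(state):
        next_state = game.apply(state, move)
        total = sum(random_playout(next_state, game, player)
                    for _ in range(playouts_per_move))
        if total > best_total:
            best_move, best_total = move, total
    return best_move
```

Full Monte Carlo Tree Search refines this by growing a search tree and balancing exploration against exploitation, but the playout-and-compare core is the same.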

Our brain is not like a car, a single system whose component units all work together in a clearly structured way which is constant over time, and all co-ordinated by a controlling entity (the driver).

It is more like a vast array of disparate systems using hardware components (neurons) that are scattered all over its volume, seemingly at random.

The speculation that a system containing enough of the types of operations involved in machine learning might generate a conscious mind intrigues some neuroscientists, and strikes others as wildly implausible, or as something that is many years away

Researchers who are sceptical that this will happen soon include:

  • Gary Marcus, a psychology professor at New York University
  • Andrew Ng, formerly head of the Google Brain project and now in charge of Baidu’s AI activities
  • Yann LeCun, head of AI research at Facebook
  • Dr Dan Goodman of Imperial College London

To teach a computer to recognise a lion you have to show it millions of pictures of different lions in different poses. A human only needs to see a few such pictures: we are able to learn about categories of items at a higher level of abstraction. AGI optimists think that we will work out how to do that with computers too.

There are plenty of serious AI researchers who do believe that the probabilistic techniques of machine learning will lead to AGI within a few decades rather than centuries. The veteran AI researcher Geoff Hinton, now working at Google, forecast in May 2015 that the first machine with common sense could be developed in ten years.

If the first AGI is created using systems like the ones described above it is likely that it would be significantly different from a human brain, both in operation and in behaviour.

3. A comprehensive theory of mind

  • Achieve a complete understanding of how the mind works – and to use that knowledge to build an artificial one
  • we are still very far from a complete theory of mind
  • The first AGI may be the result of whole brain emulation, backed up by only a partial understanding of exactly how all the neurons and other cells in any particular human brain fit together and work.
  • Or it may be an assemblage of many thousands of deep learning systems, creating a form of intelligence quite different from our own, and operating in a way we don’t understand – at least initially

Chapter 5 - When might AGI arrive?

Rodney Brooks: “I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.”

Andrew Ng at Baidu and Yann LeCun at Facebook are of a similar mind, as we saw in the last chapter.

However there are also plenty of veteran AI researchers who think AGI may arrive soon. Stuart Russell is a British computer scientist and AI researcher who, along with Peter Norvig (a director of research at Google), is co-author of one of the field’s standard university textbooks, “Artificial Intelligence: A Modern Approach”.

In April 2014 Stephen Hawking became a leading proponent of the idea that much more work is needed to ensure that AGI is friendly toward humans.

In his 2014 book “Superintelligence”, Nick Bostrom combined several expert surveys. The combined estimates were as follows: 10% probability of AGI arriving by 2022, 50% chance by 2040 and 90% chance by 2075.

5.2 – Moore’s Law and exponential growth

The arrival time of AGI is likely to be strongly affected by the continuation (or otherwise) of the exponential growth in computer processing power known as Moore’s Law.

When you compare exponential curves plotted for ten and 100 periods of the same growth, they look pretty much the same: wherever you are on the curve, the past always looks horizontal and the future always looks vertical

Exponential curves do not generally last for long: they are just too powerful
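
A quick way to see both points (assuming a doubling every two years, the usual framing of Moore's Law; the figures are illustrative, not from the book): each doubling adds as much as everything that came before it, so wherever you stand on the curve, the past looks flat and the future looks vertical.

```python
# Doubling every two years: cumulative growth factor over 30 years.
capacity = 1.0
for year in range(0, 31, 2):
    print(f"year {year:2d}: {capacity:>8,.0f}x the starting capacity")
    capacity *= 2
```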

In February 2015 Intel updated journalists on their chip programme for the next few years, and it maintains the exponential growth.

One of the first systems to operate at the exascale may be the Square Kilometer Array telescope system, which is due to start scanning the sky in the early 2020s.

**Most of the things which are of vital interest to us change at a linear rate.**

Seven possible reasons why many people shrug off the extraordinary progress of AI as not significant:

  1. It is easy to dismiss AI as not progressing very fast if you allocate all its achievements to some other domain.

  2. The demise of Moore’s Law has been predicted ever since it was devised fifty years ago.

  3. What’s coming next is no less amazing, but we tend to focus on what we didn’t get more than what we did.

  4. Adoption is getting quicker but penetration isn’t.

  5. Another aspect of the hype around each new invention is that their early incarnations are often disappointing.

  6. The hedonic treadmill is a name for the fact that most people have a fairly constant level of happiness (hedonic level). What seemed wonderful in prospect becomes ordinary: “Wow” quickly becomes “meh”.

  7. Learning about a new AI breakthrough is slightly unsettling for many people (e.g. Terminator).

standard curve of the product life cycle:

  • a small tribe of what marketers call “innovators” jump on it because it is new and shiny
  • They can see its potential and they generate some early hype.
  • The “early adopters” then try it out and declare it not fit for purpose – and they are right.
  • The backlash sets in, and a wave of cynicism submerges all interest in the product.
  • Over successive months or years the technology gradually improves, and eventually crosses a threshold at which point it is fit for purpose: crossing the chasm
  • then it is adopted by the “early majority”, then the “late majority”, and finally by the “laggards”.
  • But by the time the early majority is getting on board the hype is already ancient history, and people are already taking for granted the improvement to their lives

5.3 – Unpredictable breakthroughs

Professor Russell thinks AGI will arrive not because of the exponential improvement in computer performance, but because researchers will come up with new paradigms; new ways of thinking about problem-solving.

  • His best guess is that these new paradigms may be a few decades away.

PART THREE - ASI Artificial Superintelligence

We have no idea of how much smarter than us it is possible to be. It might be that for some reason humans are near the limit of how intelligent a creature can become, but it seems very unlikely.

We have already been overtaken – and by a very long way – by our own creations in various limited aspects of intelligence (pocket calculators, chess computers, self-driving cars)

we humans are subject to a range of cognitive biases which mar our otherwise impressive intelligence:

  • Inattentional blindness
  • The flip side of this is “salience”, when something you have reason to pay attention to starts appearing everywhere you look.
  • “Anchoring”
  • Confirmation bias

So it is easy to imagine that there could be minds much smarter than ours.

There is no need to pre-judge whether a superintelligence would be conscious or self-aware. It is logically possible that a mind could have volition, and be greatly more effective than humans at solving all problems based on information it could learn, without having the faintest notion that it was doing so.

6.2 – How to be smarter

Written on February 8, 2018