Full Transcript

From the TED Talk by Grady Booch: Don't fear superintelligent AI

Unscramble the Blue Letters

Do I fear that such an atrfiicail intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's bsaic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. boosrtm has a nbemur of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to ucnpak them all, but very blefiry, consider this: sepur knownig is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL cadenommd all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of skeynt from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every deicve that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, coaithc huanms. And furthermore, if such an artificial intelligence existed, it would have to compete with hmuan economies, and thereby compete for roreecsus with us. And in the end — don't tell Siri this — we can always ulpnug them.

Open Cloze

Do I fear that such an __________ intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's _____ argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. _______ has a ______ of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to ______ them all, but very _______, consider this: _____ _______ is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL _________ all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of ______ from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every ______ that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, _______ ______. And furthermore, if such an artificial intelligence existed, it would have to compete with _____ economies, and thereby compete for _________ with us. And in the end — don't tell Siri this — we can always ______ them.

Solution

  1. briefly
  2. humans
  3. artificial
  4. human
  5. knowing
  6. unplug
  7. bostrom
  8. super
  9. skynet
  10. unpack
  11. chaotic
  12. basic
  13. resources
  14. commanded
  15. number
  16. device

Original Text

Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of Skynet from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end — don't tell Siri this — we can always unplug them.

Frequently Occurring Word Combinations

n-grams of length 2

collocation frequency
artificial intelligence 8
build systems 4
human experience 3
human life 2
engineering problem 2
mission control 2

Important Words

  1. ais
  2. argument
  3. artificial
  4. aspects
  5. basic
  6. book
  7. bostrom
  8. briefly
  9. brilliant
  10. building
  11. capricious
  12. chaotic
  13. command
  14. commanded
  15. compete
  16. contrary
  17. control
  18. corner
  19. crew
  20. dangerous
  21. device
  22. direct
  23. directed
  24. discover
  25. discovery
  26. dominion
  27. dr
  28. due
  29. economies
  30. elon
  31. eventually
  32. existed
  33. existential
  34. fear
  35. followers
  36. fundamentally
  37. goals
  38. gonna
  39. hal
  40. happen
  41. hawking
  42. human
  43. humanity
  44. humans
  45. information
  46. insatiable
  47. intelligence
  48. kind
  49. knowing
  50. learn
  51. lot
  52. matrix
  53. minds
  54. movie
  55. movies
  56. musk
  57. nick
  58. number
  59. observes
  60. people
  61. philosopher
  62. picks
  63. pieces
  64. practically
  65. represent
  66. resources
  67. respect
  68. shows
  69. siri
  70. skynet
  71. speak
  72. speaking
  73. stephen
  74. stuff
  75. super
  76. superintelligence
  77. supported
  78. systems
  79. terminator
  80. theme
  81. thirst
  82. threat
  83. threaten
  84. tides
  85. time
  86. unpack
  87. unplug
  88. weather
  89. world
  90. wrong