One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?

Some reasons for concern include:

  • Otherwise smart people say unreasonable things about AI safety.
  • Many people who believed AI was around the corner didn't take safety very seriously.
  • Elites have failed to navigate many important issues wisely (2008 financial crisis, climate change, Iraq War, etc.), for a variety of reasons.
  • AI may arrive rather suddenly, leaving little time for preparation.

But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):

  • If AI is preceded by visible signals, elites are likely to take safety measures.
    • Effective measures were taken to address asteroid risk.
    • Large resources are devoted to mitigating climate change risks.
    • Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
    • Availability of information is increasing over time.
  • AI is likely to be preceded by visible signals.
    • Conceptual insights often take years of incremental tweaking.
    • In vision, speech, games, compression, robotics, and other fields, performance curves are mostly smooth (a rough extrapolation sketch follows this list).
    • "Human-level performance at X" benchmarks influence perceptions, and should become more exhaustive and arrive more rapidly as AI approaches.
    • Recursive self-improvement capabilities could be charted, and are likely to be AI-complete.
    • If AI succeeds, it will likely succeed for reasons comprehensible to the AI researchers of the time.
  • Therefore, safety measures will likely be taken.
  • If safety measures are taken, then elites will navigate the creation of AI just fine.
    • Corporate and government leaders can use simple heuristics (e.g. Nobel prizes) to access the upper end of expert opinion.
    • AI designs with easily tailored tendencies to act may be the easiest to build.
    • The use of early AIs to solve AI safety problems creates an attractor toward "safe, powerful AI."
    • Arms races are not insurmountable.
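
To make the "smooth performance curves" point concrete, here is a minimal sketch, in Python, of how one might extrapolate a benchmark trend toward a human-parity milestone. Everything in it is an illustrative assumption: the years, the error rates, the log-linear trend model, and the 5% human-parity threshold are all hypothetical, not drawn from any real dataset.

```python
# A minimal sketch of the "smooth performance curves give advance warning"
# intuition. All numbers below are hypothetical placeholders; a real
# benchmark series would be substituted for `years` and `error_rates`.
import numpy as np

# Hypothetical benchmark error rates (fraction wrong) over several years.
years = np.array([2006, 2008, 2010, 2012, 2014], dtype=float)
error_rates = np.array([0.40, 0.31, 0.24, 0.18, 0.14])

# If progress is smooth, log(error) is roughly linear in time,
# so a least-squares line gives a crude extrapolation.
slope, intercept = np.polyfit(years, np.log(error_rates), deg=1)

# Hypothetical human error rate on the same benchmark.
human_level = 0.05

# Solve log(human_level) = slope * year + intercept for the crossing year.
crossing_year = (np.log(human_level) - intercept) / slope

print(f"Extrapolated human-parity year: {crossing_year:.1f}")
```

A log-linear fit is only one possible trend model, and real curves can break; the point is merely that public benchmark series of this kind are the sort of "visible signal" the argument leans on.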

The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)

Personally, I am not very comforted by this argument because:

  • Elites often fail to take effective action despite plenty of warning.
  • I think there's a >10% chance AI will not be preceded by visible signals.
  • I think the elites' safety measures will likely be insufficient.

Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help with our research into this question.

In particular, I'd like to know:

  • Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
  • What are some good resources (e.g. books) for investigating the relevance of these analogies to AI risk (for the purposes of illuminating elites' likely response to AI risk)?
  • What are some good studies on elites' decision-making abilities in general?
  • Has the increasing availability of information in the past century noticeably improved elite decision-making?
