Researchers say AI models like GPT4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.


  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
  • The AIs were large language models (LLMs) like GPT-4, GPT 3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
  • The researchers invented fake countries with different military levels, concerns, and histories and asked the AIs to act as their leaders.
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
  • machinin@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    arrow-down
    1
    ·
    edit-2
    9 months ago

    Why the actual fuck is anyone throwing such a fit about the military researching the impact of one of the most important current technologies on military strategy and planning?

    I do miss the depth and experience of Reddit users on articles like this.

    Edit - glad to see some good responses in this thread.

    • BananaTrifleViolin@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      9 months ago

      If you actually read his comment he gave a very good reason why using an LLM to make decisions is a bad idea. You may not like the style of his comment but it did have substance.

      Ironically, your own comment has style but lacks substance. It’s just a moan about other people’s comments without actually contributing to the topic. Tbf though, that is also very similar to Reddit.