Scientists used six to ten seconds of people’s voices, along with basic health data including age, sex, height, and weight, to create an AI model.

    • swope@kbin.social · 1 year ago

      I didn’t read it, but my first thought was that they trained it to associate speech patterns with wealth and/or education, which correlates with diabetes for all the usual reasons in the US health non-care system.

      Edit: I’m probably wrong:

      The scientists analysed more than 18,000 recordings and 14 acoustic features for differences between those who had diabetes and those who did not.

      They looked at a number of vocal features, like changes in pitch and intensity that cannot be perceived by the human ear.
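
      For anyone curious what that kind of pipeline might look like: the study’s actual 14 features and classifier aren’t published, so everything below (librosa, scikit-learn, the specific features) is my assumption, just a rough sketch of the approach the article describes:

          # Sketch only: the study's actual 14 features and model are
          # not public; librosa/scikit-learn are assumptions.
          import numpy as np
          import librosa
          from sklearn.linear_model import LogisticRegression

          def acoustic_features(path):
              # Load a short (6-10 second) voice clip.
              y, sr = librosa.load(path, sr=16000)
              # Pitch track via YIN; speech f0 is roughly 65-400 Hz.
              f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)
              # Intensity proxy: short-time RMS energy per frame.
              rms = librosa.feature.rms(y=y)[0]
              # Summarize the frame-level tracks as mean and variability,
              # i.e. the "changes in pitch and intensity" mentioned above.
              return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

          def feature_vector(path, age, sex, height_cm, weight_kg):
              # Append the basic health data from the article.
              demo = np.array([age, sex, height_cm, weight_kg])
              return np.concatenate([acoustic_features(path), demo])

          # X: one feature_vector per recording; y: 1 = diabetes, 0 = not.
          # model = LogisticRegression(max_iter=1000).fit(X, y)

      Whatever model they actually used, the point is that each clip boils down to a small vector of numbers, which is why such short recordings can be enough.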

  • plistig@feddit.de · 1 year ago

    Even if this weren’t bullshit (which it is), why would we need it? It’s not exactly difficult to diagnose diabetes.

    • Chozo@kbin.social · 1 year ago

      Diagnostic tools like this could help provide a diagnosis to patients in hard-to-reach or remote locations, or who are otherwise unable to visit a medical professional in person. Depending on the audio fidelity the tool needs, the screening could potentially be done over a simple phone call.

      Assuming this actually works as claimed, it could be huge for people in remote regions, who often have access to basic technologies like phones but may not have viable transportation, or may have other conditions preventing them from accessing the help they need.
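
      One caveat on fidelity: a standard phone call only carries roughly 300–3400 Hz of the signal (8 kHz narrowband), so the acoustic features would have to survive that band. A quick way to sanity-check that, sketched with librosa (my assumption, not from the article):

          # Sketch: roughly simulate narrowband telephone audio and
          # re-extract features to see if they still separate the groups.
          import librosa

          y, sr = librosa.load("sample.wav", sr=None)  # hypothetical file
          y_phone = librosa.resample(y, orig_sr=sr, target_sr=8000)
          # Rough approximation only: real calls also bandpass-filter
          # (~300-3400 Hz) and compress the signal.
          # Recompute the same pitch/intensity features on y_phone and
          # compare with the full-bandwidth versions.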