
The AI that can predict when people will die

An innovative AI model trained on the life details of more than one million people has shown that it can predict a person’s time of death with a high degree of accuracy, as reported by the Independent.

The study, carried out by scientists from the Technical University of Denmark (DTU), found that the system, which works like ChatGPT, could predict people’s chances of dying more accurately than any other existing method.

The AI model, referred to as ‘life2vec’, converted the details of six million Danish citizens, collected between 2008 and 2020, into data it could learn from. Using their health and labor market data, it produced impressive results on life expectancy as well as a person’s risk of early death.
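To give a rough sense of what “works like ChatGPT” means here, the sketch below shows how life events can be turned into token sequences for a sequence model to read, the way a language model reads words. It is a minimal, hypothetical illustration: the event names, vocabulary scheme, and data are invented for this example and are not taken from life2vec or the Danish registries.

```python
# Hypothetical sketch: represent a person's life events (health visits, jobs,
# income changes) as integer token sequences, as a transformer would consume.
from typing import Dict, List

def build_vocab(event_sequences: List[List[str]]) -> Dict[str, int]:
    """Assign an integer ID to every distinct life-event token."""
    vocab: Dict[str, int] = {"<pad>": 0, "<unk>": 1}
    for events in event_sequences:
        for event in events:
            if event not in vocab:
                vocab[event] = len(vocab)
    return vocab

def encode(events: List[str], vocab: Dict[str, int]) -> List[int]:
    """Map a person's life events to their integer IDs."""
    return [vocab.get(event, vocab["<unk>"]) for event in events]

# Illustrative life histories (entirely made up).
people = [
    ["diagnosis:J45", "job:teacher", "income:quartile_3", "hospital_visit"],
    ["job:welder", "income:quartile_2", "diagnosis:I10", "hospital_visit"],
]

vocab = build_vocab(people)
sequences = [encode(p, vocab) for p in people]
print(sequences)  # [[2, 3, 4, 5], [6, 7, 8, 5]]
```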

“We used the model to address the fundamental question, to what extent can we predict events in your future based on conditions and events in your past?” said study author Sune Lehmann from DTU.

“Scientifically, what is exciting for us is not so much the prediction itself, but the aspects of data that enable the model to provide such precise answers,” added Dr Lehmann.

The research project took data on a group of people aged 35 to 65, half of whom died between 2016 and 2020, and asked life2vec to predict who lived and who died.

Its predictions were 11 per cent more accurate than those of any other existing AI model used for the same purpose, and than the method used by life insurance providers to calculate premiums.
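As a rough illustration of that kind of test, the sketch below scores a toy classifier on a balanced group in which half of the people died during the follow-up window. The data and the stand-in predictions are invented; a real evaluation would use the trained model’s outputs on held-out registry data.

```python
# Hypothetical sketch of the evaluation setup described above.
import random

random.seed(0)

# 1 = died during 2016-2020, 0 = survived; the group is balanced by design.
actual = [1] * 500 + [0] * 500

# Stand-in predictions: each one matches the true outcome with probability
# 0.78, standing in for the output of a trained model.
predicted = [label if random.random() < 0.78 else 1 - label for label in actual]

correct = sum(p == a for p, a in zip(predicted, actual))
accuracy = correct / len(actual)
print(f"accuracy: {accuracy:.1%}")  # roughly 78% with this toy setup
```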

Warning on AI use

With obvious ethical concerns about how such AI technology could be used, Lehmann stressed that it should not be used by insurance companies.

“Clearly, our model should not be used by an insurance company, because the whole idea of insurance is that, by sharing the lack of knowledge of who is going to be the unlucky person struck by some incident, or death, or losing your backpack, we can kind of share this burden,” he said.

These findings, and the ethical considerations around them, feed into wider fears about the capabilities of AI and the need for safeguards.

In recent days, OpenAI has introduced a new governance model for AI safety oversight, while the EU has reached a deal on significant AI regulations. The US has also taken its first steps on AI guidance.


