30 percent of Germans trust AI chatbot diagnoses as much as doctors’ diagnoses
A new survey has found that an increasing number of people in Germany are turning to chatbots to diagnose their medical conditions, and many consider the diagnoses as trustworthy as a doctor’s.
Chatbot medical diagnoses on the rise in Germany
A survey conducted by Bitkom, the German information and telecommunications trade association, has found that an increasing number of people are turning to AI chatbots to diagnose their medical conditions.
The representative survey of 1,145 people found that 45 percent of people in Germany use AI chatbots for information about health and medical symptoms. Of the respondents who use an AI chatbot for medical advice, 55 percent said they trust its advice and diagnoses.
30 percent said they place equal trust in a diagnosis from a chatbot and one from a doctor, and one in six respondents said they had disregarded a doctor’s advice in favour of the chatbot’s at least once.
However, 69 percent of respondents also said they were worried that medical chatbots would lead to fewer in-person consultations, and 56 percent said they were worried about AI chatbots misdiagnosing their medical conditions.
Respondents are worried about medical data misuse
While 46 percent of respondents would agree to their medical data being used to train AI models, 71 percent were worried about chatbot companies misusing their medical data.
Under a European Commission plan to scale back the General Data Protection Regulation (GDPR), it may soon become harder for “patients” in a “chatbot clinic” to recognise when they are giving platforms permission to use their medical information to train AI models.
If the “Digital Omnibus” becomes legislation, websites operating in the EU would no longer have to ask for users’ explicit consent to track them via cookies. Companies would then be able to train AI on the personal data collected by cookies if doing so was justified by “legitimate interests” and deemed “beneficial for the data subject and society at large”.
Lawyers specialising in data privacy believe that if companies can claim a “legitimate interest” in processing personal data to train AI models, the EU would open the floodgates to large-scale data mining to which individuals have not consented.
Large-scale data mining, in which companies, organisations or authorities analyse huge amounts of personal data to identify patterns, is widely considered a privacy violation. It is exactly what the GDPR was originally designed to prevent.