AI-Generated Health Bots Spread Misinformation on Social Media

Posted: 8/10/2023

As the digital age continues to evolve, artificial intelligence (AI) is becoming increasingly prevalent in various sectors, including healthcare. In recent times, the internet has seen a surge of AI-generated “doctors” providing health and beauty advice, amassing millions of views and followers. However, the credibility and accuracy of their claims have raised significant concerns.

These AI-generated figures, often portrayed as doctors in white lab coats with stethoscopes around their necks, share health advice on popular social media platforms. One such bot on Facebook claimed that “Chia seeds can help control diabetes.” The video garnered over 2.1 million clicks, 40,000 likes, and was shared more than 18,000 times.

While chia seeds are known for their health benefits, including unsaturated fatty acids, dietary fiber, essential amino acids, and vitamins, they are not a cure for diabetes. Despite studies showing positive effects of chia seeds on health, including antidiabetic and anti-inflammatory properties, there is no scientific evidence to suggest they can cure or completely control diabetes.

This misinformation is not limited to health advice. These AI bots also share beauty tips and home remedies for whitening teeth or stimulating beard growth. For instance, those seeking a Skin Tightening service or Lip Filler service might come across such videos offering natural alternatives. It’s crucial to discern the authenticity of these claims before acting upon them.

A recent study from Canada found India to be a hotspot for health misinformation during the COVID-19 pandemic. This could be attributed to India’s high internet penetration rate, increased social media consumption, and in some cases, users’ lower digital competence.

These AI-generated figures often appear trustworthy due to their attire and authoritative demeanour. However, Stephen Gilbert, a professor of Medical Device Regulatory Science at Dresden University of Technology, warns that this is a deliberate misrepresentation with the intent to sell a product or service, such as a future medical consultation.

For instance, one AI-generated doctor on Instagram claimed that a specific concoction could cure all brain diseases. This video, viewed over 86,000 times, is a clear example of the dangerous misinformation being spread.

While AI-generated figures are relatively easy to identify due to their minimal facial expressions and static images, the potential negative impact on the medical field is alarming. Gilbert warns that the future could see these AI bots providing real-time responses, creating a dangerous scenario for patients.

In addition to video manipulation, AI poses risks in medical diagnostics. In 2019, for instance, an Israeli study demonstrated that tumors could be added to or removed from CT scans, producing convincing but false images. Similarly, chatbots can provide seemingly knowledgeable but incorrect responses to health queries.

Despite these risks, AI plays an increasingly vital role in medicine. It can assist doctors in analysing X-ray and ultrasound images and support them in making diagnoses and planning treatment. For those searching for services such as CoolSculpting near me or a Morpheus8 clinic in Australia, AI can surface useful information, but its reliability should always be verified.

Ultimately, while AI has the potential to revolutionise healthcare, it’s essential to approach online health and beauty advice with caution. Always seek advice from certified professionals or trusted sources, especially when considering significant decisions like whether to get veneers or Invisalign in Australia. The digital age offers a wealth of information, but it’s up to us to discern the credible from the questionable.
