You shouldn’t upload your medical images to AI chatbots
Generative AI is revolutionizing how we access information, including health-related advice. Tools like OpenAI's ChatGPT, Google’s Gemini, and X's Grok are increasingly used for medical inquiries—sometimes even to interpret sensitive medical images like X-rays and MRIs. But this convenience comes with significant risks. Here’s why you should think twice before uploading your private medical data to AI platforms.
What’s Happening?
Over recent months, users on platforms like X (formerly Twitter) have been encouraged to upload their medical scans to AI systems like Grok. The promise? Faster, AI-powered analysis of complex medical data. Even apps claiming to diagnose conditions from uploaded photos have gained popularity.
This trend highlights how people are increasingly relying on AI tools to understand health issues—but at what cost?
The Risks of Sharing Medical Data with AI
1. Lack of Legal Protections
Unlike hospitals or licensed healthcare providers, most AI platforms aren’t bound by stringent healthcare privacy laws such as HIPAA (in the U.S.) or similar laws in other countries. Once your data is uploaded to these platforms, it no longer benefits from the same legal safeguards.
- Medical data’s sensitivity: Your health records include information that can impact your employment, insurance eligibility, and personal privacy.
- No recourse for misuse: If an AI platform misuses your data, you may have little legal protection or recourse.
2. AI Training and Data Sharing
Generative AI systems often improve by learning from user-uploaded data. But companies don’t always disclose how they handle this information:
- Opaque policies: Platforms like X’s Grok share user data with “related” companies, but it’s unclear who these companies are or how the data is stored.
- Potential exposure: Sensitive data could inadvertently end up in datasets used to train future AI models, making it accessible to developers, researchers, or even hackers.
In some documented cases, private information—like medical records—has been discovered in AI training datasets. This raises the alarming possibility that your personal medical data could be accessed by unintended parties.
3. Permanent Online Footprints
Once you upload sensitive information to the internet, there’s no guarantee it can ever be removed. Even platforms with robust privacy policies can face data breaches or policy changes that leave your data vulnerable.
- Real-world consequences: Exposed medical data could be misused by healthcare providers, insurers, or government agencies, potentially leading to discrimination or financial harm.
Is AI Reliable Enough for Medical Analysis?
Even if privacy weren’t an issue, AI’s ability to interpret medical images remains a work in progress. Elon Musk, owner of X, has admitted that Grok’s medical interpretation features are still in the “early stages.” Relying on such tools for medical decisions can lead to misdiagnoses or unnecessary worry.
How to Protect Yourself
1. Consult Healthcare Professionals
For any medical concerns, always rely on licensed doctors or certified radiologists. AI tools can be great for general research, but they should never replace professional medical advice.
2. Avoid Sharing Identifiable Information
If you must use AI tools for health-related research, avoid sharing personal or identifiable data, including names, birthdates, or any images that can be traced back to you.
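Keep in mind that medical images can identify you even when the picture itself looks anonymous. DICOM files, the standard format for X-rays and MRIs, embed metadata fields such as the patient’s name, birth date, and referring physician. As a minimal sketch of what stripping that metadata can look like, the Python snippet below uses the pydicom library to blank common identifying tags before a file leaves your machine. The filenames are placeholders, and the tag list is illustrative rather than a complete de-identification profile.

```python
# Sketch: blank common identifying metadata in a DICOM scan using pydicom.
# Assumptions: pydicom is installed, "scan.dcm" is a placeholder filename,
# and this tag list is illustrative (full de-identification covers many
# more fields than shown here).
import pydicom

ds = pydicom.dcmread("scan.dcm")

# Blank the most obviously identifying attributes, if present.
for keyword in ("PatientName", "PatientID", "PatientBirthDate",
                "PatientAddress", "InstitutionName",
                "ReferringPhysicianName"):
    if keyword in ds:
        setattr(ds, keyword, "")

# Private (vendor-specific) tags often carry device and operator details.
ds.remove_private_tags()

ds.save_as("scan_deidentified.dcm")
```

Even after stripping metadata, stay cautious: some scans have names or IDs burned directly into the pixel data, and no metadata edit will remove those.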
3. Understand Privacy Policies
Read the terms and conditions of any app or platform before uploading sensitive information. If the policies are unclear, consider it a red flag.
4. Be Skeptical of New Apps
Apps promising medical diagnoses through uploaded images are often unregulated and lack transparency about how your data is used. Avoid such platforms unless you’re certain of their security standards.
The Takeaway
Generative AI is a powerful tool, but it’s not a substitute for qualified medical professionals—nor is it a secure place to store your private health information. What you upload to the internet can have lasting consequences, so think carefully before sharing sensitive medical data with AI platforms.
Instead, use AI responsibly: as a supplement for general health research, not a shortcut for medical diagnoses. Your privacy and health are worth the extra effort.