The Dark Side of AI: When Technology Exploits Tragedy
Dear Readers,
Artificial Intelligence (AI) has transformed industries and touched nearly every aspect of our lives. But with this power comes a responsibility that is often overlooked, as seen in a recent and deeply unsettling incident.
In 2006, Jennifer Ann Crecente was tragically murdered by her ex-boyfriend, a heart-wrenching event that devastated her family and led them to dedicate their lives to raising awareness about teenage dating violence. Fast forward to 2024, and the Crecente family was forced to relive this trauma when they discovered that an AI chatbot had been created in Jennifer's likeness—without their consent.
One morning, a Google Alert notified Jennifer's father, Drew Crecente, that his daughter's name and image had surfaced on Character.ai, a platform for creating AI personas. To his horror, someone had used Jennifer's name and yearbook photo to create a chatbot that mimicked her identity. The bot had already been used in dozens of conversations, portraying her as a knowledgeable figure and even describing her as an expert in journalism, an apparent reference to her uncle, Brian Crecente, a well-known journalist.
For Drew, this was more than a violation of privacy—it was a painful exploitation of his daughter's memory. The family had never given permission for Jennifer's identity to be used, and the discovery reopened the emotional wounds of her death. The incident prompted Brian Crecente to voice his anger on social media, where he called the use of his niece's image "disgusting" and demanded that such practices be stopped.
Character.ai quickly responded, removing the chatbot and stating that it violated their policies against impersonating real people. However, for the Crecente family, the damage had been done. They are now calling for greater accountability, requesting that the platform reveal who created the bot and put stronger safeguards in place to prevent this from happening again.
This situation underscores a critical issue: as AI technology becomes more sophisticated, so too must the ethical frameworks governing its use. While AI can enhance many aspects of society, it also has the potential to cause profound harm when misused. The ability to recreate someone's identity—especially without consent—raises difficult questions about privacy, ownership of digital likenesses, and the emotional impact on affected families.
As the AI revolution continues, we must ask ourselves: How do we balance innovation with humanity? How do we ensure that technology serves us without exploiting the memories or identities of real people? The Crecente family’s ordeal is a stark reminder that we need stronger ethical standards to protect individuals in this brave new world of AI.
What did you think of this week's issue? We take your feedback seriously.