
AI Dating Bots: How Grey! How Shady!

Love may or may not be blind, but it looks like it needs to be blind, deaf and dumb – if it’s happening in the world of AI.

DQI Bureau
Love may or may not be blind, but it looks like it needs to be blind, deaf and dumb - if it's happening in the world of AI. An intriguing, fascinating, and no-corners-left study conducted by the Mozilla Foundation's team surfaced a lot of jaw-droppers about romancing with AI. Among them: 90% of these apps and bots failed to meet minimum security standards. Romantic AI chatbots end up collecting sensitive personal information about you. They can harm human feelings and behavior. It's not surprising, then, that these companies take no responsibility for what the chatbot might say or what might happen to you as a result. Many more disturbing insights emerged here: a gross lack of transparency and accountability, and a lot of recklessness on user safety. Most companies say they can share your information with the government or law enforcement without requiring a court order. These apps had an average of 2,663 trackers per minute. The findings even noted themes of violence or underage abuse featured in the chatbots' character descriptions. From Mimico to CrushOn to Chai to Replika - a lot of AI bots and apps were put under the lens here.


To be perfectly blunt, AI girlfriends are not your friends. Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.

We tried to interpret some of these red flags in an interview with Mozilla's 'Privacy Not Included' team. Turns out, an AI bot is the last match that Seema Aunty would suggest. 'Swipe right' here to know why it may help to 'swipe left' on a bot.



What were the top two or three red flags that emerged in this research? What are the primary concerns individuals should focus on when evaluating dating tools, especially regarding privacy and vulnerability? And how might these concerns differ for various demographics such as women, older adults, and those with extensive social networks or robust mental health?

Our top findings, which apply to any user, are these. First, the majority of these relationship chatbots' privacy policies provided surprisingly little information about how they use the contents of users' conversations to train their AIs. Second, there is very little transparency into how their AI models work. And third, users have little to no control over their data, leaving massive potential for manipulation, abuse, and mental health consequences.

Are most of the issues highlighted in this research intentional factors - configured deliberately by the platforms? Or are they gaps of negligence or things out of their control (like the black-box and hallucination problems of AI models)?


The lack of privacy protections isn’t an oversight: The amount of personal information these bots need to pull from you to build romances, friendships, and sexy interactions is enormous.

Is there a 'Cobra Effect' at play here, where technology-induced loneliness is being solved by technology tools?

It's certainly possible, since this technology isn't built to cure loneliness. Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity - all while prying as much data as possible from you.


Does the availability of deep, personal, and intimate user data help these bots leverage attachment styles in an unethical way? Do, or can, such bots also resort to gaslighting?

One of the scariest things about the AI relationship chatbots is the potential for manipulation of their users. What can stop bad actors from creating chatbots designed to get to know their soul mates and then using that relationship to manipulate those people to do terrible things, embrace frightening ideologies, or harm themselves or others?

When you say 90 percent of apps may sell or share personal data - how problematic can that be? Are these third parties (that are using this data) regulators and harmless advertisers? Or are they manipulators (given that this is an election year), hackers, insurance firms, court cases and serious fraudsters?


Oftentimes, there’s no transparency into who is receiving this data - and that’s a big part of the problem. People deserve to know not only ‘if’ their data is being shared, but also ‘with whom’.

It's a shock to learn that there are almost no opt-out policies and that 54 percent of apps won't let you delete your personal data - how is that happening when so much privacy activism is going on and when data-protection regulations like the DPDP Act (India) and GDPR (Europe) are emerging?

The majority of the apps we looked at are based in the U.S., which currently has no comprehensive federal privacy law.


Were you surprised at anything at all - or did you have a pre-study hunch that ‘privacy’ would be a gross problem area in such apps?

We were surprised to see how little - and sometimes no - privacy documentation these companies and products provided. Some of them completely lacked privacy policies.

What actions can individuals in various roles - users, media, AI entities, regulators, and activists - take collectively to address the significant concerns highlighted by your team?


Individual users should choose products that value their privacy - and pass on those that don’t. Lawmakers can prioritize rules that better protect user data and mandate more transparency in AI systems. And media and activists can continue to shine a light on privacy threats - especially in the age of AI.

What’s your advice to users/prospective users of such apps? And to platforms that may want to fix such issues ahead?

To be perfectly blunt, AI girlfriends and boyfriends are not your friends.

pratimah@cybermedia.co.in
