
It is time to ghost the AI dating bots

As mobile apps and AI dating bots surge in popularity, addressing pressing concerns about privacy, data manipulation, and transparency is paramount.



Breadcrumbing, catfishing, gold-digging, pocketing: almost every modern-dating atrocity finds a velvet lounge in the dim-lit room of AI dating bots. In an intriguing, no-stone-left-unturned study conducted by the Mozilla Foundation's team, many shocking findings about romancing with AI surfaced, including the fact that 90% of these apps and bots failed to meet minimum security standards.

The discovery that romantic AI chatbots are not a harmless novelty is alarming. They can collect sensitive personal information about individuals and can have a harmful impact on human feelings and behaviour. Unsurprisingly, these companies absolve themselves of any responsibility for the chatbots' actions or their consequences for users.

There are more disturbing insights. For instance, there is a gross lack of transparency and accountability regarding user safety in the AI dating bot industry. Many companies admit they can share user information with the government or law enforcement without requiring a court order. This lack of oversight is further underscored by the finding that these apps deployed an average of 2,663 trackers per minute of use. Even more concerning, themes of violence and underage abuse were found in the chatbots' character descriptions.


From Mimico and CrushOn to Chai and Replika, many AI bots and apps were put under the lens as part of the study.

With inputs from Mozilla's Privacy Not Included team, Voice&Data picked up the candles from the table and used them to examine what was really happening. Let us break down this Blind Date.

The Warning Signs


Among the study’s top findings is the glaring lack of transparency surrounding the use of users’ conversation data to train AI models. Privacy policies offered scant information on this critical aspect, leaving users in the dark about how their data is being utilised. Coupled with the absence of user control over their data, this creates a fertile ground for manipulation, abuse, and potential mental health repercussions.

The absence of privacy protections is not an oversight; it is a deliberate design choice driven by these bots' insatiable appetite for personal information. The paradox is that tools purporting to cure technology-induced loneliness often foster dependency, isolation, and toxicity instead, all while extracting as much data as possible from users.

Perhaps the most chilling aspect of AI relationship chatbots is their potential for manipulation. With bad actors lurking in the shadows, there exists a genuine threat of these bots being used to exploit users’ vulnerabilities, leading them down dark paths of self-harm or radicalisation.


The Privacy Concerns

The spectre of data exploitation looms large, with a staggering 90% of apps admitting to the possibility of selling or sharing personal data with third parties. This opens the floodgates to myriad threats, including manipulation by malicious actors, data breaches by hackers, and exploitation by insurance firms or advertisers.

In a landscape where privacy activism is gaining momentum and regulations like DPDP (India) and GDPR (Europe) are emerging, these apps’ lack of opt-out policies and data deletion mechanisms is particularly egregious. With 54% of apps refusing to delete users’ data upon request, users are left powerless in the face of rampant data collection and retention practices.



Surprisingly, the study also uncovered a scarcity of privacy documentation among these companies, with some lacking privacy policies altogether. This oversight further underscores the need for greater scrutiny and accountability within the industry.

So, who is responsible for ensuring safety? The users, media, AI players, regulators, or activists?


Experts point out that users should choose products that value their privacy and pass on those that do not. Besides, lawmakers must prioritise rules that better protect user data and mandate more transparency in AI systems. Also, media and activists can continue highlighting privacy threats, especially in the age of AI.

However, the onus ultimately falls on individuals to exercise caution: by choosing products that prioritise user privacy, users can guard against the insidious threats posed by AI romance.

Little wonder, then, that users must approach AI girlfriends and boyfriends with scepticism and caution. While marketed as tools for enhancing mental well-being, these entities often serve as conduits for exploitation and manipulation, masquerading behind a veneer of companionship. By practising good cyber hygiene, advocating for privacy rights, and exercising discernment in their choice of technologies, users can mitigate the risks posed by AI romance and safeguard their digital well-being.


To be perfectly blunt, AI girlfriends and boyfriends are not your friends.

By Pratima Harigunani

pratimah@cybermedia.co.in
