How Dating Apps Can Respond to Demands for Further Safety

This article is brought to you by Stream’s Content Partnership with Global Dating Insights. Stream helps developers build engaging apps that scale to millions with performant and flexible Chat, Video, Voice, Feeds, and Moderation APIs and SDKs powered by a global edge network and enterprise-grade infrastructure.

In late 2023, Australia’s Minister for Communications called on dating apps to strengthen their safety policies and processes under a new ‘code of practice’. But what steps can dating apps take to ensure they’re providing safer experiences for singles?

Removing bad actors from a dating app’s user pool is key to providing a more positive experience for genuine love-seeking singles. This requires recognising users capable of inappropriate behaviour as early as possible.

Understanding intent

But how do you separate bad actors from genuine users? It’s all about intent. Even the most authentic of daters can make a mistake, so dating apps need the ability to distinguish a one-off lapse from a user who shouldn’t be on the platform at all.

Rather than relying on detecting specific keywords or phrases, which can be used in multiple contexts, understanding intent can help to identify bad actors. 

Stream’s AutoModeration capabilities can recognise the harmful intent of bad actors and flag them for further action. A dating app’s content moderators simply provide examples of the content they want removed, and Stream’s moderation features will surface other content with the same intent.
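Conceptually, example-driven intent matching compares each new message against moderator-supplied examples in a shared similarity space, rather than scanning for fixed keywords. The sketch below illustrates the idea with a deliberately simple bag-of-words cosine similarity; it is not Stream’s implementation, and the function names, example phrases, and threshold are all illustrative assumptions (production systems use learned sentence embeddings).

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned sentence embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Moderators supply examples of content they want removed (illustrative phrases).
flagged_examples = [
    "send me money for my flight",
    "wire me cash and i will visit",
]

def matches_harmful_intent(message: str, threshold: float = 0.4) -> bool:
    """Flag a message if it is similar enough to any moderator-flagged example."""
    return any(cosine(embed(message), embed(ex)) >= threshold for ex in flagged_examples)
```

The point of the pattern is that a rephrased scam pitch still lands near the flagged examples, whereas a keyword blocklist would miss it; the heavy lifting in a real system is done by the quality of the embeddings, not the comparison step.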

Intent will play a key role as dating apps combat the rise of generative AI, which can produce more convincing malicious content. Recognising a user’s motivations will shine a spotlight on bad actors, regardless of whether they’re human or an advanced chatbot.

Empowering and supporting human moderators

Human moderators excel at assessing whether content meets a dating app’s code of conduct. However, with high volumes of content being generated, human moderators should be focusing on complex cases rather than churning through repetitive bulk work.

Stream’s AutoModeration tools can reduce moderators’ workload by sending behavioural nudges to users. These nudges remind a user that the questionable message they’re about to send would breach the platform’s community guidelines, reducing accidental harm.

These AI-driven prompts and warnings will tackle the straightforward cases of inappropriate content, reducing the volume of content that requires manual review by human moderators.
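The triage pattern described above can be sketched as a simple routing step: clear violations are handled automatically, likely breaches trigger a nudge before sending, and only ambiguous cases reach a human queue. The thresholds, labels, and score ranges below are illustrative assumptions for the sake of the sketch, not Stream’s actual pipeline.

```python
# Route a message based on a harm score (0.0-1.0) from a moderation model,
# so that human moderators only see the ambiguous, complex cases.
human_review_queue: list[str] = []

def triage(message: str, harm_score: float) -> str:
    """Return the action to take for a message; thresholds are illustrative."""
    if harm_score >= 0.9:
        return "block"         # clear violation: handled automatically
    if harm_score >= 0.6:
        return "nudge"         # likely breach: warn the sender pre-send
    if harm_score >= 0.4:
        human_review_queue.append(message)
        return "human_review"  # ambiguous: escalate to a human moderator
    return "allow"             # benign: deliver normally
```

The design choice worth noting is that the human queue sits in the middle band: the model is confident enough at the extremes that automation (blocking or nudging) is safe, and human judgment is spent only where the model is uncertain.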

Furthermore, Stream’s moderation technology gives human moderators a single pane of glass for reviewing flagged content. In the same system, moderators can take appropriate action and monitor other ongoing situations.

Overall, this AutoModeration support enables a higher quality of content moderation: human moderators focus on the cases that require their attention, while AI-driven nudges give users greater ownership of their own actions.

Integrating a moderation solution

Stream’s chat and activity feed components include advanced moderation out of the box, with the option to upgrade to their AI-powered AutoModeration product for especially sensitive use cases.

This means that engineering teams can focus on other innovation projects while harm-reduction solutions are taken care of.

Additionally, Stream’s AI-powered moderation models run on an enterprise-scale global infrastructure, ensuring that a safer in-app experience comes with extremely low latency for potentially billions of dating app users.

Find out more about Stream’s auto moderation here or by listening to this episode of The GDI Podcast with Adnan Al-Khatib, Principal Product Manager – Moderation at Stream.

Global Dating Insights is part of the Industry Insights Group. Registered in the UK. Company No: 14395769