Design a Safer Dating App UX


Online dating has fundamentally changed the way people seek romantic partners. Fifty-three percent of American adults between the ages of 18 and 29 have used dating apps, according to a recent Pew Research study—and the industry is estimated to reach a market value of $8.18 billion by the end of 2023.

Despite the popularity of online dating, users are increasingly concerned about the stalking, online sexual abuse, and unwanted sharing of explicit images that occur on these platforms. The Pew study found that 46% of online dating users in the US have had negative experiences with apps, and 32% no longer think it’s a safe way to meet people. An Australian survey of online daters had similarly concerning results: One-third of respondents reported experiencing some kind of in-person abuse from someone they met on an app, such as sexual abuse, coercion, verbal manipulation, or stalking behavior.

During my time as a creative director and UX consultant for companies like Hertz, Jaguar Land Rover, and the dating app Thursday, I have learned that user trust and safety significantly impact a product’s success. When it comes to dating apps, designers can ensure user welfare by implementing ID verification techniques and prioritizing abuse detection and reporting systems. Additionally, designers can introduce UX features that clarify consent and educate users about safe online dating practices.

Build Trust Through ID Verification

During account creation, the onboarding process should prompt users to provide comprehensive profile information, including their full name, age, and location. From there, multifactor authentication techniques such as checking email addresses, social media accounts, phone numbers, and government-issued IDs can verify that profile-makers are who they claim to be, thus building user trust.
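To illustrate the idea, here is a minimal, hypothetical sketch of how an account might accumulate independent verification signals, with a "verified" badge unlocking only after enough factors pass. The `Profile` class, factor names, and two-factor threshold are illustrative assumptions, not any app's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    # Hypothetical model: each passed check adds an independent factor.
    name: str
    verified_factors: set = field(default_factory=set)

    def record_verification(self, factor: str) -> None:
        # e.g., "email", "phone", "social", "government_id"
        self.verified_factors.add(factor)

    def is_verified(self, required: int = 2) -> bool:
        # Show a "verified" badge only after at least `required`
        # independent factors have passed.
        return len(self.verified_factors) >= required

profile = Profile(name="Alex")
profile.record_verification("email")
print(profile.is_verified())   # False: one factor isn't enough
profile.record_verification("phone")
print(profile.is_verified())   # True: two independent factors
```

Requiring multiple independent factors means a single compromised channel—say, a disposable email address—is not enough to earn the badge.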

Bumble—a dating app where women make the first move—requires users to submit a photo and a video of themselves performing a specific action, such as holding up a particular number of fingers or posing with a hand wave. Human moderators compare the selfie and the profile picture to verify the legitimacy of the account. For added peace of mind, verified users can ask their matches to verify their profile again using photo verification. Bumble notifies the user if their match is verified or not, and the user can decide to end the interaction. Being transparent about what your dating app does with user information and photos is also key to maintaining trust: Bumble, for example, makes it clear in its privacy policy that the company will hold onto the verification selfie and video until a user is inactive for three years.

Bumble verifies onboarding users by having them take a selfie copying a given pose.
There are multiple ways to verify accounts: email, social media, phone number, live videos, and posed photos. Designers must also consider the potential for fraudulent accounts that try to circumvent these methods.

Tinder’s onboarding process includes video verification and requires users to send snippets of themselves answering a prompt. AI facial recognition compares the video to the profile photo by creating a unique facial-geometry template. Once verified, users can customize their settings to only match with other verified users. Tinder deletes the facial template and video within 24 hours but retains two screenshots from each for as long as the user keeps the account.

While human moderators and AI tools are incredibly useful, they can only go so far in identifying scammers or technology that evades verification, such as face anonymization. In response to these threats, some dating apps empower users to take further safety precautions. For instance, in-app video chats allow users to determine the legitimacy of a user’s profile before meeting in person. Though video chats aren’t 100% safe, designers can introduce features that minimize risk. Tinder’s Face to Face video chat requires both users to agree to the chat before it starts, and also establishes ground rules, such as no sexual content or violence, that users must agree to for the call to proceed. Once the call ends, Tinder immediately asks for feedback, so that users can report inappropriate behavior.

Two Tinder users can initiate a Face to Face chat on the app without having to exchange personal contact information or meet in person.
Tinder’s Face to Face feature has a multistep process to ensure users want to video chat. After the call, the design immediately asks for user feedback, providing instant support if a user feels unsafe.
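The gating logic behind a feature like this can be sketched in a few lines: the call starts only after both users opt in and both accept the ground rules. The class and method names below are hypothetical, not Tinder's API.

```python
class VideoChatRequest:
    # Hypothetical sketch: a call request that requires mutual opt-in
    # AND mutual acceptance of ground rules before it can start.
    def __init__(self, user_a: str, user_b: str):
        self.opted_in = {user_a: False, user_b: False}
        self.rules_accepted = {user_a: False, user_b: False}

    def opt_in(self, user: str, accepts_rules: bool) -> None:
        self.opted_in[user] = True
        self.rules_accepted[user] = accepts_rules

    def can_start(self) -> bool:
        # Both users opted in, and both agreed to the ground rules.
        return all(self.opted_in.values()) and all(self.rules_accepted.values())

req = VideoChatRequest("alice", "bob")
req.opt_in("alice", accepts_rules=True)
assert not req.can_start()          # still waiting on the second user
req.opt_in("bob", accepts_rules=True)
assert req.can_start()
```

Treating consent as a precondition in the data model—rather than a dismissible dialog—means there is no code path where the call begins without both parties' agreement.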

Prioritize Reporting and Detection to Protect Users

Designing an intuitive reporting system makes it easier for users to notify dating apps when harassment, abuse, or inappropriate behavior occurs. The UI components used to submit reports should be accessible from multiple screens in the app so that users can log issues in just a few taps. For example, Bumble’s Block & Report feature makes it simple for users to report inappropriate behavior from the app’s messaging screen or from an offending user’s profile.

When I worked on the MVP for Thursday, safety was a primary concern. The app started with the premise that people spend too much time online searching for potential dates. Every Thursday, the app becomes available for people to match with users looking to meet that day. Otherwise, the app is virtually “closed” for the rest of the week. Given this unique rhythm, users have limited time in the app, so security protocols had to be seamless and reliable.

Thursday’s onboarding highlights the fact that the app is only available once a week—on Thursdays.
The dating app Thursday only reveals matches once a week, a feature that requires a unique set of security measures.

I tackled the issue of reporting and filtering in Thursday by using third-party software that scans for harmful content (e.g., cursing or lewd language) before a user sends a message. The software asks the sender if their message might be perceived as offensive or disrespectful. If the sender still decides to deliver the message, the software enables the receiver to block or report the sender. It’s similar to Tinder’s Are You Sure? feature, which asks users if they’re certain about sending a message that AI has flagged as inappropriate. Tinder’s filtering feature reduced harmful messages by more than 10% in early testing and decreased inappropriate behavior long term.
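The flow described above can be sketched as a simple pre-send screen: a classifier flags a draft, the sender must confirm, and flagged messages that are sent anyway surface block-and-report options to the receiver. The word list below is a placeholder for a real toxicity model or third-party moderation API, and all function names are illustrative.

```python
# Placeholder vocabulary standing in for a real moderation model.
FLAGGED_TERMS = {"insult", "slur"}

def is_potentially_harmful(message: str) -> bool:
    return any(term in message.lower() for term in FLAGGED_TERMS)

def send_message(message: str, sender_confirms: bool) -> dict:
    # Hypothetical pre-send screen: flagged drafts require the sender
    # to confirm, and delivered flagged messages expose one-tap
    # block/report actions to the receiver.
    if is_potentially_harmful(message):
        if not sender_confirms:
            return {"delivered": False, "reason": "sender_withdrew"}
        return {"delivered": True, "show_report_options": True}
    return {"delivered": True, "show_report_options": False}
```

The key design choice is that the check runs before delivery: the cheapest moment to stop harm is while the message is still a draft, when a gentle prompt can change the sender's mind.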

AI and machine learning can also protect users by preemptively flagging harmful content. Bumble’s Private Detector uses AI to identify and blur inappropriate images—and allows users to unblur an image if desired. Similarly, Tinder’s Does This Bother You? feature uses machine learning to detect and flag potentially harmful content and provides users with an opportunity to report abuse.

Bumble’s Private Detector blurs potentially explicit content, letting the user decide whether to reveal the image or not.
Bumble uses AI to detect potentially inappropriate images and protect the receiver. By giving users the option to reveal the content, Bumble trusts the user’s ability to gauge a situation.
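A blur-by-default pipeline like this can be sketched as two small steps: flagged images arrive blurred, and only the receiver's explicit action reveals them. The `nsfw_score` parameter and threshold below stand in for a real image classifier; none of this reflects Bumble's actual code.

```python
def prepare_image_for_receiver(nsfw_score: float, threshold: float = 0.8) -> dict:
    # Hypothetical sketch: images the classifier flags are delivered
    # blurred, with an explicit reveal affordance for the receiver.
    blurred = nsfw_score >= threshold
    return {"blurred": blurred, "can_reveal": blurred}

def reveal(image_state: dict) -> dict:
    # The receiver, not the app, makes the final call on flagged images.
    if image_state["can_reveal"]:
        return {**image_state, "blurred": False}
    return image_state
```

Blurring by default inverts the usual burden: instead of the receiver having to react to an unwanted image, they choose whether to see it at all.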

It’s also worth mentioning that reporting can extend to in-person interactions. For example, Hinge—a prompt-based dating app—has a feedback tool called We Met. The feature surveys users who met in person about how their interaction went and allows users to privately report matches who were disrespectful on a date. When one user reports another, Hinge blocks both parties from interacting with each other on the platform and uses the feedback to improve its matching algorithm.
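The symmetric-block behavior described above reduces to storing each reported pair as unordered, so the block applies in both directions no matter who reported whom. The function names below are hypothetical, not Hinge's implementation.

```python
# Hypothetical sketch: a private post-date report severs the match in
# both directions so neither party can re-initiate contact.
blocked_pairs: set = set()

def report_date(reporter: str, reported: str) -> None:
    # Store the pair unordered so the block applies symmetrically.
    blocked_pairs.add(frozenset((reporter, reported)))

def can_interact(user_a: str, user_b: str) -> bool:
    return frozenset((user_a, user_b)) not in blocked_pairs

report_date("dana", "evan")
assert not can_interact("evan", "dana")   # blocked both ways
```

Using an unordered pair (`frozenset`) guarantees the reported user cannot simply initiate contact from their side, and it keeps the reporter's identity out of the interaction check entirely.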

Even with robust ID verification and reporting features, users may still encounter harmful situations because of dating’s intimate nature. To protect users, dating apps should have guidance pertaining to safe dating practices and consent through digital content, company policies, and UX features.

Tinder educates users by linking to an extensive library of safety-related content on its homepage. The company provides tips for online dating, recommendations for meeting in person, and a lengthy list of resources for users seeking additional support, help, or advice.

Bumble’s blog, The Buzz, also features several articles about clarifying consent and identifying and preventing harassment. Consent means giving an “enthusiastic ‘yes’” to a sexual request, whether online or in person. Individuals are entitled to revoke consent during an encounter, and prior consent does not equal present consent. The bulk of dating app interactions are virtual and nonverbal, meaning there’s potential for confusion and miscommunication between users. To combat this, dating apps need to have clear and easily accessible consent policies.

For instance, Bumble encourages users to report disrespectful and nonconsensual behavior, and if a user responds rudely when someone rejects a sexual request, it is grounds for getting banned from the app. However, Bumble’s onboarding only hints at this rule; more explicit UX writing or highlighting the company’s consent policy during onboarding would reduce ambiguity and instill greater trust in the product.

Bumble prioritizes user safety in its onboarding process, gently reminding new users of its guidelines.
Dating apps should establish their behavior guidelines—including policies around consent—during the onboarding process to increase user trust and safety.

Familiar visual cues such as icons can also pave the way for clearer interactions between users. In most dating apps, the heart icon means a user likes a person’s entire profile. Hinge, though, allows users to place hearts on parts of an account, such as a person’s response to a prompt or a profile image. This feature is not a means of granting consent, but it is a thoughtful attempt to foster a more nuanced conversation about what a user likes and doesn’t like.

Designing a Safer Future for Online Dating

As concerns around privacy and security increase, user safety in dating app design must evolve. One trend likely to continue is the use of multifactor authentication methods such as facial recognition and email checks to confirm identities and prevent fraud or impersonation.

Another significant trend will be the increased use of AI and machine learning to mitigate potential risks before they escalate. As these technologies become more sophisticated, they’ll be able to automatically identify and respond to potential threats, helping dating apps provide a safer experience. Government agencies are also likely to play a more significant role in establishing and enforcing standards for user safety, and designers will need to stay current on these policies and ensure that their products comply with relevant laws and guidelines.

Ensuring safety goes beyond design alone. Malicious users will continue to find new ways to deceive and harass, even manipulating AI technology to do so. Keeping users safe is a constant process of iteration led by user testing and evaluation of an app’s usage patterns. When dating app UX prioritizes safety, users have a better chance at falling in love with a potential match—and with your app.