The (D)Evolution of Discord's Reporting System

Discord's reporting system is often scrutinized, but how much merit is there to the critique? In this article, we'll cover EVERYTHING about Discord's reporting system, old and new.


We've all been in a situation online where we've seen something that doesn't belong. An obvious scam, a virus, impersonation, or even criminal activity. Discord communities are typically moderated to prevent this, but what if it's a DM, what if there's no moderator, or what if YOU are the moderator?

In this article, we're going to dive into the various forms Discord's reporting system has taken, Discord's endeavors to support moderators, and the concerns the community has about the current state of affairs.

Trust & Safety

Back in the early days of Discord, Trust & Safety was not at the scale it is now. If you had an issue on Discord, you would email Discord's abuse address and hope someone dealt with it. This early version of the support form didn't even have a Trust & Safety or reporting option. Once an email landed in the abuse inbox, it was ingested into Discord's helpdesk software, Zendesk. A customer since 2015, Discord is one of Zendesk's most prominent and publicly touted users.

Source: https://web.archive.org/web/20170301163122/https://support.discordapp.com/hc/en-us/requests/new

Call the Hotline

In July 2016, a group of staff & owners of large Discord servers got together to share data about malicious users, prompted by raids that affected their communities. At this time, there was very little tooling available for communities to moderate effectively, making a collaborative early warning system especially valuable. Sure, they could send in an email, but that took a long time to get handled. In the meantime, they needed a way to handle these malicious users quickly and in volume.

Enter Discord Hotline, the first endeavor in this area. In August 2016, a bot was created that allowed the members of Hotline (vouched in by existing members) to flag malicious actors and optionally cross-ban them.

Source: https://github.com/DiscordHotline/bot
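The real Hotline bot is open source at the link above. To make the cross-ban idea concrete, here's a minimal, hypothetical sketch (written with discord.py purely for illustration - it is not the actual Hotline bot's code or stack) of how a vouched-in member might flag a user and have that ban fan out to every participating server:

```python
# Hypothetical sketch of a Hotline-style cross-ban command.
# This is NOT the real DiscordHotline/bot (linked above); it only illustrates
# the idea of one flag propagating bans across opted-in servers.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required for prefix commands on modern API versions

bot = commands.Bot(command_prefix="!", intents=intents)

# Servers that opted in to cross-banning (illustrative IDs, not real guilds).
PARTICIPATING_GUILD_IDS = {111111111111111111, 222222222222222222}

@bot.command(name="flag")
@commands.has_permissions(ban_members=True)
async def flag(ctx: commands.Context, user_id: int, *, reason: str):
    """Flag a malicious user and cross-ban them in every participating server."""
    banned_in = []
    for guild in bot.guilds:
        if guild.id not in PARTICIPATING_GUILD_IDS:
            continue
        try:
            # discord.Object lets us ban by ID even if the user isn't a member yet.
            await guild.ban(discord.Object(id=user_id), reason=f"Hotline flag: {reason}")
            banned_in.append(guild.name)
        except discord.Forbidden:
            # The bot lacks ban permissions in this guild; skip it.
            continue
    await ctx.send(f"Flagged {user_id}; banned in {len(banned_in)} participating server(s).")

# bot.run("TOKEN")  # token intentionally omitted
```

The real system also had to handle vouching, logging, and per-server opt-outs; the point here is just the fan-out.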

Discord Steps Up

By July 2017, Discord Staff had taken notice of Discord Hotline and joined to engage with the community there. This coincided with the creation of the Discord Moderator Program, a volunteer program that Discord used to moderate their official communities and the communities of important strategic partners. Over the next few years, this program opened up to applications in several waves. Interestingly, application data was handled insecurely during a wave in 2019, leading to a potential breach that Discord had to disclose. Here's a snippet from an email sent to all applicants:

Hey there,
We're writing to let you know that the information related to your Discord Moderator Application (the name you provided, user ID, username, languages spoken, and answers to the questions) may have been available to unauthorized users due to a misconfiguration in how Google handles data submitted to Google Forms. The issue was immediately fixed, and while we don’t believe there was any malicious access, we wanted to let you know as soon as possible.

The Discord Mod Ecosystem is a whole different beast, but is emblematic of Discord's best efforts in transparency, communication, and safety to date. We'll talk about this in greater detail in an upcoming article so stay tuned!

So, Discord is now paying attention to the concerns of its larger communities and directly engaging with them, as well as retaining an official moderation team. But what about its Trust & Safety efforts? Well, in 2018 the support form got a dedicated Trust & Safety category, and legal counsel stepped up to lead the platform's Trust & Safety efforts. This led to generally perceived improvements in platform-level moderation across the site. In 2020, another large improvement to content reporting on the platform arrived...

In-App Reporting

In 2020, the Desktop app gained a report button - but only if you met certain requirements. If you were a Discord Partner, a select high-trust member of Discord Hotline, a member of a private official Discord professional networking server, or a member of the Discord Moderator Program, you were now given the option to right-click and report messages on desktop.

This early implementation of In-App Reporting (IAR) would create a Zendesk ticket, giving users the same responses as if they had used the Zendesk report form, including follow-ups for more information and confirmation that an investigation was initiated. Unfortunately, the API route backing this feature explicitly trusted the content sent to it. While there's no evidence this was ever abused, a bad actor with access to the experimental feature could have manufactured message content to report - which the T&S agent on the other end might then trust and action.
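Discord's actual report API isn't public, so to illustrate the class of problem rather than the real endpoint, here's a hedged sketch (all payload shapes, names, and the toy message store are invented) of the difference between a handler that trusts client-supplied message content and one that only accepts IDs and re-fetches the canonical message server-side:

```python
# Hypothetical sketch of the vulnerability class described above.
# The payload shapes, helpers, and store are invented for illustration;
# Discord's real report API is not public and certainly differs from this.
from dataclasses import dataclass

@dataclass
class StoredMessage:
    id: int
    channel_id: int
    author_id: int
    content: str

# Toy stand-in for the platform's canonical message records.
MESSAGE_STORE = {
    (42, 1001): StoredMessage(id=1001, channel_id=42, author_id=7, content="hello world"),
}

def handle_report_trusting_client(payload: dict) -> dict:
    """Vulnerable pattern: the agent sees whatever 'content' the reporter sent."""
    return {
        "reported_message_id": payload["message_id"],
        # A bad actor could fabricate this field to frame another user.
        "content_shown_to_agent": payload["message_content"],
    }

def handle_report_fetching_server_side(payload: dict) -> dict:
    """Safer pattern: accept only IDs, then load the canonical content server-side."""
    message = MESSAGE_STORE[(payload["channel_id"], payload["message_id"])]
    return {
        "reported_message_id": message.id,
        # Content comes from the platform's own records, not the reporter.
        "content_shown_to_agent": message.content,
    }

if __name__ == "__main__":
    forged = {"channel_id": 42, "message_id": 1001,
              "message_content": "[fabricated rule-breaking text]"}
    print(handle_report_trusting_client(forged)["content_shown_to_agent"])      # forged text
    print(handle_report_fetching_server_side(forged)["content_shown_to_agent"]) # "hello world"
```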

Alongside the addition of this reporting surface, Apple & Google introduced terms to their app developer policies requiring apps on their respective platforms (the App Store & Google Play Store) to allow the reporting of UGC (User-Generated Content). Here's a current term from Google's policy:

[...] Conducts UGC moderation, as is reasonable and consistent with the type of UGC hosted by the app. This includes providing an in-app system for reporting and blocking objectionable UGC and users, and taking action against UGC or users where appropriate. Different UGC experiences may require different moderation efforts. [...]

This led to the addition of a 'report message' button in the Discord mobile apps. This report button wouldn't create a Zendesk ticket like the Desktop IAR feature did, and was generally regarded as an inferior reporting surface, with unofficial advice at the time being to also report any content via the Zendesk form. It's speculated that these early reports were never seen by any Trust & Safety agents unless they met specific criteria, instead being dropped into the void.

In 2022, the Desktop IAR button was divorced from Zendesk and made to work similarly to the mobile IAR buttons, ahead of a general release that rolled out over the following year. The feature eventually became generally available across all platforms, with improvements along the way including automatically attaching surrounding messages to give agents better context, and report buttons in more places, like profiles. Early adopters of the Desktop IAR feature found the general release less consistent at actioning reports than the older Zendesk-ticket version, and largely moved back to filing Zendesk tickets while IAR slowly improved.

Voice is Sacred

To date, Discord has never provided an option for reporting voice channels. They've never accepted recordings of voice channels, or any other form of evidence for what happens in them. In 2023, Discord even released a blog post about the encryption of voice & video streams. There is one small outlier to this: the ill-fated stage discovery feature had a report button available for public stages. This report surface was manually moderated, and Discord staff could join these stages and directly confirm terms of service violations. It also led to a dedicated guidelines page for these stages.

The privacy of voice likely comes down to US wiretap laws. California is a two-party consent state, making it illegal to record people without explicit consent from all parties. This is why clips are allowed, but Discord itself likely cannot, for legal reasons, perform any moderation that requires recording or 'tapping'.

Discord lost its Zen

In July 2023, Discord silently removed the ability to make Trust & Safety reports on Zendesk. As detailed above, this was regarded as the most reliable report surface on the platform.

Source: https://www.reddit.com/r/discordapp/comments/14sx8fz/discord_just_silently_removed_the_ability_to/

I consulted a Trust & Safety expert with experience at several large social media companies about the decision to use, and then drop, Zendesk:

There are a few reasons a company may choose to have an off-site report form. The first, and the reason Discord did it, is that it's significantly easier to just use a CRM's (such as ZD) prefab tools. Their software allows you to easily make a form and manage communication, and it requires little to no engineering resources. This also allows you to easily see all communication a user has had with the company, and to pass tickets between departments (such as a general support agent transferring a hacked account ticket to T&S).

As we've covered, Zendesk was the first solution Discord had for support way back in 2015, so this absolutely tracks.

One other thing Discord cited was that an off-site report method created friction, i.e. it was more work for the user to create a report, thus preventing bad reports. And this argument worked for them in early years, because that friction kept reports from piling up. But there comes a point where the scale of the platform outweighs that, and Discord hit that point.

This is a critical point: while friction worked early on to keep the issue manageable, it couldn't hold up over time as reports only grew in number. On the decision to move to IAR:

It is better for both the user and the agent to report an issue directly where it happened. Providing a message link in the ZD form was Discord's way of simulating this, but in an ideal world if a user sees a problem they can report it right where they see it. It's simple for the user and it's helpful for the agent to know where to get started on an investigation.

This holds true, as we've covered in how the IAR reporting methods work. But what about detail? What information is important for actioning a Trust & Safety report?

The order of relevance for the elements of a report are (generally): category > location > description. The category selected for a report is going to tell me, broadly, what kind of behavior I'm looking for. For example, a hate speech report will put me on the lookout for slurs and harmful stereotypes.

Discord surfaces several report categories in IAR, so that's check one for Discord!

Next I look at the location, or where the issue took place. Sometimes users report the content perfectly and I don't even need to think about this, but if a report says it happened in a chat message I know, at least, I'm probably trying to locate a message.

IAR has surfaces on messages, servers (if you're on mobile) & profiles, allowing location to be captured directly as well. Check two!

The last thing I look at is the description in the report, and I know this might shock people. There are a few reasons for this; it is often redundant of the report reason, it includes details that aren't relevant, and most users don't know how to articulate issues succinctly. This does not mean I don't read descriptions, of course I do, but I say all this because I don't think it's the end of the world to not have it. Messaging platforms especially will already be reading into a full conversation to understand a problem, and having a description clarify a report is much rarer than you'd think.

So in reality, very little is lost for a T&S agent with the shift from Zendesk's report form to IAR; they still have their most critical data points. There are some blind spots in Discord's new reporting system, though: notably, IAR on mobile has more surfaces than on Desktop, making mobile a requirement for reporting some content.
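To make the agent's ordering concrete, here's a small, hypothetical sketch of a report record carrying those three data points in the order the agent describes reading them: category, then location, then description. The field names and category values are made up for illustration and are not Discord's internal schema.

```python
# Hypothetical sketch of the data points a report carries, ordered by how
# useful the agent above says they are: category > location > description.
# Field names and category values are illustrative, not Discord's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    category: str                      # e.g. "hate_speech"; tells the agent what to look for
    location: dict                     # e.g. guild/channel/message IDs; where to start looking
    description: Optional[str] = None  # free text; useful, but the least critical of the three

def triage_summary(report: Report) -> str:
    """Summarise a report in the order an agent would actually read it."""
    parts = [f"category={report.category}", f"location={report.location}"]
    if report.description:
        parts.append(f"description={report.description!r}")
    return " | ".join(parts)

# With IAR, category and location are captured automatically by the report
# surface; the free-text description is what the Zendesk form used to add.
example = Report(
    category="hate_speech",
    location={"guild_id": 123, "channel_id": 456, "message_id": 789},
    description=None,
)
print(triage_summary(example))
```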

This decision also left some things entirely unreportable, including files hosted on Discord, which Discord themselves acknowledge as a large vector for hosting malware - something that was reported on back in 2021. It also meant that you needed in-app access to a resource to be able to report it. That means being inside a potentially malicious Discord server - which naturally puts your own account at risk - just to report it to Discord. This has created a problem for third-party server lists, which have no good way to report abusive communities to Discord. I reached out to an administrator of a large Discord server list for comment:

There's one thing we'd want from Discord, a way to submit servers for investigation, ideally in bulk

We have plenty of servers in [...] database, we've banned a lot of them... but the team feels a bit helpless and annoyed because we can't do anything about them but just refuse listing them.

I went on to ask what the server list's team recommends users do if they encounter a malicious server on their site:

[...] contact Discord's Trust and Safety team, as it's not really our responsibility, however we do ban servers that we're told about.

While discussing the moderation issues this third-party server listing site faced, we touched on common tags and terms associated with child safety violations. The administrator was able to provide a removed listing whose description referenced dating; the problematic term in that description had recently been added to a sitewide ban list due to the rise of online dating servers targeting minors.

Who can you tell besides Discord?

In the absence of proper reporting surfaces on Discord, these third-party server listing sites have had to consider reporting content directly to parties like NCMEC and relevant law enforcement authorities. The administrator of the third-party listing site had this to say:

There were ideas thrown around in the team to forward information about said servers to proper enforcement agencies but it's hard to tell what effect, if any, would reporting them have.

The stonewall nature of these more serious reporting channels is definitely frustrating. For legal reasons, these agencies are often barred from sharing any information about acceptance, actionability, or progress on any report made to them, even in cases where affirmative action is eventually taken.

This specific issue has seen a lot of coverage, with the US Senate even getting involved, bringing many tech CEOs - including Discord CEO Jason Citron - to a hearing on child safety.

Source: https://apnews.com/article/meta-tiktok-snap-discord-zuckerberg-testify-senate-00754a6bea92aaad62585ed55f219932

Discord was recently legally compelled to open a new reporting surface for illegal content due to the passing of the EU Digital Services Act (DSA). This seemed like a step in the right direction; however, Discord only shows the page to accounts it deems to be located within the EU regions affected by the DSA. Visiting the page outside of those regions shows this blank page:

Source: https://discord.com/report

However, the alternative reporting avenues of NCMEC & relevant law enforcement authorities are still strong, with NCMEC's CyberTipline being a good option for reporting child endangerment and the FBI having a dedicated tip page. So the most serious criminal issues can still be reported to the right place. Outside of that, though, routine moderation issues are down to Discord to deal with. So, why the removal of the Zendesk report form?

Reporting doesn't scale

To fully understand the decision to remove the Zendesk report form, I consulted several Trust & Safety professionals. The consensus was that the sheer scale of reports likely made it very difficult for Trust & Safety agents to do their job. I wanted more clarity on this, so I reached out to a Trust & Safety professional with experience at Discord. Disclaimer: The following insights are drawn from the personal experience of a former agent and do not represent the views of Discord Trust & Safety as a whole or of specific T&S teams. For the latest statements from Discord about Trust & Safety, always refer to Discord's safety center.

Why do you think the reporting system might have changed (From Zendesk form to solely IAR)?

While I can't disclose specific reasons, I can say that the sheer volume of tickets Discord receives daily necessitates a streamlined process. Consider the volume as being in the thousands, and then multiply that number several times over—you begin to see why a change was made.

This sentiment was echoed by Trust & Safety professionals outside of Discord - report volume seems to be a pain point for every social media service that allows UGC. Queues grow and report volumes only increase with MAU, and as MAU is the main metric of success for these companies, growth will always come with an increase in abuse & reports of it.

For insight into the volume of reports Discord receives, we can reference their Transparency Reports, which are published periodically and go into great detail, covering the actions and reactions of Discord in their endeavor to keep the platform safe. Over the three months from October to December 2023, Discord received over 12 million reports. That's an average of over 130,000 reports every single day.

Source: https://discord.com/safety-transparency-reports/2023-q4
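For the curious, that per-day figure is simple arithmetic on the quarterly total (assuming the full 92-day quarter):

```python
# Back-of-the-envelope check of the per-day figure cited above.
reports_in_quarter = 12_000_000   # "over 12 million" reports, Oct-Dec 2023
days = 31 + 30 + 31               # October + November + December = 92 days
print(reports_in_quarter / days)  # ~130,435 per day, i.e. "over 130,000"
```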

So, it's easy to see that the scale of the problem... is a problem. But has the new reporting system made things better? Are reports easier to act on?

Can you say, approximately, how many reports would be immediately actionable (all message links provided etc.) as a percentage?

Before IAR, I would estimate that around 65-70% of reports were immediately actionable. I no longer have access to the relevant data, and even if I did, I couldn't provide an exact percentage.

Since the introduction of IAR, that percentage has likely decreased to around 50-65%, due in part to users' inability to provide additional context or follow up on their reports. It's worth noting that IAR reports often feel as though they disappear into a void, making follow-up challenging.

The tail end of this really drives home a larger point: users want to feel like their report is being dealt with. The Zendesk report form was preferred because you would always get a response, asking for more information or confirming it was being looked into. You got direct feedback, and the absence of that feedback makes every other surface feel inferior. This feeling is substantiated, though: one of the main perks of the Zendesk form was being able to directly communicate context to a Trust & Safety agent:

What report surfaces did you like the most as a moderator and as an agent?

As both a moderator and an agent, I preferred Zendesk.

From a server moderator's perspective, Zendesk allowed for detailed reports with message links and context, which was extremely valuable. If you're a moderator, you likely understand the importance of being able to include as much detail as possible.

As an agent, I appreciated the precision that came with most reports, especially when investigating specific types of tickets. Context remains crucial in our work, given the nature of the platform.

The shift to IAR was understandable and perhaps inevitable, but it did remove the average user's ability to provide additional context and message links, which I found less favorable as both a moderator and an agent. However, the introduction of in-app reporting is a significant improvement, and I encourage users to utilize it whenever they encounter inappropriate content or users.

You might recall the early days of Discord when there was no direct in-app reporting system, unlike many similar platforms. The current system is a marked improvement.

As covered in this article, it took 7 years from Discord's launch for report buttons to be widely available in the app - and there's no denying that the addition of those buttons IS an improvement over their prior, marked absence.

I'd like to close off this section with a final question:

Is there anything you think Discord could be doing better?

There's room for improvement. If it were up to me, I would reinstate the ability for users to include additional message links and context, effectively addressing two issues at once. That said, I do appreciate the current reporting system, but it's essential to remember [...]:

The transition from Zendesk to IAR was made for a reason. Discord identified flaws in the previous system and aimed to improve it. They're doing their best.

Behind the screen, there are people working hard to handle these reports, and they are doing their best as well.

Meeting the demand

Discord's high report volume necessitates an equally large Trust & Safety team. But how can Discord meet that demand? ✨Outsourcing✨

This is something many companies are, at some point, forced to do to meet the scale of their platforms. Discord has leveraged several firms for outsourcing support, including Teleperformance:

Teleperformance, based in Paris, has become a market leader in content moderation services, with more than 7,000 of these types of workers around the world, according to analysis by Market Research Future. It sees Colombia – where moderators told the Bureau it worked with Meta, Discord and Microsoft as well as TikTok – as a key hub for cementing this position of dominance.

This is controversial for many reasons - tied deeply to labor laws and union busting - but that won't be the focus of this article. Feel free to read more about it here. Another big name in this area is Keywords Studios, a Dublin-based agency focused on the games industry that has previously worked with Discord on content moderation.

The flaws that come with outsourcing are clear: agents aren't full-time employees and don't come with the same rigorous guarantees that vetted employees would, so they may be more mistake-prone when handling reports and... unfortunately, when handling their own security too. Discord experienced this first-hand when a third-party agent was breached, potentially exposing support communications.

Source: https://www.bleepingcomputer.com/news/security/discord-discloses-data-breach-after-support-agent-got-hacked/

If not report, how ban?

Taking another look at Discord's latest Transparency Report, we can see that actioned servers fall into two categories: proactive and reactive. Reactive is the conventional understanding of moderation: someone hits report, Discord investigates, deems the server to have broken its terms, and hits the big red ban button. As for the proactive actions? There is a very large number of proactive removals for Child Safety, which is down to Discord's efforts in machine learning-powered moderation to combat that particular abuse of its platform. It doesn't end there: Discord also employs proactive message scanning in discoverable servers and will take action as appropriate across all categories where it is able. Overall, 94% of removed servers were removed proactively.

Source: https://discord.com/safety-transparency-reports/2023-q4

Speaking at a US Senate hearing, Jason Citron had this to say about Sentropy, the AI company Discord acquired in 2021:

The tech sector has a reputation of larger companies buying smaller ones to increase user numbers and boost financial results, but the largest acquisition we’ve ever made at Discord was a company called Sentropy. It didn’t help us expand our market share or improve our bottom line. In fact, because it uses AI to help us identify, ban, and report criminals and bad behavior, it has actually lowered our user count by getting rid of bad actors.
Source: https://medium.com/sentropy/sentropy-x-discord-1ed5c4269896

While it's easy to look at the reactive numbers and say Discord isn't doing enough with reports, the reality is a lot more complex. Discord is actively investigating and taking down abuse of its service every single day, and only a minority of actions are taken as a direct result of reports. That's a good metric to see, and it shows Discord's dedication to improving the safety of its platform. However...

We don't feel heard.

This is the core of the issue with Discord's changes to the reporting surfaces. People want feedback, to know they've made a good report and something is going to be done about it. Discord has a communication issue that far overshadows any moderation issue. They recently hired their first Chief Communications Officer. From that article, we have an important quote:

Her team will also be responsible for protecting corporate reputation amid heightened scrutiny around children's online safety and content moderation.

Discord is aware of the scrutiny of its moderation, of its safety, and of the platform as a whole. It seems there's an effort across the board to clean up both the safety issues Discord faces and the perception of those issues. Once again, the issue comes back to visibility. Discord wants to be seen doing something about the bad actors on its platform, especially in the wake of the recent Senate hearing involving Discord's CEO, Jason Citron. While we've cited Transparency Reports in this article, we've yet to see any (as of August 2024) for 2024. We have, however, seen many articles on Discord's blog about the issues Discord is tackling.

Discord has also started responding to reports with email follow-ups. This is something many other platforms have implemented and is absolutely a step in the right direction.

As a final, personal note: every single discussion I've had with folks at Discord about the safety of the platform has been passionate. Trust & Safety is a very hard industry, and I have the utmost respect for the people doing hard work behind closed doors to keep the internet safe. If there's one takeaway you should all have from this article, it's that the hardest work is rarely visible. Trust & Safety and anti-abuse are a constant battle, and Discord is always forced to keep up...

To Evolve.