According to Discord’s announcement, all accounts will be set by default to a “teen-appropriate” mode until a user completes the age-verification process. Unverified users will not be able to join age-restricted servers and channels, participate in stage discussions, or access content labelled as “adult” without first confirming their age.
Discord says it will use an age-estimation model based on signals such as account activity — without analysing the content of private messages — to automatically infer a user’s age and, in some cases, exempt them from manual verification. If the system cannot determine a user’s age, the person may be asked to verify it either through AI-based facial analysis or by submitting a government-issued ID document to a third-party partner, which is supposed to delete ID images almost immediately after verification.
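The routing Discord describes — automatic estimation first, with facial analysis or an ID upload as fallbacks — can be sketched as a simple decision function. This is a hypothetical illustration only: the function name, thresholds, and confidence cutoff are invented for clarity and do not reflect Discord's actual implementation.

```python
from enum import Enum, auto

class VerificationPath(Enum):
    EXEMPT = auto()           # age inferred automatically; no manual check needed
    TEEN_MODE = auto()        # default restricted, "teen-appropriate" experience
    FACIAL_ANALYSIS = auto()  # AI-based facial age estimation
    ID_DOCUMENT = auto()      # government ID handled by a third-party partner

def choose_verification_path(estimated_age, confidence,
                             user_opts_for_id=False,
                             adult_threshold=18, confidence_cutoff=0.9):
    """Hypothetical routing logic; all thresholds are illustrative."""
    # If the activity-based model is confident, act on its estimate directly.
    if estimated_age is not None and confidence >= confidence_cutoff:
        if estimated_age >= adult_threshold:
            return VerificationPath.EXEMPT
        return VerificationPath.TEEN_MODE
    # Otherwise the user picks one of the two manual verification methods.
    return (VerificationPath.ID_DOCUMENT if user_opts_for_id
            else VerificationPath.FACIAL_ANALYSIS)
```

The point of the sketch is the ordering: manual verification is only reached when the automatic estimate is unavailable or low-confidence, which is why Discord can say some users will be exempted without ever uploading anything.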
The decision to roll out age verification globally is controversial in light of an identity-data breach disclosed in October 2025. Discord confirmed that, following a security incident involving an external service provider, around 70,000 images of government documents — including passports and driving licences — may have been exposed, along with usernames, email addresses and other personal data. The company terminated its relationship with the vendor and notified affected users and law-enforcement authorities, but the incident significantly damaged community trust.
The new age-verification policy also matters in the context of Discord’s growing role as an alternative to professional collaboration tools such as Slack and other workplace communication platforms. Although the service originally became popular among gamers, it is now widely used by business teams, technical communities and educational groups. This broad user base means that privacy and security changes such as mandatory age checks are likely to face strong resistance, particularly from users who value anonymity and friction-free access.
Opponents of the new rules argue that earlier data leaks linked to age-verification systems demonstrate the risks of asking users to upload identity documents, even if Discord says such materials are deleted quickly. Critics also warn that some users may abandon the platform or migrate to alternative services if they are forced to submit documents or feel their personal data is no longer adequately protected.
Discord says the primary goal of the new system is to improve child safety and comply with international online-child-protection regulations, while acknowledging that the rollout could result in the loss of some users.

