
Bans as a Last Resort: What Digital Safety Failures Teach Us About Proactive Wellbeing Support

February 19, 2025

When digital safety measures fail, bans often step in as a last resort. A platform becomes overrun with harmful content, and instead of fixing the root issue, regulators and policymakers hit the big red button – blocking access, banning users, or even outlawing an entire service. It’s a dramatic move that makes headlines and gives the impression of strong action. But does it actually make digital spaces safer? 

Bans, whether on individual users, entire platforms, or specific types of content, are a reaction to problems that have already spiraled out of control. They rarely address the root causes of harm, whether that’s insufficient content moderation, a lack of digital literacy, or failures in platform accountability. And in many cases, they create new challenges: 

  • Bans are often difficult (or impossible) to enforce – people find workarounds, whether through VPNs, alt accounts, or shifting to other platforms. 
  • They can push harmful content underground – bad actors don’t disappear; they move to less-regulated spaces, where they can operate with even fewer restrictions. 
  • They create the illusion of action – a ban makes it look like the problem is solved, when in reality the conditions that allowed harm to spread in the first place remain unchanged. 

We’ve seen this play out time and time again, from de-platforming controversial figures to governments banning social media apps. Just recently, Australia moved to ban social media access for under-16s, citing concerns about online harms to young users. Meanwhile, in the U.S., the push to ban TikTok continues to be framed as a national security measure, but at its core, it reflects a broader failure to establish meaningful data protections for users. 

These high-profile bans highlight an uncomfortable truth: bans aren’t a proactive solution; they’re an admission that digital safety efforts have failed. Instead of waiting for platforms to reach a breaking point, what if we focused on early intervention, better governance, and wellbeing measures that reduce harm before it escalates? 

This blog explores why bans should be a last resort, how over-reliance on them signals deeper systemic failures, and what a truly proactive approach to digital wellbeing looks like. 

The Problem with Bans as a Fix-All Solution 

Bans make for strong headlines. Whether it’s an entire platform being outlawed, certain users being permanently de-platformed, or governments stepping in to regulate online spaces, bans often give the impression of decisive action. But in reality, they’re rarely the silver bullet they appear to be. 

The biggest issue? Bans don’t solve the root problem—they only contain the symptoms. If a platform is overrun with harmful content, banning specific users or services might temporarily suppress it, but it doesn’t address why the problem escalated in the first place. That’s why bans often end up being ineffective, difficult to enforce, and even counterproductive. 

Bans Are Often Unenforceable 

The internet isn’t easily fenced off. When governments try to ban an app or platform, workarounds quickly emerge. People find ways to mask their location with VPNs, create alt accounts, or simply shift to other platforms that are even harder to regulate. 

  • The U.S. government’s long-running attempt to ban TikTok is a perfect example. While concerns over data privacy are valid, banning an app with millions of active users is an enforcement nightmare. If TikTok is blocked, users will likely move to similar platforms – or find ways to access it anyway. 
  • Australia’s under-16 social media ban faces similar enforcement issues. Without a foolproof age-verification system (which raises its own privacy concerns), young users will continue to access these platforms through borrowed accounts or fake credentials. 

The reality is, banning a platform doesn’t stop the behaviors that made it problematic in the first place. Instead, it often just shifts them elsewhere, sometimes into even less regulated digital spaces. 

Bans Push Harmful Content Underground 

While bans might remove harmful individuals or content from mainstream platforms, they don’t stop the problem; they displace it. Extremist groups, harmful conspiracies, and illicit activities don’t disappear when banned from major platforms; they migrate to smaller, less-regulated spaces where moderation is weaker and accountability is nearly nonexistent. 

  • After high-profile figures were de-platformed from mainstream social media, many moved to alternative platforms like Telegram, Gab, and Truth Social, where there’s little oversight. Instead of stopping harmful movements, bans sometimes accelerate their fragmentation and make them harder to monitor. 
  • The banning of Telegram in some countries has sparked debate about whether restricting access to communication tools ultimately creates more harm than it prevents, especially when entire populations rely on these apps for news, coordination, and even emergency services. 

Rather than solving the problem, bans scatter it into darker corners of the internet, making it more difficult to track and address. 

Bans Can Create the Illusion of Action 

Perhaps the most dangerous aspect of bans is that they create the false sense that the problem has been “solved.” A platform or user gets banned, the headlines roll in, and it appears as though real progress has been made. But without addressing why the harmful behavior was allowed to flourish in the first place, the issue inevitably resurfaces, sometimes in even more damaging ways. 

  • The FTC’s ban on a location data company earlier this year was framed as a victory for digital privacy, but it raised a bigger question: How many other companies are still collecting and selling sensitive user data without proper safeguards? 
  • The U.S. Senate’s push to ban deepfake nudes is an important step in protecting victims, but without stronger proactive measures, new forms of AI-generated abuse will continue to emerge faster than bans can keep up. 

Bans might seem like strong regulatory action, but they’re often a band-aid on a much bigger wound. They don’t replace the need for proactive, systemic solutions that stop harm before it escalates. 

What Proactive Digital Safety Should Look Like 

Banning something might seem like a strong response, but if history has shown us anything, it’s that bans alone don’t work. The real question is: What should we be doing instead? 

A proactive approach to digital safety means identifying risks early, building safeguards that reduce harm, and ensuring that bans are truly a last resort rather than a first response. Instead of waiting for a crisis to spiral out of control, platforms, regulators, and policymakers should be focusing on prevention. 

Smarter Digital Guardrails Instead of Blanket Bans 

Bans are often blunt instruments that don’t account for nuance. A smarter approach would focus on targeted interventions that minimize harm without shutting down entire platforms or communities. 

Instead of banning social media for under-16s, as Australia has proposed, platforms could enforce stricter parental controls, verified age gating, and built-in digital wellbeing features. Completely banning young users will likely drive them to workarounds or alternative platforms that may have even fewer safety measures. 

Rather than banning TikTok over data privacy concerns, governments could establish stronger, enforceable data privacy laws. A universal standard for how all social media companies handle user data would be a much more effective safeguard than singling out one app. 

Banning platforms doesn’t eliminate the risks; it just displaces them. Instead, governments and tech companies should be creating stricter regulations that apply across all platforms, ensuring that no app can exploit users under the radar. 
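
To make the idea of guardrails over bans slightly more concrete, here is a minimal sketch of what graduated, age-aware defaults could look like on the platform side. It assumes a hypothetical account model with a verified date of birth; the field names, thresholds, and feature flags are invented for illustration, not any platform’s actual implementation.

from dataclasses import dataclass
from datetime import date

# Hypothetical policy thresholds; regulators and platforms would set the real values.
MINIMUM_AGE = 13
SUPERVISED_AGE = 16

@dataclass
class Account:
    user_id: str
    date_of_birth: date            # assumed to come from a privacy-preserving age check
    parental_controls_on: bool = False

def age_of(account, today=None):
    """Whole years since the account's date of birth."""
    today = today or date.today()
    dob = account.date_of_birth
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def apply_guardrails(account):
    """Return graduated feature flags instead of a blanket allow/deny decision."""
    years = age_of(account)
    if years < MINIMUM_AGE:
        return {"access": False, "reason": "below minimum age"}
    if years < SUPERVISED_AGE:
        return {
            "access": True,
            "require_parental_controls": True,
            "disable_targeted_ads": True,
            "screen_time_limit_minutes": 90,   # illustrative wellbeing default
            "direct_messages": "contacts_only",
        }
    return {"access": True, "require_parental_controls": account.parental_controls_on}

print(apply_guardrails(Account("u1", date(2011, 6, 1))))

The point of the sketch is the shape of the decision: graduated defaults and built-in wellbeing features rather than a single on/off switch for an entire age group.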

Closing the Gaps: Data Privacy, AI Threats, and Moderator Wellbeing 

One of the biggest weaknesses in digital safety is the failure to address systemic risks before they become crises. Rather than tackling core issues like data privacy, AI-generated threats, and moderator wellbeing, we often see reactive policies and bans that don’t get to the root of the problem. 

The FTC’s recent ban on a location data company was framed as a victory for user privacy, but it raises a bigger question: How many other companies are still quietly collecting and misusing personal data? A proactive approach would involve universal, enforceable data protection laws that apply to all companies, rather than chasing down bad actors one at a time. Without clear regulations, companies will continue to exploit data in ways that go unnoticed until another scandal forces a reactive crackdown. 

The U.S. Senate’s recent crackdown on deepfake nudes is another example of legislation struggling to keep up with rapidly evolving AI threats. While bans on harmful content are necessary, AI-generated abuse is evolving faster than enforcement mechanisms can respond. Instead of scrambling to ban deepfake content after the fact, the focus should be on building AI detection tools, requiring watermarking on synthetic content, and holding platforms accountable for stopping abuse before it spreads. 
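
As a sketch of what “stopping abuse before it spreads” could mean at the platform level, the snippet below checks an upload for a machine-readable provenance tag and routes untagged or unsigned media to detection or human review. The tag format, field names, and routing labels are invented for illustration; a real deployment would build on an industry standard such as C2PA content credentials rather than this ad-hoc JSON.

import json

# Invented tag key for illustration; real systems would follow a provenance
# standard rather than an ad-hoc metadata field.
PROVENANCE_KEY = "synthetic_media_provenance"

def check_provenance(metadata):
    """Classify an upload based on an attached provenance tag, if any."""
    raw = metadata.get(PROVENANCE_KEY)
    if raw is None:
        # No tag: treat as unverified and run detection, rather than banning outright.
        return {"label": "unverified", "action": "run_detection_model"}
    try:
        tag = json.loads(raw)
    except (TypeError, ValueError):
        return {"label": "tampered_tag", "action": "hold_for_review"}
    if tag.get("generator") and tag.get("signed", False):
        return {"label": "disclosed_synthetic", "action": "attach_ai_label"}
    return {"label": "undisclosed_synthetic", "action": "hold_for_review"}

# Example: an image upload whose metadata carries a signed generator tag.
upload_metadata = {
    PROVENANCE_KEY: json.dumps({"generator": "example-model", "signed": True})
}
print(check_provenance(upload_metadata))   # -> attach_ai_label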

Platforms also need to support their own safety teams. Content moderators experience high burnout rates, yet they’re expected to handle some of the most distressing material on the internet without sufficient mental health support. If platforms are serious about digital safety, they need to invest in structured wellbeing initiatives, including therapy access, resilience training, and mandatory decompression periods. Without these measures, the very people tasked with making digital spaces safer will continue to suffer from psychological distress and high turnover rates. 

True digital safety doesn’t come from banning individual threats—it comes from closing these foundational gaps so that bans aren’t necessary in the first place. 

Transparency in Enforcement: Clear Rules, Not Arbitrary Bans 

One of the reasons bans cause backlash is that they often seem arbitrary or politically motivated. Users rarely understand why a platform takes action, whether it’s banning an individual, blocking a piece of content, or restricting access in certain countries. 

Brazil’s Supreme Court is currently debating whether platforms should be liable for third-party content, even without a takedown order. This raises serious concerns about legal overreach and the potential for pre-emptive censorship, a problem that is made worse when platforms lack transparency in how they enforce content rules. 

Facial recognition bans, like the one being challenged in the U.S. Senate, also highlight the need for clearer regulation. Instead of banning an entire technology, governments should be setting clear rules on how, when, and by whom it can be used. 

What’s needed is better communication between platforms and users, including: 

  • Explanations for why content is taken down or accounts are banned. 
  • Appeals processes that allow users to challenge decisions. 
  • Transparency reports that show enforcement trends and policies. 

If platforms want to build trust, they need to make sure that moderation decisions don’t feel random or politically driven, especially in high-stakes cases like elections or misinformation takedowns. 
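
One way to make those three items concrete is to treat every enforcement action as a structured record that drives both the user-facing explanation and the aggregated transparency report. Below is a minimal sketch, with invented field names rather than any platform’s actual schema, of what such a record could carry:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationDecision:
    """One record per enforcement action: shown to the affected user,
    used for appeals, and aggregated (anonymised) into transparency reports."""
    case_id: str
    action: str                  # e.g. "content_removed", "account_suspended"
    policy_reference: str        # the specific rule that was applied
    explanation: str             # plain-language reason shown to the user
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appealable: bool = True
    appeal_status: str = "none"  # "none", "pending", "upheld", "overturned"

decision = ModerationDecision(
    case_id="2025-000123",
    action="content_removed",
    policy_reference="Community Guidelines §4.2 (harassment)",
    explanation="The post targeted a private individual with abusive language.",
)

Because the appeal status lives on the same record as the explanation and the policy reference, what the user sees and what ends up in the transparency report come from a single source of truth, which makes it harder for decisions to feel random.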

The Bottom Line: Bans Should Be the Last Resort, Not the First Response   

Bans are blunt instruments that often do more to mask a problem than fix it. Instead of relying on last-minute shutdowns, platforms and regulators need to take a proactive approach that prevents digital harm before bans even become necessary. 

A better digital safety strategy would focus on: 

  • Universal data privacy protections that apply to all platforms, not just foreign-owned ones. 
  • Stronger, more transparent content moderation policies that stop harmful content before it escalates. 
  • Wellbeing support for moderators and platform safety teams so they can do their jobs without burnout. 
  • Clear, enforceable rules on emerging technologies like AI-generated content and facial recognition, instead of reactive bans that don’t address the underlying issues. 

At the end of the day, bans should be a last-resort tool used in extreme cases, not a go-to strategy for handling digital safety failures. If we want to build a safer internet, we need long-term solutions that work—before it gets to the point where banning something feels like the only option left. 

 
