
9 things the Social Web Foundation could do to prioritize safety (if they decide they want to)

Will they? Time will tell!

Railroad tracks. On the ties, a blue sign with white letters saying "Safety first"
Join the discussion in the Fediverse on infosec.exchange or lemmy.blahaj.zone!

A followup to More questions than answers: another post about the Social Web Foundation and the fediverses – and to Steps towards a safer fediverse. Originally published as a draft October 16; see the update log below for details.

"[E]ven though millions of people left Twitter in the last two years – and millions more are ready to move as soon as there's a viable alternative – the ActivityPub Fediverse isn't growing. One reason why: today's Fediverse is unsafe by design and unsafe by default – especially for Black and Indigenous people, women of color, LGBTQIA2S+ people, Muslims, disabled people and other marginalized communities."

– Mastodon and today’s ActivityPub Fediverse are unsafe by design and unsafe by default, 2023
"The Social Web Foundation is our best chance to establish the conditions in which the new social media operates with zero harm."

– SWF Executive Director Mallory Knodel, Announcing the launch of Social Web Foundation

There's a lot to say about the new Social Web Foundation (SWF). The Social Web Foundation and the elephant in the federated room included quotes from the coverage of their initial launch, and highlighted the potential upsides if they evolve in the right ways. That post then went into detail on the tradeoffs related to their engagement with Meta, who's reportedly contributing at least some of the $1 million in initial funding SWF is "closing in on" (whatever that means). But Meta's only one of the hot-button issues SWF critics have spotlighted, and More questions than answers: another post about the Social Web Foundation and the fediverses started broadening the focus to other open questions ... like whether SWF will focus on safety.

Today's ActivityPub Fediverse still isn't growing, and safety continues to be a problem. It's not hopeless, and there are some very encouraging signs of progress.1 Still, there's a lot more to be done.

SWF's mission talks about a "growing, healthy" Fediverse, but their initial plans don't seem to be paying much attention to the "healthy" part. For example:

  • SWF's initial list of projects doesn't include anything addressing current Fediverse safety issues.2
  • As far as I know, none of SWF's advisors are safety experts, and IFTAS' Jaz-Michael King is the only one of their launch partners who has a history of prioritizing safety.
  • SWF's list of launch partners didn't include any of the safety-focused software projects I mentioned in footnote 1.

Meta's involvement with SWF adds to the concerns. Once Threads turns on two-way federation, there are some acute potential safety threats to the rest of the Fediverse – especially given Meta's failure to moderate extreme anti-trans hate. But Meta's supporters have mostly dismissed these fears with vague claims that "we have the tools". Yeah right. And earlier this year, Meta's Rachel Lambert and Peter Cottle talked about the possibility of offering their ineffective racist, anti-LGBTQIA2S+, Islamophobic automated AI-based moderation tools to the rest of the Fediverse. SWF's research director Evan Prodromou is a big fan of AI-based moderation; in his recent book on ActivityPub, for example, most of the very limited discussion of moderation similarly talks about the potential for AI.3 What could possibly go wrong?

Of course, as I said in an earlier post:

"[N]othing's set in stone at this point. Most non-profits' initial projects, program, staffing, network of participants, and even mission evolve. My guess is that'll be the case for SWF as well."

So if SWF decides they do want to focus on safety – and to do it in a way that avoids making the Fediverse's equity problems worse or helping Meta more than it helps people in the Fediverse – here are some ideas.

Commit to spending at least X% on safety

On SocialHub, SWICG Trust & Safety Task Force lead Emelia Smith suggested that SWF should commit to devoting at least X% of its resources to safety. That would be a good first step, and if X is high enough it would send an important signal that they intend to prioritize this issue.

Note to any of SWF's current and potential funders who happen to read this: committing to spend X% of your fediverse budget on safety is a good idea for you too! Whatever SWF decides to do on this front, directly funding safety-oriented projects and organizations will complement SWF's efforts. As Samantha Lai and Yoel Roth highlight in Online Safety and the “Great Decentralization” – The Perils and Promises of Federated Social Media, "Funding is the most urgent gap."

What should X be? In a Mastodon poll I did, most people suggested 25% or 50% would be reasonable, although many thought it should be even higher. Of course, there were only about 45 responses to the poll, so take it with a grain of salt, but it's still an interesting data point.

Support diverse participation on the W3C standards group's Trust and Safety task force

On SocialHub, Evan mentioned that "SWF is going to support my work (and others’) at the W3C on ActivityPub". The new SWICG Trust and Safety task force is a great place to focus these resources. Diverse participation in this effort is vital ... and it's not realistic to ask marginalized people to volunteer in situations where others (like the many SWICG members employed by tech companies where participation is part of their job) are getting paid for their time.

Invest in consent-based tools and infrastructure

Consent is a core value of much (although certainly not all!) of the ActivityPub Fediverse – and, as Eight tips about consent for fediverse developers discusses, a great opportunity for a potential competitive advantage. But consent-based tools and infrastructure historically haven't gotten a lot of attention.

Work with people who are targets of harassment to develop tools for collaborative defense

Tools on other platforms like Block Party and Filter Buddy allow for collaborative defense against harassment and toxic content, and the same approach could also apply in a federated context – initially as standalone tools if necessary, but ideally integrated into existing apps and web UIs. The developers of both of those tools are on the Fediverse, and might well be willing to help.

And (not to sound like a broken record) both Block Party and Filter Buddy highlight that tools designed and implemented by (and working with) marginalized people who are the targets of so much of this harassment today are likely to be the most effective.
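To make the idea a bit more concrete, here's a minimal sketch of one piece of collaborative defense: subscribing to a community-curated shared blocklist. Everything here (the URL, the CSV format, the function names) is hypothetical, and real tools like Block Party involve far more than this, but it shows how small the technical core can be:

```python
import csv
import urllib.request

# Hypothetical URL for a community-curated blocklist; a real system would
# also need provenance, consent, and appeal mechanisms.
SHARED_BLOCKLIST_URL = "https://example.org/trusted-community/blocklist.csv"

def fetch_shared_blocklist(url: str = SHARED_BLOCKLIST_URL) -> list[dict]:
    """Fetch a shared blocklist in a simple (hypothetical) CSV format:
    account,reason,severity
    """
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    return list(csv.DictReader(text.splitlines()))

def apply_blocklist(entries: list[dict], block_account) -> int:
    """Apply only the most severe entries, via a caller-supplied
    block_account(account, reason) callable – so each person (or
    instance) keeps the final decision in their own hands."""
    applied = 0
    for entry in entries:
        if entry.get("severity") == "suspend":
            block_account(entry["account"], entry["reason"])
            applied += 1
    return applied
```

Note that the final decision stays with the subscriber rather than the list publisher – a design choice that fits the Fediverse's consent-based values discussed above.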

Support threat modeling work

Threat modeling is an important technique for improving safety (as well as security and privacy) that is only rarely used in the Fediverse. Improving privacy and safety in fediverse software sketches what a potential project could look like, and also includes the important point that

"Threat modeling needs to be done from multiple perspectives, so it's crucial that participants and experts include people of color, women, trans and queer people, disabled people, and others whose safety is most at risk – and especially people at the intersections."

Develop automated tools to help moderators

Automated tools can potentially be incredibly valuable for moderation and other aspects of trust and safety. Even simple tools can take over repetitive tasks and reduce the load on moderators; the r/AskHistorians team that Sarah Gilbert describes in Towards Intersectional Moderation: An Alternative Model of Moderation Built on Care and Power makes extensive use of basic pattern-matching tools. Basic statistical models and counter- and frequency-based logic are also useful. There's clearly a lot of low-hanging fruit here!
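For illustration, here's a minimal sketch of the kind of pattern-matching and frequency-based logic that can take a first pass before a human moderator decides. The patterns and thresholds are made up for this example, not drawn from any real instance's rules:

```python
import re
from collections import defaultdict, deque
from time import time

# Hypothetical patterns a moderation team might maintain; real rules
# would be curated by moderators and updated as harassment evolves.
FLAGGED_PATTERNS = [
    re.compile(r"\bkill yourself\b", re.IGNORECASE),
    re.compile(r"\bgo back to\b", re.IGNORECASE),
]

# Recent posting timestamps per account, for simple rate-based flags.
recent_posts = defaultdict(deque)
MAX_POSTS = 10        # more than 10 posts...
WINDOW_SECONDS = 60   # ...in 60 seconds looks like spam or a raid

def review_post(author: str, text: str) -> list[str]:
    """Return reasons a post should be queued for human review.

    This augments moderators rather than replacing them: anything
    flagged here still gets a human decision.
    """
    reasons = []

    # Pattern matching: cheap, transparent, and easy for mods to audit.
    for pattern in FLAGGED_PATTERNS:
        if pattern.search(text):
            reasons.append(f"matched pattern {pattern.pattern!r}")

    # Frequency-based logic: flag accounts posting implausibly fast.
    now = time()
    timestamps = recent_posts[author]
    timestamps.append(now)
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) > MAX_POSTS:
        reasons.append(f"{len(timestamps)} posts in {WINDOW_SECONDS}s")

    return reasons
```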

Do any AI-related work in partnership with AI researchers who take an anti-oppressive, ethics-and-safety-first approach

I'm very skeptical about the racist, sexist, anti-LGBTQIA2S+, Islamophobic (etc) AI technologies that Meta and others have adopted today, and about the exploitative and non-consensual data they've used to create the underlying racist, sexist, anti-LGBTQIA2S+ (etc) models that power them. Even so, there are a lot of great AI researchers in the Fediverse who take an anti-oppressive, ethics-and-safety-first approach – like Dr. Timnit Gebru and the rest of the DAIR Institute – so there's a real opportunity here to do it right.

Partner with IFTAS

IFTAS (a non-profit that focuses on federated trust and safety) is a SWF launch partner, but there hasn't been any discussion of concrete plans. Of course I might be biased here (I'm on IFTAS' Advisory Board) ... still, it seems to me that if SWF can use their connections with their corporate and foundation funders to help unlock additional funding for IFTAS, it could magnify the impact of both organizations – as well as address concerns I've heard from several trust and safety folks that SWF will unintentionally wind up competing with IFTAS for the same pool of funding.

And ...

These are only the tip of the iceberg. Steps towards a safer fediverse explores potential improvements in SWF's focus areas of people, protocols, and plumbing at length, and Threat modeling Meta, the fediverse, and privacy includes some recommendations for dealing with some aspects of the threat from Meta.

And I'm not the only one with opinions! In Online Safety and the “Great Decentralization” – The Perils and Promises of Federated Social Media, for example, Lai and Roth suggest that "larger platforms could straightforwardly share information about the moderation decisions they’ve already made." Of course, there's a big red flag here: many of Meta's moderation decisions are racist, sexist, anti-LGBTQIA2S+, Islamophobic, and favor authoritarian governments. Still, there's certainly room for significant progress in information sharing, and building on and reinforcing existing work such as FediMod FIRES could be a way to have significant impact.
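As one concrete shape information sharing could take, here's a hypothetical sketch of a shared moderation advisory plus a trust filter. To be clear, this is not FediMod FIRES' actual schema or any platform's real export format – just an illustration of the kind of structured data instances could exchange, with the trust filter reflecting the red flag above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationAdvisory:
    # Hypothetical fields – not the actual FediMod FIRES schema.
    subject: str                # the domain or account the advisory covers
    action: str                 # e.g. "defederate", "limit", "flag"
    reasons: list[str] = field(default_factory=list)
    issued_by: str = ""         # who published the advisory
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def usable_advisories(advisories: list[ModerationAdvisory],
                      trusted_issuers: set[str]) -> list[ModerationAdvisory]:
    """Keep only advisories from issuers the local admins explicitly
    trust – crucial given that, as noted above, large platforms' own
    moderation decisions can be biased."""
    return [a for a in advisories if a.issued_by in trusted_issuers]
```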

Of course, these are far from the only ideas ... and SWF's budget isn't big enough to fund everything. But there certainly is no shortage of worthwhile projects, so let's hope they fund at least some of them!

And let's also hope that whatever SWF winds up doing in this area, they approach it as something that benefits everybody, not just Meta-friendly instances and the corporate fediverse.

Notes

1 For example:

2 SWF does have a project focusing on end-to-end encryption (E2EE) in ActivityPub, presumably a followup to the work that Research Director Evan Prodromou and Product Director Tom Coates did on this topic over the summer, funded by the Ethereum Project. E2EE's a very good thing of course, but almost nobody I talk to thinks E2EE will help address the ActivityPub Fediverse's current safety problems. In a comment on SocialHub, Prodromou said his user interface research on incremental extensions to the Mastodon UI shows that "the lack of E2EE is a big inhibitor for people to use DMs at all, and that DMs are a crucial part of the stack for having real, human relationships happen on the Fediverse", so perhaps SWF sees this as a growth strategy. But from a safety perspective, people who need secure DMs should use Signal, not the Fediverse, and adding E2EE to software that isn't designed with a security mindset doesn't actually lead to secure communications.

3 Prodromou's current employer OpenEarthFoundation is building a platform to empower cities to decarbonize with AI and data-driven solutions, and he previously founded fuzzy.ai, a developer-focused AI company, so it's not surprising that he'd favor this direction. As far as I know, though, he's never discussed potential solutions for the biases and ineffectiveness of today's AI-based moderation systems.

Update log

October 14: initial limited-distribution unpublished draft

October 16: published as draft, with title "Will the Social Web Foundation prioritize safety?"

October 17-19: incorporating feedback on draft, thanks all!

October 20: published for reals, with a new title.

Image credit

"Safety First" by Ricardo Wong, via Flickr