Moderating a civil discussion
Conversations can take many forms on digital participation platforms. Common examples include comment fields, opportunities to submit proposals, the ability to send private messages, and the option of gathering signatures for a petition. You should develop a plan for how you and your team will moderate these various types of conversations.
To ensure civil discourse and protect your users from hate speech, harassment, and other harm, keep in mind:
Some groups of people, especially women of color, experience higher rates of online harassment.
Online harassment doesn't always fit the definitions found in terms-of-service agreements.
So, what should you do?
Civil-discussion features
Some platforms remind users to be civil as they type. For example, the Expressão digital participation platform gently nudges its users toward more civil language as they compose their drafts.
These types of features have been adopted by mainstream social media platforms in recent years to help improve conversations. Google offers a free API, Perspective, to identify "toxic" language in comments and other digital conversations. While users can alter their language to “game” the algorithm, it's a start.
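As a sketch of how a platform might use Perspective, the snippet below builds the JSON request body that Perspective's `comments:analyze` endpoint expects for a toxicity check. The endpoint URL and attribute names follow Google's public documentation; `API_KEY` is a placeholder for your own key, and `build_analyze_request` is a helper name of our own invention.

```python
import json

# Perspective's AnalyzeComment endpoint (API_KEY is a placeholder).
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=API_KEY"
)

def build_analyze_request(comment_text, languages=("en",)):
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "languages": list(languages),
        "requestedAttributes": {"TOXICITY": {}},
    }

# The payload would be POSTed to ANALYZE_URL; the response contains a
# score between 0 and 1 under attributeScores.TOXICITY.summaryScore.
payload = build_analyze_request("Example comment to score")
print(json.dumps(payload, indent=2))
```

A platform could hold back or flag drafts whose returned toxicity score exceeds a threshold, or simply show the author a gentle prompt before posting.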
User authentication
Some digital participation platforms also promise to verify that each user actually lives in the eligible town or region. (We've noted in the comparison matrix which platforms have an identity-verification feature.)
Validating people's identification cards is one way to ensure eligibility. However, this process doesn't guarantee participants will always be civil. Requiring official identification can also depress participation, which may run counter to your goal of broad engagement: the more hurdles you put in front of participation, the fewer people will engage. For example, it's very common for digital participation platforms to require users to verify their email address before gaining access. Yet in many contexts, including those described by our interviewees in Chile and Jamaica, a large proportion of would-be participants either didn't have email addresses or didn't have access to them.
In some places, like Estonia, the government operates a digital-identity program. Residents thus are already familiar with the process, making verification less of a hurdle than in other contexts.
Despite the barriers they present (or perhaps, due to them), user-authentication features are often preferred by governments or other hosts when significant budgets or binding votes are involved. One compromise is to require a mobile phone number but not an identity card. Be wary of services that promise to "geo-verify" users by their location. These services base their determinations on users' IP addresses, and can be fooled with VPN connections.
Another way to protect results without erecting hurdles to participation is to focus on active fraud monitoring. You can use a web analytics service to monitor user activity and look for irregular patterns, like a proposal that receives an abnormal spike in votes over a very short time period.
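One simple sketch of this kind of fraud monitoring, assuming each vote event carries a timestamp: bucket votes into fixed time windows and flag any window whose count far exceeds the typical (median) window. The function name, window size, and threshold factor here are illustrative choices, not a prescription.

```python
from collections import Counter
from statistics import median

def flag_vote_spikes(vote_timestamps, window_seconds=3600, factor=10):
    """Flag time windows with an abnormal spike in votes.

    vote_timestamps: iterable of Unix timestamps, one per vote.
    Returns the sorted start times (in seconds) of windows whose vote
    count exceeds `factor` times the median per-window count.
    """
    buckets = Counter(int(t) // window_seconds for t in vote_timestamps)
    if not buckets:
        return []
    typical = median(buckets.values())
    return sorted(
        window * window_seconds
        for window, count in buckets.items()
        if count > factor * max(typical, 1)
    )

# Ten hours with one vote each, then 50 votes within hour eleven:
votes = [h * 3600 for h in range(10)] + [36000 + i for i in range(50)]
print(flag_vote_spikes(votes))  # the spike window starting at 36000s
```

A flagged window isn't proof of fraud on its own; it's a cue for a human to review the activity, for instance by checking whether the votes came from a handful of devices.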
Blocking and reporting
At the very least, users should be able to block someone who harasses them. And they should be able to report harassment to the platform provider. Test this process internally before you launch to make sure messages or flagged content go to a responsive person who can help resolve the issue.
Human moderation
Regardless of the various technical functionalities that promise to address harassment, you will want to allocate human resources to this challenge. If your participatory process involves meaningful decision-making power (and hopefully, it does), conversations can easily turn contentious and political. Some people will stop participating if they feel a process has become uncivil. Moderators can help keep conversations productive and limit the negative impacts of bad-faith participants.
To ensure that people of all ages feel safe when they participate, it is critical to allocate adequate resources to moderation. Before launching your process, identify moderators who speak the language(s) of the people you wish to engage. Participants might use languages or vernacular with which your team isn't familiar. Assign someone on your team the responsibility of regularly checking in to make sure the platform is a healthy, accessible place for everyone.
In addition, establish some basic guidelines for the discussion on your platform. For example, specify that comments or suggestions that violate laws or personally attack someone will be deleted.
Some platform operators offer to moderate conversations on your behalf for an additional fee. For example, PlaceSpeak offers this service as an add-on to its platform for 1,000 USD per month.
Next: Cybersecurity
Previous: User support