Evaluation criteria
The ratings are based on six criteria: cost, capacity requirements, features, accessibility, ethics and transparency, and track record and reliability. The score for each criterion was standardized to a scale of 1-100, and all six criteria were weighted equally.
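For illustration, here is a minimal sketch of this aggregation in Python. The exact standardization method is not spelled out here, so the min-max rescaling to 1-100, the criterion names, and the platform names below are all illustrative assumptions, not the committee's actual procedure.

```python
def standardize(raw):
    """Rescale one criterion's raw scores to 1-100 (min-max; an assumed method)."""
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1  # guard against division by zero when all scores tie
    return {platform: 1 + 99 * (score - lo) / span for platform, score in raw.items()}

def overall_rating(standardized_by_criterion):
    """Average the standardized criterion scores, giving each criterion equal weight."""
    criteria = list(standardized_by_criterion.values())
    platforms = criteria[0].keys()
    return {p: sum(c[p] for c in criteria) / len(criteria) for p in platforms}

# Hypothetical raw scores for two of the six criteria across three platforms.
raw_scores = {
    "cost": {"Platform A": 3, "Platform B": 1, "Platform C": 2},
    "features": {"Platform A": 10, "Platform B": 13, "Platform C": 7},
}
standardized = {name: standardize(scores) for name, scores in raw_scores.items()}
print(overall_rating(standardized))  # equal-weight averages on the 1-100 scale
```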
Platform cost: This criterion covers the cost of the software. Platforms with lower software and configuration costs (if any) received more points.
Capacity requirements: This criterion focuses on the internal capacity needed to configure and use a platform for participatory processes. There are five variables:
Tech expertise required for configuration. Platforms received a higher score if they don’t require an expert to configure.
Tech expertise required for maintenance. Platforms also scored higher if they don’t require an expert for maintenance.
Tech support. Platforms offering full-service support received a higher score. Support may include help integrating the platform into existing processes (either virtual or on-demand) and onboarding assistance.
Process planning guidance. This includes templates for specific types of participatory processes, and other backend tools that enable users to design all steps of participatory activities or processes according to their needs.
Hosting capacity. Platforms that provide flexible hosting options received a higher score.
Features: Platforms received a point for each feature they offer; a feature was marked N/A if it isn't offered or if the committee members could not determine whether it is. Thirteen possible features were identified (a scoring sketch follows this list):
Idea collection. This allows a person to submit their proposal.
Survey.
Proposal-drafting. This allows multiple people to co-create a proposal (e.g., draft a policy in a shared space).
Voting.
Discussion forum.
Commenting and sharing. This refers to social engagement features.
Mapping (allowing projects and user contributions to be connected visually to a particular location).
User timeline. Tools that allow users to visualize where they are in the process, e.g., the idea collection stage or the voting stage.
Notifications to participants.
User verification.
Data visualization (e.g., heat mapping, or using data visualization to indicate strength of support for an idea).
Quantitative data analysis.
Sentiment analysis (categorizing the emotional tone of discussions).
Note: This criterion is based on the type and quantity of features, i.e. whether or not the platform has each of these features. It does not rate the features on quality. In the future, we may explore how to evaluate feature quality.
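As referenced above, here is a minimal sketch of the feature tally, assuming N/A entries (not offered, or presence undetermined) simply contribute no points; the feature keys and the example platform are hypothetical.

```python
FEATURES = [
    "idea_collection", "survey", "proposal_drafting", "voting",
    "discussion_forum", "commenting_and_sharing", "mapping", "user_timeline",
    "notifications", "user_verification", "data_visualization",
    "quantitative_data_analysis", "sentiment_analysis",
]

def feature_score(platform_features):
    """One point per confirmed feature; N/A or missing entries add nothing."""
    return sum(1 for f in FEATURES if platform_features.get(f) is True)

# Hypothetical entry: True = offered, "N/A" = not offered or undetermined.
example_platform = {"idea_collection": True, "voting": True, "mapping": "N/A"}
print(feature_score(example_platform))  # 2
```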
Accessibility: Platforms were given more points if they enable meaningful participation, including by people facing barriers to access. Seven variables were evaluated:
Number of countries where the platform has been used. (This metric indicates whether a platform is adaptable to different contexts.)
Functionality in multiple languages.
Accessibility for people with disabilities. (Whether clients can customize the platform for people with visual and hearing disabilities.)
Hybrid integration with in-person activities (platforms that better integrate with in-person activities received a higher score).
Browser and technology requirements for users (platforms compatible with the most-used browsers or technology received a higher score).
Connectivity requirements (platforms suitable for communities with connectivity challenges received a higher score).
Degree of mobile device responsiveness or compatibility (platforms that are fully functional on mobile devices received a higher score).
Ethics and transparency: Platforms received more points if they publish rules governing data use, protection of personal information, and content moderation. Five variables were considered:
Open source: Is the code published under an open source license? Is the source code easy to find and recently updated?
Data policy: Transparency of the data policy (e.g., do platforms inform users how their data are used?) and ethical use of data (e.g., do platforms sell user information to third parties?).
Data protection: Are collected data protected from leaks and outside use?
Transparency of moderation: How transparent are the content moderation services? Do they include moderation against hate speech?
Raw data export: In theory, this feature increases both transparency and autonomy.
Track record and reliability: Platforms were given more points if they have been on the market longer, indicating maturity and reliability, and if they have a diverse client base. These three variables are summarized below:
Length of time on the market. (A new platform that makes it into the ratings will progressively gain points over time.)
Profile and breakdown of institutional users (for example, governments, schools, CSOs).
Diversity of contributors. (This metric indicates the extent to which a platform relies on multiple actors to be sustained. For instance, if only one company contributes to a platform and that company fails, the platform would disappear.)