Understanding the ratings

Evaluation criteria

The ratings are based on six criteria: cost, capacity requirements, features, accessibility, ethics/transparency, and track record/reliability. The score for each criterion was standardized to a scale of 1-100, and each criterion carries equal weight. The overall score for each platform is the average across the six criteria:

1. Platform cost: This criterion includes two subcriteria: the cost of the software and any additional configuration costs. Platforms with lower software and configuration costs received more points.

2. Capacity requirements: This criterion focuses on the internal capacity required to configure and use a platform for participatory processes. Four variables were evaluated:

  1. Tech expertise required for configuration. Platforms received a higher score if they don’t require an expert to configure.

  2. Tech expertise required for maintenance. Platforms also scored higher if they don’t require an expert for maintenance.

  3. Hosting capacity. Platforms that are hosted locally received a lower score, while platforms on the cloud were scored higher. 

  4. Tech support. Platforms offering full-service support received a higher score. Support may include integration into existing processes (either virtual or on-demand) and onboarding support. 

3. Features: Platforms received one point for each feature offered; a feature was marked N/A if it isn't offered or the committee members could not determine whether it is. Thirteen possible features were identified:

  1. Idea collection, which allows users to submit proposals and modify them after submission.

  2. Survey.

  3. Proposal-drafting, which allows multiple users to cooperate and co-create a proposal together (for example, draft a policy in a shared space).

  4. Voting.

  5. Discussion forum.

  6. Sentiment analysis (which categorizes the emotional tone of discussions).

  7. Commenting and sharing.

  8. Mapping, which allows projects and user contributions to be connected visually to a particular location.

  9. Process planning tools that enable users to design an activity or process to match their needs.

  10. Communication with participants. 

  11. User verification and security.

  12. Quantitative data analysis.

  13. Heat mapping, which uses data visualization to indicate the strength of support for an idea.

    Note: This criterion is based on the type and quantity of features; it does not rate the features based on quality. In the future, we may explore how we can evaluate feature quality.

4. Accessibility. Platforms were given more points if they allow meaningful participation, including by those facing barriers. Seven variables were evaluated: 

  1. Number of countries where the platform has been used. (This metric is a reflection of the adaptability of a platform to different contexts.)

  2. Functionality in multiple languages.

  3. Accessibility for people with disabilities. (The platform is designed to accommodate people with visual or hearing impairments and/or clients can make further customizations.) 

  4. Integration with in-person activities. (Platforms that facilitate integration with in-person activities received a higher score.)

  5. Browser and technology compatibility. (Platforms compatible with the most-used browsers or other technologies received a higher score.) 

  6. Connectivity requirements. (Platforms suitable for communities with connectivity challenges received a higher score.)

  7. Mobile friendliness. (Platforms that are fully functional on mobile devices received a higher score.)

5. Ethics and transparency. Platforms received more points if they publish rules governing data use, protection of personal information and content moderation. Four variables were considered: 

  1. Open source: Is the code published under an open-source license? Or is it open core, partly open source and partly proprietary?

  2. Data policy: Is its data policy transparent? (For example, does the platform inform users how their data are used?) Does the platform have a public ethics standard? (For instance, does the platform sell user information to third parties?)

  3. Data protection: Are collected data protected from leaks and outside use?

  4. Content moderation: Is hate speech flagged? Is the content-moderation process transparent?

6. Track record and reliability: Platforms were given more points if they have been on the market longer, indicating maturity and reliability. They also received more points if they have a diverse client base. These two variables are summarized below:

  1. Length of time on the market. (A new platform that makes it into the ratings will progressively gain points over time.)

  2. Types of current users (for example, governments, schools, CSOs).
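The averaging described above can be sketched in code. This is only an illustration: the platform values below are made up, and the `standardize` rescaling formula is an assumption, since the report does not specify how raw scores were mapped onto the 1-100 scale.

```python
# Sketch of the overall-score calculation: six equally weighted criteria,
# each standardized to a 1-100 scale, averaged into one overall score.

CRITERIA = [
    "cost",
    "capacity_requirements",
    "features",
    "accessibility",
    "ethics_transparency",
    "track_record_reliability",
]

def standardize(raw: float, raw_min: float, raw_max: float) -> float:
    """Rescale a raw criterion score onto 1-100 (assumed linear rescaling)."""
    return 1 + 99 * (raw - raw_min) / (raw_max - raw_min)

def overall_score(scores: dict) -> float:
    """Equal-weight average of the six standardized criterion scores."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical platform with scores already standardized to 1-100.
platform = {
    "cost": 80,
    "capacity_requirements": 70,
    "features": 90,
    "accessibility": 60,
    "ethics_transparency": 85,
    "track_record_reliability": 75,
}
print(round(overall_score(platform), 1))  # prints 76.7
```

Because every criterion carries equal weight, a platform cannot compensate for a very low score on one criterion (say, accessibility) except by raising its average across the others.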


A coach teaches a resident in Rosario, Argentina, how to engage with the government online.