Today, both our systems of public governance and the processes of civil society are digitally dependent. We use text messages to mobilize, social media to share the news, phone cameras to document abuse, and email, websites, online forums, and internet portals for our daily business of citizenship. Our dependence on them means that these digital systems are now part of the infrastructure of democracy, both directly when government agencies use them for governing and indirectly when we use them in civil society. Just as voting is a critical part of democracy, so are the digital systems that now undergird our campaigns, our election information exchange, the machines we use to collect and count our votes, and the systems by which we announce the results. These constitute the digital infrastructure for democracy.
Similarly, the websites, apps, internet tools, databases, algorithms, and communications tools used by nongovernmental organizations, civic associations, political protestors, community organizations, and, yes, philanthropy, form the core digital infrastructure for today’s civil society. In more ways than we can probably count, our daily practices of democracy are now dependent on digital data, infrastructure, and tools.
Since our democracy depends on them, these technological systems should be subject to some degree of scrutiny when they are being used as part of the decision-making apparatus of democratic institutions.
Institutional practices, such as open meetings, sunshine laws, media coverage, and reporting requirements, are the familiar ways that we "see into" the decision-making processes of our governments. But policymakers are increasingly dependent on data sets and algorithmic analysis—systems that are much more difficult to peer into than a meeting room. First, scrutinizing computational systems requires the ability to read and write software code. Second, much of the code driving these systems is proprietary, and only the companies that sell the tools can see the component pieces of the analytics process. Third, and perhaps of greatest concern, it is increasingly the case that even those who program the systems, write the code, and monitor the machines as they learn cannot actually explain what is going on.
In the E.U., the growing use of algorithms for decision making has led to the passage of laws, most notably the General Data Protection Regulation, protecting people's "right to explanation." Citizens have a right to an explanation of decisions that affect them. If those decisions are informed by digital data and algorithms, then the data used and the computational processes need to be explainable. Other countries and regions lack similar mechanisms for citizens to interrogate the systems—whether governmental or corporate—that make decisions about them.
The volume of data and calculations and the number of dependent variables in many algorithmic systems are too great and too complex to be succinctly explained. The question we face as democracies is whether such systems—even if they promise greater accuracy—should be tolerated if they can’t be scrutinized.
This takeaway was derived from Philanthropy and the Social Economy: Blueprint 2017.