Supporting Good AI Governance in Municipalities: Findings and Recommendations, Part 1

Through our role as the Lead Technical Advisor in the Community Solutions Network, Open North has worked to bring leading ethics, policy, and governance research into productive connection with the concrete and practical realities facing practitioners in local government. This intersection is, we think, where real and lasting improvements to people's lives will be made. It is where the metaphorical rubber meets the road for using data and digital technologies to promote equity and sustainability. Taking an abstract goal, like tackling the digital divide, assessing potential solutions, like free Wi-Fi in public spaces, and then developing an implementation plan is a complicated process that intersects a wide range of policy, resource, support, time, and other contingencies. However, it is also essential to turning a well-intentioned idea into a responsible and effective project. Providing the resources, space, and support to work through this implementation process is the core purpose of our CSN project.

It was to this end that we held a Community of Practice workshop as a part of CSN to initiate such a practice-oriented discussion around ‘artificial intelligence’ technologies. We explicitly place the term in scare quotes to highlight that it is a broad and frequently disputed term covering a fuzzy range of tools, as well as debates about those tools. It was in this very fuzziness that we situated the conversation, in order to begin mapping out a data- and experience-driven understanding of how local governments were approaching this very topical problem.

There is a rapidly expanding mountain of literature on theoretical best practices around ‘AI’ technologies. This research is vitally important and has led to a strong international convergence around ‘AI’ ethics principles, particularly (with some differentiation in definition): “fairness, justice, and inclusion; transparency, explainability, and interpretability; privacy; responsibility and accountability; reliability, robustness, safety, and security.”1 However, there is considerably less research on how these abstract principles are being picked up, thought through, and implemented by practitioners. Some research groups, like Urban AI, BABL AI, CIDOB, and GovLab, have released reports on this intersection, but a much greater understanding of the needs, interests, barriers, and progress amongst local government practitioners is still desperately needed to ensure the kind of responsible implementation we strive for.

Together with our partners at Concordia University’s Applied AI Institute, we developed a workshop series to convene government practitioners at the municipal and provincial levels, with the goal of discussing the practicalities they face as they navigate the accelerating discourse and technological trends around ‘AI.’ Rather than the more customary research approach of assessing governments’ progress against a normative framework of good governance, we chose to explore the fuzzy realities of their interests, steps, barriers, and needs — all with the aim of better understanding how to support their attainment of good governance.

This workshop was held as a part of the CSN program; however, it is only the first in a larger series of workshops bringing together practitioners and experts in local government, academia, and civil society to work through a range of different ‘AI’ governance issues. The coming months will see at least two more workshops, both with our partners at the Concordia Applied AI Institute. If you are interested in participating, please reach out to us at thomas@opennorth.ca.

Insights

The workshop included over 25 representatives from a wide range of local and regional governments, differing in size, geography, and technological development. While this socio-economic, professional, and technological heterogeneity provided an important range of perspectives on the various questions, it also underscored several overarching positions that emerged across contexts.

Uncharted Territory

The most important point — and perhaps also the most unaccountably under-discussed in the existing research — was the widespread sense amongst practitioners that they are in uncharted territory and are ambivalently leading the way. This position is sometimes implied in reports, via analyses showing, for example, that many cities lack comprehensive ‘AI’ governance frameworks.2 However, such assessments do not capture the many intersecting practical realities that produce such a position in the first place. Understanding these realities is key to the successful implementation of an ‘AI’ ethics principle like governance. The following points are all implicated in the situation, and thinking through their resolution charts a way forward.

Education

With the literal and metaphorical sense of uncharted governance came the entirely logical need to educate those involved. Insufficient digital literacy on the topic is both wholly understandable, given the nascence of the issue, and a well-documented theme in the general literature on government digital transformation. Open North offers a trove of research and resources on the topic, and has worked for years to support local governments in building out their digital literacy and capacity. However, the issues in digital transformation evolve as quickly as the technologies, and this in-depth Community of Practice conversation unearthed several intersecting aspects of education that play crucial roles for ‘AI’.

Several participants had found that their residents had an insufficient understanding of the topic and were worried that, for example, announcing the trial of a new system could spark hyperbolic fears and even conspiracy theories, as in the case of 5G. Participants universally agreed that extensive public engagement on the topic was essential to the responsible and successful exploration of ‘AI’ – yet none thought they had a clear idea of how best to undertake such an engagement. This parallels BABL AI’s finding that governments think public engagement is important, but aren’t undertaking it;3 and adds the layer that, at least in part, they aren’t undertaking it because educating on ‘AI’ is considerably more complicated than on many other issues, and they don’t yet know how best to do it.

In conjunction with external engagement came several issues around internal literacy. Here too, unsurprisingly, local governments are well aware that a literacy deficit exists; however, the deficit is not uniform. Different kinds exist at different levels of the organizations. Participants described how many entry-level employees were worried that ‘AI’ was going to replace them, while those in senior positions were resistant to technological change in general and, as a result, dismissive of or uninterested in grappling with ‘AI.’ Interestingly, and in our opinion very importantly, many participants were also keenly aware of the difficulty of defining what the education of either group should even consist of. One participant framed it as the difference between “educating” and “asking questions,” i.e. between imparting established knowledge and fostering a culture of critical reflection. These two modes play out differently when engaging internally and externally.

Internally, participants noted that they were struggling to simultaneously build the comprehension needed to allay fears and enable a productive conversation, while also working through the necessary and complex questions of solution and risk assessment. The former is something of a prerequisite for the latter, and so doing both at once is challenging. In addition, several said that ‘AI’ necessitated an expansion of the usual analytical approaches to digital tools: established privacy and data management regulation compliance was no longer sufficient, and more comprehensive and complicated questions about impact, risk, data governance, ethics, transparency, and public engagement were unavoidable — but also equally new territory for almost everyone in the organization. Local governments need support growing internal competency as well as support working through the wholly new questions demanded by ‘AI.’ As BABL AI’s report points out, local governments with the strongest internal cultures of literacy and critical conversation also have better governance outcomes.4

New Strategies

Externally, participants were equally aware of the need to be transparent and inclusive in their “questioning” approaches to ‘AI.’ The importance of involving the public in critical deliberations about the risk, purpose, value, and governance of ‘AI’ was not lost on anyone. However, the chart, the clear strategy for how best to undertake such engagement, was still a large unknown. A known unknown, but unknown nonetheless. In addition to the lack of established policy, participants also raised practical questions about the costs and benefits of different types of engagement: how to involve the public, to what degree, and when or how frequently. These are expensive questions for smaller municipalities to experiment with, and as such there is risk, and resistance, here too to being the leaders. Further discussion revealed that this lack of an established strategy is also intertwined with several other missing metaphorical charts for the new territory, making the position all the more difficult.

The Community of Practice conversation uncovered a complicating tension at the heart of this unknown: a crucial — and quite legitimate — demand from the public is that the government have good governance frameworks for emerging technologies and practices. This is the basis for their trust in adequate risk management, and more broadly for democratic legitimacy as a whole. However, as one participant succinctly put it, the government “is not in the risk taking business,” and implementing these technologies entails risk. Thus what is needed is not just a new chart for community engagement, but also charts for other foundational issues like data governance (an important connection that others have noted too), intersectional risk and impact analysis frameworks, prototyping and testing strategies, and measurement and evaluation tools. Yet many medium-sized and smaller cities frequently do not have the resources to undertake this kind of development work.

Insufficient policy guidance from the federal and provincial levels was frequently cited as a serious constraint on participants’ abilities to act on this tension. It was pointed out that, on impact assessment at least, some established policy does exist: the Canadian federal government has an algorithmic impact assessment tool that, while officially required only for federal organizations, could be used by cities. Even so, many participants said that, taken as a whole, there was a wide-scale lack of leadership for them to safely follow, and considerable risk in going it alone. Yet, as previously pointed out and as was raised here again, there is also considerable risk in waiting: the public could lose trust, and governments could be unprepared to deal effectively with vendors. The case of Sidewalk Labs in Toronto was frequently cited as an example of a municipality that was unequipped to assess what was offered, and its outcome continues to serve as a clear and present warning. Indeed, Toronto’s response in the form of the Digital Infrastructure Strategic Framework was cited in the same breath as the kind of policy many participants needed — yet didn’t have the resources or official support to work through implementing.

It was at this point that the conversation revealed another practical aspect of the problem: the issue frequently isn’t that information or knowledge of theoretical best practices is lacking, but rather that the tangible resources for the implementation process described at the outset of this report are insufficient and, equally problematically, scattered throughout local government. This siloing and fragmentation of what few resources and skills exist further undermines the organization’s capacity to respond cohesively. As a result, the combination of heterogeneous literacy issues and insufficient official regulatory leadership is compounded by a severe cross-spectrum resource shortage, such that even when participants knew how to proceed they found they could not. This insight could well shed some useful causal light on BABL AI’s finding that “the most commonly cited reasons for not yet having AI governance were concerns about the ‘feasibility’ of doing so, and waiting for regulation.”5

The following Takeaways section deals with these points in greater detail.

Takeaways

This is just the first workshop in a series designed to first explore and map the issues, and then unpack, analyze, and design solutions. However, a few key analytical points stand out and are worth outlining in greater detail here. 

First, the difficult position local governments find themselves in between choices and risk is not simply an interesting ‘academic’ debate. Seen from the perspective of an apparently irresistible tidal wave of ‘AI’ tools being marketed to every unit from traffic to waste management to HR, it is a very real problem that needs to be dealt with responsibly. A central element in the implementation process of good governance principles is assessing the viability of a potential ‘AI’ solution and being able to reject it if it falls short. This is a crucial test, yet one on which there is little leadership support and few existing resources. Developing this kind of capacity to critically compare problem analyses to proposed solutions and to weigh risks and alternatives, while effectively involving the public, is essential.

Second, the combination of the complex problem of building sufficient digital literacy and the lack of official policy leadership has led not only to some known unknowns, but, more problematically, to several poorly known or even unknown unknowns around the broader governance work required for the responsible assessment and potential implementation of ‘AI.’ The issue of public engagement is largely understood, but others, like data governance or algorithmic impact assessments,6 are much less so. Both of these components are not only crucial to responsible governance; they also involve significant systemic analysis and so can provide important frameworks for digital transformation in general and ‘AI’ in particular. It is in this gap that outside support, of a kind that can introduce new and more comprehensive thinking, is most critically needed.

Third, the practical realities we have underscored throughout this report are also largely unique to each case. While there are trends, some of which we’ve identified here, the solutions to these issues will depend upon the specificities of their contexts. It is in operationalizing abstract project goals, as well as the relevant abstract governance principles, through specific project contexts that a comprehensive implementation process is most necessary. This is the intersection where, as we said at the beginning, the rubber meets the road, and it is here that Open North and this workshop series will continue working to provide local governments with the support they need.

  1. Jovana Davidovic et al., “The Current State of AI Governance” (Iowa City, IA: BABL AI, 2023), 11, https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf. ↩︎
  2. Sarah Popelka, Laura Narvaez Zertuche, and Hubert Beroche, “Urban AI Guide” (Urban AI, 2023), https://urbanai.fr/wp-content/uploads/2023/03/Urban-AI-Guide-2023-V2.pdf; Sara Marcucci, Uma Kalkar, and Stefaan Verhulst, “AI Localism in Practice: Examining How Cities Govern AI,” November 15, 2022, https://dx.doi.org/10.2139/ssrn.4284013; Jovana Davidovic et al., “The Current State of AI Governance” (Iowa City, IA: BABL AI, 2023), https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf. ↩︎
  3. Jovana Davidovic et al., “The Current State of AI Governance” (Iowa City, IA: BABL AI, 2023), https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf. ↩︎
  4. Jovana Davidovic et al., “The Current State of AI Governance” (Iowa City, IA: BABL AI, 2023), https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf. ↩︎
  5. Jovana Davidovic et al., “The Current State of AI Governance” (Iowa City, IA: BABL AI, 2023), 13, https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf. ↩︎
  6. Jacob Metcalf et al., “Algorithmic Impact Assessments and Accountability: The Co-Construction of Impacts,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, 735–46, https://doi.org/10.1145/3442188.3445935. ↩︎