
Civic and Political Leadership in the Age of AI: High-Tech Worlds and the Challenge of Democracy, Freedom and Human Dignity.


Kristina Nano

Doctoral Candidate in Leadership, James Madison University (JMU), USA


Abstract


Artificial intelligence and high-tech systems are reshaping how political will is formed, how information flows, and how citizen participation is organized.


The state, the market and digital platforms have come to administer an expanding share of societies’ data, attention and trust, while algorithms filter public discourse through logics that operate largely in the background (Manheim & Kaplan 2019; Tuohy-Gaydos 2024).


Research over the past decade on AI, democracy and digital leadership shows that the generation of artificial content, political micro-targeting and practices of digital authoritarianism directly affect the core pillars of democratic order, particularly representation, accountability and trust (Kreps & Kriner 2023; Freedom House 2018; Mantellassi 2023; Nature Editorial 2025).


Against this backdrop, the study draws on a conceptual and comparative analysis of the international literature (2018–2025) on AI, democracy and digital leadership, with a particular focus on risks to representation, accountability and trust, and proposes a new orienting framework for public leadership, termed the Digital Civic Stewardship Model.


The article articulates seven pillars of this model, situates them in relation to the existing literature on digital democracy, and illustrates the model’s operation through a micro-case on AI-enabled political micro-targeting.


The framework is interwoven with contemporary developments in civic technologies, democratic digital infrastructure and citizenship education programmes in the online ecosystem, with the aim of shaping a leadership profile that protects the inalienable dignity of the person, freedom of conscience and the integrity of institutions (Garcia & Hubbard 2025; CIVICUS 2025; Sefton-Green 2020).


In conclusion, the study argues that civic and political leadership in the age of algorithms finds its deepest meaning in defending the space where the human being remains more than their data profile and where the voice of conscience stands beyond any predictive model.


Introduction


Every technological revolution confronts society with the question of how it will use the new power placed in its hands.


Artificial intelligence, global platforms and the data economy create an environment in which even the simplest communicative act leaves a trace, is collected, analysed and transformed into epistemic and political capital.


Political will is formed in the presence of an algorithmic architecture that filters reality according to commercial, strategic and often geopolitical criteria (Manheim & Kaplan 2019).


Over the past decade, the literature on AI and democracy has brought to light a persistent tension.


Studies such as Kreps and Kriner (2023) analyse how the mass generation of political messages by generative models blurs the distinction between authentic citizen voices and simulated communication, reshaping elected representatives’ perceptions of the electorate’s expectations.


Freedom House (2018) and GCSP (Mantellassi 2023) broaden this concern to the global level through cases in which digital infrastructure is used for surveillance and political disciplining.


On the other hand, programmes such as the Digital Democracy Initiative and the Ash Center’s work on digital civic infrastructure show that technology has a tangible capacity to become a new bridge between citizens and institutions (Digital Democracy Initiative 2023; Garcia & Hubbard 2025).


This article seeks to formulate a normative response within this field of tension by asking what profile of civic and political leadership safeguards democracy, freedoms and human dignity in a world permeated by AI.


The answer is developed in three steps: a brief reading of the risks that AI amplifies for democracy and the digital public sphere; an analysis of the opportunities created by civic technologies, digital democracy platforms and citizenship education programmes; and the proposal of the Digital Civic Stewardship Model, illustrated through a micro-case on political micro-targeting from which practical implications for public leadership are drawn.


In this way, the article treats leadership not merely as a communication skill in digital media, but as active stewardship of the algorithmic architecture of the public sphere, a role that takes full shape in the Digital Civic Stewardship Model.


Theoretical Framework: Public Leadership in the Technological Architecture of Our Time


In this article, public leadership is understood as the capacity to orient the community toward the common good through vision, character and the construction of institutions that retain their weight beyond electoral cycles.


The scholarly tradition links this role to integrity, responsibility and the ability to hold together diverse interests around a set of principles that do not collapse under the pressures of the moment and that confer dignity on both self and others.


Developments in digital leadership add the dimension of the strategic use of technology, work within interdisciplinary teams and the reading of social media dynamics (Villaplana Jiménez & Fitzpatrick 2024).


In the sphere of civic participation, studies on digital democracy and online activism show that social platforms, advocacy tools and e-participation instruments intensify communication between citizens and institutions, while also generating new forms of fragmentation, polarization and fatigue (CIVICUS 2025; Sefton-Green 2020).


This context calls for a form of leadership that sees technology in an organic relationship with the social body: local communities, families, associations, faith communities and civil society organizations.


In this sense, civic leadership appears as the capacity of individuals, organizations and movements to build trust, facilitate cooperation and safeguard spaces for citizen agency, both in the physical world and in the online ecosystem.


Political leadership is expressed as the exercise of public power in a way that guarantees real freedom, the rule of law, accountability and protection of those most exposed to crises and abuses.


These two dimensions of leadership converge in the duty of stewardship over the space in which civic will is formed, a duty that, in the following sections, takes concrete shape through the Digital Civic Stewardship Model.


That space is the primary terrain on which democracy is tested in the age of AI.


Risks: AI, Disinformation and Digital Authoritarianism


The risks that AI poses to democracy can be grouped into three broad areas: the distortion of political communication, the concentration of control over information, and the erosion of privacy and informational self-determination.


Kreps and Kriner (2023) describe how generative models produce political messages, emails and correspondence that mimic citizens’ style, making it increasingly unclear which reactions truly come from the electorate and which are issued by automated campaign systems.


Elected officials face a feedback landscape in which authentic signals are embedded in a sea of correspondence generated by intelligent systems.


Reports by Freedom House (2018) and Mantellassi (2023) on digital authoritarianism set out the many ways in which control-oriented regimes use digital infrastructure for mass surveillance, filtering of public discourse and punishment of dissent.


Social media, payment platforms and social credit systems become tools of discipline and sanction.


Even societies with democratic institutions face the risk of a gradual transplanting of this data-control logic into the public digital sphere whenever the culture of freedom proves too weak to set clear boundaries.


A third dimension relates to privacy and informational self-determination.


Manheim and Kaplan (2019) analyse how artificial intelligence amplifies risks to privacy and democratic processes through behavioural profiling, content analysis and preference prediction.


Editorials such as Nature’s “AI has a democracy problem” underline that citizens inhabit the online environment with a sense that every trace of their behaviour can be turned into an instrument of influence against them (Nature Editorial 2025).


Trust in institutions, media and democratic procedures themselves thus acquires new weight and becomes more fragile in the face of invisible practices of data processing.


Opportunities: Civic Technologies and Democratic Digital Infrastructure


Alongside risks, the literature on civic technologies and digital democracy presents examples that demonstrate significant potential for strengthening participation and accountability.


The Digital Democracy Initiative, supported by European and Danish actors, aims to create an ecosystem in which platforms, advocacy tools and digital social innovation provide citizens with tangible opportunities to influence public policy (Digital Democracy Initiative 2023).


The Ash Center, through the concept of “digital civic infrastructure”, reads technology as an institutional layer of democracy itself (Garcia & Hubbard 2025). The MAPLE platform, which opens the legislative process in Massachusetts to citizen input and makes the trajectory of testimonies through law-making traceable, serves as a concrete example of this approach.


Similar initiatives on digital participation across Europe point to the potential of online participatory budgeting, hybrid citizens’ panels and open consultations (CIVICUS 2025; Hornstein 2025).


Education for democracy in the digital ecosystem takes on a central role.


Sefton-Green (2020) argues that curricula which combine media literacy, the ethics of technology and deliberative skills form citizens who read the online environment with clarity, use technology to raise public issues and maintain a live connection between screen and concrete reality.


The EDDA programme in Europe embeds this formation as a stable component of citizenship education (EDDA Project 2023).


Taken together, these developments prepare the ground for a leadership model that treats technology as civic service infrastructure for persons and communities.


The Digital Civic Stewardship Model


In light of this literature, the study introduces the Digital Civic Stewardship Model as an orienting framework for civic and political leadership in the age of AI.


The model rests on seven interdependent pillars.


First, value clarity and personal dignity emerge as the ethical core.


The digital civic steward sees each person as an end in themselves, not as a data source or strategic target, and uses the dignity of the person as a criterion for assessing every form of AI use in the public sphere.


Second, responsible freedom in the online ecosystem defines the horizon of action.


The digital civic steward reads freedom in intrinsic relation to responsibility for how content is created and disseminated.


Political communication strategies avoid emotional manipulation of attention and aim instead at honest information, reasoned debate and time for reflection.


Third, intermediary institutions appear as the sustaining network.


The digital civic steward strengthens families, local communities, associations, faith communities and civil society organizations as a connective fabric that keeps people rooted in real relationships.


Whenever this network is strengthened, global platforms lose their claim to being the sole locus of belonging and meaning.


Fourth, the space of truth and trust stands as a precondition for public discussion.


The digital civic steward supports professional media, fact-checking, responsible reporting and a culture in which errors are acknowledged and corrected openly.


This creates a field in which truth retains a special status and trust in particular sources of information keeps deliberation possible.


Fifth, algorithmic transparency and the explainability of automated decisions act as guardians of technological power.


The digital civic steward promotes laws and regulations that impose duties of explainability, independent auditing and human responsibility for decisions in which algorithms play a significant role (Mahapatra 2025; OECD 2022).


Legal architecture thus appears as an indispensable filter for the integrity of AI applications in the public domain.
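
What such duties of explainability and human responsibility imply in practice can be sketched minimally as a decision record that travels with every automated determination. The field names and the example system below are hypothetical, offered as an illustration of the principle rather than as any existing regulatory schema.

```python
# Hypothetical sketch of an auditable record for an automated public-sector
# decision; field names and the example system are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    decision_id: str
    system_name: str            # the algorithmic system that produced the decision
    model_version: str          # exact version, so an audit can reproduce behaviour
    inputs: dict                # the data the decision was based on
    outcome: str                # the decision as communicated to the citizen
    explanation: str            # plain-language reason owed to the affected person
    responsible_official: str   # the human or office that answers for the decision
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example record that an independent auditor or the affected citizen could inspect.
record = AutomatedDecisionRecord(
    decision_id="2025-000123",
    system_name="benefit-eligibility-screener",
    model_version="1.4.2",
    inputs={"household_income": 18500, "dependents": 2},
    outcome="flagged for manual review",
    explanation="Income near the eligibility threshold; routed to a human caseworker.",
    responsible_official="Department of Social Services, Unit B",
)
print(record.outcome, "-", record.responsible_official)
```

The point of such a record is that the explanation and the accountable official travel with the decision itself, so that auditors and affected citizens have something concrete to interrogate.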


Sixth, digital civic infrastructure forms the bedrock of everyday participation.


The digital civic steward builds and supports official platforms for public consultation, participatory budgets, online citizens’ panels and mechanisms such as MAPLE that make democracy a continuous and traceable practice, rather than a rare electoral event (Garcia & Hubbard 2025).
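
What a traceable mechanism of this kind can look like is sketched below in minimal form; the stages and identifiers are hypothetical and do not represent MAPLE’s actual data model, only the underlying idea that every citizen submission carries a visible history of where it has gone.

```python
# Minimal, hypothetical sketch of traceable citizen input; the stages and
# identifiers are invented and do not reflect any platform's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Testimony:
    author: str
    bill: str
    text: str
    history: list = field(default_factory=list)  # append-only trail of stages

    def advance(self, stage: str) -> None:
        """Record that the testimony has reached a new stage in the process."""
        self.history.append((stage, datetime.now(timezone.utc)))

t = Testimony(author="resident-4821", bill="H.1234", text="Support, with amendment.")
for stage in ("submitted", "assigned to committee", "cited in committee report"):
    t.advance(stage)

# The author, and anyone else, can follow the submission through the process.
for stage, when in t.history:
    print(f"{when:%Y-%m-%d}  {stage}")
```

The design point is the append-only history: transparency here is not a report produced after the fact but a property of the record itself.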


Seventh, transnational cooperation creates a protective frame for democratic order.


The digital civic steward engages in common standards for algorithmic transparency, the protection of activists and journalists, and mechanisms to address malign interference in elections and public debate (Tuohy-Gaydos 2024; Hornstein 2025).


In this way, an architecture of shared values and rules emerges that binds democratic orders together across borders.


Taken together, these pillars define the digital civic steward as a leadership figure who orients technology toward service rather than domination.


This profile treats technological power as a mandate of responsibility, understands the digital public sphere as part of the common good, and keeps the human person at the centre of every decision.


Micro-Case: Political Micro-Targeting and the Erosion of the Deliberative Dimension


To make the operation of the Digital Civic Stewardship Model more concrete, it is useful to look closely at the mechanics of AI-enabled political micro-targeting.


A political actor gains access to a platform that contains fine-grained data on habits, fears, aspirations and online behaviours of residents in a given electoral district.


A machine-learning model is trained on this base to identify distinct segments: young parents facing economic insecurity, older people living alone, small entrepreneurs exposed to financial risk, and students struggling to enter the labour market.


For each segment the model identifies topics that trigger strong emotional reactions and language patterns that increase the likelihood of a favourable response.
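
The mechanics of this segmentation step can be made concrete with a minimal sketch. The feature names and data below are invented for illustration, and the clustering shown is a generic technique (k-means) rather than the pipeline of any actual campaign platform.

```python
# Minimal, hypothetical sketch of the segmentation step described above.
# Feature names and data are invented; this is not any real campaign system.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-resident features: each row stands in for one profile
# assembled from behavioural and demographic data.
feature_names = ["economic_insecurity", "social_isolation",
                 "business_risk_exposure", "job_search_activity"]
profiles = rng.random((500, len(feature_names)))

# Cluster residents of the district into a handful of segments.
X = StandardScaler().fit_transform(profiles)
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# For each segment, surface its most pronounced feature, which a campaign
# would treat as that segment's emotionally salient "trigger topic".
for label in range(model.n_clusters):
    members = profiles[model.labels_ == label]
    trigger = feature_names[int(members.mean(axis=0).argmax())]
    print(f"segment {label}: {len(members)} residents, salient theme: {trigger}")
```

In the kind of pipeline described here, a generative model would then be layered on top of these segments to produce the personalized messages discussed in the next step; the sketch stops at segmentation because that is where the fragmentation of the shared public begins.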


In the next phase, the system generates personalized texts, short videos and visual messages.


Each group receives its own narrative, written in a style close to its daily language, while the overall political programme remains in the background.


Communication unfolds through targeted channels, outside a shared horizon of debate and without guaranteed exposure to counter-arguments or to the real costs of policies.


In terms of democratic theory, this process weakens the deliberative dimension of collective will and replaces it with a fragmented aggregation of preferences that are emotionally stimulated, informationally isolated, and algorithmically managed, which in turn makes the legitimacy of political decisions more fragile.


The Digital Civic Stewardship Model responds to this mechanism by affirming several clear boundaries: transparency regarding micro-targeting practices, a duty to accompany messages with verifiable information about concrete policies, and real possibilities for counter-balancing by media, civil society and independent institutions.


The digital civic steward treats the integrity of political will as a shared asset that requires permanent protection.


Practical Implications for Civic and Political Leadership


From the preceding analysis, several key directions for action emerge at local, national and transnational levels.


At the normative and legal level, lawmakers who take the ethical governance of AI seriously build frameworks that balance innovation with the protection of fundamental rights.


Policies for algorithm auditing, the right to explanation in automated decisions, limits on intrusive surveillance and the promotion of AI applications that improve public services form one central pillar of this approach (Mahapatra 2025; OECD 2022).


In this regard, the Digital Civic Stewardship Model serves as a guiding criterion for evaluating any new policy on AI and data in light of its impact on representation, accountability and trust.


At the institutional level, parliaments, municipal councils and ministries develop digital civic infrastructure that turns citizen participation into a continuous practice.


Carefully designed platforms such as MAPLE in Massachusetts show that legislative processes gain transparency and participatory density when citizen testimonies and proposals are channelled through structured pathways of listening and response (Garcia & Hubbard 2025).


In the sphere of education and civic formation, school systems and universities integrate digital environment education at the heart of curricula.


This includes media literacy, the ethics of technology, deliberative skills and training for participation in digital democracy platforms (Sefton-Green 2020).


Programmes that enable young people to design concrete technological projects in service of their communities link knowledge to service and make the connection between technology and the common good tangible.


At the transnational level, democratically oriented states coordinate standards for algorithmic transparency, the protection of activists and journalists, and mechanisms to address malign interference in elections and public debate (Tuohy-Gaydos 2024; Hornstein 2025).


Such cooperation becomes a protective canopy over national public spheres and generates an architecture of values that keeps democratic orders connected beyond state borders.


Conclusions


The age of AI and high-tech systems places civic and political leadership before a clear choice.


One path sees the algorithmic architecture as a tool for concentrating attention, optimizing messages and consolidating power.


The other reads the same architecture as a field of service, where prudence, self-restraint and responsibility are required in every decision that touches human lives.


The Digital Civic Stewardship Model offers a guide for this choice.


Its seven pillars place the dignity of the person at the centre, read freedom in inseparable relation to responsibility, strengthen the role of intermediary institutions, and defend the space of truth and trust.


Digital civic infrastructure, algorithmic transparency and transnational cooperation appear as instruments that translate this vision into practice.


Civic and political leadership in the age of AI is ultimately measured by its capacity to preserve the space in which the human being remains more than their data profile and where the voice of conscience stands above every algorithm.


It is in this daily stewardship that technology itself finds its true meaning, as an instrument that enlarges freedom and elevates dignity, far from any reduction of the person to a mere object of measurement, prediction, or control.


References


Ash Center for Democratic Governance and Innovation 2025, Experiential civic learning for American democracy, Harvard Kennedy School, Cambridge, MA, viewed 18 November 2025, <https://ash.harvard.edu>.


CIVICUS 2025, Digital democracy initiative: synthesis report, analysis of the digital democracy ecosystem, CIVICUS, Johannesburg, viewed 18 November 2025, <https://www.civicus.org>.


Digital Democracy Initiative 2023, Digital Democracy Initiative 2023–2026: programme document, Ministry of Foreign Affairs of Denmark & Team Europe Democracy, Copenhagen/Brussels, viewed 18 November 2025, <https://um.dk>.


EDDA Project 2023, Education for Democratic Digital Awareness (EDDA): project summary report, [responsible institution], [city], viewed 18 November 2025, [URL].


Freedom House 2018, Freedom on the Net 2018: the rise of digital authoritarianism, Freedom House, Washington, DC, viewed 18 November 2025, <https://freedomhouse.org>.


Garcia, C & Hubbard, S 2025, A framework for digital civic infrastructure, Ash Center for Democratic Governance and Innovation, Harvard Kennedy School, Cambridge, MA, viewed 18 November 2025, <https://ash.harvard.edu>.


Hornstein, LM 2025, Rebuilding public trust in the digital age: young liberals point the way, Friedrich Naumann Foundation for Freedom, Europe, viewed 18 November 2025, <https://www.freiheit.org>.


Kreps, S & Kriner, DL 2023, ‘How AI threatens democracy’, Journal of Democracy, vol. 34, no. 4, pp. 122–131.


Mahapatra, S 2025, Ethical governance of AI and the prevention of digital authoritarianism in South and Southeast Asia: case studies of India and Singapore, GIGA Focus Global, no. 3, German Institute for Global and Area Studies, Hamburg.


Manheim, K & Kaplan, L 2019, ‘Artificial intelligence: risks to privacy and democracy’, Yale Journal of Law and Technology, vol. 21, pp. 106–188.


Mantellassi, F 2023, Digital authoritarianism: how digital technologies can empower authoritarianism and weaken democracy, Geneva Centre for Security Policy, Geneva.


Nature Editorial 2025, ‘AI has a democracy problem, here’s why’, Nature, 18 November.


OECD 2022, The protection and promotion of civic space, Organisation for Economic Co-operation and Development, Paris.


Sefton-Green, J 2020, ‘Educating for democracy in the digital age’, Oxford Research Encyclopedia of Education, Oxford University Press.


Tuohy-Gaydos, G 2024, Artificial intelligence (AI) in action: a preliminary review of AI use for democracy support, Westminster Foundation for Democracy, London.


Villaplana Jiménez, FR & Fitzpatrick, J 2024, ‘Digital leaders: political leadership in the digital age’, Frontiers in Political Science, vol. 6, article 1425966.

 
 
 
