This policy briefing paper explores the intersection of artificial intelligence (AI) and democracy, focusing on issues such as extremism, misinformation, and harmful online content. It examines the risks AI technologies pose in misleading and harming citizens, while also highlighting their role in detecting and countering such harms. The paper categorizes AI systems into those that generate, disseminate, target, select, amplify, and mitigate online content. It addresses concerns related to AI accountability, data quality, and model opacity, with particular attention to generative AI systems. The paper concludes by examining potential mitigations through ethical principles, public policy, and emerging AI regulation. Although not exhaustive, it aims to empower policymakers by providing insights into core AI-related concerns and suggesting practical solutions.