Journal of Applied Mechanics Reviews and Reports

Predictive Protection or Profiling? A Legal-Ethical Framework for Algorithmic Risk Tools in Child Welfare Systems in Spain

Elemegious Mugamba

Abstract

The increasing integration of algorithmic risk assessment tools in child welfare systems across Europe, intended to enhance predictive protection and resource allocation, poses profound legal, ethical, and socio-technical challenges. This article interrogates Spain’s fragmented experimentation with predictive analytics in the child protection domain, situating it within broader European developments and the evolving corpus of fundamental rights jurisprudence. Drawing on original fieldwork across three autonomous communities, supported by 48 semi-structured interviews with frontline professionals, data engineers, and policymakers, the article combines doctrinal legal analysis with empirical inquiry and computational audit techniques to assess the regulatory sufficiency and normative coherence of algorithmic interventions in the Spanish child welfare sector. The study reveals significant divergence in algorithmic governance across regional jurisdictions—such as Catalonia’s Sistema de Valoración de Riesgos Sociales (SVRS) and Madrid’s binary risk scoring prototype—characterised by legal ambiguity, epistemic opacity, and minimal procedural safeguards. Despite Spain’s constitutional guarantees (Articles 18 and 39 CE) and obligations under the General Data Protection Regulation (GDPR) and the European Convention on Human Rights (ECHR), field evidence shows widespread non-compliance with Article 22 GDPR, the principle of legality under Article 9(3) CE, and the jurisprudence of the European Court of Human Rights (ECtHR) and the Court of Justice of the European Union (CJEU), notably in cases such as NJCM v Netherlands and López Ribalda v Spain. This article makes three principal contributions. First, it offers a granular comparative analysis of algorithmic decision-making architectures and scoring logics through the creation of original evaluative indices: SALI (Systemic Algorithmic Legality Index) and MAGI (Minimum Algorithmic Governance Index).
Second, it provides a rights-based critique of Spain’s current administrative practices, arguing that algorithmic opacity, statistical bias, and inadequate contestability mechanisms risk transforming predictive tools into instruments of structural surveillance and automated marginalisation. Third, the article proposes a legal–ethical governance framework for high-risk artificial intelligence (AI) in social protection domains, aligning with the risk-tiered obligations under the proposed EU Artificial Intelligence Act (AIA) and value-sensitive design principles. By foregrounding the constitutional asymmetries, regulatory lacunae, and institutional inertia embedded in Spain’s decentralised welfare state, the article underscores the urgent need for harmonised legal standards, ex ante algorithmic impact assessments, independent audits, and participatory oversight mechanisms. The findings hold broader implications for European digital welfare governance, offering a replicable analytical methodology and normative blueprint for jurisdictions seeking to reconcile algorithmic innovation with human dignity, legal certainty, and child-centred care.
