ISSN 2039-6937  Registered with the Court of Catania
Year XVI - No. 07 - July 2024

  Studies



On the legal discrepancy between AI administrative decision-making and fundamental EU administrative principles

Oznur Uguz

Abstract

Artificial intelligence has started to change the face of public administration by automating the traditional exercise of administrative functions. This gradual but steady transformation has come with consequences, some of which potentially compromise the established legal and administrative principles and the fundamental rights and freedoms protected under EU law. As the role of AI in public decision-making expands, it is changing the way administrative discretion is exercised.

There is little administrative law legislation specifically addressing the use of automated decision-making (ADM) systems in general, and of AI systems in particular, owing to the dominant role of data protection law in regulating ADM and the assumption that any instance of administrative decision-making should comply with certain general principles of administrative law. In the absence of specific legislative provisions, administrative courts resort to general principles of administrative law to evaluate the use of ADM by public authorities. In that respect, the use of AI for making decisions in public administration has prompted serious concerns and sparked an ongoing debate, to the point that the legitimacy of AI-involved administrative decision-making has come into question.

This article seeks to answer this question from the perspective of EU law.

After a short review of the academic literature on the subject, the paper analyses the conformity of AI decision-making in public administration with reference to fundamental values and administrative principles of EU law. The paper seeks to contribute to the scholarly discussion on the legal discrepancy between AI administrative decision-making, fundamental administrative principles, and the existing EU regulation.

 

Keywords

Artificial intelligence; Administrative decision-making; Public administration; European Union law.

 

Introduction

Recent developments in Artificial Intelligence (AI) technologies have brought about unprecedented changes to life as we know it. With its broad scope of application, AI is being increasingly utilised in a vast array of sectors to enhance the speed, efficiency, and quality of goods and services and decrease administrative costs and burdens.

Since the technology’s introduction, the private sector has been the main actor in AI development and application. Recently, however, efforts towards public sector AI adoption have surged globally, with countries recognising that public sector participation is integral to the successful implementation of AI.

AI has a wide scope of application in the public sector, from risk assessment and compliance to forecasting and decision-making. One of the most debated applications of AI in the public sector is its use for decision-making in public administration, owing to its potentially far-reaching impact on people’s rights and freedoms as well as on countries’ constitutional principles and long-established administrative practices. As the role of AI in public decision-making expands, it changes the nature of public bureaucracy and the way administrative discretion and professional judgment are exercised.[1]

Typically, public authorities are bound by certain legal principles, regulatory frameworks, and responsibilities. In the European Union (EU) context, administrative bodies of both the EU and the Member States, insofar as they apply EU law, must comply with the general principles of law, including the principles of legality, proportionality, legal certainty, and equality, as well as fundamental human rights and a range of procedural principles. They are bound by the existing EU legal provisions in their decisions and must abstain from violating them while fulfilling the obligations stipulated under these provisions.[2] Like any other decision taken by public officials acting in an administrative capacity, AI-made or AI-supported decisions must conform with the applicable administrative law and principles. Yet, AI decision-making differs substantially from the “classic” form of administrative decision-making performed by humans, which those laws and principles are tailored to address.

AI automates the decision-making process by relying on large-scale data and algorithms that structure organisational processes in a formalised way based on predefined rules to be executed in step-by-step computational operations. While AI outperforms human intelligence in terms of physical computing capacity, computing speed, and complexity handling, it lacks certain social, emotional, and behavioural skills that are integral components of administrative decision-making such as empathy, emotional intelligence, and conflict resolution.[3]
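
By way of concrete illustration, the following minimal sketch (in Python, with entirely hypothetical eligibility criteria invented for this example) shows the kind of predefined, step-by-step rule execution described above: every input is mapped to an outcome by fixed criteria, leaving no margin for judgement.

    # Minimal sketch of rule-based automated decision-making.
    # The criteria (income threshold, residency requirement) are
    # hypothetical and serve only to illustrate the mechanism.
    from dataclasses import dataclass

    @dataclass
    class Application:
        income: float
        years_resident: int
        has_criminal_record: bool

    def decide(app: Application) -> str:
        # Rules are evaluated in a fixed order; the outcome is fully
        # determined by the predefined criteria.
        if app.income > 30_000:
            return "refused: income above threshold"
        if app.years_resident < 5:
            return "refused: residency requirement not met"
        if app.has_criminal_record:
            return "refused: record check failed"
        return "granted"

    print(decide(Application(income=25_000, years_resident=6,
                             has_criminal_record=False)))  # granted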

The unique self-learning and automated nature of AI, which mostly operates in a complex and opaque manner, inherently contradicts some of the integral principles and legal provisions to be respected in administrative proceedings, which safeguard the rights and freedoms of people as well as the core values forming the foundation of the EU. That raises concerns regarding the legitimacy of AI decision-making in EU public administration and prompts the question of whether AI can be used in public decision-making in a way compliant with the fundamental rights and values the EU was founded upon.

 

  1. A Literature Review

Academic scholarship on the relationship between AI and public administration dates back to the late 1980s.[4] The game-changing potential of AI for increasing the efficiency and quality of operations and services in public administration, and its unprecedented implications for the way administrations function, was a prominent topic of interest for researchers as well as policymakers. Hadden (1989) was one of the early researchers who foresaw the latent capacity of AI to transform the working of public administrations, predicting that “expert systems” would improve decision-making and productivity in public administrations.[5] Barth and Arnold (1999) discussed the potential benefits and risks of the use of AI in public administration, with a focus on the implications of the technology in the context of administrative discretion.[6]

In recent years, AI research has gained more prominence in parallel with rapid technological advancements and the accompanying policy attention.[7] Research on the subject has spanned a diverse range of topics, from the impact and challenges of AI implementation in public administration[8] to country-specific analyses of AI strategy and deployment in the public sector[9] and studies focusing on particular applications of AI technology such as chatbots.[10]

As applications of AI in public administrations worldwide grow in number and scope, presenting unprecedented legal and administrative challenges, the concept of “administrative AI” has begun to attract wider interest from scholars of law, public policy, and public administration, with some of the most-discussed themes being the implications,[11] advantages,[12] risks,[13] impact,[14] and governance[15] of AI in the administrative context. In terms of the purpose of deployment, the role of AI in administrative decision-making has been a popular subject of debate on the grounds of its potential consequences, which may not only negatively affect the rights and interests of individuals but may also undermine certain principles and values of law and public administration. Research on AI in administrative decision-making clusters particularly around the issues of legality, accountability, privacy and data protection, transparency, and algorithmic discrimination.

Fink and Finck (2022)[16] considered the automation of administrative decision-making through AI systems with reference to the duty to give reasons, a pillar of EU administrative law, and examined whether the duty can be applied effectively to automated decision-making and to what degree it safeguards the procedural rights of individuals against automated administrative decisions. Busuioc (2020)[17] discussed the implications and limitations of AI decision-making algorithms for ensuring public accountability by analysing the inherent features of those systems that hinder compliance, and pointed out the importance of explainability and interpretability by design while emphasising the role of regulatory efforts and public administration scholars. Mitrou, Janssen, and Loukis (2021)[18] analysed the level of discretion and human control needed in AI public decision-making from the perspective of administrative law, in the light of the principles of the rule of law, fairness, non-discrimination, transparency, justifiability, and accountability, and identified certain situations in which some degree of human control, intervention, and discretion might be required. In addition, there is research that tackles the problem from a national law perspective[19] or examines the issues arising from AI decision-making in a specific area of public administration.[20]

Nonetheless, there is a considerable gap in legal scholarship for a comprehensive analysis of the conformity of AI administrative decision-making with EU law and administrative principles. Existing studies tend to approach the issue from a single point of view, such as administrative discretion or transparency, or lack detail in their analysis. This article aims to help narrow this gap in the literature by examining the legal challenges engendered by the use of AI for administrative decision-making under EU law, with reference to the legal principles that administrative authorities are bound by and struggle most to comply with in this context. Through this analysis, the article seeks to answer the questions of to what degree AI decision-making is compatible with the current EU legal framework and how the existing discrepancies can be addressed to ensure compliance and coherence across EU public administration.

 

  2. Main general principles of administrative law
    • Principle of Legality

The principle of legality is one of the founding principles of the EU, which must be respected by all administrative authorities operating under EU law. The principle requires decision-makers to make reasoned decisions with a legal basis and allows the exercise of administrative discretion only if it is justifiable on legal grounds.[21] Article 2 of the Recommendation on Good Administration by the Council of Europe[22] (Recommendation on good administration) contains a clear requirement for public authorities to act in line with the law and within their powers. The article deems arbitrary or ultra vires acts and decisions unlawful and requires the decisions taken by public bodies to have a legal ground both in terms of their content and the decision-making procedure.

Apart from certain situations listed in the General Data Protection Regulation (GDPR), AI-made or AI-supported decision-making is not explicitly excluded by EU law, which might be regarded as implicit acceptance of the practice. Still, whether the use of AI for administrative decision-making is actually lawful requires a more complex analysis, particularly where AI is used for probabilistic or predictive decision-making. In such cases, AI estimates future events based on probabilities derived from the facts of the case in question and past cases with similar characteristics. Such reliance on probabilities, which are mere estimations lacking a legal basis, conflicts substantially with the principle of legality. It also creates problems of accuracy: predictions do not always match actual outcomes, and false negatives and false positives can occur, producing results opposite to what the system predicted and based its decision on. The consequences of such inaccuracy can be particularly detrimental in the administrative context, as administrative decisions mostly have a direct effect on individuals’ rights and interests.
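
The accuracy problem can be made tangible with a schematic example (hypothetical figures, in Python): a predictive system that converts a probability estimate into a binary decision via a fixed threshold will produce false positives and false negatives whenever the estimate and the actual outcome diverge.

    # Schematic illustration of threshold-based predictive decision-making
    # and the errors it produces. All figures are hypothetical.
    cases = [
        # (estimated probability of the event, whether it actually occurred)
        (0.91, True),
        (0.85, False),  # predicted but did not occur -> false positive
        (0.40, True),   # not predicted but occurred  -> false negative
        (0.10, False),
    ]
    THRESHOLD = 0.5

    for prob, occurred in cases:
        predicted = prob >= THRESHOLD
        if predicted and not occurred:
            label = "false positive"
        elif not predicted and occurred:
            label = "false negative"
        else:
            label = "correct"
        print(f"p={prob:.2f}, predicted={predicted}, occurred={occurred}: {label}")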

The most prominent examples of probabilistic AI decision-making in public administration are found in law enforcement, in the form of predictive policing. Predictive policing can be defined as the use of analytical techniques to collect and analyse data on previously committed crimes in order to predict the individuals and geospatial areas with a high probability of future criminal activity and to adopt police intervention measures such as patrolling and surveillance.[23] Applications of predictive policing are already in use in many European countries, including the Netherlands, France, Germany, Austria, and Estonia, as well as in the United States and the United Kingdom. Still, they remain controversial because of the high likelihood that these systems will lead to discriminatory and unlawful practices based on inaccurate or biased predictions, along with the concerns they raise regarding transparency and privacy.[24]

The pending EU Artificial Intelligence Act classifies predictive policing AI applications as “high-risk”, along with many other uses of AI in law enforcement, including individual risk assessment and crime analytics concerning natural persons, because of their potential adverse impact on fundamental rights, which might be even more detrimental in a setting characterised by significant power imbalance. AI systems used to determine people’s access to education, training, financial resources, and essential services such as housing, electricity, and telecommunications[25] and those used for the administration of justice and democratic processes[26] are likewise considered high-risk, while social scoring of people by public authorities and “real-time” remote biometric identification of people in public spaces are banned outright as unacceptable-risk applications.[27]

Under the risk-based approach of the Act, “high-risk” AI systems are those that could pose significant risks to people’s health, safety, or fundamental rights, whereas those classified under “unacceptable risk” are deemed a clear threat to people.[28] Accordingly, unacceptable-risk AI applications are prohibited completely or save for certain special circumstances, while high-risk AI systems may only enter the EU market or be put into use if they meet certain mandatory requirements imposed by the Act before and throughout their deployment.[29] These measures aim to mitigate the risks without constraining technological development and cover a wide range of duties concerning data management, documentation and record-keeping, transparency and information provision, human supervision, robustness, accuracy, and security.[30]

The upcoming AI Act is of significant importance for the use of AI in the public sector, as it provides detailed guidance on the development, procurement, and use of AI in the public sector as well as safeguards for individuals against the potential negative effects of AI on their rights and interests. Once adopted, the AI Act will respond to most of the concerns regarding the principle of legality by providing a solid legislative ground for the use of AI in public decision-making and by prohibiting or strictly regulating the applications that contradict the principle.

  • Administrative Discretion

Administrative discretion is a long-established practice exercised in the EU administration within the limits established by the Treaties and the case law of the Court of Justice of the European Union (CJEU). The concept can be defined as the power vested in administrative authorities to allow them some degree of flexibility in their judgement when making an administrative decision, and it conflicts with AI decision-making in several respects.

To begin with, AI systems lack certain social and emotional human abilities that have a critical influence on the exercise of administrative discretion and thereby on the final decision. This makes the legitimacy of handing the decision-making power over to such systems questionable on ethical and moral grounds. Another concern arises from the predetermined and systematic functioning of AI, with no margin of appreciation. AI algorithms automate decision-making by processing the data fed to the system against a set of clear, pre-defined criteria, which usually leaves no room at all for the exercise of executive discretion.[31] That might lead to the complete fettering of administrative discretion in decision-making, depending on the level of autonomy and involvement of AI in the process. While such “clear-cut decision-making” might work well for basic routine tasks, complex cases with unstructured information and conflicting interests may still require some degree of discretionary margin and necessitate human oversight and intervention in the process. A study by Criado et al.,[32] which analysed the effects of using AI algorithms for public decision-making on the discretionary power of public servants and on their work, found that AI has a positive impact on both when it performs the role of a support tool rather than a decision-maker.[33] This indicates that the human role in decision-making should not be diminished when algorithms are included in the process but should rather be strengthened, so that humans can oversee the operation of AI systems and intervene when needed. Human agents must retain the competence to diverge from the AI-generated outcome in case of non-compliance with the law or technical failure, as sketched below.
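
The support-tool arrangement the study points to can be outlined as follows (a hypothetical Python sketch, not any deployed system): the AI produces a recommendation, but the final decision rests with a human officer who retains the competence to depart from it.

    # Sketch of AI as a support tool rather than a decision-maker:
    # the system recommends, a human officer decides. The names and
    # the review logic are illustrative assumptions.
    from typing import Callable

    def ai_recommend(case: dict) -> str:
        # Stand-in for a model: recommends based on a single feature.
        return "refuse" if case.get("risk_score", 0.0) >= 0.7 else "grant"

    def final_decision(case: dict,
                       human_review: Callable[[dict, str], str]) -> str:
        recommendation = ai_recommend(case)
        # The officer sees the recommendation and may diverge from it,
        # e.g. on legal or factual grounds.
        return human_review(case, recommendation)

    def officer(case: dict, recommendation: str) -> str:
        if case.get("exceptional_circumstances"):
            return "grant"  # discretionary departure from the AI output
        return recommendation

    print(final_decision({"risk_score": 0.8,
                          "exceptional_circumstances": True}, officer))  # grant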

Some argue that human oversight is also required to comply with the legal limitations on the delegation of executive powers, on the grounds that the transfer of decision-making powers to AI amounts to a de facto delegation of administrative powers.[34] For an administrative decision to be legally valid, it must have been taken within the competence of the acting public authority and by following a pre-established administrative procedure.[35]

Delegation of powers has a broad meaning in EU law, encompassing the transfer of quasi-legislative powers to executive EU institutions, the conferral of certain administrative responsibilities and authorities on supranational institutions and EU agencies, and the internal delegation of administrative functions within an executive body. Depending on the nature of the delegation, different conditions and limitations apply under EU law, enshrined in different EU legal instruments. Article 290 of the Treaty on the Functioning of the European Union (TFEU) opens the door to the delegation of the power to supplement or amend legislation to the European Commission under certain conditions,[36] while Article 291(2) TFEU allows the conferral of the competence to adopt implementing acts on the Commission in specified circumstances.[37] In the Meroni case, the CJEU established “the non-delegation doctrine”, or “the doctrine of institutional balance”, holding that the delegation of the power to adopt executive acts to EU agencies was admissible while prohibiting the conferral of discretionary powers on bodies not established by the EU treaties, including EU agencies. The court also imposed certain conditions on delegation, limiting it to clearly defined executive powers[38] that are necessary for the performance of the task in question[39] and subject to the supervision of the delegating authority and to judicial review.[40] In United Kingdom v. Council and European Parliament,[41] known as the ESMA Short-Selling case, the CJEU allowed the delegation of discretionary powers as long as the discretion is limited and subject to specific criteria and limitations that are amenable to judicial review.[42]

While none of these legal principles directly applies to the so-called delegation of administrative decision-making powers to AI, some inferences regarding its legitimacy can be drawn in the light of the judgements. In Meroni, the main reason why the court forbade the delegation of discretionary powers to agencies lacking a legislative ground was the absence of judicial protection against the acts of such bodies at the date of the judgement, when the TFEU provision subjecting EU agencies to the CJEU’s jurisdiction was not yet in force.[43] The vital importance of the judicial reviewability of delegated administrative tasks was also emphasised in ESMA Short-Selling, in which even discretionary tasks were permitted to be delegated as long as they were open to judicial supervision. The scope of the delegated discretionary powers is also an important consideration, as the court in ESMA put strong emphasis on the limited discretionary nature of ESMA’s powers in the reasoning of its decision.

Applying these principles to the delegation of the decision-making task to AI might imply that such delegation would require a legal ground established by EU legislation, since decision-making is a broadly discretionary task not permissible for delegation unless authorised by law. Since no legal provision permitting such delegation exists, it would only be lawful where a human agent exercises meaningful oversight of the decision. Meaningful oversight in this case requires public authorities to consider all the relevant information about the case and to have the competence to amend the decision made by the AI system if they find it necessary.[44] This requirement for meaningful human oversight creates an additional challenge for public servants. It requires them to have a full understanding of the analytic process through which the AI system operates in order to detect any errors and gaps. That is, however, not easy, owing to the “black-box” nature of many AI systems, which hide their operational logic from the user.[45] With such advanced technologies consisting of a range of complex layers and steps of operation, there is a risk of loss of predictability. This means that the outcomes arrived at by AI systems might no longer be comprehensible and verifiable by human agents, which may lead to a loss of human control over the system and its output.[46] In an experimental study by Janssen et al., which compared decision-making by humans with and without support from algorithms, even experienced professionals could not detect all the mistakes made by the algorithmic system.[47] Where the human agent who monitors the process has little to no insight into how the system works, the notion of “human on the loop” becomes meaningless. The fact that public authorities usually have limited expertise in using AI and that public servants often lack the necessary skills to supervise the technology[48] exacerbates the risk of loss of human control. Mitigating this risk necessitates serious capacity-building in public administrations, including the provision of adequate education and training to civil servants. This, however, means that additional costs of time and money will need to be incurred, which would be challenging for public bodies whose organisational capacity is constrained by budget limitations and scarce resources.

  • Transparency and Explicability

Transparency is an essential principle for ensuring that administrative procedures function in an open and orderly manner. Under Article 10 of the Recommendation on good administration,[49] public bodies must act in conformity with transparency and make sure that individuals are well-informed about administrative actions and decisions. The transparency requirement also applies to the information used by a public body in decision-making.[50] Under the Council of Europe Recommendation on access to official documents, public authorities are obliged to grant public access to official documents held by them, subject to certain limitations but without discrimination.[51]

In the context of AI decision-making, the principle of transparency first of all requires that the public be given access to information regarding the AI systems and processes used in public administrations for decision-making.[52] Individuals affected by an administrative decision should also be informed about the data and the algorithm used in making it, which is the first step towards understanding the motive of the decision and challenging it in case of a dispute. Nevertheless, ensuring transparency is not enough for the public to fully comprehend the reasoning and implications of an AI-made decision. The role of AI and other parties in decision-making, and how the process works, should also be explicable. Explicability in this context means that the decision-making process is explainable and interpretable to humans, so that they have a clear understanding of why and how an AI decision is made.
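
One practical way of supporting both requirements, sketched below with assumed field names, is to record alongside each automated decision the data it relied on, the system version, and the operative reasons, so that reasons can later be communicated to the person affected and reviewed.

    # Sketch of a per-decision transparency record. Field names and the
    # storage format (JSON) are illustrative assumptions.
    import json
    from datetime import datetime, timezone

    def record_decision(case_id: str, inputs: dict, outcome: str,
                        reasons: list, model_version: str) -> str:
        record = {
            "case_id": case_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,  # which system produced it
            "inputs": inputs,                # the data relied on
            "outcome": outcome,
            "reasons": reasons,              # operative criteria, in plain language
        }
        return json.dumps(record, indent=2)

    print(record_decision(
        case_id="2024-0042",
        inputs={"income": 25_000, "years_resident": 6},
        outcome="granted",
        reasons=["income below threshold", "residency requirement met"],
        model_version="eligibility-rules-1.3",
    ))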

Transparency and explicability are closely linked to the duty to give reasons for an administrative decision, a fundamental principle of EU administrative law. Articles 41 and 47 of the Charter of Fundamental Rights of the European Union (the EU Charter) grant every person the right to good administration and the right to an effective remedy and a fair trial, respectively,[53] which entail the duty to give reasons for an administrative decision. The duty makes it necessary for the public body in charge of decision-making to give concrete reasons for its administrative decision, even when it is made by an AI system. In R.N.N.S. and K.A., the CJEU held that the public body that made the decision must communicate to the affected person not only the ground on which the decision was made but “the essence of the reasons” for the decision.[54] In Ligue des droits humains, the CJEU clarified the extent of the duty to give reasons for decisions made on the basis of the automated processing of personal data. The court stated that the competent authority must ensure that the person affected understands how the predetermined assessment criteria and the programs applying them work, so far as is needed to decide with full knowledge whether to exercise their right to judicial redress.[55]

Having said that, the opacity of AI systems makes it hard to achieve transparency and explicability in decision-making and to comply with the duty to give reasons. In some cases, even the public officials in charge of the decision-making process might not fully understand the algorithmic functioning and the logic of the decision, let alone be able to explain them to the individual concerned. Still, the obligation cannot be discharged by deferring to the AI system that made the decision but must be fulfilled by the public authority itself.[56] AI systems lack the autonomy and agency that humans have, meaning that they cannot be held responsible for any decision they make; accountability must lie with a human agent involved somewhere in the process. Determining the responsible party is, however, difficult where AI systems are concerned, since their development and functioning require the involvement and interaction of multiple agents. That makes assigning liability difficult in case of harm resulting from an AI decision and makes it harder for individuals affected by the decision to obtain fair compensation for their damage. The solution to this problem requires not only the establishment of a specific liability framework for AI-made acts and decisions but also the regular updating of such a framework in line with developments in the technology’s level of autonomy. A more concrete solution would be the development of explicable AI systems that operate in a more transparent and comprehensible structure, simplifying the assignment of liability. That is, however, a task beyond the means of administrative authorities and policymakers.

  • Principles of Equal Treatment and Non-Discrimination

Fair and impartial treatment by administrative authorities is a right that every person should enjoy under Article 41 of the EU Charter.[57] Equal treatment and non-discrimination are fundamental principles public administrations must follow to ensure the protection of people’s rights and freedoms in administrative processes. The principle of equal treatment prescribes the equal treatment of individuals in similar situations, while the principle of non-discrimination prohibits discrimination against individuals on grounds such as race, age, gender, sexual orientation, ethnic and social origin, religion or belief, and political or other opinions.

Discrimination is forbidden under various instruments of EU primary and secondary law. Article 2 of the Treaty on European Union (TEU) recognises non-discrimination as a prevailing principle of the EU,[58] while Article 21 of the EU Charter[59] and Article 14 of the European Convention on Human Rights (ECHR)[60] prohibit discrimination on any ground such as sex, race, colour, or language. The non-discrimination principle is also recognised in the case law of the CJEU, which confirmed in Mangold[61] that non-discrimination constitutes a general principle of EU law.

From the public administration perspective, Article 1 of Protocol No. 12 to the ECHR[62] directly addresses public authorities and forbids any kind of discriminatory treatment by any public body. Article 3 of the Recommendation on good administration obliges public authorities to act in line with the principle of equality, while Article 4 urges them to act impartially and objectively.[63] Accordingly, public authorities must treat individuals who are in the same situation equally and act in an unbiased manner, taking into consideration only the relevant matters of the case, not their personal beliefs and interests, when exercising their administrative powers under EU law.

Yet, ensuring compliance with these principles is, again, difficult in the context of AI decision-making. Since algorithmic outputs typically reflect what they are given,[64] even a trace of algorithmic bias that has trickled into a decision-making AI system might result in biased and discriminatory decisions and reproduce existing inequalities in society. Algorithmic bias might arise through the integration of human bias into the functioning of a decision-making algorithm during the design, training, or operation of the system, whether intentionally or unconsciously. Human bias derives from the personal preferences or prejudices of the human agents involved in such processes and might lead to the prioritisation or exclusion of certain groups based on distinctive characteristics such as race, gender, or socioeconomic status. The selection of the data samples used to train and test a system and the criteria for the data analysis are of paramount importance in that respect. Any underrepresentation or overrepresentation of a particular group in a dataset used to train an AI algorithm could result in the disregard or mistreatment of that group in the final AI decision. That might have drastic consequences in the public administration context, such as denial of access to healthcare or welfare benefits. As the role and application scope of AI algorithms in public decision-making expand, those potentially harmful effects compound. In the most serious cases, even the freedom of individuals could be at stake, as AI is now increasingly used in law enforcement and criminal justice across the world.
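
The mechanism can be made concrete with a toy calculation (hypothetical counts, in Python): comparing approval rates across groups in a system’s past decisions is a simple first screening for the kind of disparity that an unbalanced training set can produce.

    # Toy screening for disparate outcomes across groups, using
    # hypothetical decision counts. A large gap in approval rates is a
    # signal worth investigating, not proof of unlawful discrimination.
    decisions = {
        # group: (approved, refused) -- hypothetical figures
        "group_a": (80, 20),
        "group_b": (45, 55),
    }

    rates = {g: a / (a + r) for g, (a, r) in decisions.items()}
    for group, rate in rates.items():
        print(f"{group}: approval rate {rate:.0%}")

    # A rough "four-fifths"-style ratio sometimes used as a screening rule
    ratio = min(rates.values()) / max(rates.values())
    print(f"disparity ratio: {ratio:.2f} (values well below 1 warrant review)")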

One of the best-known examples of how AI decision-making in criminal justice can reproduce discrimination is COMPAS, which stands for “Correctional Offender Management Profiling for Alternative Sanctions”. COMPAS is used in some parts of the United States to help judges determine the sentences of defendants by predicting their probability of reoffending. However, research by the non-profit journalism organisation ProPublica found that the system has been systematically biased against African Americans.[65] According to the research, black defendants were almost twice as likely as white defendants to be inaccurately flagged as likely re-offenders, while white defendants were much more likely than black defendants to be mislabelled as lower-risk yet go on to commit crimes again.[66]
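
The disparity ProPublica reported can be expressed as a difference in group-wise false positive rates. The sketch below computes such rates from confusion-matrix counts; the figures are invented to mirror the “twice as likely” pattern and are not ProPublica’s data.

    # False positive rate per group from hypothetical confusion-matrix
    # counts (NOT ProPublica's figures). The false positive rate is the
    # share of non-reoffenders wrongly flagged as high-risk.
    groups = {
        # group: {"fp": wrongly flagged, "tn": correctly not flagged}
        "group_a": {"fp": 40, "tn": 60},
        "group_b": {"fp": 20, "tn": 80},
    }

    for name, g in groups.items():
        fpr = g["fp"] / (g["fp"] + g["tn"])
        print(f"{name}: false positive rate {fpr:.0%}")
    # group_a's rate (40%) is double group_b's (20%): similar overall
    # accuracy can coexist with unequal error rates across groups.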

The Crime Anticipation System (CAS) of the Netherlands[67] is another AI application similar to COMPAS in terms of its discriminatory nature. The system was developed in Amsterdam to identify “hot spots” and “hot times” in which crimes are likely to occur, using Big Data, machine learning, and geospatial technology.[68] While CAS might not have used ethnicity explicitly as a criterion, it was found that the system picked out the same category of people as risky, namely “young, male, ethnic minorities from poor families,” on the grounds of their location of residence.[69]

Another controversial use of AI in public decision-making, threatening not only the right to privacy but also the right to non-discrimination, is surveillance and censorship. AI’s ability to track, analyse, filter, and rank large amounts of data and to identify patterns is increasingly being exploited by governments and other public bodies to monitor citizens through AI technologies such as face and speech recognition and to take action based on the collected data.[70] Such use not only presents an enormous risk to democratic values and fundamental human rights and freedoms, such as the right to privacy, but may also produce wide-reaching discriminatory results due to algorithmic bias. According to studies, facial recognition systems are mostly biased against Black people and women, making Black women the most likely victims of algorithmic bias.[71] The use of such potentially biased surveillance systems by public administrations for decision-making might disproportionately affect certain categories of people, mostly minorities, in many ways, for instance by exposing them unfairly to over-policing or false arrest. While rooting out bias from an algorithmic decision-making process is an onerous task, gradual steps can be taken, such as striving for more diverse data analysis through a more careful selection of data sources and collection methods. A more permanent solution lies in raising awareness and increasing diversity among public servants, which might be a lengthy process requiring a systematic change of mindset and remodelling in public administrations.

  • Principle of Proportionality

The principle of proportionality is one of the general principles of EU law and must therefore be observed by public administrations in their acts and decisions. The principle was developed by the CJEU in its case law[72] and is now enshrined in Article 5(4) TEU.[73] According to the principle, measures adopted by EU bodies must be necessary and adequate with respect to the desired objective and must not impose on individuals a burden that is disproportionate to the objective to be achieved.[74]

The Recommendation on good administration includes proportionality among the main principles of good administration.[75] The principle of proportionality prohibits excessive and arbitrary administrative acts and decisions by public authorities and requires striking a fair balance between the public interest and fundamental human rights. In applying the principle to AI decision-making, the main question is whether AI is the appropriate tool to entrust with the task of administrative decision-making in light of the technology’s benefits and vulnerabilities. Since AI has characteristics that carry a high chance of compromising human rights and certain legal norms, it might not be the optimal means for making high-impact administrative decisions, such as deciding entitlement to essential public services or restricting the exercise of particular rights. Meanwhile, AI may be better suited to executing rule-based repetitive tasks that require no human judgement. Striking the balance between the efficiency and public value created through AI and the individual rights and interests at stake necessitates a case-by-case evaluation of individual circumstances by public authorities. The principle of proportionality is also closely related to other general principles of law and administrative considerations, particularly the right to privacy and data protection, and plays an important part in their application and observance.

 

  3. Right to Privacy and Protection of Personal Data

An important fundamental right to which AI decision-making poses a serious threat is the right to privacy, protected under Article 8 of the ECHR. Under the article, any interference by public authorities with the right is prohibited except where it is allowed by law and is necessary to achieve certain paramount objectives such as public safety and national security.[76] Privacy is also covered in Article 7 of the EU Charter, which states that “everyone has the right to respect for their private and family life, home, and communications.”[77] The right to privacy in the administrative context requires public authorities to respect and protect the privacy of individuals in the exercise of their administrative duties, particularly when it comes to the processing of personal information. Article 9 of the Recommendation on good administration obliges administrative authorities to take all appropriate measures to ensure privacy in personal data processing, especially when electronic means are used.[78]

The protection of personal data has long been regarded as a matter of importance in the EU and is regulated under several legal instruments. Under Article 16(1) TFEU, which succeeded Article 286 of the EC Treaty, everyone has the right to the protection of their personal data.[79] The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, signed in 1981 (Convention 108),[80] contains a series of provisions concerning the access to, rectification of, and erasure of personal data undergoing automatic processing, which are also applicable to administrative decision-making as a binding EU legal instrument. Article 5 of Convention 108 sets out the conditions to be fulfilled for any personal data subjected to automatic processing. Such personal data must be “obtained and processed fairly and lawfully”, “stored for specified and legitimate purposes”, and “not used in a way incompatible” with those purposes.[81] In addition, the data must be “adequate, relevant, and not excessive” in relation to the storage purposes, be “accurate” and “up to date”, and be “preserved in a form permitting identification of data subjects for no longer than is required for the purpose”.[82]

Besides these restrictions, the convention grants individuals whose personal data is subjected to automatic processing a series of rights under Article 8 of Convention 108. These include obtaining information regarding the automated personal data file, receiving confirmation of whether such data is stored, having the data rectified or erased in certain cases, and having remedies in case of non-compliance with these rights.[83] The Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data[84] confers further rights on data subjects. The protocol grants people the right not to be subject to a decision significantly affecting them that is based solely on automated processing, without their views being taken into account.[85] Individuals are further entitled to obtain information regarding the reasoning behind automated processing where its results are applied to them,[86] as well as to object to such processing unless overriding grounds are present.[87]

In 2018, Convention 108 was modernised by the Council of Europe to respond to the challenges prompted by recent advancements in information and communication technologies and to bring it in line with the EU’s new legal framework on data protection, enshrined in Regulation (EU) 2016/679, known as the General Data Protection Regulation (GDPR).[88] While the updated Convention takes a more stringent approach to data protection than the original version, it is still less detailed, encompassing, and demanding than the GDPR in many respects.[89] GDPR lays down a precise framework for the processing and free movement of personal data, setting forth a series of additional conditions and obligations to be met by data controllers to safeguard the rights of data subjects regarding their personal information. Under GDPR, personal data processing is lawful only when one of the conditions exhaustively listed in Article 6 applies, including when the data subject gives consent or the processing is necessary for compliance with a contract or a legal obligation.[90] Notably, GDPR grants data subjects further rights not explicitly conferred by the updated Convention 108, such as the right to be forgotten and the right to data portability.

GDPR contains important provisions directly addressing automated decision-making (ADM), which also apply to AI decision-making as a subset of ADM. Under Article 22 GDPR, individuals are entitled not to be subject to a decision based solely on automated data processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.[91] Under the regulation, such decision-making is possible only in certain situations covered in the article, namely when it is necessary for entering into or performing a contract, authorised by law, or based on the data subject’s explicit consent.[92] Even then, the article imposes obligations on data controllers to protect data subjects’ rights, freedoms, and legitimate interests. In such situations, a data subject is at least entitled to request human intervention in the process, to express their view, and to challenge the decision.[93] Under Article 15, the data subject also has the right to know whether their personal information was subject to automated processing and, if so, to obtain at least meaningful information on the reasoning behind the decision as well as on the significance and consequences of the processing for them.[94]

Although Article 22 seems quite comprehensive in endowing individuals with rights that protect them against the unwanted consequences of automated decision-making, its area of application is actually quite narrow. That is because Article 22 refers only to “solely automated decision-making” performed without any human involvement. This means that the provision could easily be circumvented by the participation of a human agent in the decision-making process. Human intervention, however, is usually inadequate to mitigate the detrimental implications of AI decision-making for individuals, as explained previously.

Even when Article 22 is applicable, the three exemptions to the prohibition weaken the safeguard bestowed by the provision. Under Article 22(2)(a), the prohibition on fully automated decision-making does not apply if the decision is necessary for entering into or performing a contract between the data subject and the data controller.[95] It is clear from the wording that, for a public body to rely on this exemption, the use of automated means must be “necessary”. In other words, there must be no alternative means that better ensure the protection of individuals’ personal data and privacy. This is a question of proportionality and would require a necessity test similar to that entailed by the principle of proportionality.

Article 22(2)(b) provides a second exemption where such decision-making is authorised by a law that also safeguards the rights and interests of the data subject.[96] That said, it is unlikely that a general law permitting the use of automated means for decision-making in public administration would suffice for the exemption to apply. Since the implications of an administrative AI decision differ widely depending on the purpose for which it is made, a specific authorisation allowing fully automated decisions for the performance of a certain administrative task in a designated area of administration would likely be needed.

The third and last exemption comes under Article 22(2)(c) and allows decisions based on fully automated processing where the data subject’s explicit consent is present.[97] Under Article 4(11) of GDPR, consent is defined as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she… signifies agreement to the processing of personal data relating to him or her.”[98] When assessing whether consent is freely given, it is of utmost importance whether the provision of a service is conditional on such consent, as that would mean the data subject has no genuine choice but to submit in order to access the service, particularly where essential public services are concerned. In any event, the assessment of consent requires closer scrutiny when the party asking for consent is a public authority, owing to the power imbalance between the data subject and the controller.[99] In fact, Recital 43 to GDPR states that where the data controller is a public authority, it is unlikely that consent will be freely given, and consent should therefore not provide a legal ground for processing in such circumstances.[100]

Any administrative decision subject to the prohibition under Article 22(1) but executed on the basis of one of the exemptions in Article 22(2)(a), (b), or (c) must fulfil the conditions for the exemption to be applicable. Any failure to do so would mean a lack of legal ground for the decision and result in non-compliance with the principle of legality and with the data protection principle of lawfulness under Article 5 of GDPR,[101] rendering the data processing and the ensuing administrative decision unlawful.
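
Read together, Article 22(1) and (2) yield a checkable sequence, which the following schematic function encodes. It is a simplification for illustration only: each input stands for a legal test (for instance “necessity”, or the validity of consent given to a public authority) that in reality requires case-by-case assessment.

    # Schematic encoding of the Article 22 GDPR sequence discussed above.
    # A simplification for illustration, not a complete statement of the
    # provision or of the safeguards that must accompany each exemption.
    def article_22_permits(solely_automated: bool,
                           significant_effect: bool,
                           necessary_for_contract: bool,
                           authorised_by_law_with_safeguards: bool,
                           valid_explicit_consent: bool) -> bool:
        if not (solely_automated and significant_effect):
            return True  # the Article 22(1) prohibition is not engaged
        # Prohibition engaged: one of the Article 22(2) exemptions must
        # apply, together with safeguards such as human intervention and
        # the right to contest the decision.
        return (necessary_for_contract                 # Art 22(2)(a)
                or authorised_by_law_with_safeguards   # Art 22(2)(b)
                or valid_explicit_consent)             # Art 22(2)(c)

    # A fully automated administrative decision with significant effect
    # and no applicable exemption is prohibited:
    print(article_22_permits(True, True, False, False, False))  # False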

AI in administrative decision-making creates further problems of compliance with the essential data protection principles set out in Article 5 of GDPR, namely “lawfulness, fairness, and transparency” (1(a)), “purpose limitation” (1(b)), “data minimisation” (1(c)), “accuracy” (1(d)), “storage limitation” (1(e)), “integrity and confidentiality” (1(f)), and “accountability” (2).[102] According to the principle of lawfulness, fairness, and transparency, the processing of personal data must be performed in a lawful, fair, and transparent manner with respect to the data subject.[103] This principle is closely related to the administrative principles of non-discrimination and transparency, whose application to AI decision-making might be problematic owing to algorithmic bias and the complex black-box nature of AI systems. The purpose limitation, data minimisation, and storage limitation principles might also be difficult to observe in data processing performed through AI and the Big Data on which AI usually relies to function. According to these principles, personal data must be adequate, relevant, and necessary for the purpose for which it was collected (“data minimisation”),[104] processed no further than that specific purpose (“purpose limitation”),[105] and stored no longer than the purpose requires (“storage limitation”).[106] These requirements, however, directly contradict the very nature of Big Data, which consists of large volumes of data collected to be used many times for different purposes. Whether Big Data is involved or not, the storage of personal data for longer than required and the re-use of second-hand data are problematic, particularly in the public administration context. When the data used for analysis is secondary, it may well lie beyond the actual consent of the data subject and might constitute a serious infringement of their right to privacy. In such cases, whether the initial data collection was conducted in a manner respecting the fundamental rights of persons and general legal principles might also be unknown.

That said, Recital 50 of GDPR opens a door to the further processing of data by public authorities. The recital allows personal data processing for secondary purposes when they are compatible with the initial purpose for which the data was collected.[107] According to the recital, the further processing of data should be deemed compatible with the original collection purpose when it is necessary for the performance of a task carried out in the public interest or in the exercise of official authority by the data controller.[108] This raises the question whether secondary data processing for the execution of administrative AI decisions could be allowed on such grounds. The “public interest” ground is unlikely to apply to data processing in the making of standalone administrative decisions by AI, since such decisions typically concern the personal rights and interests of individuals rather than the general public. While such decisions are performed “in the exercise of official authority” in the literal meaning of the words, the phrase seems too broad and ambiguous to be applicable to administrative AI decision-making. Nevertheless, public authorities can still re-use personal data under the recital, provided the use is compatible with the original aim of the collection.

The view of the European Court of Human Rights (ECtHR) on the storage and re-use of personal data, however, appears different. In S. and Marper v. the United Kingdom,[109] the storage of the applicants’ DNA samples and fingerprints by the authorities, to be retained indefinitely and used for secondary purposes, was held by the ECtHR to be an invasion of the right to respect for private life.[110] In its judgement, the court stressed the significance of personal data protection for a person’s enjoyment of the right to privacy under Article 8 ECHR, which necessitates that all appropriate safeguards be taken to prevent any misuse inconsistent with Article 8.[111] According to the court, the existence of such safeguards is all the more important where the automatic processing of personal data is concerned.[112] The court considered that any use of modern scientific techniques in criminal justice pursued at any cost, without striking the right balance between their potential advantages and important private-life interests, would unacceptably diminish the protection of privacy granted under Article 8.[113] In that respect, any country claiming a pioneering role in the development of emerging technologies was said to bear special responsibility for finding that balance.[114] This stance of the ECtHR was more recently reiterated in Gaughran v. the United Kingdom.[115] The court held that the indiscriminate retention of the DNA profile, fingerprints, and photographs of the applicant as an offender, without regard to the seriousness of his offence or to the need for indefinite retention of the data, amounted to a disproportionate interference with the applicant’s right to privacy.[116]

The stance of the ECtHR in these cases indicates that the threshold for compliance with the right to privacy under Article 8 is higher for public administrations where technology is used in a manner that could disproportionately interfere with the fundamental rights of individuals. Accordingly, public authorities will have to ensure more effective safeguards against violations of rights when employing technological means, including AI, in their tasks. The critical balance to be struck between the public interest and the rights and freedoms of people when using AI for public decision-making highlights the vital importance of effective policy-making that establishes common standards for AI’s uptake and regulation in the public sphere, to be followed by a coherent implementation process across administrations in the light of the core values and principles on which EU administration functions.

 

  4. Final remarks

The global debate on the legitimacy of AI decision-making in public administration is likely to remain unresolved in the foreseeable future, since there is no clear answer to most of the concerns raised on the matter. Considering the ever-evolving nature of AI technology, even a comprehensive legal framework on AI developed with reference to its current applications is unlikely to solve permanently the problems the technology entails. As AI systems gain more autonomy in performance, new scenarios and potential risks will emerge and create new dilemmas between efficient public value creation and the protection of fundamental rights, freedoms, and principles. Still, this does not mean that the ongoing global efforts on AI regulation are in vain. On the contrary, they are of utmost importance for an incremental AI adoption that allows time for adaptation to the changes brought by the technology.

The upcoming AI Act constitutes a solid step in the EU’s AI uptake as the first legally binding instrument tailored precisely to regulating the application of AI in the EU. The proposed AI Act encompasses the use of AI for decision-making purposes and introduces detailed restrictions and limitations on such use in response to the long-debated concerns on the matter. The Act takes a relatively firm stand on the use of AI systems for predictive policing and surveillance, which arguably pose the greatest risk to the fundamental rights and freedoms of individuals. Whether the Act will succeed in offering a solid solution to the pending legal issues on AI decision-making, however, only time will tell. Even if the AI Act addresses the issues of transparency, accountability, data protection, and algorithmic discrimination to a degree, broader action will be needed for more permanent solutions, including the training of public servants in digital skills and the promotion of diversity and inclusion in public administrations. Increasing the explicability of AI decisions through a more transparent and traceable decision-making process and keeping humans involved are also integral to ensuring the legality and reviewability of administrative AI decisions.


[1] Madalina Busuioc, ‘Accountable Artificial Intelligence: Holding Algorithms to Account’ (2020) Public Administration Review 81(5), 825-836 <https://doi.org/10.1111/puar.13293> accessed 20 January 2024.

[2] Case C-265/95 Commission v France [1997] ECLI:EU:C:1997:595, paras 30–32.

[3] Johan Egbert Korteling and others, ‘Human- versus Artificial Intelligence’ (2021) Frontiers in Artificial Intelligence 4 <https://doi.org/10.3389/frai.2021.622364> accessed 20 January 2024.

[4] Mehmet Metin Uzun and others, ‘Big Questions of Artificial Intelligence (AI) in Public Administration and Policy’ (2022) SİYASAL: Journal of Political Sciences 31(2), 423-442 <https://iupress.istanbul.edu.tr/en/journal/jps/article/big-questions-of-artificial-intelligence-ai-in-public-administration-and-policy> accessed 20 January 2024.

[5] Susan G. Hadden, ‘The Future of Expert Systems in Government’ (1989) Journal of Policy Analysis and Management 8(2), 203-208 <https://doi.org/10.2307/3323379> accessed 20 January 2024.

[6] Thomas J. Barth and Eddy Arnold, ‘Artificial Intelligence and Administrative Discretion: Implications for Public Administration’ (1999) The American Review of Public Administration 29(4) <https://doi.org/10.1177/02750749922064463> accessed 20 January 2024.

[7] Anneke Zuiderwijk and others, ‘Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda’ (2021) Government Information Quarterly 38(3) <https://doi.org/10.1016/j.giq.2021.101577> accessed 20 January 2024.

[8] Justin B. Bullock, ‘Artificial Intelligence, Discretion, and Bureaucracy’ (2019) The American Review of Public Administration 49(7) <https://doi.org/10.1177/0275074019856123>; Yogesh K. Dwivedi and others, ‘Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy’ (2021) International Journal of Information Management 57 <https://doi.org/10.1016/j.ijinfomgt.2019.08.002>; Slava Jankin Mikhaylov and others, ‘Artificial intelligence for the public sector: opportunities and challenges of cross-sector collaboration’ (2018) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376(2128) <https://doi.org/10.1098/rsta.2017.0357> accessed 20 January 2024.

[9] Samar Fatima and others, ‘National strategic artificial intelligence plans: A multi-dimensional analysis’ (2020) Economic Analysis and Policy 67, 178-194 <https://doi.org/10.1016/j.eap.2020.07.008> accessed 20 January 2024. Among others, see also C. Coglianese and D. Lehr, Regulating by Robot: Administrative Decision Making in the Machine Learning Era, in The Georgetown Law Journal, 1156-1160; D. Keats Citron and F. Pasquale, The Scored Society: Due Process for Automated Predictions, in Washington Law Review, Vol. 89, 2014, 1 ss.; G. Avanzini, Decisioni amministrative e algoritmi informatici, Napoli, 2019, 22; B. Raganelli, Le decisioni pubbliche al vaglio degli algoritmi, Federalismi.it, 2020; A.I. Ogus, Regulation: Legal Form and Economic Theory, Hart Publishing, Oxford-Portland, 2004; L. Viola, L’intelligenza artificiale nel procedimento e nel processo amministrativo; E. Picozza, Intelligenza artificiale e diritto, in Giur. It., 2019, 7, pp. 1761 ss.; S. Crisci, Intelligenza artificiale ed etica dell’algoritmo, in Foro Amm., 2018, 1787 ss.; J. Palma Méndez and R. Marín Morales, Inteligencia artificial, Madrid, 2011, 83 ss.; M. Zalnieriute, L. Bennett Moses and G. Williams, The Rule of Law and Automation of Government Decision Making, University of New South Wales Law Research Series, 6 ss.; A. Amidei, Intelligenza artificiale e diritti della persona: le frontiere del “transumanesimo”, in Giur. It., 2019, fasc. 7, pp. 1658-1670; A. Amidei, Intelligenza artificiale e product liability: sviluppi del diritto dell’Unione Europea, in Giur. It., 2019, fasc. 7, pp. 1715-1726; D. Amoroso and G. Tamburrini, I sistemi robotici ad autonomia crescente tra etica e diritto (Increasingly autonomous robotic systems between ethics and law: what role for human control), in BioLaw Journal – Rivista di BioDiritto, 2019, fasc. 1, pp. 19 ss.; A. Celotto, Come regolare gli algoritmi. Il difficile bilanciamento tra scienza, etica e diritto, in Analisi Giuridica dell’Economia, 2019, fasc. 1, pp. 47-60.

[10] Naomi Aoki, ‘An experimental study of public trust in AI chatbots in the public sector’ (2020) Government Information Quarterly 37(4) <https://doi.org/10.1016/j.giq.2020.101490>; Aggeliki Androutsopoulou and others, ‘Transforming the communication between citizens and government through AI-guided chatbots’ (2019) Government Information Quarterly 36(2) <https://doi.org/10.1016/j.giq.2018.10.001> date accessed 20 January 2024.

[11] Zuiderwijk and others (n 7); Maria Nordström, ‘AI under great uncertainty: implications and decision strategies for public policy’ (2022) AI & Society 37, 1703-1714 <https://doi.org/10.1007/s00146-021-01263-4> date accessed 20 January 2024.

[12] Paul Henman, ‘Improving public services using artificial intelligence: possibilities, pitfalls, governance’ (2020) Asia Pacific Journal of Public Administration 42(4), 209-221 <https://doi.org/10.1080/23276665.2020.1816188> date accessed 20 January 2024.

[13] Ibid.

[14] João Reis and others, ‘Impacts of Artificial Intelligence on Public Administration: A Systematic Literature Review’ (2019) 14th Iberian Conference on Information Systems and Technologies (CISTI) IEEE <https://doi.org/10.23919/CISTI.2019.8760893> date accessed 20 January 2024.

[15] Helen Margetts, ‘Rethinking AI for Good Governance’ (2022) Daedalus 151(2), 360-371 <https://doi.org/10.1162/daed_a_01922> date accessed 20 January 2024.

[16] Melanie Fink and Michele Finck, ‘Reasoned A(I)dministration: Explanation Requirements in EU Law and the Automation of Public Administration’ (2022) European Law Review 47(3), 376-392 <https://hdl.handle.net/1887/3439725> date accessed 20 January 2024.

[17] Madalina Busuioc, ‘Accountable Artificial Intelligence: Holding Algorithms to Account’ (2020) Public Administration Review 81(5), 825-836 <https://doi.org/10.1111/puar.13293> date accessed 20 January 2024.

[18] Lilian Mitrou and others, ‘Human Control and Discretion in AI-driven Decision-making in Government’ (2021) 14th International Conference on Theory and Practice of Electronic Governance <https://doi.org/10.1145/3494193.3494195> date accessed 20 January 2024.

[19] Rebecca Williams, ‘Rethinking Administrative Law for Algorithmic Decision Making’ (2021) Oxford Journal of Legal Studies 42(2), 468-494 <https://doi.org/10.1093/ojls/gqab032>; Jennifer Raso, ‘AI and Administrative Law’ in Florian Martin-Bariteau and Teresa Scassa (eds), Artificial Intelligence and the Law in Canada (2021) <https://dx.doi.org/10.2139/ssrn.3734656> date accessed 20 January 2024.

[20] Elizabeth Bishop, ‘Legal issues arising from the use of artificial intelligence in government tax administration and decision making’ (2021) Journal of AI, Robotics & Workplace Automation 1(1), 99-108 <https://hstalks.com/article/6493/> date accessed 20 January 2024.

[21] David Dyzenhaus and others, ‘The Principle of Legality in Administrative Law: Internationalisation as Constitutionalisation’ (2001) Oxford University Commonwealth Law Journal 1(1) <https://doi.org/10.1080/14729342.2001.11421382> date accessed 20 January 2024.

[22] Recommendation CM/Rec(2007)7 of the Committee of Ministers to member states on good administration, art 2.

[23] Ishmael Mugari and Emeka Obioha, ‘Predictive Policing and Crime Control in The United States of America and Europe: Trends in a Decade of Research and the Future of Predictive Policing’ (2021) Social Sciences 10(6), 234 <https://doi.org/10.3390/socsci10060234> date accessed 20 January 2024.

[24] Ibid.

[25] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM/2021/206, para 37.

[26] AI Act (n 25), para 40.

[27] AI Act (n 25), art 5(1)(d).

[28] ‘EU AI Act: first regulation on artificial intelligence’ (2023) European Parliament News <https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> date accessed 20 January 2024.

[29] AI Act (n 25), art 8(1).

[30] AI Act (n 25), arts 9-15.

[31] Mitrou and others (n 18).

[32] Ignacio Criado and others, ‘Algorithmic Transparency and Bureaucratic Discretion: The Case of SALER Early Warning System’ (2020) Information Polity 25(4), 449-470 <https://doi.org/10.3233/IP-200260> date accessed 20 January 2024.

[33] Ibid.

[34] Johan Wolswinkel, ‘Comparative Study on Administrative Law and the Use of Artificial Intelligence and Other Algorithmic Systems in Administrative Decision-Making in the Member States of the Council of Europe’ (2022), para 11 <https://www.coe.int/cdcj> date accessed 20 January 2024.

[35] Francisco Cardona, ‘The Delegation of Administrative Decision-Making Powers: A Tool for Better Public Performance’ SIGMA/OECD <https://www.nispa.org/files/conferences/2004/papers/200405081127270.Cardona.pdf> date accessed 20 January 2024.

[36] Consolidated version of the Treaty on the Functioning of the European Union, art 290.

[37] TFEU (n 36), art 291(2).

[38] Case 9/56 Meroni v High Authority [1958] ECLI:EU:C:1958:7, para 152, subpara 5.

[39] Meroni (n 38), subpara 4.

[40] Meroni (n 38), subpara 5.

[41] Case C-270/12 United Kingdom v Parliament and Council [2014] ECLI:EU:C:2014:18.

[42] Marloes van Rijsbergen and Mira Scholten, ‘The ESMA-Short Selling Case: Erecting a New Delegation Doctrine in the EU upon the Meroni-Romano Remnants’ (2014) Legal Issues of Economic Integration 41(4), 389–406 <https://doi.org/10.54648/leie2014022> date accessed 20 January 2024.

[43] Ana Kozina and others, ‘The Delegation of Executive Powers to EU Agencies and the Meroni and Romano Doctrines’ (2017) <https://hrcak.srce.hr/file/284167> date accessed 20 January 2024.

[44] Wolswinkel (n 34), para 11.

[45] Riccardo Guidotti and others, ‘A Survey of Methods for Explaining Black Box Models’ (2018) ACM Computing Surveys 51(5), 1-42 <https://doi.org/10.1145/3236009> date accessed 20 January 2024.

[46] Matthew U. Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) Harvard Journal of Law & Technology 29(2) <https://ssrn.com/abstract=2609777> date accessed 20 January 2024.

[47] Marijn Janssen and others, ‘Will Algorithms Blind People? The Effect of Explainable AI and Decision-Makers’ Experience on AI-supported Decision-Making in Government’ (2020) Social Science Computer Review 40(1), 1-16 <https://doi.org/10.1177/0894439320980118> date accessed 20 January 2024.

[48] Evrim Tan, ‘Deliverable 3.2.1 A conceptual model of the use of AI and blockchain for open government data governance in the public sector’ (2021) <https://www.researchgate.net/publication/351935084_Deliverable_321_A_conceptual_model_of_the_use_of_AI_and_blockchain_for_open_government_data_governance_in_the_public_sector?channel=doi&linkId=60b0b475458515bfb0ac027f&showFulltext=true> date accessed 20 January 2024.

[49] Recommendation CM/Rec(2007)7 of the Committee of Ministers to member states on good administration, art 10.

[50] Michele Finck, ‘Automated Decision-Making and Administrative Law’ (2020) Max Planck Institute for Innovation & Competition Research Paper No 19-10 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3433684> date accessed 20 January 2024.

[51] Recommendation Rec(2002)2 of the Committee of Ministers to member states on access to official documents, General principle on access to official documents.

[52] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, ‘Algorithmic Accountability for the Public Sector’ (2021), 18 <https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/> date accessed 20 January 2024.

[53] The Charter of Fundamental Rights of the European Union, arts 41 and 47.

[54] Case C-225/19 R.N.N.S. and K.A. v Minister van Buitenlandse Zaken [2020] ECLI:EU:C:2020:951, para 46.

[55] Case C-817/19 Ligue des droits humains ASBL v Conseil des ministres [2022] ECLI:EU:C:2022:491, para 210.

[56] Fink and Finck (n 16).

[57] The EU Charter (n 53), art 41.

[58] Consolidated version of the Treaty on European Union, art 2.

[59] The EU Charter (n 53), art 21.

[60] Convention for the Protection of Human Rights and Fundamental Freedoms, art 14.

[61] Case C-144/04 Mangold v Helm [2005] ECLI:EU:C:2005:709.

[62] Protocol No. 12 to the Convention for the Protection of Human Rights and Fundamental Freedoms, art 1.

[63] Recommendation on good administration (n 22), arts 3 and 4.

[64] Thilo Hagendorff and Katharina Wezel, ‘15 challenges for AI: or what AI (currently) can’t do’ (2020) AI & Society 35, 355-365 <https://doi.org/10.1007/s00146-019-00886-y> date accessed 20 January 2024.

[65] Julia Angwin and others, ‘Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks’ (2016) ProPublica <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> date accessed 20 January 2024.

[66] Ibid.

[67] Serena Oosterloo and Gerwin van Schie, ‘The Politics and Biases of the “Crime Anticipation System” of the Dutch Police’ (2018) <https://ceur-ws.org/Vol-2103/paper_6.pdf> date accessed 20 January 2024.

[68] Paul Mutsaers and Tom van Nuenen, ‘Predictively policed: The Dutch CAS case and its forerunners’ (2020) <https://www.researchgate.net/publication/346593158_Predictively_policed_The_Dutch_CAS_case_and_its_forerunners?channel=doi&linkId=5fc8d3a9299bf188d4edb5cd&showFulltext=true> date accessed 20 January 2024.

[69] Ibid.

[70] Melissa Heikkila, ‘The rise of AI surveillance’ (2021) POLITICO <https://www.politico.eu/article/the-rise-of-ai-surveillance-coronavirus-data-collection-tracking-facial-recognition-monitoring/> date accessed 20 January 2024.

[71] Larry Hardesty, ‘Study finds gender and skin-type bias in commercial artificial-intelligence systems’ (2018) MIT News <https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212> date accessed 20 January 2024.

[72] Herwig C. H. Hofmann, ‘General Principles of EU Law and EU Administrative Law’ (2014) <https://orbilu.uni.lu/bitstream/10993/13996/1/book%20-%20eu%20law%20-%20ch%208%20-%20HOFMANN.pdf> date accessed 20 January 2024.

[73] TEU (n 58), art 5(4).

[74] Ibid.

[75] Recommendation on good administration (n 22), art 5.

[76] ECHR (n 60), art 8.

[77] The EU Charter (n 53), art 7.

[78] Recommendation on good administration (n 22), art 9.

[79] TFEU (n 36), art 16(1).

[80] Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data [1981] European Treaty Series 108.

[81] Convention 108, art 5.

[82] Ibid.

[83] Convention 108, art 8.

[84] Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data [2018] Council of Europe Treaty Series 223.

[85] Ibid, art 9(1)(a).

[86] Protocol amending Convention 108, art 9(1)(c).

[87] Protocol amending Convention 108, art 9(1)(d).

[88] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[89] Graham Greenleaf, ‘Renewing data protection Convention 108: The CoE’s “GDPR Lite” initiatives’ (2016) Privacy Laws & Business International Report, 142 <https://www.austlii.edu.au/au/journals/UNSWLRS/2017/3.pdf> date accessed 20 January 2024.

[90] GDPR, art 6.

[91] GDPR, art 22(1).

[92] GDPR, art 22(2)(a), (b), (c).

[93] GDPR, art 22(3).

[94] GDPR, art 15(1)(h).

[95] GDPR, art 22(2)(a).

[96] GDPR, art 22(2)(b).

[97] GDPR, art 22(2)(c).

[98] GDPR, art 4(11).

[99] GDPR, art 7(4).

[100] GDPR, recital 43.

[101] GDPR, art 5(1)(a).

[102] GDPR, art 5(1)(a)-(f), (2).

[103] GDPR, art 5(1)(a).

[104] GDPR, art 5(1)(c).

[105] GDPR, art 5(1)(b).

[106] GDPR, art 5(1)(e).

[107] GDPR, recital 50.

[108] Ibid.

[109] S. and Marper v. the United Kingdom App nos 30562/04 and 30566/04 (ECHR, 4 December 2008).

[110] S. and Marper v. the United Kingdom, para 125.

[111] S. and Marper v. the United Kingdom, para 103.

[112] Ibid.

[113] S. and Marper v. the United Kingdom, para 112.

[114] Ibid.

[115] Gaughran v. the United Kingdom App no 45245/15 (ECHR, 13 June 2020).

[116] Gaughran v. the United Kingdom, para 96.