
AI biases and its consequences on taxation

This is a chapter from the book: "Taxes Crossing Borders (and Tax Professors Too) - Liber Amicorum Prof Dr R.G. Prokisch".

Published on Oct 14, 2022

N. Kerinc LL.M. 1 and M. Serrat Romaní 2

1. Introduction

The present contribution to Prof. Rainer Prokisch's Liber Amicorum is a direct consequence of his mentorship of the two authors. Prof. Prokisch is aware of the impact that technology is having on all layers of taxation, from the local level to the international one, making the intersection of technology and taxation a necessary field of study in order to learn how to get the most out of it and improve the functioning of the taxation system as a whole. Artificial Intelligence (AI), robotics and other potentially disruptive technologies are becoming essential tools for the detection of fraudulent practices as well as for improving levels of tax compliance by taxpayers. Surveys show that the use of AI systems in particular is becoming more and more attractive for tax authorities. Depending on the algorithmic model implemented, and based on a vast amount of data about taxpayers, AI can help to increase tax compliance by taxpayers and to prevent cases of tax evasion and tax fraud. Despite the numerous advantages that AI might entail, there are also drawbacks. The objective of this contribution is to delve into one of the main issues the application of AI is experiencing: the biases in the algorithms and their consequences.

The present contribution starts with a section (section 2) in which the authors give an insight into what AI is and what benefits it entails, followed by a section (section 3) that sheds light on the different types of biases in AI and their characteristics. As a further step, section 4 elaborates on the consequences of such biases within the European Union (EU). In that regard, this section specifically covers the principle of non-discrimination, the gathering of more information than is foreseeably relevant, the discrepancies with the framework as set forth by the GDPR, as well as the principle of proportionality, followed by the EU's response. We end this contribution with a conclusion.

2. The use of AI in taxation – What is AI, examples of use in taxation and its benefits

More and more governmental institutions use technological advancements and the gathering of tremendous amounts of data to improve their functioning in the private and public sectors.3 This also applies to tax authorities, who implement AI systems with the aim of ensuring the effectiveness and efficiency of their tax systems. According to the OECD report 'Tax Administrations 2021: Comparative Information on OECD and other Advanced and Emerging Economies', the tax administrations of 40 states already leverage the use of AI or envisage doing so in the short term.4

Despite the increasing popularity of AI in the public and private sector, a discussion amongst stakeholders from various backgrounds redresses the balance of automation and sets the benefits and opportunities of AI alongside its undoubted drawbacks and challenges. Before discussing the disadvantages of AI in taxation in the following sections, this part presents examples of the use of AI by tax administrations and its beneficial effects.

In order to appreciate the advantages of AI, it is necessary to grasp what AI is. Influenced by science fiction books5 and movies6, the term 'artificial intelligence' often creates the idea of a humanoid robot which moves, acts, and thinks like a human. In actuality, however, it is much more multifaceted than that. AI is "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings".7 Overall, the term is used to describe the development of systems which are furnished with the "intellectual processes characteristic" of humans, for instance the capability to reason and to learn from previous experiences.8 Since the development of AI has not yet sufficiently matured, its potential, its resulting effects on economies and societies, and therefore its meaning, remain uncertain. Even though the unpredictable potential of AI makes it difficult to define the term as such, there appears to be general consensus with respect to the decisive characteristics of AI.9

The term 'artificial intelligence' was coined at the Dartmouth conference organized by John McCarthy10 in 1956. He defined AI as "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable".11 Since then, the definition of the term AI has undergone several shifts, mostly revolving around computers which simulate intelligent behavior.

AI pioneer Marvin Minsky defined AI as "[…] the science of making machines do things that would require intelligence if done by men".12 Later, in 2009, Stuart Russell and Peter Norvig elaborated a more precise definition of AI, differentiating between the rationality, thinking and acting capabilities of computer systems.13

The textbook 'Artificial Intelligence: A Modern Approach' became one of the leading conceptual views of AI.14 According to that view, AI systems consist of three components: sensors, operational logic, and actuators. By means of sensors, the underlying system collects vast amounts of data, for instance texts and pictures. Its operational logic processes the collected data and provides instructions for the action that needs to be taken. Additionally, this element analyses the given situation and calculates the next steps in the form of recommendations, predictions, or decisions to achieve a desirable outcome.15 The actuator physically applies the instructions and changes the environment. All three components together are described as an intelligent agent. The intelligent agent affects its environment by taking actions based on the perceptions it receives from the environment. Russell and Norvig's approach to defining AI allows for the differentiation of various fields of AI, for instance computer vision, speech processing, natural language understanding, reasoning, knowledge representation, learning, and robotics.16
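To make the sensor / operational logic / actuator triad more tangible, the following minimal Python sketch models an intelligent agent in a purely hypothetical tax setting; the class, its field names and the flagging threshold are illustrative assumptions, not a description of any real system.

```python
# A minimal sketch of the sensor / operational-logic / actuator view of an
# intelligent agent. All names and thresholds are hypothetical.

class IntelligentAgent:
    def sense(self, environment: dict) -> dict:
        """Sensor: collect raw observations from the environment."""
        return {"declared_income": environment["declared_income"],
                "reported_expenses": environment["reported_expenses"]}

    def decide(self, observation: dict) -> str:
        """Operational logic: turn observations into a recommendation."""
        ratio = observation["reported_expenses"] / observation["declared_income"]
        return "flag_for_audit" if ratio > 0.9 else "accept_return"

    def act(self, decision: str, environment: dict) -> None:
        """Actuator: apply the decision back to the environment."""
        environment["status"] = decision

environment = {"declared_income": 50_000, "reported_expenses": 48_000}
agent = IntelligentAgent()
agent.act(agent.decide(agent.sense(environment)), environment)
print(environment["status"])  # flag_for_audit
```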

Just like its scope and functions, the application of AI in the field of taxation creates manifold possibilities. With the aim of transforming sets of data into assets of knowledge, the impact of AI reaches from tax management to the interaction between tax administrations and taxpayers.17

Thus, for instance, virtual conversational assistants and chatbots are used to assist taxpayers with inquiries regarding the completion of their tax return or technical issues. These AI systems furthermore entail the functions of data sharing and data matching.18 Besides that, AI is implemented to detect tax fraud patterns in the case selection for upcoming audits. It prevents breaches by means of risk analysis and sometimes even carries out the audits itself.19 Some of the benefits of AI systems in taxation are the reduction of mistakes in tax practices and of processing costs, the promotion of voluntary tax compliance by taxpayers, and, ultimately, an increase in the amount of tax revenues collected. In the following, some specific examples of the use of AI in taxation are presented.

AI combined with the Internet of Things (IoT), data analysis and data analytics applied to a vast amount of taxpayer information offers many advantages in solving administrative tasks as well as in decision-making processes.20 Besides enabling increased efficiency through the automation of time-consuming routine tasks such as the upload of documents and their classification, AI entails the benefit of a more effective tax collection.21 In comparison to humans, AI systems process increasing amounts of economic information much faster. They categorize this information more precisely, which makes them capable of identifying situations of non-compliance with the law and, thus, of counteracting tax fraud more efficiently. Their capability to analyze highly complex situations, combined with their objectivity, decreases the risk of erroneous assessments.22

Secondly, the processing of real-time data and the use of coordinated algorithms enable the implementation of deep learning systems. Based on such systems, tax administrations can easily detect irregularities in transactions of taxpayers. This can help to identify cases of tax evasion and ensure the collection of sufficient taxes.23

A third benefit of AI systems in taxation is their ability to create detailed profiles of taxpayers. In this regard, the previous and current conduct of individuals can be analyzed to predict their future behavior. The application of machine learning to electronic invoice programs makes it possible to track the expenditures of taxpayers and to identify their consumption patterns.24
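As an illustration of how such consumption-pattern analysis might look, the sketch below applies an off-the-shelf anomaly detector to synthetic invoice data. The data, the feature choice and the use of scikit-learn's IsolationForest are assumptions made for the example only, not the method of any actual tax administration.

```python
# Hypothetical sketch: flagging unusual expenditure patterns in synthetic
# electronic invoice data with an off-the-shelf anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)
# Columns: monthly spend on [groceries, travel, luxury goods] per taxpayer.
typical = rng.normal(loc=[400, 150, 50], scale=[60, 40, 20], size=(500, 3))
outlier = np.array([[400, 150, 5_000]])  # one unusually large luxury spend
invoices = np.vstack([typical, outlier])

detector = IsolationForest(contamination=0.01, random_state=0).fit(invoices)
flags = detector.predict(invoices)  # -1 marks a suspected anomaly
print(np.where(flags == -1)[0])     # indices of flagged taxpayers
```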

Furthermore, the ability of AI systems to conduct any kind of calculation and evaluate complex figures can be used in many functions of the tax administration. For instance, systems can predict revenues and increase the overall effectiveness and efficiency of tax systems.25 This, in turn, can have a positive impact on government expenditure planning.

As the examples show, there are many advantages related to the use of AI in taxation. It can improve communication between tax administrations and taxpayers.26 The capabilities of AI systems can ensure increased tax compliance and help fight tax evasion and fraud. Consequently, tax systems can gain in effectiveness and efficiency in collecting tax revenues.27

3. Biases in AI: types and characteristics

AI is taking a predominant role in decision-making processes of all kinds and at all hierarchical levels. Such algorithmic decision-making processes include different types of big data and even predictive analytics, which use complex algorithms to predict future behaviour by analysing large data sets28. The algorithms are coded and programmed by somebody following certain directives, moral codes or particular principles, or merely guidelines focused on specific targets. Precisely these patterns are potential factors that make such technology not neutral. Therefore, algorithms used for social control, whether by the private sector or, ultimately, by unsupervised governmental power, might jeopardise certain democratic principles29. Ultimately, decisions determined by machine learning systems might have objectionable social consequences. Because humans code them, machine learning models reflect human biases that might lead to undesirable social outcomes such as discrimination and, consequently, might infringe fundamental rights and freedoms. Before analysing what major consequences biases might have, it is first necessary to consider which types of biases one might find.

3.1 Human biases

A type of bias influencing algorithmic decision-making processes is human or societal bias. It is challenging for humans to act with pure neutrality in the process of decision-making. Stereotypes and biases influence all aspects of our lives, from the simplest, most basic decisions to the most complicated and challenging ones.30 Like it or not, human reasoning is affected by multiple cognitive and psychological biases31, due to multiple variables. Cognitive biases are defined as repetitive paths that the mind takes when evaluating, judging, remembering, or making a decision32. Implicit stereotypes and cognitive biases are part of our reasoning.

Artificial intelligence might be helpful in detecting and preventing such inevitable human biases. Human decisions can be unconsciously (or consciously) influenced by personal beliefs and characteristics that determine the way people think. Moreover, there is an abundance of research in behavioural science regarding unconscious human decisions. Choices and decisions can sometimes even be made automatically.33

AI can process and analyse data more accurately and more quickly than us, which helps us to improve our quality of life in many different respects.34 Moreover, algorithms might even potentially reduce the consequences of human biases in decision-making, improve prediction and even be used to identify human biases.35 However, despite these technological advantages that AI can bring to improve fairness and, especially, to protect equality, it has flaws that lead to biased forms of AI, mainly because of human influence. The same way we acquire our biases from our interaction with the world, AI (mathematical algorithms designed and coded by us) learns its biases from us.

The influence of human biases on algorithmic biases occurs when training data falls into common stereotypes and reflects prejudices, latent or explicit, in the population36. If cultural influences or stereotypes are not corrected in machine learning models when collecting or processing data, the outcome will be a programme that still follows the same stereotypes marked by the data that fed it. However, identifying such biases might sometimes be difficult, since there might be a lack of awareness of the bias due to the automaticity of stereotypes.37

Examples of prejudicial biases are commonly found in facial analysis software. A rudimentary and extreme example concerns the assumption that all nurses are women. If an algorithm is exposed to millions of images of people at work, and taking into account that in Europe and the United States around 90% of nurses are female versus 10% male38, the algorithm, once trained, will identify the image of a nurse with a woman, and will most probably conclude that all nurses are female. The opposite can happen with all those professions traditionally performed by men or popularly associated with men.39
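A toy sketch of that mechanism: a naive 'model' trained on the cited 90/10 split simply reproduces the majority label for every future input (the data is synthetic and purely illustrative).

```python
# Toy illustration of the nurse example: a majority-label "model" trained
# on a 90/10 gender split predicts the majority class for every nurse.
from collections import Counter

training_labels = ["female"] * 90 + ["male"] * 10  # mirrors the cited split

def predict_gender_of_nurse(labels: list[str]) -> str:
    # The naive "model" outputs the most frequent label it has seen.
    return Counter(labels).most_common(1)[0][0]

print(predict_gender_of_nurse(training_labels))  # female, every time
```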

Another well-known case of human biases that ended up influencing an algorithm is the research experiment Microsoft conducted with the self-learning AI Tay.40 Sinders noted on Tay: "People like to kick the tires of machines and AI, and see where the fall off is. People like to find holes and exploit them, not because the internet is incredibly horrible (even if at times it seems like a cesspool); but because it's human nature to try to see what the extremes are of a device. People run into walls in video games or find glitches because it's fun to see where things break".41 For this reason, before training the algorithms, it is imperative to thoroughly, honestly and openly question which preconceptions might currently exist and to actively hunt for how those biases might manifest themselves in data42 when designing the algorithms.

3.2 Data biases

3.2.1 Introduction to data biases

Apart from human biases, there are other types of biases which are the result of training algorithms with biased data. Data biases can be classified in different ways depending on the moment at which they occur during the training process.

3.2.2 Sample biases

Sample biases occur because of a flaw in the selection process of the data. Generally, when collecting samples, whatever the discipline, the aim is to obtain a representation of the data closest to reality. Sample biases in AI are those biases that occur when the collected or training datasets do not reflect the reality for which the model is set up. A notorious example of sample bias in AI happened at Amazon recently. Amazon had been developing recruitment software that could go through the curricula vitae of different candidates. In theory this sounded very efficient. In practice, however, the AI seemed to have a serious problem with women: it emerged that the algorithm had been programmed to replicate existing hiring practices, meaning it also replicated their biases.43 The issue occurred because the samples of collected data no longer corresponded to current societal practices.
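The mechanism can be reduced to a few lines: a model that learns base rates from an unrepresentative historical sample keeps projecting that sample onto a changed reality. All figures below are invented for illustration.

```python
# Sketch of sample bias: a "model" fitted on a skewed historical sample
# replicates the past instead of the population it now faces.
historical_hires = {"men": 95, "women": 5}       # biased training sample
current_applicants = {"men": 50, "women": 50}    # the present-day reality

def hire_probability(group: str) -> float:
    # The model merely reproduces the base rates of its training sample.
    return historical_hires[group] / sum(historical_hires.values())

for group in current_applicants:
    print(group, hire_probability(group))
# men 0.95, women 0.05: the sample, not the applicants, drives the outcome
```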

3.2.3 Exclusion biases

This type of bias happens mostly not when collecting the data but when processing them44. The exclusion bias occurs when some data from the data set are excluded and, thus, not taken into account during processing. An example of exclusion bias happens when certain features are disregarded because they are considered not relevant for the software.45 We can find an example of such bias at Amazon, again. Amazon's profitability model for the US (algorithmically) excluded a series of neighbourhoods from the same-day Prime delivery service. In order to fit into the same-day delivery programme, a neighbourhood had to match three factors: have a sufficient number of Prime members, be near a warehouse, and have sufficient people willing to deliver to that zip code. The algorithm was trained to cross the data and exclude from the Prime service the results that did not match these criteria. However, while these criteria made sense in terms of profitability, it turned out that the neighbourhoods excluded from the model coincided with either lower-income neighbourhoods or neighbourhoods primarily populated by black citizens. Amazon was careless enough to exclude such sensitive data from the software, and "[s]imple correlation between apparently neutral features can then lead to biased decisions"46, which can end up having adverse effects on the image of the company and, therefore, translate into economic losses.
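The following sketch, with an entirely fictional data frame, shows how crossing the three apparently neutral criteria can end up excluding precisely the neighbourhoods that share a protected characteristic, even though that characteristic never enters the rule.

```python
# Sketch of exclusion bias: neutral eligibility criteria whose result
# correlates with a protected characteristic. Data is fictional.
import pandas as pd

neighbourhoods = pd.DataFrame({
    "zip":            ["A", "B", "C", "D"],
    "prime_members":  [900, 850, 200, 150],
    "near_warehouse": [True, True, True, False],
    "couriers":       [40, 35, 5, 3],
    "majority_black": [False, False, True, True],  # never used by the rule
})

eligible = (
    (neighbourhoods["prime_members"] >= 500)
    & neighbourhoods["near_warehouse"]
    & (neighbourhoods["couriers"] >= 10)
)
# The excluded zips coincide with the protected group, although the
# protected column itself never entered the eligibility rule.
print(neighbourhoods.loc[~eligible, ["zip", "majority_black"]])
```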

3.2.4 Algorithmic biases

Algorithmic biases can be defined as those errors that make computer systems produce unfair outcomes. There are many reasons for algorithmic biases to occur, yet the most common ones relate to mistakes in the design of the algorithm (code) and to mistakes in the collection of data or the way these data are selected and processed. Most algorithmic biases have a human origin. As Kirkpatrick states: "Algorithms simply present the results of calculations defined by humans using data that may be provided by humans, machines, or a combination of the two (at some point during the process), they often inadvertently pick up the human biases that are incorporated when the algorithm is programmed, or when humans interact with that algorithm."47

A well-known case of this type of bias is the COMPAS case. The algorithm used by the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software produced predictions that were racially biased against black defendants.48 When checking how many of the previously convicted offenders were charged with the same new crimes in the following years, with regard to violent crime only 20 percent of the people predicted to commit violent crimes actually went on to do so.49

Not only were the overall predictions found to be inaccurate, but also, when forecasting possible re-offenders, white defendants were mislabelled as low risk more often than black defendants50. The algorithm almost doubled the chances of black defendants, compared to white defendants, of being labelled future criminals. Algorithms sometimes use specific data that can be 'proxies for race'.51 This example highlights the relevance of a well-focused study of which data need to be used for training. This case showed how an algorithm that tries to reflect our world accurately also reflects our human biases.
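The kind of check that exposed this bias can be written as a comparison of false positive rates across groups, i.e. the share of defendants who did not reoffend but were nevertheless labelled high risk. The counts below are illustrative stand-ins, not the actual COMPAS figures.

```python
# Sketch of a group-wise false positive rate comparison, the fairness
# check behind the COMPAS findings. Counts are invented for illustration.
def false_positive_rate(high_risk_no_reoffend: int, total_no_reoffend: int) -> float:
    return high_risk_no_reoffend / total_no_reoffend

groups = {
    "black defendants": false_positive_rate(805, 1795),
    "white defendants": false_positive_rate(349, 1488),
}
for group, fpr in groups.items():
    print(f"{group}: {fpr:.0%} mislabelled as high risk")
# A large gap between the two rates is what signals the algorithmic bias.
```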

4. Consequences of AI biases within the EU

4.1 Introduction

The European Commission believes that the main pillars for building trustworthy AI start with common AI ethical principles that must be based on the fundamental rights enshrined in the EU Treaties, the EU Charter and international human rights law.52 In that sense, the European Union High-Level Expert Group on Artificial Intelligence has suggested that machine learning models should follow a series of principles, which closely mirror those of the OECD.53 The principles need to comply with three major characteristics: lawfulness, robustness and ethics. In a nutshell, AI systems have to respect all applicable laws and regulations, take into account and respect ethical principles and values, and be technically solid while taking into account the social environment.54 Not all research and not all purposes of building AI are valid, and not all ways of codifying a machine learning model are accepted. The European Commission intends to delimit the interests and objectives driving the design of an algorithm. The European Commission considers that substantive fairness does not only consist of ensuring an equal and just distribution of the wealth derived from AI, but should also include the avoidance of unfair biases that might lead to undesirable consequences endangering some core principles of EU law. The European Commission also relates the principle of fairness to the principle of proportionality, which is one of the core principles of EU law.55 AI has to be proportional in its means and objectives.

4.2 Violation of the principle of proportionality

Another principle which must be adhered to in the institutionalization of AI in taxation is that of proportionality.56 The principle of proportionality determines the lawfulness of the use of AI by tax administrations and balances the degree of interference by tax authorities against taxpayers' rights, especially their rights to privacy and data protection.57 As presented above, AI systems are based on a vast amount of data. In tax practice this data consists of personal data of taxpayers. Besides the collection and use of personal data of taxpayers for self-learning systems to return a result in a specific situation, depending on the model implemented, such data could be used to anticipate taxpayers' behavior and assess their risk of non-compliance with tax law.58 Under both circumstances, the processing of data is regulated by the GDPR as soon as the taxpayer is an individual. Under this set of rules, the balancing of taxpayers' rights and interfering actions on behalf of tax authorities is left to the Member States. National legislators' freedom of action in this balancing act is, however, limited by the principle of proportionality as established in Art. 6(3) GDPR, which stipulates that the processing of personal data for purposes of the public interest needs to have a legal basis in either EU or domestic law which is proportionate to the legitimate aim pursued.59

And yet, even though AI systems may help to estimate taxpayers' likelihood of complying with tax law, the results retrieved from the algorithmic model should not serve as the sole basis for profiling or decision-making processes of tax authorities.60 This prohibition, as set out by Art. 22 GDPR, applies to all scenarios in which personal data is used as a basis for profiling taxpayers as well as for automated decision-making processes where such decisions produce legal effects concerning the taxpayer or similarly significantly affect him.61 Even though the term 'significantly' is not clearly defined, it has been further elaborated by the Working Party, which in its guidelines includes, amongst others, decisions which affect a person's access to health services or decisions which affect his or her financial circumstances.62 Art. 22, paragraph 2 GDPR provides for exceptions to the general prohibition. The prohibition does not apply if the decision is necessary for the conclusion or the performance of a contract between taxpayers and tax administrations.63

Furthermore, automated decision-making can be the sole ground for a decision where it is authorized by EU law or the law of the Member State to which the tax authority in question is subject and where that law sufficiently safeguards taxpayers' rights, freedoms, and legitimate interests.64 The third exception, provided in Art. 22(2)(c) GDPR, is based on the taxpayer's explicit consent. Additionally, Art. 23 GDPR allows the scope of the obligations and rights provided for in, amongst others, Art. 22 GDPR to be restricted by legislative measures, as long as such restrictive measures respect the essence of the fundamental rights and freedoms. Moreover, the legal measure which is used to limit taxpayers' rights is required to adhere to the proportionality principle.

In the light of prevalent tax fraud practices, in 2019 the Italian legislator modified its domestic privacy regulations to include the processing of data with the aim of preventing and fighting tax evasion as a measure which is necessary for reasons of substantial public interest. This inclusion would allow for a limitation of the GDPR requirements and still needs to be reviewed under the aspects of adequacy and proportionality.

4.3 Discrimination

The risks related to the misuse of AI systems in taxation require adherence to certain principles which should govern administrative actions to safeguard taxpayers' rights. In the EU, even though it falls under the sovereignty of EU Member States to regulate direct taxation under their domestic laws, such regulations must be in accordance with the fundamental freedoms and principles of EU law.65 One of these principles is that of non-discrimination as embedded in Art. 2 TEU.66 The principle of non-discrimination, in conjunction with the protection of property67 as set forth by the ECHR, has to be considered by national tax authorities in their decision-making in tax law cases. The principle of non-discrimination requires that all taxpayers are treated equally by the law, based on neutral facts and irrespective of personal criteria such as sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation.68 Thus, a difference in treatment is discriminatory if it is not justified by objective and reasonable criteria, i.e., if it does not pursue a 'legitimate aim' or if there is no 'reasonable relationship of proportionality between the means employed and the aim sought to be realized'.69 In other words, a different treatment of taxpayers accords with the non-discrimination principle only if that difference is of relevance for the application of the tax provision in question. Since AI systems are based on hypotheses developed by scientists, they entail a considerable risk of human biases and errors. Where such biases and errors are transferred to, and thus become part of, algorithms, they have an immediate impact on their results.70 One example of algorithmic bias is the case of Tay(bot), an artificially intelligent chatbot released by Microsoft Corporation via Twitter71. Tay served as a trial in the sector of 'conversational understanding', getting smarter and learning to engage with humans through 'casual and playful conversations' with Twitter users.72 After receiving several misogynistic, racist, and pro-Trump messages, the initially neutral chatbot started to repeat the offensive statements of human users, such as 'Hitler was right' and '9/11 was an inside job'.73,74 After being exposed to racist data, Tay became racist itself within less than 24 hours.

The fact that AI systems are inevitably exposed to imbalanced and prejudiced data constitutes a crucial issue in the field of taxation, in that it can easily result in misclassifications. The vastly subjective terminology of tax legislation, resulting in interpretation issues, as well as constantly arising ambiguous tax situations, e.g. due to changing behavioral patterns of taxpayers in the conduct of business and in consumption, which create loopholes in tax law, contribute considerably to the discriminatory treatment of taxpayers. The data which is transferred to algorithms has to be of such a nature as to enable the system to generate generalized conditions of validation. The root problem lies in the difficulty of collecting a substantial amount of error- and bias-free information which can be used as the basis for a reliable and robust model for tax practices.75

A further issue may arise with regard to the application of General Anti-Abuse Rule (GAAR) provisions such as Art. 6 of the Anti-Tax Avoidance Directive. The implementation of AI systems may lead to the preferential tax treatment of some taxpayers while discriminating against others based on criteria which are irrelevant under the GAAR in question. For instance, a small-sized jurisdiction which aims at increasing FDI from big economies may use AI models which treat taxpayers from the envisaged jurisdictions favorably by not applying the denial of tax benefits as stipulated by the applicable GAARs. The favorable treatment vis-à-vis taxpayers from smaller economies, who are denied tax advantages under the GAAR, is a consequence of the use of criteria which are irrelevant in the light of that provision.76

5. The European response to such challenges

In its "White Paper on Artificial Intelligence - A European approach to excellence and trust"77, the European Commission directly relates the potential biases of algorithms to discriminatory outcomes that might lead to the violation of fundamental rights and, therefore, as a last resort, to the infringement of EU primary and secondary law. There is still no case law to help determine the limits on the application of AI within EU law, but in the field of taxation and social security, The Hague District Court (the Netherlands) delivered a landmark judgment referring to the ECHR78.

From a broader perspective outside the EU, Art. 14 of the European Convention on Human Rights (ECHR) states that "the enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status". However, the main issue posed by AI biases that might lead to discriminatory practices is the indirect discrimination produced by AI, since indirect discrimination requires a more significant effort to detect and demonstrate than intentional or direct discrimination.

Indirect discrimination occurs when a practice that seems neutral at first glance ends up discriminating against people of a certain ethnic origin or another protected characteristic.79 This type of discrimination in AI might appear, for instance, even when training data do not contain information about protected characteristics such as gender or race. Suppose the algorithm learns that people living in a particular area with a specific postcode are likely to default on their loans and uses that correlation to predict defaulting. The system then uses what is at first glance a neutral criterion (postcode) to predict defaulting on loans. But suppose that the postcode correlates with racial origin.80 Ultimately, the algorithm that was programmed to take neutral references ends up discriminating on grounds of racial origin. Nevertheless, as Borgesius indicates: "Case law shows that the European Convention on Human Rights prohibits both direct and indirect discrimination."81
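That postcode scenario can be made concrete in a few lines: the model never receives the protected attribute, yet its postcode-based predictions fall overwhelmingly on one group. The default rates and the postcode-to-group mapping below are invented for illustration.

```python
# Sketch of indirect discrimination through a proxy variable: race is
# never an input, but a race-correlated postcode drives the predictions.
import random

random.seed(0)
learned_default_rate = {"1010": 0.05, "2020": 0.40}   # learned per postcode
postcode_to_group = {"1010": "group_a", "2020": "group_b"}  # hidden correlation

def predict_default(postcode: str) -> bool:
    return random.random() < learned_default_rate[postcode]

rejections = {"group_a": 0, "group_b": 0}
for _ in range(1_000):
    for postcode, group in postcode_to_group.items():
        if predict_default(postcode):
            rejections[group] += 1
print(rejections)  # group_b is rejected roughly eight times as often
```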

The text of Article 14 of the ECHR does not provide a definition of what constitutes direct or indirect discrimination; yet, according to the case law of the ECtHR, discrimination is understood as a "difference in treatment of persons in analogous, or relevantly similar situations" which is "based on an identifiable characteristic, or 'status'"82. Indirect discrimination is also covered under the umbrella of the ECHR83, which considers that the elements indicating indirect discrimination are, first, the existence of what appears to be a neutral rule or policy and, second, that such rule or policy affects a specific group of people, defined by one of the protected grounds in Article 14, in a significantly more negative way than those in a similar situation who are not part of the affected group, even where no such effects were intended.84

Turning to The Hague District Court's decision, the Court handed down a landmark judgment on a Dutch AI system called Systeem Risico Indicatie or SyRI (not to be mistaken for Apple's assistant, Siri). SyRI was an AI tool used by the Dutch government to detect several forms of fraud against public administrations, including social security benefits fraud and tax fraud. According to the District Court, the legislation regulating SyRI infringed Art. 8 ECHR on the right to respect for private and family life. The District Court drew the following connection from the right to privacy, via the right to data protection, to the right to non-discrimination: the right to respect for private life in the context of data processing concerns the right to equal treatment in equal cases, and the right to protection against discrimination, stereotyping and stigmatization.85

SyRI was programmed to detect abuse of social security benefits and tax fraud by focusing specifically on 'problem districts', with the aim of increasing the chances of discovering irregularities in such areas as compared to other neighbourhoods. The District Court found that the way SyRI processed the data led to unjustified exclusion, stigmatization and discrimination against certain neighbourhoods, contributing to stereotyping and reinforcing a negative image of the occupants of such areas, even if no risk reports had been generated about them.86 The application of SyRI only in such problem districts could imply that the system was biased, attributing by default the abuse of social and tax benefits to certain profiles of people.

This judgment represented a major advancement regarding indirect discrimination in AI systems, not only within Europe but within the EU, and specifically regarding how risk-management systems can lead to discrimination against certain groups of taxpayers. European non-discrimination law can therefore offer protection against algorithmic indirect discrimination affecting taxpayers, even though the most significant difficulty for applicants is proving the discriminatory treatment arising from AI biases.

6. Conclusions

Just like its scope and functions, the application of AI in the field of taxation creates manifold possibilities. With the aim of transforming sets of data into assets of knowledge, the impact of AI reaches from tax management to the interaction between tax administrations and taxpayers. As one of the core principles of EU law, the principle of proportionality must govern the use of AI by tax administrations. Accordingly, the interference with taxpayers' rights through the collection and processing of personal data by means of AI must be proportionate in light of the legitimate aim. Biases which might arise due to AI being exposed to imbalanced and prejudiced data can lead to misclassifications and a difference in treatment of taxpayers. In this regard, taxpayers' rights are safeguarded by the principle of non-discrimination. In order to identify biases in time and minimize the risks, internal and external auditing and supervision, during the process of design, data collection and training as well as after the AI is implemented, are important to prevent and promptly detect any kind of bias that might result in discriminatory outcomes. These ex-ante measures should be accompanied by updated regulations to guarantee legal certainty in terms of non-discrimination and equality.

It is clear that the existing legal standards are a good starting point for claiming fairness and equality in court. International conventions and most domestic constitutions and statutory regulations contain legal precepts to prevent direct or indirect discrimination. However, the complexity of AI leaves legal loopholes that still allow room for cases of indirect discrimination, since proving the existence of biases turns out to be difficult. Perhaps it is time to create a comprehensive one-size-fits-all AI statutory regulation and more specific guidelines on internal and external accountability processes to identify and prevent discriminatory biases.
