
Chapter 9. Summary of conclusions and final remarks

Published on Dec 13, 2024

9.1. Introduction

The proliferation of dark patterns in digital environments presents a challenge to the level of protection we afford to consumers online, the way we design regulation to tackle harms arising out of socio-technical artefacts and the way we detect these harms on digital markets. This book set out to investigate how we could regulate the use of dark patterns effectively within the realm of EU consumer protection law, considering both legal and technical solutions. This chapter answers this overarching research question by summarising the main conclusions with regard to the five sub-questions posed in Chapter 1:

  1. Do dark patterns need to be regulated?

  2. How could dark patterns be regulated effectively? What is the optimal balance between technology neutrality and technology specificity in technology regulation?

  3. How does the current substantive legal framework apply to dark patterns, and is it effective at tackling them?

  4. How could the effectiveness of the substantive legal framework be improved?

  5. What lessons can be learnt from the interaction of the substantive legal framework and computational methods for the task of automatically detecting unlawful dark patterns?

These questions were answered through the prism of the Shopping dark patterns found by Mathur et al.1 in their large-scale measurement of dark patterns on e-commerce websites, as well as the Unfair Commercial Practices Directive (UCPD) and the Consumer Rights Directive (CRD).

This chapter first summarises, in section 9.2, the main conclusions with regard to the five sub-questions posed in Chapter 1. The chapter then reflects on the book’s contribution (section 9.3) and its limitations and future research pathways (section 9.4), and then offers some final remarks (section 9.5).

9.2. Summary of conclusions

9.2.1. Do dark patterns need to be regulated?

Whether dark patterns ought to be regulated or not is a question that is tightly linked to how we problematise them. Chapter 2 frames dark patterns as a socio-technical artefact, i.e. a potential product of user experience optimisation, a process that is subject to technical, economic and organisational considerations. As Chapter 3 shows, dark patterns may also be a vehicle for the behavioural exploitation of the users of various digital products, including e-commerce websites, for various purposes: collecting personal data (Privacy), making services more engaging (Engagement) and getting users to spend more (Shopping); I focus on the last-mentioned purpose. While ‘dark patterns’ is a new(er) term, the behavioural exploitation of consumers as a means of market manipulation is old(er) news. Dark patterns are therefore not a new problem; they reflect a socio-technical change in the scale of an older problem, and the need to regulate them can be assessed on the basis of well-established grounds for intervention.

As Chapter 4 explains, there are two main conceptual frames for assessing whether regulators ought to be concerned about the older problem (of consumers’ behavioural exploitation): welfarist (behavioural law and economics) and autonomist (autonomy theory). On a welfarist view, dark patterns could lead to both traditional market failures, in the form of information asymmetries and transaction costs, and behavioural market failures. On an autonomist view, we may question whether there is any room for autonomy in digital environments that are characterised by pervasive manipulation. Both frames, therefore, offer support for intervention, but the welfarist perspective warns us that, insofar as individual dark patterns are concerned, the devil is in the detail, and we ought to proceed based on empirical evidence of welfare costs.

9.2.2. How could dark patterns be regulated effectively? What is the optimal balance between technology neutrality and technology specificity in technology regulation?

It is famously difficult to regulate behavioural phenomena, let alone those that come packaged as socio-technical artefacts. The risks of regulators failing in this regard are therefore exacerbated. To try to minimise risks while accounting for dark patterns’ double-sided nature as examples of behavioural exploitation as well as socio-technical artefacts, in Chapter 4 I draw on insights from (behavioural) law and economics, autonomy theory and the regulatory branch of law-and-technology literature to assess how we could regulate dark patterns effectively. Behavioural law and economics cautions us that information remedies are unlikely to do a good job in this regard, and if we accept the autonomist argument warning us that autonomy is illusory in digital markets, it becomes possible to consider more direct regulation of traders’ conduct by means of prescriptions or prohibitions of commercial practices as a viable policy option. When we look at this prescription through the lens of the theory of socio-technical change, it transpires that we may want to make this standard-setting exercise (more) technology specific if we seek to increase compliance with technology regulation. Designing digital products such as user interfaces so that they are compliant with regulatory requirements is an interpretative exercise. Technology-neutral regulation may leave too much to the imagination and the pursuit of private interests by ill-intended and highly capable regulatees. Smaller, less-resourced players may have difficulties not only with performing this translation, but also with discriminating between the non-compliant third-party solutions they use for the development and maintenance of their digital presence, which are not always equally adjustable. We may therefore want to regulate in a more technology-specific manner. 
Even if regulation were to become more technology specific, there is still a role for technology neutrality in preventing circumvention attempts by resourceful regulatees. As to the timing of this regulation, given that innovation in terms of digital product design – such as the design of the consumer-facing interfaces of e-commerce websites – is a continuous process, the time when we know enough about dark patterns to regulate them in one go may never come. What we can do instead is regulate in a risk-based, incremental fashion, starting with the practices which are most harmful, so as to both protect consumers and not over-restrict beneficial innovation. Here, technology-neutral regulation can serve a gap-filling purpose as new instances of harmful innovation arise.

Ultimately, however, what may matter the most in the long run for the effectiveness of techno-regulation is the regulatory environment’s ability to speedily respond to regulatory disconnection. Regulatory disconnection comes in two flavours: normative and descriptive. Normative disconnection occurs when there has been a shift in our thinking about the values and goals underlying our legal system. Descriptive disconnection occurs when the way we describe technology in regulation is no longer representative of that technology or the way it is developed. In socio-techno-legal landscapes marked by continuous change, regulatory disconnection is a constant risk. The next chapters look at whether we are facing a regulatory disconnection in the way we currently regulate dark patterns in EU consumer law, whether we have the regulatory tools to tackle it now and in the future, and how we may address it while bearing in mind the regulatory design considerations put forward in Chapter 4.

Chapter 5 showed that the current system of protection established by the UCPD and CRD relies heavily on information remedies that may be of use to the average consumer, and is relatively technologically neutral. Chapter 6 then looked at how the current legal framework addresses Shopping dark patterns, and gauged its effectiveness in light of the theoretical framework established in Chapter 4. The good news is that our current system of protection seems to have the flexibility to apply to a wide range of dark patterns; in other words, we are not faced with a legal disconnection. It also has the potential to tackle Deceptive dark patterns rather well, as well as Information-hiding dark patterns insofar as these present instances of pure omission, i.e. complete failure to disclose relevant information to consumers. From here on, things get trickier. Where information is not entirely absent, but rather obscured – and dark patterns provide many ways of achieving this (e.g. colour, font, size, positioning) – our current information presentation requirements do not appear to provide much direction on how to design compliant interfaces or on what should be deemed non-compliant design. By not engaging much with information presentation, we also miss out on an opportunity to require the disclosure of particularly important information (say, on price) in a way that not-so-average consumers will be able to absorb. Beyond the informational dimension, where dark patterns are Restrictive, Covert and/or Asymmetric, the prohibition of aggressive commercial practices in the UCPD offers some potential to tackle instances of extreme pressure, but this prohibition remains underspecified and sparsely addressed in case law 20 years after the adoption of the UCPD. It is also subject to the average consumer test, which means that at least in some instances (which we do not know much about) we may expect consumers to resist (some) commercial pressure. 
Lastly, our current legal framework is unlikely to take issue with the fact that dark patterns generally may exploit cognitive biases – the average consumer does not have cognitive biases. These findings lead me to conclude that we have a regulatory disconnection on our hands. More specifically, the potential gaps in protection that may be the result of the application of the average consumer test and our failure to regulate information presentation for real, rather than average, consumers seem to point to a normative disconnect. Further, the under-specification of information presentation requirements and of the UCPD’s prohibition of aggressive commercial practices, which are the consequences of the technologically neutral way in which we regulate, allows us to see the high rates of non-compliance in digital environments in a new light – that of a descriptive disconnect. Faced with technology-neutral provisions, some regulatees may devise creative ways to sidestep consumer rights through the design of their user interfaces. Others may deny consumers their rights unintentionally, either because they do not know how to comply or because they are victims of the servitisation and platformisation of web development and design, unable to change the default settings of the third-party solutions they use. Technology neutrality may also make enforcers more hesitant to take action for fear of legal challenges, which may create a vicious circle in which the lack of enforcement leads companies to believe they can get away with anything in digital environments. Both the normative and the descriptive disconnection put the effectiveness of our current legal framework at risk. It is also concerning that we do not have regulatory mechanisms in place to deal with the ongoing risk of regulatory disconnection in socio-techno-legal environments: the only way to adapt both the UCPD and the CRD is through the (long) ordinary legislative procedure. 
It may be time to return to the regulatory drawing board.

Having established that there is both a normative and a descriptive disconnect with respect to the regulation of dark patterns in the UCPD and CRD, in Chapter 7 I reviewed a variety of policy proposals that have been put forward in the ongoing Digital Fairness Fitness Check in light of the theoretical framework proposed in Chapter 4, and envisaged how we could improve the effectiveness of the substantive legal framework. While the normative disconnect means that it may be time to rethink the values underlying our consumer protection system, and the body of scholarship calling on policymakers to rethink vulnerability in digital environments is impressive,2 I am sceptical about both the prospects and desirability of achieving this in the current Fitness Check, which is limited in scope to the UCPD, CRD and the Unfair Contract Terms Directive. These questions touch upon the overall level of protection afforded to consumers – which has so far been defined in ‘average consumer’ terms – throughout the wider EU consumer acquis, and the remedies we use to ensure this level of protection (information remedies). We may also need a more thorough rethinking of what we deem acceptable both offline and online – consumers are not only (behaviourally) manipulated online. These considerations lead me to believe that we should tread carefully in amending core provisions to make sure we do not move too fast and break things. This is all the more important given the substantial body of recently adopted digital acquis in the EU, which continues to grow without much attention being paid to the interaction of new and old instruments.

In light of these considerations, we may instead want to regulate dark patterns specifically. That being said, while there is no uniform definition of dark patterns in Human–Computer Interaction (and there may never be one), there is also an undeniable risk that policymakers might miss the mark with a general ban on dark patterns. Turning to the direct regulation of certain aspects of user interface design by prohibiting or prescribing design requirements for individual dark patterns that we already know have the potential to cause substantial consumer financial detriment – Hidden Costs, Hidden Subscriptions and Hard to Cancel – appears to be a safer bet.

Lastly, as discussed in Chapter 4, when it comes to ensuring the continued effectiveness of technology regulation in socio-technical environments marked by continuous change, it is not just policy design that matters; the design of the regulatory environment, and its ability to adapt to changing landscapes of consumer harm specifically, is also important. We could subject consumer protection instruments to periodic legislative reviews as foreseen in the new digital acquis, but we could also opt for faster and possibly more effective mechanisms. These might include targeted amendment procedures that are speedier (delegated acts) for prohibitions and/or leaving the specification of some design prescriptions at EU level to the Commission (implementing acts); involving the industry in (co-)regulation through the New Approach could also be an option. While the latter option is the most efficient, and could also lead to improved effectiveness as the industry is in a better position than policymakers to translate legal requirements into technical ones, this option could raise legitimacy concerns. Ultimately, whatever we do, consumer rights mean nothing if they are not enforced. To enforce them, however, authorities need to first detect infringements on digital markets. This is an aspect in which technology may provide solutions rather than pose problems.

In Chapter 8, I looked at web measurement literature to gauge the technical feasibility of detecting unlawful dark patterns using automated approaches in enforcement authorities’ exercise of their market-monitoring functions, and I probed the relationship between regulatory design and technical feasibility. Two main findings emerged from this exercise.

First, the technological state of the art means that it is possible to collect data about dark patterns from shopping websites at scale using web measurement methods such as crawling and scraping. Researchers have developed techniques to interact in an automated manner with shopping websites in the way a consumer would, adding a product to the virtual shopping cart and checking it out, while automatically extracting information in this process, including the website source code. The source code can reveal not just the presence of a dark pattern, but could also point to its deceptiveness, which is a relevant feature in the eyes of the law. Researchers have developed ways to automatically analyse this data by linking textual and design elements to dark patterns and legal infringements. While most prior studies have investigated Privacy dark patterns and data protection law infringements, and only a few projects target Shopping dark patterns specifically, the studies reviewed in Chapter 8 nevertheless illustrate that there is definitely merit in using computational methods to detect unlawful dark patterns in website data.
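To make this concrete, the following is a minimal sketch – not a reproduction of any of the cited studies’ pipelines – of how source-code analysis can surface both the presence of a countdown timer and a hint of its deceptiveness. All regular expressions and heuristics here are illustrative assumptions; the “fake timer” signal relies on the observation that a deadline computed relative to the page-load time resets on every visit, which suggests the urgency is fabricated.

```python
import re
from html.parser import HTMLParser

# Illustrative patterns only -- real pipelines use far richer classifiers.
COUNTDOWN_TEXT = re.compile(
    r"\b(hurry|offer ends|only\s+\d+\s+left|\d{1,2}:\d{2}(:\d{2})?)\b", re.I
)
# A deadline computed relative to "now" in client-side code resets on
# every page load: a strong hint that the scarcity claim is fabricated.
FAKE_TIMER_JS = re.compile(r"Date\.now\(\)\s*\+|new Date\(\)\.getTime\(\)\s*\+")


class TimerFinder(HTMLParser):
    """Collects page text that looks like a countdown, and all script source."""

    def __init__(self):
        super().__init__()
        self.hits = []
        self.script_src = []
        self.in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if self.in_script:
            self.script_src.append(data)
        elif COUNTDOWN_TEXT.search(data):
            self.hits.append(data.strip())


def audit_page(html: str) -> dict:
    """Report countdown-like text and whether the timer deadline is page-relative."""
    parser = TimerFinder()
    parser.feed(html)
    scripts = "\n".join(parser.script_src)
    return {
        "countdown_texts": parser.hits,
        "deadline_relative_to_load": bool(FAKE_TIMER_JS.search(scripts)),
    }
```

Run against a page whose script sets `var deadline = Date.now() + 600000`, the report flags the timer as page-relative; a timer counting down to a fixed calendar date would not be flagged. The sketch illustrates the chapter’s point that the source code itself can carry legally relevant evidence of deceptiveness.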

Computational data collection and analysis methods have some limitations, however. Automated data collection techniques are vulnerable to websites’ adversarial behaviour and measurement bias, are resource-heavy in multilingual settings (such as the EU-27 market), can only access publicly available data, may struggle to scale up to a large number of websites in conditions of website design variability and may become outdated as design changes over time (design volatility). Automated data-analysis methods are not always entirely accurate either; they also demand substantial resources in multilingual environments and may also become deprecated due to regular or adversarial changes to websites, as well as in the face of a continuously evolving landscape of harm where new dark patterns may emerge (design volatility). Lastly, automated data-analysis methods require measurable indicators of unlawfulness.

Second, the way we design technology regulation has a bearing on the technical feasibility of addressing some of these limitations. Technology-neutral regulation goes hand in hand with design variability and volatility, as it leaves significant room for website operators to design, and to alter the design of, their websites. By not engaging with website structure, a technologically neutral legal framework may also contribute to the issue of data inaccessibility. However, even where infringement-related data can be collected, the lack of measurable indicators of unlawfulness could impede the development of data-analysis methods. We may be able to extract features from websites that can be linked to a dark pattern, such as the position/size/colour/label of a cancellation button, but in order to detect infringements we need to know what the law deems to be compliant and non-compliant cancellation buttons. More technology-specific substantive regulation in the form of design standards and the prohibition of some harmful design practices could help in this regard. To revisit the main research question: in digital environments, we may need technology specificity to provide effective legal and technical solutions to the proliferation of consumer harms stemming from the use of dark patterns.
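A measurable indicator of the kind this paragraph calls for might, under a hypothetical design standard, look like the sketch below: a rule that the ‘cancel’ control may not be markedly less prominent than the ‘subscribe’ control. The 1.25 prominence ratio, the reliance on inline font-size alone and the function names are all invented for illustration – nothing in the current legal framework specifies such thresholds, which is precisely the problem the chapter identifies.

```python
import re
from typing import Optional

# Hypothetical indicator: compare the font size of paired controls.
STYLE_PROP = re.compile(r"font-size\s*:\s*(\d+(?:\.\d+)?)px", re.I)


def font_size_px(style: str) -> Optional[float]:
    """Extract a pixel font-size from an inline style string, if present."""
    m = STYLE_PROP.search(style)
    return float(m.group(1)) if m else None


def symmetric_prominence(subscribe_style: str, cancel_style: str,
                         max_ratio: float = 1.25) -> bool:
    """Hypothetical rule: 'subscribe' may be at most max_ratio times as
    large as 'cancel'. Returns False (flag for manual review) when the
    sizes cannot be measured from the inline styles at all."""
    sub = font_size_px(subscribe_style)
    can = font_size_px(cancel_style)
    if sub is None or can is None:
        return False  # not measurable -> escalate to a human analyst
    return sub / can <= max_ratio
```

For example, a 20px ‘subscribe’ button paired with a 10px ‘cancel’ link fails the check, while 16px against 14px passes. Only once a standard fixes such a threshold does the check become automatable; absent one, the same measurement yields a feature but no verdict.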

9.3. Contribution

From an academic point of view, by building and applying a theoretical framework rooted in the theory of socio-technical change to EU consumer law instruments, this book connects the consumer law and law-and-technology debates on what it means to regulate effectively against fast-changing and large-scale socio-technical sources of consumer harms. While consumer law scholars have written extensively about the normative disconnect in EU consumer law, i.e. the failure to account for consumers’ bounded rationality, this study represents one of the first attempts to systematically analyse EU consumer law as socio-technical regulation. In doing so, it probes the technology-neutral shape of our current legal framework, hypothesises that technology-neutral regulation may be linked to the high rates of non-compliance with EU consumer law instruments in digital environments like e-commerce websites, and offers (more technology-specific) policy options for going forward based on the lessons learnt in this exercise. The book also contributes to the emerging body of literature on digital (consumer law) enforcement by advocating for the use of technology to further rather than hinder consumers’ interests in digital environments through the automation of some of public authorities’ market-monitoring. To that end, the study provides a novel analysis, based on a review of web-measurement literature, of the technical feasibility of automating the detection of unlawful dark patterns in consumer markets, introducing computational methods to a legal audience. In the course of this exercise, the book also shows that the technology-neutral shape of our legal framework may pose problems for the computational task of automatically detecting unlawful dark patterns.
Technology specificity may be a necessary pre-condition for the automation of infringement detection in digital markets by enforcement authorities and, conversely, the necessity of automated compliance monitoring may be a relevant dimension to consider in choosing the design of socio-technical regulation.

As to societal relevance, the legal and technical solutions this book points to could serve as a source of inspiration for policymakers (particularly in the context of the ongoing Digital Fairness Fitness Check) and for enforcement authorities interested in going digital, and so could contribute to bettering consumer protection in digital environments. The main lessons for policymakers in this regard, as discussed in Chapters 7 and 8, are:

  • Addressing the normative disconnect in EU consumer law by reviewing the core technologically neutral provisions of the UCPD may require a review of the entire consumer acquis;

  • A general ban of dark patterns may not deliver on its promises;

  • Some dark patterns, like Hidden Costs, Hidden Subscriptions and Hard to Cancel, are ripe for regulation of a technology-specific nature;

  • Technology specificity is a spectrum; we could weed out undesirable user interface design choices through prohibitions (e.g. prohibiting the placing of material information in drop-down menus) and prescriptions (e.g. requiring that all information be presented in the same font, size and colour on a particular page); provide uniform design standards (e.g. standardised presentations of material information or cancellation buttons); and/or provide uniform, mandatory code for the implementation of these standards; these options reflect different degrees of intrusion into traders’ freedom to conduct business, and require thorough cost–benefit analysis;

  • An effective system of consumer protection against dark patterns requires some mechanisms to be put in place to ensure adaptability to changing socio-technical landscapes of harm.

The main lessons for enforcement authorities can be summed up as follows:

  • The detection of deceptive, clearly unlawful dark patterns (e.g. fake Countdown Timers) can already be automated, as their legal treatment is clear. Web measurement methods can generally be used to extract a wide variety of user interface design features automatically, but whether those features can be mapped to unlawful dark patterns is far less clear. We may need more technology-specific policy to make computational infringement detection a reality;

  • Automation is not a panacea; some practices will still require manual analysis, but automating market monitoring to some degree may free up resources for manual investigations;

  • Automation requires thorough engagement with considerations of procedure and legitimacy, and questions of organisational resources and technological management.

9.4. Limitations and future research

A word of qualification – and caution – is necessary. My investigation focused on the use of Shopping dark patterns on e-commerce websites. My use of dark pattern attributes, and the fact that Shopping dark patterns may also be used on other kinds of websites, e.g. online accommodation booking websites, means that my results may have broader relevance. At the same time, it is important to keep in mind that the landscape of potential digital consumer harms is much, much broader. As Chapter 3 shows, dark patterns may be deployed on a variety of platforms (other than websites) and for a variety of reasons, such as to collect personal data, or to keep users engaged so that they share more personal data and spend more money in the long run. My theoretical framework is relatively technology-neutral: I link dark patterns to the way user interfaces are designed under technological, economic, organisational and legal constraints. My recommendations may therefore also have relevance for other types of dark patterns that manifest on a graphical user interface (GUI). Researchers in the area of data protection have raised the same concerns with regard to the link between technology neutrality and compliance, as seen in Chapters 4 and 7, and many of the sources I cite in support of the technical argument in Chapter 8 are privacy web measurement studies. The argument may also hold beyond the web platform: Kollnig, for example, has argued that the detection of data protection law violations in mobile apps necessitates the introduction of reliable and computable metrics in technology regulation.3

That being said, there is a danger that, by focusing on the regulation and measurement of GUI, i.e. visual dark patterns, we may be losing sight of what is less or not at all visible. Advances in the NLP subfields of speech recognition and text-to-speech in recent years have allowed the voice-assisted consumer technology market to finally gain traction. Voice-controlled interfaces are the primary mode of interaction with devices such as smart speakers. HCI scholars started looking into the manipulative potential of smart devices fairly recently – in 2023, Kowalczyk et al. investigated the use of dark patterns in 57 home IoT devices, some of which featured a voice interaction modality.4 The researchers found that voice modalities were a minor contributor to dark patterns compared to visual modalities, but state that this may be partially attributed to speaker devices representing a limited portion of the studied devices, and emphasise the need for future work to explore voice-controlled interfaces in-depth.5

Another less visible aspect of digital manipulation is manipulation that happens beyond the interface. In a recent working paper,6 Leiser and Santos sound the alarm about the fact that many recent and ongoing regulatory endeavours might have overlooked ‘algorithmic dark patterns’, i.e. personalised dark patterns that weaponise individual users’ cognitive weaknesses. For now, there is no widespread evidence of personalised dark patterns being deployed in the wild,7 and much uncertainty persists about whether, and which, personalised commercial practices are harmful.8 Personalised practices are, however, more difficult to detect and measure than other kinds of dark patterns,9 so the lack of evidence cannot be equated with lack of usage. This should, therefore, also prove to be fruitful ground for future research, especially in light of Art. 40 DSA, which has the potential to open up the ‘black box’ of very large online platforms (VLOPs). More specifically, Art. 40 will allow vetted researchers to request data from VLOPs to research systemic risks, which, according to Art. 34, include ‘any actual or foreseeable negative effects for the exercise of fundamental rights, in particular [...] to a high level of consumer protection enshrined in Article 38 of the Charter’. It remains to be seen what opportunities this provision will present for research on digital consumer harms in practice; some scholars have low expectations.10

We could also worry that targeting dark patterns through policy may be equivalent to targeting the symptoms of the actual problem: deeply entrenched and, until fairly recently, largely unchallenged digital business models.11 These business models are the consequence of policymakers’ hesitation to regulate the internet in its early days. The types of dark patterns used in a particular setting will likely depend on the digital business model.12 For example, using Privacy, Engagement and Shopping dark patterns in tandem may be profitable in the context of freemium digital services, such as dating apps, which make money through subscriptions, in-app purchases, ads and the sale of users’ personal data. The link between various types of dark patterns and business models is under-explored in the literature so far,13 and is probably going to be a promising avenue for future research.

Lastly, there is also still a lot we do not know about Shopping dark patterns. For example, the evidence with regard to the effectiveness of truthful scarcity messaging is mixed – it seems to work in some contexts, but not others. Further, for many Shopping dark patterns we still do not have evidence of consumer detriment. There thus remains a critical need for more research to inform future, and hopefully incremental and technology-specific, policy efforts.

9.5. Final remarks

There is still much more to uncover about the functioning of digital markets and the kinds of harms they entail for consumers. For now, it suffices to say that while more technology-specific policy may not be the definitive answer to consumers’ woes in digital markets, there are good reasons to believe that it may improve compliance by restricting the freedom of some and addressing the hurdles faced by other regulatees, and by giving the law teeth in the form of public interest technology that can unearth the scale of non-compliance with consumer laws on digital markets. It is time we ensured that user interfaces are optimised for consumer welfare rather than detriment.
