As the previous chapter has explained, despite providing rich descriptions of dark patterns ‘in the wild’, the Human-Computer Interaction (HCI) literature on dark patterns has, for the most part, paid insufficient attention to articulating what makes dark patterns problematic from a normative perspective, and it has not provided recipes for the effective regulation of dark patterns. At the same time, a desire for regulatory action reverberates through the HCI community1 as well as consumer organisations’ policy documents,2 and regulators around the globe,3 including the EU,4 are taking note.
To inform policy efforts aimed at tackling dark patterns in EU consumer law, this chapter explores whether regulators should – and how (if justified) they could – regulate the use of dark patterns in digital consumer markets in an effective manner. It does this by drawing on insights from regulation theory. Regulation theory is a multidisciplinary field combining perspectives from law and the social sciences5 which has, over the past decades, generated insights ‘identifying and prescribing the conditions under which various tools and techniques are likely to achieve defined social goals most effectively’.6 To be clear, regulation theory is a vast disciplinary space, and some may even doubt that it is possible to speak of a unitary field.7 Some clarification on where this piece finds itself in the theoretical landscape is therefore warranted. My starting point is the literature on the regulation of technology, and, more specifically, the regulation of socio-technical change.
Socio-technical change8 is a theoretical framework that describes the challenges regulators may face when trying to regulate technological phenomena. The theory of socio-technical change may be juxtaposed with ‘technological exceptionalism’, a theoretical frame that seeks to identify the unique, essential characteristics of a new technology that ‘disrupt’ the legal system and therefore justify the regulation of that technology or, where no exceptional characteristics are found, to confirm that the existing regulation is adequate.9 Technological exceptionalism’s first problem is that it tends towards technological determinism, envisaging a one-sided relationship between technology and law, where the latter is doomed to assume a reactive position in relation to ‘new’ technologies.10 Technology does not emerge in a vacuum; it is shaped by the society it exists in, of which regulation is an important part. Accordingly, the law may be complicit in its own disruption – as Kaminski put it, ‘[a] particular feature of a particular technology disrupts the law only because the law has been structured – doctrinally and theoretically – in a way that makes that feature relevant’.11 Regulation may, therefore, have a role in preventing (its own) disruption or harms emerging from the use of technology. The second problem with technological exceptionalism is an over-preoccupation with the novelty of technology. First, whether any technology is ever truly exceptional may be doubted.12 Second, there is rarely, if ever, anything inherently harmful about new technology that justifies its regulation. Instead, we might want to focus on how technology, old or new, is employed in society for good or for ill.13 This is the road taken by scholars approaching the regulation of technology through the theory of socio-technical change.
Socio-technical change occurs when new technological artefacts, conduct and/or relationships are made possible by the use of a new or updated technology, the new uses of old technology or a change in the scale of use of a technology.14 The need for regulatory intervention with regard to new artefacts, conduct or relationships that are related to technology should be assessed based on established rationales for intervention.15 As explained in Chapter 2, the use of A/B testing for the optimisation of user interfaces and interactions has enabled the creation of dark patterns. Dark patterns are therefore a socio-technical artefact. With regard to regulatory rationales, as the previous chapter shows, dark patterns are, to a great extent, a behavioural exploitation problem. There are two main conceptual frameworks for incorporating behavioural insights into consumer policymaking: welfarist (behavioural law and economics) and autonomist (autonomy theory).16 This chapter therefore begins by drawing on both theoretical frameworks to develop arguments for government intervention in relation to dark patterns, without arbitrating between the two, and spells out the limitations of both. It then goes on to explore what shape regulatory efforts around dark patterns could take. Regulatory design is an important consideration in taming the threat of regulatory failure17 – particularly when it comes to technology regulation. To outline some directions for the effective regulation of dark patterns, I draw on the prescriptions of behavioural law and economics and autonomy theory. I also look into the law and technology literature on the challenges of regulating socio-technical change. While law and technology as a field of inquiry is rather new, as are its theories,18 this strand of scholarship has produced rich insights on the difficulties regulators may face when trying to address socio-technical phenomena, and has also sketched some roadmaps towards addressing the bottlenecks of technology regulation. The theoretical framework I develop here will be used to gauge the effectiveness of the current EU consumer regime as it applies to dark patterns (Chapter 6) and outline some future policy directions (Chapter 7). As explained in Chapter 1, I view regulatory effectiveness in relative rather than absolute terms, which means that I explore policy solutions that could be more effective in tackling dark patterns than our current way of addressing them in EU consumer law.

This exercise has some limitations. To begin with, I look at effectiveness as a matter of regulatory design, whereas there are other internal and external factors, aside from the shape of regulation, that may affect the effectiveness of a regulatory regime.19 Further, as Brownsword rightfully points out, in the modern regulatory state, efficiency – as a matter of the ‘optimal gearing of regulatory input to effective regulatory output’ – is also a key consideration in choosing how to regulate.20 As we will see, where regulatory intervention in (consumer) markets is justified, adequate regulatory design may lower the risk of incurring arguably the highest cost of regulation – failure – and the design of the regulatory environment (i.e. the tools and mechanisms in place to create and update regulation) could also play a role in ensuring that regulatory resources are both well spent and well targeted. Lastly, my exercise here is theoretical, whereas the effectiveness and efficiency of policy instruments are ultimately empirical questions.
For example, in the EU, the Commission’s Better Regulation guidelines, which are addressed to Commission staff,21 state that policy preparation should be supported by evaluations and impact assessments, which are both evidence-based exercises that ‘look at the underlying causes of the problem at hand and how it has been or is to be addressed to achieve the desired objectives, taking account of costs and benefits’.22 The evidence the Commission bases its assessment on includes ‘multiple sources of data, information, and knowledge, including quantitative data such as statistics and measurements, qualitative data such as opinions, stakeholder input, conclusions of evaluations, as well as scientific and expert advice’.23 Any recommendations I make here and in the following chapters, alongside their costs and effects, will therefore need to be examined in more detail before it can be concluded that they will constitute an efficient improvement to the status quo.
This chapter proceeds as follows. Section 4.2 develops an argument for regulatory intervention with respect to dark patterns based on welfarist (4.2.1) and autonomist concerns (4.2.2). Section 4.3 looks at the shape effective regulation addressing the use of dark patterns could take. Section 4.4 summarises these discussions and the main takeaways of the chapter.
Behavioural law and economics refers to normative economic analysis of the law, enriched with behavioural insights.24 This is the predominant paradigm for integrating behavioural insights into consumer law theory and practice.25 Let us start with the foundations of traditional normative economic legal analysis, as behavioural insights both challenge and complement them.
The starting point of traditional normative economic legal analysis is that, in the absence of transaction costs, the unrestricted interaction of market forces may produce allocatively efficient outcomes.26 Allocative efficiency refers to a state of the market in which resources are put to their most valued uses.27
Consumer choice is central to allocative efficiency: if consumers’ purchasing decisions match their preferences, their money is put to good use. For this to obtain, some key assumptions need to hold:
consumers must possess adequate information on the set of alternatives available and the consequences of different choices, and
they must be capable of processing this information and of making rational utility-maximising choices, i.e. choices that are in line with their preferences.28
In the real world, these assumptions will rarely be fulfilled; where they are not, there is a market failure, and regulatory intervention into the market may, prima facie, be justified.29 Externalities, information asymmetries and transaction costs30 (or a combination thereof) are the market failures that are particularly relevant to consumer policy;31 however, I am only concerned with the latter two market imperfections – externalities (negative effects on individuals that are not part of a market transaction) are a matter of concern for consumer-product quality and safety regulation.32
Information asymmetries are situations where one party possesses information about a product characteristic and the other party does not. Akerlof was the first to describe the mechanism of market failure caused by information asymmetries.33 Informational deficits on the demand side due to high search costs generate a risk of adverse selection amongst available products, leading to an ever-increasing number of ‘lemons’, i.e. low-quality goods, on the market and a potential decrease in welfare. The traditional ‘medicine’ prescribed for information asymmetries in consumer markets consists of information duties, rules prohibiting the provision of false and misleading information34 and cooling-off periods.35
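A stylised numerical example (my own, not Akerlof’s) may help illustrate the mechanism. Suppose half of the sellers on a market offer high-quality goods worth €100 to buyers and half offer ‘lemons’ worth €40, and buyers cannot tell the two apart before purchase. Risk-neutral buyers will then only be willing to pay the expected value of a randomly chosen good,
\[ p = 0.5 \times 100 + 0.5 \times 40 = 70, \]
which may be less than what it costs to supply a high-quality good (say, €80). High-quality sellers exit, the share of lemons rises, buyers’ willingness to pay falls further, and the market may unravel.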
Transaction costs lack a universal definition,36 but can be described in simple terms as ‘the costs of using the marketplace’,37 where ‘costs’ refers to money, effort and time.38 Transaction costs do not (directly) benefit any of the parties to a contract.39 They may be incurred at any transactional stage: the search for and appraisal of an offer, negotiation, contract conclusion, enforcement and termination.40 High transaction costs might generally prevent parties from engaging in welfare-enhancing transactions.41
This account of transaction costs assumes that contracting parties have no incentive to impose transaction costs. According to Sovern, consumer markets do not follow this rule – in many circumstances, sellers can benefit from increasing transaction costs to the detriment of the consumer.42 If a transactional task like reading contractual terms43 or cancelling a contract44 requires so much effort or time (because the terms are in fine print or in legalese, or the cancellation process is too burdensome) that the costs of carrying it through outweigh the benefits to be gained, it may be rational for consumers to give up at some point.45
A mere finding of a market failure is, however, not sufficient to justify regulatory intervention from a law and economics perspective.46 Regulation is expensive. Regulators must acquire information on the market problem and regulatory options in order to design effective and efficient interventions, and the possibility that they will get it wrong in this respect cannot be ruled out entirely.47 As Cafaggi and Nicita put it: ‘it seems that the question of consumer protection should always be addressed under a narrow path neighbouring on two opposite risks: market failure and regulatory failure’.48 Political decision-making and regulation drafting also entail costs in terms of time and the salaries of any experts who may be consulted in the process.49 Once an intervention is adopted, companies will incur compliance costs, which may be passed on to consumers.50 Further, compliance monitoring and law enforcement costs need to be taken into account.51 It may therefore be less costly (i.e. more efficient) to leave it to the market to rectify the market failure, and intervene if market solutions are unlikely to emerge or be efficient.52 In consumer markets, the most obvious market solutions are consumer learning and consumer education by sellers.53
Once it is established that leaving things to the market may not be desirable, the question arises of what goals regulation should pursue, and how. From a normative economics perspective, the aim of market regulation is the maximisation of social welfare, typically understood as allocative efficiency based on the Kaldor–Hicks criterion.54 As Ogus puts it, a policy is Kaldor–Hicks efficient ‘if the aggregate gains [benefits] to whomsoever exceed the aggregate losses [costs] to whomsoever’.55 The choice between various regulatory instruments is thus to be made via a cost–benefit analysis, and the alternative that leads to a better cost–benefit ratio (more benefits than costs) is to be selected.56
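In stylised terms (the formalisation is mine, not Ogus’s), a policy that moves the market from state x to state y is Kaldor–Hicks efficient if the sum of welfare changes across all affected parties is positive,
\[ \sum_i \big( W_i(y) - W_i(x) \big) > 0, \]
where W_i denotes the welfare of party i. Who gains and who loses is irrelevant: the winners could in principle compensate the losers, but need not actually do so.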
A result of defining efficiency in these terms is that less intrusive forms of regulatory intervention (i.e. those that preserve freedom of choice) are to be preferred over more intrusive ones.57 In consumer markets, this translates into a preference for consumer empowerment, rather than for consumer protection measures. Consumer empowerment measures are those measures that help consumers help themselves, e.g. mandatory disclosure rules.58 In the EU consumer law microcosm, the Unfair Commercial Practices Directive and the Consumer Rights Directive are illustrative of this policy approach.59 Consumer protection measures, on the other hand, enable regulators to intervene more directly in the relationship between a trader and a consumer by excluding some practices (such as unfair contract terms, regulated by the Unfair Contract Terms Directive)60 from the market.61 Another consequence of using allocative efficiency as a regulatory goal is that distributive concerns are disregarded. The Kaldor–Hicks criterion does not attach any value to distributions of resources amongst different groups of people.62 The narrow definition of social welfare in terms of economic welfare has been met with opposition by legal scholars, including those in the law and economics tradition.63 It is widely accepted that good policymaking entails a trade-off between efficiency and other (non-economic) goals, such as distributional justice;64 cost–benefit analysis is best viewed as a complement to political decision-making, not a substitute.65
As seen in the previous section, traditional economic analysis of the law proceeds under the assumptions of neoclassical economics, which understands consumers as perfectly rational agents. However, as the previous chapter explains,66 since the late 1970s behavioural scientists have uncovered a variety of cognitive biases and heuristics, casting doubt on the predictions of traditional economics. Behavioural law and economics purports to improve traditional law and economics analysis using behavioural insights,67 and allows policymakers to take into account threats to consumers’ economic interests that pre-behavioural theory disregards.68 The literature has proposed three ways to think of these (non-traditional) threats: behavioural market failures, market manipulation and ‘phishing’.69
‘Behavioural market failure’ is a term coined by Oren Bar-Gill in 200870 to emphasise the potential negative welfare effects of the exploitation by sellers of consumer biases.71 His theory rests on two tenets:
(i) consumers’ purchasing and product use decisions are affected by biases, and
(ii) (sophisticated) sellers design their products, contracts, and prices in response to consumers’ cognitive shortcomings.72
Bar-Gill argues that market forces require sellers to pay attention to consumer behaviour because their competitors do so.73 Let us zoom in on how consumer myopia and over-optimism interact with credit-card-pricing practices to see how this works.
Myopia refers to the tendency to care more about the present than the future.74 Over-optimism is a result of the optimism bias – the tendency to overestimate the likelihood of experiencing positive events and underestimate the likelihood of experiencing negative events.75 These two behavioural forces render some aspects of credit card prices – charges which may potentially be incurred in the future – non-salient.76 Credit card providers who are aware of these cognitive shortcomings devise complex credit-card-pricing schemes with low salient costs (e.g. low annual fees and short-term interest rates) and high non-salient costs (e.g. high penalty fees for late payment and high long(er)-term interest rates).77 As a result, there is a gap between the actual total price and the perceived total price, and this leads to biased demand for consumer finance products that do not match consumers’ preferences and needs.78 To remain competitive, other credit card providers need to cater to this biased demand, i.e. they also need to create the appearance of a low price, rather than lowering the actual price, as consumers would simply not appreciate the difference.79 In other words, in such market settings, sellers may start competing in terms of their ability to exploit consumer biases, rather than by providing the best quality goods at the lowest price;80 competition may become a race to the bottom. Bar-Gill argues that this state of the market may not only lead to consumer welfare losses (in that consumers are not matched with traders who supply products matching their preferences), but also to allocative inefficiency, as consumers may not be matched with the most efficient trader (the one offering the lowest prices).81 Combining contract design and consumer behaviour evidence, in his book Seduction by Contract, Bar-Gill shows that behavioural market failures are present not just in the consumer credit market, but also in the markets for mortgages and mobile phone subscriptions.82 Van Loo argues that there is also evidence of similar widespread behavioural market failures in the consumer goods market.83
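Bar-Gill’s salience argument can be restated in stylised form (the notation is mine, not his). If a credit card carries a salient price component p_s (e.g. the annual fee) and a non-salient component p_n (e.g. penalty fees and long-term interest), a myopic or over-optimistic consumer perceives a total price of roughly
\[ \hat{p} = p_s + \delta \, p_n, \qquad 0 \le \delta < 1, \]
rather than the actual total price p = p_s + p_n. A seller can therefore lower the perceived price – and attract biased demand – simply by shifting costs from the salient to the non-salient component, without lowering the actual price at all.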
Esposito has identified a second meaning of ‘behavioural market failure’ in the literature; the term is also used to refer to situations where markets do not provide sufficient incentives to firms to correct the detrimental effects of consumer behaviour that sellers do not profit from.84 The two types of behavioural market failure are not unrelated. As Esposito explains, the existence of the first type of behavioural market failure presupposes the second one: for sellers to be able to take advantage of consumers’ bounded rationality, there has to be a lack of incentive for their competitors to actively correct consumers’ (mis)perceptions.85 The second type of behavioural market failure can, however, arise even when sellers do not take advantage of cognitive shortcomings.86
Similar arguments linking behavioural exploitation for gain with market failure can be found in Hanson and Kysar’s 1999 work on market manipulation. Using the field of product liability as an example, Hanson and Kysar argue that market manipulation – the ability of the seller, as the party controlling the decision context, to determine market outcomes by exploiting the consumer’s cognitive biases – will come to characterise consumer markets, as market forces require sellers to engage in such manipulation in order to remain competitive.87
More recently, in their book Phishing for Phools, Nobel Prize–winners Akerlof and Shiller argue that, in unregulated markets, ‘phishermen’ seek to take advantage of ‘phools’, and that the forces of competition will require even firms led by ‘those with real moral integrity’ to ‘phish’ in order to survive.88 The scholars describe two types of ‘phools’: ‘psychological phools’, or boundedly rational consumers, and ‘information phools’, a term which refers to misled or deceived consumers.89 ‘Phishing’ is therefore a broader concept than ‘market manipulation’ and ‘behavioural market failure’, as it also extends to informational deficits.90
At their core, all three concepts capture commercial practices that steer consumer choice behaviour in particular directions by exploiting consumer biases. In what follows, I will use Bar-Gill’s notion of ‘behavioural market failure’ in order to illustrate the potential consumer detriment arising from the commercial use of dark patterns. The idea of ‘behavioural market failure’ is narrower than Akerlof and Shiller’s ‘phishing’, whereas Hanson and Kysar’s ‘market manipulation’ may be misunderstood as ‘manipulation’ in the sense in which philosophers think of it (which is something I discuss in 4.2.2).
At this stage, it is necessary to take stock of what the theory of behavioural market failures means for previous economic models, or rather, what it does not mean: the negation of all previous economic models. Behavioural market failure is merely an additional source of market imperfection, alongside the more well-known market failures.91 As Bar-Gill puts it:
Rational consumers form unbiased estimates of imperfectly known values. Faced with similarly limited information, imperfectly rational consumers form biased estimates. Unbiased estimates can cause market failure; biased estimates can cause market failure.92
However, while behavioural economics provides additional grounds for intervention in consumer markets, it is less clear what the goal of regulatory intervention should be. Pre-behavioural wisdom, as seen in the previous section, assumes that individuals are rational maximisers of their utility. This means that, absent a market failure, the preferences revealed by consumer choices represent desirable outcomes (this is the so-called ‘revealed preferences doctrine’).93 Traditional law and economics analyses thus assess the welfare implications of alternative policies in terms of the degree to which they lead to the satisfaction of individual preferences.94 Once we accept that consumers are boundedly rational, however, we can doubt whether consumer choice is always aligned with their preferences,95 with the consequence that a new welfare criterion is needed. As Fabbri and Faure put it, ‘the behavioral revolution left open a key normative question: which welfare criterion should be adopted for behavioral policy-making?’96 If it is not clear which criterion will form the basis of policy interventions, there is a risk that policymakers will paternalistically decide what is in individuals’ best interest.97 While paternalism itself may not be incompatible with economic notions of efficiency,98 many behavioural law and economics scholars have proposed minimalist behavioural interventions that preserve individuals’ freedom of choice, variously termed ‘soft’, ‘asymmetric’ and ‘libertarian’ paternalism.99 Bubb and Pildes have described such proposals as reflecting a ‘tautological precommitment to freedom of choice in the face of the overpowering empirical evidence they themselves offer’ against the effectiveness of certain policy options, such as mandated disclosures.100 In short, while the incorporation of behavioural insights into traditional economic analysis of the law provides a more comprehensive view of the harms consumers may suffer, once the assumption of consumer rationality is abandoned, policymakers are in the dark as to what they ought to do to address consumer harms. I return to the question of regulatory/policy instruments in section 4.3 below.
The use of dark patterns in consumer markets could cause both ‘traditional’ and behavioural market failures. This integrated view of traditional and behavioural market failures means that, in some instances, dark patterns may lead both rational (if they exist) and imperfectly rational consumers to take suboptimal decisions. Further, it is possible that the market will fail to rectify these failures.
Information asymmetries exist in both offline and online markets. Since the early days of the internet, consumer law scholars have been arguing that information asymmetries are likely to be exacerbated in the context of e-commerce.101 Information-hiding and Deceptive dark patterns allow digital information asymmetry to reach new heights, as they provide the technological means for sellers to hide relevant information from consumers, or to deceive them. However, as will be seen in the next chapter, EU consumer policy has gone a long way (if not too far) in its efforts to remedy information asymmetries. If dark patterns provide a means for sellers to circumvent existing information duties, the prospect of regulatory failure looms.
Dark patterns that exploit cognitive biases may lead to behavioural market failures. This could be the case for Hidden Costs.102 Hidden Costs involves the disclosure of unavoidable additional fees late(r) in the purchasing process. Rational consumers would see through such price framing and base their purchasing decision on the total price.103 Imperfectly rational consumers, however, may be lured in by the low headline price and, by the time the additional costs are revealed, may be so heavily invested in the process that they are keen to avoid the loss that going back to shopping around for other offers entails; in other words, this practice seeks to exploit the sunk cost fallacy.104 As a result, consumers may end up making more expensive purchases or buying products they would have forgone had they known about their full price before committing to the process.105 Indeed, an experiment by Blake et al. found that the use of Hidden Costs on a secondary ticket-purchase platform resulted in consumers spending approximately 21% more and being approximately 14% more likely to complete a purchase compared to a dark pattern-free environment.106 A 2010 study conducted for the Office of Fair Trading in the UK compared the effects of various price-framing techniques on consumer behaviour and found that Hidden Costs was the most detrimental to consumers.107
Practices like Pressured Selling, Sneak into Basket and Hidden Subscriptions with auto-renewal clauses – which seek to leverage the default effect by pre-selecting more expensive product versions, adding products to a consumer’s virtual shopping basket and automatically renewing subscriptions, respectively – may also lead to financial losses for consumers. Rational consumers faced with these dark patterns would deselect unwanted upgrades, untick pre-ticked boxes, read lengthy subscription terms and/or abandon their cart entirely and seek a deal that matches their preferences elsewhere. Imperfectly rational consumers, however, are more likely to go with the flow, because defaults are ‘sticky’, especially in consumer domains.108 The literature describes various psychological mechanisms that make users stick with default options. Implied endorsement suggests that defaults are effective because consumers think the choice architect is promoting an option that is for their own benefit;109 product pre-selections will sometimes even be labelled as ‘Deals’.110 Consumers may also stick with a default because it is physically and cognitively easier (this is also referred to as behavioural ‘inertia’)111 – which makes it lucrative for sellers to make the default even harder to avoid (e.g. by obscuring the recurring nature of a subscription, and/or combining it with Hard to Cancel)112 – or simply because the default option is deemed the ‘status quo’.113 Whatever the mechanism, the outcome is straightforward: consumers may end up spending more than they would like to, or (re-)buying products they do not need or want.
The same can be said for dark patterns relying on the scarcity bias – i.e. our tendency to attribute more value to scarce things114 – such as Countdown Timers, Activity Notifications, and Low-stock and High-demand Messages, especially where they are used in combination. A rational consumer’s evaluation of the offer would be based on its merits, and they would be able to disregard the scarcity cues (blinking timers, the use of more prominent colours and fonts for scarcity messages) that may make the offer irrationally appealing. An imperfectly rational consumer may, however, experience a fear of missing out on a potentially good offer, and make a purchase without shopping around for a deal that actually matches their preferences.115 A 2019 experiment by Sugden et al. found that, on average, participants who accepted time-limited deals ended up with lower pay-offs than participants who waited to see the other options.116
Some dark patterns may impose unnecessary transaction costs in an attempt to steer consumers away from their intentions and towards a trader-friendly option. These costs are called ‘roadblocks’,117 ‘implementation costs’118 and ‘mechanical costs’119 in the literature. This is arguably the case for Restrictive dark patterns like Hard to Cancel.120 When signing up for a service can be achieved seamlessly online, but its cancellation is a long and winding process, the exit costs involved are likely unnecessary. Bar-Gill and Ben-Shahar argue that ‘the artificial mechanical costs imposed by such sludges are reason enough for policy makers to intervene’.121
Further, both dark patterns that manipulate the information flow and those that exploit cognitive biases could also generate transaction costs. An Information-hiding dark pattern like Hidden Subscription may increase evaluation costs (the time and effort of shopping around for a deal that matches a consumer’s preferences). Behavioural practices like Hidden Costs may raise both search costs (the time and effort required to determine the total price) and evaluation costs.122 Other behavioural practices, like Sneak into Basket and Pressured Selling, may impose implementation costs, as consumers who do not want the additional products or upgrades need to take additional steps to avoid them.
Now that it has been established that there may be a prima facie case of market failure(s) where dark patterns are concerned, let us turn to the prospect of the market remedying this failure.
First, consumers may learn from their mistakes (of falling for a dark pattern). Generally speaking, leaving it to consumers to get ‘burnt’ and learn from their mistakes is undesirable because sometimes consumers may be severely hurt by a suboptimal decision, e.g. by remaining trapped in a costly yearly subscription.123 But even where the stakes are lower, as will often be the case in an e-commerce context, consumer learning is imperfect.124
For starters, consumers are only likely to learn if they realise their mistake (in the case of Deceptive and Information-hiding dark patterns) and that the deal they scored was not as good as they had thought. This will not always obtain. For example, when Hidden Costs is used, consumers may not know the prevailing product prices, and the use of the dark pattern may also make it harder to establish what a fair bargain would be by increasing search costs.125 Further, consumers need to be aware of the effect of dark patterns on their behaviour in order to learn; this is particularly unlikely to be the case where dark patterns exploit cognitive biases. As the study by Di Geronimo et al. shows, consumers are generally not aware of dark patterns and cannot recognise them correctly.126 To prevent consumers from wising up and pushing back, traders may have an incentive to implement an ‘optimal level of manipulation’ in their interfaces,127 i.e. to use neither too many nor too few dark patterns, and not to deploy them too obviously. As Luguri and Strahilevitz’s experiment results suggest, such ‘mild’ cases may significantly inflate willingness to pay without translating into lost goodwill.128 It is, however, possible that in some instances sellers may still use dark patterns excessively, and that this will lead to consumer backlash. There is some evidence suggesting that the overuse of scarcity cues on travel websites has made it easier for consumers to recognise them and caused them to distrust the websites involved.129
However, even where consumers are aware of dark patterns’ influence on their behaviour, Bongard-Blanchy et al.’s study suggests that this will not always translate into increased resistance against manipulation attempts, possibly due to the high cognitive costs involved in such resistance.130 Learning may also not be durable – A/B testing provides some sellers with the means to continuously and subtly tweak the user interface design in search of new ways to manipulate, and consumers’ experience with a particular dark pattern may therefore offer some defence only until its newest iteration is deployed.
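To illustrate why learning may offer only temporary protection, the following sketch (in Python, with purely hypothetical variant names and conversion figures) shows the optimisation loop described in Chapter 2 in its simplest form: each interface variant – including a subtler redesign of a dark pattern consumers may already have learnt to spot – is shown to a slice of traffic, and whichever converts best becomes the baseline for the next round of tweaks.

import random

def run_ab_test(variants, visitors_per_variant=10_000):
    """Simulate one optimisation round: expose each interface variant to a
    slice of traffic and return the best-converting variant. The conversion
    probabilities are illustrative assumptions, not empirical data."""
    rates = {}
    for name, conversion_prob in variants.items():
        purchases = sum(random.random() < conversion_prob
                        for _ in range(visitors_per_variant))
        rates[name] = purchases / visitors_per_variant
    winner = max(rates, key=rates.get)
    return winner, rates

# Hypothetical variants: a neutral checkout versus two iterations of a countdown timer.
variants = {
    "neutral_checkout": 0.050,
    "countdown_v1": 0.056,   # the version consumers may have learnt to recognise
    "countdown_v2": 0.061,   # a subtler redesign of the same pressure cue
}

winner, rates = run_ab_test(variants)
print(winner, rates)  # the 'winning' design becomes the new baseline for further tweaks

Whatever consumers learnt about the first countdown variant offers little protection once its subtler successor is rolled out.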
Lastly, leaving it to consumers to learn also carries the risk that the lesson they draw from their exposure to dark patterns is not to avoid the traders using them, but rather that all traders use dark patterns. Consumers may therefore become disillusioned with their ability to collectively discipline the market and disengage.131 As a result, being subject to dark patterns may become a self-fulfilling prophecy.
Alternatively, sellers could start competing by using advertising that describes their user interface design as ‘free from dark patterns’ and by educating consumers about hidden influences on their behaviour. However, for that to succeed, the increased business from educated consumers must offset the loss of profits. At least some benevolent traders will be put off by the risk of decreased revenue.132 Further, even if some traders do go on to educate consumers, such education suffers from a collective action problem.133 If a trader’s relinquishing of dark patterns and efforts to educate consumers do lead to increased profits, other sellers will likely free-ride on their initiative. When competitors adopt similar tactics, any profit that trader would have made will be competed away.134 Therefore, traders may also be put off the education strategy by the risk of their investments in this regard not paying off.
Summing up this section, dark patterns could cause both ‘traditional’ and behavioural market failures. The prospects of the market remedying these failures seem bleak – theory teaches us that competition will drive market actors to use dark patterns. Regulatory intervention appears to be justified on a (behavioural) law and economics view. At the same time, behavioural economics cautions us that economic decision-making is a complex affair that may be influenced by features of the decision-making environment, as well as by variations in decision-making capability and preferences amongst consumers.135 Regulation should only be considered where market-specific empirical evidence proves the existence, in that market, of market failures that generate significant welfare costs.136
Autonomy theory is an alternative approach to integrating behavioural insights into consumer law and policy. My use of the term ‘autonomy theory’ might seem to suggest that there is a comprehensive conceptual framework for integrating concerns regarding autonomy – or consumer autonomy – into consumer policy. That is not the case. As Susser et al. put it, ‘respect for individual autonomy is a bedrock principle of liberal democracy’.137 According to some legal and philosophical views, autonomy is therefore a goal that is worthwhile for regulators to pursue, and this directive can be found under the terms ‘autonomy theory’,138 ‘autonomist perspective’ and ‘autonomy-based framework’.139 That is where its normative contributions to regulation stop140 (if we discount the fact that, as a value, it may play a role in the weighing of various values underlying regulatory choices, as evidenced by the ongoing academic discussion on the ethics of nudging).141 The way ‘persuasive technologies’,142 ‘digital [market] manipulation’,143 ‘online manipulation’,144 ‘manipulation by algorithms’145 and ‘hypernudging techniques’146 may detract from users’ or consumers’ autonomy is the subject of keen discussion in legal and philosophical contributions. Aside from not constituting a comprehensive regulation theory, autonomy theory suffers from definitional murkiness: there have been many attempts to identify what ‘autonomy’ means in various fields, as well as ways to describe undesirable influences on autonomy. For practical purposes, I will therefore focus on Susser et al.’s philosophical concept of ‘online manipulation’ as a source of autonomy loss147 – which is gaining traction in both philosophical148 and legal circles149 – and, along the way, outline some considerations that have been pointed out by other scholars.
Susser et al. posit that people are autonomous if it is the case that they can ‘(mostly) rationally deliberate on the different options they are faced with, that they know (mostly and roughly) what they believe and desire, and that they can act on the reasons they think best’.150 Two conditions for the effective exercise of autonomy can be extracted from this: competency, i.e. the ability to decide the basis for one’s actions independently, and authenticity, i.e. that the basis on which one acts (goals, desires and values) is truly one’s own.151 Manipulation is conceptualised as intentionally ‘imposing a hidden or covert influence on another person's decision-making’ in order to suit the manipulator’s ends.152 This could be achieved by deceiving someone or by targeting and exploiting their cognitive, emotional or other decision-making vulnerabilities.153 Manipulation is, according to Susser et al., different from persuasion, which is morally acceptable in that it overtly appeals to a person’s decision-making power.154 They also draw a distinction between manipulation and coercion – the latter, according to the authors, is an overt restriction of one’s options, whereby complying with the coercer’s preferred course of action is the only rational choice.155 Hiddenness is therefore a central element of Susser et al.’s conception of manipulation, as the authors use it to distinguish between various types of behavioural influences.156 Hiddenness is also central to the harmfulness of manipulation – because manipulatees are unaware that they are being influenced, their capacity to (competently) deliberate is undermined, which leads to decisions they cannot endorse (authentically) as their own.157 On this view, the use of Covert, Information-hiding and Deceptive dark patterns could be deemed manipulative acts that seek to undermine users’ autonomy.158 However, what ultimately matters in establishing whether these individual strategies are manipulative is their effect on the user. As Susser et al. explain, manipulation is a ‘success concept’: to claim that someone was manipulated is to refer to the effects of a manipulative strategy on a manipulatee.159 I will discuss alternative approaches to identifying manipulation below. First, let us look into some conceptual objections to Susser et al.’s definition of manipulation.
Defining manipulation as an intentional hidden influence is not unproblematic. First, as Sax explains, a criterion of ‘hiddenness’ raises the question of what is being hidden – the fact that there is an influence, the mechanism of influence or the manipulator’s intentions?160 Second, other philosophers have proposed numerous counterexamples to illustrate the overtness of some manipulative practices, pointing out that forms of influence like guilt-tripping, peer pressure or blackmail – which operate in a somewhat similar way to Confirmshaming – as well as dark patterns from the Social Proof, Urgency and Scarcity categories of the truthful variety, may be very effective at influencing behaviour, and ought to be considered manipulative.161 That being said, it could be argued that these practices constitute coercion on Susser et al.’s view;162 for Restrictive dark patterns like Hard to Cancel, the coercive label seems to be a good fit as well. Third, and most fundamentally for consumer policy, a focus on the covertness of manipulation implies that disclosing information about the influence may eliminate it, as Hacker points out.163 He states that it is not a given that mandating disclosure will lead the manipulatee to adjust their behaviour (a point on which scholars of behavioural law and economics will not disagree, as I discuss below).164 There are definitions of manipulation that do not require it to be hidden. A commonly employed alternative definition in discussions on online manipulation165 is that of Sunstein, who takes issue with influences that do not ‘sufficiently engage or appeal to [people’s] capacity for reflection and deliberation’.166 Sunstein’s account of manipulation has not gone uncriticised either;167 the limitation of his approach that matters for the purposes of the current discussion is that, as Sunstein himself notes, the requirement of (in)sufficient engagement of deliberative capacity leaves a degree of (perhaps desirable) ambiguity.168 Whether a practice is manipulative has to be judged based on ‘the sufficiency of people’s capacity to deliberate on the question at hand’,169 which is a highly context-dependent criterion that ultimately requires ‘empirical testing of representative populations’.170
It would be possible, however, to avoid answering such questions if the concept of ‘manipulation’ were uncoupled from its likelihood of success; this is an approach championed by Marijn Sax in his doctoral thesis on the ethics of mHealth apps.171 Sax proposes moving away from both the hiddenness and success of manipulation in order to open the door to considering the ‘intentional development and deployment of manipulative strategies by and through digital environments’.172 On this view, a digital environment is manipulative when it is ‘designed and operated in such a manner that we can be almost certain that at least some users will be manipulated’.173 Focusing on this structural dimension of online manipulation (rather than singular instances of manipulation), Helberger et al. have coined the concept of ‘digital vulnerability’, which refers to consumers’ universal state of ‘defencelessness and susceptibility to (the exploitation) of power imbalances that are the result of increasing automation of commerce, datafied consumer-seller relations and the very architecture of digital marketplaces’.174 Dark patterns, on this view, are therefore a symptom of a larger problem – the ability of sellers in digital environments to identify and/or create consumer vulnerabilities. As Eliza Mik explains, ‘it is the combined, mutually-enforcing effect of multiple technologies that influence consumer decisions at different stages in his path-to-purchase, creating an environment of ambient and pervasive manipulation’ which threatens autonomy.175 In this setting, consumer autonomy may become illusory.
To sum up the previous sections, (some) companies now have the ability to influence consumer behaviour at scale. What (behavioural) law and economics teaches us is that dark patterns may lead to suboptimal consumer decisions, and that the forces of competition make it so that using dark patterns is the course of action that (rational) sellers will take. From an autonomy perspective, dark patterns may lead to individual autonomy losses, and we may question whether the unilateral construction of digital choice environments allows any room for the exercise of autonomy altogether. Policymakers ought to intervene to prevent suboptimal consumer decisions and harms to individual autonomy. This statement needs to be qualified, however. First, this appraisal rests on assumed, rather than established, harms. These harms are not settled in the literature – it is not entirely uncontroversial to link behavioural market failures to suboptimal decisions176 and welfare loss(es),177 and scholars have also pointed out that more conceptual and empirical work is needed to link manipulative environments and practices to autonomy loss.178 Further, when it comes to the regulatory treatment of individual dark patterns, the welfarist perspective demands that policymakers be guided by (market-specific) evidence about their influence on consumer behaviour and the resulting harms.179 Some dark patterns may indeed be harmful, but the effects of others will be ambiguous, and yet others may prove to be beneficial to consumers;180 also, context – i.e. different product markets and variations in consumers’ decision-making capabilities and preferences – may influence this assessment.
Second, a situation in which the cure is worse than the disease cannot be entirely ruled out. In other words: even if we deem an intervention to be necessary based on the above considerations, it can hardly be guaranteed that regulation will be effective and deliver on its promises or do so efficiently.181 In other (Hurwitz’s) words, ‘in an imperfect world, regulations must accordingly be judged by their likely real-world effects, not against a world of costless and perfectly effective regulation’.182
An important consideration in avoiding regulatory failure is that of regulatory design. According to Stiglitz, ‘while no regulatory system is perfect, economies with well-designed regulations can perform far better than those with inadequate regulation’.183 There are many ways to think of well-designed regulations and there is no one-size-fits-all solution for all regulatory concerns. Instead, as pointed out by Black, ‘regulatory design has to be contextual’.184 Therefore, in the following section I look at what behavioural law and economics, autonomy theory, and insights from law and technology literature on the challenges of regulating socio-technical change have to say about the regulatory options with regards to dark patterns.
As the previous sections explain, once behavioural insights are incorporated into the economic analysis of the law, sketching out an appropriate regulatory response becomes more challenging, and ‘autonomy theory’ is not a comprehensive conceptual framework. Nonetheless, some pointers may be extracted from these two frameworks and their interaction.
First, behavioural economics teaches us that mandating the disclosure of information to imperfectly rational consumers is unlikely to be an effective policy instrument,185 an insight on which many consumer law scholars have also written.186 Consumers may disregard information because they underestimate the informational needs of the decision they are faced with, overestimate the level of regulatory oversight over the quality of disclosed information, or trust sellers not to harm their interests.187 Consumers also have limited cognitive bandwidth with which to process information. Therefore, when faced with too much information (a situation of ‘information overload’), they may disregard it or even refuse to take any decision altogether.188 The information overload problem is likely exacerbated in digital choice environments, which are characterised by ‘consent overload’ – consumers’ constant need to take decisions that reflect their privacy preferences for any new website they visit and to read the terms of service for any new service they start using.189 Further, even when consumers do engage with information, they may not do so rationally. Perhaps the most important finding in this respect is that the presentation and framing of information,190 as well as the timing of disclosure,191 may influence decision quality to a significant extent, a reality that is often disregarded by regulators when mandating information disclosure.
The behavioural evidence attesting to the ineffectiveness of information disclosure tilts the balance in favour of more direct regulation of traders’ behaviour, such as through prescriptions or prohibitions of commercial practices. Second, as explained in the previous sections, individual autonomy, as a value in its own right, is frequently invoked to resist ‘interventionist’ policy responses that subject market actors to stricter behavioural controls;192 these sorts of perspectives therefore favour information disclosure as an autonomy-preserving policy response. However, if there are reasons to believe that autonomy cannot be effectively exercised in digital environments (as outlined in section 4.2.2), the case for more direct regulation of commercial practices can also be supported from an autonomist perspective.
It would be premature to end the discussion here, however. As the chapter introduction explains, dark patterns are a possible result of user experience optimisation, i.e. they are a socio-technical artefact. The law and technology literature has mapped the challenges that may arise in attempts to regulate (socio-)technical phenomena. These challenges arise in relation to the content of regulation (what is regulated) and the timing of regulation (when regulation is introduced).193
With regard to content, academic literature usually frames the choice(s) regulators have to make in terms of technology-neutral or technology-specific regulation.194 As Koops explains, there is no settled meaning of ‘technology neutrality’; he identifies at least four in policy documents on ICT regulation, and illustrates that the demands of each definition may result in conflicts amongst them.195 In the literature, technology-neutral regulation is often associated with broad, principle- or goal-based regulation that does not refer to any technology in particular.196 Regulation that does directly address a particular technology, its users or its use as a means to an end is therefore technology specific.197 The academic discussion on the desirability of technology neutrality thus has parallels with the older ‘rules versus standards’ debate in other strands of regulation theory and amongst legal philosophers.198
Leaving aside whether it is at all possible for regulation to be technology neutral,199 the commonly shared view amongst policymakers that technology-neutral regulation is able to withstand the test of time (I discuss the future-proofing demand of regulating technology and alternative strategies in more detail below) has, according to some authors,200 generated a presumption in favour of technology neutrality. While technology-specific policy does tend towards obsolescence as technology evolves, the policy assumption in favour of technology-neutral policy overlooks the costly trade-off between flexibility and clarity.201 As Ranchordas and van ’t Schip explain, ‘not all fields of law can be regulated through principles and goals as legal uncertainty tends to generate high social costs and risks for markets, transactions and legal structures’.202 This might be the case when policy goals have to be furthered through technological design.
Ensuring compliance with regulation through the design of digital products is an interpretative exercise.203 In the absence of clear guidance as to how to realise policy goals through the design of a digital product, the resulting interpretations may yield a wide variety of outcomes that range from consumer-friendly to detrimental and possibly unlawful. Where market actors’ interpretations find themselves on this spectrum is likely to be determined by their capacity to comply and their attitude.204 Capacity is a regulatee’s ability to understand regulatory requirements, as well as their possession of the expertise, resources and managerial skills necessary to carry out the required actions.205 Attitude refers to a regulatee’s disposition towards the rules and the regulator.206 Technology neutrality is praised207 for its ability to curb creative circumvention efforts208 amongst highly capable and ill-disposed regulatees.209 At the same time, the uncertainty inherent in the application of technology-neutral regulation to technological design may lead the actors that can afford it to develop expansive, self-serving interpretations of regulations, possibly in disservice to their protective aims,210 and the emergence and adoption of A/B testing affords them the ability to design a myriad of ways to implement these interpretations. For less resourceful actors (i.e. low-capacity, well-disposed regulatees), of which there are likely to be many more,211 the uncertainty inherent in technology-neutral regulation may translate into a dramatically – and possibly (depending on the applicable sanctions) illegitimately – increased compliance burden,212 and unintended breaches of regulations. The woes of the small and under-resourced are likely compounded by their reliance on third parties for various components of their digital infrastructure (discussed in Chapter 2); some of these components may be non-compliant, and even where the trader using them is aware of this – which will not always be the case – they may not be able to modify them. There is therefore a risk that, by not engaging with technological design at all, regulation may leave the arbitration of public and private values to technologists on either end of this spectrum (of capacity and intentions). Technology is never neutral;213 artefacts have politics.214
These concerns have been repeatedly voiced in relation to the technology-neutral nature of data protection and privacy laws. In the EU, scholars have warned that the ambiguous rules of the General Data Protection Regulation (GDPR)215 leave room for regulated companies to determine what the law means and to adopt symbolic compliance structures that are used to advance management goals to the detriment of data subjects, frustrating the goals of the law.216 At the same time, small and medium-sized enterprises (SMEs) report that they struggle to understand, and usually lack the necessary human and economic resources to implement, the obligations in the GDPR.217 Academic studies have highlighted that the qualitative aspects of the GDPR that are not linked to concrete technical requirements appear to cause the most issues for under-resourced companies,218 prompting those companies to take shortcuts to comply with data protection obligations and outsource some of their duties to third parties.219 This will not always lead to good outcomes for data subjects – researchers have expressed concerns about SMEs’ resultant reliance on design guidelines developed by market leaders in sectors like children’s apps,220 and design templates supplied by third parties such as consent management platforms, which do not always comply with data protection requirements.221 Further, the reliance on sample code from third parties that provide necessary infrastructure, such as ad networks, which are at the core of many mobile apps’ monetisation schemes, may lead to the inadvertent adoption of non-compliant designs.222 In the USA, scholars have started criticising the ‘implementation gap’ between privacy policy and online platform features, which renders policy ‘misfit for the issue of concern and [...] lacking in the hoped for remedial or preventative impacts’.223 Calls for more specificity in regulation and guidance from enforcement authorities on how to translate legalese into technical requirements reverberate through this literature. As Black puts it, ‘legislation [...] should be drafted primarily with those that [it is] intended to regulate in mind’.224
Leenes writes that the fact that not all regulatees are alike should caution us against concluding too easily that regulation is inadequate (in other words, aiming for perfect compliance is a fool’s errand),225 and adds that the incentives to comply – the level of sanctions and the rigour of enforcement efforts – may be insufficient.226 Leenes is not wrong on either of these points. At the same time, however, we might want to be concerned about the fact that the interpretative room afforded by technology-neutral provisions could lead even otherwise non-problematic, well-intended and well-resourced regulatees to view regulation through a risk management frame, whereby non-compliance becomes an option to be weighed against the risk of detection and sanctioning.227 As to enforcement efforts, elsewhere in the regulatory literature it is acknowledged that not considering regulatees’ dispositions and abilities to comply when designing regulation may impact the effectiveness of enforcement efforts.228 Where breaches are detected, vague rules may expose enforcement authorities to challenges as to their interpretation and application of the law, and make them hesitant to take action against infringers.229 The sheer scale of digital markets does, however, make it more likely that many more infringements will fly under the radar. As this study suggests, digital market-monitoring tools may help in remedying this problem, but, as Chapter 8 will show, they too call for technology specificity.
Some qualifications are necessary. The discussion so far has presented technology neutrality and technology specificity as binary options for regulators, which is not descriptively accurate, and most likely not particularly helpful in prescriptive terms – replacing the policy presumption in favour of technology neutrality with a presumption of technology specificity is unlikely to yield better results.230 While technology specificity may be an appropriate drafting style231 to apply when legal certainty is called for,232 such as when compliance with regulation is to be ensured via technological means, technology-specific regulation has its drawbacks, which include a broadening of the prospects for creative compliance by highly capable regulatees.233 Accordingly, most regimes combine technology-specific and technology-neutral goals and provisions,234 an approach that could mitigate the limitations of both technology specificity and technology neutrality. Further, as Koops points out, technology specificity may be a question of regulatory level.235 A regulatory framework may therefore contain higher-level technology-neutral principles and lower-level technology-specific requirements. The crux of the matter, however, is that effective regulation in the digital sector may require (some) legal certainty, which in turn may call for (some) technology specificity.
Another key challenge policymakers need to grapple with when regulating technology is the question of when to intervene. While at an early stage of technological development there may be uncertainties as to its benefits and harms, and regulations may be ill-targeted due to a poor grasp of how the technology operates, regulating at later stages may be more difficult or impossible, as technology becomes more resistant to regulatory prodding over time; this is the so-called ‘Collingridge dilemma’.236 Law and technology scholars typically advocate early intervention, based on the precautionary principle, as the technology is more malleable at an early stage.237 However, the imaginations of regulators are limited, and intervening at an early stage entails the risk that the law may become disconnected (we will return to ‘disconnection’ below) from technology in the future, as technology may develop in ways that policymakers did not envisage.238 Further, (too) early, strict regulation may stifle innovation and prevent technology from delivering its benefits.239 Alternatively, regulators could adopt a permissive strategy and delay regulation until the harms are clear. Whether the level of clarity required by this approach can ever be achieved in practice is questionable.240 Further, this ‘wait-and-see’ approach discounts regulation’s potential role in steering technological development in a desirable direction.241 A middle ground through this regulatory dilemma would be to adopt a risk-based approach early on, modulating intervention according to the risk of harms.242
Looking at these considerations through the lens of dark patterns suggests two things. First, the optimal point in time when we know enough about dark patterns to exhaustively regulate them in one go may never arise. As Chapter 2 shows, innovation in digital product design, such as consumer-facing user interfaces, is an ongoing, continuous process. If we want to protect consumers from the harms arising out of the use of dark patterns, we may want to act now. Second, the timing dimension reminds us that not (over-)restricting beneficial innovation is itself an aim that technology regulation may pursue,243 and that technology-specific regulation in conditions of uncertainty about how to define the target of regulation may backfire. Indeed, technology-neutral regulation is typically seen as allowing ‘maximum leeway for innovation’.244 Technology-neutral regulation does, however, also grant innovation the benefit of the doubt, in that it assumes that the benefits of innovation will outweigh its harms. The proliferation of dark patterns in digital markets suggests otherwise. Is there a technology-specific way of minimising the prospect of ill-targeted regulations and undue restrictions on innovation? The answer may lie in the choice of target for a technology-specific intervention – the technological means (A/B testing) or its artefacts (dark patterns).
Regulating the technological means (A/B testing) rather than the artefacts (dark patterns) may produce more harm than good.245 As Chapter 2 points out, A/B testing is not just central to the development of dark patterns; it is also key to designing usable and user-friendly interfaces. It may be desirable to let innovation continue in this regard.
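To make the dual-use nature of the technological means more tangible, the following is a minimal, purely illustrative sketch of an A/B testing loop of the kind Chapter 2 describes. The variant labels, the simulated conversion rates and the ‘winner’ logic are hypothetical assumptions made for this example, not a description of any real product. Its point is that nothing in the mechanism itself distinguishes benign optimisation from manipulation: the same loop serves either purpose, depending on which design variants are tested and which metric the designer chooses to maximise.

```python
# Illustrative sketch only: a toy A/B test. Variant names, the simulated
# conversion rates and the choice of metric are hypothetical assumptions.
import random

VARIANTS = {
    "A": 0.05,  # baseline design: assumed conversion rate (hypothetical)
    "B": 0.08,  # candidate design, e.g. a pre-selected add-on (hypothetical)
}

def run_experiment(n_users: int = 10_000) -> dict:
    """Randomly assign simulated users to a variant and record conversions."""
    shown = {v: 0 for v in VARIANTS}
    converted = {v: 0 for v in VARIANTS}
    for _ in range(n_users):
        variant = random.choice(list(VARIANTS))
        shown[variant] += 1
        if random.random() < VARIANTS[variant]:
            converted[variant] += 1
    return {v: converted[v] / shown[v] for v in VARIANTS}

if __name__ == "__main__":
    rates = run_experiment()
    winner = max(rates, key=rates.get)
    print(f"Observed conversion rates: {rates}; variant {winner} 'wins'")
```

Precisely because this experimental machinery is indifferent to the designer’s intent, a rule aimed at the machinery would sweep up benign usability testing alongside the optimisation of dark patterns.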
That leaves us with the option of regulating dark patterns, either generally or specifically. As Chapter 3 shows, the HCI literature has struggled to find an overarching definition of dark patterns.246 That regulators, who have less technical expertise than HCI scholars, will be able to achieve this feat appears unlikely. A misguided policy definition of dark patterns could also chill innovation in user-friendly interface design, as designers may fear that beneficial features will be labelled dark patterns. As Birnhack points out, minimising the risks of hampering beneficial innovation would seem to require the sparing and incremental use of technology specificity.247 There may therefore be some merit in adopting an incremental, risk-based approach to the (technology-specific) regulation of individual dark patterns that may pose high(er) risks of harms.248 This is, however, a rather permissive perspective in that only the practices deemed most harmful are regulated. In this context, technology-neutral background laws may play a role in mitigating the risks of a permissive approach249 by making sure that potentially harmful dark patterns that are not (yet) specifically regulated can still be addressed.
While the solution I propose seems to minimise harms to (beneficial) innovation, it still entails the risk that, with the evolution of dark patterns, it will become outdated. There does not seem to be one solution that is concomitantly conducive to beneficial innovation, capable of adequately tackling harmful innovation with sufficient legal certainty, and future-proof.250 Should this scare us?
This may not be a question of getting regulation right once and for all, but rather of making room for getting it wrong. Recognising the difficulties of regulating socio-technical artefacts well, some scholars instead point out that fast-paced socio-technical change is a given in the current regulatory environment. According to Bennett Moses, regulatory regimes have to grapple with an ongoing challenge of regulatory disconnection.251 ‘Regulatory disconnection’ is a term coined by Brownsword which underlines the importance of correspondence between socio-technical change and both regulation and the values underlying that regulation.252 He draws a distinction between descriptive and normative disconnection. Descriptive disconnection occurs when the ‘descriptions employed by the regulation no longer correspond to the technology or to the various technology-related practices that are intended targets for the regulation’.253 Normative disconnection refers to a situation in which ‘technology and its applications raise doubts as to the value compact that underlies the regulatory scheme’.254 Brownsword posits regulatory connection as ‘the outstanding generic challenge presented by new technologies’,255 and qualifies the effects of disconnection as ‘undesirable relative to considerations of regulatory effectiveness’.256
Maintaining regulatory connection, and, by extension, effectiveness, is not merely a question of how regulation ought to be designed content-wise; it brings into focus the design of the regulatory environment. In a socio-techno-legal landscape marked by continuous change, occasional regulatory disconnection is a given; as Koops points out, once we set out to regulate some technology, it is paradoxical to argue that we ought not to adapt regulation for future technology.257 If we instead accept that regulating technology means, by definition, that we may need to revisit regulation more often, a more pressing question may be whether tools and institutions are in place to deal with regulatory disconnection as it recurs and to adapt regulation in good time.258 In other words, adaptability in response to new technology, new uses of technology and new evidence of the harms thereof seems to be a dimension of regulatory effectiveness shaped by the (ongoing) risk of regulatory disconnection. The more technology-specific a regime, the greater the need for adaptability: as discussed above, technology-specific regulation naturally tends towards obsolescence. Where command-and-control regulation is chosen, adaptability may be crafted into a legislative act via forms of ‘regulatory innovation’ such as regulatory impact assessments, periodic evaluation and sunset clauses.259 Whether leaving it to a legislator to adapt a regulatory regime is the approach most conducive to well-informed and timely solutions is doubtful, however. Adaptability may therefore also mean the delegation of technical rule-making to independent agencies or standard-setting bodies in order to ensure the continuous adaptation of rules and to make necessary adjustments without statutory intervention, as Leenes et al. explain.260
Starting from the prescription of socio-technical change theorists that the need to regulate socio-technical artefacts should be judged with reference to established regulatory rationales, this chapter first looked at whether government intervention with regard to dark patterns is justified, based on welfarist and autonomist perspectives. Both conceptual frames support intervention. (Behavioural) law and economics cautions us, however, that any attempt to regulate dark patterns should be backed by market-specific evidence about their influence on consumer behaviour and the resulting harms. It should also be mindful of the risk of regulatory failure. Regulating behavioural phenomena well is hard. Regulating behavioural phenomena that take the form of socio-technical artefacts, as dark patterns do, is bound to be even harder.
Appropriate regulatory design may help. As Kaminski puts it, ‘regulatory design is a perennially central issue for law and technology’.261 The welfarist perspective suggests that the regulation of dark patterns could benefit from more direct regulation of commercial practices (rather than traditional informational remedies); the autonomist perspective does not object to this. The theory of socio-technical change shows us that the content dimension of regulatory design involves trade-offs between flexibility and legal certainty. The timing dimension reveals that if we jump at the first opportunity to regulate technology, before all its harms and benefits are clear, we may both miss out on beneficial innovation and end up with ill-targeted laws.
For as long as we have attempted to regulate technology, technology neutrality has guided the way in minimising risks across these considerations. This chapter has outlined some considerations which cast doubt on the potential of technology-neutral regulation to curb the use of dark patterns in digital markets. By not engaging with the design of consumer-facing digital products, regulation may be leaving the determination of their consumer-friendliness to technologists with different organisational and technical capacities, and different appetites to comply. Technology is not neutral, and it may be time for regulation to become less neutral. What we could do is regulate dark patterns in a (more) technology-specific manner. That being said, the fact that dark patterns are still a fluid and broad concept points to a high risk that regulators will not be able to target regulation well, which means that regulation could not only have a chilling effect on beneficial innovation in consumer-facing UI design, as regulatees will not know where they stand, but also fail to protect consumers. I suggest that we could instead attempt to regulate, in an incremental fashion, individual dark patterns that bear a high potential of causing consumer harm. This kind of risk-based approach is also supported by the welfarist perspective, which cautions against intervening in (consumer) markets without evidence of market failures. My suggestion of an incremental, technology-specific regulatory approach does not mean a complete displacement of technology-neutral regulation. Technology-neutral regulation could play a background, safety-net role, both to prevent circumvention by resourceful market actors and to avoid legal gaps. Concomitantly, there may be more to effective techno-regulation than its level of technology specificity. Regulation in a socio-techno-legal environment marked by constant change is vulnerable to regulatory disconnection, which may be bad news for its effectiveness. Effective techno-regulation therefore also seems to demand the set-up of a regulatory environment that facilitates the speedy adjustment of regulation in the face of changing landscapes of harm or of regulation’s own past mistakes. A lack of mechanisms and institutions that would allow us to bounce back when we get something wrong in the way we regulate technology may be the straw that breaks the camel’s back.
Ultimately, however, the law and technology literature on technology regulation does not operate on a prescriptive dimension. What we can do instead is look back on the laws we have operating in this space at the moment, look at how the balancing acts involved in techno-regulation have played out, and imagine what could be done differently. As Birnhack frames this exercise, ‘[w]e might not yet know how to regulate [new technologies], if at all, but at least we know the limits of our current legal scheme’.262 Indeed, any story about technology regulation needs to start at the right point in time, as Bennett Moses points out,263 and so does our story about the regulation of dark patterns. Accordingly, in the following chapters, I sketch the current EU regulatory landscape (Chapter 5) and analyse how it applies to the Shopping dark patterns discussed in Chapter 3, in order to make a judgement about its effectiveness (Chapter 6). I am particularly concerned in this regard with whether we are dealing with regulatory disconnection – which, based on Brownsword’s warnings, may put the effectiveness of a legal framework at risk – as well as with the mechanisms we have in place to make sure we can maintain regulatory connection in the long run; if there is a regulatory disconnection in how EU consumer law instruments currently govern dark patterns, this is unlikely to be the last time it occurs. Based on my findings in this regard, in Chapter 7 I formulate some future policy directions, guided by the considerations of regulatory design that I have discussed in section 4.3. However, even if we devise the ‘best’ laws possible, these will not be of help without effective enforcement. It is well understood that legal uncertainty (such as the legal uncertainty stemming from technology-neutral laws) may undermine the effectiveness of public enforcement mechanisms in relation to socio-technical phenomena: enforcement authorities may be reluctant to act because they are unsure of how to interpret the legislator’s will in the light of technological advancement, and may fear that their decisions will be overturned by courts.264 This assessment misses a crucial step, however: for considerations of (public) enforcement to come into play, administrative agencies need to be able to detect potential infringements of substantive rules, which now happen on a vaster scale than ever before. Low levels of infringement detection may make non-compliance with regulation in digital markets a self-fulfilling prophecy: the lower the perceived chances of breaches being detected and penalised, the higher the odds that a regulatee will be tempted to violate the law. Riefa and Coll warn us that the limited monitoring of digital markets will in time lead to a lack of credible deterrence,265 whereas Plana Casado argues that it has already ‘allowed a sense of impunity in e-commerce’.266 Using computational methods to monitor digital markets could help match the scale of detection efforts with that of infringements and lay the groundwork for enforcement actions. Nonetheless, as I will show in Chapter 8, technology neutrality and the legal uncertainty it presupposes may pose obstacles to the development of digital market-monitoring methods. Therefore, in digital markets, technology specificity may be a dimension of both effective substantive regulation and of effective enforcement.
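To illustrate how technology specificity can serve enforcement as well as substantive regulation, consider a deliberately simplified sketch of an automated market-monitoring check. It assumes, purely for the sake of the example, that pre-ticked consent checkboxes are the specifically regulated practice, and the sample markup is invented; the real monitoring methods discussed in Chapter 8 are considerably more involved. The sketch shows that a technology-specific rule can be translated into a condition a machine can test at scale, whereas a technology-neutral standard such as ‘unfairness’ offers no comparably checkable condition.

```python
# Illustrative sketch only: flag pre-ticked checkboxes in a page's HTML as a
# stand-in for a technology-specific prohibition. The sample markup and the
# framing of pre-ticked boxes as the regulated practice are assumptions.
from bs4 import BeautifulSoup  # third-party HTML parser

def find_preticked_checkboxes(html: str) -> list:
    """Return identifiers of checkbox inputs that are checked by default."""
    soup = BeautifulSoup(html, "html.parser")
    flagged = []
    for box in soup.find_all("input", attrs={"type": "checkbox"}):
        if box.has_attr("checked"):
            flagged.append(box.get("name") or box.get("id") or "<unnamed>")
    return flagged

if __name__ == "__main__":
    sample = """
    <form>
      <input type="checkbox" name="newsletter" checked> Subscribe to offers
      <input type="checkbox" name="terms"> I accept the terms
    </form>
    """
    print(find_preticked_checkboxes(sample))  # -> ['newsletter']
```

The wider the gap between the legal text and any such machine-checkable condition, the more interpretative work detection tools must perform – which is precisely the obstacle that technology neutrality poses for digital market monitoring.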
However, let us avoid putting the cart before the horse by first taking a look at the current EU consumer law instruments.