Thursday, 27 October 2022

OECD report on Dark Patterns

Yesterday, aka 26 October 2022, the OECD released a report on dark patterns which had been in the making for almost two years. LLM students who would like to write about the topic, or just about anyone looking for a clear intro to the subject - this report is your friend! It contains not only a helpful classification of different types of dark patterns but also a quite comprehensive review of relevant regulatory frameworks and interventions, known case-law and much (if not all, and arguably a too US-centred and English-based selection) of the literature you may want to look at, including... Joasia's 2019 JCP paper The Transparent Trap! Kudos there.

A working definition, which may or may not gain traction in the field, is provided at the outset: dark patterns, accordingly, are

"business practices employing elements of digital choice architecture, in particular in online user interfaces, that subvert or impair consumer autonomy, decision-making or choice. They often deceive, coerce or manipulate consumers and are likely to cause direct or indirect consumer detriment in various ways, though it may be difficult or impossible to measure such detriment in many instances."

[The first part of the report, where dark patterns are typified and their impact assessed, I skip for now - but you can find it all online!]

The report acknowledges that more enforcement is necessary in the EU, while ultimately praising the UCPD's relative ability to address the problem in comparison with other instruments: on the one hand, the report observes, resonance with the blacklisted items in the Annex makes it possible to address certain dark patterns with a degree of legal certainty; on the other, the "principle-based" prohibition of unfair commercial practices works quite well to cover technological and commercial developments like the ones at hand.

One critical point that is (thankfully) mirrored in the report is the known criticism of the average consumer standard: this standard is hard to square with consumers' apparent vulnerability to dark patterns and other online perils and, the report observes, seems particularly problematic in the context of increasing online personalisation. The report also highlights criticism of disclosure rules, in particular as a way of preventing consumers from falling for dark traps: it turns out, the report concludes, that all experiments trying to measure the effects of disclosures in this area failed to detect any serious improvement. Hence the relevance of information may be limited to broader education campaigns and possibly to a limited set of dark patterns.

The report also interestingly reviews examples of technical supports that are being developed - essentially, dark pattern-blockers for one's browser. These are, apparently, useful in some cases but less so when the dark pattern cannot simply be "written away" in code (p 47). I would like an app like that though!
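
To give a flavour of what such blockers can and cannot do, here is a minimal sketch of my own (not from the report; the selectors and keywords are invented assumptions, not a description of any real tool). It flags patterns that are visible in a page's markup, such as pre-ticked consent boxes and scarcity messages; manipulation that happens server-side, like a personalised price, leaves no such trace in the code, which is exactly the limit the report notes.

```python
# A toy dark-pattern scanner: flags patterns that are visible in a page's
# markup. Selectors and keywords are illustrative assumptions, not a real tool.
import re
from bs4 import BeautifulSoup

URGENCY_WORDS = re.compile(r"only \d+ left|hurry|ends in|expires", re.I)

def scan_for_dark_patterns(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    # Pre-ticked opt-ins: the "sneaking" is written directly into the HTML.
    for box in soup.find_all("input", {"type": "checkbox", "checked": True}):
        findings.append(f"pre-ticked checkbox: {box.get('name', '?')}")
    # Scarcity/urgency messages: detectable only if phrased in static text.
    for el in soup.find_all(string=URGENCY_WORDS):
        findings.append(f"urgency message: {el.strip()!r}")
    return findings

html = """<form>
  <input type="checkbox" name="newsletter" checked> Subscribe me
  <p>Hurry! Only 2 left in stock.</p>
</form>"""
print(scan_for_dark_patterns(html))
# ['pre-ticked checkbox: newsletter', "urgency message: 'Hurry! Only 2 left in stock.'"]
```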

As a scholar who reads Law & Econ work with a mix of interest and skepticism, I was less impressed by the report's discussion of nudges on page 37, under "Digital choice architecture". The title reflects a trend that has been going on for a long time, of course; the report, however, brings together under one technique concerns that may need to be kept separate. "Privacy by design", which is mentioned as an example, is not the same as a "bright pattern" based on extrapolating "welfare enhancing" choices from supposed "preferences or expectations". While the report necessarily gives a limited overview of each issue, conflating privacy protection with "consumertarian" views and hard-core nudge advocacy is to my mind quite problematic.

Anyway, this is really a good starting point but also, as far as I can tell, a fairly comprehensive restatement from which those already in the debate will benefit too. Recommended read!

Thursday, 4 February 2021

CMA's paper on algorithms & online platforms: comprehensive report on the benefits and perils of algorithms

The UK Competition and Markets Authority (CMA) recently published a report on the consequences of online platforms’ use of algorithms (‘sequences of instructions to perform a computation or solve a problem’) for consumer protection and for competition (here). This report builds on the CMA’s 2018 paper on pricing algorithms (here). The report starts by highlighting that the increasing sophistication of algorithms usually means decreasing transparency. The CMA acknowledges the benefits of algorithms to consumers, such as saving consumers time by offering them individualized recommendations. Additionally, algorithms benefit consumers by increasing efficiency, effectiveness, innovation and competition. However, the main goal of the report is to list the (economic) harms caused to consumers as a result of algorithms.

The report highlights that big data, machine learning and AI-based algorithms are at the core of major market players such as Google (e.g. its search algorithm) and Facebook (e.g. its news feed algorithm). The CMA also acknowledges that many of the harms discussed in the report are not new, but have been made more relevant by recent technological advances. Finally, the report acknowledges that the dangers brought by algorithmic systems are even greater where they impact consumers significantly (such as decisions about jobs, housing or credit).

The harms discussed in the report deal mainly with choice architecture and dark patterns (e.g. misleading scarcity messages about a given product or misleading rankings). Additionally, personalization is depicted as a particularly dangerous harm, since it cannot be easily identified and manipulates consumer choice without this being clear to consumers. Personalization is also worrying because it can target vulnerable consumers. In particular, the CMA is worried about possible discrimination as a result of the personalization of offers, prices and other aspects.

Personalized pricing means that firms charge different prices to different consumers according to what the firm (and its algorithms) thinks the consumer is willing to pay. While this has some benefits (like lowering search costs for consumers), the CMA warns that consumers might lose trust in the market as a consequence of personalized pricing practices. While some personalized pricing techniques are well-known (such as offering coupons or charging lower prices to new customers), others are more opaque and harder to detect. Non-price personalization is also described as potentially harmful, for instance personalized search result rankings or personalized recommendation systems (e.g. what video to show next). In particular, the CMA warns that these systems may lead to unhealthy overuse of, or addiction to, certain services and to a fragmented understanding of reality and public discourse.
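
For readers less familiar with the mechanics, a stylised sketch is below. It is my illustration, not the CMA's, and the signals and weights are entirely made up: the firm estimates each consumer's willingness to pay from observed signals and quotes a price just under that estimate, so two consumers see different prices for an identical product.

```python
# Stylised personalised pricing: quote each shopper a price tied to an
# estimated willingness to pay (WTP). Signals and weights are invented
# for illustration; real systems use far richer behavioural data.
BASE_PRICE = 50.0

def estimate_wtp(profile: dict) -> float:
    wtp = BASE_PRICE
    if profile.get("premium_device"):        # proxy for higher income
        wtp *= 1.25
    if profile.get("repeat_visits", 0) > 3:  # strong interest signal
        wtp *= 1.10
    if profile.get("new_customer"):          # discount to win the sale
        wtp *= 0.85
    return wtp

def personalised_price(profile: dict) -> float:
    # Price just below the estimated WTP to capture most of the surplus.
    return round(estimate_wtp(profile) * 0.95, 2)

print(personalised_price({"premium_device": True, "repeat_visits": 5}))  # 65.31
print(personalised_price({"new_customer": True}))                        # 40.38
```

The detection problem the CMA flags follows directly: nothing on any single consumer's screen reveals that the price was personalised; only a comparison across profiles would.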

Additionally, the use of algorithms can harm competition by excluding competitors (e.g. through platforms preferencing their own products in rankings). Through such exclusionary practices, dominant firms can stop competitors from challenging their market position. A prominent example is Google displaying its own Google Shopping service more favorably in its general search results page than competitors offering similar services. Finally, the CMA report zooms in on algorithmic collusion, i.e. the use of algorithmic systems to sustain higher prices.
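
The ranking version of self-preferencing is mechanically very simple, which is part of what makes it hard to spot from the outside. A toy sketch of mine (the scores and boost factor are arbitrary assumptions, not a description of any actual ranking system):

```python
# Toy self-preferencing: a platform ranks listings by relevance, but
# quietly multiplies the score of its own services. Numbers are invented.
OWN_BOOST = 1.5

def rank(listings: list[dict]) -> list[dict]:
    return sorted(
        listings,
        key=lambda item: item["relevance"] * (OWN_BOOST if item["own_service"] else 1.0),
        reverse=True,
    )

results = rank([
    {"name": "RivalShopping", "relevance": 0.9, "own_service": False},
    {"name": "PlatformShopping", "relevance": 0.7, "own_service": True},
])
print([r["name"] for r in results])  # ['PlatformShopping', 'RivalShopping']
```

Each individual results page still looks plausible, so exclusion of this kind typically only shows up in aggregate comparisons across many queries.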

The report also highlights the obstacles created by a lack of transparency, particularly when it comes to platform oversight. The CMA warns that this lack of transparency and the misuse of algorithms may lead consumers to stop participating in digital markets (e.g. by deleting social media apps). This justifies, in the CMA’s opinion, regulatory intervention. In particular, the CMA considers that regulators can provide guidance to businesses on how to comply with the law and can elaborate standards of good practice. Overall, the report draws attention to the fact that many laws in place do not clearly apply to algorithmic systems, for instance to discrimination in AI systems. Moreover, the CMA highlights that the application of consumer law to protect consumers against algorithmic discrimination is still an unexplored area.

The report ends with a call for further research on the harms caused by algorithms. The CMA suggests techniques to investigate these harms that do not depend on access to companies’ data and algorithms, such as enlisting consumers to act as ‘mystery shoppers’ or crawling and scraping data from websites. The CMA also suggests specific investigation techniques for when there is access to the code.
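
The ‘mystery shopper’ idea translates quite naturally into code: fetch the same product page under different simulated client profiles and compare the prices served. A minimal sketch under stated assumptions (the URL, the price markup and the idea that the site keys personalisation to, say, the User-Agent are all hypothetical):

```python
# Mystery-shopper price check: request the same page with different client
# profiles and compare extracted prices. The URL, the ".price" selector and
# the assumption that personalisation keys off the User-Agent are hypothetical.
import requests
from bs4 import BeautifulSoup

PROFILES = {
    "budget_android": {"User-Agent": "Mozilla/5.0 (Linux; Android 10)"},
    "premium_mac": {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_0)"},
}

def fetch_price(url: str, headers: dict) -> str | None:
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    tag = soup.select_one(".price")  # hypothetical price element
    return tag.get_text(strip=True) if tag else None

url = "https://example.com/product/123"  # placeholder product page
prices = {name: fetch_price(url, hdrs) for name, hdrs in PROFILES.items()}
print(prices)  # differing values across profiles would suggest personalised pricing
```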

Overall, this is an extremely comprehensive report that not only explains the biggest consumer harms brought by the use of algorithms but also contains several practical examples, as well as concrete methodological suggestions for further research and for better enforcement. Definitely a recommended read for academics and practitioners alike.