Saturday 18 September 2021

A look back at consumer data protection - what has happened this year?

Today we provide a brief summary of interesting developments at the interface of consumer law and data protection that have taken place since the beginning of this year. A lot has happened, and so far we have only reported on selected cases. In this post we take a closer look at significant decisions and opinions issued this year, which are likely to see further developments in the near future.

We begin with a fairly recent decision adopted by the Irish Data Protection Commission (DPC) in the WhatsApp case. The decision captured headlines earlier this month - and not without reason. After all, it is not every day that Facebook (as the parent company of WhatsApp) is hit with a fine of €225 million for violations of the General Data Protection Regulation. The proceedings are interesting for both substantive and procedural reasons. Firstly, the final decision reached by the DPC provides an extensive analysis of the GDPR’s transparency provisions, as applied to the case at hand. For a start, the decision points to the “over-supply of very high level, generalised information” and the possibility of creating an “information fatigue” - a well-known issue in consumer law and policy. Aside from problems of volume and presentation (e.g. multiple cross-references), violations of substantive transparency are also identified (e.g. insufficiently specific categories of data recipients). Secondly, the proceedings reveal the relevance of the GDPR’s consistency mechanism in cases involving cross-border data processing (for related controversies and case law see our comment here). In the WhatsApp case, the Data Protection Commission was acting as the lead authority - and its original draft decision had not been quite as sweeping as the one eventually adopted. It was the interventions of the other supervisory authorities and the binding decision adopted by the European Data Protection Board in late July that led the DPC to take a harsher stance - in relation to both the identified infringements and the calculated fines. Facebook has reportedly challenged the decision before a competent court, so stay tuned!

An even greater fine was imposed in July on Amazon: the Luxembourg data protection authority fined the company €746 million (a record so far) for illegal ad targeting and ordered it to review its practices. As La Quadrature du Net (the French privacy rights group that filed the complaint) explains, Amazon was targeting data subjects for advertising purposes without their freely given consent and was therefore processing consumer data without a legal basis. Unfortunately, the Luxembourg authority did not publish the content of the decision, citing professional secrecy. According to a published statement, the authority views the publication of a decision as an additional sanction. To be sure, the decision may ultimately see the light of day, yet only after all legal remedies have been exhausted and provided that publication is not likely to cause disproportionate harm to the parties involved.

Another interesting case - this time at the online/offline interface - has been handled by the Swedish supervisory authority, which reviewed the activities of a public transport operator in Stockholm. The case involved ticket inspectors equipped with video and audio recording cameras worn on their clothing. The use of this type of technology was intended to counter potentially dangerous situations during ticket checks, to document incidents and to ensure that the right person is penalised for travelling without a ticket. The problem was that the inspectors had to keep the cameras on throughout their shifts and could thus potentially record every traveller. In addition, the cameras recorded continuously, overwriting the stored images and sounds only after one minute. The authority considered this period disproportionately long and maintained that the storage time should be reduced to a maximum of 15 seconds. While this case involved recording undertaken by a trader’s employees, one can well imagine similar problems being posed in the context of increasingly sophisticated smart devices carried by consumers themselves. The recently unveiled Facebook glasses (Ray-Ban Stories) provide a prominent case in point - and the competent authorities have already voiced concerns.
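For the technically curious, the dispute essentially concerned the length of the camera’s rolling pre-record buffer: footage is held only for a fixed window and silently overwritten unless an incident is flagged. Below is a minimal sketch in Python of how such a retention window works - all names are illustrative, and the actual camera firmware is of course not public.

```python
from collections import deque
import time

# Illustrative retention window; the Swedish authority indicated a
# maximum of 15 seconds (the cameras at issue kept up to one minute).
RETENTION_SECONDS = 15.0

class RollingBuffer:
    """Hypothetical rolling pre-record buffer for a body-worn camera."""

    def __init__(self, retention=RETENTION_SECONDS):
        self.retention = retention
        self.frames = deque()  # (timestamp, frame) pairs

    def add(self, frame):
        """Record a frame and overwrite anything older than the window."""
        now = time.monotonic()
        self.frames.append((now, frame))
        while self.frames and now - self.frames[0][0] > self.retention:
            self.frames.popleft()  # old material is discarded, not stored

    def save_incident(self):
        """Only when an incident is flagged is the buffered material kept."""
        return [frame for _, frame in self.frames]
```

On this design, a traveller who passes an inspector without incident is recorded for at most the retention window; the shorter the window, the less incidental surveillance of bystanders.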

Last but not least, as regards the decisions of national authorities, TikTok's sorry saga of alleged violations of children’s privacy continues to unfold. Previously it was the Italian data protection authority that ordered the platform to limit the processing of data of users whose age could not be established with certainty (we reported on it here). This time the Dutch authority investigated TikTok’s privacy policy and concluded that the company had failed to inform its users, including children, in a clear and transparent way about how their data are processed via the app. More specifically, the language of the notice proved relevant: the information was provided only in English, and not in Dutch. As a consequence, the authority imposed a (comparatively modest) fine of €750,000. This, however, may not be the end of TikTok’s legal problems, as the launch of two further inquiries by the Irish DPC demonstrates.

Finally, there are two additional pieces of news we consider noteworthy. The first, which has probably caught our readers' attention, is the Commission's adequacy decision on data transfers between the EU and the UK, issued on 28 June. The decision is a direct follow-up to Brexit and confirms that the UK guarantees a level of data protection essentially equivalent to that provided in the EU. However, we are curious to see how these data transfers will develop in the future, especially in the context of recent reports about planned changes to UK data protection law (we briefly touched upon this here). The second matter is a joint opinion of the European Data Protection Board and the European Data Protection Supervisor on the proposal for a regulation laying down harmonised rules on artificial intelligence. The opinion highlights several key points that need further elaboration (e.g. clarifications on the risk-based approach and the EDPS’ role as a market surveillance authority) and calls for “a general ban on any use of AI for an automated recognition of human features in publicly accessible spaces”.

** Agnieszka Jabłonowska contributed to this post.