
Tuesday, 2 September 2025

Key GDPR Fines in Mid-2025: Luka (Replika), TikTok, and ING Bank Śląski

This post discusses three recent decisions of Data Protection Authorities imposing fines for GDPR infringements on Luka Inc., TikTok, and ING Bank Śląski. While our analyses usually focus on judgments of the Court of Justice of the European Union, in this case we turn to decisions of national authorities. Such decisions tend to attract significant attention, whether because of the seriousness of the violations or the size of the penalties, which makes them a frequent subject of debate. The three cases selected here meet these criteria and were all issued within the past few months. They are also directly relevant to consumers, as they highlight risks that many of us face in everyday life when using apps or online services where personal data may be mishandled.
 
 
Luka Inc. 

On 10 April 2025, the Italian Data Protection Authority (Garante per la protezione dei dati personali) issued a decision against Luka Inc., the U.S. company behind the Replika chatbot. Replika is marketed as an AI “companion” designed to boost users’ mood and wellbeing, and can be set up as a friend, mentor, therapist or even a romantic partner. But according to the Garante, the way Luka handled users’ personal data fell far short of what the GDPR requires.

The investigation showed that Replika’s privacy policy did not clearly identify the legal ground for the many different ways in which users’ data were processed – for example, data used for running the chatbot versus data used for developing the large language model behind it. Instead of specifying purposes and corresponding legal bases, the policy only gave vague, generic statements like: “We care about the protection and confidentiality of your data. We therefore only process your data to the extent that: It is necessary to provide the Replika services you are requesting, you have given your consent to the processing, or we are otherwise authorized to do so under the data protection laws” (btw – doesn’t that sound familiar from many privacy policies?). This lack of granularity made it impossible for users to understand how their data were really being used, in breach of Articles 5(1)(a) and 6 GDPR.

What’s more, the privacy notice was only available in English, even though the service was offered in Italy. It also failed to explain key points required under GDPR: what kinds of data were collected, how long they were stored, whether data were transferred outside the EU, and for what purpose. Some statements were even misleading, for instance, suggesting that personal data might be transferred to the U.S., while the company later claimed no such transfers took place. Such gaps and contradictions meant that users could not make informed choices about their data.

The most troubling finding, however, concerned children: the Garante concluded that Luka had failed to implement effective safeguards for minors. Although the service was formally intended for adults, it lacked genuine age-verification mechanisms. Registration required only a name, email address, and gender, which allowed minors to create accounts. Even when users declared they were under 18, no technical barrier prevented them from accessing the platform. In practice, this meant that children could be exposed to age-inappropriate content, including sexually explicit material. Moreover, even after updates to the privacy policy, technical testing showed that under-18 users could still bypass the age restriction simply by editing their profile.
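As a purely illustrative aside, the gap the Garante identified is the difference between an age field that lives in an editable profile and an age gate enforced server-side at every session. The sketch below (Python, with entirely hypothetical names and a hard-coded threshold of 18) shows one minimal way such a gate could work; it is not a description of Replika's actual architecture.

```python
from dataclasses import dataclass
from datetime import date

MINIMUM_AGE = 18  # the service was formally intended for adults

@dataclass(frozen=True)
class Account:
    # frozen=True makes the birth date immutable after registration, so
    # editing the profile (the bypass found in the Garante's testing)
    # cannot change the age the gate sees
    email: str
    birth_date: date

def age_on(birth_date: date, today: date) -> int:
    # subtract one year if this year's birthday has not yet occurred
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def authorize_session(account: Account, today: date) -> None:
    """Server-side gate evaluated at every session, not only at sign-up."""
    if age_on(account.birth_date, today) < MINIMUM_AGE:
        raise PermissionError("access denied: account holder is under 18")

# A declared minor is blocked at every login attempt, not just once.
minor = Account(email="user@example.com", birth_date=date(2010, 5, 1))
try:
    authorize_session(minor, today=date(2025, 4, 10))
except PermissionError as exc:
    print(exc)
```

The design point is simply that the check runs against data the user cannot edit and is repeated on every access, which is what the profile-editing bypass defeated.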
 
For these violations, the Garante imposed an administrative fine of EUR 5,000,000, representing half of the maximum amount available under Article 83(5) GDPR.
 
 
TikTok Technology Limited
 
Another significant decision was issued by the Irish Data Protection Commission (DPC) in May 2025 against TikTok Technology Limited. Although the full text of the decision has not yet been published, the official press release provides insight into the reasons for the sanction.
 
The inquiry examined both the lawfulness of TikTok’s transfers of European users’ personal data to China and the adequacy of the company’s transparency regarding those transfers. The DPC concluded that TikTok had infringed the GDPR in two key respects.
 
First, the Commission found that TikTok’s transfers of user data to China violated Article 46(1) GDPR. The company failed to verify, guarantee, and demonstrate that personal data of European users – remotely accessed by staff in China – was afforded a level of protection essentially equivalent to that required within the EU. TikTok’s own assessments of Chinese law highlighted serious divergences from EU standards, particularly risks under the Anti-Terrorism Law, the Counter-Espionage Law, and the National Intelligence Law. Nevertheless, the company did not adequately address these risks or ensure that its contractual safeguards were effective.
 
Second, the DPC held that TikTok had not complied with the information duties set out in Article 13(1)(f) GDPR. Earlier versions of its privacy policy (in force between July 2020 and December 2022) failed to identify the countries involved in data transfers and did not explain the nature of the processing – for instance, that personnel in China could remotely access data stored in Singapore and the United States. This lack of clarity prevented users from understanding who could access their data and under what conditions.
 
The decision imposed not only administrative fines but also corrective measures. TikTok was given six months to bring its practices into compliance, failing which data transfers to China would have to be suspended altogether. The total fine amounted to EUR 530,000,000, comprising EUR 485,000,000 for the unlawful transfers and EUR 45,000,000 for the lack of transparency.
 
 
ING Bank Śląski
 
The third decision discussed here was delivered on 23 July 2025 by the Polish Data Protection Authority (UODO) against ING Bank Śląski S.A., which was fined PLN 18,416,400 (around EUR 4,000,000). The case revolved around the bank’s widespread practice of copying and scanning the ID cards of both existing and potential clients, even in situations where such a step was not required by law. The bank introduced this practice after an amendment to the Polish anti-money laundering provisions, which it interpreted as justifying the systematic copying of IDs.
 
The investigation revealed that between April 2019 and September 2020 the bank systematically scanned ID documents not only during customer onboarding, but also in contexts where no anti-money laundering (AML) obligations applied – for example, when a customer filed a complaint about an ATM. In practice, the bank’s internal procedures made the delivery of services conditional on handing over a scanned ID, leaving consumers with no real choice.
 
As emphasized in the decision, both AML law and the GDPR require banks to conduct a risk-based assessment and determine, case by case, whether copying an ID is genuinely necessary. ING failed to perform such assessments. Instead, it adopted blanket rules requiring ID copies in numerous situations, regardless of whether AML obligations applied. As a result, the bank processed extensive amounts of sensitive identifying information without a valid legal basis under Article 6 GDPR. Although no specific harm was demonstrated, the decision underscores that ID cards contain a wide range of personal data – including full name, date of birth, parents’ names, unique national ID number (PESEL), photograph, and document series. Taken together, these data significantly increase the risk of identity theft or fraudulent loans. Given that ING had millions of individual and corporate clients during the period in question, the potential consequences of such unnecessary data collection were substantial.
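To make the contrast concrete, the blanket rule criticised by UODO amounts to a policy that always permits an ID copy, whereas the required approach is a conditional, case-by-case test. The sketch below (hypothetical Python, with invented interaction categories) illustrates that logic; it is a simplification for illustration only, not a statement of the bank's systems or of Polish AML law.

```python
from enum import Enum, auto

class Interaction(Enum):
    ACCOUNT_OPENING = auto()        # customer due diligence applies
    HIGH_RISK_TRANSACTION = auto()  # enhanced due diligence may apply
    ATM_COMPLAINT = auto()          # no AML obligation
    GENERAL_INQUIRY = auto()        # no AML obligation

# Hypothetical mapping of interactions in which an AML obligation could
# even arguably justify an ID copy; the real assessment is case by case.
AML_TRIGGERS = {Interaction.ACCOUNT_OPENING, Interaction.HIGH_RISK_TRANSACTION}

def may_copy_id(interaction: Interaction, risk_assessment_done: bool) -> bool:
    """Case-by-case test: copying an ID requires both an applicable AML
    obligation and a documented risk-based assessment supporting necessity."""
    return interaction in AML_TRIGGERS and risk_assessment_done

# The blanket rule objected to by UODO would, in effect, return True
# for every interaction type.
print(may_copy_id(Interaction.ATM_COMPLAINT, risk_assessment_done=True))    # False
print(may_copy_id(Interaction.ACCOUNT_OPENING, risk_assessment_done=True))  # True
```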

Saturday, 18 September 2021

A look back at consumer data protection - what has happened this year?

Today we bring you a brief summary of interesting developments at the interface of consumer law and data protection that have taken place since the beginning of this year. A lot has happened, and so far we have only reported on selected cases. In this post we take a closer look at significant decisions and opinions issued this year, which are likely to see follow-up in the near future.

We begin with a fairly recent decision adopted by the Irish Data Protection Commission (DPC) in the WhatsApp case. The decision captured headlines earlier this month - and not without reason. After all, it is not every day that Facebook (as the parent company of WhatsApp) is hit with a fine of €225 million for violations of the General Data Protection Regulation. The proceedings are interesting for both substantive and procedural reasons. Firstly, the final decision reached by the DPC provides an extensive analysis of the GDPR’s transparency provisions as applied to the case at hand. For a start, the decision points to the “over-supply of very high level, generalised information” and the possibility of creating “information fatigue” - a well-known issue in consumer law and policy. Aside from problems of volume and presentation (e.g. multiple cross-references), violations of substantive transparency are also identified (e.g. insufficiently specific categories of data recipients). Secondly, the proceedings reveal the relevance of the GDPR’s consistency mechanism in cases involving cross-border data processing (for related controversies and case law see our comment here). In the WhatsApp case, the Data Protection Commission was acting as the lead authority - and its original draft decision had not been quite as sweeping as the one eventually adopted. It was the interventions of the other supervisory authorities and the binding decision adopted by the European Data Protection Board in late July that led the DPC to take a harsher stance - in relation to both the identified infringements and the calculated fines. Facebook has reportedly challenged the decision before a competent court, so stay tuned!


An even greater fine was imposed in July on Amazon: the Luxembourg data protection authority fined the company €746 million (record-breaking so far) for illegal ad targeting and ordered it to review its practices. As La Quadrature du Net (the French privacy rights group that filed the complaint) explains, Amazon was targeting data subjects for advertising purposes without their freely given consent and was therefore processing consumer data without a legal basis. Unfortunately, the Luxembourg authority did not publish the content of the decision, citing professional secrecy. According to a published statement, the authority views the publication of a decision as an additional sanction. To be sure, the decision may ultimately see the light of day, yet only after all legal remedies have been exhausted and only if publication is not likely to cause disproportionate harm to the parties involved.


Another interesting case - this time at the online/offline interface - has been handled by the Swedish supervisory authority, which reviewed the activities of a public transport operator in Stockholm. The case involved ticket inspectors equipped with body-worn video and audio recording cameras. The use of this technology was intended to counter potentially dangerous situations during ticket checks, document incidents, and ensure that the right person is penalised for travelling without a ticket. The problem was that the inspectors had to keep the cameras on throughout their shifts and could thus potentially record every traveller. In addition, recordings were overwritten in the cameras only after one minute. The authority considered this period disproportionately long and maintained that the storage time should be reduced to a maximum of 15 seconds. While this case involved recording undertaken by a trader’s employees, one can well imagine similar problems arising in the context of increasingly sophisticated smart devices carried by consumers themselves. The recently unveiled Facebook glasses (Ray-Ban Stories) provide a prominent case in point - and the competent authorities have already voiced concerns.
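For readers curious about the mechanics, a camera with continuous pre-event recording is essentially a rolling buffer: new frames overwrite the oldest ones once a fixed retention window is full. The minimal sketch below (hypothetical Python, one frame per second for simplicity) shows how shrinking that window from 60 to 15 seconds bounds how much bystander footage exists at any moment; it is an illustration of the concept, not the operator's actual firmware.

```python
from collections import deque

class RollingRecorder:
    """Keeps only the most recent `retention_seconds` of frames; older
    material is overwritten automatically, as in a pre-event buffer."""
    def __init__(self, retention_seconds: int, fps: int = 1):
        self.buffer = deque(maxlen=retention_seconds * fps)

    def capture(self, frame: str) -> None:
        # Appending to a full deque silently drops the oldest frame.
        self.buffer.append(frame)

    def dump_incident(self) -> list:
        """Called when an incident is flagged: only the retained window
        is persisted, never the whole shift."""
        return list(self.buffer)

# With a 15-second window, at most 15 seconds of bystander footage is
# held at any moment; the one-minute window criticised by the Swedish
# authority would hold four times as much.
recorder = RollingRecorder(retention_seconds=15)
for second in range(120):  # a two-minute patrol, one frame per second
    recorder.capture(f"frame@{second}s")
print(len(recorder.dump_incident()))  # -> 15
```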


Last but not least, as regards decisions of national authorities, TikTok's ongoing saga of alleged violations of children’s privacy continues to unfold. Previously it was the Italian data protection authority that ordered the platform to limit the processing of data of users whose age could not be established with certainty (we reported on it here). This time the Dutch authority investigated TikTok’s privacy policy and concluded that the company had failed to inform its users, including children, in a clear and transparent way about how their data are processed via the app. More specifically, the language of the information was at issue: the privacy policy was provided only in English, not in Dutch. In consequence, the authority imposed a comparatively modest fine of €750,000. This, however, may not be the end of TikTok’s legal problems, as the launch of two further inquiries by the Irish DPC demonstrates.


Finally, there are two additional pieces of news we consider noteworthy. The first, which has probably caught our readers' attention, is the Commission's adequacy decision on data transfers between the EU and the UK, issued on 28 June. The decision is a direct follow-up to Brexit and confirms that the UK guarantees a level of data protection essentially equivalent to that provided in the EU. However, we are curious to see how these data transfers will develop in the future, especially in the context of recent reports about planned changes to UK data protection law (we briefly touched upon this here). The second matter is a joint opinion of the European Data Protection Board and the European Data Protection Supervisor on the proposal for a regulation laying down harmonised rules on artificial intelligence. The opinion highlights several key points that need further elaboration (e.g. clarifications of the risk-based approach and of the EDPS’ role as a market surveillance authority) and calls for “a general ban on any use of AI for an automated recognition of human features in publicly accessible spaces”.


** Agnieszka Jabłonowska contributed to this post.

Tuesday, 16 February 2021

BEUC files complaints against TikTok

Today, BEUC and 15 national consumer associations have filed coordinated complaints against TikTok, a platform popular with older children and teenagers. The complaints raise issues relating to alleged violations of the Unfair Contract Terms Directive, the Unfair Commercial Practices Directive and the GDPR.

Part of TikTok's business model is rather distinctive - namely, the platform sells tokens that can be used to "give presents" to other users as a reward for the performances shown in their videos. BEUC complains that several of the terms governing these tokens are non-transparent and substantively unfair. The complaint seems to connect nicely to the recent European Parliament report by our very own Joasia Luzak and colleague Marco Loos calling for closer scrutiny of the terms of service of online providers.

The platform's young target audience is an important driver behind some of the concerns: hidden advertising and harmful content are particularly sensitive issues when minors are involved. Other complaints, such as TikTok's nonchalant approach to control over user-generated content (see earlier news of users finding themselves included in ads without any prior knowledge or consent on their part), seem to reflect more general concerns with privacy and user control/copyright.

The choice to build such a large coordinated complaint may be an attempt to circumvent previously identified problems with enforcing the GDPR against TikTok - namely, the lack of clarity as to which authority would be competent under the Regulation's "one-stop-shop" rule. This problem was only partially addressed when the company decided to establish its data processing activities in Ireland, which is also the European data hub for many other tech companies.