Data Blast: UK/US Data Bridge, Guidance on Data Minimisation, EU Bans Biometric Surveillance, How Data Protection Law applies to COVID-19 status, and more!
UK and US Announce new ‘Data Bridge’
The United Kingdom and United States have announced that they have reached a commitment in principle to establish a UK Extension to the Data Privacy Framework, creating a ‘data bridge’ between the two countries.
The UK Government states that the data bridge will ‘remove that burden [of using costly contractual clauses to allow data transfers], speeding up processes for businesses, reducing costs, and increasing opportunity by making it easier for British businesses to operate and trade internationally.’
The Secretary of State for Science, Innovation and Technology, Chloe Smith, said that ‘Data bridges not only offer simpler avenues for the safe transfer of personal data between countries, but also remove red tape for businesses of all sizes and allow them to access new markets.’
The hope is that this data bridge will help drive trans-Atlantic research and innovation by providing greater certainty for those looking to collaborate with research partners on the other side of the Atlantic.
Subsequently, the US Government has announced that the US has fulfilled its commitments for implementing the EU-U.S. Data Privacy Framework (the ‘Framework’). However, the European Commission is yet to adopt an adequacy decision based on the Framework.
Finland Provides Further Guidance on Data Minimisation
A Finnish rental company regularly collected the personal data of the children of families living in its rental accommodation. The Finnish regulator instructed the company to bring the processing of the children’s data into compliance with the GDPR and to delete the children’s identification numbers if there was no legal basis for retaining them. The regulator found that it was not necessary for the company to routinely collect the children’s personal data.
At first instance, the Finnish Administrative Court annulled the regulator’s decision, on the basis that the company had a legitimate interest in processing the children’s data for the purpose of its rental activities. The regulator appealed to the Supreme Administrative Court (‘SAC’), arguing that although the company may have had a legal basis for processing, it was still in breach of the GDPR because the processing of the personal data was unnecessary for the specific purpose – i.e. the company had failed to apply the principle of data minimisation.
The SAC agreed that having a legal basis for processing does not preclude that processing from infringing the GDPR on other grounds: the purposes for which the data will be used must also be taken into account. The SAC also noted Recital 39 of the GDPR, which states, inter alia:
- The personal data should be adequate, relevant and limited to what is necessary for the purposes for which they are processed; and
- Personal data should be processed only if the purpose of the processing could not reasonably be fulfilled by other means.
The SAC found that it could be reasonable to process the personal information of children living in the company’s rental accommodation in specific circumstances, but that it is not reasonable to do so as a matter of routine. It also found that the company could not justify the processing as a necessary requirement of the software it uses: the onus was on the rental company to use software that does not cause unnecessary processing. Accordingly, the SAC annulled the Administrative Court’s ruling.
Updates from Germany
The Importance of Oversight on Automatic Processing
The German regulator has fined a bank for failing to explain to a data subject why an automated process had refused them a credit card. The data subject applied for the card and filled in a form with their personal data. The bank used an algorithm to analyse the form, and this rejected the application without human input. The data subject had good credit and a high income, and so asked why they had been rejected. The bank refused to provide the information.
The German regulator upheld the data subject’s complaint. The regulator defined ‘automated decision-making’ as a decision taken by algorithmic models without human intervention. When using such algorithms, the bank was subject to specific transparency requirements under Art. 22(3) of the GDPR. If data subjects request information from those responsible for the automated decision-making, the responsible party must provide meaningful information about the logic behind the automated decision.
The regulator said that the bank ought to have had processes to provide access to concrete information about the database used, and the factors and criteria influencing the decision. Given its failure to do so, the regulator fined the bank €300,000.
No Legitimate Interest in Calling Numbers from Phone Directories with the Aim of Making a Profit
A German company bought metal residue from dental practices and laboratories. To find potential sellers of such residue, the company called dental practices whose details it found in publicly accessible directories such as the Yellow Pages (which contained the surname and first name of the practice owner, and the address and phone number of the practice, all of which was then saved on the company’s systems).
One dentist complained to the Saarland Administrative Court, which ordered the company to stop contacting dentists and delete the information stored, or be fined €2,500. The company appealed to the Higher Saarland Administrative Court, on the basis that it had a legitimate interest in the processing of the data (i.e. making a profit). The company argued that the phone calls were less intrusive than postal mail.
The Higher Saarland Administrative Court did not allow the appeal, observing that the company could not rely on the legitimate interest basis and should have obtained consent before undertaking telephone advertising.
Medical Information that can help others is still Medical Information
The Italian regulator has fined a condominium manager for informing the other residents of the building he managed that a family in the building had COVID-19. In August 2021, he sent an email to all the other occupants stating that the family had COVID-19 and that, as such, extra sanitary measures were being taken.
The family complained to the regulator. The manager claimed to have heard about the family’s COVID-19 status from the condominium cleaner, who had heard it from one of the residents. He relied on the exception to the prohibition on processing health data where that data has been manifestly made public by the data subject: because the data subjects had informed a worker within the condominium, he argued, they had consented to the further processing of that data. He also relied on the legitimate interest of preventing contagion between neighbours.
The regulator found that the fact that the family had told specific people about their COVID-19 status was not the same as making that information public or consenting to further processing. The Italian regulator also stressed the importance of data minimisation, stating that the legitimate interest could equally well have been met without revealing the names of the family in question. The manager was therefore fined €2,000 for breach of Art. 9 GDPR.
Meta Appeals European Commission’s Competition Information Request
The European Commission has made information requests to Meta Ireland to see how Meta has implemented the rules on competition under Articles 101 and 102 of the Treaty on the Functioning of the European Union (‘TFEU’). The Commission requested Meta to provide, among other things, a number of internal documents, with a separate procedure for sensitive documents, which would be examined in a virtual data room.
Meta appealed to the General Court to annul the decision of the European Commission. Meta relied on four legal arguments, including Meta’s right to privacy. It argued that requiring the provision of irrelevant documents of a private nature infringes, inter alia, its right to privacy and that of its staff and third parties.
The General Court found that Meta could not argue that the limitation on the right to privacy was not provided for by law: Articles 101 and 102 of the TFEU give the Commission the power to request the information it had requested. Further, the Court considered that the Commission’s decision met an objective of general interest recognised by the EU: maintaining competition within the EU, and businesses are required to comply with the Commission’s requests. The Court rejected Meta’s appeal in its entirety.
Big Brother Cannot Watch You
The European Parliament has voted to ban biometric surveillance, emotion recognition and predictive policing throughout the EU. This will be the first piece of comprehensive legislation on artificial intelligence.
The stated intention is to curb the risks of AI and encourage its ethical use. However, the final wording of the AI Act has been criticised by the Computer & Communications Industry Association as deviating from the European Commission’s previous risk-based approach: in its view, all AI uses, even those with very limited risk, will be treated as though they pose high risks, with the accompanying strict requirements.
The vote also imposes strict limits on ‘categorisation based on sensitive characteristics, predictive policing and emotion recognition systems’, specifically:
- Both ‘real-time’ and after-the-fact remote biometric identification systems in publicly accessible spaces;
- Biometric categorisation systems using sensitive characteristics (such as gender, race, ethnicity, citizenship status, religion and political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (which is said to violate human rights including the right to privacy).
European parliamentarians have proposed specific rules for generative AI (ChatGPT, Replika, Midjourney etc.), including requirements to label AI-generated content and to make information about the databases on which such AI systems have been trained publicly available.
The hope is that the AI Act can be fully finalised before the next European Parliamentary election in 2024.