The UK’s data protection watchdog has ordered controversial facial recognition company Clearview AI to delete all data it holds on UK residents after the company was found to have committed multiple breaches of UK data protection law.
The Information Commissioner’s Office (ICO) fined the company – which uses scraping technology to harvest photographs of people from images and videos posted on news sites, social media and websites – more than £7.5m.
The company settled a legal claim with the American Civil Liberties Union (ACLU) in May 2022 in which it agreed to halt its sales of facial recognition technology to private companies and individuals across the US.
The US-based firm sells access to what it claims is the “largest known database” of 20 billion face images to law enforcement agencies, which can use its algorithms to identify individuals from photographs and videos.
The company scrapes images of people from all over the world, from websites and social media, and provides additional “metadata” which can include details of where and when the photograph was taken, the gender of the subject, their nationality and the languages they speak.
The information commissioner, John Edwards, said Clearview not only allowed people on its database to be identified from photographs, but effectively allowed their behaviour to be monitored in a way that was “unacceptable”.
“People expect that their personal information will be respected, regardless of where in the world their data is being used,” he said. “That is why global companies need international enforcement.”
Fine ‘incorrect as a matter of law’
The ICO said in a statement that although Clearview AI had stopped offering services in the UK, the company was still using the personal data of UK residents to provide services to other countries.
Given the high number of internet and social media users in the UK, Clearview’s database was likely to include a substantial amount of data from UK residents which had been gathered without their knowledge, it said.
Lee Wolosky, a lawyer for Clearview and partner at Jenner & Block, said the decision to impose any fine was “incorrect as a matter of law”.
“Clearview AI is not subject to the ICO’s jurisdiction, and Clearview AI does no business in the UK at this time,” he added.
Clearview claims that its technology has “helped law enforcement track down hundreds of at-large criminals, including paedophiles, terrorists and sex traffickers”.
The company also says its technology has been used to “identify victims of crimes including child sex abuse and financial fraud” and to exonerate the innocent.
Multiple breaches of data protection
The ICO warned in a preliminary decision in November that Clearview AI could face a fine of over £17m, the largest that could be imposed, following a joint investigation by the ICO and the Office of the Australian Information Commissioner (OAIC).
The ICO reduced the proposed fine after considering representations from Clearview.
It found this week that Clearview AI did not have a lawful basis to collect information about UK citizens and had made multiple other data protection breaches:
- Clearview failed to use UK data on UK citizens in a fair and transparent way.
- It collected and processed data on UK citizens without their knowledge.
- It failed to meet the higher data protection standards required for processing biometric data under the General Data Protection Regulation (GDPR).
- The company failed to put a process in place to prevent data being collected and stored indefinitely.
- Clearview AI also made it difficult for individuals to object to their data being collected, by asking them for additional information, such as personal photographs.
The ICO’s fine is the latest in a series of regulatory actions and lawsuits that have hit Clearview AI over the past two years.
Privacy International and other human rights organisations filed co-ordinated legal complaints in the UK, France, Austria, Italy and Greece in May 2021.
In December 2021, French data protection watchdog CNIL ordered Clearview AI to stop its collection of photographic biometric information about people on French territory and to delete the data it had already collected.
CNIL found that Clearview’s software made it possible for its customers to gather detailed personal information about individuals by directing them to the social media accounts and blog posts of the people identified.
The ability of Clearview’s customers to conduct repeated searches over time of an individual’s profile in effect meant it was possible to monitor targeted individuals’ behaviour over time, CNIL concluded.
Attempts by individuals to access their personal information from Clearview, as required by data protection law, have proved difficult.
In the case of one French complainant, Clearview responded only after four months and seven letters.
The company requested a copy of the complainant’s ID that she had already provided and asked her to send a photograph, in what CNIL said was a failure to allow the complainant to exercise her rights.
CNIL also found that the company limits the rights of individuals to access data collected in the past 12 months, despite retaining their personal information indefinitely.
More intrusive than Google
In February 2022, the Italian data protection regulator fined Clearview €20m and ordered it to delete all data collected on Italian territory, after receiving complaints from four people who objected to their photographs appearing on Clearview’s database.
Clearview argued – unsuccessfully – that it did not fall under the jurisdiction of Italy because it offered no products or services in the country and blocked any attempt to access its platform from Italian IP addresses.
The Italian regulator also rejected Clearview’s claims that its technology was analogous to Google, finding that Clearview’s service was more intrusive than the search engine.
For example, Clearview’s technology kept copies of biometric data after the images had been removed from the web and associated them with metadata embedded in the photograph.
The facial matches provided by Clearview could also link to sensitive information, including racial origin, ethnicity, political opinions, religious beliefs or trade union membership.
The regulator found that, unlike Google, Clearview updated its database and retained images that no longer appeared on the web, providing a record of changing information about people over time.
“The public availability of data on the internet does not imply, by the mere fact of their public status, the legitimacy of their collection by third parties,” it said.
Clearview settlement in US
Most recently, Clearview reached a legal settlement in the US with the ACLU on 9 May 2022, following a claim that the firm had repeatedly violated the Biometric Information Privacy Act in Illinois.
The ACLU brought a complaint on behalf of vulnerable groups, including survivors of domestic violence and sexual assault, undocumented immigrants and sex workers who could be harmed by facial recognition surveillance.
Under the settlement, Clearview agreed not to sell its facial recognition services to companies and private individuals across the US, limiting it to offering services to law enforcement and government agencies.
It is also banned from offering its facial recognition services in the state of Illinois to law enforcement and private companies for five years.
Other measures included adding an opt-out request form on its website to allow residents of Illinois to remove their details from search results, which Clearview must advertise at its own expense.
Clearview offered ‘trial accounts’ to police in Europe
Clearview was founded in 2017 in the US to offer facial recognition services and filed a patent for its machine learning technology in February 2021.
The company began offering services to US police and law enforcement agencies in 2019 and subsequently began expanding in Europe.
Clearview AI first came to the public’s attention in January 2020 when the New York Times revealed that the company had been offering facial recognition services to more than 600 law enforcement agencies and at least a handful of companies for “security purposes”.
The company’s users, of which it claimed to have 2,900, included college security departments, attorneys general and private companies, including events organisations, casino operators, fitness firms and cryptocurrency companies, BuzzFeed subsequently reported.
Clearview claimed that it had a small number of “test accounts” in Europe but deactivated them in March 2020 following complaints from European regulators.
The firm also removed several references to European data protection law from its website, which regulators said had clearly shown its previous intention to offer facial recognition services in Europe.
ICO should have issued maximum fine
Lucie Audibert, lawyer at Privacy International, said the ICO should have stuck with its original intention to fine Clearview AI the maximum possible amount of £17m.
“While we don’t know the exact number, a considerable amount of UK residents, likely the vast majority, have potentially had their photos scraped and processed by Clearview – the ICO’s original intent to impose the maximum fine was therefore the only commensurate response to the extent of the harm,” she told Computer Weekly.
“Clearview’s data scraping is, by definition, indiscriminate, and no technical adjustment can possibly allow it to filter out faces of people in certain countries. Even if they tried to filter out based on the IP address location of the website or the uploader of the picture, they would still end up with faces of people who reside in the UK.”
Audibert added: “It’s not a tenable business model – after the huge blow they got through the settlement with ACLU in the US, they are treading on extremely thin ice.”
Clearview has 28 days to appeal the ICO’s decision and six months to implement the order.
The ICO said that it could issue further fines if Clearview AI fails to comply.
“Our investigation team will maintain contact with Clearview to make sure appropriate measures are taken,” a spokesperson told Computer Weekly. “If they fail to comply with the enforcement notice, we can issue further monetary penalty notices in respect of the non-compliance with the enforcement notice.”