Tech giant Google's artificial intelligence (AI) has reportedly flagged parents' accounts for potential abuse over nude photos of their sick kids.
According to a New York Times report cited by The Verge, the tech giant labeled the photos as child sexual abuse material (CSAM) after the father used his Android smartphone to capture pictures of an infection on his toddler's groin.
The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), spurring a police investigation.
The case highlights the difficulty of distinguishing potential abuse from an innocent photo once it becomes part of a user's digital library, whether on a personal device or in cloud storage.
The incident occurred in February 2021, when some doctor's offices were still closed due to the Covid-19 pandemic.
As per the report, Mark (whose last name was not revealed) noticed swelling in his child's genital region and, at the request of a nurse, sent images of the issue ahead of a video consultation. The doctor wound up prescribing antibiotics that cured the infection.
Mark received a notification from Google two days after taking the photos, stating that his accounts had been locked due to “harmful content” that was “a severe violation of Google's policies and might be illegal.”
Like many internet companies, including Facebook, Twitter, and Reddit, Google has used hash matching with Microsoft's PhotoDNA to scan uploaded images for matches with known CSAM.
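The core idea of hash matching can be sketched in a few lines: compute a fingerprint of each uploaded image and check it against a database of fingerprints of known illegal material. The sketch below is a heavy simplification under stated assumptions: PhotoDNA uses a proprietary perceptual hash that survives resizing and re-encoding, whereas this illustration uses an ordinary cryptographic hash; the `KNOWN_HASHES` set and the function names are hypothetical, invented for illustration only.

```python
import hashlib

# Hypothetical database of fingerprints of known images (in real
# systems, supplied by clearinghouses such as NCMEC). Real deployments
# use perceptual hashes like PhotoDNA that tolerate re-encoding; a
# SHA-256 digest is used here only to keep the sketch self-contained.
# The entry below is simply the SHA-256 digest of the bytes b"foo".
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest identifying the image content."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_match(image_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known entry."""
    return image_fingerprint(image_bytes) in KNOWN_HASHES

print(is_known_match(b"foo"))  # True: its digest is in the set
print(is_known_match(b"bar"))  # False: unknown content
```

Note that hash matching by design only detects previously catalogued images; flagging a newly taken photo, as in the case described here, requires a different mechanism, such as a machine-learning classifier.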
In 2012, this scanning led to the arrest of a man, a registered sex offender, who had used Gmail to send images of a young girl.
John Carter has been a content writer and 'ghostwriter' for many popular online publications over the years. John is now our chief editor at NewsGrab.