
American photographed naked son for medical examination; Google saw a crime


Mark noticed something wrong with his son. The two-year-old’s penis looked swollen and was hurting him. Mark, who lives in San Francisco and cares for the boy, took out his Android smartphone and photographed the problem so he could track its progression.

This was on a Friday night in February 2021. Mark’s wife called the doctor’s office to schedule an emergency video appointment; they were in the middle of the pandemic. A nurse asked them to send photos so the doctor could review them ahead of the consultation.

Mark’s wife took her husband’s cell phone and sent close-ups of their son’s groin area to her iPhone so she could upload them to the medical practice’s messaging system.

That episode cost Mark more than a decade of contacts, emails and photos, and made him the target of an investigation. Mark, who asked to be identified only by his first name, had been caught in an algorithmic net designed to snare people who exchange child sexual abuse material.

Because technology companies capture such large volumes of data, they have come under pressure to examine the material passing through their servers to detect and prevent criminal behavior. Advocates for children and teens say such cooperation is essential to combat the online spread of sexual abuse images.

But that work can involve looking into private files, and in at least two episodes brought to light by The New York Times, it has led to innocent behavior being cast in a sinister light.

Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, says the cases are a warning of how this kind of scanning can go wrong.

Mark, who is in his 40s, created a Gmail account in the mid-2000s and has relied heavily on Google ever since. Two days after he took the photos, Mark’s phone sounded with a notification: his account had been disabled due to “harmful content” that constituted “a serious violation of Google’s policies and was possibly illegal.”

A link took him to a list of possible reasons, including “child sexual abuse and exploitation.” Mark was confused at first, but then he remembered his son’s infection. “My God,” he thought, “Google must think that was child pornography.”

He filled out a form requesting a review of Google’s decision and explaining the boy’s infection. At the same time, he discovered the ripple effects of Google’s rejection. He lost not only emails, contact information for friends and former colleagues, and documentation of his son’s first years, but also his Google Fi account, which forced him to get a new phone number with another carrier.

Without access to his phone number and email, he couldn’t retrieve the passwords he needed to sign in to his other internet accounts. He was shut out of much of his digital life.

In a statement, Google said: “Child sexual abuse material is abhorrent. We are committed to preventing its spread on our platforms.”

A few days after Mark filed his appeal, Google responded that it would not reinstate his account, giving no further explanation.

Meanwhile, the same scenario was playing out in Texas. A boy there, also two years old, had an infection in his “private parts,” as his father wrote in an online post I came across while reporting Mark’s story. At the pediatrician’s request, Cassio, who also asked to be identified only by his first name, used an Android phone to take photos, which were automatically backed up to Google Photos. He then sent the images to his wife via Google’s chat service.

Cassio was in the middle of buying a house when his Gmail account was deactivated. “It was a headache,” he says.

The first tool the industry used to seriously disrupt the vast online exchange of child sexual abuse imagery was PhotoDNA, a database of known images of abuse converted into unique digital codes, or hashes. It could be used to quickly scan large numbers of photos for matches, even if the images had been slightly altered. After Microsoft launched the system in 2009, Facebook and other companies started using it.

In 2018, Google made a major breakthrough when it developed an artificial intelligence tool capable of recognizing never-before-seen images of child exploitation. That meant finding not only known images of abused children, but also unknown victims, who could potentially be rescued by the authorities. Google made the technology available to other companies, including Facebook.

When the photos Mark and Cassio took were automatically uploaded from their phones to Google’s servers, the technology flagged them. A spokesperson said Google scans content only when a user takes “affirmative action,” which includes a phone backing up photos to the company’s cloud.

A human content moderator would then have examined the photos flagged by the AI to confirm that they met the federal definition of child sexual abuse material. When Google makes such a finding, it locks the user’s account, searches for other exploitative content and, as required by federal law, reports the case to the CyberTipline, run by the National Center for Missing and Exploited Children.

In 2021, the CyberTipline reported that it had alerted authorities to “over 4,260 potential new child victims.” Mark’s and Cassio’s sons were counted among them.

In December, Mark received an envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated, along with copies of the search warrants served on Google and his internet service provider. The warrants sought everything in Mark’s Google account: his internet searches, his location history, his messages, and any documents, photos and videos he had stored with the company.

The search, related to “videos of child exploitation,” had been carried out in February, a week after he took the photos of his son.

Mark called the investigator, Nicholas Hillard, who told him the case had been closed. Hillard had tried to get in touch, but Mark’s phone number and email address no longer worked. “I determined that the incident did not constitute a crime and that no crime had occurred,” he wrote in his report. Mark appealed to Google again, this time providing the police report, but to no avail.

Cassio was also investigated. A Houston police detective called and asked him to come to the station. Once Cassio showed the detective his messages with the pediatrician, he was quickly cleared. But he, too, was unable to recover his Google account, which he had had for a decade and for which he was a paying user.

Not all photos of naked children are pornographic, abusive or indicative of exploitation. Carissa Byrne Hessick, a law professor at the University of North Carolina, says it can be tricky to define what constitutes sexually abusive imagery, but she agrees with the police that medical images don’t qualify. “There was no child abuse,” she says. “The photos were taken for non-sexual reasons.”

I had access to the photos Mark took. The decision to flag them was understandable: they are explicit photos of a child’s genitalia. But context matters: they were taken by a father worried about his sick son.

“We recognize that in an era of telehealth and Covid, it has been necessary for parents to take pictures of their children to receive a diagnosis,” says Claire Lilley, director of child safety operations at Google. She says the company consulted pediatricians so that its human reviewers would understand medical conditions that might appear in photos taken for medical purposes.

Cassio heard from a customer support representative that sending the photos to his wife using Google Hangouts violated the system’s terms of service.

As for Mark, Lilley says reviewers did not detect any rash or redness in the photos he took of his son, and that a subsequent review of his account turned up a video from six months earlier that Google also found troubling: a young child lying in bed with an unclothed woman.

Mark didn’t remember the video and no longer had access to it, but he says it sounds like a private moment he would have wanted to capture, never imagining it would be viewed or judged by anyone else. “I can imagine it. We woke up, it was a beautiful day and I wanted to record the moment,” he says. “If only we had slept in pajamas, all of this could have been avoided.”

A Google spokesperson says the company stands by its decisions, even though the police cleared both men.

