Account deleted and owner investigated after Google misidentified "sensitive" photos
A man named Mark lost access to Gmail, Google Drive, and other Google services, and was placed under police investigation, after sending photos of his child to a doctor.
One night in February 2021, Mark, a 40-year-old living in the United States, noticed unusual swelling in his young son's groin. He and his wife wanted to take the boy to a doctor, but the clinic was closed for the weekend and Covid-19 made an in-person visit difficult, so they decided to send photos to the doctor first for an assessment. Based on the photos, the doctor prescribed a medication to reduce the swelling. What he and his wife did not know was that the photos were also being collected and analyzed by Google's AI.
Mark says he relies on many Google services: Gmail, Google Calendar for syncing his schedule, an Android phone, Google Photos for backing up pictures, and even a cellular plan from Google Fi, Google's virtual mobile network.
Two days after the photos were sent to the doctor, a notification appeared on Mark's smartphone: "Your account has been disabled because of harmful content that violates Google's policies and may be illegal, including child exploitation."
At first Mark was confused; then he remembered the photos of his son. Oh, he thought, Google probably thinks this is child pornography.
Having previously worked as a software engineer on content-hosting tools, he believed such systems typically have humans review what gets flagged to determine whether it is actually illegal. He filed an appeal with Google, explaining his child's illness and why he had taken the photos.
Meanwhile, many of the Google services Mark used were deactivated: his emails and the contact details of friends and colleagues were gone, along with photos of his son's early years stored on Google Drive. He also stopped receiving the security codes needed to sign in to other accounts, among other problems. "The more eggs you put in a basket, the more likely that basket is to break," Mark says.
A few days later, Google replied that it would not restore the account, without offering any further explanation.
The problem did not end there. In December 2021, Mark received notice that the San Francisco Police Department had obtained a search warrant for his Google account. The investigator had requested everything in it: internet search history, location history, messages, and the documents, photos, and videos stored with Google's services. The investigation, related to "child exploitation," had lasted several months.
In February, investigator Nicholas Hillard concluded that Mark had committed no crime. "I have determined that the incident was not a crime and that no violation was committed," Hillard wrote in his statement. Mark appealed to Google again with the police report, but to no avail. Google announced that his account would be permanently deleted.
Mark is not alone. Cassio, who lives in Texas, photographed a mysterious genital infection on his one-year-old son for a doctor's visit, and his Google account was also suspended. "I was buying a house and had signed a lot of digital documents. They were saved in Gmail, but then I was locked out. That gave me a headache," Cassio said.
How Google identifies images
Tech giants report millions of images of child exploitation or sexual abuse every year. In 2021, Google alone filed more than 600,000 reports of suspected child abuse material and disabled more than 270,000 user accounts.
The first tool the tech industry used to detect child pornography was PhotoDNA, released by Microsoft in 2009. It converts known abuse images into digital fingerprints stored in a shared database; companies such as Facebook compare uploads against that database, and the matching still works even when an image has been resized or slightly altered.
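The general idea behind fingerprint matching of this kind can be sketched with a toy perceptual hash. PhotoDNA's actual algorithm is proprietary; the code below is only an illustration of the principle, using a simple "average hash" on tiny grayscale grids, where lightly edited copies still match while different images do not.

```python
# Toy sketch of perceptual-hash matching (NOT PhotoDNA's real algorithm).
# An image is reduced to a short bit string; two images "match" when the
# Hamming distance between their bit strings is below a small threshold,
# so minor edits (recompression, slight brightness changes) still match.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def is_match(h1, h2, threshold=3):
    return hamming(h1, h2) <= threshold

# Toy 4x4 "images": the second is a lightly edited copy of the first.
original  = [[10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10]]
edited    = [[12, 198, 10, 201],
             [199, 11, 200, 9],
             [10, 200, 12, 198],
             [200, 10, 201, 11]]
different = [[255] * 4, [255] * 4, [0] * 4, [0] * 4]

h_db = average_hash(original)  # fingerprint stored in the database
print(is_match(h_db, average_hash(edited)))     # True: edit survives hashing
print(is_match(h_db, average_hash(different)))  # False: unrelated image
```

Real systems hash at far higher resolution and with much more robust transforms, but the match-by-distance structure is the same.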
A major breakthrough came in 2018, when Google released an AI-based tool that can scan enormous volumes of images in a short time. Beyond finding images of known child abuse victims, it can flag material that has never been seen before, helping authorities identify and rescue unknown victims. Facebook later adopted the tool.
Uploads like Mark's and Cassio's are automatically analyzed by Google's systems. Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, calls the process intrusive scanning. "This is the nightmare we all fear. They scan your family photo album and you're in trouble," Callas said.
After scanning, the AI flags suspicious content, and a Google content moderator reviews the photos or videos to determine whether they break the law. If so, the user's Google account is suspended and the objectionable content may be reported to the CyberTipline of the National Center for Missing and Exploited Children (NCMEC).
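The flow described above can be sketched as a small decision function. The threshold, names, and return strings are illustrative assumptions, not Google's actual implementation; the point is that an automated score only gates human review, and the final action rests on the moderator's call.

```python
# Hedged sketch of the scan -> flag -> human review -> report pipeline.
# Threshold and labels are invented for illustration only.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # assumed classifier cutoff for escalating to a human

@dataclass
class Upload:
    user: str
    ai_score: float  # classifier's estimate that the content is abusive

def moderate(upload, human_review):
    """Return the action taken for one upload."""
    if upload.ai_score < REVIEW_THRESHOLD:
        return "allow"  # never reaches a moderator
    if human_review(upload):  # moderator confirms a violation
        return "suspend account and report to CyberTipline"
    return "allow"  # flagged by the AI but cleared on review

# Usage: a false positive like Mark's photos clears the AI threshold,
# so the outcome depends entirely on the human reviewer's judgment.
flagged = Upload(user="mark", ai_score=0.95)
print(moderate(flagged, human_review=lambda u: True))
# -> suspend account and report to CyberTipline
```

Seen this way, the cases in this article are failures at the human-review step, not just the AI one.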
According to Fallon McNulty, who runs the CyberTipline, the organization received 29.3 million reports in 2021, about 80,000 per day. Most of the images have circulated online before, so the CyberTipline's roughly 40-person team focuses on newly surfaced photos, analyzing them and referring them to law enforcement. Last year the organization alerted authorities to "more than 4,260 potential new child victims," a list that included the sons of Mark and Cassio, even though the photos were not malicious.
Kate Klonick, a law professor at St. John's University, says recognizing intent when a person shares photos on online platforms is a technological challenge. "Errors are inevitable, and companies need to put in place a more rigorous process to restore the accounts of people with no malicious intent," Klonick said.
Carissa Byrne Hessick, a law professor at the University of North Carolina and an expert on child pornography crimes, says not all photos of nude children are pornographic, exploitative, or harmful. According to Hessick, Google's AI should weigh more context when screening children's photos, and human moderators should be more careful before flagging them.
Google did not comment on the specific cases of Mark and Cassio. "Sexual exploitation of children is despicable. We are committed to ensuring that it does not spread across our platforms," a Google spokesperson said.
Bao Lam (according to The New York Times)