Apple and Google have removed multiple malicious AI applications. The more serious problem is...

Financial Associated Press, April 27 (Editor Shi Zhengcheng): As AI technology enters the stage of large-scale application, the problem of malicious abuse of the technology has resurfaced. "One-click photo retouching turns into one-click undressing," which sparked heated discussion on Weibo not long ago, is now playing out in an even more malicious form in the United States.

Prompted by media reports, Apple and Google have removed a number of apps from their app stores that boast the ability to "undress with one click," US technology outlet 404 Media reported on Friday. The deeper problem is that these Internet giants, including Meta, which collected advertising fees to promote such apps, seem unable to find policy-violating apps on their own.

Malicious AI applications operate out in the open

According to the report, by combing through Meta's advertising database, 404 Media found a series of ads promoting "one-click undress" software; placement records show that ads for these malicious AI apps ran across Meta's social media platforms in March and April.

These apps advertise that users can upload photos of real people and "erase any clothing with one click," with the developers profiting by charging for the feature.

What is even more shocking is that this software does not lurk in dark corners of the Internet but sits openly in the Apple App Store. When users click an ad on social media, they are taken directly to the App Store listing, where the app is disguised as an AI art generator and does not disclose its actual purpose (which would, of course, violate App Store policy).

Until the story broke this week, the ads were still running on Meta's social platforms, and the offending apps could still be found in Apple's and Google's app stores.

After being alerted, Meta contacted the outlet that broke the story, asked it to "send all the links to the ads they saw," and quickly took the ads down. Company spokesman Daniel Roberts later said that Meta does not allow ads containing adult content and deletes them as soon as they are found; the company will also step up scrutiny and take action against accounts that violate the policy.

According to the latest news on Friday, Apple also removed three apps flagged by the media after receiving the report, and Google recently removed an AI face-swapping app.

In fact, apps with "deepfake" functions have been in the app stores since at least 2022. In the past, when Apple and Google received reports about them, they simply asked developers to stop advertising illicit uses rather than removing the apps from the shelves.

American students have been arrested for this.

While the Internet platforms have been rather passive in handling such apps, the applications have already produced harmful social consequences.

In December, two Florida high school students were arrested and charged with felonies for using AI to generate nude photos of their classmates, Wired magazine reported.

According to the police report, the two boys were 13 and 14 years old, and the victims in the photos were students aged 12 to 13. Under Florida law, their conduct constitutes a third-degree felony, the same class as grand theft auto and false imprisonment.

Although US media have described the case as "the first arrest and prosecution" of its kind, similar incidents have been reported at high schools in California, Washington and New Jersey; those reports did not mention whether criminal charges were brought.

At present, only ten US states have criminalized such acts, and there is no federal law regulating them. A turning point came at the end of January this year, when AI-generated explicit photos of Taylor Swift spread widely on Musk's X platform; at the height of the incident, X had to block searches for the keyword "Taylor Swift."

After this incident, senators from both parties jointly introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act of 2024.

Republican Senator Josh Hawley, a co-sponsor of the bill, told the media at the time that no one, celebrity or ordinary person, should find themselves depicted in AI-generated pornography, and that innocent victims have the right to defend their reputations and hold the perpetrators accountable in court.