Fears of AI-driven fraud are heightened by a recent "deepfake" scam in China.
Reuters || Shining BD
Concerns have been raised about the potential for artificial intelligence (AI) techniques to support financial crimes following a fraud in northern China that used sophisticated "deepfake" technology to persuade a man to transfer money to a fictitious friend.
Amid a rise in AI-driven fraud, which primarily involves the manipulation of voice and facial data, China has stepped up scrutiny of such technology and apps, adopting new rules in January to legally protect victims.
The perpetrator posed as a friend of the victim during a video call to obtain a transfer of 4.3 million yuan ($622,000), according to police in Baotou, a city in the Inner Mongolia region.
According to a statement released by the police on Saturday, the man transferred the funds under the impression that his friend was required to submit a deposit as part of an auction.
The police said they had recovered the majority of the stolen funds and were working to track down the remainder, adding that the man only realized he had been conned after the friend expressed ignorance of the circumstances.
The incident sparked debate on Weibo, a Chinese microblogging platform, about the dangers to online security and privacy. On Monday, the hashtag "#AI scams are exploding across the country" received more than 120 million views.
"This proves that scammers can use voices, videos, and even photos," one user commented. "Can information security regulations keep up with these individuals' methods?"
($1 = 6.9121 Chinese yuan renminbi)