An anonymous reader quotes Engadget's post: As Motherboard and The Verge point out, YouTuber Yannick Kilcher trained an AI language model using three years of content from the Politically Incorrect (/pol/) board on 4chan, a place notorious for racism and other forms of bigotry. Kilcher then deployed the model across ten bots and unleashed them on the board, where they unsurprisingly sparked a wave of hate. In 24 hours, the bots wrote 15,000 messages that often included or interacted with racist content. They made up more than 10 percent of the posts on /pol/ that day, Kilcher claimed.

Nicknamed GPT-4chan (after OpenAI's GPT-3), the model learned not only to pick out the words used in /pol/ posts, but also the overall tone, which Kilcher says combines "offensiveness, nihilism, trolling and deep distrust." The creator of the video went to great lengths to get around 4chan's defenses against proxies and VPNs, and used a VPN to make it look like the bot posts were coming from the Seychelles. The AI made a few mistakes, such as posting empty messages, but was convincing enough that it took many users about two days to realize something was wrong. According to Kilcher, many forum members only noticed one of the bots, and the pattern raised enough alarm that people were still accusing each other of being bots days after Kilcher deactivated them. "This is a reminder that trained AI is only as good as its source material," the report concludes.

