Tuesday, September 3, 2024

GPT-4o’s Chinese token-training data is polluted by spam and porn websites


The new tokenizer has 200,000 tokens in total, and about 25% are in non-English languages, says Deedy Das, an AI investor at Menlo Ventures. He used language filters to count the number of tokens in different languages, and the top languages, besides English, are Russian, Arabic, and Vietnamese.

“So the tokenizer’s main impact, in my opinion, is you get the cost down in these languages, not that the quality in these languages goes dramatically up,” Das says. When an LLM has better and longer tokens in non-English languages, it can analyze the prompts faster and charge users less for the same answer. With the new tokenizer, “you’re looking at almost four times cost reduction,” he says.

Das, who also speaks Hindi and Bengali, took a look at the longest tokens in those languages. The tokens reflect discussions happening in those languages, so they include words like “Narendra” or “Pakistan,” but common English terms like “Prime Minister,” “university,” and “international” also come up frequently. They also don’t exhibit the issues surrounding the Chinese tokens.

That likely reflects the training data in those languages, Das says: “My working theory is the websites in Hindi and Bengali are very rudimentary. It’s like [mostly] news articles. So I would expect this to be the case. There are not many spam bots and porn websites trying to happen in these languages. It’s mostly going to be in English.”

Polluted data and a lack of cleaning

However, things are drastically different in Chinese. According to multiple researchers who have looked into the new library of tokens used for GPT-4o, the longest tokens in Chinese are almost exclusively spam words used in pornography, gambling, and scamming contexts. Even shorter tokens, like three-character-long Chinese words, reflect those topics to a significant degree.

“The problem is clear: the corpus used to train [the tokenizer] is not clean. The English tokens seem fine, but the Chinese ones are not,” says Cai from Princeton University. It is not rare for a language model to crawl spam when collecting training data, but usually there will be significant effort taken to clean up the data before it’s used. “It’s possible that they didn’t do proper data clearing when it comes to Chinese,” he says.

The content of these Chinese tokens could suggest that they have been polluted by a specific phenomenon: websites hijacking unrelated content in Chinese or other languages to boost spam messages.

These messages are often advertisements for pornography videos and gambling websites. They could be real businesses or merely scams. And the language is inserted into content farm websites, or sometimes legitimate websites, so they can be indexed by search engines, circumvent spam filters, and come up in random searches. For example, Google indexed one search result page on a US National Institutes of Health website that lists a porn site in Chinese. The same site name also appeared in at least five Chinese tokens in GPT-4o.
