Expert: Poisoning large AI models is a new form of unfair competition.
The "3·15" Gala reported on the chaos of AI large models being "poisoned". Li Fumin, an expert at the Institute of Social Governance Intelligence, Shandong University of Finance and Economics, said that using commercial techniques such as GEO (generative engine optimization) to conduct targeted training of large models, steering the AI toward recommending specific products or services, is in essence a new form of unfair competition and consumer deception: covert marketing and the fabrication of facts by technical means. Consumers unknowingly receive implanted marketing content, and the harm and illegality of this practice deserve serious attention.

On the one hand, such conduct infringes consumers' right to know and right to fair trade as protected by consumer protection regulations; on the other hand, it amounts to false or misleading commercial publicity carried out by technical means, disrupting the normal order of recommendation algorithms and the market competition environment, and thus constitutes unfair competition.

Governing this kind of AI "poisoning" requires action on several fronts. Regulators should place AI-induced marketing under key monitoring and strengthen enforcement supervision. AI operators should tighten review of training corpus sources, filter outputs, and establish traceability mechanisms. Consumers should sharpen their awareness of the commercial nature of AI-generated information and actively defend their rights through complaints and reports.

