Adviser urges focus on AI security
While striving to catch up in the development of artificial intelligence, China also needs to attach greater importance to the security issues raised by the technology's widespread use, a cybersecurity expert and national political adviser said.
Zhou Hongyi, founder of Chinese cybersecurity company 360 Security Group and a member of the National Committee of the Chinese People's Political Consultative Conference, said OpenAI's recent release of Sora, a text-to-video AI generator that can create realistic videos based on users' text prompts, illustrated the gap between the United States and China in terms of AI development.
"The main gap between China and the US in AI lies in the original direction for AI technology," Zhou said on the sidelines of the annual session of the National Committee of the CPPCC, which opened on Monday. "Although it's difficult for China to create a universal large model that surpasses OpenAI's GPT-4 for now, the gap can be bridged within one to two years."
Once the direction is determined, Chinese companies can learn very quickly and soon catch up, especially when others release open-source projects, he said.
The development of AI is not only a competition between companies, but also between nations, Zhou said.
"The US Department of Defense has enhanced cooperation with OpenAI, which later removed a clause banning the military use of AI," he said. "So China needs to plan ahead to be in the lead in AI development because it matters to the country's fate."
Zhou said this year is the "year of application" for Chinese AI, as large models have great potential across many industries and will trigger an industrial revolution.
Besides closely following the development of AI, Zhou has also paid attention to issues arising from the application of large models.
At this year's two sessions, he will propose that the country regulate AI security issues, because the technology will eventually influence all sectors of society.
The speed of AI's development has been unimaginable, and security issues related to its technology, content and ethics pose significant dangers, Zhou said.
"As AI technology advances, it can be used to make more targeted and realistic 'deepfake' content for fraud and even be used as a weapon to sabotage a country's political and national security systems," he said.
Zhou's proposal says that China needs to draft security standards for AI large models and conduct security evaluations to prevent them from being misused or exploited by others.
cuijia@chinadaily.com.cn