Industry experts and academicians pledged on Tuesday to uphold security in the field of artificial intelligence, including privacy protection, algorithm fairness and peaceful use of technology.
The pledge was made through the Shanghai Initiative for Safe and Secure AI Development, unveiled on the sidelines of the 2018 World Artificial Intelligence Conference.
AI practitioners must ensure the technology is oriented toward the future, and critical processes such as machine self-improvement and self-replication should be subject to risk assessment and security oversight, the initiative said.
Responsibilities should also be clearly defined: a mechanism for ascertaining and sharing AI security responsibilities, grounded in laws and ethical norms, should be established for different AI application scenarios.
AI must also not undermine user privacy, data security or technology roadmaps, and efforts must be made to prevent threats to global peace and stability caused by the abuse of AI technologies in military fields, it added.
President Xi Jinping's congratulatory letter to the conference on Monday is a clear indication that China is encouraging the sharing of critical AI findings and welcoming cooperation to avoid misuse of the technology, said Irakli Beridze, head of the Centre for AI and Robotics at the United Nations.
Security is set to become a hot spot for all kinds of smart applications, and AI is a natural fit for addressing security issues because of the data it draws on, said Yang Peng, deputy general manager of security management at Tencent.
For instance, Tencent has been applying algorithms and big data analytics to prevent, detect and crack down on fraud, he noted.
"Nations need to share and coordinate among each other on technical standards, information resources and coping mechanisms to ensure cyber security in the AI age," Yang said.