As digital technologies advance, online fraud is becoming increasingly sophisticated. Deepfake technology (a contraction of “deep learning” and “fake” that simulates human facial images) is being used to impersonate friends, organizations, and companies in order to steal property. Faced with this reality, it is essential to strengthen the management of digital platforms, improve legal frameworks, and raise citizens’ awareness of the risks of cyberspace.
According to a 2025 cybersecurity survey conducted by the National Cybersecurity Association, the number of online fraud victims decreased significantly between 2024 and 2025: roughly one in 555 people was a victim of fraud, or 0.18%, compared with 0.45% in 2024. This figure is encouraging and reflects the joint efforts of authorities, technology companies, professional organizations, and the media to raise public awareness. However, fraud methods have not disappeared; they are evolving and becoming more sophisticated. Deepfake technology in particular allows images, voices, and identities to be impersonated, making it much harder to distinguish genuine contacts from impostors.
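For readers who want to check the survey figures, the quoted rates are internally consistent; a minimal Python sanity check, using only the numbers stated above, is:

```python
# Sanity-check the survey figures quoted above.
# All values are taken from the article; nothing here is new data.
rate_2025 = 1 / 555          # "about one in 555 people" in 2025
rate_2024 = 0.45 / 100       # 0.45% reported for 2024

print(f"2025 rate: {rate_2025:.2%}")                     # ~0.18%, matching the article
print(f"2024 rate: {rate_2024:.2%}")                     # 0.45%
print(f"relative drop: {1 - rate_2025 / rate_2024:.0%}") # roughly a 60% decline year-on-year
```

This confirms that "one in 555" and "0.18%" describe the same rate, and shows the year-on-year decline implied by the two percentages.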
According to Mr. Vu Duy Hien, Deputy Secretary-General and Chief of Office of the National Cybersecurity Association, analysis of the current online fraud landscape reveals positive developments as well as numerous challenges. Images, voices, and videos are no longer reliable enough on their own to verify an identity. Deepfake scams typically build in time pressure, pushing users toward hasty decisions and causing them to skip necessary verification steps. Protecting personal data is therefore crucial, since deepfakes are truly effective only when they are fueled by real data. Indiscriminately sharing images, voice recordings, and personal information on social networks or platforms of unknown origin can inadvertently facilitate sophisticated identity theft.
Deepfakes represent one of the most serious challenges of the artificial intelligence (AI) era, as the line between truth and falsehood becomes increasingly blurred. Vu Duy Hung, an expert at Hung AI Creative, stated: “What you see and hear is not necessarily true. From images and voices to videos, current AI tools can create highly realistic forged content that is easily accessible and difficult to discern with the naked eye.”
Beyond the risk of online fraud, managing AI-generated content presents a new, complex, and delicate challenge. Some AI-powered chatbots have been used to create and disseminate sensitive forged content (deepfakes) that seriously infringe on human rights, honor, dignity, and online privacy. In light of this situation, it is necessary to establish appropriate and effective management mechanisms to control and mitigate risks, while ensuring a transparent, responsible, and sustainable development environment for AI technologies.
Professor Tran Thanh Nam, Vice-Rector of the University of Education, Vietnam National University, Hanoi, and an expert at the Franco-Vietnamese Institute of Psychology, points to the causes of this situation: living in an information-saturated world leaves young people vulnerable to online scams. The pace of online life and the fear of missing out (FOMO) lead to a lack of self-control and a misjudgment of risks. Group dynamics, blind trust in false information spread on social networks, weak critical thinking, limited digital financial literacy, prioritizing speed over verification, and the need for recognition and attention all make many young people easy prey for scams.
Given the rapid development of AI, it is essential to proactively equip the community with adequate knowledge, skills, and attitudes. According to experts, users must be especially vigilant when faced with requests for transfers, transaction confirmations, or personal information received by phone, SMS, or video, even when they appear to come from acquaintances, officials, organizations, or familiar platforms. Systematically verifying information through multiple official sources, or contacting the relevant entity directly, is essential to limit risks. Raising public awareness and enabling individuals to identify deepfakes and apply preventive and control measures is now an urgent necessity for information security and digital safety.
In response, authorities have regularly issued warnings so that citizens can protect themselves during online transactions, purchases, and interactions, while also refining the legal framework for artificial intelligence. According to Mr. Tran Van Son, Deputy Director of the National Institute of Digital Technologies and Digital Transformation (Ministry of Science and Technology), the law on artificial intelligence, adopted by the National Assembly on December 10, 2025, and effective as of March 1, 2026, establishes a relatively comprehensive legal framework for classifying risks, defining the responsibilities of the entities involved, and empowering management bodies to monitor, intervene in, and address violations committed by AI systems.
The law formally prohibits the use of deepfakes for fraudulent or illegal purposes; it also requires AI-generated or AI-modified content to be labeled and covered by identification solutions for governance and traceability purposes. The Ministry of Science and Technology is the lead agency for state management of AI, setting directions for risk classification and compliance assessment. When a risk of harm or a serious incident is detected, the competent authority is required to suspend the system, take it down, or require its reevaluation.
For serious violations, especially those involving content harmful to children or disrupting social order and security, the organizations and individuals concerned will not only be subject to restrictions or suspension of their services under the law on artificial intelligence but may also face administrative penalties, criminal prosecutions, and compensation for damages as provided by the law.
This approach clearly demonstrates a commitment to fostering innovation while firmly rejecting the misuse of AI that violates human rights and social interests, ensuring that this technology is developed safely, responsibly, and sustainably.
Source: https://nhandan.vn/bao-dam-cong-nghe-phat-trien-an-toan-ben-vung-post949630.html