Theoretical research

Theoretical models and legislative experiences on criminal liability for artificial intelligence and implications for Viet Nam

Do Viet Cuong - Pham Ngoc Tan | 23 October 2025

(L&D) The article proposes solutions to improve Viet Nam’s criminal law, contributing to crime prevention and control as well as the protection of human rights in the context of the Fourth Industrial Revolution.

Abstract: The development of artificial intelligence (AI) has been transforming the world with undeniable benefits but has also posed unprecedented challenges to society. One pressing issue concerns determining accountability when an AI system engages in conduct harmful to society: who should bear the responsibility, and is it feasible to impose criminal liability on an AI entity? This article explores the nature of AI, assesses the risks of AI-related criminal activities, and examines the feasibility of holding AI systems criminally liable by analyzing the legal frameworks of selected countries. Based on these analyses, the article proposes solutions to improve Viet Nam’s criminal law, contributing to crime prevention and the protection of human rights in the context of the Fourth Industrial Revolution.

Keywords: Artificial intelligence, AI crime, criminal liability, criminal law

1. The necessity of addressing the issue of criminal liability for artificial intelligence

The term artificial intelligence (AI) was first introduced by John McCarthy at a conference held at Dartmouth in 1956, where AI was defined as a new branch of science and technology capable of simulating the ways humans work[1]. Since then, various definitions of AI have emerged worldwide. Some define AI as machines and digital computers capable of imitating and performing tasks similar to those of intelligent beings, such as thinking, learning from previous experiences, or engaging in other cognitive activities[2]. Other scholars argue that AI is a software system capable of replicating human thinking processes with the assistance of computers or other devices[3], or that it represents the simulation of human behavior and cognitive processes on computers[4]. Despite these different conceptualizations, it can generally be observed that the core characteristics of AI lie in its ability to simulate, learn, reason, and respond to situations that have not been pre-programmed[5].

AI has therefore been applied in almost every sphere of life, including fields requiring strict standards of responsibility such as medicine, where artificial intelligence tools can perform surgical operations with great speed. However, it is not always possible to avoid incidents caused by robots during certain surgeries.[6] In reality, AI is not a flawless technology; the data processed by AI systems may contain “errors” due either to objective factors, such as inaccurate data or faulty data analysis, or to subjective factors, such as the intentional use of AI for criminal purposes.[7] In 1981, a Japanese worker was killed by an AI-powered robot that mistook him for a threat to its task performance and determined that pushing him into a nearby operating machine was the most efficient way to eliminate the obstacle.[8]

The widespread application of AI has given rise to new legal issues, particularly in criminal law, prompting a series of questions: which subject should bear criminal liability, and should an AI entity be considered the subject of an offense or the subject of criminal liability (including joint liability)? Clarifying these legal questions is crucial, especially in a context where the manufacturing, production, and design of AI are regarded as sectors at high risk for criminal activity.[9] At present, many countries around the world are studying and drafting laws on AI or developing regulatory frameworks and codes on AI,[10] but there is not yet a precise and concrete concept of AI crime. Nevertheless, there is already considerable evidence of this particular type of crime.[11] AI can be used as an instrument to serve criminal purposes, exploiting its capabilities to carry out acts that infringe upon social relations. For example, AI-generated forged content such as deepfakes can be used to manipulate evidence, damage reputations, and spread misinformation, seriously affecting the integrity of individuals and the state.[12] In addition, AI systems themselves can become the targets of criminal activities such as cyberattacks. Contemporary cyberattacks generally take one of two forms: they are either sophisticated and tailored to target a specific victim,[13] or they are crude but highly automated, relying on large scale to create impact (for example, distributed denial-of-service (DDoS) attacks and port scanning). AI may enable attacks that are both targeted and large-scale, for instance by using reinforcement learning methods to probe the vulnerabilities of many systems simultaneously before launching multiple attacks at once.[14] Such attacks are carried out for profit and harm new legal interests not yet protected by criminal law, such as cybersecurity.

AI may also merely provide the contextual conditions for a crime. Fraudulent schemes can rely on leading victims to believe that a certain AI function is feasible when in fact it is not, or that it is feasible but was not actually used in the fraudulent act.[15] For example, tailored phishing is an attack aimed at harvesting confidential information or installing malware through a forged digital message purportedly sent from a trusted party, such as the user’s bank. Attackers exploit preexisting trust to persuade users to perform actions they would normally avoid, such as revealing passwords or clicking on suspicious links.[16] Currently, most attacks are relatively indiscriminate, using generic messages modeled after major brands or topical events to attract the attention of some users purely by chance.[17] When combined with AI, however, the success rate of these phishing attacks increases substantially, because AI can learn (through active learning) to adapt messages so that they appear more familiar and trustworthy.[18] Victims are thus more likely to disclose banking information or execute fund transfers, resulting in serious financial loss. Moreover, when such attacks are executed at scale, they affect not only individuals but also disrupt financial markets and e-commerce.

Thus, when AI is used for criminal purposes, it can cause serious consequences for society, thereby necessitating the establishment of legal rules to regulate this entity, particularly within the realm of criminal law.

2. Theoretical models of criminal liability for artificial intelligence crimes

Although there is growing evidence of the existence of “artificial intelligence crimes,” the issue of criminal liability for AI entities remains complex, as AI has yet to be recognized as a legal person. Contemporary criminal law has already established criminal liability for legal persons—entities that, like AI, lack full human attributes—but the relationship between AI and human beings is far more intricate. This depends on the type of AI entity (its level of capability) and its degree of development. Based on this scale, Gabriel Hallevy synthesized and introduced three models corresponding to different approaches to determining criminal liability for AI entities, as follows:[19]

The first model is the “perpetration-via-another” model of criminal liability. This model asserts that artificial intelligence cannot possess human characteristics and should therefore be regarded as an innocent agent, even when it participates in the commission of a criminal act.[20] In essence, AI remains merely a device created and powered by human beings to serve and assist them in performing tasks. When a person uses such a tool to commit a crime, that person is still deemed to have acted with his or her own intent. Accordingly, this model attributes criminal liability to two main actors: the programmer and the user of the AI system.[21] Specifically, a programmer who designs AI software with the intention of committing socially dangerous acts through that software must bear criminal liability. Similarly, a user who, though not the creator of the software, employs or modifies the AI system for personal purposes so as to cause it to perform socially dangerous acts shall also be held criminally liable.

The second model is the “natural-probable-consequence” model of criminal liability. This model is based on the notion that programmers or users of AI systems maintain a close relationship with the daily operation of the AI entity but have no intention of committing a crime through it. Accordingly, when an AI system commits a criminal act, its programmer or user is unaware of such conduct until it occurs and causes socially dangerous consequences.[22] For example, a robot equipped with AI may be programmed to perform a specific task (such as operating an aircraft on autopilot). However, when a human pilot intervenes to stop the operation (for instance, deciding to turn back due to severe weather), the robot may interpret the pilot’s action as a threat obstructing its assigned mission and proceed to attack or kill the pilot (for example, by cutting off oxygen supply or using physical force).[23] In this case, the programmer who designed the AI software embedded in the robot had no intention of killing anyone, especially not the pilot who interfered with the robot’s operation.[24]

The distinction between the first and second models lies in the element of intent: in the first model, the programmer or user deliberately employs AI for criminal purposes, whereas in the second model, they have no such intention, although they should have been aware of the AI’s potential to commit harmful acts. Accordingly, criminal liability is imposed as follows:

- In the first scenario, if the programmer or user of the AI system acts negligently in programming or operating the AI entity without any intention of committing a criminal act, they shall not bear criminal liability unless the criminal law provides for punishment of the offense committed through negligence. In this context, the user is deemed to have committed the offense with negligence in the form of carelessness, by performing an act without foreseeing its socially dangerous consequences, although they could and should have foreseen them.

- In the second scenario, the programmer or user has programmed or utilized the AI entity with the intention of committing a particular criminal act; however, the AI subsequently performs a different criminal act beyond the original purpose of the human actor. This situation may lead to two possible consequences:[25]

(1) If the AI acts merely as an unconscious intermediary, lacking awareness of the nature and degree of danger of the conduct prohibited under criminal law, the AI entity shall not bear criminal liability for the offense committed, similar to the first model.

(2) If the AI is not merely an unconscious intermediary but also possesses the capacity for cognition, reasoning, and autonomous decision-making like a human being, then, in addition to the criminal liability of the programmer and the user, the AI itself must also bear direct and joint criminal responsibility for the offense it commits. This gives rise to a new model of criminal liability, in which the AI is regarded as an entity capable of bearing legal responsibility in the same manner as a human being (Model No. 3).

The third model is the model of direct criminal liability, which holds that AI is deemed equivalent to a human being in terms of both legal capacity and capacity for conduct; in other words, AI in this case operates independently of the programmer or the user.[26] Accordingly, this model focuses on the AI entity itself,[27] and criminal liability shall be determined based on the objective element (actus reus) and the subjective element (mens rea).[28] When any subject is proven to possess both elements in relation to a specific criminal act, that subject must bear criminal liability for that act.[29]

Although Hallevy’s models propose multiple scenarios along with detailed arguments and philosophical foundations corresponding to different levels of AI, they still encounter certain limitations in practical application. First, Hallevy’s models are not entirely compatible with the actual development of AI, since the process of successfully constructing an AI entity is highly complex and based on extensive collaboration.[30] Accordingly, programming an AI requires the participation of numerous programmers - potentially numbering in the thousands - each responsible for distinct tasks. In reality, applying criminal liability to programmers under this model would be equivalent to prosecuting thousands of individuals, which represents an enormous undertaking. Second, AI code may be open-source, meaning that the creator of such code allows others to modify, study, and distribute it (thus waiving intellectual property rights).[31] The number of users of such open-source code is immense, and some operate anonymously,[32] thereby further complicating the determination of criminal liability. Third, up to this point, no AI equivalent to a human being has yet existed, and Hallevy’s assumption that technological evolution can be anticipated by attributing not-yet-developed human-like capacities to existing entities is not always valid.[33] Furthermore, as law inevitably lags behind technology, the premature enactment of legal provisions ahead of technological development may lead to incompatibility and unnecessary revisions or amendments.[34]

Thus, Hallevy’s models, together with similar existing legal mechanisms, can be regarded as a starting point for the application of criminal law to future situations. Nevertheless, Hallevy’s models remain theoretical frameworks.[35] Therefore, to address the challenges posed by AI-related crimes, the issue for Viet Nam’s criminal law lies in combining theoretical and practical perspectives through studying, learning from, and absorbing the experiences of countries that have established or are establishing legal regulations on this new category of crimes, thereby identifying an appropriate path for Viet Nam in the technological era.

3. Legal frameworks on artificial intelligence crimes worldwide

At present, many countries have begun developing legal frameworks related to AI-related crimes. These frameworks share certain similarities in approach but also exhibit distinct differences depending on the specific circumstances of each country. Therefore, in this section, the authors analyze both the general approaches to AI-related crimes and the specific legal frameworks of selected countries, namely the United States, India, Russia, China, and Indonesia.

3.1. Overview of approaches to artificial intelligence crimes worldwide

Faced with the pressures posed by technology, countries around the world share a common trend of studying AI governance, focusing on issues related to the development and use of AI and the building of AI infrastructure.[37] In 2024, the European Union drew global attention by introducing the Artificial Intelligence Act, aimed at managing the potential risks posed by AI.[38] This Act established the principle of strict liability, under which programmers and users are held responsible for the actions of AI, regardless of their intent or awareness of the criminal conduct.[39] The approach of the European Union (EU) emphasizes individual (human) accountability and promotes the development of responsible AI. In contrast, other jurisdictions, such as the United States, adopt a causation-based approach, whereby criminal liability is imposed on individuals who directly cause the AI system to engage in criminal conduct.[40] Under this approach, the burden of proof rests with the prosecuting authority, which must establish a causal link between the human actor and the criminal act committed by AI. This requires careful consideration of the degree of human control over the AI system and the extent to which the AI operates autonomously.[41] In addition, several countries take an approach to criminal liability based on the legal personality of AI. Most countries around the world do not recognize AI as a legal person; Saudi Arabia and Japan are still considering this possibility.[42] This approach reflects the view that AI is merely a tool or technology created and used by humans, and that humans should therefore bear responsibility for its actions.

In addition, ethical considerations and accountability have been a focus for Germany and the European Union, which have proposed ethical guidelines for AI development emphasizing transparency, fairness, and accountability.[43] These guidelines are designed to ensure that AI systems are developed responsibly and thereby minimize the risks that such entities may pose. Enhancing the transparency of AI systems increases public trust in them, thereby facilitating better assessment of AI decision-making processes in cases involving criminal conduct.[44] At the same time, personal data protection and cybersecurity are given priority, with many countries enacting data protection laws to safeguard individual information from misuse and unauthorized access by AI systems.[45]

Finally, international cooperation has become a prominent trend among countries, particularly given the transnational nature of cybercrime and AI-related violations.[46] Intergovernmental partnerships, such as mutual legal assistance agreements, play a crucial role in exchanging information, evidence, and intelligence concerning AI-related crimes, thereby enabling more comprehensive and coordinated responses to transnational offenses caused by AI. International organizations, including the United Nations and the G7 industrialized countries,[47] have also paid attention to AI governance and criminal liability. These forums serve as platforms for discussing global challenges related to AI, sharing best practices, and developing international standards and regulations concerning the ethics, fairness, and accountability of AI.

3.2. Legal frameworks of selected countries on artificial intelligence crimes

United States

Although a technological powerhouse, the United States does not have a unified legal definition of AI. Instead, the country adopts an agency-by-agency approach to AI governance rather than a comprehensive approach like that of Europe.[48] As a result, various U.S. agencies have provided different definitions of AI, and U.S. AI-related legal instruments can be categorized into five areas:[49]

(1) Policy, including “documents such as executive orders, resolutions, and plans reflecting the U.S. government’s policy on AI governance”;

(2) Accountability, including “legislative tools aimed at algorithmic accountability, which may reflect the government’s response to public concerns regarding algorithmic bias and discrimination”;

(3) Facial recognition technology, including “an increasingly fast-developing legal framework regulating the use of facial recognition technology and related data”;

(4) Transparency, including laws “primarily aimed at promoting transparency in relation to AI usage in various contexts”;

(5) Others, including “federal bills pending enactment concerning general governance or AI research issues, as well as other related matters.”

In practice, the United States has recorded criminal rulings related to artificial intelligence systems.

Regarding self-driving vehicles, in 2018 a woman was fatally struck by an Uber self-driving test vehicle in Tempe, Arizona, while a backup safety driver sat behind the wheel.[50] The immediate cause of the incident was that the autonomous driving system failed to detect the woman; crucially, however, during the trip the safety driver had been distracted by her personal phone.[51] In March 2019, the Yavapai County Prosecutor’s Office declared that there were no grounds to hold Uber criminally liable. Nevertheless, the safety driver was charged with negligent homicide. Under Arizona law, criminal negligence means that a person “fails to perceive a substantial and unjustifiable risk that the result will occur or that the circumstance exists. The risk must be of such nature and degree that the failure to perceive it constitutes a gross deviation from the standard of care that a reasonable person would observe in the situation.”[52] In 2022, the United States recorded another case involving a self-driving car: a man was prosecuted for vehicular manslaughter after operating a Tesla in Autopilot mode and causing a fatal accident.[53] The difference in this case was that the Tesla vehicle had been commercialized and was in wide use. Consequently, the case is considered a potential precedent for future prosecutions of drivers who over-rely on autonomous systems.[54] Thus, the U.S. approach to criminal liability concerning autonomous systems is to assess the relationship between technology and humans, specifically, in this case, the degree of human reliance on AI.

Regarding crimes of fraud and technology abuse, the Computer Fraud and Abuse Act of 1986 (CFAA)[55] stipulates criminal liability for unauthorized access to computers or for exceeding authorized access. In practice, a harmful act involving an artificial intelligence system can often be traced back to human actions (for example, a hacker using AI to steal money from a bank account), in which case the AI is considered a tool for committing the crime.[56] In Van Buren v. United States,[57] the U.S. Supreme Court clarified that “exceeding authorized access” applies to situations where an individual who is permitted to access a computer enters areas of that computer (such as files, folders, or databases) that are off-limits to him or her.

Concerning crimes of tailored phishing, the Electronic Communications Privacy Act 1986 (ECPA)[58] applies in the United States. Intentional unauthorized access (i.e., hacking) or exceeding authorized access to electronic communication service facilities constitutes a criminal offense under the ECPA. The ECPA also penalizes the intentional interception of electronic communications in transit under the Wiretap Act.[59]

In summary, the United States has chosen a sector-specific regulatory approach. According to the authors, this approach is relatively appropriate because the U.S. is a federal country with a complex relationship between federal and state governments, and also due to the interdisciplinary nature of AI. To date, U.S. law continues to focus on assigning criminal liability to humans and on acts involving the malicious use of AI, relying on traditional criminal law frameworks.[60]

India

India is currently developing rapidly in the field of AI.[61] In India, criminal liability is based on principles established in various statutes, case law, and constitutional provisions.[62] The Indian Penal Code (IPC) 1860 is the primary legal text governing criminal liability in India.[63] The IPC classifies criminal acts into different categories according to the severity of the offense and prescribes corresponding punishments. The Code also specifies acts that constitute crimes, such as murder, theft, fraud, and assault. In addition, the IPC addresses exemptions from criminal liability, such as self-defense, insanity, intoxication, and mistakes of fact.[64] In India, criminal liability applies not only to individuals but also to certain entities, such as corporations.[65] The Indian Penal Code is therefore quite similar to the Criminal Code of Viet Nam (analyzed below) in its approach to categorizing criminal acts and prescribing corresponding penalties. Similarly, Viet Nam also provides for criminal liability regimes for commercial legal entities.

Currently, India does not have a separate law specifically governing AI. However, the country has planned to replace the Information Technology Act 2000 with the Digital India Act 2023 to include provisions related to AI.[66] Nevertheless, AI continues to pose challenges to the criminal law system in India, revolving around issues such as the legal status of AI, privacy, data protection, and ethical concerns.[67]

Russia and China

Under the current criminal law of Russia and China, AI-related crimes can be classified into three types: (i) crimes that can be regulated by the existing criminal law; (ii) crimes that are incompletely regulated by the existing criminal law; and (iii) crimes that cannot be regulated by the existing criminal law.[68]

Regarding crimes regulated by the existing criminal law, these are offenses already stipulated in the criminal codes of the two countries and can be easily addressed through judicial interpretation. For example, in the first case of AI-assisted fraud in China,[69] the offense essentially falls under “theft or unauthorized access to personal information by other means,” as provided for in Chinese criminal law. In Russia, unauthorized access to computer information containing personal private data, carried out intentionally and for profit or personal purposes, causing damage to the lawful rights and interests of citizens, is punishable under the provisions of the Criminal Code of the Russian Federation.[70]

Regarding crimes incompletely regulated by the existing criminal law, these are traditional offenses with certain new characteristics arising from AI, which the criminal law cannot regulate effectively. For example, in the case of traffic accidents, previously, if the accident resulted from a manufacturer’s fault, the manufacturer would be held liable; if the fault lay with the driver, the driver would be responsible. However, with the development of self-driving vehicles that operate autonomously without human intervention, if the automated driving system causes an accident, liability becomes much more difficult to determine. As in the Uber and Tesla cases analyzed above for the United States, can responsibility be attributed to the driver or the manufacturer for errors caused by AI acting independently? The criminal law of Russia and China does not yet provide specific regulations on this issue.[71]

Regarding crimes that cannot be regulated by the existing criminal law, these are offenses that the current criminal law of Russia and China is unable to address because the legal provisions do not cover or foresee such special cases. For example, in the case of an AI-integrated prosthetic arm designed to assist people with mobility impairments being attacked (hacked) and causing pain to its owner, how should this be addressed? If traditional law is applied and the prosthetic arm is considered merely as property, the act would only be regarded as property damage, which is inadequate. However, if the arm is considered a part of the human body and damaging the AI prosthetic arm is treated as causing bodily harm, this also leads to difficulties in determining the extent of the damage.[72]

In addition, there are cases where the criminal law cannot apply because the elements of a crime cannot be satisfied. A well-known case in 2016 involved Microsoft’s chatbot “Tay”[73], which had the capability to learn through interactions with users. This feature was abused by users, however, turning the AI into a tool for posting politically incorrect, offensive, and racially discriminatory statements. Under the Criminal Code of Russia and the Criminal Code of the People’s Republic of China, the widespread dissemination of such statements could be considered a crime. The question that arises is whether the act of “teaching” the AI could make these users criminally liable. This is a behavior that has emerged only in the technological era, and no specific answer to this situation yet exists.[74]

In summary, the current Criminal Code of the Russian Federation and the Criminal Code of the People’s Republic of China are facing challenges similar to those encountered by the criminal law of Viet Nam (as will be analyzed below).

Indonesia

Indonesia is a Southeast Asian country in the ASEAN bloc with an economy and social development not significantly different from that of Viet Nam. Currently, the country is investing heavily in the field of AI, with the ambition of turning this sector into a key pillar of the national economy.[75] This new trend requires Indonesia to rapidly complete the legal frameworks related to AI, including in the criminal law sector.

In practice, artificial intelligence technology has not yet been specifically regulated in Indonesia,[76] although some laws have addressed this field, such as Law No. 28/2014 on Copyright, Law No. 19/2016 on Information and Electronic Transactions, and Law No. 27/2022 on Personal Data Protection. Corresponding to each of these legal documents, AI is defined differently, and Indonesia has not yet issued a comprehensive definition of AI.

Notably, the Ministry of Communication and Information Technology of Indonesia is preparing a Circular on Ethical Guidelines for Artificial Intelligence.[77] This circular aims to ensure that the use of data to develop AI technology is conducted responsibly and protects personal data in accordance with the legal framework provided by Law No. 27/2022 on Personal Data Protection. Additionally, the Minister of Communication and Information Technology of Indonesia has issued a Circular on Ethics for the Use of Artificial Intelligence, which is expected to serve as ethical guidance for the development and use of AI in Indonesia.[78] The circular focuses on enterprises and electronic system administrators in both the public and private sectors.

Thus, Indonesia’s approach has similarities with that of Germany and the European Union, as it adopts an approach based on ethics and accountability,[79] thereby enhancing governance and mitigating the risks that AI may pose.

It can be seen that AI-related crime is a complex issue for many countries. Although there are some differences due to policies, forms of government, and state structures, in general, the countries analyzed above are choosing a balanced approach between promoting innovation and maintaining strict control measures. Accordingly, criminal liability is applied to AI when it is used as a tool to commit criminal acts. Even when AI independently engages in criminal behavior, policies on strict liability during AI production and training are applied to address the issue. Therefore, at present, many countries are adopting an approach similar to Model 1, as discussed in Section 2 of this article.

4. Criminal Law of Viet Nam regarding artificial intelligence crimes

4.1. Challenges in establishing criminal liability of AI under Viet Nam's Criminal Law

Through the analysis of AI-related crimes, the authors recognize that this new type of crime poses challenges regarding the criminal liability of AI under Viet Nam’s criminal law, as well as highlights the need to improve Viet Nam’s criminal legislation, specifically:

First, regarding the concept of crime. Under the current Vietnamese criminal law, a crime is defined as an act dangerous to society committed by a person or a commercial legal entity, with fault, and infringing upon social relations established and protected by criminal law. The issue arises when an AI system is capable of independently committing an act dangerous to society without human intervention, in which case the definition of a crime as an act committed by a person or a commercial legal entity may no longer be appropriate.

Second, regarding the elements of a crime. Following the issue of the concept of a crime, the emergence of a new subject beyond the two traditional subjects, namely individuals and legal entities, implies that the understanding of the elements of a crime with the previously established traditional factors will also undergo changes.

Regarding the protected legal interests, in the future AI may cause damages that are incalculable in material terms and infringe upon new social relations; therefore, further research is required to expand the scope of the legal interests that the law needs to protect (for example, the interest of information security).[80]

Regarding the objective aspect, with technological development the place (location) where an offense is committed may now be far from where the socially dangerous consequence occurs (from one country to another) or may even occur in domains such as cyberspace or other spaces beyond Earth. In practice, new modes of offending — for example, image manipulation for fraud, generating false information, or using AI to display unauthorized advertising on social media platforms — have emerged and are becoming increasingly common. It is evident that AI acts as a powerful tool for offenders, making investigation and detection more difficult because it is hard to determine precisely the time and place of commission due to AI’s characteristic ability to act remotely.[81] In cases where AI is used to commit offenses, the place of commission and the place where the consequences occur may be separated by great distances (across multiple countries or territories) and manifest as many different types of harm.[82] Therefore, a new understanding is needed regarding the place (location) of commission, the consequences of the offense, novel methods and modus operandi, and the determination of the territorial scope/applicability of criminal law.

Regarding the subjective aspect of the offense, AI has the ability to learn and synthesize data to continuously update itself; however, this development process lies beyond human foresight. When the output of the data malfunctions or no longer aligns with social norms (for instance, an AI designed to monitor and process medical data autonomously alters the data, leading to incorrect treatment directions for patients), the process of attributing criminal liability becomes difficult, specifically in determining fault for socially dangerous acts that have no precedent. To date, in order to establish elements pertaining to the subjective aspect of AI, it is necessary to prove factors such as knowledge, intent, motive, emotions, etc.; this requires interdisciplinary cooperation and considerable effort before it can be accomplished.[83]

Regarding the subject, when discussing AI-related crime, a major question arises: can AI be considered a subject of crime? If AI possesses cognitive abilities similar to or even surpassing those of humans, could it be regarded as an individual or a legal entity, or should the law introduce a new type of subject? This issue has been a matter of debate in recent years in AI research. According to the European Parliament, AI could have legal status akin to a legal entity with legal capacity, but responsibility would rest with the registered natural person.[84] Another perspective argues that a new type of criminal subject, namely AI, may emerge: as AI develops and becomes an independent entity, it could well have the capacity to commit new types of crimes.[85] Considering both perspectives, holding the programmer or user accountable for the AI system could prove more effective, as it would impose strict responsibility during the creation and development of these systems. However, the authors note that imposing strict liability may create barriers to technological development, given that AI has always been a highly costly field for investors,[86] making it difficult for them to bear large expenses without being able to predict the risks that AI may generate. Therefore, devising an approach that balances responsibility and profit is essential. Additionally, attributing criminal liability to the programmer or user raises further issues, such as the scope of liability (it would be unfair to hold them accountable for all outcomes caused by AI through learning and autonomous evolution), the duty to take measures to monitor and prevent AI from committing offenses, and so forth.[87] To identify a reasonable approach, legislators and technologists need to collaborate to develop criminal liability models for AI in specific cases.

Third, regarding punishment. Although AI possesses intelligence and advanced capabilities, in essence, it remains an inanimate entity (without emotions) existing in some material form (e.g., a robot). Therefore, punishing AI systems or robots raises a significant challenge in application, as they are merely machines, even when equipped with artificial intelligence.[88] After extensive discussions, researchers have proposed punishments for AI such as disabling certain AI functions, disconnecting them from the internet, or confining them to a specific area.[89] However, to date, criminal law in most countries (including Viet Nam) primarily regards humans as the core element and rejects the idea that non-human entities can possess cognition or require punishment for their actions.[90] It is evident that, in the context of the technological era, the philosophy and implementation of punishment need to evolve, not only in form but also in the underlying purpose of the punishment.

4.2. Some solutions to improve Viet Nam's Criminal Law

First, adjustment of policy and law. The first issue in the process of improving Viet Nam’s criminal law is the need to establish appropriate legal measures to protect social relations against technologies that are not yet regulated by law. To achieve this, legal policy must include provisions and guidelines regarding the legal status of AI. The authors note that this is a complex matter likely to generate prolonged debate, as historically, legal entities emerged in the 14th century globally, but it took hundreds of years before regulations governing such entities, particularly in criminal law, were established.[91] Nevertheless, based on analyses of global approaches to AI-related crimes and the legal frameworks of certain countries, the authors recommend that it is not yet necessary to recognize AI as an independent legal entity; rather, AI should be approached as a tool supporting the commission of criminal acts, and policies should be developed to regulate this special entity accordingly. Under this approach, Vietnamese criminal law needs to review, adjust, and supplement matters related to crimes and criminal elements that involve AI entities. Specifically, it is necessary to incorporate new protected objects as well as methods and modalities of crime associated with high technologies such as AI to ensure a legal basis for applying criminal liability when offenses occur. Accordingly, the authors recommend adding “cybersecurity” as a protected object and issuing additional guidance documents under Section 2 – Crimes in the Field of Information Technology and Telecommunications in the 2015 Penal Code to address legal gaps created by technology and ensure the compatibility and harmonization of Vietnamese law with international law.

Furthermore, a clear trend observed in the United States and Indonesia is that criminal law is not the only measure needed to effectively address AI-related crimes, because AI is highly diverse, appears in many sectors, and is transforming traditional industries.[92] Therefore, Viet Nam also needs to improve its legal system in the field of information technology related to AI, including: the Law on High Technology 2008 (amended in 2013 and 2014); the Law on Information Technology 2006; the Law on Cyberinformation Security 2015; the Law on Cybersecurity 2018; and specific legal documents regulating specialized areas such as autonomous vehicles, 3D printing technology, and virtual currency, in order to establish a comprehensive policy framework for AI-related crimes.

In addition, the issue of developing responsible AI also needs to be considered and promoted. It is evident that the complexity of AI lies in the fact that humans cannot precisely predict the output of an AI when it performs tasks,[93] although the input data can be controlled. Therefore, the matter of building responsible AI and proactively managing AI risks has been highlighted in the European Union AI Act, as well as in circulars and guidelines on AI responsibility and ethics currently being developed in Indonesia. In practice, Viet Nam has also begun this process, with the Prime Minister issuing the National Strategy on Research, Development, and Application of Artificial Intelligence until 2030, and numerous workshops on this issue have recently been held in Viet Nam.[94] Accordingly, the immediate task for Viet Nam is to develop and complete ethical codes and guidelines for responsible AI.

Second, strengthening international cooperation. International cooperation is indispensable in the current context due to the transnational nature[95] and technological complexity of AI.[96] Cooperation is required not only in crime prevention and control but also in research activities, as well as in the sharing of AI technology and knowledge. The multimillion-dollar investment by the global technology corporation Nvidia in Viet Nam[97] is opening up both opportunities and challenges for the country, particularly regarding technology control. In fact, even major countries renowned for technological development, such as Russia, China, the United States, and India, have not yet established a comprehensive AI policy framework. Therefore, strengthening international cooperation will not only help Viet Nam address immediate criminal issues but also create opportunities to advance and master this technology, thereby developing comprehensive and accurate policies to meet future reform needs concerning AI-related crimes.

5. Conclusion

Through the study of AI-related crimes, it can be observed that the issue of applying criminal liability to AI entities is a new and highly challenging matter in Viet Nam. These challenges stem not only from the legal status of AI entities but also from technological, ethical, and accountability aspects in AI development. In the context of implementing the National Strategy on Research, Development, and Application of AI until 2030, Viet Nam is required to establish and complete legal documents concerning the legal responsibilities of entities involved with AI.

However, technology is constantly evolving, developing rapidly, and remains unpredictable. Therefore, to build a comprehensive and effective policy, future efforts will require extensive, in-depth, and interdisciplinary research on AI-related crimes both domestically and internationally. From this, humanity can collectively identify a path forward in the era of technology.

References

1. Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities - from Science Fiction to Legal Social Control, 4 AKRON INTELLECTUAL PROPERTY JOURNAL (2016).

2. Caldwell M., Andrews J.T.A., Tanay T., et al., AI-enabled Future Crime, 9(1) CRIME SCIENCE 14 (2020).

3. Hifajatali Sayyed, Artificial Intelligence and Criminal Liability in India: Exploring Legal Implications and Challenges, 10 Cogent Social Sciences 2343195 (2024).

4. El-Kady R., Artificial Intelligence and Criminal Law: Advances in Finance, Accounting, and Economics, IGI GLOBAL 34–52 (2024).

5. Pang Dongmei & Nikolay V. Olkhovik, Criminal Liability for Actions of Artificial Intelligence: Approach of Russia and China, 15(8) Journal of Siberian Federal University: Humanities & Social Sciences 1094-1107 (2022).

6. Alice Giannini, United States Report on Traditional Criminal Law Categories and AI, (2024), https://www.penal.org/sites/default/files/files/A-01-24.pdf.

7. Dong Jun (Justin) Kim, Artificial Intelligence and Crime: What Killer Robots Could Teach about Criminal Law (2017), https://ir.wgtn.ac.nz/handle/123456789/20861.

8. Yaumi Ramdhani, Amiruddin, & Ufran, Countering Artificial Intelligence Crimes in a Criminal Law Perspective, 9 rrijm 167 (2024).

9. Viet T.T., Models of Criminal Liability of Artificial Intelligence: From Science Fiction to Prospect for Criminal Law and Policy in Vietnam, 35(4) LS (2019).

10. TRINH TIEN VIET (ED.), CRIMINAL POLICY OF VIET NAM FACING THE CHALLENGES OF THE INDUSTRIAL 4.0 REVOLUTION, JUDICIAL PUBLISHING HOUSE (2020).

11. TRINH TIEN VIET (ED.), CRIMINAL LIABILITY AND PUNISHMENT, VIET NAM NATIONAL UNIVERSITY HANOI PUBLISHING HOUSE (2022).

* Dr. Do Viet Cuong, Faculty of International Law, University of Law, Vietnam National University, Hanoi. Approved for publication on 24/3/2025. Email: cuongvietdo@vnu.edu.vn

** Pham Ngoc Tan, Class 66, High-Quality Law Program, University of Law, Vietnam National University, Hanoi

[1] Artificial Intelligence (AI) Coined at Dartmouth | Dartmouth, https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth (accessed 18/11/2024).

[2] Sayed Tantawy Mohamed Sayed, Legal Aspects of Artificial Intelligence and Robotics, 2 Journal of Afro-Asian Studies 25 (2020).

[3] Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach, Pearson (3rd ed., Global ed. 2016), pp. 1-5.

[4] Yahya Dahshan, Criminal Liability for Artificial Intelligence Crimes, 2020 UAEU Law Journal (2021), https://scholarworks.uaeu.ac.ae/sharia_and_law/vol2020/iss82/2.

[5] Ramy El-Kady, Artificial Intelligence and Criminal Law, in Artificial Intelligence Approaches to Sustainable Accounting, Advances in Finance, Accounting, and Economics, 34 (Maria C. Tavares et al. eds., 2024), https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/979-8-3693-0847-9.ch003 (accessed 6/1/2025).

[6] Isabelle Poirot-Mazères, Chapitre 8. Robotique et médecine : quelle(s) responsabilité(s) ?, 24 Journal International de Bioéthique 99 (2013).

[7] Xuejiao Li et al., Data Issues in Industrial AI System: A Meta-Review and Research Strategy, (2024), https://arxiv.org/abs/2406.15784 (accessed 10/12/2024).

[8] Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities - from Science Fiction to Legal Social Control, 4 Akron Intellectual Property Journal (2016).

[9] Danila Kirpichnikov et al., Criminal Liability of the Artificial Intelligence, 159 E3S Web Conf. 04025 (2020).

[10] Regulation of artificial intelligence around the world, Library of Congress, Washington, D.C. 20540 USA, https://www.loc.gov/item/2023555920/ (accessed 19/11/2024).

[11] M. Caldwell et al., AI-Enabled Future Crime, 9 Crime Science 14 (2020).

[12] Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation: A Primer, (2022), https://www.jstor.org/stable/resrep42027 (accessed 6/1/2025).

[13] David Kushner, The Real Story of Stuxnet, 50 IEEE Spectr. 48 (2013).

[14] Caldwell et al., Ibid, 11, p. 8.

[15] Caldwell et al., Ibid, 11, p. 5.

[16] Matt Boddy, Phishing 2.0: The New Evolution in Cybercrime, 2018 Computer Fraud & Security 8 (2018).

[17] Caldwell et al., Ibid, 11, p. 8.

[18] Alejandro Correa Bahnsen et al., DeepPhish: Simulating Malicious AI (2018), https://www.semanticscholar.org/paper/DeepPhish-%3A-Simulating-Malicious-AI-Bahnsen-Torroledo/ae99765d48ab80fe3e221f2eedec719af80b93f9 (accessed 6/1/2025).

[19] Hallevy, Ibid, 8, pp. 177-181.

[20] Gabriel Hallevy, Virtual Criminal Responsibility, 6 The Original Law Review 6 (2020).

[21] Gabriel Hallevy, Ibid, 20, p. 11.

[22] Dong Jun (Justin) Kim, Artificial Intelligence and Crime: What Killer Robots Could Teach about Criminal Law (2017), https://ir.wgtn.ac.nz/handle/123456789/20861 (accessed 6/1/2025).

[23] Trinh Tien Viet (ed.), Vietnam Criminal Policy Facing the Challenges of the Fourth Industrial Revolution, Justice Publishing House, p.244 (2020).

[24] Hallevy, Ibid, 8, p. 181.

[25] Hallevy, Ibid, 8, p. 182.

[26] Hallevy, Ibid, 8, p. 182.

[27] Steven J. Frank, Tort Adjudication and the Emergence of Artificial Intelligence Software, 21 Suffolk U. L. Rev. 623 (1987).

[28] Hallevy, Ibid, 8, p. 186.

[29] Trinh Tien Viet (ed.), Ibid, 23, p. 247.

[30] Jack Beard, Autonomous Weapons and Human Responsibilities, Nebraska College of Law: Faculty Publications (2014), https://digitalcommons.unl.edu/lawfacpub/196.

[31] ANDREW LAURENT, UNDERSTANDING OPEN SOURCE AND FREE SOFTWARE LICENSING (2008).

[32] Christian Payne, On the Security of Open Source Software, 12 Information Systems Journal 61 (2002).

[33] Rachel Charney, Can Androids Plead Automatism - A Review of When Robots Kill: Artificial Intelligence under the Criminal Law by Gabriel Hallevy, 73 U. Toronto Fac. L. Rev. 69 (2015).

[34] Lyria Bennett Moses, Recurring Dilemmas: The Law’s Race to Keep Up With Technological Change, (2007), https://papers.ssrn.com/abstract=979861 (accessed 9/1/2025).

[35] Caldwell et al., Ibid, 11, pp. 5-6.

[36] Rostam J. Neuwirth, Law, Artificial Intelligence, and Synaesthesia, 39 AI & Soc 901 (2024).

[37] Michael Veale, Kira Matus & Robert Gorwa, AI and Global Governance: Modalities, Rationales, Tensions, 19 Annu. Rev. Law. Soc. Sci. 255 (2023).

[38] Claudio Novelli et al., AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act, 3 DISO 13 (2024).

[39] Margot E. Kaminski, The Developing Law of AI: A Turn to Risk Regulation, SSRN Journal (2024), https://www.ssrn.com/abstract=4692562 (accessed 9/1/2025).

[40] Kate Crawford & Jason Schultz, AI Systems as State Actors, 119 Columbia Law Review 1941 (2019).

[41] Hifajatali Sayyed, Artificial Intelligence and Criminal Liability in India: Exploring Legal Implications and Challenges, 10 Cogent Social Sciences 2343195 (2024).

[42] A. Atabekov & O. Yastrebov, Legal Status of Artificial Intelligence Across Countries: Legislation on the Move, XXI ERSJ 773 (2018).

[43] Ulrike Franke & Paola Sartori, Machine Politics: Europe and the AI Revolution, ECFR (2019), https://ecfr.eu/publication/machine_politics_europe_and_the_ai_revolution/ (accessed 9/1/2025).

[44] Sayyed, Ibid, 41, p. 8.

[45] Lars Hornuf, Sonja Mangold & Yayun Yang, Data Protection Law in Germany, the United States, and China, in Data Privacy and Crowdsourcing 19 (2023), https://link.springer.com/10.1007/978-3-031-32064-4_3 (accessed 9/1/2025).

[46] Cristos Velasco, Cybercrime and Artificial Intelligence. An Overview of the Work of International Organizations on Criminal Justice and the International Applicable Instruments, 23 ERA Forum 109 (2022); Ana I. Cerezo, Javier Lopez & Ahmed Patel, International Cooperation to Fight Transnational Cybercrime, in Second International Workshop on Digital Forensics and Incident Analysis (WDFIA 2007) 13 (2007), http://ieeexplore.ieee.org/document/4299369/ (accessed 9/1/2025).

[47] AI Watch: Global regulatory tracker - G7 | White & Case LLP, (2024), https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-g7 (accessed 9/1/2025).

[48] Maneesha Mithal, Legal Requirements for Mitigating Bias in AI Systems, JD Supra, https://www.jdsupra.com/legalnews/legal-requirements-for-mitigating-bias-3221861/ (accessed 9/1/2025).

[49] Yoon Chae, US AI Regulation Guide: Legislative Overview and Practical Considerations, Connect On Tech (2019), https://www.connectontech.com/us-ai-regulation-guide-comprehensive-overview-and-practical-considerations/ (accessed 9/1/2025).

[50] Troy Griggs & Daisuke Wakabayashi, How a Self-Driving Uber Killed a Pedestrian in Arizona, The New York Times, https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html (accessed 11/1/2025).

[51] Highway Accident Report: Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018, (2019), https://trid.trb.org/View/1751168 (accessed 11/1/2025).

[52] Arizona Criminal Code, §13-105.

[53] Tom Krisher & Stefanie Dazio, Associated Press, L.A. County Felony Charges Are First in Fatal Crash Involving Tesla’s Autopilot, Los Angeles Times (2022), https://www.latimes.com/california/story/2022-01-18/felony-charges-are-first-in-fatal-crash-involving-teslas-autopilot (accessed 11/1/2025).

[54] Alice Giannini, United States Report On Traditional Criminal Law Categories And AI, (2024), https://www.penal.org/sites/default/files/files/A-01-24.pdf.

[55] Computer Fraud and Abuse Act (CFAA), 18 U.S.C. §1030.

[56] Ryan Abbott & Alexander Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, in Legal Aspects of Autonomous Systems 83 (Dário Moura Vicente, Rui Soares Pereira, & Ana Alves Leal eds., 2024), https://link.springer.com/10.1007/978-3-031-47946-5_6 (accessed 11/1/2025).

[57] Van Buren v. United States 141 S. Ct. 1648, 1652 (2021).

[58] Electronic Communications Privacy Act (“ECPA”), 18 U.S.C. § 2702.

[59] Wiretap Act, 18 U.S.C. § 2511.

[60] Alice Giannini, Ibid, 54, p. 41.

[61] U.S. Department of Commerce, India Artificial Intelligence (2024), https://www.trade.gov/market-intelligence/india-artificial-intelligence (accessed 11/1/2025).

[62] Craig Eggett, The Role of Principles and General Principles in the ‘Constitutional Processes’ of International Law, 66 Neth Int Law Rev 197 (2019).

[63] Sesha Kethineni, Cybercrime in India: Laws, Regulations, and Enforcement Mechanisms, 1 (2019).

[64] Ameya Kilara, Justification and Excuse in the Criminal Law: Defences Under the Indian Penal Code, 19 Student Bar Review 12 (2007).

[65] Sayyed, Ibid, 41, p. 5.

[66] Sanhita Chauriha, How the Digital India Act Will Shape the Future of the Country’s Cyber Landscape, The Hindu, Oct. 9, 2023, https://www.thehindu.com/sci-tech/technology/how-the-digital-india-act-will-shape-the-future-of-the-countrys-cyber-landscape/article67397155.ece (accessed 11/1/2025).

[67] Sayyed, Ibid, 41, p. 12.

[68] Pang Dongmei & Nikolay V. Olkhovik, Criminal Liability for Actions of Artificial Intelligence: Approach of Russia and China, 15(8) Journal of Siberian Federal University: Humanities & Social Sciences 1094-1107 (2022).

[69] Global Times, China’s First ‘AI Cheating’ Case in Video Games Publicly Adjudicated; Defendant Sentenced to Years of Imprisonment for Selling Illegal AI Plug-Ins - Global Times, https://www.globaltimes.cn/page/202405/1311806.shtml (accessed 12/1/2025).

[70] Pang & Olkhovik, Ibid, 68, p. 1099.

[71] Pang & Olkhovik, Ibid, 68.

[72] Pang & Olkhovik, Ibid, 68.

[73] C. Custer, I Tried to Make Microsoft’s Chinese Chatbot Racist. Here’s How She Stacked up to Tay., https://www.techinasia.com/tay-bad-microsofts-chinese-chatbot-racist (accessed 12/1/2025).

[74] Pang & Olkhovik, Ibid, 68, p. 1100.

[75] Vietnam+ (VietnamPlus), Indonesia seeks to become AI investment destination, Vietnam+ (VietnamPlus) (2024), https://en.vietnamplus.vn/indonesia-seeks-to-become-ai-investment-destination-post297443.vnp (accessed 12/1/2025); Indonesia attracts $1.9b AI investments, Tech in Asia (2024), https://www.techinasia.com/news/indonesia-attracts-19b-ai-investments (accessed 2/1/2025); Unlocking The Potential Of AI-Driven Growth In Indonesia, https://www.oliverwyman.com/our-expertise/insights/2024/oct/unlocking-potential-of-ai-driven-growth-in-indonesia.html (accessed 12/1/2025).

[76] Yaumi Ramdhani, Amiruddin, & Ufran, Countering Artificial Intelligence Crimes in a Criminal Law Perspective, 9 rrijm 167 (2024).

[77] Yaumi Ramdhani, Amiruddin, & Ufran, Ibid, 76.

[78] Yaumi Ramdhani, Amiruddin, & Ufran, Ibid, 76.

[79] Ulrike Franke & Paola Sartori, Ibid, 43, pp. 4-8.

[80] TRINH TIEN VIET (ED.), CRIMINAL RESPONSIBILITY AND PUNISHMENT, HANOI NATIONAL UNIVERSITY PUBLISHING HOUSE, pp. 411-412 (2022).

[81] Caldwell et al., Ibid, 11, pp. 3-5.

[82] Assoc. Prof. Dr. Trinh Tien Viet, Vietnam Criminal Law Facing the Impacts and Challenges of the Fourth Industrial Revolution, National Conference: “Judicial Reform in the Criminal Law Sector” (2021).

[83] Trinh Tien Viet, Models of Criminal Liability of Artificial Intelligence: From Science Fiction to Prospect for Criminal Law and Policy in Vietnam, 35 LS (2019), https://js.vnu.edu.vn/LS/article/view/4257 (accessed 19/1/2025).

[84] Pritam Kumar, Determination of Civil and Criminal Liability of Artificial Intelligence, 4 DMEJL 48 (2023).

[85] Viet, Ibid, 83, p. 10.

[86] Diego Aparicio & Kanishka Misra, Artificial Intelligence and Pricing, (2022), https://papers.ssrn.com/abstract=4149670 (accessed 10/12/2025).

[87] Matilda Claussén Karlsson, Artificial Intelligence and the External Element of the Crime: An Analysis of the Liability Problem (2017), https://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-58269 (accessed 11/1/2025).

[88] Ramy El-Kady, Towards Approving Rules for Criminal Liability and Punishment for Misuse of Artificial Intelligence Applications, 14 Journal of Legal and Economic Research (Mansoura) 875 (2022).

[89] C. Hakan Kan, Criminal Liability of Artificial Intelligence from the Perspective of Criminal Law, IJOESS 55 (2024).

[90] Berrin Akbulut, Yapay Zeka ve Ceza Hukuku Sorumluluğu [Artificial Intelligence and Criminal Law Liability], 27 HBV-HFD 267 (2023).

[91] Andrew Weissmann & David Newman, Rethinking Criminal Corporate Liability, 82 Indiana Law Journal 411 (2007), https://www.repository.law.indiana.edu/ilj/vol82/iss2/5.

[92] Syed Wajdan Rafay Bukhari & Saifullah Hassan, Impact Of Artificial Intelligence on Copyright Law: Challenges and Prospects, 5 647 (2024).

[93] Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harvard Journal of Law & Technology (2018).

[94] baochinhphu.vn, Establishing Principles for Responsible Development of Artificial Intelligence, baochinhphu.vn (2024), https://baochinhphu.vn/xay-dung-nguyen-tac-phat-trien-tri-tue-nhan-tao-co-trach-nhiem-102240705141138763.htm (accessed 13/1/2025); Together aiming at the development of responsible artificial intelligence | Vietnam National University, https://vnu.edu.vn/ttsk/?C1654/N34573/Cung-huong-den-viec-phat-trien-tri-tue-nhan-tao-co-trach-nhiem.htm (accessed 13/1/2025).

[95] Velasco, Ibid, 46, pp. 110-111.

[96] Esmat Zaidan & Imad Antoine Ibrahim, AI Governance in a Complex and Rapidly Changing Regulatory Landscape: A Global Perspective, 11 Humanit Soc Sci Commun 1 (2024).

[97] NVIDIA Expansion into Vietnam: Potential for AI Sector Growth, Vietnam Briefing News (2024), https://www.vietnam-briefing.com/news/nvidia-expansion-into-vietnam-potential-for-ai-sector-growth.html/ (accessed 13/1/2025).
