ARTIFICIAL INTELLIGENCE LAW 2025: A HISTORIC TURNING POINT OR A “COMPLIANCE TRAP” FOR VIETNAMESE STARTUPS?

The National Assembly’s passage of the **Artificial Intelligence Law on December 10, 2025** marks a significant milestone in the national digital governance strategy. Vietnam has become one of the earliest jurisdictions, after the EU with its AI Act, to enact a comprehensive AI law. The policy ambition is clear: to control risks, protect society, and shape a new order for artificial intelligence technology.

However, upon detailed analysis of the legal text from a technical perspective and comparison with international practices, a major question arises: **are we building a foundation for innovation, or inadvertently creating a “compliance trap” that could stifle our very own domestic AI ecosystem?**

Below are **three structural bottlenecks** that, in my view, could have long-term impacts on startups and Vietnam’s technological competitiveness if left unaddressed.

Firstly, **the flawed “humanization” in the definition of AI**. The law describes AI as having the ability to “understand,” “perceive,” and “reason,” concepts borrowed from human cognition. Technically, modern AI operates on statistical probability and function optimization; it has no will or consciousness. Attributing human qualities to machines not only misrepresents the nature of the technology but also muddies the assignment of legal liability, particularly with respect to subjective fault (*mens rea*), raising the risk that humans “hide” behind their algorithms.
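
To make that technical point concrete, the minimal sketch below (illustrative only; the logits and vocabulary are invented for this example) shows what a language model actually does at each generation step: it converts numbers into a probability distribution and picks a token. There is no “understanding” or “intent” anywhere in the computation, only function evaluation.

```python
import numpy as np

# Hypothetical scores ("logits") a language model might assign to four
# candidate next tokens. Values and vocabulary are invented for illustration.
logits = np.array([2.1, 0.3, -1.0, 0.8])
vocab = ["risk", "law", "model", "data"]

# Softmax turns raw scores into a probability distribution -- pure arithmetic.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "Generation" is simply selecting (or sampling) the most probable token.
next_token = vocab[int(np.argmax(probs))]
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```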

Secondly, **the risk of stifling the development of domestic large language models (LLMs)**. Article 7 of the Law strictly prohibits data collection that violates intellectual property law, yet Vietnam has no counterpart to the text and data mining (TDM) exceptions found in the EU and Japan, or the “fair use” doctrine in the US. Consequently, domestic AI startups face serious legal exposure if they independently collect data to train their own models, pushing them to depend on foreign Big Tech for technology and data instead of mastering the AI value chain.

Thirdly, **a compliance burden coupled with an illusion of technological control**. The conformity-assessment requirement and explainability principles for high-risk systems create a technical paradox: the more advanced the model (deep learning, for example), the harder it is to explain. Imposing an obligation of “absolute explainability” could push sectors such as healthcare toward less accurate models simply because they are easier to explain, thereby increasing the risk of misdiagnosis and directly contradicting the very goal of protecting people.
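
The trade-off can be sketched in a few lines. The example below uses a synthetic dataset and off-the-shelf models (both are assumptions made purely for illustration, not a claim about any real medical system): a shallow decision tree yields rules a regulator can read line by line, while a boosted ensemble of many trees is typically more accurate on complex data but offers no comparable rule trace.

```python
# Minimal sketch of the explainability/accuracy trade-off on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree: its decision rules can be printed and audited directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("Shallow tree accuracy:", tree.score(X_te, y_te))
print(export_text(tree, max_depth=2))  # human-readable if/else rules

# A boosted ensemble of many trees: usually more accurate on complex data,
# but there is no equivalent rule trace to hand to a regulator.
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("Boosted ensemble accuracy:", boost.score(X_te, y_te))
```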
