A Third Path For AI Beyond The US-China Binary


    What if the future of AI isn’t defined by Washington or Beijing, but by improvisation elsewhere?

    Beatrice Caciotti for Noema Magazine

    HANOI, Vietnam — It’s June 20. There’s a velvet backdrop, LEDs pulsing cyan and a giant banner declaring the launch of a new national AI alliance. Truong Gia Binh, chairman of FPT Corp., one of the country’s leading IT and telecommunications companies, strides onstage, quoting a wartime slogan — “Nothing is more precious than independence and freedom” — before casting artificial intelligence as Vietnam’s next great battle, an existential fight for the country’s future. Around him are rectors from top universities, ministers hunched over sleek tablets, startup founders livestreaming from the aisles.

    The question everyone expects, the one the world keeps asking, hangs in the air: Is it the U.S. or China? Which AI superpower will Vietnam choose?

    But Binh flips the script. FPT, he announces, will open its “core tech stack” — language models, cloud infrastructure, even training data — to any domestic partner who wants to build with it. He outlines three commitments: FPT will open a national sandbox for controlled experimentation, aim to create a locally trained GPT-style model by year’s end and support a state-backed push to teach AI in schools. These three commitments are a refusal — “We don’t stand on the shoulders of giants,” an FPT executive later tells the crowd. “We walk beside them.”

    The applause swells.

    In computing, a “stack” is simply the layered architecture that makes technology run: chips and circuits at the base, then operating systems, then applications, all the way up to the user interface. Each layer builds on the one below. Decisions made at one level cascade upward. Which is why choices about the stack are never just technical — they decide who holds power, and who must follow.
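    The cascade the author describes can be sketched in miniature. The snippet below is purely illustrative — a toy data structure, not any real system — showing how a constraint imposed at the lowest layer (say, an export-controlled chip) surfaces at every layer above it:

```python
# Illustrative only: a toy model of a technology "stack", where each layer
# builds on the one below and constraints cascade upward.
stack = [
    ("hardware", {"export_controlled": True}),  # hypothetical constraint at the base
    ("operating_system", {}),
    ("application", {}),
    ("user_interface", {}),
]

def cascade(stack):
    """Propagate each layer's constraints to every layer above it (bottom-up)."""
    inherited, effective = {}, {}
    for name, constraints in stack:
        inherited = {**inherited, **constraints}
        effective[name] = dict(inherited)
    return effective

effective = cascade(stack)
# A decision made at the hardware layer is inherited all the way up:
assert effective["user_interface"]["export_controlled"] is True
```

Nothing chosen at a higher layer can undo what the base imposes — which is the sense in which stack choices are political, not merely technical.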

    Banners at the FPT event promise an open, comprehensive and state-regulated electronic ecosystem. The familiar poles of AI politics — Silicon Valley’s proprietary platforms and Beijing’s centralized infrastructure — are never named, but everyone in the room understands what is being contested: who gets to define the terms of intelligence itself. The stakes are stack-level choices — black-box dependence or modular improvisation; opacity or legibility; someone else’s roadmap or a sovereign design of your own. In practical terms, the decision is the difference between paying for access to OpenAI’s closed API and fine-tuning an open-weight model on a café’s shared GPU rig — between consuming intelligence as a service and composing it as an act of sovereignty. One rents a mind, the other trains its own in the wild.

    This is, in essence, a claim to AI sovereignty: the ability to build and govern infrastructures on Vietnam’s own terms while still enabling cross-border flows of data, talent and computation. AI sovereignty here does not mean isolation, but authorship — deciding which data, models and rules shape, and will shape, how machine intelligence is built and deployed.

    In short, Vietnam is not picking sides. It is building a third stack.

    Infrastructural Nonalignment

    Many view AI geopolitics as a culture war between Silicon Valley’s libertarian individualism and China’s communitarian authoritarianism. That familiar tableau of cowboy disruptors and state-backed titans still lingers in op-eds, but it obscures a quieter territorial redrawing occurring at the infrastructural level. Baidu, long positioned as China’s national champion in AI, has been eclipsed by a wave of leaner, more research-oriented Chinese labs such as Z.ai (formerly known as Zhipu AI), Baichuan Intelligence and MiniMax. These newer actors release open-weight models and invite scrutiny, blurring the assumed line between authoritarian opacity and democratic transparency.

    The sharper fault line now runs not between nations but between infrastructures — between the guarded logic of proprietary systems and the unruly emergence of open-weight models; between centralized command and distributed improvisation; between the doctrine of safety and the discipline of scrutiny. If the frontier models of OpenAI, Anthropic and Google DeepMind have largely represented the logic of enclosure, then more open-weight projects like DeepSeek and Meta’s Llama — not fully open-source but released in ways that allow retraining and scrutiny — gesture toward a counter-current that is partial, constrained, yet powerful in its transnational diffusion. Even as OpenAI has more recently released “open models,” the broader movement of open-weight diffusion cuts across borders, destabilizing the notion that AI will crystallize into two superpower-led blocs.

    In other words, culture is not what is being exported; technology stacks are.

    What travels across borders aren’t values per se, but configurations of infrastructure: model weights, licensing schemes, data regimes, cloud dependencies and developer ecosystems. These are the substrates through which AI systems are made legible, tractable and governable. It is these substrates — rather than grand narratives about freedom or control — that shape how knowledge is produced, validated and operationalized.

    “Is it the U.S. or China? Which AI superpower will Vietnam choose?”

    Vietnam’s position in this landscape is telling. Neither fully aligned with the U.S. nor China, it is assembling a third stack that draws selectively from both sides while cultivating its own infrastructural sovereignty. Through state-linked firms like FPT, domestic LLM research and partnerships with groups like U.S.-based Nvidia, Japan’s NTT Data Group and China’s Huawei that straddle geopolitical divides, Vietnam exemplifies a mode of infrastructural nonalignment: modular, adaptive and deeply attuned to the asymmetries of global AI. In declaring its own stack, Vietnam claims the right to decide how reality itself is translated into machine-readable form — what becomes visible, knowable and actionable to AI systems.

    FPT’s stack is beginning to take discernible form. Unveiled in Japan in late 2024, the company’s AI Factory, a high-performance computing hub designed to train and deploy large AI models, is anchored by California-based Nvidia’s accelerated computing platform, equipped with thousands of H100 and H200 superchips. Wrapped in the Nvidia AI Enterprise suite and the NeMo framework, this infrastructure undergirds FPT’s growing portfolio of Vietnamese-language models and vision systems. These models are served through FPT Smart Cloud, the firm’s sovereign cloud platform, which allows for flexible deployment: on-premises, on local servers or devices close to where data is generated, or within domestic data centers.

    The architecture is modular by design, satisfying Vietnam’s data-residency requirements by localizing storage and compute within Vietnam’s jurisdiction, while containerizing models and APIs so they can be deployed across borders. Backed by Japanese capital from Sumitomo Corp. and SBI Holdings, a Tokyo-based financial services group, FPT is also investing in regional data infrastructure to expand storage and processing capacity across Southeast Asia, along with sector-specific tuning programs that adapt models for use in industries like healthcare, finance and transportation. Here we have not a singular, unified stack but a composable system: compute, weights and cloud services stitched together in a form that can be tuned to whatever context is relevant — health, finance, mobility — at home or abroad, on Vietnam’s own terms.

    At the top of its stack, FPT has introduced a pair of platforms — AI Studio and AI Inference — designed to give Vietnamese developers and enterprises greater control over how AI models are adapted and deployed. Launched in April, these tools extend the AI Factory’s reach beyond infrastructure into application and authorship. AI Studio provides a fine-tuning environment built on Nvidia’s NeMo framework, a toolkit for customizing and retraining large models like DeepSeek-R1 and Llama 3.3 on internal or domain-specific datasets. AI Inference, by contrast, serves as the production layer, offering a catalogue of pretrained models — over 20 at launch — available via API for rapid integration into enterprise workflows. Both operate atop the same GPU backbone as the Factory itself, ensuring continuity between experimentation and execution.
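    Fine-tuning an open-weight model need not mean retraining all of its parameters. A minimal sketch — assuming a LoRA-style low-rank update for illustration, not FPT’s actual training pipeline — shows the core idea: the frozen base weights W are augmented by a small trainable product B·A, so adaptation touches only a sliver of the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weights from an open-weight model (toy dimensions for clarity).
d = 8
W = rng.normal(size=(d, d))

# LoRA-style adapter: only A and B are trained, with rank r much smaller than d.
r, alpha = 2, 1.0
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))  # conventional init: B = 0, so training starts exactly at W

def forward(x):
    # Effective weight is W + alpha * (B @ A); the base model is never modified.
    return x @ (W + alpha * B @ A).T

x = rng.normal(size=(d,))
# With B initialized to zero, the adapter is a no-op before any training.
assert np.allclose(forward(x), x @ W.T)
```

The design point this illustrates: whoever holds the weights can graft local knowledge onto them cheaply, which is precisely the kind of authorship a closed API withholds.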

    The result is a stack that is assembled, rather than monolithic, with domestic platforms, a sovereign cloud, high-performance compute and transnational research folded into its modular system. Each of its layers carries different dependencies, but together they allow Vietnam to hold authorship over the shape, orientation and reach of its AI infrastructure — one that’s stable enough to anchor public deployment and open enough to adapt or travel. Together, these platforms complete the circuit: from compute to model to use-case, all within an architecture that remains legible, governable and adaptable. The ambition here is not just technical performance, but epistemic discretion — the ability to decide which models are retrained, how they are tuned and for whom they are made to speak.

    Platforms: FPT AI Studio, FPT AI Inference
    Models: DeepSeek-R1, Llama 3.3
    Data: publicly available, domain-specific
    Compute: AI Factory, FPT Smart Cloud

    An Instance Of The Third Stack

    In parallel, FPT has continued to deepen its strategic alignment with Mila, the Quebec AI institute founded and advised by AI pioneer Yoshua Bengio. The partnership, which began in 2020 and was renewed in 2023, links Vietnam’s largest tech conglomerate with one of the world’s leading research centers in deep learning and responsible AI. On paper, the collaboration is focused on advancing large language models (LLMs) and natural language processing. But its significance runs deeper: It is a quiet counterexample to the prevailing narrative of AI as an ideological battleground. Rather than choosing between spheres, FPT is building connective tissue — embedding Vietnamese researchers within Mila’s lab, circulating knowledge across borders and shaping governance standards from a position that is neither defensive nor derivative. In a landscape where openness is often declared but rarely reciprocal, this is what infrastructural diplomacy can look like.

    “Neither fully aligned with the U.S. nor China, it is assembling a third stack that draws selectively from both sides while cultivating its own infrastructural sovereignty.”


    Vietnam is not alone in forging a third path. In Malaysia and Indonesia, the development of Nusantara-style AI strategies — which frame AI design around the region’s plural linguistic and cultural infrastructures by prioritizing multilingual corpora and local cultural knowledge — reflects an ambition to build systems attuned to these countries’ extraordinary linguistic and cultural diversity; it is an infrastructural project as much as a symbolic one. The United Arab Emirates (UAE), meanwhile, has positioned itself as a regional vanguard through the release of Falcon, a series of open-weight language models that signal both technical capacity and the sovereign intent to develop and license its own models rather than depending on U.S. or Chinese systems. Taken together, these initiatives point to a wider shift: not a rejection of global AI paradigms, but a refusal to be wholly contained by them.

    If infrastructure, not ideology, is travelling, how do export-controlled chips, data residency laws or safety regimes map onto a nonaligned stack — one built outside the U.S.–China duopoly that draws from both, but is governed locally? What happens when the aspiration to sovereign configuration runs into the hard limits of material interdependence — when the cloud is not local, the chip is embargoed or the licensing regime bakes in foreign oversight? The challenge for what I call nonaligned builders — those operating in third-party countries outside of the U.S.-China binary — is not just to assemble stacks that work, but to govern stacks that remain legible under pressure. Their task is to hold open the space between technical borrowing and epistemic capture. In this emerging order, the real question is not whether nations can build independently, but whether they can stay in control of what their systems are allowed to know, remember and act upon. The third stack holds an exquisite contradiction: It both evades and entangles itself with old powers.

    Vietnam’s advantage may lie not in self-sufficiency but in strategic bricolage: The ability to assemble a working stack from mismatched parts, to fine-tune open weights from both China and the U.S. on GPUs bought in part with Japanese capital, and to deploy them on sovereign cloud infrastructure that complies with Vietnamese law but draws from global standards. In this way, the third stack is not sealed off but selectively permeable: borrowing where it must but governing what it borrows — even if stitching together components from competing powers requires constant negotiation of technical standards and political constraints.

    The third stack may never match the scale of its U.S. or Chinese counterparts, but that is the point. Its advantage lies in asymmetrical scaling: in tuning for context, licensing with constraint and extending its reach not by dominating the field, but by slipping beneath it. Consider SemiKong, an open-weight large language model developed for the semiconductor industry through a collaboration between FPT Software, Silicon Valley-based Aitomatic and Tokyo Electron Ltd. Built on Meta’s Llama 3.1 architecture, SemiKong outperforms general-purpose models like GPT in sector-specific tasks — an illustration of how sovereign capability can be exercised not through scale, but through precision. By contributing to an open-source, transnational effort that aligns with its own industrial priorities, Vietnam inserts itself not as a peripheral adopter but as a co-author of global AI infrastructure. This is asymmetry as strategy: composing relevance not by competing at the center, but by accruing influence at the edge.

    Even when components like chips, frameworks or toolkits are foreign, Vietnam retains leverage through procedural sovereignty: the ability to constrain how data moves, where models are trained and under what terms systems are deployed. If the lower layers of the stack remain entangled in foreign supply chains and architectures, the upper layers offer room to assert rules, policies and frictions that subtly reroute control.

    The passage of Vietnam’s first-ever Law on the Digital Technology Industry in June marks a turning point in this strategy. While the EU builds its AI regime through risk tiers — classifying systems as unacceptable, high-, medium- or low-risk with corresponding obligations — and the U.S. leans on voluntary disclosure, where companies pledge transparency rather than comply with binding rules, Vietnam’s approach is more infrastructural: classifying digital systems as strategic assets and binding them to pre-approval requirements, domestic data handling and sectoral oversight. The law does not aim to lead through values or scale, but through configuration — embedding sovereignty not in rhetoric, but in the mechanics of deployment.

    “Vietnam’s advantage may lie not in self-sufficiency but in strategic bricolage.”

    California-based Qualcomm’s establishment of its AI R&D center in Hanoi on June 10, however, reveals the entangled logic of infrastructural nonalignment. As Qualcomm’s third-largest facility worldwide — after India and Ireland — the Hanoi center is tasked with developing generative and agentic AI across domains ranging from smartphones and XR — the umbrella term for virtual, augmented and mixed reality — to automotive systems and the sundry connected devices known as the “internet of things.” At first glance, the move dovetails neatly with Vietnam’s national strategies on AI, semiconductors and digital transformation, with their emphasis on technology transfer, ecosystem development and workforce capacity. The partnership exemplifies Vietnam’s strategy of courting foreign investment while cultivating domestic sovereignty.

    But the familiar contradictions of such arrangements remain. What enters under the banner of knowledge exchange may calcify into dependency — on imported architectures, inherited standards, embedded design assumptions. This dynamic sharpened in an earlier move in April, when Qualcomm quietly acquired MovianAI, a Vietnamese generative AI spin-off from Vingroup’s VinAI lab, best known for its Vietnamese-language models and mobility systems. What looked like local capacity was, in the end, simply absorbed by a U.S. multinational company. The test, then, is whether Vietnam can transmute this influx of code and capital into sovereign capacity before the licenses and safety regimes around it harden into a new perimeter that encloses — or possibly imprisons — its third stack.

     

    But what is AI sovereignty? A posture, an imperative or a practicality? AI sovereignty, as it currently plays out outside of the U.S.-China binary, is not a banner-waving claim to territorial control; rather, it manifests as the quiet right to decide what counts as knowledge and how that knowledge shows up in the world. That is, an epistemological sovereignty. This sovereignty lives in the stack — in the choices about model weights, training data, licensing regimes and cloud dependencies that govern what becomes legible and what remains unseen. AI sovereignty, in practice, is a situated authorship of machine reasoning: an infrastructural claim over how the world is parsed and made actionable. When a polity engineers its own stack, it is in effect engineering an epistemic world of AI, shaping not the raw world itself but the way the world will be disclosed to users, regulators and neighboring states.

    The kind of AI sovereignty that the Vietnamese nonalignment model enacts is an act of epistemic refusal through infrastructural design. By refusing to license its perception of reality from OpenAI, AWS or Alibaba Cloud, Vietnam reserves the right to set the horizon of what can be perceived, queried and disputed within its own techno-social field. The third stack becomes a sovereign entity — a self-authored architecture of appearance. Every domestic corpus curated, every open-weight checkpoint released under a local license, is a clause in an epistemic constitution.

    Here, the stakes outrun the vocabulary of “localization” or “self-reliance.” The question is no longer whether Vietnam can train a Vietnamese GPT, but whether it can dictate the contours of Vietnamese reality as machines come to perceive it. In other words, sovereignty is authorship of the perceptual field itself. What the development lexicon still dismisses as “local innovation” is, in truth, a claim to epistemic self-determination.

    This is where licensing minutiae — the terms that determine how a model may be used, modified or shared — come into play. Unlike API keys, which simply permit or deny access, licenses articulate regimes of use. They encode norms around attribution, commercial prohibition or modification, transforming technical infrastructure into a site of governance.

    A Creative Commons “CC-BY-NC” license, for instance, allows others to reuse a model with attribution but bars its commercial use. An open-weight model is not just cheaper; it is epistemically plastic. It can be retrained, audited or forked to accommodate dialects, taboos or regulatory mandates that proprietary code cannot express. The license then ultimately determines who may reshape what AI comes to mean. With generative models, where authorship has shifted from the creator to the system, licensing becomes a mechanism of epistemic control. Whoever controls a license is not just managing software — they are drawing the perceptual boundaries of the machine.
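    A license’s “regime of use” can even be made machine-checkable. The sketch below is illustrative only: the terms are a hypothetical CC-BY-NC-style profile encoded as data, not the text of any real license, and the field names are my own:

```python
# Illustrative only: encoding hypothetical CC-BY-NC-style terms as checkable policy.
LICENSE = {
    "attribution_required": True,
    "commercial_use": False,   # NC: non-commercial
    "modification": True,      # may be fine-tuned or forked
}

def permitted(use):
    """Check a proposed use of the model against the license's terms."""
    if use.get("commercial") and not LICENSE["commercial_use"]:
        return False
    if use.get("modified") and not LICENSE["modification"]:
        return False
    if LICENSE["attribution_required"] and not use.get("credits_author"):
        return False
    return True

# A credited, non-commercial fine-tune passes; a commercial deployment does not.
assert permitted({"commercial": False, "modified": True, "credits_author": True})
assert not permitted({"commercial": True, "credits_author": True})
```

The point of the exercise: once terms are data rather than prose, the license becomes part of the stack itself — enforceable at deployment time, not just in court.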

    This bleeds into the policy domain: Exporting stacks is a struggle over cognitive jurisdiction. When the UAE releases Falcon weights — numerical parameters that shape how a model reasons — or Indonesia funds Nusantara-centered tokenizers — tools that determine how language is segmented and interpreted — they are exporting a template for how the world will appear to a machine and, by extension, to everyone downstream who relies on that machine’s judgment. Sovereignty travels as epistemic infrastructure long before it surfaces as policy.
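    The stakes of tokenization can be made concrete. A minimal sketch, not any country’s actual tokenizer: the same Vietnamese phrase yields very different unit counts depending on the segmentation scheme, and diacritics make byte-level segmentation — the raw substrate for models trained mostly on English — comparatively expensive:

```python
# Illustrative only: three naive ways to segment the same Vietnamese phrase.
# Real tokenizers (e.g. BPE) learn their units from a corpus; which corpus
# they learn from decides how efficiently a given language is encoded.
phrase = "Trí tuệ nhân tạo"  # "artificial intelligence" in Vietnamese

words = phrase.split()                     # word-level units
chars = list(phrase)                       # character-level units
raw_bytes = list(phrase.encode("utf-8"))   # byte-level units

assert len(words) == 4
assert len(chars) == 16       # one unit per character
assert len(raw_bytes) == 22   # diacritics cost extra bytes in UTF-8
```

A tokenizer trained on a Vietnamese corpus would learn compact units for these syllables; one trained on English would fragment them — a small mechanical fact with large downstream consequences for cost, fluency and whose language a model handles gracefully.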

    “When a polity engineers its own stack, it is in effect engineering an epistemic world of AI, shaping not the raw world itself but the way the world will be disclosed to users, regulators and neighboring states.”

    When contrasting proprietary safety regimes with open-weight scrutiny, the fault line is not merely technical — about how code is written or secured — but epistemic: about what knowledge the system encodes, which assumptions it permits, and whose realities it can recognize. The critique is well-rehearsed: mainstream agentic AI is born of surveillance, prying open everything from calendars to encrypted chats and funneling the take through vendor-controlled clouds. Proprietary stacks promise protection but lock the perceptual machinery of AI — the systems that decide what it can see, process and remember, along with their data trails — behind contractual walls. Open-weight models flip that asymmetry with their (partially) auditable code, local fine-tuning and domestic data paths that keep the epistemological workshop at home, enabling the polity that lives with the system to inspect, contest and reshape what it is allowed to know.

    The third stack movement is, at heart, a contest over who gets to script the next layer of the world’s intelligibility. As scaffolding, it stands at the threshold of perception, where infrastructure sets the conditions of appearance. Every technical detail — chips, weights, data residencies — reads as a clause in a deeper argument: Sovereignty is the power to decide what appears to machines — and, through them, to the humans and institutions that depend on their judgments — and nonaligned stacks make that power visible precisely because they embody a third way, asserting authorship outside the U.S.-China duopoly.

    AI sovereignty here refers less to territorial command than to infrastructural authorship. It names the capacity to decide what kinds of knowledge are encoded, which models speak and under what terms. Rather than a political slogan, it materializes in the stack itself — in weights, training data, licensing regimes and dependencies. In this sense, sovereignty is enacted as a design choice, shaping what becomes visible to machines and, by extension, to societies.

    Epistemic Dissonance

    This global divergence is producing what we might call epistemic dissonance: not just disagreement about values or governance, but incompatibilities at the level of what can be known, predicted or rendered actionable by AI systems. Each stack encodes a distinct epistemic posture — one that determines how knowledge is structured, what data is treated as relevant and which forms of uncertainty are permitted or pre-emptively excluded.

    Proprietary LLMs, for instance, are often trained on vast but opaque corpora: Reddit threads, scraped web content, undisclosed licensing agreements. They are often optimized for scale, fluency and legal insulation rather than contextual fidelity. These models are brittle to local nuance, struggle with underrepresented dialects and tend to encode dominant cultural logics even as they claim universality. By contrast, emerging localized language models — such as those trained in Vietnam, Indonesia or the UAE — often work with culturally specific corpora: state archives, vernacular media, annotated speech from linguistic minorities. Their parameters may be smaller, but their epistemic frame is tighter. They are not just less powerful; they are differently calibrated.

    This is not simply a question of bias or inclusion. It is a structural matter: what kinds of questions a model is designed to answer, what counts as a valid input and whose epistemologies are legible within its architecture. We might think of each stack as offering different epistemic affordances — a term borrowed from design to describe the range of actions a system enables or inhibits. Some stacks are built to predict consumer preferences or automate content generation; others are tuned to support governance, translation or educational tasks in linguistically diverse environments. What gets excluded — polysemy, dialect variation, historical opacity — is just as crucial as what gets encoded, because these choices determine whose worlds become legible to machines — and by extension, retrievable to future humans through them — and whose are consigned to obscurity.

    Stack governance, too, reflects these epistemic fractures. Proprietary stacks tend to obscure: their weights are closed, their decision-making pipelines buried behind APIs and privacy disclaimers. They produce legibility for the end-user while rendering themselves illegible to regulators and the public. In contrast, open-weight or semi-open stacks may fragment the field: allowing local actors to fork, fine-tune or redeploy models in ways that increase heterogeneity — and with it, epistemic pluralism. But this pluralism comes at the cost of consistency, interoperability and, in some cases, centralized safety oversight — risks that can leave systems unstable, create frictions across borders, and open governance gaps precisely where alignment is most needed.

    “The third stack movement is, at heart, a contest over who gets to script the next layer of the world’s intelligibility.”

    The result is not a single AI world, but overlapping cognitive infrastructures, each generating its own truths, exclusions and forms of abstraction. This is what makes Vietnam’s infrastructural improvisations so significant: They are not just a geopolitical hedge; they offer a glimpse into the emergent politics of epistemic design — where building a stack means deciding not only how intelligence works, but whose world it recognizes.

    For example, Vietnam’s PhoGPT, a 4B-parameter open-source model, was trained from scratch on a 102-billion-token Vietnamese corpus — including web-crawled news, legal texts, books, Wikipedia, medical journals and more — resulting in a model attuned to Vietnam’s phrasing, governance and linguistic norms rather than Reddit-centric English idioms. In contrast, U.S. models like OpenAI’s GPT-4 are trained on massive English-language corpora scraped from sources such as Reddit, Wikipedia, and licensed publishers, optimizing them for global English fluency but leaving them brittle with local nuance and underrepresented dialects. Meanwhile, Baidu’s Ernie Bot redirects queries on Tiananmen Square toward state-approved historical summaries, reflecting how its stack is calibrated to Chinese state information controls. These divergences show how different stacks literally decide which worlds become legible to machines, and which are erased.

    The Rise Of Agents

    This dissonance becomes more acute with the rise of AI agents — models that don’t merely answer prompts but pursue goals, make decisions and interact with digital or physical environments on behalf of users. As these agents move from lab demos into real-world workflows — coordinating tasks, navigating interfaces, acting autonomously — the epistemic stakes of stack design deepen.

    An agent trained on a proprietary U.S. stack might assume individual agency, default to English-language documentation or prioritize efficiency over negotiation. An agent built on a localized Vietnamese or Indonesian model, by contrast, might be embedded with different priors — attuned to collective coordination, informal hierarchies or context-sensitive constraints. These are not just behavioral quirks. They are epistemic scripts — coded assumptions about what the world is, how it works and how action within it should unfold.

    This contest over agentic scripts is already unfolding in China, where the development of local AI agents is accelerating at pace. In the West, attention has pivoted to GPT-4o’s conversational fluidity — and more recently to GPT-5’s benchmark-beating claims — but Chinese companies like Butterfly Effect, Alibaba, Zhipu and ByteDance are building systems that move past chat entirely. These agents execute, rather than merely respond. Designed to eventually interact across tightly integrated app ecosystems, they perform tasks, process forms and coordinate across services with minimal user input. Interfaces are mobile-first, frictionless and oriented around action rather than dialogue.

    This divergence is infrastructural rather than merely functional. These agents are trained on domestic data regimes, embedded within governance systems and calibrated to behavioral norms that depart sharply from the design assumptions of U.S.-led stacks. In the Chinese model, the agent is not a synthetic colleague or expressive companion. It is an operative node: a procedural intermediary within a platform stack where commerce, communication and administration blur.

    Agents In The Real World

    A few notes on the readiness and resistance of AI agents are in order. To understand how agents might materialize in practice, we must first understand the texture of the workflows they are meant to inhabit — and the uneven terrain of digital infrastructure, cloud uptake and process standardization that shapes their integration. AI agents have become the new object of desire in both technical and commercial imaginations. They take initiative, coordinate across systems and promise a shift from reactive tools to goal-driven collaborators. This shift, however, brings its own dissonance — especially as agents begin to traverse real-world workflows.

    A parallel tension emerges in Western enterprise circles. In the private equity world, agents are spoken of with urgency (“Must deploy across the portfolio”) or with skepticism (“There’s no measurable ROI”). But both positions flatten a third truth: Most companies are structurally unprepared. Agents that run continuously, interact across siloed systems and make autonomous decisions require foundational upgrades — stable APIs, interoperable data, well-mapped workflows. Without this substrate, autonomy becomes a liability. Systems buckle, provenance vanishes, pilots stall.

    As computer scientist Arvind Narayanan observes, technologists often confuse resistance with unreadiness. If the world hasn’t adopted agents at scale, it is not for lack of vision but because most infrastructures were never designed to support continuous, self-initiating computation. And more than that: Most jobs, like most systems, are not reducible to discrete tasks. The hardest-to-automate dynamics are often precisely those that evade formalization — at the edge of instruction, across tacit boundaries.

    “Vietnam’s infrastructural improvisations … offer a glimpse into the emergent politics of epistemic design — where building a stack means deciding not only how intelligence works, but whose world it recognizes.”

    This is where stack design re-enters. A Vietnamese or Indonesian agent, trained on local workflows, may encode different epistemic assumptions — informal consensus over explicit delegation, ambiguity-tolerant reasoning over strict logic. These differences are not bugs but adaptations to infrastructural realities. In this sense, nonaligned agents are not just alternatives, but artifacts of situated constraint, designed to operate within locally legible systems.

    The task, then, is not to mimic Silicon Valley’s agent paradigm, but to script agency from the bottom up — on top of architectures that can carry it and in languages that local systems can understand. Until then, every claim of intelligent delegation risks producing more opacity than autonomy.

    If models encode knowledge, agents execute it. They become emissaries of the stack that spawned them. The question is not just which models get built, but which agents get deployed — and in whose image. In this light, Vietnam’s third stack is more than a hedge against platform dependence: it is a rehearsal for a future in which AI agents — trained locally, governed modularly — enact a worldview not defined by Silicon Valley or Beijing, but by the granular, situated logics of a sovereign digital ecology.

    Sovereignty In Pieces

    There is a better question than which bloc Vietnam will choose: who decides what alignment can look like? Most countries will not build an AI stack from scratch; they will adopt, adapt and hybridize — assembling intelligence from components that are not entirely their own. The infrastructural bricoleur wires together a stack that belongs to neither hegemon. In the space between black-box dependence and infrastructural refusal, a new sovereignty is taking shape — one weight, one corpus, one fork at a time. The third stack is no local curiosity; it is a preview of how much of the world will build.

    The future of AI will not be charted by accelerationist slogans or neatly layered diagrams. It will surface — messy, uneven, tactical — from the friction of adaptation and the patient labor of coaxing disparate systems into dialogue. To read this terrain is to steer between hype and despair, tuning into the pulse of alignment: code splicing into cable, vision bending to vernacular, sovereignty assembled incrementally. Like a signal routed through stray relays, the coming architecture will glow with detours that seldom make headlines yet quietly redraw the map.

    Dang Nguyen is a writer and researcher of AI, culture and aesthetics. Her work traces the informal infrastructures and moral frictions of digital life, focusing on Southeast Asia and the technological practices that emerge under conditions of constraint. Dang is a Majority World Scholar at Yale Law School and an incoming Bellwether Scholar at the University of California, Berkeley School of Information.

    By Dang Nguyen - NoemaMag
