
DeepSeek V3, built by a Chinese quantitative trading firm, achieved performance comparable to GPT-4 and Claude at a fraction of the cost. The company leaned on aggressive optimization techniques, including a mixture-of-experts architecture and FP8 mixed-precision training, running on Nvidia H800 chips (a cut-down variant of the H100 designed to comply with US export controls).
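To make the mixture-of-experts idea concrete, here is a minimal sketch of top-k expert routing in PyTorch. It is an illustrative toy under assumed names (MoELayer, num_experts, top_k), not DeepSeek's implementation:

```python
# Minimal top-k mixture-of-experts layer (illustrative only; not DeepSeek's code).
# All names (MoELayer, num_experts, top_k, d_hidden) are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is a small feed-forward network. Only top_k experts run
        # per token, so most parameters stay idle on any forward pass --
        # this sparsity is the source of the compute savings.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])                 # (n_tokens, d_model)
        gate_logits = self.router(tokens)                   # (n_tokens, num_experts)
        weights, chosen = gate_logits.topk(self.top_k, -1)  # pick top_k experts per token
        weights = F.softmax(weights, dim=-1)                # normalize gate weights
        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)
```

DeepSeek V3's actual design is far more elaborate (fine-grained routed experts plus shared experts, activating roughly 37B of 671B total parameters per token), but the routing principle is the same: pay for only a small slice of the model on each token.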
This shattered the assumption that frontier AI requires billions of dollars and tens of thousands of GPUs. DeepSeek open-sourced its model weights, energizing an ecosystem of Chinese LLMs (Alibaba's Qwen, MiniMax, Moonshot AI) that compete on efficiency rather than raw compute.
The caveat: the reported $5.6M covers only the final training run, not the years of research and failed experiments behind it. But even accounting for that, China's AI labs are demonstrating that optimization under hardware constraints can produce world-class results. The implication for export controls is uncomfortable: sanctions may be accelerating, rather than slowing, Chinese progress on AI efficiency.
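For scale, the headline number is straightforward arithmetic over DeepSeek's own reported figures (assuming the technical report's stated 2.788M H800 GPU-hours and a $2/GPU-hour rental rate):

2,788,000 GPU-hours × $2/GPU-hour ≈ $5.58M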