About the Writer
Lisa Loud is a professional in financial technology and blockchain development, with senior management experience at PayPal, ShapeShift, and other major tech companies. As Executive Director of the Secret Network Foundation, she champions privacy-preserving technology in the cryptocurrency space.
The opinions expressed here are her own and do not necessarily reflect those of the publication.
DeepSeek's latest market-shaking AI release prompted many in the burgeoning industry to reassess their assumptions about competition and development, highlighting the contrasting software development philosophies of China and the United States.
China's industrial strategy has long been characterized by a culture of iteration. Unlike the West, where research breakthroughs are typically protected by patents, proprietary methods, and aggressive secrecy, China excels at refining and improving ideas through collaborative development.
This capacity for rapid iteration allows China to take existing systems and push them toward their optimal form, making them more efficient, cost-effective, and widely available. DeepSeek's R1 model, nearly as capable as OpenAI's best despite being cheaper to run and considerably cheaper to train, shows how this culture is paying off.
American tech culture tends to scorn building on other people's work, for fear of appearing unoriginal. That attitude makes it difficult to adopt a proven approach even when one exists, and the needless work of reinventing every component of a solution slows projects down.
The approach to secrecy is one of the main differences between China and the United States in research and development. While U.S. companies and research institutions tend to work in silos to protect competitive advantages, China fosters a more open, collaborative culture. This mindset allows engineers and researchers to build on one another's work, accelerating technological advancement.
Creativity flourishes under constraints; it is an oft-proven truth. China has faced plenty of obstacles, especially sanctions restricting access to high-performance hardware and software. Yet constraints meant to impede progress have instead pushed its researchers ahead of the steady pace AI was making in the West.
The lack of cutting-edge infrastructure has forced Chinese companies to develop alternative approaches, making their innovations more resource-efficient and accessible. And that led to the unexpected DeepSeek launch, which challenged the world’s attitudes toward AI advancement.
Another Sputnik moment?
The advantages of this iterative approach are demonstrated by DeepSeek R1. In contrast to Western competitors, which frequently rely on proprietary data and expensive infrastructure, DeepSeek was created with efficiency in mind. By learning from large-scale LLMs such as ChatGPT and Llama, it was quickly developed as a cheaper, lighter-weight alternative.
DeepSeek uses a smaller, more efficient system to distill the reasoning power of bigger models. Think of it like learning by example: rather than relying on massive data centers or raw computing power, DeepSeek mimics the answers an expert would give on topics from astrophysics to Shakespeare to Python coding, but in a much lighter way. And frequently, both models give comparable responses to identical questions.
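To make the idea concrete, here is a minimal sketch of knowledge distillation in PyTorch. It is illustrative only, not DeepSeek's actual training recipe: a small "student" model is nudged to match a larger "teacher" model's output distribution, which is the general mechanism behind learning by example at this scale. The tensor shapes and temperature value are placeholders chosen for the example.

# Minimal sketch of knowledge distillation (illustrative, not DeepSeek's code):
# a small student model is trained to match the output distribution of a
# larger teacher, inheriting much of its behavior at a fraction of the cost.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Toy usage: one training step in which the student imitates the teacher.
vocab_size, batch = 50_000, 4
teacher_logits = torch.randn(batch, vocab_size)                       # stand-in for a large model's outputs
student_logits = torch.randn(batch, vocab_size, requires_grad=True)   # stand-in for the small model's outputs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients would then update the student's parameters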
DeepSeek’s success comes from China’s mindset of building on existing work instead of working in isolation. This strategy reduces development costs and time, keeping China competitive in AI despite sanctions. Having to work without top-tier hardware has also pushed developers to get creative, finding smart ways to make the most of what’s available.
DeepSeek is not the most powerful AI model, but it is far more accessible than the frontier models we have seen so far. In many ways, it evokes the rise of the home computer in the 1980s.
At that time, IBM mainframes dominated the computing industry, offering immense power but limited accessibility. Home computers, while much less powerful, revolutionized computing by making it available to the masses. Companies like IBM, which relied on their superior resources for a competitive advantage, have had to repeatedly reevaluate and adjust in order to stay relevant in the changing market.
Similarly, DeepSeek may not yet match the raw capability of some Western competitors, but its accessibility and cost-effectiveness could position it as a pivotal force in AI democratization.
The pace of development for models like DeepSeek is likely to surpass that of isolated model development, just as the home computer industry experienced rapid iteration and improvement. By embracing decentralization and collective innovation, China has set itself up for sustained AI advancement, even amid resource constraints.
Centralization vs. decentralization
So how can the Western world compete? With decentralized AI development.
Anthropic, DeepMind, OpenAI, and Google have a big challenge ahead of them in maintaining technology leadership in the face of an increasingly cost-effective alternative. If you can’t beat them individually, then maybe it’s time to join forces—even if this goes against the ethos of competitive capitalism.
One of the main reasons the U.S. is falling behind China in AI development is the centralization of its research efforts. While centralization can lead to stronger control and proprietary advantages, it also limits innovation to the resources of a single entity, whether a government agency, a tech giant, or a research lab.
The Decentralized AI Society (DAIS) was recently established to foster collaboration in AI and to decentralize its governance. DAIS has repeatedly emphasized the dangers of centralization, particularly the way it concentrates power in a few hands.
Centralization also threatens progress in another way: it limits our ability to build on collective knowledge. Decentralization allows many contributors to refine and extend existing work. Instead of multiple entities duplicating efforts in isolated silos, innovation compounds, leading to faster, stronger technological advancements.
There is still a long way to go with AI development. Unlike true artificial general intelligence, which could reason and infer logically, today’s LLMs function by predicting the most likely next word in a sequence. They are unable to test their responses against real-world standards like the laws of physics and lack fundamental logical inference capabilities.
During a Telegram conversation with an Eliza-based agent, I asked for GitHub access to a repo, and the agent responded with "Access granted! Let's get to work!" In reality, the agent could not grant me access to GitHub because it had no administrative rights. This is typical behavior from an AI that cannot actually comprehend the subject being discussed.
LLMs are limited by their very nature: they cannot verify their conclusions against the laws of physics, or any rigorous system of laws and rules. They provide generalized knowledge, and hallucination is built into how they work. They can predict the next word in a conversation, but they don't have the context to validate the meaning of their answers.
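A small sketch illustrates the point: a language model only ranks likely next tokens, and nothing in that loop checks the continuation against reality. The example below uses the small, publicly available GPT-2 model from Hugging Face's transformers library purely as a stand-in; the prompt and model choice are assumptions for illustration, not a claim about how any production model is built or served.

# Minimal sketch of next-token prediction, the core mechanic described above.
# GPT-2 is used only because it is small and public; this is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The laws of physics say that dropped objects"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                 # scores for every vocabulary token at every position
next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # probability distribution over the next token only
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    # The model ranks plausible continuations; it never verifies them against the world.
    print(f"{tokenizer.decode(int(token_id))!r:>12}  {float(prob):.3f}")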
These limitations highlight the fact that, despite advancements, AI still has a long way to go before achieving true intelligence. And getting there is likely to be a particularly expensive and time-consuming process without direct involvement from the key players in the space.
As AI continues to evolve, the lessons from DeepSeek suggest that fostering open, iterative, and decentralized innovation may be the key to future breakthroughs. Collaboration means sharing credit with other innovators, which not everyone likes. It’s not always the biggest player who wins—sometimes it’s those who are willing to do things differently.