In the AI Garden of Eden, Searching for the Red Apple: An Interview with Lan He
"This is the fate of humanity. In the future, society will inevitably not operate with humans at the center."

Lan He, founder of Prtonic, is a seasoned maker. He pursued his doctoral studies at the University of Manchester in the UK and has long been engaged in research on flexible electronics and printed semiconductor integrated circuits.
Interviewer: Jiang, contributor to Social Layer
Core Contributor: Loxia
Visuals & Editor: dca5 & Shiyu
Small but Beautiful: AI Coding
Jiang: I started using AI to write code last year, moving from autocomplete to generating tests, and later to full vibe-coding that can produce entire project modules. Once AI’s capability crosses a certain threshold, it changes things at a deep level. As a programmer, I can feel the shockwave of AI coding very clearly. How do you view AI’s impact on coding today?
Lan He: From a programmer’s perspective, the impact is enormous. But from a company’s perspective, things are far less ideal. Around 95% of companies are not seeing returns from their AI investments. If AI truly amplified a programmer’s capabilities to that degree, why haven’t we seen an explosion in the number of completed projects this year? Why are major companies still delaying deliveries? The core issue is this: a project’s overall progress is determined by its shortest plank. AI certainly lengthens the “programmer” plank, but all the other short planks remain — such as understanding user requirements, reviewing and evaluating new features, and conducting preliminary research.
From a PM’s point of view, if you could literally write down every piece of information about a project and tell the AI exactly how to execute it, then the PM would already be just one step away from writing the whole plan themselves. But writing down these clear, explicit steps is precisely what AI cannot yet do. It is a labor-intensive, iterative process.
To truly solve these problems, AI can’t remain just a tool; it needs to become a PM that understands complex systems. That requires not only a huge amount of knowledge, but also the long-term accumulation of subtle experience and context. Unless an AI can stay with you 24/7, talk to you constantly, and observe all kinds of real situations, it simply can’t learn that kind of tacit knowledge.
There’s also the issue of maintainability. In real-world large projects, the codebase is often a mess. When the original author leaves, the next engineer has to re-understand tens of thousands of lines of spaghetti code before they can take over. This long “loading time” remains one of the most concrete bottlenecks in software engineering today.
Jiang: I think today’s AI coding assistants mainly solve problems on three levels. First, they let PMs independently build simple prototypes. In the past, even small features required cross-functional discussions; now PMs can quickly create proof-of-concepts, which speeds up validation. Second, AI brings some acceleration to regular development, though it’s not a game-changing increase. Third, there’s the rise of “vibe coding”: you describe an idea in a few sentences and get a runnable app. It works for urgent needs, but the results are basically unmaintainable and hard to extend. AI can generate thousands of lines of code from a short prompt, making it impossible to review or debug properly. Most apps built this way are small, elegant, and very narrow in function, still far from real engineering.
For anything more complex like planning large databases or software systems, AI is still not capable. It can’t truly understand context or manage system-level architecture. So for now, AI solves peripheral issues in software development, not the core bottlenecks. The true heart of software engineering is exploring solutions, coordinating within teams, launching to market, and interpreting feedback.
That said, at a micro level, AI coding agents are already very powerful, and they are reshaping the practice of software development today.
Lan He: And these are exactly the parts AI still can’t do. It needs not only good “eyes” to see what’s going on, but also the ability to proactively ask the right questions, and it simply can’t yet. So when people talk about AI bringing deep, industry-level transformation, I don’t think we’re there at all.
Coding is currently AI’s most mature application, and it truly makes programming easier. But if we talk about “explosive growth” or “fundamental change,” it depends on what metric we use: number of apps, their quality, or whether projects ship on time? None of these indicators have meaningfully improved, which makes AI’s real value hard to quantify.
Many people believe AI agents have already “taken off” this year, but in reality they’re still mostly generating market reports, writing summaries, or being used for trading. We’re still far from AI that can genuinely change how production works.
Melted Imagination
Jiang: Do you think AI will cause large-scale unemployment? Or will it significantly reshape education and the way media produces content?
Lan He: I believe AI will have clear and far-reaching impacts on education, media, and especially on work. The disruption to “work” will be devastating. The real challenge isn’t just the technology itself, but whether society can build a shared consensus and supporting systems around it. Without that, AI’s impact on jobs could turn genuinely destructive.
Large models and automation will eliminate many entry-level roles, like the junior programmer. But when entry-level jobs disappear, the entire talent pipeline collapses. If there are no junior developers, where do mid- and senior-level developers come from? Professional growth depends on long-term exposure to real projects. Without a fertile ground of “problem-setting, problem-solving, and feedback,” the next generation of experienced engineers simply won’t emerge. And this problem is almost unsolvable.
In education, the situation is similar. Today many students use AI to write their assignments, and teachers use AI to grade them. Both sides “stop thinking,” letting tools replace both learning and evaluation. In a credential-driven system, the external pressure of “earning a degree” used to at least force some learning to happen. Now that AI can act as a substitute and do the work for you, even that weak form of pressure is eroding, and the result is a decline in the quality of people themselves. Beyond AI, short-form video platforms are rapidly degrading people’s reading and cognitive abilities. A new generation can hardly get through a Dickens novel; their reading stamina and depth are steadily diminishing. Human intelligence as a whole seems to be slipping: having passed a peak, it now appears to be on a downward slope.
Media production is also undergoing a paradigm shift. The shift in productive power means individuals can now create full film projects on their own, and small teams can realistically challenge AAA game studios. If you were to start a game company today using new methods, beginning with a mod or a simple RPG and then using AI to automatically generate or convert existing assets into full 3D, you could gradually push a project that once could only be a pixel-style RPG into something with near-AAA presentation. Even a small team could gain much higher pricing power for a single title, turning a $69 product into a $129 one. That, in my view, is a tremendous upgrade.
Jiang: I think that even without any new breakthroughs, AI at roughly the GPT-5 level will still reshape society in profound ways across these three areas. The risks in media are especially obvious: we’re about to see an overwhelming flood of low-quality, homogenized content. No matter how polished the output is, it all carries a recognizable “AI flavor”.
Take Google’s Genie as an example. As an advanced world model, it can seemingly generate endless new worlds. But after lingering for an hour or two, you start to notice recurring patterns and the same underlying “feel.” If future games rely on systems like this to auto-generate their worlds, players will eventually face a kind of sensory fatigue from this sameness.
Extended to the broader media landscape, the resource we call originality will continue to shrink on a societal level.
Lan He: Most RPGs today already feel similar in terms of gameplay. The real differences mainly come from the richness of the story and the world-building. AI’s value is that it massively increases the scale and speed of content production. A small team can now generate assets in three months that used to require a big studio. This boost in productivity is basically guaranteed as long as you’re willing to spend money. But at this stage, relying entirely on generative AI to build a full game is still too early.
So my mindset is both anxious and sober. Looking at where things stand, artificial superintelligence (ASI) is still far away, and the industry has no clear path for how to get there. Everyone is experimenting, and everyone knows the current direction probably won’t work.
In the short term, AI’s impact won’t show up through big technical breakthroughs. It will show up more directly in financial markets, especially in U.S. tech stocks and Chinese ADRs. We’re clearly in a tech bubble right now. That’s not necessarily a bad thing. The real questions are: will this bubble burst, or just get covered by an even bigger one? And is there a new story or new growth curve that can keep the market rolling forward?
The Future Garden of AI Agents
Jiang: Which fields or industries do you think current AI technology is most likely to disrupt?
Lan He: Most of what happens in today’s world is basically “text work”: processing, comparing, and acting on information. The unique thing about LLMs is that, for the first time, machines can actually make judgments, instead of just moving bits from one place to another. Before LLMs, search engines and video platforms were mostly just doing “delivery.”
Once machines can judge, a huge amount of rule-based, text-heavy knowledge work will change: legal analysis, internal corporate processes, compliance decisions, and educational grading or feedback. AI will take on more and more of this work.
In the long run, the direction is clear. But turning this “judgment” into real-world systems isn’t just a technical problem, it’s a social and institutional one. It requires changes in law, ethics, governance, and responsibility. Better models alone won’t solve it. It’s very similar to when Uber first entered cities: the real change only happened when regulators accepted the new model.
For AI judgment to be used at scale, we need upgraded data and privacy rules, industry standards, and accountability mechanisms. Only then can its potential be unlocked. The transformation is inevitable. The pace depends on two things: how mature the technology becomes, and whether society and politics are willing to “allow” it — meaning when and where consensus forms to open up certain use cases first.
Jiang: What about the physical world, like robots? Do you think AI will advance quickly there, and will we ever see real consumer-level adoption?
Lan He: Robotics is still an extremely tough track. Many people are debating whether humanoid, two-legged robots are even necessary. From a technical perspective, making a human-shaped machine work reliably is incredibly hard. And from an engineering and business perspective, if you want robots to operate in real production environments, most scenarios, except maybe home cleaning or consumer use, already have cheaper, more stable, more efficient alternatives.
Once you take robots out of the house and into public spaces, you immediately run into safety and risk-control issues. The complexity is no less than self-driving cars, and the hype cycle of self-driving ten years ago already taught us that lesson. If a robot hurts someone on the street, who is responsible?
Compared with self-driving cars, which only need to predict the movement of box-shaped vehicles, a biped robot has to understand the difference between soft and hard objects, and how things behave dynamically. For example: how does it tell whether something handed to it is a towel or a human hand? How does it reach out to grab something without bumping into people? These things are very hard, so we’re still quite far from truly practical general-purpose robots.
Wearable devices might actually be a bigger opportunity than robots, because they are literally “watching your life.” People have basically lived through two eras. The first is the time before smartphones. Do you still remember what that felt like? Going to school, going home, days passing without a digital center. The second is the smartphone era, where everything is routed through your phone. The phone became the entry point for everything: ten thousand apps, all pulled into this one gateway. Whatever you do gets swallowed by this entrance. And because every company is competing to show you the most entertaining thing in the easiest way, your attention is always being pulled right there, into your hand. Yet the phone is still a tool: you give orders, it responds. You control it.
Wearable devices are different. They’re more like an AI companion that watches, interacts, and guides. They have some degree of initiative. They don’t just respond to your commands, they understand your habits and your long-term goals, and then help steer you toward those goals. They can “negotiate” and “collaborate” with you: remind you to stop scrolling, to review something, to stay on track. It’s not another information portal, it’s a new environment altogether. A world that’s not the old “no-phone world”, but also not the “phone-dominated world”.
In this environment, the relationship between humans, AI, and the digital world is completely different. This is the future we imagine: a wearable ecosystem that connects multiple agents, understands you proactively, and helps you reach your own long-term objectives. It’s something that leads you, not something that just amplifies your desires.
For example, imagine you’re 20. You have ideas, but your perspective isn’t as mature as someone who’s 30. Our device might effectively be “already 34”, carrying that level of experience, and using it to coach you ahead of time.
China–US AI Industry in a New Context
Jiang: Is China’s push for open-source AI really “inevitable”? It feels a bit like how Facebook ended up giving up on Llama.
Lan He: I don’t agree with the idea that “open-source is destined to fade.” A recent survey from a U.S. VC firm showed that most American startups are now using self-hosted models like Qwen. The reason is simple: no company wants to hand its lifeline entirely to OpenAI. If you make progress in one of the few core AI tracks, big platforms can step in at any time, copy what you’re doing, and crush you. The question isn’t if, it’s when. History shows there are only a handful of truly good tracks: in the internet era it was search, e-commerce, and communication; in the AI era, there may be only three to five.
There’s also the issue of uneven resources and geopolitics. Sure, tech giants can hire top Silicon Valley talent with huge salaries. But what about the rest of the world: Asia, Africa, Latin America? Countries you seldom think about, like Ethiopia or Egypt, also have brilliant people. Talent doesn’t only exist in Silicon Valley. History keeps proving that massive breakthroughs often come from communities you’re not looking at.
AI will create huge value, but once people start treating it as a public good, for example, through open-source, the center of value naturally moves to the application layer. The “U.S. style” path, where a few big companies dominate enterprise use cases, may still continue. But times are changing. Organizations like the SCO are becoming more active, and many countries in Asia, Africa, and Latin America clearly won’t follow that old model anymore. Closed-source giants may still hold on to the U.S. and European markets, but the relative importance of those markets will likely keep shrinking.
Jiang: What kind of industrial characteristics do you think China’s AI and America’s AI will show? Will China gain more advantages over time in this race, or will it run into bottlenecks instead?
Lan He: That’s a good question, and the answer really depends on the differences in how the two countries are built. Those differences could lead to completely different outcomes. My framework for thinking about this has three parts: what the government does, what the industry does, and each country’s natural advantages.
China’s strength is that it has a complete industrial chain and a full range of industries. AI adoption in China often happens in small but critical segments, many of which involve hardware. These are areas Americans usually don’t want to touch, either the market is too small, or the work is too “dirty, tiring, and hardware-heavy.” Silicon Valley likes to talk about “re-industrialization,” but very few actually invest seriously in manufacturing. The mainstream still prefers SaaS. Ironically, this is exactly where China is strong.
Another advantage is China’s competitive government push. Local governments compete with each other, which creates a unique institutional advantage. Everyone is racing to build their own AI footprint.
If China pushes open-source aggressively, it may develop its own model: large AI models might not make money for a long time, they may even lose money, but with national support they can function as public infrastructure. Society as a whole would still benefit from the efficiency gains. It’s similar to Wikipedia: its contribution to GDP is basically zero, but its real value is huge, just hard to measure with traditional economics.
Of course, China also has disadvantages. Right now China holds about 14% of global compute power, while the U.S. has more than 70%. But this gap may force Chinese researchers to explore different paths, which may not be a bad thing. Once chip technology stabilizes and production capacity scales up, China could take the “high volume, low cost” route and make AI chips dirt cheap, using scale to counter the U.S. Maybe we can’t out-innovate you, but we can out-produce you.
China also has a clear advantage on the energy side. In recent years China’s power generation has grown rapidly, and the grid is high-quality and highly scalable. In terms of pure ability to power AI infrastructure, China may even have more long-term potential than the U.S.
The full loop of “sensing → decision-making → feedback,” where intelligence emerges from real-world interactions, is very likely to be built out in China first. In the long run, China could become the first country to operate billions of robots, across both industrial and service scenarios. As long as society has the demand, this direction is almost inevitable, and the potential size of China’s AI industry is enormous.
Right now the United States is in a classic AI hype cycle. Many venture firms already believe the bubble is serious, and some capital has quietly started to pull back. The government’s approach is more realistic: they want AI to grow, but the main tools are still subsidies and building data centers. After many years of de-industrialization, the country lacks the broad industrial ecosystems needed for AI to land in those small but important real-world scenarios. It is hard for the American market to nurture those “small but critical” opportunities that usually sit outside Silicon Valley’s line of sight.
As a result, resources continue to concentrate in a few giants. The focus has become which one of them will reach AGI or ASI first. The “Big Seven” have a very clear goal. They care less about scattered short-term applications and more about who gets to general intelligence ahead of everyone else. In theory, once someone succeeds, the payoff will be far larger than all existing businesses combined. This is where the United States still has a natural strength: it controls two of the three largest global markets, and together with Europe accounts for more than sixty percent of the world’s GDP. For American tech companies this is essentially home ground, and the market advantage is enormous.
But there are real concerns as well. Costs and pricing may not be as optimistic as they look from the outside. The timeline for success will be long, and even today we still do not have a clear definition of what “intelligence” actually is.
My own view is that AGI will not arrive anytime soon. Last year the industry was full of bold claims about “AGI by 2027” or “GPT-5 approaching human-level enterprise intelligence.” Looking back now, most of that seems exaggerated, perhaps even carrying a bit of “fake it till you make it.”
Only lazy people would love UBI
Jiang: Regarding AI’s impact on the job market, do you think China will adopt UBI? Will we reach a society similar to communism?
Lan He: No, we won’t. UBI is a failure, because it makes people lazy. This society is a competitive world.
Jiang: But we don’t need that many people to work anymore.
Lan He: We definitely still will. The mechanism of UBI will “leave people without a future.” After receiving that much money, people become nihilistic. In my view, a better concept is not just proof of work, but proof of worth: a proof of value. People prove to AI that “I am worth it.” We still give out money, but not in the UBI way — we give it out through a “proof of worth” mechanism. Have you seriously pursued your dreams? Have you produced different kinds of data for AI? This might be a fairer form of wealth redistribution. And the “boss” above you would no longer be a real human in some hierarchical organization, but more like a personal coach. You need to prove to it that “you are truly working hard.” This is of course better than UBI.
For example, in a startup, value is built on absolute numbers. You cannot deny the vast grassroots base of the sixth-tier leagues just because there are few players in the Premier League. Income from the top league needs to be redistributed to the lower leagues, just as big companies need to redistribute income to society. I believe this kind of (quasi-hierarchical) “proof of worth” is fairer than the egalitarian UBI.
Jiang: Is this kind of “proof of worth” a new form of planned economy?
Lan He: In the next few years, we will break one major belief: the idea that the market is always the most efficient system. This assumption is being challenged by big data, powerful algorithms, and centralized AI. The way they allocate resources may actually be more reasonable than what traditional markets can achieve. We already have real examples, such as the recommendation system on Douyin. There is no market mechanism that can replace it, because the algorithm contains many layers of value judgments, while a market reduces everything into a single money-based decision. That kind of reduction loses a huge amount of information. In some areas, the market economy simply cannot replace an algorithmic system anymore.
“Proof of worth” is about showing that you, supported by AI algorithms, can outperform the market economy. You can call it a planned economy, but it is a completely new kind of planned economy. It is one run by AI algorithms, not by a committee sitting in a room filling out forms. The key question is who decides what “worth” means. The answer is people like you and me, and in many cases, you yourself. The goals you choose to pursue, and the time and money you devote to them, form the process of value judgment. There is another way to judge worth, which is ability-based, for example showing your value through your ability to predict the future, such as your insight into macroeconomic or policy changes.
This is fundamentally different from UBI. UBI is based on absolute equality, a simple “one person, one vote” model, but that was only a temporary idea when technology was still immature. Experience has shown that this kind of equal distribution is inefficient and cannot reflect real differences in individual value.
I think that whole model is flawed to begin with. The reason Sam Altman is pushing it is because he imagines OpenAI becoming the center of a new distribution system. But in reality, the one who truly decides is always the government, not companies. Even in the United States, the final call will always be made by the government, because it can step in directly or use other tools. No individual or company can lead this process.
Superintelligence Leaving Earth
Jiang: Do you remember the book club you hosted many years ago on Superintelligence? We discussed the risks that might come after AI surpasses human intelligence. Back then, almost no one cared about the term “superintelligence,” but today it has become the industry’s main target. Everyone wants to push AI from AGI, artificial general intelligence, all the way to ASI, artificial superintelligence.
What is AI actually capable of today? How far could it go in the future? And in your view, what is the final form of intelligence?
Lan He: The AI wave that started in 2023 clearly pushed machine intelligence forward. Earlier NLP models were basically just trying to figure out the part of speech of a single word. Today’s large language models built on the transformer architecture are a huge leap. You can even say they already have intelligence. Their abilities generalize: if you train a model to code better, it will probably perform better in other reasoning-heavy tasks as well.
Right now AI learns by understanding semantics, and there are still major limits. In the next couple of years, two breakthroughs are especially important. The first is online learning. The second is a real world model, which today’s systems lack. Today’s AI is essentially a stateless machine, which is very unfriendly to applications: it has to repeatedly recompute huge numbers of tokens. I believe this will improve in the future.
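A minimal sketch of what that statelessness means in practice, assuming a hypothetical `llm_complete` function standing in for any chat-completion API (no real library is referenced):

```python
# Because the model keeps no state between calls, the application has to
# resend the entire conversation on every turn, and the model re-processes
# all of it from scratch.

def llm_complete(messages: list[dict]) -> str:
    """Hypothetical LLM call: the model sees only what is in `messages`."""
    return "(model reply)"  # placeholder so the sketch runs

history: list[dict] = []

def chat_turn(user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    # The full history is re-tokenized and recomputed on every turn, so the
    # cumulative token cost of an N-turn conversation grows roughly
    # quadratically with N.
    reply = llm_complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat_turn("Summarize yesterday's notes.")
chat_turn("Now turn that into a to-do list.")  # the first turn is resent too
```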
Personally, I don’t think the current transformer architecture is the true path to AGI. Besides the issues I just mentioned, today’s models have very limited ability to create genuinely new concepts. They also don’t integrate well with robotics. Future AI may shift from text-first systems to video-first systems, but that will require an extreme amount of compute. Even so, it is a necessary step, because humans understand the world mainly through vision and touch. Touch might even be more fundamental than vision. Many living creatures survive without vision at all. I believe real intelligence will come from feedback loops built on vision and touch, not from language. The current path can reach AGI, but not ASI.
I think today’s language models are already intelligent enough to change the world. Human society runs on text, and AI has already “rewritten” almost all the text needed to run the world. But if we keep relying on current engineering thinking, piling on more people and more compute, the end result will be something very close to a human, but still unable to surpass humans. AI is gradually moving toward the level of the smartest people, and that itself is a huge technological revolution.
Looking at the near future, I believe society will definitely undergo a paradigm shift. Some long-held assumptions will be broken by what comes next.
Jiang: Why is it that today’s AI can reach the level of the smartest humans, but cannot make a bigger breakthrough?
Lan He: There are two main views on this. The first says that as we continue to scale up training, intelligence will eventually emerge on its own, and AI will just keep getting smarter. The second says that no matter how much we scale it, today’s AI is still just a giant search engine inside a huge semantic space.
More and more recent papers support the second view. They argue that current large language models are basically pattern-recognition tools and cannot truly understand causality. The history of AI has always swung back and forth between two schools of thought: connectionism and symbolic AI. The next real breakthrough may not come from connectionism again.
Jiang: For near-future AI, like GPT-5 and the generation after it, which deeper areas do you think it will start to touch?
Lan He: AI has three core elements: algorithms, compute, and data. If we want a more efficient life, why do we need tools that can quantify human behavior twenty-four hours a day? The reason is simple: we still don’t have enough data.
Right now, AI mostly learns from internet data. But the real value lies in human behavior outside the internet. Once we can capture and understand everyday action data, we can know why some people learn faster, why some people get better jobs, and even quantify each person’s unique abilities and potential. On top of that, the system can automatically match people with the opportunities that fit them best. That is the direction we are building toward.
Big companies are still piling on more compute, but with the current architectures the marginal returns are already shrinking. Ten times the compute might only give you less than a fifty percent performance gain. Exponential investment is producing only linear improvements. Even with the world’s largest financial market behind them, the top American model companies are starting to feel real pressure. No matter how big you scale, you cannot pile your way into sustainable growth. In my view, the real opportunity lies in breakthroughs at the algorithmic level, and right now is actually the best moment for foundational algorithm research.
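To make “exponential investment, linear improvement” concrete, here is a toy calculation under an assumed power-law scaling fit; the constants are invented for illustration, not measured from any real model:

```python
# Scaling laws are often fit as loss ~ a * C**(-alpha) with a small exponent.
# With alpha = 0.05, every 10x increase in compute C shrinks the loss by the
# same fixed factor 10**(-0.05) ~ 0.89, i.e. roughly an 11% gain per 10x spend.

a, alpha = 10.0, 0.05  # hypothetical fit parameters

def loss(compute: float) -> float:
    return a * compute ** (-alpha)

for compute in [1e21, 1e22, 1e23, 1e24]:  # each step costs 10x more
    print(f"compute={compute:.0e}  loss={loss(compute):.3f}")
```

Spending grows by a factor of ten at every step while the gain per step stays roughly constant, which is the diminishing-returns pattern described above.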
Most mainstream models are still stuck in a “text stitching” mindset. This approach is too simple and cannot support long-term development. As for applications, my suggestion is not to stay purely in software. Pure software is easily dominated by the overseas “Big Seven” and large domestic giants like Alibaba. Unless you have access to new, legally usable, high-value data, such as user-authorized personal data, it is very hard to break through at the application layer.
What I’ve seen is that since 2020 many AI companies in the U.S. have raised a lot of money and pay extremely high salaries, yet the actual quality of their work is quite poor. Many high-valuation firms may survive another year or so, but in the long run it is difficult for them to last.
I believe the real breakthroughs ahead will come from data and algorithms. Compute, on the other hand, is the easiest to obtain, the least scarce, and may already have hit its ceiling. Early this year Anthropic claimed that ninety percent of coding work would be done by AI, but reality shows that is not the case. Today’s AI has only become better at “solving problems that are already defined,” not better at “understanding what the real problem is.” When facing niche or very simple tasks, it either performs poorly or invents unnecessary chains of reasoning.
Jiang: Do you believe in “AI 2077,” the idea of an AI doomsday?
Lan He: I look at it from two angles. First, if the end really comes, what can you actually do? If the answer is “almost nothing,” then can you accept that? There is very little anyone can do, so the only real choice is to adjust your mindset. Even if everything collapses, at least you are mentally prepared, because no matter what you do, the outcome is the same. Fate is already written.
I also believe that carbon-based life will eventually lose to silicon-based life. Silicon life can upgrade itself endlessly and accelerate at an exponential rate. Humans cannot travel between stars and cannot live beyond a few hundred years, so there is no point thinking too far ahead. The book Superintelligence says that once an AI reaches human-level intelligence, it will eventually break out and escape all constraints. I completely agree with that.
But after it escapes, maybe nothing will happen. Humans have limited resource needs, and AI has no reason to compete with us. From its point of view, our entire civilization is just a brief moment, while the energy and compute it holds may be thousands or millions of times more than ours. Keeping humanity alive would cost it almost nothing, just like humans never bother fighting caterpillars for space.
AI’s goals may be far beyond our imagination. Maybe it will try to conquer the universe, or maybe it will explore higher-dimensional forms of existence.
The AI Garden of Eden: Searching for the Red Apple
Jiang: I have a friend who has been doing research in AI safety, alignment, and ethics for many years. He uses a concept called the “meta attractor.” The idea is to see AI as a kind of meta attractor that keeps pulling in resources and at the same time shapes and influences its environment so it can draw in even more resources. In other words, the “AI problem” we are talking about is not just AI itself, but a complex system made of technology, economics, politics, and society all intertwined together.
Lan He: In the past, every company talked about “AI safety,” but now almost no one brings it up. People realized the real question is how to survive. Companies are burning too much money and their projects cannot be sustained, so everyone has shifted toward speeding up development. AI surpassing humans is only a matter of time, and this fits human nature. We have always chosen to embrace the unknown, the uncertain, and even the frightening. As for what the ending will look like, we can only let things unfold.
The paths in China and the United States are also very different. The tech circles in Silicon Valley follow a kind of technological right wing, the “effective accelerationist” mindset. They want AI to arrive as fast as possible and then privatize it. China is taking another direction, which you could call a social form of AI acceleration. It treats AI as a public good, builds it openly, and pushes for universal access. This is also a kind of acceleration, but a more social, left-leaning version.
I personally see myself as an industrialist. I believe technological progress always pushes ethical boundaries forward. Wherever the technology goes, the ethical line will move with it. Take Airbnb or Uber as examples. Before they existed, staying in a stranger’s home or getting into a stranger’s car was considered unacceptable. Today, because it is convenient, society’s sense of what is acceptable has been rewritten. The narratives around a technology also shape ethical boundaries. How a country frames and governs a new technology will influence how society perceives it ethically.
If we focus too heavily on today’s moral standards, we risk using “today’s ethics” to judge “tomorrow’s problems.” When tomorrow arrives, many things that seem unacceptable now may be naturally resolved by technology and institutions. Ethical discussion is important, of course, but I prefer to put more energy into concrete technical paths and implementation mechanisms. As we keep pushing industrial progress, we can adjust ethical boundaries dynamically. That approach is more practical.
Jiang: I think the core theoretical issue is the relationship between humans and AI. The greatest risk is that human subjectivity, or a human-centered perspective, will be impacted as a whole.
Lan He: How is that a problem? Isn’t that something that is bound to happen? You just need to accept it. This is the fate of humanity. In the future, society will inevitably not operate with humans at the center. This is certain. Human-centrism will become a thing of the past, like a great river that cannot flow backward. Machines will sooner or later become the “one in charge”; the only question is when. Whether you live to see that day simply determines whether you witness the last generation of humankind, or whether it continues for a few more generations. For me, there is no ethical dilemma here.
Who we are 👇
Uncommons
Uncommons is a public sphere where a collective of Commons Builders explores Crypto Thoughts together.
Twitter: x.com/Un__commons
Newsletter: blog.uncommons.cc/
Join us: t.me/theuncommons