This summary captures a conversation with Sung-sik Kim, a former engineer at DeepMind, xAI, and TML. Kim is a Korean American who recently visited Korea and candidly shared his experience inside frontier AI labs. The conversation offers deep insight into the current state of the AI industry, different views on AGI, and the future changes AI technology may bring. It is especially useful for anyone interested in AI because it includes vivid field-level stories about Elon Musk's leadership style, the direction of AI development, and the changing capabilities engineers need.
1. Elon Musk and xAI's Intense Work Environment
Kim says that while working at xAI, he experienced Elon Musk's extraordinary work intensity and passion up close. He recalls, "When you see Elon from nearby, he is the real thing. xAI's way of working is brutally intense. People work seven days a week, sleep five to six hours a day, and during the busiest periods we had team-level face-to-face meetings with Elon almost every day." He adds that Elon strongly prefers in-person meetings and has worked the same way at Tesla and SpaceX. Although Kim worked there for only one year and two months, he says it was so hard that he nearly burned out. It really sounds like an environment where the phrase "brutally intense" fits naturally.
2. The Era When Models Train Models Is Nearly Here
As AI advances, Kim says we are close to the stage where models train themselves. He explains, "Now that AI can replace developers, the industry's current experiment is whether AI can also replace researchers." If a model's coding ability is combined with a certain level of research capability, he says there are optimistic signals that models may be able to improve themselves. This could become a major turning point that changes the paradigm of AI research.
3. The Importance of Coding Ability and the Technology Gap
In that context, coding ability has become extremely important. Kim says, "The better coding models become, the closer we get to model self-training. Once that first piece falls into place, that company pulls ahead exponentially." In other words, progress in AI coding models may determine the speed of a company's growth. There is even a view that one reason Elon Musk has gone back into an intense mode is that he believes competitors such as Anthropic are getting close to this self-training line.
4. The Essence of AI Competition: Compute and Data
From the inside, Kim says frankly that current research does not feel "super inventive". In the end, the core of AI progress is how efficiently a company uses compute and how much high-quality data it can secure. As he puts it, "It is a question of how to use compute better and how to extract more good data. Because that is the essence, funding and infrastructure become competitiveness." Massive funding and infrastructure are therefore the foundation of AI competitiveness.
5. Many Definitions of AGI
Interestingly, people define AGI, or artificial general intelligence, very differently.
- Sam Altman sees AGI as the level where AI can replace almost every job with economic value.
- Demis Hassabis sets a higher bar: reaching the baseline of experts in every field, at the level of Einstein independently discovering general relativity.
- Yann LeCun argues that a World Model capable of seeing and learning from the world without text must be possible.
Because definitions of AGI differ so much, Kim explains, each company's direction and its confidence about when AGI will arrive also differ greatly.
6. Progress Is Made by Chance More Than Planning
AI progress does not always happen according to plan. Even the people who developed the Transformer did not initially expect its full impact. Kim also shares an anecdote: someone at OpenAI supposedly turned on an experiment, forgot to shut it down before going on vacation, and returned to find that the reward curve had improved dramatically. Because unexpected accidents often lead to important discoveries, he believes it is healthier for progress when many research approaches coexist rather than everyone insisting on one method.
7. The Nonlinear Nature of Scaling: Log Returns
Kim also discusses the nature of scaling AI models. Even when compute and data are added linearly, the return comes on a log scale. In other words, efficiency gradually decreases as inputs grow. As he says, "No one yet knows how much has to be poured in before a meaningful jump appears, or where the inflection point is." Finding that point is one of the important research challenges today.
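The log-return relationship Kim describes can be illustrated with a toy power-law loss curve. This is a minimal sketch under an assumed exponent; neither the function nor the exponent comes from the conversation or from any real scaling study:

```python
def toy_loss(compute: float, alpha: float = 0.05) -> float:
    """Toy power-law scaling curve: loss falls as compute ** (-alpha).

    The exponent alpha is an illustrative assumption, chosen only to
    show the shape of diminishing returns.
    """
    return compute ** (-alpha)

# Each 10x increase in compute shrinks the loss by the same *factor*,
# so linear-looking gains require exponential spending.
for exp in range(1, 5):
    c = 10 ** exp
    print(f"compute {c:>6} -> loss {toy_loss(c):.3f}")
```

The point of the sketch is that equal multiplicative jumps in input buy equal multiplicative (not absolute) improvements, which is exactly why "how much has to be poured in before a meaningful jump appears" is an open question.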
8. AI May Widen Gaps Rather Than Narrow Them
Kim also offers a cold-eyed view that AI may widen social gaps rather than reduce them. As he puts it, "It does not lift the lower half of society. The people at the top use more tokens and better models." In other words, a small number of people who are good at using AI may gain even more. Most people's use of AI may stop at chatbots, while those who run companies through agents will use AI at a completely different level. In the end, only those who can supply the required capital may receive the true benefits of AI.
9. The Importance of Agent Capability: The Gap of the Next Five Years
Kim emphasizes that AI's real capability comes from agents. Talking with chatbots such as ChatGPT or Gemini is the popular use case, but AI researchers internally focus on experiments that draw out the maximum capability of agents. As he says, "The difference in who uses the same model and how they use it will create the gap over the next five years." How effectively one can use agents will become a central factor in future competitiveness.
10. Anthropic's Closed Strategy: Reaching and Controlling AGI First
Kim offers a clear view on why Anthropic does not release its models as open source. He explains their mindset as, "We will be the first to reach AGI in a closed way, and we will decide how AI interacts and lives with people." He points out that this could be a dangerous idea. The attempt by one company to monopolize the ethical direction of AGI could create significant controversy.
11. Frontier Labs Work Differently, and That Drives Progress
One interesting point is that every frontier AI lab has a completely different way of working.
- xAI, under Elon Musk's influence, follows a first-principles mindset and an engineering-centered approach. As Kim explains, the task is very clear: "Build the biggest data center and focus only on extracting good data on top of it." It is a style that focuses on efficient engineering under a clear goal.
- Other labs take a style where people "spend time researching more freely and hope a new architecture emerges from that."
Kim adds that no one yet knows which approach is correct. "That is why people inside often say that different labs pushing in different ways is itself the engine of progress." In other words, diverse attempts drive AI forward.
12. The Existential Worries of AI Researchers
The people building AI also have existential worries about the future this technology may bring. As Kim says, "Everyone knows that if I make AI and then I become useless and everyone else becomes useless too, that is not a good conclusion." AI developers are thinking deeply about the consequences of their own work. In the end, the thought "Someone will build it anyway, so I might as well stay inside and guide it in a better direction" may be both the self-justification and the real motivation of people who remain at the frontier.
13. The Capabilities Engineers Need Are Changing
In the AI era, the skills engineers need have changed completely. In the past, depth in one area and single-threaded focus were important. Now, the ability to multitask by running and managing several agents at once matters more. As Kim says, "A person who builds parallel systems well is more productive than someone who only digs deeply." The ability to coordinate and manage the whole system has become more important. Kim himself says he is consciously shifting his skills in that direction.
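The "parallel systems" pattern Kim describes can be sketched as concurrent task management. The agent tasks below are hypothetical stand-ins, not any real agent API; the point is only the shape of the workflow:

```python
import asyncio

async def run_agent(name: str, seconds: float) -> str:
    """Stand-in for a long-running agent task (e.g. a coding agent)."""
    await asyncio.sleep(seconds)  # placeholder for real agent work
    return f"{name}: done"

async def main() -> list[str]:
    # Launch several agents concurrently instead of one at a time;
    # total wall time is roughly the slowest task, not the sum.
    tasks = [
        run_agent("refactor", 0.2),
        run_agent("write-tests", 0.1),
        run_agent("docs", 0.15),
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)
```

The engineer's job in this mode is less about any single task and more about deciding what to launch, in what order, and how to review the results as they come back.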
14. How AI Changes Quality of Life: Impatience and Anxiety
Ironically, Kim also shares a personal experience that AI has made his quality of life worse. "There are more things I can do now, but I feel more impatient because I think other people will do them better than I do." In the infinite possibilities opened by AI, people may feel relative deprivation and anxiety. He also speaks self-deprecatingly about the fact that coding ability, once a major personal strength, is no longer a special advantage.
15. Korea and Silicon Valley: Different Kinds of Intensity
Kim explains that the intensity in Korea and Silicon Valley is of different kinds.
- In Silicon Valley, especially the Bay Area, people are constantly obsessed with technical improvement and progress. As he describes it, even after work, when you want to rest, the person next to you is still talking only about how to train a model this way or that way. It is "an intensity around squeezing out the next 0.5 percent."
- In Korea, by contrast, he sees intensity as more tied to subtly displaying social status.
He adds that both kinds of intensity push people to their limits.
16. A Completely Different Scale of Funding
The funding scale in Silicon Valley is beyond imagination. Kim gives the example of a company started by two or three Stanford freshmen raising money at a 500 million dollar valuation, referring to Standard Intelligence. In Silicon Valley, he explains, it can feel normal for young founders' ideas to attract investment at an enormous scale.
17. Human Irrationality: The Power of Inertia
Kim believes that even in the AI era, human irrationality means everything will not quickly move toward the most efficient option. "The inertia of the product that captures users first lasts longer than expected," he says. People tend to keep using what is familiar. As with KakaoTalk, even without overwhelming technical superiority, a service can keep being used simply because many people already use it. He says this observation is important: "Ninety-nine percent of people keep using what they already use. If you invest assuming people will switch quickly in the AI era, you will fail."
18. The AI Sweet Spot in Korea: AI PE
Finally, Kim sees the sweet spot for AI in the Korean market as AI PE, or private equity. Consulting has too long a repetition cycle, and the FDE, or forward-deployed engineer, model is difficult because of data, on-premise systems, regulation, and collaboration constraints. Instead, he suggests, "It may be better to buy a company with good operating profit and replace the inside with AI." Top AI researchers do not want to do this kind of work, while PE experts lack AI expertise, so there is an open gap. He emphasizes that this is a direction he and colleagues in the United States had seriously discussed.
Conclusion
Through the conversation with Sung-sik Kim, we get vivid stories from the front lines of AI research and deep insight into the social and personal changes AI may bring. From Elon Musk's leadership to the many definitions of AGI and the changing capabilities engineers need, the conversation offers valuable context for understanding AI's present and future. The candid discussion of widening gaps and personal anxiety is especially useful because it makes us look at the other side of technological progress. Hopefully, more experiences like this will be shared so that more people can think together about a healthier direction for AI.
