
Linux Kernel Management at Scale: How Linus Torvalds Manages 40 Million Lines of Code

#Linux · #Kernel · #Open Source · #Software Engineering · #Project Management · #Linus Torvalds

Introduction: Managing the Unmanageable

How do you manage a software project that has grown to 40 million lines of code, involves thousands of contributors, and powers everything from smartphones to supercomputers?

This is the challenge that Linus Torvalds faces every day as the maintainer of the Linux kernel. In a recent conversation at the Open Source Summit Japan (his 29th such public discussion), Linus shared his unique perspective on managing one of the world's largest and most successful open-source projects.

What makes his approach particularly remarkable is that Linus no longer considers himself a programmer. Instead, he describes his role as maintaining the project's overall health, managing release cycles, and enforcing the principles that have kept Linux stable and reliable for over three decades.

This article explores the technical details of how Linux kernel development works, the rigorous processes that ensure quality, and what modern software teams can learn from this battle-tested approach.


I. The Scale: By the Numbers

Before diving into management philosophy, let's establish the scale through concrete data:

1.1 Codebase Metrics

| Metric | Value | Significance |
| --- | --- | --- |
| Total Lines | 40 million | Includes code, documentation, and comments |
| Maintainers | Hundreds | Hierarchical structure across subsystems |
| Contributors per Release | Thousands | Global developer community |
| Release Cycle | 9 weeks | Consistent for ~20 years |
| Commits per Window | 11,000-13,000 | Via hundreds of pull requests |
| Merge Duration | 2 weeks | Intensive code integration period |
| Stabilization | 7 weeks | Bug fixing and RC releases |

These numbers aren't just statistics—they represent a finely tuned development machine that has evolved over decades. What's remarkable is that the 9-week cycle has remained consistent for approximately 20 years, demonstrating the stability of the process itself.

1.2 The Development Hierarchy

The Linux kernel doesn't use a flat contribution model. Instead, it employs a hierarchical maintainer system:

Linus Torvalds (Top-Level Maintainer)
    ↓
Sub-System Maintainers (Hundreds)
    ↓
Device Driver Maintainers
    ↓
Contributors (Thousands)

This structure allows:

  • Scalability: No single person reviews all code
  • Specialization: Maintainers deeply understand their subsystems
  • Quality Control: Multiple review layers before reaching Linus
  • Distribution: Workload spread across experts
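To make the funnel concrete, here is a minimal Python sketch of how a change travels upward through these layers. Every name, subsystem, and the review logic is invented for illustration; the real workflow runs on git trees and pull requests, not on objects like these.

```python
# Minimal sketch of the hierarchical pull-request funnel (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Maintainer:
    name: str
    upstream: "Maintainer | None" = None          # who receives this maintainer's pull requests
    queued: list[str] = field(default_factory=list)

    def accept(self, patch: str) -> None:
        """Review a patch and queue it for the next pull request upstream."""
        self.queued.append(patch)

    def send_pull_request(self) -> None:
        """Forward everything accepted at this level to the level above."""
        if self.upstream is not None:
            for patch in self.queued:
                self.upstream.accept(patch)
            self.queued.clear()

# A tiny three-level chain: driver maintainer -> subsystem maintainer -> Linus.
linus = Maintainer("Linus Torvalds")
net = Maintainer("networking subsystem maintainer", upstream=linus)
driver = Maintainer("hypothetical NIC driver maintainer", upstream=net)

driver.accept("fix: correct PHY reset timing")    # a contributor's patch lands here first
driver.send_pull_request()                        # driver tree -> subsystem tree
net.send_pull_request()                           # subsystem tree -> Linus' tree
print(linus.queued)                               # ['fix: correct PHY reset timing']
```

Each hop is a review opportunity, which is how thousands of contributors can feed a single top-level tree without that tree's maintainer reading every line.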

II. The 9-Week Release Cycle: A Deep Dive

2.1 Phase 1: The Merge Window (Weeks 1-2)

The merge window is when Linus works what many would consider impossible hours.

"I receive approximately 12,000 commits per merge window. If you count the commits after merging, it's between 11,000 and 13,000. These come through hundreds of pull requests."

— Linus Torvalds

During the first week, Linus merges code from morning to night, far beyond a typical 9-to-5 schedule. The pace eases slightly in the second week as he handles late submissions and carefully reviews the more complex changes.

Key Characteristics:

| Aspect | Traditional Approach | Linux Kernel Approach |
| --- | --- | --- |
| Feature Deadlines | Fixed dates (e.g., "Must be done by Friday") | No deadlines—"merge when ready" |
| Missed Release Impact | High pressure, rushed features | Low pressure—wait 9 weeks for next window |
| Submission Flow | Last-minute rushes | Distributed across 2-week window |
| Developer Stress | High (artificial deadlines) | Low (natural cycle) |

Why This Works:

  1. Predictable Cadence: Developers know that if they miss this window, the next one is just 9 weeks away
  2. No Arbitrary Deadlines: Code is merged when ready, not when a calendar says so
  3. Reduced Pressure: No "crunch culture" around releases
  4. Quality Focus: Ready code gets merged; incomplete code waits for the next cycle

2.2 Phase 2: Stabilization (Weeks 3-9)

Once the merge window closes, no new features are accepted. The entire community shifts focus to:

  • Bug hunting and reporting
  • Testing across diverse hardware
  • Release Candidate (RC) versions
  • Regression prevention (the golden rule)

Linus releases weekly RC versions during this period, with each week bringing the kernel closer to stability.

The RC Progression:

| Week | Focus | Target Audience |
| --- | --- | --- |
| RC1 | Initial stability check | Brave testers and developers |
| RC2-RC3 | Major bug fixes | Early adopters |
| RC4-RC6 | Minor fixes and polish | Broader testing community |
| RC7 | Final validation | Distribution maintainers |
| Final Release | Production-ready | General public |
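To see this cadence on a calendar, the sketch below lays out one hypothetical cycle: a two-week merge window, weekly release candidates, and a final release nine weeks after the window opened. The start date is arbitrary, and real cycles are announced by Linus (and occasionally stretch to an rc8), so treat this purely as an illustration of the rhythm.

```python
# Illustrative 9-week cadence: 2-week merge window, then weekly RCs until release.
from datetime import date, timedelta

def release_schedule(window_open: date) -> list[tuple[str, date]]:
    events = [("merge window opens", window_open)]
    rc1 = window_open + timedelta(weeks=2)                      # window closes, -rc1 is tagged
    events.append(("merge window closes / rc1", rc1))
    for n in range(2, 8):                                       # rc2 .. rc7, one per week
        events.append((f"rc{n}", rc1 + timedelta(weeks=n - 1)))
    events.append(("final release", rc1 + timedelta(weeks=7)))  # 9 weeks after the window opened
    return events

for name, day in release_schedule(date(2025, 1, 5)):            # arbitrary example start date
    print(f"{day}  {name}")
```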

2.3 The LTS Strategy

For Long-Term Support (LTS) versions (typically the last release of each year), the stabilization period is even more critical.

LTS Selection Evolution:

| Era | Approach | Outcome |
| --- | --- | --- |
| Early Days | Unpredictable LTS selection | Developers rushed to add features |
| The Problem | "Feature cramming" before LTS | Supposedly stable kernels became unstable |
| Current | Predictable annual pattern | Community understands the rhythm |

Why It Works Now:

  • Developers have witnessed the mechanism for years
  • Everyone knows if they miss an LTS, the next will arrive in a year
  • Companies plan around annual LTS releases
  • Reduced "sprint" mentality around LTS selection

III. Conflict Resolution: Why Linus Prefers Manual Merging

3.1 The Counter-Intuitive Approach

One might expect that with 12,000+ commits per cycle, automation would handle conflict resolution. Not so.

"I merge so often that I can almost do it with my eyes closed. I tell sub-maintainers not to pre-merge for me. Even though they know their subsystems better than I do, I have more merge experience than almost anyone."

— Linus Torvalds

3.2 Why Manual Merging Matters

| Benefit | Explanation |
| --- | --- |
| Global Awareness | Maintains high-level understanding of subsystem interactions |
| Quality Control | Catches issues that sub-maintainers miss in their pre-merges |
| Pattern Recognition | Developed intuition for spotting problematic code |
| Conflict Tracking | Knows exactly where and why conflicts occurred |

3.3 Real-World Impact

During every merge window, Linus inevitably finds code that looks wrong and rejects it:

"Sometimes I look at the conflict code and see a problem. Just a few days ago, I merged a commit with a conflict, saw the code had obvious errors, and pulled it back from the tree, demanding an explanation."

— Linus Torvalds

This hands-on approach ensures that the person with the broadest view of the entire codebase makes integration decisions—a key principle for any large-scale software project.


IV. The "No Regressions" Iron Rule

4.1 What This Rule Means

If there's one principle that defines Linux kernel development, it's this: No regressions allowed.

The Three Pillars:

  1. No Backward Compatibility Breaks: Code that worked 30 years ago must still work today
  2. No Behavioral Changes: If existing software depends on certain behavior, preserve it
  3. No Excuses: "I fixed a different bug" is not acceptable for introducing a new one

4.2 Why This Rule Is Controversial

Many kernel developers initially resist this rule. After all, isn't fixing bad design part of engineering progress?

"Many developers don't like my 'no regressions' rule. From an engineering perspective, sometimes the solutions are 'ugly'—we have to make the kernel behave differently for different programs to preserve compatibility."

— Linus Torvalds

The Tension:

| Desire | Reality |
| --- | --- |
| "Fix the bad design" | The "bad" design may be exactly what users depend on |
| "Clean up old code" | Old code supports legacy systems |
| "Modernize interfaces" | Old interfaces power production infrastructure |

4.3 The Real-World Impact

The problem with breaking compatibility is that users run legacy software:

  • Applications built 30 years ago on abandoned libraries
  • Embedded systems that can't be recompiled
  • Enterprise software maintained by vendors who no longer exist
  • Scientific instruments with custom kernel drivers

When you break compatibility, you're not just inconveniencing developers—you're breaking production systems that real people depend on.

4.4 The Linux Approach

| Practice | Implementation |
| --- | --- |
| New Features | Get new interfaces |
| Old Interfaces | Remain functional |
| Ugly Workarounds | Acceptable if they preserve compatibility |
| Right Fix | Never at the expense of existing users |

This is why Linux runs everywhere from smartwatches to supercomputers: users trust that it won't break their systems.
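As a loose analogy in Python rather than kernel C (everything below is hypothetical), here is what "new features get new interfaces, old interfaces remain functional" tends to look like in practice: the old entry point keeps its historical behavior, quirks included, while new behavior lives behind a new name.

```python
# Hypothetical library sketch of the compatibility pattern described above.

def parse_config(text: str) -> dict:
    """Original interface. Callers from years ago rely on two quirks:
    malformed lines are silently ignored, and every value comes back
    as a string. Changing either would be a regression."""
    result = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()      # always a string, even "42"
    return result

def parse_config_v2(text: str, *, strict: bool = True) -> dict:
    """New interface for new callers: stricter parsing and typed values.
    Existing users of parse_config() are untouched and opt in explicitly."""
    result = {}
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            continue
        if "=" not in line:
            if strict:
                raise ValueError(f"line {lineno}: expected 'key = value'")
            continue
        key, _, value = line.partition("=")
        value = value.strip()
        result[key.strip()] = int(value) if value.isdigit() else value
    return result
```

The old function may look redundant or even wrong next to the new one, but deleting it would break every caller that was written against its exact behavior.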

4.5 The Economic Argument

"If you're developing a library that others depend on, or a kernel that libraries depend on, which countless people depend on—you're not just fixing your own bug. You're breaking everyone who depends on your project."

— Linus Torvalds

The Cost of Regressions:

  • Immediate: Users can't upgrade
  • Short-term: Distributions can't adopt new kernels
  • Long-term: Loss of trust in the platform

The Cost of Avoiding Regressions:

  • Immediate: More complex code (handling edge cases)
  • Short-term: "Ugly" workarounds for compatibility
  • Long-term: Trust, stability, and universal adoption

For Linux, the math is clear: The cost of regressions far exceeds the cost of avoiding them.


V. What Triggers Linus: Developer Anti-Patterns

Even with a well-established process, human behavior still creates friction. Linus identified three behaviors that reliably cause problems:

5.1 The Saturday Night Pull Request

"I occasionally receive pull requests on Saturday—the day before I'm about to close the merge window. I think: 'Wasn't the code supposed to be ready before the merge window?' Sometimes I just say: 'No, wait for the next release in nine weeks. You're too late.'"

— Linus Torvalds

| Anti-Pattern | Correct Approach |
| --- | --- |
| Submitting at the end of the 2-week window | Submit early in the merge window |
| Waiting until the last minute | Have code ready before the window opens |
| Hoping for last-minute inclusion | Plan for the next cycle if too late |

5.2 Untested Code

Linus runs the latest kernel on all his personal machines. When he discovers bugs that developers should have caught:

"If I'm the first person to find a bug, that means someone didn't test properly before submitting. Sometimes I email maintainers directly: 'You didn't do your job this time.'"

— Linus Torvalds

Testing Best Practices:

| Testing Level | Responsibility |
| --- | --- |
| Unit Testing | Developer's responsibility |
| Integration Testing | Sub-maintainer's responsibility |
| Real Hardware Testing | Submit on hardware similar to production |
| Regression Testing | Ensure existing features still work |
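What "test before you submit" can look like in practice is sketched below: a rough Python wrapper that configures, builds, and boots the tree once under QEMU so obvious breakage never leaves the developer's machine. The defconfig target, the QEMU flags, and the initramfs path are assumptions about a typical x86_64 setup, not a prescribed kernel workflow.

```python
# Rough pre-submission smoke test: configure, build, boot once, eyeball the console.
# Assumes an x86_64 tree, qemu-system-x86_64 on PATH, and a small test initramfs.
import os
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def smoke_test(tree: str = ".", initramfs: str = "rootfs.cpio.gz") -> None:
    jobs = str(os.cpu_count() or 4)
    run(["make", "-C", tree, "defconfig"])            # or whatever config you normally test
    run(["make", "-C", tree, "-j" + jobs])
    # Boot headless and watch the console: an early panic or missing driver shows up
    # here, not in a maintainer's inbox. Exit QEMU with Ctrl-A then X.
    run([
        "qemu-system-x86_64",
        "-kernel", os.path.join(tree, "arch/x86/boot/bzImage"),
        "-initrd", initramfs,
        "-append", "console=ttyS0",
        "-nographic",
    ])

if __name__ == "__main__":
    smoke_test()
```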

5.3 The "I Fixed a Different Bug" Defense

This is what genuinely angers Linus:

"What I cannot accept is someone unwilling to admit they introduced a bug. Bugs happen—we're not perfect. But when someone points out your bug, you should say: 'Sorry, that's my fault, I'll fix it.' Not: 'But I fixed a different bug!' That's not acceptable."

— Linus Torvalds

Acceptable Response to Bug Report:

| Aspect | Good Response | Bad Response |
| --- | --- | --- |
| Acknowledgment | "That's my fault, I'll fix it" | "But I fixed something else" |
| Action | Fix the bug without breaking other things | Introduce new bugs while fixing |
| Communication | Transparent about the issue | Defensive or evasive |

5.4 The Regression Principle

"I'd rather have a known, documented bug that people can work around than bugs appearing everywhere, making the system completely unpredictable."

— Linus Torvalds

This is why regressions are treated so seriously:

  • Known bugs: Manageable, documented, workarounds possible
  • Regressions: Unpredictable, break working systems, erode trust

VI. Linus on AI: Skeptical of Hype, Open to Tools

6.1 The Stance: Anti-Hype, Pro-Utility

"I actually hate the term 'AI'—not because I dislike the technology, but because it's so overhyped. Everything must be AI these days. But I very much believe in AI as a tool."

— Linus Torvalds

The AI Hype Problem:

| Reality | Hype |
| --- | --- |
| AI is a useful tool | AI will replace programmers |
| AI helps with certain tasks | AI solves all problems |
| AI is an evolution | AI is a revolution |
| AI needs oversight | AI is autonomous |

6.2 AI for Code Review, Not Code Generation

While many focus on AI writing code, Linus is more interested in AI helping to maintain code:

  • Several projects already using AI for code review
  • AI tools can check patches before they reach Linus
  • One AI tool recently found issues that caught experts by surprise
  • Goal: Stop bad code before it enters the codebase

AI in Kernel Development:

| Current Use | Future Potential |
| --- | --- |
| Code review assistance | Automated patch validation |
| Bug detection in patches | Explaining complex interactions |
| Identifying problematic patterns | Predicting potential conflicts |
| Documentation generation | Test case generation |
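None of the kernel's actual review tooling is shown here, but the idea of machine-assisted screening is easy to sketch: a script that flags suspicious patterns in a patch before any human reads it. A real AI reviewer reasons far more deeply than this; every heuristic below is an invented example, not a kernel rule.

```python
# Toy patch pre-screen: flag obvious red flags in the lines a unified diff adds.
# Heuristics are illustrative stand-ins for what smarter review tooling would catch.
import re
import sys

CHECKS = [
    (re.compile(r"\bTODO\b|\bFIXME\b"), "leftover TODO/FIXME"),
    (re.compile(r"\s+$"), "trailing whitespace"),
    (re.compile(r"^\+.{101}"), "added line longer than 100 columns"),
]

def screen_patch(patch_text: str) -> list[str]:
    findings = []
    for lineno, raw in enumerate(patch_text.splitlines(), start=1):
        if not raw.startswith("+") or raw.startswith("+++"):
            continue                                  # only inspect lines the patch adds
        for pattern, message in CHECKS:
            if pattern.search(raw):
                findings.append(f"patch line {lineno}: {message}")
    return findings

if __name__ == "__main__":
    for finding in screen_patch(sys.stdin.read()):    # e.g. git format-patch output on stdin
        print(finding)
```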

6.3 The Compiler Analogy

Linus compares AI to compilers:

"AI is essentially the same as when compilers appeared. Don't think AI will suddenly revolutionize programming—we went through this decades ago with the people who wrote compilers."

— Linus Torvalds

Historical Perspective:

| Innovation | Impact | Timeline |
| --- | --- | --- |
| Assembly → Compilers | 1000x productivity gain | 1950s-1970s |
| Compilers → Optimizing Compilers | Better performance, abstraction | 1970s-1990s |
| AI Assistance | 10x-100x productivity on top | 2020s- |

Key Insight:

  • Compilers were the real revolution
  • AI is the next evolution of tooling
  • Both provide additional abstraction layers
  • Both make developers more efficient
  • Neither replaces developers

6.4 The "Polite Error" Advantage

Dirk Hohndel made an excellent observation:

"The difference between AI tools and compilers: When AI tools make mistakes, they're much more polite. When Claude is wrong, it says: 'Oh yes, you're right, I made a mistake, I'm sorry.' No compiler has ever said that to me."

— Dirk Hohndel

Linus added that some developers trained an AI on Linux kernel mailing list discussions, only to discover they needed to teach it to be less polite—because it had learned to communicate like kernel developers (who, shall we say, value directness over diplomacy).

6.5 The Bottom Line on AI

"AI gives us an extra abstraction layer. You can express ideas at a higher level without explaining every low-level detail to the system. That makes it easier and more efficient for humans to get things done."

— Linus Torvalds

Expected Timeline:

| Timeframe | AI Integration |
| --- | --- |
| Current | Experimental code review tools |
| Next Year | Standard part of kernel workflow |
| 3-5 Years | Integrated into maintainer toolchain |
| 5+ Years | As essential as compilers are today |

VII. Lessons for Modern Software Engineering

What can software teams learn from Linux kernel development?

7.1 A Predictable Release Cadence Reduces Stress

The 9-week cycle means:

| Traditional Approach | Linux Kernel Approach |
| --- | --- |
| Arbitrary deadlines | Natural rhythm |
| High-pressure releases | Steady progress |
| Missed deadline = disaster | Missed window = wait 9 weeks |
| Crunch culture | Sustainable pace |

Applicability:

  • Even for smaller projects, predictable cycles reduce stress
  • Missing a release shouldn't be catastrophic
  • Rhythm matters more than specific dates

7.2 Clear Rules Are Better Than Constant Decisions

The "no regressions" rule means:

| Without Clear Rules | With the "No Regressions" Rule |
| --- | --- |
| Constant debates about breaking changes | Clear expectation: don't break things |
| Decision fatigue for maintainers | Automatic decision-making |
| Users fear upgrades | Users trust upgrades |
| Fragmented ecosystem | Coordinated evolution |

Applicability:

  • Establish clear, non-negotiable principles
  • Make decisions automatic based on rules
  • Reduce cognitive load on teams

7.3 Manual Review at Scale Is Possible

Linus personally integrates on the order of 12,000 commits every 9 weeks by:

| Technique | Implementation |
| --- | --- |
| High-Level Focus | Understanding interactions, not implementation |
| Trusted Delegation | Sub-maintainers handle subsystem details |
| Personal Conflict Resolution | Maintains global awareness |
| Pattern Recognition | Decades of experience identifying problems |

Applicability:

  • Hierarchical review structures scale
  • Senior engineers should handle integration
  • Tooling should support, not replace, human review

7.4 Tooling Should Empower, Not Replace

AI shouldn't replace developers. Instead, tooling should:

| Purpose | Implementation |
| --- | --- |
| Catch Issues Early | Before code reaches reviewers |
| Provide Context | Explain complex interactions |
| Automate Checks | Run tests, static analysis |
| Add Abstraction | Let developers work at higher levels |

Applicability:

  • Tools should multiply human effectiveness
  • Augmentation, not automation
  • Human judgment remains essential

7.5 Admit Mistakes Early

The kernel culture values:

| Behavior | Impact |
| --- | --- |
| Own your bugs | Trust and respect from peers |
| Fix without breaking | System stability maintained |
| Transparent communication | Faster problem resolution |
| No excuse-making | Focus on solutions, not blame |

Applicability:

  • Blameless postmortems
  • Focus on systemic improvements
  • Psychological safety for admitting mistakes

7.6 Boring Is Good

"The highlight is that it's 'the same as always.' I like 'boring' and predictable. When your kernel is depended on by everyone—from phones to supercomputers—you don't want too many surprises."

— Linus Torvalds

| Exciting Project | Boring Project |
| --- | --- |
| Constant changes | Predictable evolution |
| Frequent breakage | Stable foundation |
| Heroic efforts | Sustainable processes |
| User anxiety | User trust |

For Infrastructure Software:

  • Reliability beats excitement
  • Predictability enables adoption
  • Boring is beautiful

VIII. The Human Element: Why Process Matters

What emerges from Linus's discussion is that managing a 40-million-line codebase isn't about technology alone—it's about creating processes that:

  1. Scale predictably across thousands of contributors
  2. Preserve stability over decades
  3. Enable newcomers to participate without breaking things
  4. Maintain trust with users who depend on the software
  5. Reduce stress for maintainers and contributors alike

8.1 The "I'm Not King of the World" Philosophy

Perhaps the most telling quote:

"I'm not 'King of the World,' so I can't control other projects. I can only set rules for the kernel."

— Linus Torvalds

This humility is core to Linux's success. Linus doesn't claim to have universal answers for all software projects. He focuses on what he can control: the principles that make the Linux kernel work.

8.2 Process as a Competitive Advantage

The Linux kernel has succeeded not because of brilliant technical decisions (though there are many) but because it developed sustainable social and technical processes that have stood the test of time.

Process Advantages:

| Aspect | Linux Kernel | Typical Projects |
| --- | --- | --- |
| Longevity | 30+ years and counting | Average 3-5 years |
| Scalability | Thousands of contributors | Struggle beyond dozens |
| Stability | No regressions allowed | Frequent breakage |
| Trust | Universal adoption | Fragmented ecosystem |
| Sustainability | Maintainer work-life balance | Burnout common |

IX. Conclusion: What 40 Million Lines Teach Us

The Linux kernel isn't just a technical achievement—it's a case study in sustainable software development at scale.

9.1 Key Takeaways

| Principle | Application |
| --- | --- |
| Process > Brilliance | Boring, consistent processes beat chaotic genius |
| No Regressions = Superpower | Users trust software that doesn't break |
| Manual Review Scales | With the right workflow and hierarchy |
| Tools Amplify, Don't Replace | AI is the next compiler, not the next programmer |
| Admit Mistakes | The best debugging strategy is honesty |
| Boring Is Good | Reliability matters more than excitement |

9.2 The Universal Lesson

As software projects grow from thousands to millions of lines of code, the Linux kernel's approach becomes increasingly relevant. The challenge isn't writing code—it's creating systems that allow thousands of people to collaborate without destroying each other's work.

The Real Challenge:

  • Not: How do we write faster?
  • Rather: How do we collaborate at scale without chaos?
  • Not: How do we add more features?
  • Rather: How do we maintain stability while growing?
  • Not: How do we use the latest tools?
  • Rather: How do we create sustainable processes?

Linus Torvalds has solved these problems for 40 million lines. The question for the rest of us is: How can we apply these lessons to our own projects?

9.3 Final Thoughts

The answer might just determine whether our software survives three decades—or three years.

"I'm not 'King of the World,' so I can't control other projects. I can only set rules for the kernel."

— Linus Torvalds

Perhaps the most important lesson is humility: acknowledge what you can control, focus on that, and let principles guide decisions when you can't make them personally.

For 40 million lines of code, this approach has worked remarkably well.


