
Moore’s Law

Moore’s Law is a famous observation and prediction made by Gordon Moore, co-founder of Intel, regarding the exponential growth of computing power.

The most common modern formulation of the observation states that the number of transistors that can be placed inexpensively on an integrated circuit (microchip) doubles approximately every two years.

This trend does not represent a physical or natural law, but rather an empirical observation that became a self-fulfilling prophecy and a guiding principle for the entire semiconductor industry, driving consistent innovation and progress in miniaturization.

The Core Concept and History

Moore’s original 1965 paper predicted that the number of components (which primarily meant transistors) on an integrated circuit would double every year for the next ten years. In 1975, he revised the prediction to a doubling every two years, which is the rate most commonly cited today.

The key effects of this exponential increase in transistor density include:

  • Increased Performance: More transistors on a chip mean higher processing power and speed.
  • Decreased Cost: As the density increases, the cost per transistor decreases exponentially, making computing more affordable.
  • Increased Energy Efficiency: Smaller transistors generally require less power to operate, improving efficiency.

The result is that consumer electronics—from personal computers to smartphones—continuously get faster, smaller, and cheaper over time.
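The doubling rule above can be turned into simple arithmetic. The sketch below projects transistor counts under a strict doubling every two years, using the Intel 4004 (1971, commonly cited at roughly 2,300 transistors) as an illustrative baseline; the exact figures are assumptions for demonstration, not industry data.

```python
# Illustrative projection of Moore's Law: a strict doubling of
# transistor count every two years from a fixed baseline.
# Baseline (assumed for illustration): Intel 4004, 1971, ~2,300 transistors.

def projected_transistors(year, base_year=1971, base_count=2_300):
    """Return the transistor count predicted by doubling
    every two years since base_year."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Even this toy model shows why the trend reshaped the industry: fifty years of doubling every two years multiplies the baseline by 2^25, more than thirty million times.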

Global Business Examples

Moore’s Law has served as the fundamental roadmap for the technology sector, compelling companies globally to invest heavily in research and development to maintain the pace of technological advancement.

Intel and AMD: These are prime examples in the semiconductor industry. They have used Moore’s Law as a core business driver, setting aggressive targets for the density and performance of their central processing units (CPUs) and graphics processing units (GPUs). The relentless competition to achieve the next node (smaller feature size, e.g., from 7nm to 5nm) is a direct response to the expectation set by Moore’s Law.

TSMC (Taiwan Semiconductor Manufacturing Company): As the world’s largest contract chip manufacturer, TSMC’s entire business model revolves around constantly pushing the limits of lithography (the process of printing circuit patterns onto silicon) to create smaller, denser, and more advanced chips for clients like Apple and NVIDIA. Their massive capital expenditure on next-generation fabrication plants (fabs) is aimed at sustaining the doubling of density.

Consumer Electronics (Apple, Samsung): Companies that build end-user devices benefit directly. Moore’s Law allows them to release new generations of products (like smartphones) that are significantly more powerful than their predecessors, fitting powerful processors into thinner, lighter form factors while maintaining competitive price points.


Limits and the Future

In recent years, the industry has encountered significant physical and economic challenges that have led many experts to question whether the traditional pace of Moore’s Law can be sustained indefinitely.

  1. Physical Limits: As transistors approach the atomic scale (now measured in a few nanometers), quantum effects, such as electron tunneling and leakage, become dominant. This increases heat and power consumption, fundamentally limiting how small transistors can be.
  2. Economic Limits: The cost of designing and building new fabrication plants that can handle cutting-edge lithography (like Extreme Ultraviolet or EUV) has become extraordinarily high, making it harder to justify the investment required to achieve the next doubling in density.

In response, the industry is shifting to new strategies, sometimes referred to as “More than Moore” or “Beyond Moore’s Law,” to continue improving computing performance through other means:

  • Advanced Packaging: Stacking multiple integrated circuits vertically (3D integration) or connecting disparate chips on a single package to increase overall system performance without relying solely on transistor shrinking.
  • Specialized Architectures: Developing chips designed for specific tasks, such as Google’s Tensor Processing Units (TPUs) for machine learning, or GPUs for parallel processing, rather than relying on general-purpose CPU improvements.
  • New Computing Paradigms: Exploring fundamentally different technologies like quantum computing or neuromorphic (brain-inspired) computing that could eventually supersede silicon-based limitations.