# The Unseen Revolution: How Mixture of Experts (MoE) is Reshaping Large-Scale AI

For the last several years, the narrative in large-scale AI has been dominated by a simple, powerful idea: bigger is better. The race to build the most capable Large Language Models (LLMs) has often felt like an arms race in parameter counts, with each new state-of-the-art model boasting hundreds of billions—or even trillions—of parameters. This pursuit of scale has yielded incredible results, but it has come at the cost of monumental computational and energy expenditure. Training and even running these monolithic “dense” models is an exercise in brute force.

But what if there’s a smarter way to scale? What if, instead of making one impossibly large brain, we could build a committee of specialists? This is the core idea behind a revolutionary architecture that is quietly redefining the frontier of AI: the **Mixture of Experts (MoE)**.

---

### From Dense Monoliths to Sparse Specialists

To understand the elegance of MoE, we first need to appreciate the inefficiency of traditional dense models. In a dense transformer architecture, every single parameter is engaged for every single piece of data that flows through it. When you ask a 175-billion-parameter dense model a question, all 175 billion parameters are involved in computing the answer. It’s like asking an entire university faculty—from the poet laureate to the quantum physicist—to collectively decide on the best way to bake a cake. It’s powerful, but incredibly wasteful.

A Mixture of Experts model fundamentally changes this dynamic. Instead of a single, massive set of weights, an MoE layer is composed of two key components:

1. **A set of “Expert” networks:** These are smaller, specialized neural networks. Each expert can be thought of as a specialist with a deep understanding of a particular domain of knowledge, patterns, or data types.
2. **A “Gating Network” or “Router”:** This is the clever part. The gating network is a small neural network that acts as a traffic controller. For each input (like a word or token in a sentence), the router examines it and decides which one or two experts are best suited to handle it.

The result is a process called **sparse activation**. While the model might have a staggering total number of parameters (e.g., 1.8 trillion), only a small fraction of them—the chosen experts—are activated for any given token. The other experts remain dormant, saving immense computational resources. To return to our analogy, the gating network is the university dean who, upon receiving the cake-baking query, directs it only to the culinary arts professor, letting everyone else continue their work undisturbed.
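
To make these two components concrete, here is a minimal, illustrative sketch of an MoE layer with top-2 routing in PyTorch. The class name, layer sizes, and expert count are assumptions for the example, not details of any particular production model:

```python
# Minimal sketch of a Mixture-of-Experts layer with top-2 routing.
# Illustrative only: real systems also add load-balancing losses,
# capacity limits, and expert parallelism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        # The gating network ("router") scores every expert for each token.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores = self.gate(x)                                # (tokens, experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)    # pick the top-k experts per token
        weights = F.softmax(weights, dim=-1)                 # normalize over the chosen experts

        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, k] == e                     # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

# Example: route 16 tokens of width 512 through 8 experts, activating only 2 per token.
tokens = torch.randn(16, 512)
layer = MoELayer(d_model=512, d_hidden=2048)
print(layer(tokens).shape)  # torch.Size([16, 512])
```

Only the experts selected by the router ever run a forward pass for a given token; the remaining experts contribute to the model's capacity but not to that token's compute.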

### The Practical Implications: Efficiency Meets Scale

This architectural shift from dense to sparse isn’t just an academic curiosity; it has profound, practical consequences that are already being leveraged by leading AI labs.

* **Drastically Reduced Inference Cost:** This is the most immediate benefit. Because only a fraction of the model is used for each computation, the cost and time required to generate a response (inference) are significantly lower than for a dense model of a similar total parameter count. An MoE model with a trillion parameters might have an inference cost equivalent to a much smaller dense model of only 100-200 billion parameters (see the back-of-the-envelope calculation after this list).

* **Breaking the Scaling Barrier:** MoE allows us to continue scaling the *knowledge capacity* of a model (total parameters) without a proportional increase in the computational cost of using it. This opens the door to models with far more parameters than would ever be feasible with a dense architecture, potentially leading to a new leap in capabilities and nuanced understanding.

* **Increased Specialization and Performance:** By allowing different experts to specialize, the model can develop more refined and accurate representations for different types of information. One expert might become a master of programming syntax, another of poetic language, and a third of scientific reasoning. This specialization can lead to higher quality outputs across a diverse range of tasks.
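
The following back-of-the-envelope calculation illustrates the inference-cost point above. The parameter counts, expert count, and routing width are assumed for the example; they are not figures for any specific model:

```python
# Rough, illustrative arithmetic: an MoE with 64 experts per layer and top-2
# routing activates about 2/64 of its expert parameters per token, plus the
# shared (attention, embedding, router) parameters that every token uses.
total_expert_params = 1.7e12   # hypothetical expert parameters across all layers
shared_params       = 0.1e12   # hypothetical attention/embedding/router parameters
num_experts, top_k  = 64, 2

active = shared_params + total_expert_params * (top_k / num_experts)
total = total_expert_params + shared_params
print(f"~{active / 1e9:.0f}B parameters active per token, out of {total / 1e12:.1f}T total")
# -> roughly 150B active out of 1.8T total, which is why inference cost tracks
#    a much smaller dense model despite the enormous total parameter count.
```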

---

### The Road Ahead

Of course, the MoE approach is not without its challenges. Training these models is a complex art, requiring sophisticated techniques to ensure all experts are utilized effectively (load balancing) and that the gating network learns to route tokens intelligently. There are also significant hardware and software considerations for deploying models where different parts of the network are activated dynamically.
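
The article does not prescribe a particular load-balancing method; as one illustration, here is a sketch of the auxiliary loss popularized by the Switch Transformer, which nudges the router toward spreading tokens evenly across experts:

```python
# Sketch of a Switch-Transformer-style auxiliary load-balancing loss: it pushes
# the fraction of tokens dispatched to each expert (f_i) and the router's mean
# probability for that expert (P_i) toward a uniform 1/N split.
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits: torch.Tensor, chosen_expert: torch.Tensor) -> torch.Tensor:
    # router_logits: (num_tokens, num_experts); chosen_expert: (num_tokens,) top-1 routing choices
    num_experts = router_logits.shape[-1]
    probs = F.softmax(router_logits, dim=-1)
    # f_i: fraction of tokens actually dispatched to each expert
    f = F.one_hot(chosen_expert, num_experts).float().mean(dim=0)
    # P_i: average router probability assigned to each expert
    p = probs.mean(dim=0)
    return num_experts * torch.sum(f * p)
```

In practice this term is scaled by a small coefficient (on the order of 0.01 in the Switch Transformer paper) and added to the main training loss, so the router learns to route intelligently without starving any expert.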

Despite these hurdles, the Mixture of Experts architecture represents a paradigm shift. It moves us away from the brute-force scaling of dense models toward a more intelligent, efficient, and biologically plausible approach to building artificial intelligence. It proves that the future of AI isn’t just about being bigger—it’s about being smarter in how that size is used. The unseen revolution is already here, and it’s built on a committee of experts.

This post is based on the original article at https://www.therobotreport.com/boston-dynamics-tri-use-large-behavior-models-train-atlas-humanoid/.
