Pragmatic idealist. Worked on Ubuntu Phone. Inkscape co-founder. Probably human.

Google says it dropped the energy cost of AI queries by 33x in one year


So far this year, electricity use in the US is up nearly 4 percent compared to the same period the year prior. That comes after decades of essentially flat use, a change that has been associated with a rapid expansion of data centers. And a lot of those data centers are being built to serve the boom in AI usage. Given that some of this rising demand is being met by increased coal use (as of May, coal's share of generation is up about 20 percent compared to the year prior), the environmental impact of AI is looking pretty bad.

But it's difficult to know for certain without access to the sorts of details that you'd only get by running a data center, such as how often the hardware is in use, and how often it's serving AI queries. So, while academics can test the power needs of individual AI models, it's hard to extrapolate that to real-world use cases.

By contrast, Google has all sorts of data available from real-world use cases. As such, its release of a new analysis of AI's environmental impact is a rare opportunity to peer a tiny bit under the hood. But the new analysis suggests that energy estimates are currently a moving target, as the company says its data shows the energy drain of a typical text prompt has dropped by a factor of 33 in just the past year.

What’s in, what’s out

One of the big questions when doing these analyses is what to include. There's obviously the energy consumed by the processors when handling a request. But there's also the energy required for memory, storage, cooling, and more needed to support those processors. Beyond that, there's the energy used to manufacture all that hardware and build the facilities that house them. AIs also require a lot of energy during training, a fraction of which might be counted against any single request made to the model post-training.

Any analysis of energy use needs to make decisions about which of these factors to consider. Many past analyses have skipped various factors, largely because the people performing them don't have access to the relevant data. They probably don't know how many processors need to be dedicated to a given task, much less the carbon emissions associated with producing them.

But Google has access to pretty much everything: the energy used to service a request, the hardware needed to do so, the cooling requirements, and more. And, since it's becoming standard practice to track both the Scope 2 and Scope 3 emissions produced by a company's activities (Scope 2 covers purchased electricity and other energy; Scope 3 covers the supply chain), the company likely has access to those numbers as well.

For the new analysis, Google tracks the energy used by CPUs, dedicated AI accelerators, and memory, both while actively handling queries and while idling between them. It also follows the energy and water use of the data center as a whole, and it knows what else is in that data center, so it can estimate the fraction that's given over to serving AI queries. It's also tracking the carbon emissions associated with the electricity supply, as well as the emissions that resulted from the production of all the hardware it's using.

Three major factors don't make the cut. One is the environmental cost of the networking capacity used to receive requests and deliver results, which will vary considerably depending on the request. The same applies to the computational load on the end-user hardware; that's going to see vast differences between someone using a gaming desktop and someone using a smartphone. The one thing that Google could have made a reasonable estimate of, but didn't, is the impact of training its models. At this point, it will clearly know the energy costs there and can probably make reasonable estimates of a trained model's useful lifetime and number of requests handled during that period. But it didn't include that in the current estimates.

To come up with typical numbers, the team that did the analysis tracked requests, the hardware that served them, and that hardware's idle time over a 24-hour period. This gives them an energy-per-request estimate, which differs based on the model being used. For each day, they identify the median prompt and use that to calculate the environmental impact.
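
To make that bookkeeping concrete, here is a minimal sketch of the per-prompt arithmetic. The numbers are entirely hypothetical, and a single PUE-style overhead multiplier stands in for the data-center-level factors Google tracks; this illustrates the accounting, not the paper's actual methodology.

```python
# Minimal sketch of per-prompt energy accounting, with made-up numbers.
# Assumptions (not from Google's paper): a PUE-style overhead multiplier and a
# single fleet-wide count of prompts served during the 24-hour window.

def energy_per_prompt_wh(active_wh: float,
                         idle_wh: float,
                         overhead_multiplier: float,
                         prompts_served: int) -> float:
    """Energy attributed to one prompt, in watt-hours.

    active_wh: energy used by CPUs, accelerators, and memory while serving prompts
    idle_wh:   energy used by the same hardware while idle between prompts
    overhead_multiplier: data-center overhead (cooling, power conversion), e.g. ~1.1
    prompts_served: prompts handled by this hardware over the measurement window
    """
    total_wh = (active_wh + idle_wh) * overhead_multiplier
    return total_wh / prompts_served

# Hypothetical 24-hour window for one serving cluster:
print(energy_per_prompt_wh(active_wh=180_000, idle_wh=20_000,
                           overhead_multiplier=1.1, prompts_served=1_000_000))
# -> 0.22 Wh per prompt, the same order of magnitude as the figure quoted below
```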

Going down

Using those estimates, they find that the impact of an individual text request is pretty small. "We estimate the median Gemini Apps text prompt uses 0.24 watt-hours of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water," they conclude. To put that in context, they estimate that the energy use is similar to about nine seconds of TV viewing.
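
That TV comparison is easy to sanity-check. Assuming a television drawing roughly 100 watts (an assumed figure, not one from the paper), 0.24 watt-hours corresponds to a little under nine seconds of viewing:

```python
# Sanity check on the "nine seconds of TV" comparison.
prompt_energy_wh = 0.24   # median Gemini Apps text prompt, per the paper
tv_power_w = 100.0        # assumed TV power draw; the paper doesn't specify one
seconds = prompt_energy_wh * 3600 / tv_power_w
print(f"{seconds:.1f} s of TV viewing")  # -> 8.6 s, roughly nine seconds
```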

The bad news is that the volume of requests is undoubtedly very high. The company has chosen to execute an AI operation with every single search request, a compute demand that simply didn't exist a couple of years ago. So, while the individual impact is small, the cumulative cost is likely to be considerable.

The good news? Just a year ago, it would have been far, far worse.

Some of this is just down to circumstances. With the boom in solar power in the US and elsewhere, it has gotten easier for Google to arrange for renewable power. As a result, the carbon emissions per unit of energy consumed saw a 1.4x reduction over the past year. But the biggest wins have been on the software side, where different approaches have led to a 33x reduction in energy consumed per prompt.
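
Treating the grid-mix and software improvements as independent multipliers gives a rough sense of the combined effect on per-prompt emissions. This is back-of-the-envelope arithmetic under the assumption that the two factors simply compound, not a figure reported in the paper.

```python
# Back-of-the-envelope: combined reduction in carbon emissions per prompt,
# assuming the energy and grid-mix improvements multiply independently.
energy_reduction = 33.0            # energy per prompt, year over year
carbon_intensity_reduction = 1.4   # emissions per unit of energy consumed
print(energy_reduction * carbon_intensity_reduction)  # -> 46.2x fewer gCO2e per prompt
```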

Breakdown of energy use by hardware: AI accelerators account for the largest share, followed by CPU and RAM, with idle machines and overhead each contributing about 10 percent. Most of the energy use in serving AI requests comes from time spent in the custom accelerator chips. Credit: Elsworth et al.

The Google team describes a number of optimizations the company has made that contribute to this. One is an approach termed Mixture-of-Experts, which involves activating only the portion of an AI model needed to handle a specific request and can drop computational needs by a factor of 10 to 100. Google has also developed a number of compact versions of its main model, which further reduce the computational load. Data center management plays a role as well, as the company can make sure that any active hardware is fully utilized while allowing the rest to stay in a low-power state.
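
To make the Mixture-of-Experts idea concrete, here is a toy sketch of top-k routing in plain NumPy: a small gating network scores the experts for each input, and only the highest-scoring experts actually run, so most of the model's parameters are never touched for any given request. This illustrates the general technique, not Google's implementation or model sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer: 8 experts, but only the top 2 run per input.
d_model, n_experts, top_k = 16, 8, 2
gate_w = rng.normal(size=(d_model, n_experts))              # gating network
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one input vector through only its top-k experts."""
    scores = x @ gate_w                          # score every expert
    chosen = np.argsort(scores)[-top_k:]         # indices of the top-k experts
    weights = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()
    # Only the chosen experts' weights are used below; the other experts
    # contribute no computation at all for this input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.normal(size=d_model))
print(y.shape)  # (16,) -- same output size, but only 2 of 8 experts computed
```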

The other factor is that Google designs its own custom AI accelerators and architects the software that runs on them, allowing it to optimize both sides of the hardware/software divide so they work well together. That's especially critical given that activity on the AI accelerators accounts for over half of the total energy use of a query. Google also has lots of experience running efficient data centers, and that expertise carries over to its AI operations.

The result of all this is that Google estimates the energy consumption of a typical text query has gone down by 33x in the last year alone. That has knock-on effects: things like the carbon emissions associated with, say, building the hardware get diluted by the fact that the hardware can handle far more queries over the course of its useful lifetime.

Given these efficiency gains, it would have been easy for Google to simply use the results as a PR exercise; instead, the company has detailed its methodology and considerations in something that reads very much like an academic publication. It's taking that approach because the people behind this work would like to see others in the field adopt its approach. "We advocate for the widespread adoption of this or similarly comprehensive measurement frameworks to ensure that as the capabilities of AI advance, their environmental efficiency does as well," they conclude.


Some pro athletes get better with age. High-pressure environments are rewiring their brains


In a world where sports are dominated by youth and speed, some athletes in their late 30s and even 40s are not just keeping up—they are thriving.

Novak Djokovic is still outlasting opponents nearly half his age on tennis’s biggest stages. LeBron James continues to dictate the pace of NBA games, defending centers and orchestrating plays like a point guard. Allyson Felix won her 11th Olympic medal in track and field at age 35. And Tom Brady won a Super Bowl at 43, long after most NFL quarterbacks retire.

The sustained excellence of these athletes is not just due to talent or grit—it’s biology in action. Staying at the top of their game reflects a trainable convergence of brain, body, and mindset. I’m a performance scientist and a physical therapist who has spent over two decades studying how athletes train, taper, recover, and stay sharp. These insights aren’t just for high-level athletes—they hold true for anyone navigating big life changes or working to stay healthy.

Increasingly, research shows that the systems that support high performance—from motor control to stress regulation to recovery—are not fixed traits but trainable capacities. In a world of accelerating change and disruption, the ability to adapt to new changes may be the most important skill of all. So, what makes this adaptability possible—biologically, cognitively, and emotionally?

The amygdala and prefrontal cortex

Neuroscience research shows that with repeated exposure to high-stakes situations, the brain begins to adapt. The prefrontal cortex—the region most responsible for planning, focus, and decision-making—becomes more efficient in managing attention and making decisions, even under pressure.

During stressful situations, such as facing match point in a Grand Slam final, this area of the brain can help an athlete stay composed and make smart choices—but only if it’s well trained.

In contrast, the amygdala, our brain’s threat detector, can hijack performance by triggering panic, freezing motor responses, or fueling reckless decisions. With repeated exposure to high-stakes moments, elite athletes gradually reshape this brain circuit.

They learn to tune down amygdala reactivity and keep the prefrontal cortex online, even when the pressure spikes. This refined brain circuitry enables experienced performers to maintain their emotional control.

Creating a brain-body loop

Brain-derived neurotrophic factor, or BDNF, is a molecule that supports adapting to changes quickly. Think of it as fertilizer for the brain. It enhances neuroplasticity: the brain’s ability to rewire itself through experience and repetition. This rewiring helps athletes build and reinforce the patterns of connections between brain cells to control their emotion, manage their attention, and move with precision.

BDNF levels increase with intense physical activity, mental focus, and deliberate practice, especially when combined with recovery strategies such as sleep and deep breathing.

Elevated BDNF levels are linked to better resilience against stress and may support faster motor learning, which is the process of developing or refining movement patterns.

For example, after losing a set, Djokovic often resets by taking deep, slow breaths—not just to calm his nerves, but to pause and regain control. This conscious breathing helps him restore focus and likely quiets the stress signals in his brain.

In moments like these, higher BDNF availability likely allows him to regulate his emotions and recalibrate his motor response, helping him to return to peak performance faster than his opponent.

Rewiring your brain

In essence, athletes who repeatedly train and compete in pressure-filled environments are rewiring their brain to respond more effectively to those demands. This rewiring, from repeated exposures, helps boost BDNF levels and in turn keeps the prefrontal cortex sharp and dials down the amygdala’s tendency to overreact.

This kind of biological tuning is what scientists call cognitive reserve and allostasis—the process the body uses to make changes in response to stress or environmental demands to remain stable. It helps the brain and body be flexible, not fragile.

Importantly, this adaptation isn’t exclusive to elite athletes. Studies on adults of all ages show that regular physical activity—particularly exercises that challenge both body and mind—can raise BDNF levels, improve the brain’s ability to adapt and respond to new challenges, and reduce stress reactivity.

Programs that combine aerobic movement with coordination tasks, such as dancing, complex drills, or even fast-paced walking while problem-solving, have been shown to preserve skills such as focus, planning, impulse control, and emotional regulation over time.

After an intense training session or a match, you will often see athletes hopping on a bike or spending some time in the pool. These low-impact, gentle movements, known as active recovery, help tone down the nervous system gradually.

Outside of active recovery, sleep is where the real reset and repair happen. Sleep aids in learning and strengthens the neural connections challenged during training and competition.

Over time, this convergence creates a trainable loop between the brain and body that is better equipped to adapt, recover, and perform.

Lessons beyond sport

While the spotlight may shine on sporting arenas, you don’t need to be a pro athlete to train these same skills.

The ability to perform under pressure is a result of continuing adaptation. Whether you’re navigating a career pivot, caring for family members, or simply striving to stay mentally sharp as the world changes, the principles are the same: Expose yourself to challenges, regulate stress, and recover deliberately.

While speed, agility, and power may decline with age, some sport-specific skills such as anticipation, decision-making, and strategic awareness actually improve. Athletes with years of experience develop faster mental models of how a play will unfold, which allows them to make better and faster choices with minimal effort. This efficiency, built through years of reinforcing neural circuits, doesn't immediately vanish with age. That is one reason experienced athletes often excel even when they are well past their physical prime.

Physical activity, especially dynamic and coordinated movement, boosts the brain’s capacity to adapt. So does learning new skills, practicing mindfulness, and even rehearsing performance under pressure. In daily life, this might be a surgeon practicing a critical procedure in simulation, a teacher preparing for a tricky parent meeting, or a speaker practicing a high-stakes presentation to stay calm and composed when it counts. These aren’t elite rituals—they’re accessible strategies for building resilience, motor efficiency, and emotional control.

Humans are built to adapt—with the right strategies, you can sustain excellence at any stage of life.

Fiddy Davis Jaihind Jothikaran is an associate professor of kinesiology at Hope College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Pakistan is tapping into solar power at an 'unprecedented' rate. Here's why

Solar panels on the rooftops of houses in Islamabad, Pakistan. The country is in the midst of a solar boom that solar analysts describe as "unprecedented."

Solar experts say there's never been a faster adoption of solar, with panels popping up on rooftops.

(Image credit: Betsy Joles for NPR)


China’s Guowang megaconstellation is more than another version of Starlink


US defense officials have long worried that China's Guowang satellite network might give the Chinese military access to the kind of ubiquitous connectivity US forces now enjoy with SpaceX's Starlink network.

It turns out the Guowang constellation could offer a lot more than a homemade Chinese alternative to Starlink's high-speed consumer-grade broadband service. China has disclosed little information about the Guowang network, but there's mounting evidence that the satellites may provide Chinese military forces a tactical edge in any future armed conflict in the Western Pacific.

The megaconstellation is managed by a secretive company called China SatNet, which was established by the Chinese government in 2021. SatNet has released little information since its formation, and the group doesn't have a website. Chinese officials have not detailed any of the satellites' capabilities or signaled any intention to market the services to consumers.

Another Chinese satellite megaconstellation in the works, called Qianfan, appears to be a closer analog to SpaceX's commercial Starlink service. Qianfan satellites are flat in shape, making them easier to pack onto the tops of rockets before launch. This is a design approach pioneered by SpaceX with Starlink. The backers of the Qianfan network began launching the first of up to 1,300 broadband satellites last year.

Unlike Starlink, the Guowang network consists of satellites manufactured by multiple companies, and they launch on several types of rockets. On its face, the architecture taking shape in low-Earth orbit appears to be more akin to SpaceX's military-grade Starshield satellites and the Space Development Agency's future tranches of data relay and missile-tracking satellites.

Guowang, or "national network," may also bear similarities to something the US military calls MILNET. Proposed in the Trump administration's budget request for next year, MILNET will be a partnership between the Space Force and the National Reconnaissance Office (NRO). One of the design alternatives under review at the Pentagon is to use SpaceX's Starshield satellites to create a "hybrid mesh network" that the military can rely on for a wide range of applications.

Picking up the pace

In recent weeks, China's pace of launching Guowang satellites has approached that of Starlink. China has launched five groups of Guowang satellites since July 27, while SpaceX has launched six Starlink missions using its Falcon 9 rockets over the same period.

A single Falcon 9 launch can haul up to 28 Starlink satellites into low-Earth orbit, while China's rockets have launched between five and 10 Guowang satellites per flight to altitudes three to four times higher. China has now placed 72 Guowang satellites into orbit since launches began last December, a small fraction of the 12,992-satellite fleet China has outlined in filings with the International Telecommunication Union.

The constellation described in China's ITU filings will include one group of Guowang satellites between 500 and 600 kilometers (311 and 373 miles), around the same altitude as Starlink. Another shell of Guowang satellites will fly roughly 1,145 kilometers (711 miles) above the Earth. So far, all of the Guowang satellites China has launched since last year appear to be heading for the higher shell.

This higher altitude limits the number of Guowang satellites China's stable of launch vehicles can carry. On the other hand, fewer satellites are required for global coverage from the higher orbit.
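
A quick bit of link geometry shows why the higher shell needs fewer satellites. Assuming a spherical Earth and a 25-degree minimum elevation angle for ground terminals (both assumptions of this sketch; the article gives neither), a single satellite's footprint grows roughly threefold between a Starlink-like 550 km shell and the roughly 1,145 km Guowang shell.

```python
import math

R_EARTH_KM = 6371.0

def coverage_fraction(altitude_km: float, min_elevation_deg: float = 25.0) -> float:
    """Fraction of Earth's surface visible to one satellite above a minimum
    elevation angle, assuming a spherical Earth (standard link-geometry formula)."""
    eps = math.radians(min_elevation_deg)
    # Earth central angle of the satellite's coverage cap
    lam = math.acos((R_EARTH_KM / (R_EARTH_KM + altitude_km)) * math.cos(eps)) - eps
    return (1 - math.cos(lam)) / 2

low = coverage_fraction(550.0)     # Starlink-like shell
high = coverage_fraction(1145.0)   # higher Guowang shell
print(f"{low:.4f} vs {high:.4f} (~{high / low:.1f}x larger footprint per satellite)")
```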

A prototype Guowang satellite is seen prepared for encapsulation inside the nose cone of a Long March 12 rocket last year. This is one of the only views of a Guowang spacecraft China has publicly released. Credit: Hainan International Commercial Aerospace Launch Company Ltd.

SpaceX has already launched nearly 200 of its own Starshield satellites for the NRO to use for intelligence, surveillance, and reconnaissance missions. The next step, whether it's the SDA constellation, MILNET, or something else, will seek to incorporate hundreds or thousands of low-Earth orbit satellites into real-time combat operations—things like tracking moving targets on the ground and in the air, targeting enemy vehicles, and relaying commands between allied forces. The Trump administration's Golden Dome missile defense shield aims to extend real-time targeting to objects in the space domain.

In military jargon, the interconnected set of links used to detect, track, target, and strike a target is called a kill chain or kill web. This is what US Space Force officials are pushing to develop with the Space Development Agency, MILNET, and other future space-based networks.

So, where is the US military in building out this kill chain? The military has long had the ability to detect and track an adversary's activities from space. Spy satellites have orbited the Earth since the dawn of the Space Age.

Much of the rest of the kill chain—like targeting and striking—remains forward work for the Defense Department. Many of the Pentagon's existing capabilities are classified, but simply put, the multibillion-dollar satellite constellations the Space Force is building just for these purposes still haven't made it to the launch pad. In some cases, they haven't made it out of the lab.

Is space really the place?

The Space Development Agency is supposed to begin launching its first generation of more than 150 satellites later this year. These will put the Pentagon in a position to detect smaller, fainter ballistic and hypersonic missiles and provide targeting data for allied interceptors on the ground or at sea.

Space Force officials envision a network of satellites that can essentially control a terrestrial battlefield from orbit. The way future-minded commanders tell it, a fleet of thousands of satellites fitted with exquisite sensors and machine learning will first detect a moving target, whether it's a land vehicle, aircraft, naval ship, or missile. Then, that spacecraft will transmit targeting data via a laser link to another satellite that can relay the information to a shooter on Earth.

US officials believe Guowang is a step toward integrating satellites into China's own kill web. It might be easier for them to dismiss Guowang if it were simply a Chinese version of Starlink, but open source information suggests it's something more. Perhaps Guowang is more akin to megaconstellations being developed and deployed for the US Space Force and the National Reconnaissance Office.

If this is the case, China could have a head start on completing all the links for a celestial kill chain. The NRO's Starshield satellites in space today are presumably focused on collecting intelligence. The Space Force's megaconstellation of missile tracking, data relay, and command and control satellites is not yet in orbit.

Chinese media reports suggest the Guowang satellites could accommodate a range of instrumentation, including broadband communications payloads, laser communications terminals, synthetic aperture radars, and optical remote sensing payloads. This sounds a lot like a mix of SpaceX and the NRO's Starshield fleet, the Space Development Agency's future constellation, and the proposed MILNET program.

A Long March 5B rocket lifts off from the Wenchang Space Launch Site in China's Hainan Province on August 13, 2025, with a group of Guowang satellites. Credit: Luo Yunfei/China News Service/VCG via Getty Images

In testimony before a Senate committee in June, the top general in the US Space Force said it is "worrisome" that China is moving in this direction. Gen. Chance Saltzman, the Chief of Space Operations, used China's emergence as an argument for developing space weapons, euphemistically called "counter-space capabilities."

"The space-enabled targeting that they've been able to achieve from space has increased the range and accuracy of their weapon systems to the point where getting anywhere close enough [to China] in the Western Pacific to be able to achieve military objectives is in jeopardy if we can’t deny, disrupt, degrade that... capability," Saltzman said. "That’s the most pressing challenge, and that means the Space Force needs the space control counter-space capabilities in order to deny that kill web."

The US military's push to migrate many wartime responsibilities to space is not without controversy. The Trump administration wants to cancel purchases of new E-7 jets designed to serve as nerve centers in the sky, where Air Force operators receive signals about what's happening in the air, on the ground, and in the water for hundreds of miles around. Instead, much of this responsibility would be transferred to satellites.

Some retired military officials, along with some lawmakers, argue against canceling the E-7. They say there's too little confidence in when satellites will be ready to take over. If the Air Force goes ahead with the plan to cancel the E-7, the service intends to bridge the gap by extending the life of a fleet of Cold War-era E-3 Sentry airplanes, commonly known as AWACS (Airborne Warning and Control System).

But the high ground of space offers notable benefits. First, a proliferated network of satellites has global reach, and airplanes don't. Second, satellites could do the job on their own, with some help from artificial intelligence and edge computing. This would remove humans from the line of fire. And finally, using a large number of satellites adds resilience, since an attack on one or several of them won't meaningfully degrade US military capabilities.

In China, it takes a village

Brig. Gen. Anthony Mastalir, commander of US Space Forces in the Indo-Pacific region, told Ars last year that US officials are watching to see how China integrates satellite networks like Guowang into military exercises.

"What I find interesting is China continues to copy the US playbook," Mastalir said. "So as you look at the success that the United States has had with proliferated architectures, immediately now we see China building their own proliferated architecture, not just the transport layer and the comm layer, but the sensor layer as well. You look at their pursuit of reusability in terms of increasing their launch capacity, which is currently probably one of their shortfalls. They have plans for a quicker launch tempo."

A Long March 6A carries a group of Guowang satellites into orbit on July 27, 2025, from the Taiyuan Satellite Launch Center in north China's Shanxi Province. China has used four different rocket configurations to place five groups of Guowang satellites into orbit in the last month. Credit: Wang Yapeng/Xinhua via Getty Images

China hasn't recovered or reused an orbital-class booster yet, but several Chinese companies are working on it. SpaceX, meanwhile, continues to recycle its fleet of Falcon 9 boosters while simultaneously developing a massive super-heavy-lift rocket and churning out dozens of Starlink and Starshield satellites every week.

China doesn't have its own version of SpaceX. In China, it's taken numerous commercial and government-backed enterprises to reach a launch cadence that, so far this year, is a little less than half that of SpaceX. But the flurry of Guowang launches in the last few weeks shows that China's satellite and rocket factories are picking up the pace.

Mastalir said China's actions in the South China Sea, where it has laid claim to disputed islands near Taiwan and the Philippines, could extend farther from Chinese shores with the help of space-based military capabilities.

"Their specific goals are to be able to track and target US high-value assets at the time and place of their choosing," he said. "That has started with an A2AD, an Anti-Access Area Denial strategy, which is extended to the first island chain and now the second island chain, and eventually all the way to the west coast of California."

"The sensor capabilities that they'll need are multi-orbital and diverse in terms of having sensors at GEO (geosynchronous orbit) and now increasingly massive megaconstellations at LEO (low-Earth orbit)," Mastalir said. "So we're seeing all signs point to being able to target US aircraft carriers... high-value assets in the air like tankers, AWACs. This is a strategy to keep the US from intervening, and that's what their space architecture is designed to do."


SpaceX has built the machine to build the machine. But what about the machine?


STARBASE, Texas—I first visited SpaceX's launch site in South Texas a decade ago. Driving down the pocked and barren two-lane road to its sandy terminus, I found only rolling dunes, a large mound of dirt, and a few satellite dishes that talked to Dragon spacecraft as they flew overhead.

A few years later, in mid-2019, the company had moved some of that dirt and built a small launch pad. A handful of SpaceX engineers working there at the time shared some office space nearby in a tech hub building, "Stargate." The University of Texas Rio Grande Valley proudly opened this state-of-the-art technology center just weeks earlier. That summer, from Stargate's second floor, engineers looked on as the Starhopper prototype made its first two flights a couple of miles away.

Over the ensuing years, as the company began assembling its Starship rockets on site, SpaceX first erected small tents, then much larger tents, and then towering high bays in which the vehicles were stacked. Starbase grew and evolved to meet the company's needs.

All of this was merely a prelude to the end game: Starfactory. SpaceX opened this truly massive facility earlier this year. The sleek rocket factory is emblematic of the new Starbase: modern, gargantuan, spaceship-like.

To the consternation of some local residents and environmentalists, the rapid growth of Starbase has wiped out the small and eclectic community that existed here. And that brand new Stargate building that public officials were so excited about only a few years ago? SpaceX first took it over entirely and then demolished it. The tents are gone, too. For better or worse, in the name of progress, the SpaceX steamroller has rolled onward, paving all before it.

Starbase is even its own Texas city now. And if this were a medieval town, Starfactory would be the impenetrable fortress at its heart. In late May, I had a chance to go inside. The interior was super impressive, of course. Yet it could not quell some of the concerns I have about the future of SpaceX's grand plans to send a fleet of Starships into the Solar System.

Inside the fortress

The main entrance to the factory lies at its northeast corner. From there, one walks into a sleek lobby that serves as a gateway into the main, cavernous section of the building. At this corner, there are three stories above the ground floor. Each of these three higher levels contains various offices, conference rooms and, on the upper floor, a launch control center.

Large windows from here offer a breathtaking view of the Starship launch site two miles up the road. A third-floor executive conference room has carpet of a striking rusty, reddish hue—mimicking the surface of Mars, naturally. A long, black table dominates the room, with 10 seats along each side, and one at the head.

An aerial overview of the Starship production site in South Texas earlier this year. The sprawling Starfactory is in the center. Credit: SpaceX

But the real attraction of these offices is the view to the other end. Each of the upper three floors has a balcony overlooking the factory floor. From there, it's as if one stands at the edge of an ocean liner, gazing out to sea. In this case, the far wall is discernible, if only barely. Below, the factory floor is crammed with all manner of Starship parts: nose cones, grid fins, hot staging rings, and so much more. The factory emitted a steady din and hum as work proceeded on vehicles below.

The ultimate goal of this factory is to build one Starship rocket a day. This sounds utterly mad. For the entire Apollo program in the 1960s and 1970s, NASA built 15 Saturn V rockets. Over the course of more than three decades, NASA built and flew only five of its iconic Space Shuttle orbiters. SpaceX aims to build 365 of these larger vehicles per year.

Wandering around the Starfactory, however, this ambition no longer seems undoable. The factory measures about 1 million square feet. This is two times as large as SpaceX's main Falcon 9 factory in Hawthorne, California. It feels like the company could build a lot of Starships here if needed.

During one of my visits to South Texas, in early 2020 just before the onset of the COVID-19 pandemic, SpaceX was building its first Starship rockets in football field-sized tents. At the time, SpaceX founder Elon Musk opined in an interview that building the factory might well be more difficult than building the rocket.

Here's a view of SpaceX's Starship production facilities, from the east side, in late February 2020. Credit: Eric Berger

"If you want to actually make something at reasonable volume, you have to build the machine that makes the machine, which mathematically is going to be vastly more complicated than the machine itself," he said. "The thing that makes the machine is not going to be simpler than the machine. It’s going to be much more complicated, by a lot."

Five years later, standing inside Starfactory, it seems clear that SpaceX has built the machine to build the machine—or at least it's getting close.

But what happens if that machine is not ready for prime time?

A pretty bad year for Starship

SpaceX has not had a good run of things with the ambitious Starship vehicle this year. Three times, in January, March, and May, the vehicle took flight. And three times, the upper stage experienced significant problems during ascent, and the vehicle was lost on the ride up to space, or just after. These were the seventh, eighth, and ninth test flights of Starship, following three consecutive flights in 2024 during which the Starship upper stage made more or less nominal flights and controlled splashdowns in the Indian Ocean.

It's difficult to view the consecutive failures this year—not to mention the explosion of another Starship vehicle during testing in June—as anything but a major setback for the program.

There can be no question that the Starship rocket, with its unprecedentedly large first stage and potentially reusable upper stage, is the most advanced and ambitious rocket humans have ever conceived, built, and flown. The failures this year, however, have led some space industry insiders to ask whether Starship is too ambitious.

My sources at SpaceX don't believe so. They are frustrated by the run of problems this year, but they believe the fundamental design of Starship is sound and that they have a clear path to resolving the issues. The massive first stage has already been flown, landed, and re-flown. This is a huge step forward. But the sources also believe the upper stage issues can be resolved, especially with a new "Version 3" of Starship due to make its debut late this year or early in 2026.

The acid test will only come with upcoming flights. The vehicle's tenth test flight is scheduled to take place no earlier than Sunday, August 24. It's possible that SpaceX will fly one more "Version 2" Starship later this year before moving to the upgraded vehicle, with more powerful Raptor engines and lots of other changes to (hopefully) improve reliability.

SpaceX could certainly use a win. The Starship failures occur at a time when Musk has become embroiled in political controversy while feuding with the president of the United States. His actions have led some in government and private industry to question whether they should be doing business with SpaceX going forward.

It's often said in sports that winning solves a lot of problems. For SpaceX, success with Starship would solve a lot of problems.

Next steps for Starship

The failures are frustrating and publicly embarrassing. But more importantly, they are a bottleneck for a lot of critical work SpaceX needs to do for Starship to reach its considerable potential. All of the technical progress the Starship program needs to make to deploy thousands of Starlink satellites, land NASA astronauts on the Moon, and send humans to Mars remains largely on hold.

Two of the most important objectives for the next flight require the Starship vehicle to fly a nominal mission. For several flights now, SpaceX engineers have dutifully prepared Starlink satellite simulators to test a Pez-like dispenser in space. And each Starship vehicle has carried about two dozen different tile experiments as the company attempts to build a rapidly reusable heat shield to protect Starship during atmospheric reentry.

The engineers are still waiting for the results of their experiments.

In the near term, SpaceX is hyper-focused on getting Starship working and starting the deployment of large Starlink satellites that will have the potential to unlock significant amounts of revenue. But this is just the beginning of the work that needs to happen for SpaceX to turn Starship into a deep-space vehicle capable of traveling to the Moon and Mars.

These steps include:

  • Reuse: Developing a rapidly reusable heat shield and landing and re-flying Starship upper stages
  • Prop transfer: Conducting a refueling test in low-Earth orbit to demonstrate the transfer of large amounts of propellant between Starships
  • Depots: Developing and testing cryogenic propellant depots to understand heating losses over time
  • Lunar landing: Landing a Starship successfully on the Moon, which is challenging due to the height of the vehicle and uneven terrain
  • Lunar launch: Demonstrating the capability of Starship, using liquid propellant, to launch safely from the lunar surface without infrastructure there
  • Mars transit: Demonstrating the operation of Starship over months and the capability to perform a powered landing on Mars.

Each of these steps is massively challenging and at least partly a novel exercise in aerospace. There will be a lot of learning, and almost certainly some failures, as SpaceX works through these technical milestones.

Some details about the Starship propellant transfer test, a key milestone that NASA and SpaceX had hoped to complete this year but now may tackle in 2026. Credit: NASA

SpaceX prefers a test, fly, and fix approach to developing hardware. This iterative approach has served the company well, allowing it to develop rockets and spacecraft faster and for less money than its competitors. But you cannot fly and fix hardware for the milestones above without getting the upper stage of Starship flying nominally.

That's one reason why the Starship program has been so disappointing this year.

Then there are the politics

As SpaceX has struggled with Starship in 2025, its founder, Musk, has also had a turbulent run, from the presidential campaign trail to the top of political power in the world, the White House, and back out of President Trump's inner circle. Along the way, he has made political enemies, and his public favorability ratings have fallen.

Amid the fallout between Trump and Musk this spring and summer, the president ordered a review of SpaceX's contracts. Nothing happened because government officials found that most of the services SpaceX offers to NASA, the US Department of Defense, and other federal agencies are vital.

However, multiple sources have told Ars that federal officials are looking for alternatives to SpaceX and have indicated they will seek to buy launches, satellite Internet, and other services from emerging competitors if available.

Starship's troubles also come at a critical time in space policy. As part of its budget request for fiscal year 2026, the White House sought to terminate the production of NASA's Space Launch System rocket and spacecraft after the Artemis III mission. The White House has also expressed an interest in sending humans to Mars, viewing the Moon as a stepping stone to the red planet.

Although there are several options in play, the most viable hardware for both a lunar and Mars human exploration program is Starship. If it works. If it continues to have teething pains, though, that makes it easier for Congress to continue funding NASA's expensive rocket and spacecraft, as it would prefer to do.

What about Artemis and the Moon?

Starship's "lost year" also has serious implications for NASA's Artemis Moon Program. As Ars reported this week, China is now likely to land on the Moon before NASA can return. Yes, the space agency has a nominal landing date in 2027 for the Artemis III mission, but no credible space industry officials believe that date is real. (It has already slipped multiple times from 2024). Theoretically, a landing in 2028 remains feasible, but a more rational over/under date for NASA is probably somewhere in the vicinity of 2030.

SpaceX is building the lunar lander for the Artemis III mission, a modified version of Starship. There is so much we don't really know yet about this vehicle. For example, how many refuelings will it take to load a Starship with sufficient propellant to land on the Moon and take off? What will the vehicle's controls look like, and will the landings be automated?

And here's another one: How many people at SpaceX are actually working on the lunar version of Starship?

Publicly, Musk has said he doesn't worry too much about China beating the United States back to the Moon. "I think the United States should be aiming for Mars, because we've already actually been to the Moon several times," Musk said in an interview in late May. "Yeah, if China sort of equals that, I'm like, OK, sure, but that's something that America did 56 years ago."

Privately, Musk is highly critical of Artemis, saying NASA should focus on Mars. Certainly, that's the long arc of history toward which SpaceX's efforts are being bent. Although both the Moon and Mars versions of Starship require the vehicle to reach orbit and successfully refuel, there is a huge divergence in the technology and work required after that point.

It's not at all clear that the Trump administration is seriously seeking to address this issue by providing SpaceX with carrots and sticks to move the lunar lander program forward. If Artemis is not a priority for Musk, how can it be for SpaceX?

This all creates a tremendous amount of uncertainty ahead of Sunday's Starship launch. As Musk likes to say, "Excitement is guaranteed."

Success would be better.


Mammals that chose ants and termites as food almost never go back


If you were to design the strangest diet possible, eating nothing but ants and termites would probably make the shortlist. Yet over the past 66 million years, mammals across the globe have repeatedly gone down this path—not once or twice, but at least a dozen times. From anteaters and aardvarks to pangolins and aardwolves, the so-called myrmecophages (animals that feed on ants and termites) have evolved similar traits: they’ve lost most or all of their teeth, grown long sticky tongues, and learned to consume insects by the tens to hundreds of thousands each day.

A new study reveals that this extreme dietary specialization, once thought rare and mysterious, has emerged independently in mammals at least 12 times in the last 66 million years (i.e., since the Cenozoic era began). This is a striking example of convergent evolution and shows just how powerful ants and termites have been in shaping mammalian history.

“The number of distinct origins for myrmecophagy was certainly surprising, as was the discovery that their origins seem to quite neatly follow the trend of growth across ant and termite colony sizes throughout the Cenozoic,” Thomas Vida, first author of the study and a researcher at the University of Bonn, told Ars Technica.

The rise of insect-eating mammals

To figure out how often and when mammals evolved a taste for ants and termites, the study authors first had to track down which species are truly “obligate myrmecophages”—animals that rely entirely on ants and termites, with little to no other food in their diet. That meant going through nearly a century’s worth of information. “We looked through a very large number of published natural history papers, zoological texts, and conservation reports as a baseline for identification,” Vida added.

This broad dataset covered 4,099 living mammal species. The researchers then grouped these species into one of five dietary categories based on gut analyses and field observations: strict ant/termite specialists, general insect-eaters, carnivores, omnivores, and herbivores. Next, they ran several statistical models to work backward from this data and reconstruct the most likely diet for each ancestral node.
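
That "work backward" step is an ancestral-state reconstruction. The authors fit statistical models across the full 4,099-species dataset; purely as a toy illustration of the underlying idea, here is a minimal Fitch-style parsimony pass over a tiny, made-up tree, inferring candidate ancestral diets from the diets of living species at the tips.

```python
# Toy ancestral-diet reconstruction with Fitch parsimony on a tiny made-up tree.
# The study uses model-based statistical reconstructions over ~4,100 species;
# this only sketches the general "work backward from tip states" idea.

TREE = ("root",
        ("nodeA", ("pangolin", None), ("aardvark", None)),
        ("nodeB", ("bear", None), ("anteater", None)))

TIP_DIETS = {
    "pangolin": {"myrmecophage"},
    "aardvark": {"myrmecophage"},
    "bear":     {"omnivore"},
    "anteater": {"myrmecophage"},
}

def fitch(node):
    """Return the set of most-parsimonious diet states for this node."""
    name, *children = node
    if children == [None]:          # tip: diet is known from observation
        return TIP_DIETS[name]
    child_sets = [fitch(child) for child in children]
    common = set.intersection(*child_sets)
    states = common if common else set.union(*child_sets)
    print(f"{name}: {sorted(states)}")
    return states

fitch(TREE)
# nodeA: ['myrmecophage']
# nodeB: ['myrmecophage', 'omnivore']
# root: ['myrmecophage']
```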

The results showed at least 12 separate origins of obligate myrmecophagy, with instances in each of the three main mammal groups: monotremes (egg-laying mammals), marsupials, and placentals. Surprisingly, some groups, like the order Carnivora (dogs, bears, weasels), were responsible for about a quarter of all these origins, suggesting certain lineages were predisposed to make the leap.

The large diagram on the left shows that ant and termite eaters are widely distributed among mammalian groups, with the points where ant eating evolved spread across the tree. The small diagram on the right shows that, although most evolved from insect eaters, each of the major diet groups produced some ant-eating specialists. Credit: Vida, Calamari, & Barden/NJIT

Moreover, in every case, the ancestors were either insectivores or carnivores, with insect-eaters making the shift about three times more often than carnivores. The researchers also compared these timelines with the expansion of ants and termites themselves. Fossil evidence shows that during the Cretaceous (about 145–66 million years ago), these insects made up less than 1 percent of all insects on Earth.

It wasn’t until after the K–Pg extinction event, which wiped out all non-avian dinosaurs and reshaped ecosystems, that ant and termite colonies began to expand. By the Miocene epoch (~23 million years ago), they accounted for 35 percent of all insect specimens. “The increasing abundance of social insects over the last 50 million years or so led to the repeated evolution of specialized diets in mammals. We sometimes call this selective pressure,” Phillip Barden, one of the study authors and a professor of biology at New Jersey Institute of Technology, told Ars.

Once mammals switched to an ant-and-termite-only diet, they almost never went back. The elephant shrew genus Macroscelides was the sole exception, shifting to omnivory after adopting myrmecophagy during the Eocene. This suggests that such specialization can be an evolutionary one-way street, possibly because losing teeth and developing highly adapted tongues, claws, and stomachs makes it difficult to return to a generalist diet.

“We only recover a single reversal out of specialized ant- and termite-eating, which could mean a few things. One possibility is that it is exceptionally difficult to re-evolve baseline feeding features once you become heavily specialized. It could also be that betting on ants and termites tends to pay off, that is, there is little selective pressure to de-specialize given the ubiquity of social insects in many environments,” Barden explained.

Insects are more influential than we realize

By showing that ant- and termite-based diets evolved repeatedly, the study highlights the overlooked role of social insects in shaping biodiversity. “This work gives us the first real roadmap, and what really stands out is just how powerful a selective force ants and termites have been over the last 50 million years, shaping environments and literally changing the face of entire species,” Barden said.

However, according to the study authors, we still do not have a clear picture of how much of an impact insects have had on the history of life on our planet. Lots of lineages have been reshaped by organisms with outsize biomass—and today, ants and termites have a combined biomass exceeding that of all living wild mammals, giving them a massive evolutionary influence.

However, there’s also a flip side. Eight of the 12 myrmecophagous origins are represented by just a single species, meaning most of these lineages could be vulnerable if their insect food sources decline. As Barden put it, “In some ways, specializing in ants and termites paints a species into a corner. But as long as social insects dominate the world’s biomass, these mammals may have an edge, especially as climate change seems to favor species with massive colonies, like fire ants and other invasive social insects.”

For now, the study authors plan to keep exploring how ants, termites, and other social insects have shaped life over millions of years, not through controlled lab experiments, but by continuing to use nature itself as the ultimate evolutionary archive. “Finding accurate dietary information for obscure mammals can be tedious, but each piece of data adds to our understanding of how these extraordinary diets came to be,” Vida argued.

Evolution, 2025. DOI: 10.1093/evolut/qpaf121

Rupendra Brahambhatt is an experienced journalist and filmmaker. He covers science and culture news, and for the last five years, he has been actively working with some of the most innovative news agencies, magazines, and media brands operating in different parts of the globe.
