So far this year, electricity use in the US is up nearly 4 percent compared to the same period the year prior. That comes after decades of essentially flat use, a change that has been associated with a rapid expansion of data centers. And a lot of those data centers are being built to serve the boom in AI usage. Given that some of this rising demand is being met by increased coal use (as of May, coal's share of generation is up about 20 percent compared to the year prior), the environmental impact of AI is looking pretty bad.
But it's difficult to know for certain without access to the sorts of details that you'd only get by running a data center, such as how often the hardware is in use, and how often it's serving AI queries. So, while academics can test the power needs of individual AI models, it's hard to extrapolate that to real-world use cases.
By contrast, Google has all sorts of data from real-world use. As such, its release of a new analysis of AI's environmental impact is a rare opportunity to peer a tiny bit under the hood. But the new analysis suggests that energy estimates are currently a moving target: the company says its data shows the energy drain of a typical text prompt has dropped by a factor of 33 in just the past year.
One of the big questions when doing these analyses is what to include. There's obviously the energy consumed by the processors when handling a request. But there's also the energy required for the memory, storage, cooling, and more needed to support those processors. Beyond that, there's the energy used to manufacture all that hardware and build the facilities that house it. AI models also require a lot of energy during training, a fraction of which might be counted against any single request made to the model post-training.
Any analysis of energy use needs to make decisions about which of these factors to consider. For many of the ones that have been done in the past, various factors have been skipped largely because the people performing the analysis don't have access to the relevant data. They probably don't know how many processors need to be dedicated to a given task, much less the carbon emissions associated with producing them.
But Google has access to pretty much everything: the energy used to service a request, the hardware needed to do so, the cooling requirements, and more. And since it's becoming standard practice to track both Scope 2 emissions (those produced indirectly through purchased electricity) and Scope 3 emissions (those produced through a company's supply chain), Google likely has access to those figures as well.
For the new analysis, Google tracks the energy of CPUs, dedicated AI accelerators, and memory, both while actively handling queries and while idling between them. It also follows the energy and water use of the data center as a whole and knows what else is in that data center, so it can estimate the fraction that's given over to serving AI queries. It's also tracking the carbon emissions associated with the electricity supply, as well as the emissions that resulted from the production of all the hardware it's using.
Three major factors don't make the cut. One is the environmental cost of the networking capacity used to receive requests and deliver results, which will vary considerably depending on the request. The same applies to the computational load on the end-user hardware; that's going to see vast differences between someone using a gaming desktop and someone using a smartphone. The one thing that Google could have made a reasonable estimate of, but didn't, is the impact of training its models. At this point, it will clearly know the energy costs there and can probably make reasonable estimates of a trained model's useful lifetime and number of requests handled during that period. But it didn't include that in the current estimates.
To come up with typical numbers, the team that did the analysis tracked requests, and the hardware that served them, over a 24-hour period, as well as the idle time for that hardware. This gives them an energy-per-request estimate, which differs based on the model being used. For each day, they identify the median prompt and use that to calculate the environmental impact.
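The accounting described above can be sketched in a few lines. This is a simplified model, not Google's actual methodology, and every number below is made up for illustration: active and idle energy for the serving hardware, scaled by a data-center overhead factor and split across the prompts served in the window, with the median taken across days.

```python
from statistics import median

def energy_per_prompt_wh(active_wh, idle_wh, overhead, prompts):
    # Active accelerator/CPU/memory energy plus idle energy for the
    # provisioned hardware, scaled by a data-center overhead factor
    # (cooling, power conversion), divided across prompts served.
    return (active_wh + idle_wh) * overhead / prompts

# Illustrative per-day measurements (hypothetical numbers, not Google's data)
days = [
    dict(active_wh=180_000, idle_wh=40_000, overhead=1.09, prompts=1_000_000),
    dict(active_wh=200_000, idle_wh=35_000, overhead=1.09, prompts=1_100_000),
    dict(active_wh=150_000, idle_wh=50_000, overhead=1.09, prompts=900_000),
]
estimates = [energy_per_prompt_wh(**d) for d in days]
print(f"median estimate: {median(estimates):.3f} Wh/prompt")  # prints 0.240
```

The key point the sketch captures is that idle and overhead energy are charged to prompts too, which is exactly what narrower academic estimates usually have to leave out.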
Using those estimates, they find that the impact of an individual text request is pretty small. "We estimate the median Gemini Apps text prompt uses 0.24 watt-hours of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water," they conclude. To put that in context, they estimate that the energy use is similar to about nine seconds of TV viewing.
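Google's TV comparison is easy to sanity-check. Assuming a television drawing roughly 100 watts (an assumption on my part; the company doesn't specify), the arithmetic works out:

```python
PROMPT_WH = 0.24   # reported median energy per Gemini Apps text prompt
TV_WATTS = 100     # assumption: a typical TV draws roughly 100 W

# Energy (Wh) divided by power (W) gives hours; convert to seconds.
seconds_of_tv = PROMPT_WH / TV_WATTS * 3600
print(f"{seconds_of_tv:.1f} seconds of TV")  # prints 8.6, close to the ~9 s quoted
```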
The bad news is that the volume of requests is undoubtedly very high. The company has chosen to execute an AI operation with every single search request, a compute demand that simply didn't exist a couple of years ago. So, while the individual impact is small, the cumulative cost is likely to be considerable.
The good news? Just a year ago, it would have been far, far worse.
Some of this is just down to circumstances. With the boom in solar power in the US and elsewhere, it has gotten easier for Google to arrange for renewable power. As a result, the carbon emissions per unit of energy consumed saw a 1.4x reduction over the past year. But the biggest wins have been on the software side, where different approaches have led to a 33x reduction in energy consumed per prompt.
The Google team describes a number of optimizations the company has made that contribute to this. One is an approach termed Mixture-of-Experts, which involves figuring out how to only activate the portion of an AI model needed to handle specific requests, which can drop computational needs by a factor of 10 to 100. They've developed a number of compact versions of their main model, which also reduce the computational load. Data center management also plays a role, as the company can make sure that any active hardware is fully utilized, while allowing the rest to stay in a low-power state.
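The Mixture-of-Experts idea can be illustrated with a minimal top-k gating sketch. This is a toy, not Gemini's architecture: a gating score is computed per expert (here just random numbers stand in for the gating network's output), and only the top-scoring few experts run for a given token while the rest stay idle.

```python
import math
import random

def top_k_experts(gate_scores, k=2):
    # Softmax over the gating scores, then keep only the k highest-scoring
    # experts; all other expert sub-networks stay idle for this token.
    total = sum(math.exp(s) for s in gate_scores)
    probs = [math.exp(s) / total for s in gate_scores]
    return sorted(range(len(probs)), key=lambda i: -probs[i])[:k]

random.seed(0)
num_experts = 64
scores = [random.gauss(0, 1) for _ in range(num_experts)]  # stand-in gate output
active = top_k_experts(scores)
print(f"active: {sorted(active)}, {len(active) / num_experts:.1%} of experts")
```

With 2 of 64 experts active per token, the expert layers do roughly 1/32 of the dense model's work, which is how routing of this kind can cut computational needs by the order of magnitude or two the Google team describes.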
The other thing is that Google designs its own custom AI accelerators and architects the software that runs on them, allowing it to optimize both sides of the hardware/software divide to work well together. That's especially critical given that activity on the AI accelerators accounts for over half of the total energy use of a query. Google also has lots of experience running efficient data centers, and that carries over to its AI operations.
The result of all this is that Google estimates the energy consumption of a typical text query has gone down by 33x in the last year alone. That has knock-on effects: fixed costs like the carbon emissions associated with building the hardware get diluted by the fact that the hardware can handle far more queries over the course of its useful lifetime.
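The amortization effect is simple arithmetic. With entirely hypothetical numbers (the embodied emissions and throughput below are not from Google's report), serving 33x more queries on the same chip cuts the per-query share of manufacturing emissions by the same factor:

```python
EMBODIED_KG_CO2E = 1500   # assumption: embodied emissions of one accelerator
LIFETIME_YEARS = 4        # assumption: useful hardware lifetime

def per_query_embodied_g(queries_per_year):
    # Fixed manufacturing emissions spread across every query the
    # hardware serves over its lifetime, in grams of CO2e per query.
    lifetime_queries = queries_per_year * LIFETIME_YEARS
    return EMBODIED_KG_CO2E * 1000 / lifetime_queries

before = per_query_embodied_g(50_000_000)       # hypothetical old throughput
after = per_query_embodied_g(50_000_000 * 33)   # 33x more queries per chip
print(f"{before:.4f} g -> {after:.6f} g per query")  # prints 0.0075 g -> 0.000227 g
```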
Given these efficiency gains, it would have been easy for Google to simply use the results as a PR exercise; instead, the company has detailed its methodology and considerations in something that reads very much like an academic publication. It's taking that approach because the people behind this work would like to see others in the field adopt its approach. "We advocate for the widespread adoption of this or similarly comprehensive measurement frameworks to ensure that as the capabilities of AI advance, their environmental efficiency does as well," they conclude.
In a world where sports are dominated by youth and speed, some athletes in their late 30s and even 40s are not just keeping up—they are thriving.
Novak Djokovic is still outlasting opponents nearly half his age on tennis’s biggest stages. LeBron James continues to dictate the pace of NBA games, defending centers and orchestrating plays like a point guard. Allyson Felix won her 11th Olympic medal in track and field at age 35. And Tom Brady won a Super Bowl at 43, long after most NFL quarterbacks retire.
The sustained excellence of these athletes is not just due to talent or grit—it’s biology in action. Staying at the top of their game reflects a trainable convergence of brain, body, and mindset. I’m a performance scientist and a physical therapist who has spent over two decades studying how athletes train, taper, recover, and stay sharp. These insights aren’t just for high-level athletes—they hold true for anyone navigating big life changes or working to stay healthy.
Increasingly, research shows that the systems that support high performance—from motor control to stress regulation to recovery—are not fixed traits but trainable capacities. In a world of accelerating change and disruption, the ability to adapt may be the most important skill of all. So, what makes this adaptability possible—biologically, cognitively, and emotionally?
Neuroscience research shows that with repeated exposure to high-stakes situations, the brain begins to adapt. The prefrontal cortex—the region most responsible for planning, focus, and decision-making—becomes more efficient in managing attention and making decisions, even under pressure.
During stressful situations, such as facing match point in a Grand Slam final, this area of the brain can help an athlete stay composed and make smart choices—but only if it’s well trained.
In contrast, the amygdala, our brain’s threat detector, can hijack performance by triggering panic, freezing motor responses, or fueling reckless decisions. With repeated exposure to high-stakes moments, elite athletes gradually reshape this brain circuit.
They learn to tune down amygdala reactivity and keep the prefrontal cortex online, even when the pressure spikes. This refined brain circuitry enables experienced performers to maintain their emotional control.
Brain-derived neurotrophic factor, or BDNF, is a molecule that supports adapting to changes quickly. Think of it as fertilizer for the brain. It enhances neuroplasticity: the brain’s ability to rewire itself through experience and repetition. This rewiring helps athletes build and reinforce the patterns of connections between brain cells to control their emotion, manage their attention, and move with precision.
BDNF levels increase with intense physical activity, mental focus, and deliberate practice, especially when combined with recovery strategies such as sleep and deep breathing.
Elevated BDNF levels are linked to better resilience against stress and may support faster motor learning, which is the process of developing or refining movement patterns.
For example, after losing a set, Djokovic often resets by taking deep, slow breaths—not just to calm his nerves, but to pause and regain control. This conscious breathing helps him restore focus and likely quiets the stress signals in his brain.
In moments like these, higher BDNF availability likely allows him to regulate his emotions and recalibrate his motor response, helping him to return to peak performance faster than his opponent.
In essence, athletes who repeatedly train and compete in pressure-filled environments are rewiring their brain to respond more effectively to those demands. This rewiring, from repeated exposures, helps boost BDNF levels and in turn keeps the prefrontal cortex sharp and dials down the amygdala’s tendency to overreact.
This kind of biological tuning is what scientists call cognitive reserve and allostasis—the process the body uses to make changes in response to stress or environmental demands to remain stable. It helps the brain and body be flexible, not fragile.
Importantly, this adaptation isn’t exclusive to elite athletes. Studies on adults of all ages show that regular physical activity—particularly exercises that challenge both body and mind—can raise BDNF levels, improve the brain’s ability to adapt and respond to new challenges, and reduce stress reactivity.
Programs that combine aerobic movement with coordination tasks, such as dancing, complex drills, or fast-paced walking while problem-solving, have been shown to preserve skills such as focus, planning, impulse control, and emotional regulation over time.
After an intense training session or a match, you will often see athletes hopping on a bike or spending some time in the pool. These low-impact, gentle movements, known as active recovery, help tone down the nervous system gradually.
Outside of active recovery, sleep is where the real reset and repair happen. Sleep aids in learning and strengthens the neural connections challenged during training and competition.
Over time, this convergence creates a trainable loop between the brain and body that is better equipped to adapt, recover, and perform.
While the spotlight may shine on sporting arenas, you don’t need to be a pro athlete to train these same skills.
The ability to perform under pressure is a result of continuing adaptation. Whether you’re navigating a career pivot, caring for family members, or simply striving to stay mentally sharp as the world changes, the principles are the same: Expose yourself to challenges, regulate stress, and recover deliberately.
While speed, agility, and power may decline with age, some sport-specific skills, such as anticipation, decision-making, and strategic awareness, actually improve. Athletes with years of experience develop faster mental models of how a play will unfold, which allows them to make better and faster choices with minimal effort. This efficiency, the result of years of reinforcing neural circuits, doesn't immediately vanish with age. This is one reason experienced athletes often excel even when they are well past their physical prime.
Physical activity, especially dynamic and coordinated movement, boosts the brain’s capacity to adapt. So does learning new skills, practicing mindfulness, and even rehearsing performance under pressure. In daily life, this might be a surgeon practicing a critical procedure in simulation, a teacher preparing for a tricky parent meeting, or a speaker practicing a high-stakes presentation to stay calm and composed when it counts. These aren’t elite rituals—they’re accessible strategies for building resilience, motor efficiency, and emotional control.
Humans are built to adapt—with the right strategies, you can sustain excellence at any stage of life.
Fiddy Davis Jaihind Jothikaran is an associate professor of kinesiology at Hope College.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
US defense officials have long worried that China's Guowang satellite network might give the Chinese military access to the kind of ubiquitous connectivity US forces now enjoy with SpaceX's Starlink network.
It turns out the Guowang constellation could offer a lot more than a homemade Chinese alternative to Starlink's high-speed consumer-grade broadband service. China has disclosed little information about the Guowang network, but there's mounting evidence that the satellites may provide Chinese military forces a tactical edge in any future armed conflict in the Western Pacific.
The megaconstellation is managed by a secretive company called China SatNet, which was established by the Chinese government in 2021. SatNet has released little information since its formation, and the group doesn't have a website. Chinese officials have not detailed any of the satellites' capabilities or signaled any intention to market the services to consumers.
Another Chinese satellite megaconstellation in the works, called Qianfan, appears to be a closer analog to SpaceX's commercial Starlink service. Qianfan satellites are flat in shape, making them easier to pack onto the tops of rockets before launch. This is a design approach pioneered by SpaceX with Starlink. The backers of the Qianfan network began launching the first of up to 1,300 broadband satellites last year.
Unlike Starlink, the Guowang network consists of satellites manufactured by multiple companies, and they launch on several types of rockets. On its face, the architecture taking shape in low-Earth orbit appears to be more akin to SpaceX's military-grade Starshield satellites and the Space Development Agency's future tranches of data relay and missile-tracking satellites.
Guowang, or "national network," may also bear similarities to something the US military calls MILNET. Proposed in the Trump administration's budget request for next year, MILNET will be a partnership between the Space Force and the National Reconnaissance Office (NRO). One of the design alternatives under review at the Pentagon is to use SpaceX's Starshield satellites to create a "hybrid mesh network" that the military can rely on for a wide range of applications.
In recent weeks, China's pace of launching Guowang satellites has approached that of Starlink. China has launched five groups of Guowang satellites since July 27, while SpaceX has launched six Starlink missions using its Falcon 9 rockets over the same period.
A single Falcon 9 launch can haul up to 28 Starlink satellites into low-Earth orbit, while China's rockets have launched between five and 10 Guowang satellites per flight to altitudes three to four times higher. China has now placed 72 Guowang satellites into orbit since launches began last December, a small fraction of the 12,992-satellite fleet China has outlined in filings with the International Telecommunication Union.
The constellation described in China's ITU filings will include one group of Guowang satellites between 500 and 600 kilometers (311 and 373 miles), around the same altitude as Starlink. Another shell of Guowang satellites will fly roughly 1,145 kilometers (711 miles) above the Earth. So far, all of the Guowang satellites China has launched since last year appear to be heading for the higher shell.
This higher altitude limits the number of Guowang satellites China's stable of launch vehicles can carry on each mission. On the other hand, fewer satellites are required for global coverage from the higher orbit.
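The coverage tradeoff follows from simple geometry. A rough sketch, using the spherical-cap area visible from a satellite down to the geometric horizon (real constellations require a minimum elevation angle, which shrinks the footprint, but the relative comparison holds):

```python
R_EARTH_KM = 6371.0

def coverage_fraction(altitude_km):
    # Fraction of Earth's surface within line of sight of one satellite:
    # the spherical cap bounded by the horizon, (1 - cos(theta)) / 2 with
    # cos(theta) = R / (R + h), which simplifies to h / (2 * (R + h)).
    return altitude_km / (2 * (R_EARTH_KM + altitude_km))

low_shell = coverage_fraction(550)    # Starlink-like altitude
high_shell = coverage_fraction(1145)  # higher Guowang shell
print(f"550 km: {low_shell:.1%}, 1145 km: {high_shell:.1%}")   # 4.0% vs 7.6%
print(f"{high_shell / low_shell:.1f}x more area per satellite")  # 1.9x
```

So each satellite in the higher shell sees nearly twice the surface area, roughly halving the count needed for continuous coverage, at the cost of longer signal paths and more energy per launch.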
SpaceX has already launched nearly 200 of its own Starshield satellites for the NRO to use for intelligence, surveillance, and reconnaissance missions. The next step, whether it's the SDA constellation, MILNET, or something else, will seek to incorporate hundreds or thousands of low-Earth orbit satellites into real-time combat operations—things like tracking moving targets on the ground and in the air, targeting enemy vehicles, and relaying commands between allied forces. The Trump administration's Golden Dome missile defense shield aims to extend real-time targeting to objects in the space domain.
In military jargon, the interconnected links that detect, track, target, and strike a target are called a kill chain, or kill web. This is what US Space Force officials are pushing to develop with the Space Development Agency, MILNET, and other future space-based networks.
So, where is the US military in building out this kill chain? The military has long had the ability to detect and track an adversary's activities from space. Spy satellites have orbited the Earth since the dawn of the Space Age.
Much of the rest of the kill chain—like targeting and striking—remains forward work for the Defense Department. Many of the Pentagon's existing capabilities are classified, but simply put, the multibillion-dollar satellite constellations the Space Force is building just for these purposes still haven't made it to the launch pad. In some cases, they haven't made it out of the lab.
The Space Development Agency is supposed to begin launching its first generation of more than 150 satellites later this year. These will put the Pentagon in a position to detect smaller, fainter ballistic and hypersonic missiles and provide targeting data for allied interceptors on the ground or at sea.
Space Force officials envision a network of satellites that can essentially control a terrestrial battlefield from orbit. The way future-minded commanders tell it, a fleet of thousands of satellites fitted with exquisite sensors and machine learning will first detect a moving target, whether it's a land vehicle, aircraft, naval ship, or missile. Then, that spacecraft will transmit targeting data via a laser link to another satellite that can relay the information to a shooter on Earth.
US officials believe Guowang is a step toward integrating satellites into China's own kill web. It might be easier for them to dismiss Guowang if it were simply a Chinese version of Starlink, but open source information suggests it's something more. Perhaps Guowang is more akin to megaconstellations being developed and deployed for the US Space Force and the National Reconnaissance Office.
If this is the case, China could have a head start on completing all the links for a celestial kill chain. The NRO's Starshield satellites in space today are presumably focused on collecting intelligence. The Space Force's megaconstellation of missile tracking, data relay, and command and control satellites is not yet in orbit.
Chinese media reports suggest the Guowang satellites could accommodate a range of instrumentation, including broadband communications payloads, laser communications terminals, synthetic aperture radars, and optical remote sensing payloads. This sounds a lot like a mix of SpaceX and the NRO's Starshield fleet, the Space Development Agency's future constellation, and the proposed MILNET program.
In testimony before a Senate committee in June, the top general in the US Space Force said it is "worrisome" that China is moving in this direction. Gen. Chance Saltzman, the Chief of Space Operations, used China's emergence as an argument for developing space weapons, euphemistically called "counter-space capabilities."
"The space-enabled targeting that they've been able to achieve from space has increased the range and accuracy of their weapon systems to the point where getting anywhere close enough [to China] in the Western Pacific to be able to achieve military objectives is in jeopardy if we can’t deny, disrupt, degrade that... capability," Saltzman said. "That’s the most pressing challenge, and that means the Space Force needs the space control counter-space capabilities in order to deny that kill web."
The US military's push to migrate many wartime responsibilities to space is not without controversy. The Trump administration wants to cancel purchases of new E-7 jets designed to serve as nerve centers in the sky, where Air Force operators receive signals about what's happening in the air, on the ground, and in the water for hundreds of miles around. Instead, much of this responsibility would be transferred to satellites.
Some retired military officials, along with some lawmakers, argue against canceling the E-7. They say there's too little confidence in when satellites will be ready to take over. If the Air Force goes ahead with the plan to cancel the E-7, the service intends to bridge the gap by extending the life of a fleet of Cold War-era E-3 Sentry airplanes, commonly known as AWACS (Airborne Warning and Control System).
But the high ground of space offers notable benefits. First, a proliferated network of satellites has global reach, and airplanes don't. Second, satellites could do the job on their own, with some help from artificial intelligence and edge computing. This would remove humans from the line of fire. And finally, using a large number of satellites is inherently beneficial because it means an attack on one or several satellites won't degrade US military capabilities.
Brig. Gen. Anthony Mastalir, commander of US Space Forces in the Indo-Pacific region, told Ars last year that US officials are watching to see how China integrates satellite networks like Guowang into military exercises.
"What I find interesting is China continues to copy the US playbook," Mastalir said. "So as you look at the success that the United States has had with proliferated architectures, immediately now we see China building their own proliferated architecture, not just the transport layer and the comm layer, but the sensor layer as well. You look at their pursuit of reusability in terms of increasing their launch capacity, which is currently probably one of their shortfalls. They have plans for a quicker launch tempo."
China hasn't recovered or reused an orbital-class booster yet, but several Chinese companies are working on it. SpaceX, meanwhile, continues to recycle its fleet of Falcon 9 boosters while simultaneously developing a massive super-heavy-lift rocket and churning out dozens of Starlink and Starshield satellites every week.
China doesn't have its own version of SpaceX. In China, it's taken numerous commercial and government-backed enterprises to reach a launch cadence that, so far this year, is a little less than half that of SpaceX. But the flurry of Guowang launches in the last few weeks shows that China's satellite and rocket factories are picking up the pace.
Mastalir said China's actions in the South China Sea, where it has laid claim to disputed islands near Taiwan and the Philippines, could extend farther from Chinese shores with the help of space-based military capabilities.
"Their specific goals are to be able to track and target US high-value assets at the time and place of their choosing," he said. "That has started with an A2AD, an Anti-Access Area Denial strategy, which is extended to the first island chain and now the second island chain, and eventually all the way to the west coast of California."
"The sensor capabilities that they'll need are multi-orbital and diverse in terms of having sensors at GEO (geosynchronous orbit) and now increasingly massive megaconstellations at LEO (low-Earth orbit)," Mastalir said. "So we're seeing all signs point to being able to target US aircraft carriers... high-value assets in the air like tankers, AWACs. This is a strategy to keep the US from intervening, and that's what their space architecture is designed to do."