Pragmatic idealist. Worked on Ubuntu Phone. Inkscape co-founder. Probably human.
1513 stories · 12 followers

“So much more menacing”: Formula E’s new Gen4 car breaks cover


Formula E officially revealed its next electric racing car today. At first glance, the Gen4 machine looks similar to machinery of seasons past, but looks are deceiving—it’s “so much more menacing,” according to Formula E CEO Jeff Dodds. The new car is not only longer and wider, it’s far more powerful. The wings and bodywork now generate meaningful aerodynamic downforce. There will be a new tire supplier as Bridgestone returns to single-seat racing. The car is even completely recyclable.

I’m not sure that everyone who attended a Formula E race in its first season would have bet on the sport’s continued existence more than a decade down the line. When the cars took their green flag for the first time in Beijing in 2014, as many people derided it for being too slow or for the mid-race car swaps as praised it for trying something new in the world of motorsport.

Despite that, the racing was mostly entertaining, and it got better with the introduction of the Gen2 car, which made car swapping a thing of the past. Gen3 added more power, then temporary all-wheel drive with the advent of the Gen3 Evo. That car will continue to race in season 12, which kicks off in Brazil on December 6 and ends in mid-August in London. When season 13 picks up in late 2026, we might see a pretty different kind of Formula E racing.

A partially disassembled Formula E chassis on a stand. The Halo head protection will be more necessary than ever, given the higher speeds of the new car. Credit: Formula E

“It feels like a real moment for us,” said Dodds. The new car will generate 603 hp (450 kW) in race mode, a 50 percent jump compared to the Gen3 Evo. That goes up to 804 hp (600 kW) in attack mode. For context, next year’s F1 cars will generate more power, but only when their batteries are fully charged; if the battery is depleted, that leaves just a 536 hp (400 kW) V6.

Acceleration should be extremely violent thanks to permanent AWD—the first for any single-seater in FIA competition, at least in the last few decades. Top speed will be close to double that of the original race car, topping out at 210 mph (337 km/h). Now you can see why the sport decided that aerodynamic grip would be a useful addition.

In fact, there will be two different bodywork configurations, one for high downforce and the other with less. But that doesn’t mean Formula E teams will run out and build wind tunnels, like their F1 counterparts. “There’s significant gains that can be made out of software improvements, efficiency improvements, powertrain developments,” said Dodds, so there’s no incentive to spend lots of money on aero development that would only add fractions of a second.

The biggest opportunity for finding performance improvements may be with traction control and antilock braking systems. Formula E wants its technology to be road-relevant, so such driver aids will be unlimited in the Gen4 era. But efficiency will remain of utmost importance; the cars will still have to regenerate 40 percent of the energy they need to finish the race, as the 55 kWh battery is not sufficient to go flat-out to the end. Happily for the drivers, the new car can regenerate at up to 700 kW under braking.
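As a quick sanity check on those figures, here is a back-of-the-envelope sketch. It assumes "40 percent of the energy they need" means regeneration supplies 40 percent of total race energy, with the 55 kWh pack supplying the rest; that reading is my interpretation of the article's numbers, not an official Formula E specification.

```python
# Hypothetical reading of the Gen4 energy figures: the pack supplies 60%
# of total race energy, and regeneration supplies the remaining 40%.
BATTERY_KWH = 55.0
REGEN_FRACTION = 0.40

# Total energy used over a race distance under that assumption.
total_race_energy_kwh = BATTERY_KWH / (1.0 - REGEN_FRACTION)

# Energy that must come back through regenerative braking.
regenerated_kwh = total_race_energy_kwh * REGEN_FRACTION

print(f"Total race energy: {total_race_energy_kwh:.1f} kWh")   # 91.7 kWh
print(f"Recovered via regen: {regenerated_kwh:.1f} kWh")       # 36.7 kWh
```

Under that assumption, the cars would consume roughly 92 kWh per race and recover about 37 kWh of it under braking, which is why the 700 kW regen capability matters.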

A grainy shot of the Gen4 car in testing. Credit: Formula E

Finally, the car’s end of life has been considered. The race car is entirely recyclable, Formula E says, and it already contains 20 percent recycled content.

So far, the Gen4 car has been put through its paces for more than 5,000 miles (8,000 km), more than the mileage of an entire Formula E season, including testing. Teams have now begun receiving their chassis and are getting to know them ahead of racing them in season 13, all while gearing up to start season 12 next month.

What we won’t know until season 13 gets underway is how the Gen4 era will change the races. With bigger, faster cars, not every Formula E circuit will still be suitable (London’s very tight ExCeL track, for example), but with a continued focus on making efficiency count, it’s quite likely we’ll continue to see the same close pack racing as before.

tedgould · 7 hours ago · Texas, USA

Would Elon Musk Work Harder for $1 Trillion Than $1 Billion?

Economists and psychologists say that compensation may not provide as powerful an incentive as is often assumed.

tedgould · 7 hours ago · Texas, USA

Google plans secret AI military outpost on tiny island overrun by crabs


On Wednesday, Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia’s military. The previously undisclosed project will reportedly position advanced AI infrastructure a mere 220 miles south of Indonesia at a location military strategists consider critical for monitoring Chinese naval activity.

Aside from its strategic military position, the island is famous for its massive annual crab migration, in which over 100 million red crabs make their way across the island to spawn in the ocean. That’s notable because the tech giant has applied for environmental approvals to build a subsea cable connecting the 135-square-kilometer island to Darwin, where US Marines are stationed for six months each year.

The project follows a three-year cloud agreement Google signed with Australia’s military in July 2025, but many details about the new facility’s size, cost, and specific capabilities remain “secret,” according to Reuters. Both Google and Australia’s Department of Defense declined to comment when contacted by the news agency.

Sir David Attenborough examines the great Christmas Island red crab migration.

Bryan Clark, a former US Navy strategist who ran recent war games featuring Christmas Island, told Reuters that the planned facility would enable AI-powered military command and control. Recent military exercises involving Australian, US, and Japanese forces show Christmas Island’s value as a forward defense position for launching uncrewed weapons systems. The island’s location allows the monitoring of traffic through the Sunda, Lombok, and Malacca straits, which are key waterways for global shipping and submarine movements.

Christmas Island has reportedly struggled with poor telecommunications and limited economic opportunities in the past, but some of the island’s 1,600 human residents are cautiously optimistic about the project.

Christmas Island Shire President Steve Pereira told Reuters that the council is examining community impacts before approving construction. “There is support for it, providing this data center actually does put back into the community with infrastructure, employment, and adding economic value to the island,” Pereira said.

That’s great, but what about the crabs?

Christmas Island’s annual crab migration is a natural phenomenon that Sir David Attenborough reportedly once described as one of his greatest TV moments when he visited the site in 1990.

Every year, millions of crabs emerge from the forest and swarm across roads, streams, rocks, and beaches to reach the ocean, where each female can produce up to 100,000 eggs. The tiny baby crabs that survive take about nine days to march back inland to the safety of the plateau.

While Google is seeking environmental approvals for its subsea cables, the timing could prove delicate for Christmas Island’s most famous residents. According to Parks Australia, the island’s annual red crab migration has already begun for 2025, with a major spawning event expected in just a few weeks, around November 15–16.

During peak migration times, sections of roads close at short notice as crabs move between forest and sea, and the island has built special crab bridges over roads to protect the migrating masses.

Parks Australia notes that while the migration happens annually, few baby crabs survive the journey from sea to forest most years, as they’re often eaten by fish, manta rays, and whale sharks. The successful migrations that occur only once or twice per decade (when large numbers of babies actually survive) are critical for maintaining the island’s red crab population.

How Google’s facility might coexist with 100 million marching crustaceans remains to be seen. But judging by the size of the event, it seems clear that it’s the crabs’ world, and we’re just living in it.

tedgould · 1 day ago · Texas, USA

Nick Fuentes’s Rise Puts MAGA Movement in a ‘Time of Choosing’

After Mr. Fuentes’s interview with Tucker Carlson, Republicans are considering just how far his views are from the nationalism embraced by President Trump’s followers.

tedgould · 2 days ago · Texas, USA

Adobe's new plan to justify its subscriptions: be a one-stop AI shop

Photo: Mitchell Clark

Disclosure: DPReview attended Adobe Max, with Adobe covering travel and lodging expenses.

The subscription payment model is a tough one; customers have made it clear that they're fatigued by having to pay for everything every month, and companies have to continuously justify why their software shouldn't just be a one-time payment.

It's an argument we've seen time and time again here at DPReview almost any time Adobe's Creative Cloud comes up, with commenters bemoaning the lost days of simply being able to buy Photoshop once (at least, until the next version came out in a few years).

In the age of generative AI, Adobe seems to have found a new answer: being a one-stop shop for AI services that would typically require separate subscriptions. Partner Models in particular have come up again and again at this year's Adobe Max conference, from keynotes to product demos. And while AI will almost certainly have terrifying implications for society at large and the art of photography in particular, I find myself coming away strangely optimistic for the future of the artform, at least as a hobby.

In the age of generative AI, Adobe wants to be a one-stop shop for AI services

Let's lay some groundwork quickly for those who haven't been following along. This week, Adobe announced and released several new features for Photoshop and Lightroom, programs that many photographers consider essential.

As usual, most of it revolved around AI: there's a chatbot coming to Photoshop that you can ask to make certain edits and complete tasks for you, the Generative Remove tool that lets you erase unwanted distractions is now better, and you can "Harmonize" foreground and background layers to turn compositing into a single-click process.

AI from partners, and Adobe

Adobe's Partner Models slide at Adobe Max. Photo: Mitchell Clark

The biggest change, though, is the introduction of Partner Models. Up until now, features like Generative Fill, which lets you add AI-generated elements to your images, and AI Upscale relied on Adobe's in-house Firefly models. And while you can still use those, Adobe's now letting you use other models too, such as Google's goofily named Nano Banana image generator and Topaz Labs' increasingly popular upscale, denoise and sharpen models.

Rather than relying on separate paid subscriptions and apps for each of those services, it all happens within Photoshop using AI credits that are included in your Creative Cloud plan (provided you've chosen the right one).

Put another way, Adobe is mediating your relationship with other AI vendors. It doesn't want you to view them as separate services that you have to manage depending on what tasks you have this month, but as tools you can access within its apps that – importantly – you don't have to pay for separately.

The company laid the groundwork for this change in advance, reworking its Creative Cloud subscriptions earlier this year so that plans now center on how many AI credits are included. In retrospect, it's obvious that this was vital if it wanted to let its users access otherwise expensive AI models without needing a separate subscription.*

What's the impact?

This could be a sign of profound changes to come for photographers. Not because I think the future of Creative Cloud as a subscription hinges on whether this gambit works. Realistically, that battle is over; it seems like most people are willing to pay the rent, and, realistically, there's probably a lot of overlap between the anti-subscription and anti-AI crowds. (I say this with love.) No, it could be something much deeper.

While many of us hobbyists like to imagine that being a professional photographer would let us pursue all our artistic ambitions in interesting locales, the reality is that the largest market for paid photography is less glamorous commercial work: capturing images to be used in advertisements and other collateral by corporations.

Try as they might, companies have never been able to fully extract the photographer from that equation

But try as they might, companies have never been able to fully extract the photographer from that equation; there's still a human who has to hold the camera and make what are ultimately creative decisions. Generative AI may finally be the thing that lets them do that. At the very least, there's a good chance that human photographers will become less and less important in the creative process. The photo doesn't quite match the senior VP of marketing's vision? They can fire up Photoshop and have generative AI "fix" it with a simple prompt.

Adobe's demo of making a model change which way they're facing. They pitch it as being at the behest of the model, but that doesn't strike me as the most likely scenario.

To be clear, this isn't a hypothetical future; during its keynote, Adobe showed an example of using the Generative Fill tool to change which direction a model was looking. Higher-ups could always mandate changes, but the barriers to them doing so have never been so low; before, they would've had to weigh the costs of dragging everyone back into the studio. Now, all it takes is a couple of clicks and some AI credits. And with tools like Firefly and Express, Adobe's trying to make it so you don't even have to know which model works best for which purposes.

Our AI, your voice

Custom models are Adobe's solution to living in what it calls a "content-first" world.
Image: Adobe

It goes even further. Adobe also introduced something called Custom Models, which lets you feed your existing work into its Firefly AI and train it to produce images in a similar style. There's also a super-charged version for corporations that will let them dump their entire intellectual property into it, generating on-brand content (yuck) without the need for any artistic input. The work of all the creatives that have worked with the company becomes grist for the ever-accelerating content mill.

Okay, so what about the part where I said I don't think it's the apocalypse? Well, for those of us who do photography as a hobby, not a job (which I suspect is actually most of us), this approach could be helpful, especially if AI tools are only a very occasional part of how we work with our images.

Take Topaz's Gigapixel upscaler, something that gets recommended relatively frequently in our forums and comments. It's not something I'd personally spend $12 a month on, but it's something I'd sometimes use to touch up older photos if I had access to it. If it's just included in my Creative Cloud subscription, I can do so without really having to think about it.

The drive to add more and more AI features could also result in more features that are genuinely useful to photographers. Work that went into features like the cloud-based remove tool could inform tools like Lightroom's Assisted Culling, which has to recognize out-of-focus eyes and missed exposures.

Lightroom has its fair share of AI features, but largely remains a bastion for people who care about photography.

Cloud processing is making it possible to search your Lightroom catalogue using natural language, rather than having to rely on tags that you've manually added. And while Adobe views the AI Photoshop assistant more as a way to automate repetitive tasks, it could be a powerful tool in helping people learn a piece of increasingly complicated software.

There are clearly still lots of people at Adobe who recognize that photography can be a passion, not just a means to an end, and who are finding ways for AI to enhance what humans do, not replace it. And, at least for now, they still seem to have the space and resources to do that work.

Adobe is building tools for people who don't care to learn the craft they're practicing

However, that work is being showcased alongside the latest innovations in placing business needs over human ones, and tools built for people who don't care to learn the craft they're practicing. See the Firefly video editor, for people who want video edited but don't want to edit it, and Photoshop AI assistant for people who want things photoshopped but don't want to Photoshop it.

At the end of it all, it's hard to say what vision will win out, or what balance will be struck. Certainly, the latter seems to be the one being sold the hardest here at Max, but maybe that's just because it's not as prima facie enticing to an audience that still includes a lot of creative people. I'm not sure who's buying that vision of the future, and I'm honestly a little scared to find out. But I do think that it'll come with a lot of side benefits for photographers, intended and not.

* It also likely represents some big deals between Adobe and other AI companies, which doesn't help assuage my concerns about how bubbly the map of the AI economy looks one bit.

tedgould · 3 days ago · Texas, USA

Every piece of gear a conflict photographer carries (and why)


War photography is incredibly demanding, requiring superb technical skills, a finely tuned kit and the ability to adapt and survive in harsh environments. Photojournalist Jonathan Alpeyrie knows this all too well, having spent more than 20 years covering major conflicts across the globe, including those in Ukraine, Iraq, Syria, Gaza and the international drug trade. In a recent video, Alpeyrie walked through his kit for assignments, while also sharing his experiences and insight into being a conflict photographer.

Alpeyrie said that he has been packing the same way for over 20 years, and his kit is straightforward: a camera, flak jacket, phone, bag and computer. These days, his gear includes the Canon EOS R camera, which he said he likes because it's discreet and allows him to look more like a tourist than anything else.

On the lens side, Alpeyrie recommends not skimping on quality. "The lens is where you want to spend your money. The body you can pick and choose," he explains in the video. His current lens of choice is Canon's RF 50mm F1.2L USM. He says the 50mm is the only lens he uses right now, in part because the wide aperture is ideal for working in dark situations. It's also smaller than other lenses, which is helpful when you're taking cover from artillery fire. He says he has had larger lenses, such as 200mm or 300mm, break in these situations.

Beyond gear choices, Alpeyrie shares lots of valuable insights in the video. He touches on how he protects his camera and memory cards in dangerous places, the importance of knowing your camera and how to use manual settings, tips on framing and composition, and so much more. It's well worth a watch all the way through.

tedgould · 3 days ago · Texas, USA