
What’s it like to compete in the longest US off-road rally with no GPS?


I’ve been involved with the Rebelle Rally since its inception in 2016, either as a competitor or live show host, and over the past 10 years, I’ve seen it evolve from a scrappy rally with big dreams to the world-class event that it is today.

In a nutshell, the Rebelle Rally is the longest competitive off-road rally in the United States, covering over 2,000 kilometers, and it just happens to be for women. Over eight days, teams of two must plot coordinates on a map, figure out their route, and find multiple checkpoints—both marked and unmarked—with no GPS, cell phones, or chase crews. It is not a race for speed but rather a rally for navigational accuracy over some of the toughest terrain California and Nevada have to offer. There are two classes: 4×4, for vehicles like the Jeep Wrangler and Ford Bronco, and X-Cross, for crossovers like the Honda Passport and BMW X5. Heavy modifications aren’t needed, and many teams compete for the coveted Bone Stock award.

For this 10th anniversary, I got back behind the wheel of a 2025 Subaru Crosstrek Wilderness as a driver, with Kendra Miller as my navigator, to defend my multiple podium finishes and stage wins and get reacquainted with the technology, or lack thereof, that makes this multi-day competition so special.

Two women rally drivers stand in front of their car
Emme Hall (R), driver, and Kendra Miller (L), navigator, before the start of the 2025 Rebelle Rally. Credit: Ernesto Araiza
hands hold a compass on a map
Technology powers the Rebelle Rally, except for when it comes to the competitors' navigation. Credit: Nicole Dreon
Does this rally actually run on tater tots? Credit: Nicole Dreon

High-tech rally

In the morning, as Kendra uses a scale ruler to plot 20-plus coordinates on the map of the day, a laborious task that requires intense concentration, I have time to marvel at base camp a bit. We climb out of our snug sleeping bags and tents in the pitch black of 5 am, but the main tent is brighter than ever thanks to Renewable Innovations and its mobile microgrid.

This system combines a solar array with a hydrogen fuel cell system for up to 750 kWh of power. In the early morning, the multiple batteries in both systems power the bright lights that the navigators need to see their maps, and the Starlink units send the commentary show to YouTube and Facebook Live. Competitors and staff can take a hot shower, the kitchen fries up the morning’s tater tots—seriously, they are the best—and the day’s drivers’ meeting gets started on the PA system. We’re 100 miles from nowhere, and it feels like home.

The microgrid can even integrate other fuel cell systems. For our awards ceremony, Renewable Innovations fed some of its hydrogen from the special tanker developed by Quantum Fuel Systems into Toyota’s TRD Fuel Cell Generator Tundra to power the jumbo screens, microphones, and lights. The Tundra is a pretty cool package. Outfitted with Fox 3.0 remote reservoir shocks and 35-inch Nitto tires to get far afield, it can produce 80 kW of power, store 36 kWh of juice, and deliver all those electrons as three-phase, industrial-grade power. We don’t need no stinkin’ generators!

A mobile solar array and storage system in the desert
Mobile solar arrays and battery storage have replaced noisy diesel generators for camp power. Credit: Regine Trias
Tents and solar panels
Rebelle Rally HQ on day 3. Credit: Mayfield Media
A closeup of a fuel cell in a pickup bed
The hydrogen fuel cell generator in Toyota's Hydrogen Solutions pickup truck. Credit: Rebellation/Paolo Baraldi

Although competitors are using nothing but a map and compass to find their way through the desert, the staff of the Rebelle Rally knows where each of us is thanks to the Yellow Brick trackers and the Iridium satellite system. Think of Iridium as the OG Starlink. This company had satellites in low-Earth orbit when Elon Musk still had his natural hairline—1998 to be exact.

We have two Yellow Brick units. One is attached to the outside of the Crosstrek and allows the Rally to know where we are at all times. This information is also used in the live tracking system so fans can follow our progress on the Rebelle Rally website. When we arrive at a checkpoint, I press the send button on a different Yellow Brick unit that we keep inside the car. This sends a signal to the Iridium network, which then does two things. It updates our scoring, and it gives us our latitude and longitude. Each satellite can talk to up to four others in the sky, so there is plenty of redundancy without much latency. I get a four-second countdown after I push the button to ensure that I really want to send the signal, but the information goes through in less than a second once it is sent.

Oh, and don’t go thinking we can just press our trackers willy-nilly to get our coordinates so we know where we are. If I press the button and I’m nowhere near a checkpoint, I get a wide-miss penalty. That button is high-stakes.

A brightly decorated Subaru Crosstrek Wilderness rally car
No fancy differentials, just a bit of a lift and the standard CVT. Credit: Ernesto Araiza
hands plot a map
The Rebelle is a test of accurate navigation over more than 1,200 miles as much as it is driving. Credit: Paolo Baraldi
One of the two Yellow Brick trackers. Woe betide you if you use it when you shouldn't. Credit: Nicole Dreon

Low-tech competition

We Rebelles lock away our phones for the duration of the rally, and any GPS capability in the car is disabled, either by pulling the fuse or antenna or by physically covering up the center screen. Lucky for me, the Crosstrek Wilderness doesn’t have any native navigation system, so I can keep the full screen visible. Good thing, too, as I need it to disable the traction control and parking sensors. Trust me, nothing is more annoying than having a car suddenly brake on its own when all you’re trying to do is reverse over a bush so you can clear an obstacle.

Aside from sensors, the Subie is free from any fancy-pants technology. There is no adaptive suspension or ride-height adjustability, and the car uses steel springs, not airbags. X-Mode can modify the traction control for various situations, but it only works at slower speeds, and frankly, we only need it a few times. I turn it on while descending a steep hill with lots of loose rocks to take advantage of hill-descent control and again when clambering up a rocky section where one wheel gets off the ground a bit.

The hardest driving section involves the dunes of Glamis, California. Here, I have to get the Crosstrek through soft sand where deep holes can hide over every crest. Momentum is key, so I keep my foot planted on the throttle and use the paddle shifters to keep the continuously variable transmission in a section of the power band with high revs. It works, and we only get stuck once. We throw our Maxtrax recovery boards under the BFGoodrich KO2 tires, and we’re out in five minutes.

Two women rally drivers confer outside their car
Day 6, not far from the end. Credit: Regine Trias
two women rally drivers sit on the hood of their car at the end of the race
Third in class, with a stage win? Not bad at all. Credit: Ernesto Araiza

Our only nod to GPS comes from our external odometers. Kendra and I have two—a Monit system and a Terratrip system—just in case one fails. Both have a GPS antenna but use the technology only to give us precise distance. This is crucial when trying to nail an unmarked checkpoint to within 25 meters. Other units use a wheel probe or plug into the car’s CAN bus, but those are difficult to install and finicky to calibrate. The ones we have are much less stressful.
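The rally odometers' firmware isn't public, but the core idea (using GPS fixes only to accumulate distance, never to reveal position) is easy to sketch. A minimal Python version, with class and function names of my own invention, sums great-circle distances between successive fixes using the haversine formula:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

class TripOdometer:
    """Accumulates distance from successive GPS fixes; position itself is discarded."""

    def __init__(self):
        self.last_fix = None
        self.total_m = 0.0

    def update(self, lat, lon):
        """Feed in a new fix; returns total distance traveled so far in meters."""
        if self.last_fix is not None:
            self.total_m += haversine_m(*self.last_fix, lat, lon)
        self.last_fix = (lat, lon)
        return self.total_m
```

In principle, summing many short legs like this resolves distance well within a 25-meter checkpoint window, with none of the calibration fuss of a wheel probe whose readings drift with tire wear and pressure.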

The simple Subie triumphs

While other vehicles with fancy electronic rear lockers, forward-facing cameras, and adaptive suspension completed the Rally, my little Subie scraped her way to a third-place finish and a stage win for the X-Cross class. Yeah, she had a little help, as I added a 2-inch lift and swapped out wheels and tires, but the little analog car that could made a great showing. First place was earned by Carey Lando and Andrea Shaffer in the new Subaru Forester Hybrid, and a BMW X5 helmed by Rebecca Donaghe and Rebecca Dalski took second place.

Along the way, we nailed some unmarked checkpoints within 10 meters, wide-missed one or two, and got to celebrate our success or mourn our mistakes almost instantly courtesy of Iridium and our Yellow Brick tracker. I found the time to take one shower and was treated to hot water thanks to the power provided by Renewable Innovations. We might have used just old-school navigation tools to get us down the course, but the Rebelle Rally employed high-tech solutions to make sure we were safe, accurately scored, and had all the tater tots we could eat.

tedgould · 27 minutes ago · Texas, USA

A Force Behind Timothée Chalamet, Charli XCX & Other Megastars of the Moment

Timothée Chalamet, Charli XCX and Billie Eilish are among those who trust Aidan Zamiri, a director and photographer, with their images.


In Explaining His Gaffe, Heritage Foundation Leader Pleads Ignorance

Kevin Roberts, under fire for defending Tucker Carlson’s interview with a white nationalist, said that he did not keep up with the news and that he had simply read an aide’s script.


The Mac calculator’s original design came from letting Steve Jobs play with menus for ten minutes


In February 1982, Apple employee #8 Chris Espinosa faced a problem that would feel familiar to anyone who has ever had a micromanaging boss: Steve Jobs wouldn’t stop critiquing his calculator design for the Mac. After days of revision cycles, the 21-year-old programmer found an elegant solution: He built what he called the “Steve Jobs Roll Your Own Calculator Construction Set” and let Jobs design it himself.

This delightful true story comes from Andy Hertzfeld’s Folklore.org, a legendary tech history site that chronicles the development of the original Macintosh, which was released in January 1984. I ran across the story again recently and thought it was worth sharing as a fun anecdote in an age where influential software designs often come by committee.

Design by menu

Chris Espinosa started working for Apple at age 14 in 1976 as the company’s youngest employee. By 1981, while Espinosa was studying at UC Berkeley, Jobs convinced him to drop out and work on the Mac team full time.

Believe it or not, Chris Espinosa still works at Apple as its longest-serving employee. But back in the day, as manager of documentation for the Macintosh, Espinosa decided to write a demo program using Bill Atkinson’s QuickDraw, the Mac’s graphics system, to better understand how it worked. He chose to create a calculator as one of the planned “desk ornaments,” which were small utility programs that would ship with the Mac. They later came to be called “desk accessories.”

Espinosa thought his initial calculator design looked good, but Jobs had other ideas when he saw it. Hertzfeld describes the scene: “Well, it’s a start,” Steve said, “but basically, it stinks. The background color is too dark, some lines are the wrong thickness, and the buttons are too big.”

The Mac OS 1.0 calculator seen in situ with other desk accessories. Credit: Apple / Benj Edwards

For several days, Espinosa would incorporate Jobs’s suggestions from the previous day, only to have Jobs find new faults with each iteration. It might have felt like a classic case of “design by committee,” but in this case, the committee was just one very particular person who seemed impossible to satisfy.

Rather than continue the endless revision cycle, Espinosa took a different approach. According to Hertzfeld, Espinosa created a program that exposed every visual parameter of the calculator through pull-down menus: line thickness, button sizes, background patterns, and more. When Jobs sat down with it, he spent about ten minutes adjusting settings until he found a combination he liked.

The approach worked. When given direct control over the parameters rather than having to articulate his preferences verbally, Jobs quickly arrived at a design he was satisfied with. Hertzfeld notes that he implemented the calculator’s UI a few months later using Jobs’s parameter choices from that ten-minute session, while Donn Denman, another member of the Macintosh team, handled the mathematical functions.
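Hertzfeld's account doesn't preserve Espinosa's code, but the pattern is simple to sketch in modern terms: hoist every visual constant into a data structure and let a menu write to it. A hypothetical Python rendering (all parameter names here are invented, not Apple's):

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CalculatorStyle:
    """Visual parameters a picky reviewer might want to tweak from a menu."""
    background_shade: float = 0.5  # 0.0 = white, 1.0 = black
    line_thickness: int = 1        # border thickness in pixels
    button_width: int = 24         # button size in pixels
    button_height: int = 16

def apply_menu_choice(style: CalculatorStyle, parameter: str, value) -> CalculatorStyle:
    """One menu pick writes one parameter and yields a new style to re-render.
    No recompile, no revision cycle: the reviewer converges on a design alone."""
    fields = asdict(style)
    if parameter not in fields:
        raise KeyError(f"unknown parameter: {parameter}")
    fields[parameter] = value
    return CalculatorStyle(**fields)

# Ten minutes of menu picks, as a sequence of parameter writes:
style = CalculatorStyle()
for name, value in [("background_shade", 0.25), ("button_width", 20)]:
    style = apply_menu_choice(style, name, value)
```

The design-tool insight is that the reviewer never needs to articulate a preference in words; each menu choice is itself the specification.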

That ten-minute session produced the calculator design that shipped with the Mac in 1984 and remained virtually unchanged through Mac OS 9, when Apple discontinued that OS in 2001. Apple replaced it in Mac OS X with a new design, ending the calculator’s 17-year run as the primary calculator interface for the Mac.

Why it worked

Espinosa’s Construction Set was an early example of what would later become common in software development: visual and parameterized design tools. In 1982, when most computers displayed monochrome text, the idea of letting someone fine-tune visual parameters through interactive controls without programming was fairly forward-thinking. Later, tools like HyperCard would formalize this kind of idea into a complete visual application framework.

The primitive calculator design tool also revealed something about Jobs’s management process. He knew what he wanted when he saw it, but he perhaps struggled to articulate it at times. By giving him direct manipulation ability, Espinosa did an end-run around that communication problem entirely. Later on, when he returned to Apple in the late 1990s, Jobs would famously insist on judging products by using them directly rather than through canned PowerPoint demos or lists of specifications.

The longevity of Jobs’s ten-minute design session suggests the approach worked. The calculator survived nearly two decades of Mac OS updates, outlasting many more elaborate interface elements. What started as a workaround became one of the Mac’s simplest and most enduring designs.

By the way, if you want to try the original Mac OS calculator yourself, you can run various antique versions of the operating system in your browser thanks to the Infinite Mac website.


How I Stopped Worrying and Learned to Love the Bots


The decisions by major AI companies—xAI in July, Meta in August, and OpenAI last month—to open their chatbots to erotica have supercharged debate around humans forming romantic relationships with AI. Critics argue that this is the end of human connection.

I founded and run one of the largest romantic chat companies in the world, janitorAI. And yes, I chat with the bots myself—mafiosa Nova Marino is a personal favorite. When I launched the site in 2023, OpenAI sent me a cease-and-desist order because our users were using our platform together with OpenAI’s model to generate romantic content. Our website couldn’t access OpenAI’s application programming interface, and the company disabled some of our users’ OpenAI accounts for violating its terms of service. A few months later, that ban quietly disappeared.


Researchers isolate memorization from reasoning in AI neural networks


When engineers build AI language models like GPT-5 from training data, at least two major processing features emerge: memorization (reciting exact text they’ve seen before, like famous quotes or passages from books) and reasoning (solving new problems using general principles). New research from AI startup Goodfire.ai provides the first potentially clear evidence that these different functions actually work through completely separate neural pathways in the model’s architecture.

The researchers discovered that this separation proves remarkably clean. In a preprint paper released in late October, they reported that when they removed the memorization pathways, models lost 97 percent of their ability to recite training data verbatim but kept nearly all of their “logical reasoning” ability intact.

For example, at layer 22 in Allen Institute for AI’s OLMo-7B language model, the bottom 50 percent of weight components showed 23 percent higher activation on memorized data, while the top 10 percent showed 26 percent higher activation on general, non-memorized text. This mechanistic split enabled the researchers to surgically remove memorization while preserving other capabilities.

Perhaps most surprisingly, the researchers found that arithmetic operations seem to share the same neural pathways as memorization rather than logical reasoning. When they removed memorization circuits, mathematical performance plummeted to 66 percent while logical tasks remained nearly untouched. This discovery may explain why AI language models notoriously struggle with math without the use of external tools. They’re attempting to recall arithmetic from a limited memorization table rather than computing it, like a student who memorized times tables but never learned how multiplication works. The finding suggests that at current scales, language models treat “2+2=4” more like a memorized fact than a logical operation.

It’s worth noting that “reasoning” in AI research covers a spectrum of abilities that don’t necessarily match what we might call reasoning in humans. The logical reasoning that survived memory removal in this latest research includes tasks like evaluating true/false statements and following if-then rules, which are essentially applying learned patterns to new inputs. This also differs from the deeper “mathematical reasoning” required for proofs or novel problem-solving, which current AI models struggle with even when their pattern-matching abilities remain intact.

Looking ahead, if the information removal techniques receive further development in the future, AI companies could potentially one day remove, say, copyrighted content, private information, or harmful memorized text from a neural network without destroying the model’s ability to perform transformative tasks. However, since neural networks store information in distributed ways that are still not completely understood, for the time being, the researchers say their method “cannot guarantee complete elimination of sensitive information.” These are early steps in a new research direction for AI.

Traveling the neural landscape

To understand how researchers from Goodfire distinguished memorization from reasoning in these neural networks, it helps to know about a concept in AI called the “loss landscape.” The “loss landscape” is a way of visualizing how wrong or right an AI model’s predictions are as you adjust its internal settings (which are called “weights”).

Imagine you’re tuning a complex machine with millions of dials. The “loss” measures the number of mistakes the machine makes. High loss means many errors, low loss means few errors. The “landscape” is what you’d see if you could map out the error rate for every possible combination of dial settings.

During training, AI models essentially “roll downhill” in this landscape (gradient descent), adjusting their weights to find the valleys where they make the fewest mistakes. This process provides AI model outputs, like answers to questions.
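In miniature, that downhill roll looks like this. The sketch below uses a one-parameter toy loss, a parabola with its valley at w = 3, rather than anything resembling a real model:

```python
def loss(w):
    """Toy one-dial loss landscape: error is smallest at w = 3."""
    return (w - 3.0) ** 2

def grad(w):
    """Slope of the landscape at w; points uphill."""
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr=0.1, steps=100):
    """Roll downhill: repeatedly nudge the dial against the slope."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_final = gradient_descent(0.0)  # converges toward the valley at 3.0
```

Real training does the same thing with billions of dials at once, but the geometry of the resulting landscape is what the researchers probe.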

Figure 1: Overview of our approach. We collect activations and gradients from a sample of training data (a), which allows us to approximate loss curvature w.r.t. a weight matrix using K-FAC (b). We decompose these weight matrices into components (each the same size as the matrix), ordered from high to low curvature. In language models, we show that data from different tasks interacts with parts of the spectrum of components differently (c). Figure 1 from the paper “From Memorization to Reasoning in the Spectrum of Loss Curvature.” Credit: Merullo et al.

The researchers analyzed the “curvature” of the loss landscapes of particular AI language models, measuring how sensitive the model’s performance is to small changes in different neural network weights. Sharp peaks and valleys represent high curvature (where tiny changes cause big effects), while flat plains represent low curvature (where changes have minimal impact).

Using a technique called K-FAC (Kronecker-Factored Approximate Curvature), they found that individual memorized facts create sharp spikes in this landscape, but because each memorized item spikes in a different direction, when averaged together they create a flat profile. Meanwhile, reasoning abilities that many different inputs rely on maintain consistent moderate curves across the landscape, like rolling hills that remain roughly the same shape regardless of the direction from which you approach them.

“Directions that implement shared mechanisms used by many inputs add coherently and remain high-curvature on average,” the researchers write, describing reasoning pathways. In contrast, memorization uses “idiosyncratic sharp directions associated with specific examples” that appear flat when averaged across data.
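That averaging argument can be reproduced in a toy quadratic landscape. This is only an illustration of the geometry, not the paper's K-FAC machinery: each of ten toy "examples" below has a sharp spike along its own private axis plus a mild component along one shared direction, and a finite-difference probe shows the private spikes diluting under averaging while the shared direction stays prominent:

```python
import numpy as np

N = 10                         # number of toy "training examples"
u = np.ones(N) / np.sqrt(N)    # one shared "mechanism" direction

def example_loss(i):
    """Loss for example i: a sharp spike along its own axis (memorization-like)
    plus a mild component along the shared direction u (reasoning-like)."""
    return lambda w: 50.0 * w[i] ** 2 + 5.0 * (u @ w) ** 2

def avg_loss(w):
    """Loss averaged over all examples, as in the full training objective."""
    return np.mean([example_loss(i)(w) for i in range(N)])

def directional_curvature(loss_fn, w, d, eps=1e-3):
    """Finite-difference second derivative of loss_fn along unit direction d."""
    d = d / np.linalg.norm(d)
    return (loss_fn(w + eps * d) - 2.0 * loss_fn(w) + loss_fn(w - eps * d)) / eps**2

w0 = np.zeros(N)
axis0 = np.eye(N)[0]
per_example = directional_curvature(example_loss(0), w0, axis0)  # sharp for its own example
diluted = directional_curvature(avg_loss, w0, axis0)             # flattens under averaging
shared = directional_curvature(avg_loss, w0, u)                  # adds coherently
```

With more examples, the private spikes dilute roughly as 1/N while the shared direction keeps its full curvature, which is the signature the researchers' curvature analysis exploits.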

Different tasks reveal a spectrum of mechanisms

The researchers tested their technique on multiple AI systems to verify the findings held across different architectures. They primarily used Allen Institute’s OLMo-2 family of open language models, specifically the 7 billion- and 1 billion-parameter versions, chosen because their training data is openly accessible. For vision models, they trained custom 86 million-parameter Vision Transformers (ViT-Base models) on ImageNet with intentionally mislabeled data to create controlled memorization. They also validated their findings against existing memorization removal methods like BalancedSubnet to establish performance benchmarks.

The team tested their discovery by selectively removing low-curvature weight components from these trained models. Memorized content dropped to 3.4 percent recall from nearly 100 percent. Meanwhile, logical reasoning tasks maintained 95 to 106 percent of baseline performance.
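Schematically, that editing step amounts to decomposing each weight matrix into ranked components and rebuilding it without the flat ones. The sketch below substitutes plain SVD components and synthetic curvature scores for the paper's K-FAC-derived decomposition, so it shows only the shape of the operation:

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "weight matrix" and its decomposition into rank-one components.
W = rng.normal(size=(8, 8))
U, S, Vt = np.linalg.svd(W)

# Stand-in curvature scores; the paper derives these from K-FAC, not at random.
curvature = rng.uniform(0.0, 1.0, size=len(S))

def ablate_flat_components(U, S, Vt, curvature, keep_fraction=0.5):
    """Rebuild W from only the high-curvature (shared-mechanism) components,
    zeroing the flat ones associated with memorized examples."""
    k = int(len(S) * keep_fraction)
    keep = np.argsort(curvature)[::-1][:k]  # indices of the k sharpest components
    mask = np.zeros_like(S)
    mask[keep] = 1.0
    return (U * (S * mask)) @ Vt            # U @ diag(S * mask) @ Vt

W_edited = ablate_flat_components(U, S, Vt, curvature)
```

The edited matrix drops to half rank here; in the real models, the hope is that what's lost is verbatim recall while the widely shared directions, and the capabilities they carry, survive.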

These logical tasks included Boolean expression evaluation, logical deduction puzzles where solvers must track relationships like “if A is taller than B,” object tracking through multiple swaps, and benchmarks like BoolQ for yes/no reasoning, Winogrande for common sense inference, and OpenBookQA for science questions requiring reasoning from provided facts. Some tasks fell between these extremes, revealing a spectrum of mechanisms.

Mathematical operations and closed-book fact retrieval shared pathways with memorization, dropping to 66 to 86 percent performance after editing. The researchers found arithmetic particularly brittle. Even when models generated identical reasoning chains, they failed at the calculation step after low-curvature components were removed.

Figure 3: Sensitivity of different kinds of tasks to ablation of flatter eigenvectors. Parametric knowledge retrieval, arithmetic, and memorization are brittle, but open-book fact retrieval and logical reasoning are robust, maintaining around 100 percent of original performance. Figure 3 from the paper “From Memorization to Reasoning in the Spectrum of Loss Curvature.” Credit: Merullo et al.

The team suggests this is either because “arithmetic problems themselves are memorized at the 7B scale” or because they “require narrowly used directions to do precise calculations.” Open-book question answering, which relies on provided context rather than internal knowledge, proved most robust to the editing procedure, maintaining nearly full performance.

Curiously, the mechanism separation varied by information type. Common facts like country capitals barely changed after editing, while rare facts like company CEOs dropped 78 percent. This suggests models allocate distinct neural resources based on how frequently information appears in training.

The K-FAC technique outperformed existing memorization removal methods without needing training examples of memorized content. On unseen historical quotes, K-FAC achieved 16.1 percent memorization versus 60 percent for the previous best method, BalancedSubnet.

Vision transformers showed similar patterns. When trained with intentionally mislabeled images, the models developed distinct pathways for memorizing wrong labels versus learning correct patterns. Removing memorization pathways restored 66.5 percent accuracy on previously mislabeled images.

Limits of memory removal

However, the researchers acknowledged that their technique isn’t perfect. Once-removed memories might return if the model receives more training, as other research has shown that current unlearning methods only suppress information rather than completely erasing it from the neural network’s weights. That means the “forgotten” content can be reactivated with just a few training steps targeting those suppressed areas.

The researchers also can’t fully explain why some abilities, like math, break so easily when memorization is removed. It’s unclear whether the model actually memorized all its arithmetic or whether math just happens to use similar neural circuits as memorization. Additionally, some sophisticated capabilities might look like memorization to their detection method, even when they’re actually complex reasoning patterns. Finally, the mathematical tools they use to measure the model’s “landscape” can become unreliable at the extremes, though this doesn’t affect the actual editing process.
