
Lead poisoning has been a feature of our evolution


Our hominid ancestors faced a Pleistocene world full of dangers—and apparently one of those dangers was lead poisoning.

Lead exposure sounds like a modern problem, at least if you define “modern” the way a paleoanthropologist might: a time that started a few thousand years ago with ancient Roman silver smelting and lead pipes. According to a recent study, however, lead is a much more ancient nemesis, one that predates not just the Romans but the existence of our genus Homo. Paleoanthropologist Renaud Joannes-Boyau of Australia’s Southern Cross University and his colleagues found evidence of exposure to dangerous amounts of lead in the teeth of fossil apes and hominins dating back almost 2 million years. And somewhat controversially, they suggest that the toxic element’s pervasiveness may have helped shape our evolutionary history.

The skull of an early hominid. Credit: Einsamer Schütze / Wikimedia

The Romans didn’t invent lead poisoning

Joannes-Boyau and his colleagues took tiny samples of preserved enamel and dentin from the teeth of 51 fossils. In most of those teeth, the paleoanthropologists found evidence that these apes and hominins had been exposed to lead—sometimes in dangerous quantities—fairly often during their early years.

Tooth enamel forms in thin layers, a little like tree rings, during the first six or so years of a person’s life. The teeth in your mouth right now (and of which you are now uncomfortably aware; you’re welcome) are a chemical and physical record of your childhood health—including, perhaps, whether you liked to snack on lead paint chips. Bands of lead-tainted tooth enamel suggest that a person had a lot of lead in their bloodstream during the year that layer of enamel was forming (in this case, “a lot” means an amount measurable in parts per million).

In 71 percent of the hominin teeth that Joannes-Boyau and his colleagues sampled, dark bands of lead in the tooth enamel showed “clear signs of episodic lead exposure” during the crucial early childhood years. Those included teeth from 100,000-year-old members of our own species found in China and 250,000-year-old French Neanderthals. They also included much earlier hominins who lived between 1 and 2 million years ago in South Africa: early members of our genus Homo, along with our relatives Australopithecus africanus and Paranthropus robustus. Lead exposure, it turns out, is a very ancient problem.

Living in a dangerous world

This study isn’t the first evidence that ancient hominins dealt with lead in their environments. Two Neanderthals living 250,000 years ago in France experienced lead exposure as young children, according to a 2018 study. At the time, they were the oldest known examples of lead exposure (and they’re included in Joannes-Boyau and his colleagues’ recent study).

Until a few thousand years ago, no one was smelting silver, plumbing bathhouses, or releasing lead fumes in car exhaust. So how were our hominin ancestors exposed to the toxic element? Another study, published in 2015, showed that the Spanish caves occupied by other groups of Neanderthals contained enough heavy metals, including lead, to “meet the present-day standards of ‘contaminated soil.’”

Today, we mostly think of lead in terms of human-made pollution, so it’s easy to forget that it’s also found naturally in bedrock and soil. If that weren’t the case, archaeologists couldn’t use lead isotope ratios to tell where certain artifacts were made. And some places—and some types of rock—have higher lead concentrations than others. Several common minerals contain lead, including galena (lead sulfide). And the kind of lead exposure documented in Joannes-Boyau and his colleagues’ study would have happened at an age when little hominins were very prone to putting rocks, cave dirt, and other random objects in their mouths.

Some of the fossils from the Queque cave system in China, which included a 1.8 million-year-old extinct gorilla-like ape called Gigantopithecus blacki, had lead levels higher than 50 parts per million, which Joannes-Boyau and his colleagues describe as “a substantial level of lead that could have triggered some developmental, health, and perhaps social impairments.”

Even ancient hominins who weren’t living in caves full of lead-rich minerals had routes of exposure: wildfires and volcanic eruptions can release lead particles into the air, and erosion or flooding can sweep buried lead-rich rock or sediment into water sources. If you’re an Australopithecine living upstream of a lead-rich mica outcropping, for example, erosion might sprinkle poison into your drinking water—or the drinking water of the gazelle you eat, or the root system of the bush you get those tasty berries from.

Our world is full of poisons. Modern humans may have made a habit of digging them up and pumping them into the air, but they’ve always been lying in wait for the unwary.

Cubic crystals of the lead-sulfide mineral galena.

Digging into the details

Joannes-Boyau and his colleagues sampled the teeth of several hominin species from South Africa, all unearthed from cave systems just a few kilometers apart. All of them walked the area known as the Cradle of Humankind within a few hundred thousand years of each other (at most), and they would have shared a very similar environment. But they also would have had very different diets and ways of life, and that’s reflected in their wildly different exposures to lead.

A. africanus had the highest exposure levels, while P. robustus had signs of infrequent, very slight exposures (with Homo somewhere in between the two). Joannes-Boyau and his colleagues chalk the difference up to the species’ different diets and ecological niches.

“The different patterns of lead exposure could suggest that P. robustus lead bands were the result of acute exposure (e.g., wild forest fire),” Joannes-Boyau and his colleagues wrote, “while for the other two species, known to have a more varied diet, lead bands may be due to more frequent, seasonal, and higher lead concentration through bioaccumulation processes in the food chain.”

Did lead exposure affect our evolution?

Given their evidence that humans and their ancestors have regularly been exposed to lead, the team looked into whether this might have influenced human evolution. In doing so, they focused on a gene called NOVA1, which has been linked to both brain development and the response to lead exposure. The results fall well short of decisive; for now, this remains a provocative hypothesis.

The NOVA1 gene encodes a protein that influences the processing of messenger RNAs, allowing it to control the production of closely related protein variants from a single gene. It’s notable for a number of reasons. One is its role in brain development; mice without a working copy of NOVA1 die shortly after birth due to defects in muscle control. Its activity is also altered following exposure to lead.

But perhaps its most interesting feature is that modern humans have a version of the gene that differs by a single amino acid from the version found in all other primates, including our closest relatives, the Denisovans and Neanderthals. This raises the prospect that the difference is significant from an evolutionary perspective. Altering the mouse version so that it is identical to the one found in modern humans does alter the vocal behavior of these mice.

But work with human stem cells has produced mixed results. One group, led by one of the researchers involved in this work, suggested that stem cells carrying the ancestral form of the protein behaved differently from those carrying the modern human version. But others have been unable to replicate those results.

Regardless of that bit of confusion, the researchers used the same system, culturing stem cells with the modern human and ancestral versions of the protein. These clusters of cells (called organoids) were grown in media containing two different concentrations of lead, and changes in gene activity and protein production were examined. The researchers found changes, but the significance isn’t entirely clear. There were differences between the cells with the two versions of the gene, even without any lead present. Adding lead could produce additional changes, but some of those were partially reversed if more lead was added. And none of those changes were clearly related either to a response to lead or the developmental defects it can produce.

The relevance of these changes isn’t obvious, either, as stem cell cultures tend to reflect early neural development while the lead exposure found in the fossilized remains is due to exposure during the first few years of life.

So there isn’t any clear evidence that the variant found in modern humans protects individuals who are exposed to lead, much less that it was selected by evolution for that function. And given the widespread exposure seen in this work, it seems like all of our relatives—including some we know modern humans interbred with—would also have benefited from this variant if it was protective.

Science Advances, 2025. DOI: 10.1126/sciadv.adr1524


12 years of HDD analysis brings insight to the bathtub curve’s reliability


Backblaze is a backup and cloud storage company that has been tracking the annualized failure rates (AFRs) of the hard drives in its datacenter since 2013. As you can imagine, that’s netted the firm a lot of data. And that data has led the company to conclude that HDDs “are lasting longer” and showing fewer errors.
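
For a sense of what an AFR figure actually measures, here is a minimal Python sketch of the standard drive-days calculation Backblaze has described in its reports; the function name and the example figures are illustrative, not Backblaze's.

```python
# Annualized failure rate (AFR): failures divided by total drive-days of
# operation, scaled to a year. The numbers below are invented for illustration.

def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Return AFR as a percentage: failures per drive-year of operation."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Example: 1,200 failures across 300,000 drives observed for 90 days each.
print(f"{annualized_failure_rate(1_200, 300_000 * 90):.2f}%")  # ~1.62%
```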

That conclusion came from a blog post this week by Stephanie Doyle, Backblaze’s writer and blog operations specialist, and Pat Patterson, Backblaze’s chief technical evangelist. The authors compared the AFRs for the approximately 317,230 drives in Backblaze’s datacenter to the AFRs the company recorded when examining the 21,195 drives it had in 2013 and 206,928 drives in 2021. Doyle and Patterson said they identified “a pretty solid deviation in both age of drive failure and the high point of AFR from the last two times we’ve run the analyses.”

A graph titled “A Comparison of Backblaze Drive Failure Rates Over Time.” Credit: Backblaze

As Doyle and Patterson wrote, the tested drives’ failure rate this year peaked at 4.25 percent at 10 years and three months, compared to peaks of 13.73 percent at about three years and three months in 2013 and 14.24 percent at seven years and nine months in 2021.

“Not only is that a significant improvement in drive longevity, it’s also the first time we’ve seen the peak drive failure rate at the hairy end of the drive curve. And, it’s about a third of each of the other failure peaks,” Doyle and Patterson wrote.

You can check out Patterson and Doyle’s August blog post for more information about the drives they analyzed this year. The drives were from HGST, Seagate, Toshiba, and WDC; the average age per drive model ranged from 3.7 months to 103.9 months (about 8.7 years), and capacities ranged from 4TB to 24TB. In 2021, Backblaze’s sample had drives from the same vendors, with average ages per model of 3.57 to 80.85 months (about 6.7 years) and capacities from 4TB to 16TB.

As Backblaze has done in the past, Doyle and Patterson compared the behaviors of Backblaze’s datacenter HDDs with the bathtub curve, an engineering principle that says component failure rates tend to follow a U-shape over time, with more failures occurring early in life before the rate drops, settles, and then picks up again as the component ages.
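
To picture the idealized curve, reliability engineers often model the total failure hazard as the sum of a decreasing early-failure term, a constant random-failure term, and an increasing wear-out term. The sketch below uses Weibull hazards with arbitrary parameters purely to show that U-shape; it is not fit to Backblaze's data.

```python
# A sketch of the idealized bathtub curve: total hazard = infant mortality
# (decreasing) + random failures (constant) + wear-out (increasing).
# Weibull hazard: h(t) = (k / lam) * (t / lam) ** (k - 1); k < 1 decreases,
# k == 1 is constant, k > 1 increases. Parameters below are arbitrary.

def weibull_hazard(t: float, k: float, lam: float) -> float:
    return (k / lam) * (t / lam) ** (k - 1)

def bathtub_hazard(t_years: float) -> float:
    infant = weibull_hazard(t_years, k=0.5, lam=2.0)     # early failures
    background = weibull_hazard(t_years, k=1.0, lam=20.0)  # constant rate
    wearout = weibull_hazard(t_years, k=5.0, lam=12.0)   # aging components
    return infant + background + wearout

# High at the start, dipping in mid-life, rising again as drives age.
for year in (0.25, 1, 3, 6, 9, 12):
    print(f"year {year:>5}: hazard {bathtub_hazard(year):.3f}")
```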

But as seen in Backblaze’s graph above, the company’s HDDs aren’t adhering to that principle. The blog’s authors noted that in 2021 and 2025, Backblaze’s drives had a “pretty even failure rate through the significant majority of the drives’ lives, then a fairly steep spike once we get into drive failure territory.”

The blog continues:

What does that mean? Well, drives are getting better, and lasting longer. And, given that our trendlines are about the same shape from 2021 to 2025, we should likely check back in when 2029 rolls around to see if our failure peak has pushed out even further.

Speaking with Ars Technica, Doyle said that Backblaze’s analysis is good news for individuals shopping for larger hard drives because the devices are “going to last longer.”

She added:

In many ways, you can think of a datacenter’s use of hard drives as the ultimate test for a hard drive—you’re keeping a hard drive on and spinning for the max amount of hours, and often the amount of times you read/write files is well over what you’d ever see as a consumer. Industry trend-wise, drives are getting bigger, which means that oftentimes, folks are buying fewer of them. Reporting on how these drives perform in a data center environment, then, can give you more confidence that whatever drive you’re buying is a good investment.

The longevity of HDDs is another reason for shoppers to still consider HDDs over faster, more expensive SSDs.

“It’s a good idea to decide how justified the improvement in latency is,” Doyle said.

Questioning the bathtub curve

Doyle and Patterson aren’t looking to toss the bathtub curve out with the bathwater. They’re not suggesting that the bathtub curve doesn’t apply to HDDs, but rather that it overlooks additional factors affecting HDD failure rates, including “workload, manufacturing variation, firmware updates, and operational churn.” The principle also makes the following assumptions, per the authors:

  • Devices are identical and operate under the same conditions
  • Failures happen independently, driven mostly by time
  • The environment stays constant across a product’s life

While these conditions can largely be met in datacenter environments, “conditions can’t ever be perfect,” Doyle and Patterson noted. When considering an HDD’s failure rates over time, it’s wise to consider both the bathtub curve and how you use the component.


U.S. to Take Control of More Companies to Counter China

Treasury Secretary Scott Bessent, left, with the U.S. trade representative, Jamieson Greer. Mr. Bessent said the United States must become less reliant on China for rare-earth minerals.


Keep losing your key fob? Ford’s new “Truckle” is the answer.


I came across possibly one of the weirdest official automotive accessories this morning, courtesy of a friend's social media feed. It's called the "Truckle," and it's a hand-crafted silver and bronze belt buckle that might be the envy of every other cowboy out there, since this one has a place to keep your F-150's key fob without ruining the lines of your jeans.

The Truckle was designed by Utah-based A Cut Above Buckles, with a hand-engraved F-150 on the bump in the front. Behind the truck? Storage space for a Ford truck key fob, which should fit any F-150 from model year 2018 onward.

"You can put your key fob in the buckle—all your remote features work while it’s in the buckle," designer Andy Andrews told the Detroit Free Press. "Once you have it in there, you're not going to lose that key fob. You’re not going to be scratching your head (wondering) where it’s at. It's right there with you in the Truckle."

You'll have to supply your own belt. Credit: Ford
The key fob, in place. Credit: Ford

The limited edition Truckle is probably only for serious F-150 fans, though; at $200, it's quite a commitment to keeping your pants up. Ford and A Cut Above Buckles debuted the Truckle this past weekend at the Texas State Fair.


Why Signal’s post-quantum makeover is an amazing engineering achievement

1 Share

The encryption protecting communications against criminal and nation-state snooping is under threat. As private industry and governments get closer to building useful quantum computers, the algorithms protecting Bitcoin wallets, encrypted web visits, and other sensitive secrets will be useless. No one doubts the day will come, but as the now-common joke in cryptography circles observes, experts have been forecasting this cryptocalypse will arrive in the next 15 to 30 years for the past 30 years.

The uncertainty has created something of an existential dilemma: Should network architects spend the billions of dollars required to wean themselves off quantum-vulnerable algorithms now, or should they prioritize their limited security budgets fighting more immediate threats such as ransomware and espionage attacks? Given the expense and no clear deadline, it’s little wonder that less than half of all TLS connections made inside the Cloudflare network and only 18 percent of Fortune 500 networks support quantum-resistant TLS connections. It’s all but certain that far fewer organizations support quantum-ready encryption in less prominent protocols.

Triumph of the cypherpunks

One exception to the industry-wide lethargy is the engineering team that designs the Signal Protocol, the open source engine that powers the world’s most robust and resilient form of end-to-end encryption for multiple private chat apps, most notably the Signal Messenger. Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that make Signal fully quantum-resistant.

The complexity and problem-solving required for making the Signal Protocol quantum safe are as daunting as just about any in modern-day engineering. The original Signal Protocol already resembled the inside of a fine Swiss timepiece, with countless gears, wheels, springs, hands, and other parts all interoperating in an intricate way. In less adept hands, mucking about with an instrument as complex as the Signal protocol could have led to shortcuts or unintended consequences that hurt performance, undoing what would otherwise be a perfectly running watch. Yet this latest post-quantum upgrade (the first one came in 2023) is nothing short of a triumph.

“This appears to be a solid, thoughtful improvement to the existing Signal Protocol,” said Brian LaMacchia, a cryptography engineer who oversaw Microsoft's post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group. “As part of this work, Signal has done some interesting optimization under the hood so as to minimize the network performance impact of adding the post-quantum feature.”

Of the multiple hurdles to clear, the most challenging was accounting for the much larger key sizes that quantum-resistant algorithms require. The overhaul here adds protections based on ML-KEM-768, an implementation of the CRYSTALS-Kyber algorithm that was selected in 2022 and formalized last year by the National Institute of Standards and Technology. ML-KEM is short for Module-Lattice-Based Key-Encapsulation Mechanism, but most of the time, cryptographers refer to it simply as KEM.

Ratchets, ping-pong, and asynchrony

Like the elliptic curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.
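
To make that interface concrete, here is a toy sketch of the KEM pattern: key generation, encapsulation, and decapsulation. It stands in a classical X25519 exchange (via Python's cryptography package) where a deployment like Signal's would use ML-KEM, purely to show the shape of the API; the function names are illustrative, not Signal's or NIST's.

```python
# A sketch of the generic KEM interface. A classical X25519 Diffie-Hellman
# exchange stands in for ML-KEM here just to show keygen/encapsulate/
# decapsulate. Not Signal's code. Requires the "cryptography" package.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kem_keygen():
    """Recipient makes a key pair and publishes the public half."""
    sk = X25519PrivateKey.generate()
    pk_bytes = sk.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return sk, pk_bytes

def kem_encapsulate(recipient_pk_bytes: bytes):
    """Sender derives a shared secret plus a ciphertext to send."""
    eph = X25519PrivateKey.generate()
    raw = eph.exchange(X25519PublicKey.from_public_bytes(recipient_pk_bytes))
    shared = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                  info=b"toy-kem").derive(raw)
    ct = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return ct, shared

def kem_decapsulate(sk: X25519PrivateKey, ct: bytes) -> bytes:
    """Recipient recovers the same shared secret from the ciphertext."""
    raw = sk.exchange(X25519PublicKey.from_public_bytes(ct))
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"toy-kem").derive(raw)

sk, pk = kem_keygen()
ct, sender_secret = kem_encapsulate(pk)
assert kem_decapsulate(sk, ct) == sender_secret
```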

Key agreement in a protocol like TLS is relatively straightforward. That’s because devices connecting over TLS negotiate a key over a single handshake that occurs at the beginning of a session. The agreed-upon AES key is then used throughout the session. The Signal Protocol is different. Unlike TLS sessions, Signal sessions are protected by forward secrecy, a cryptographic property that ensures the compromise of a key used to encrypt a recent set of messages can’t be used to decrypt an earlier set of messages. The protocol also offers post-compromise security, which protects future messages from past key compromises. While a TLS session uses the same key throughout, keys within a Signal session constantly evolve.

To provide these confidentiality guarantees, the Signal Protocol updates secret key material each time a messaging party hits the send button or receives a message, and at other points, such as when displaying typing indicators or sending read receipts. The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a “double ratchet.” Just as a traditional ratchet allows a gear to rotate in one direction but not the other, the Signal ratchets allow messaging parties to create new keys based on a combination of preceding and newly agreed-upon secrets. The ratchets work in a single direction: the sending and receiving of future messages. Even if an adversary compromises a newly created secret, messages encrypted using older secrets can’t be decrypted.

The starting point is a handshake that performs three or four ECDH agreements that mix long- and short-term secrets to establish a shared secret. The creation of this "root key" allows the Double Ratchet to begin. Until 2023, the key agreement used X3DH; the handshake now uses PQXDH, which makes it quantum-resistant.

The first layer of the Double Ratchet, the Symmetric Ratchet, derives an AES key from the root key and advances it for every message sent. This allows every message to be encrypted with a new secret key. Consequently, if attackers compromise one party’s device, they won’t be able to learn anything about the keys that came earlier. Even then, though, the attackers would still be able to compute the keys used in future messages. That’s where the second, “Diffie-Hellman ratchet” comes in.
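
A minimal, standard-library sketch of such a symmetric ratchet step follows: each advance runs the chain key through a keyed hash to produce a fresh per-message key and the next chain key. The constants and KDF choice here are illustrative rather than Signal's exact construction.

```python
# A symmetric-key ratchet sketch: one-way, one step per message.
# Illustrative only; not Signal's exact constants or key schedule.
import hmac, hashlib

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

chain = b"\x00" * 32  # would be derived from the root key in a real session
for i in range(3):
    msg_key, chain = ratchet_step(chain)
    print(f"message {i}: key {msg_key.hex()[:16]}...")
# Learning the key for message 2 reveals nothing about the keys for 0 and 1,
# because the hash chain can't be run backward.
```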

The Diffie-Hellman ratchet incorporates a new ECDH public key into each message sent. Take Alice and Bob, the fictional characters often used to explain asymmetric encryption: when Alice sends Bob a message, she creates a new ratchet keypair and computes the ECDH agreement between this key and the last ratchet public key Bob sent. This gives her a new secret, and she knows that once Bob gets her new public key, he will know this secret, too (because, as mentioned earlier, Bob previously sent that other key). With that, Alice can mix the new secret with her old root key to get a new root key and start fresh. The result: Attackers who learn her old secrets won’t be able to tell the difference between her new ratchet keys and random noise.
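
Sketched in code, one such ratchet step might look like the following, again using X25519 from the cryptography package, with a simplified key schedule and made-up labels; this is not Signal's actual implementation.

```python
# One Diffie-Hellman ratchet step: Alice makes a fresh ratchet key pair, runs
# ECDH against Bob's last ratchet public key, and mixes the result with the
# old root key to derive a new root key and sending chain. Simplified sketch.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def dh_ratchet_step(root_key: bytes, bob_ratchet_pub: X25519PublicKey):
    alice_new = X25519PrivateKey.generate()        # fresh ratchet key pair
    dh_out = alice_new.exchange(bob_ratchet_pub)   # new shared secret
    okm = HKDF(algorithm=hashes.SHA256(), length=64, salt=root_key,
               info=b"toy-dh-ratchet").derive(dh_out)
    new_root_key, sending_chain_key = okm[:32], okm[32:]
    # Alice sends alice_new.public_key() with her next message so Bob can
    # perform the matching step and later reply with a fresh key of his own.
    return new_root_key, sending_chain_key, alice_new

root = b"\x00" * 32                                 # stand-in root key
bob_pub = X25519PrivateKey.generate().public_key()  # Bob's last ratchet key
root, chain, alice_key = dh_ratchet_step(root, bob_pub)
```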

The result is what Signal developers describe as “ping-pong” behavior, as the parties to a discussion take turns replacing ratchet key pairs one at a time. The effect: An eavesdropper who compromises one of the parties might recover a current ratchet private key, but soon enough, that private key will be replaced with a new, uncompromised one, and in a way that keeps it free from the prying eyes of the attacker.

The objective of the newly generated keys is to limit the number of messages that can be decrypted if an adversary recovers key material at some point in an ongoing chat. Messages sent prior to and after the compromise will remain off limits.

A major challenge designers of the Signal Protocol face is the need to make the ratchets work in an asynchronous environment. Asynchronous messages occur when parties send or receive them at different times—such as while one is offline and the other is active, or vice versa—without either needing to be present or respond immediately. The entire Signal Protocol must work within this asynchronous environment. What's more, it must work reliably over unstable networks and networks controlled by adversaries, such as a government that forces a telecom or cloud service to spy on the traffic.

Shor’s algorithm lurking

By all accounts, Signal’s double ratchet design is state-of-the-art. That said, it’s wide open to an inevitable if not immediate threat: quantum computing. That’s because an adversary capable of monitoring traffic passing from two or more messenger users can capture that data and feed it into a quantum computer—once one of sufficient power is viable—and calculate the ephemeral keys generated in the second ratchet.

In classical computing, it’s infeasible, if not impossible, for such an adversary to calculate the key. Like all asymmetric encryption algorithms, ECDH is based on a mathematical one-way function. Also known as trapdoor functions, these are trivial to compute in one direction and substantially harder to compute in reverse. In elliptic curve cryptography, this one-way function is based on the discrete logarithm problem, with key parameters derived from specific points on an elliptic curve over the field of integers modulo some prime P.

On average, an adversary equipped with only a classical computer would spend billions of years guessing integers before arriving at the right ones. A quantum computer, by contrast, would be able to calculate the correct integers in a matter of hours or days. Shor’s algorithm—which runs only on a quantum computer—effectively turns this one-way discrete logarithm function into a two-way one. Shor’s algorithm can similarly make quick work of solving the one-way function that’s the basis for the RSA algorithm.
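
For the mathematically inclined, the underlying hard problem and the quantum speedup can be stated compactly; this is the standard textbook framing of the elliptic curve discrete logarithm problem, not anything specific to Signal.

```latex
% Elliptic curve discrete logarithm problem (ECDLP):
% given points P and Q = kP on a curve E over a prime field, recover k.
\[
  Q = kP, \qquad P, Q \in E(\mathbb{F}_p), \qquad \text{find } k \in \mathbb{Z}.
\]
% Best known classical attacks (e.g., Pollard's rho) need on the order of
% sqrt(n) group operations for a subgroup of order n; Shor's algorithm on a
% sufficiently large quantum computer runs in time polynomial in log n.
\[
  T_{\text{classical}} = O\!\left(\sqrt{n}\right), \qquad
  T_{\text{quantum}} = \operatorname{poly}(\log n).
\]
```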

As noted earlier, the Signal Protocol received its first post-quantum makeover in 2023. This update added PQXDH—a Signal-specific implementation that combined the key agreements from elliptic curves used in X3DH (specifically X25519) and the quantum-safe KEM—in the initial protocol handshake. (X3DH was then put out to pasture as a standalone implementation.)

The move foreclosed the possibility of a quantum attack being able to recover the symmetric key used to start the ratchets, but the ephemeral keys established in the ping-ponging second ratchet remained vulnerable to a quantum attack. Signal’s latest update adds quantum resistance to these keys, ensuring that forward secrecy and post-compromise security are safe from Shor’s algorithm as well.

Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today's attacks from classical computers. The Signal Protocol developers didn’t want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe KEM to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security.

The technical challenges were anything but trivial. Elliptic curve keys generated in the X25519 implementation are 32 bytes long, small enough to be added to each message without creating a burden on already constrained bandwidth or computing resources. An ML-KEM-768 public key, by contrast, is 1,184 bytes. Additionally, Signal’s design requires sending both the encapsulation key and a ciphertext (another 1,088 bytes), making the total size 2,272 bytes.
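
The "71x increase" discussed below falls straight out of those sizes (ML-KEM-768 figures per FIPS 203):

```latex
% X25519 public key: 32 bytes. ML-KEM-768: 1,184-byte encapsulation key
% plus 1,088-byte ciphertext.
\[
  1184 + 1088 = 2272 \ \text{bytes}, \qquad \frac{2272}{32} = 71.
\]
```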

And then there were three

To handle the 71x increase, Signal developers considered a variety of options. One was to send the 2272-byte KEM key less often—say every 50th message or once every week—rather than every message. That idea was nixed because it doesn’t work well in asynchronous or adversarial messaging environments. Signal Protocol developers Graeme Connell and Rolfe Schmidt explained:

Consider the case of “send a key if you haven’t sent one in a week”. If Bob has been offline for 2 weeks, what does Alice do when she wants to send a message? What happens if we can lose messages, and we lose the one in fifty that contains a new key? Or, what happens if there’s an attacker in the middle that wants to stop us from generating new secrets, and can look for messages that are [many] bytes larger than the others and drop them, only allowing keyless messages through?

Another option Signal engineers considered was breaking the 2272-byte key into smaller chunks, say 71 of them at 32 bytes each. Breaking up the KEM key and putting one chunk in each message sounds like a viable approach at first, but once again, the asynchronous environment of messaging made it unworkable. What happens, for example, when data loss causes one of the chunks to be dropped? The protocol could deal with this scenario by simply re-sending chunks after all 71 had gone out. But then an adversary monitoring the traffic could simply cause packet 3 to be dropped each time, preventing Alice and Bob from completing the key exchange.

Signal developers ultimately went with a solution that used this multiple-chunks approach.

Sneaking an elephant through the cat door

To manage the asynchrony challenges, the developers turned to "erasure codes," a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks.

Charlie Jacomme, a researcher at INRIA Nancy on the Pesto team who focuses on formal verification and secure messaging, said this design accounts for packet loss by building redundancy into the chunked material. Instead of requiring all x chunks to be received to reconstruct the key, the model requires only x-y of them, where y is the acceptable number of lost packets. As long as that threshold is met, the new key can be established even when packet loss occurs.
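
The simplest possible illustration of that idea is a single XOR parity chunk, which lets the receiver rebuild any one lost chunk from the rest. Signal's actual erasure code is more general (tolerating y losses out of x chunks), so treat this only as a sketch of the principle; the chunk size and helper names are arbitrary.

```python
# Toy erasure code: split key material into data chunks plus one XOR parity
# chunk. Any single lost chunk can be rebuilt. Signal's real scheme tolerates
# more losses; this sketch only survives one.
import secrets

CHUNK = 32

def encode(key_material: bytes) -> list:
    chunks = [key_material[i:i + CHUNK]
              for i in range(0, len(key_material), CHUNK)]
    parity = bytes(CHUNK)
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks + [parity]

def decode(received: list) -> bytes:
    missing = [i for i, c in enumerate(received) if c is None]
    assert len(missing) <= 1, "this toy code only survives one lost chunk"
    if missing:
        rebuilt = bytes(CHUNK)
        for c in received:
            if c is not None:
                rebuilt = bytes(a ^ b for a, b in zip(rebuilt, c))
        received[missing[0]] = rebuilt
    return b"".join(received[:-1])  # drop the parity chunk

key = secrets.token_bytes(2272)   # 71 chunks of 32 bytes each
packets = encode(key)             # 72 chunks including parity
packets[3] = None                 # simulate one dropped packet
assert decode(packets) == key
```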

The other part of the design was to split the KEM computations into smaller steps. These KEM computations are distinct from the KEM key material.

As Jacomme explained it:

Essentially, a small part of the public key is enough to start computing and sending a bigger part of the ciphertext, so you can quickly send in parallel the rest of the public key and the beginning of the ciphertext. Essentially, the final computations are equal to the standard, but some stuff was parallelized.

All this in fact plays a role in the end security guarantees, because by optimizing the fact that KEM computations are done faster, you introduce in your key derivation fresh secrets more frequently.

Signal's post 10 days ago included several images that illustrate this design.

While the design solved the asynchronous messaging problem, it created a new complication of its own: This new quantum-safe ratchet advanced so quickly that it couldn’t be kept synchronized with the Diffie-Hellman ratchet. Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.
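
A toy sketch of that mixing step follows, using a plain hash as a stand-in for the real key derivation function; the domain label and key sizes are illustrative.

```python
# Mixing step sketch: combine one key from the classical Double Ratchet and
# one from the post-quantum SPQR ratchet, so the result stays secure as long
# as either input remains secret. Illustrative only; not Signal's exact KDF.
import hashlib

def mix_keys(classical_key: bytes, pq_key: bytes) -> bytes:
    # Hash both inputs under a domain-separation label into a single
    # 32-byte message encryption key.
    return hashlib.sha256(b"toy-triple-ratchet" + classical_key + pq_key).digest()

message_key = mix_keys(b"\x11" * 32, b"\x22" * 32)
print(message_key.hex())
```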

The Signal engineers have given this third ratchet a formal name: the Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the Usenix 25 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or..., the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don't care about quantum computers.

As both Signal and Jacomme noted, users of Signal and other messengers relying on the Signal Protocol need not concern themselves with any of these new designs. To paraphrase a certain device maker, it just works.

In the coming weeks or months, various messaging apps and app versions will be updated to add the triple ratchet. Until then, apps will simply rely on the double ratchet as they always did. Once apps receive the update, they’ll behave exactly as they did before upgrading.

For those who care about the internal workings of their Signal-based apps, though, the architects have documented in great depth the design of this new ratchet and how it behaves. Among other things, the work includes a mathematical proof verifying that the updated Signal protocol provides the claimed security properties.

Outside researchers are applauding the work.

“If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants,” Matt Green, a cryptography expert at Johns Hopkins University, wrote in an interview. “So the problem here is to sneak an elephant through a tunnel designed for cats. And that’s an amazing engineering achievement. But it also makes me wish we didn’t have to deal with elephants.”


Empathy


Empathy

And more empathy.
