Backpointers to earlier epochs in append-only cryptographic data structures like key transparency logs. If the client last fetched epoch 1000, and the server reports the current epoch is 3000, the server can return log(2000) intermediate epochs, following skip pointers, to provide a hash chain from epoch 3000 to 1000.
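The hop selection can be sketched like this (a toy sketch in Go, assuming each epoch e keeps backpointers to e - 2^k for every k; the actual pointer layout in a given transparency log may differ):

```go
package main

import "fmt"

// skipPath greedily follows power-of-two backpointers from a newer
// epoch down to an older one, assuming each epoch e stores pointers
// to e - 2^k for every k with 2^k <= e. The returned path has
// O(log(from-to)) hops.
func skipPath(from, to uint64) []uint64 {
	path := []uint64{from}
	for from > to {
		// Pick the largest power-of-two step that doesn't overshoot.
		step := uint64(1)
		for from >= step<<1 && from-step<<1 >= to {
			step <<= 1
		}
		from -= step
		path = append(path, from)
	}
	return path
}

func main() {
	fmt.Println(skipPath(3000, 1000))
	// → [3000 1976 1464 1208 1080 1016 1000]
}
```

Each adjacent pair in the path is linked by a stored hash, so the client checks a short chain instead of 2000 individual epochs.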
Amazing man, with many important contributions over a very long career. The Rabin Cryptosystem (like RSA, but with public exponent 2) is notable for two reasons. First, unlike RSA, it is provably as hard as "factorization" (as he would call it), and second, unlike RSA, it wasn't protected by patent.
I think in practice it doesn't work to deserialize only verified data. Snowpack has a mechanism for this but I found it impractical to require all use cases fit this form.
I'm not sure exactly what system you're describing, but I hope it doesn't involve hand-marshaling and unmarshaling of data structures. Your requirement (b) seems at odds with forward-compatibility. Lack of forward-compatibility complicates upgrades, especially in federated systems when you cannot expect all nodes to upgrade at once.
I might be biased, but it's been possible to write a sufficiently complex system (https://github.com/foks-proj/go-foks) without feeling "tortured." It's actually quite the opposite: it's the cleanest system I've programmed in for these use cases, and I've tried many over the last 25 years. I'm never left guessing how to verify an object, so I'm not sure how that criticism follows.
I also think it's worth noting that the same mechanism works for MACs, encryption, and prefixed hashing. Just today I came across this IDL code:
    struct ChunkNoncePayload @0xadba174b7e8dcc08 {
      id @0 : lib.FileID;
      offset @1 : lib.Offset;
      final @2 : Bool;
    }
And the following Go code for making a nonce for encrypting a chunk of a large file:
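(The original Go snippet isn't quoted here; below is a hypothetical sketch of the idea, with hand-written stand-ins for the IDL-generated types and serialization — the real generated code will differ:)

```go
package main

import (
	"crypto/sha512"
	"encoding/binary"
	"fmt"
)

// Hypothetical stand-ins for the IDL-generated types.
type FileID [16]byte
type Offset uint64

type ChunkNoncePayload struct {
	ID     FileID
	Offset Offset
	Final  bool
}

// typeID is the random 64-bit identifier from the IDL declaration
// (@0xadba174b7e8dcc08), doubling as the domain separator.
const typeID uint64 = 0xadba174b7e8dcc08

// Nonce derives a 24-byte nonce (XChaCha20-style) by hashing the
// domain separator followed by a fixed-width encoding of the payload.
// The real code would use the IDL's own serialization instead of this
// hand-rolled one.
func (p ChunkNoncePayload) Nonce() [24]byte {
	h := sha512.New()
	var sep [8]byte
	binary.BigEndian.PutUint64(sep[:], typeID)
	h.Write(sep[:])
	h.Write(p.ID[:])
	var off [8]byte
	binary.BigEndian.PutUint64(off[:], uint64(p.Offset))
	h.Write(off[:])
	if p.Final {
		h.Write([]byte{1})
	} else {
		h.Write([]byte{0})
	}
	var out [24]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	p := ChunkNoncePayload{Offset: 4096}
	fmt.Printf("%x\n", p.Nonce())
}
```

Because the separator is hashed in first, a nonce for a chunk payload can never collide with a nonce derived from any other type.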
As in signing, it's nice to know this nonce won't conflict with some other nonce for a different data type, and the domain separation in the IDL guarantees that.
Hi, post author here. Agree that the idea isn't tricky, but it seems like many systems still get it wrong, and there wasn't an available system that had all the necessary features. I've tried many of them over the years -- XDR, JSON, Msgpack, Protobufs.

When I sat down to write FOKS using protobufs, I found myself writing down "Context Strings" in a separate text file. There was no place for them to go in the IDL. I had worked on other systems where the same strategy was employed. I got to thinking: whenever you need to write down important program details in something that isn't compiled into the program (in this case, the list of "context strings"), you are inviting potentially serious bugs due to the code and documentation drifting apart, and it means the libraries or tools are inadequate.
I think this system is nice because it gives you compile-time guarantees that you can't sign without a domain separator, and you can't reuse a domain separator by accident. Also, I like the idea of generating these things randomly, since it's faster and scales better than any other alternative I could think of. And it even scales into some world where lots of different projects are using this system and sharing the same private keys (not a very likely world, I grant you).
It should be possible to change the name of the type, and this happens often in practice. But type renames shouldn't break preexisting signatures. In this scheme you are free to change the type name, and preexisting signatures still verify with new code -- as long as you never change the domain separator, which you never should do. With name-derived separators you'd also need to worry about two different projects reusing the same type name. Lastly, the transmission size in this scheme remains unaffected, since the domain separators do not appear in the serialized data. Rather, both sides agree on them via the protocol specification.
I would say two problems with the ASN.1 approach are: (1) it seems like too much cognitive overhead for the OIDs to have semantic meaning, and it invites accidental reuse; I think it matters way more that the OIDs are unique, which randomness gets you without much effort; and (2) the OIDs aren't always serialized first, they are allowed to be inside the message, and failures have resulted (https://nvd.nist.gov/vuln/detail/cve-2022-24771, https://nvd.nist.gov/vuln/detail/CVE-2025-12816)
(edit on where the OIDs can be, and added another CVE)
OIDs have to be unique just enough to not fall into the wrong parsing/validating path in the same system, which isn't that hard.
>Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure
That's on brand for the javascript world, yes.
With ASN.1 being a can of worms, at least it's a can of worms with a reputation, unlike this nice magic trick.
Disclaimer: there exists a PR filed under my name against an ASN.1 parser that fixes a bug; it has sat unmerged since October 2022.
Those CVEs seem a little more subtle than OID serialization issues. In the first example there are actually two distinct problems in concert that lead to the vulnerability, one of which is when a "low public exponent" is used.
It seems like in that PR, the fact that the OID wasn't checked is part of the problem. I think a better system wouldn't compile or would always fail to verify if the OID (domain separator) is wrong, and I think you'd get that behavior in the posted system.
This is Bleichenbacher's rump-session e=3 RSA attack. It's pretty straightforward, and is in Cryptopals if anyone wants to try it. If you don't check all the RSA padding, and you use e=3, you can just take an integer cube root.
https://github.com/foks-proj/go-foks/blob/main/lib/merkle/bi... https://github.com/foks-proj/go-foks/blob/main/lib/merkle/cl...