
Super cool, and congrats on getting it done. You should be proud, even just for persisting all these years.

Also, I'm surprised "All your base are belong to us!" hasn't been submitted yet!


Cool, I've been working on something similar myself. Not released yet, haven't had the time recently.

Curious as to why you store the data in the database in b64 as opposed to files on disk. What's the reasoning for that? Doesn't it make storage/backups/etc more complicated?

Not an expert myself, but I opted for in-browser encryption, in chunks, to avoid memory limitations (at least in some browsers, not FF yet), and in-browser gzip to keep file size down and speed things up.

I find your niche quite interesting (journalists, whistleblowers) but given the high stakes of that perhaps an open source or more collaborative approach would be easier to promote.

Another idea I've tried out but not pursued, is some sort of browser extension/addon (I used nwjs, similar to electron), that offers client side encryption for any site (form field really). So you'd only post encrypted stuff to whatever service (email, reddit, hn, whatever) and only anyone with the key would get to read it (well, assuming they have the key and the same extension). Just throwing the idea out there, I'm sure others have thought about something along those lines before. The details to get it right are tricky (UX wise), but for your target audience it may be well worth the extra work.

Keep it up! :)


Thank you for the kind words and for taking the time to read the white paper. It's a good feeling when you spend time and effort on something and someone takes the time to go through it.

I opted for database storage to simplify the management of ephemeral data. For a solo project, and as someone still learning, this was a practical way to keep the codebase manageable while focusing on core features like encryption and token-based access control.

However, you should note, in case you missed it in the white paper, that messages and files are deleted upon view (for view-once links) or expiry, whichever comes first. This ensures that the ~33% storage overhead from base64 is temporary, as a file only occupies space until it’s accessed or expires.

That said, you're absolutely right that base64 encoding adds unnecessary storage overhead and could complicate backups for large files. I also recognize that storing files on disk could be more efficient for large-scale use cases. As (or should I say IF?) the project scales with users, I'll definitely consider optimizations like disk storage or compression (your gzip idea is great!).

If I run into optimization problems, then it means people are using my product, and that sounds like one of them good problems (Marlo Stanfield's voice).

Your suggestion of in-browser encryption is super compelling, especially to assure users of total privacy. I noted in my white paper that client-side encryption is a future goal to address the limitation of the current server-side encryption, and your approach aligns with that vision.

The browser extension idea is also fascinating; I hadn't thought of that.

I’m open to collaboration (again, as mentioned in my white paper) and would love to discuss ideas for making ClosedLinks more auditable while still keeping it commercially viable/sustainable. I’d be excited to hear more about your project or explore ways we could collaborate on privacy-focused tools.

Thanks again for the encouragement and for sparking this discussion!


Just a +1 for browser encryption... you should be able to use PBKDF2 + AES: take an input passphrase, run it through PBKDF2 to generate an AES key, then encrypt the input file in the browser. I'm not sure you gain much via gzip before/after; depending on the document, it may already be a zip file (Word/OO formats, etc.).
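
Something like this with the Web Crypto API (untested sketch; the iteration count, salt/IV sizes, and function name are just illustrative):

    // Derive an AES-GCM key from a passphrase with PBKDF2, then encrypt
    // a File picked in the browser. Parameters are illustrative only.
    async function encryptFile(file: File, passphrase: string) {
      const salt = crypto.getRandomValues(new Uint8Array(16));
      const iv = crypto.getRandomValues(new Uint8Array(12));

      // Import the passphrase as raw key material for PBKDF2.
      const material = await crypto.subtle.importKey(
        "raw",
        new TextEncoder().encode(passphrase),
        "PBKDF2",
        false,
        ["deriveKey"]
      );

      // Derive a 256-bit AES-GCM key from the passphrase.
      const key = await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 310000, hash: "SHA-256" },
        material,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt"]
      );

      // Encrypt the whole file in one call (fine for small files;
      // big files would need chunking).
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        await file.arrayBuffer()
      );

      // Salt and IV are not secret; ship them alongside the ciphertext.
      return { salt, iv, ciphertext: new Uint8Array(ciphertext) };
    }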

On the file storage, I generally recommend going straight to a cloud interface to separate the storage backend from the actual storage medium... There are self-hosted options for an S3-compatible backend you manage, or you can use actual S3 or one of several other providers of S3-style storage.
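
Rough sketch of what that separation buys you, using the AWS SDK v3 (the endpoint, bucket, and env var names are placeholders; the same code talks to real S3 or a self-hosted S3-compatible backend like MinIO just by changing the endpoint):

    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    // Point the client at whichever S3-compatible backend you run today;
    // swapping the storage medium later is a config change, not a rewrite.
    const s3 = new S3Client({
      region: "us-east-1",
      endpoint: "https://minio.internal.example:9000", // omit for real S3
      forcePathStyle: true, // most self-hosted backends need path-style URLs
      credentials: {
        accessKeyId: process.env.S3_ACCESS_KEY ?? "",
        secretAccessKey: process.env.S3_SECRET_KEY ?? "",
      },
    });

    export async function storeBlob(key: string, body: Uint8Array) {
      await s3.send(
        new PutObjectCommand({ Bucket: "uploads", Key: key, Body: body })
      );
    }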


Chrome with JS disabled works well enough for me on mobile. It's also easy to whitelist specific sites. But mostly, if I get a blank page I just go back.


I just use Brave and its built-in ad blocking on iOS. This is the killer app these days.

Very few sites really work with JS disabled.



N.I.B. for me. Greatest love song of all time. https://youtu.be/NsXEb-NOs88

Changes. Saddest song of all time. Can't listen to it without crying. https://youtu.be/dOz_dLmpT9A


I have two!

One is a webapp I wrote almost 20 years ago for my dad and it's still being used today. It runs on IIS and is built with classic ASP and vanilla HTML/CSS/JS (no frameworks back then). They use it to track orders and invoices to suppliers/vendors and ensure what they receive is what they ordered.

The other is an Electron-type app that saves people hundreds of hours per month by letting them bypass some bad UIs and interact with external services directly. It's been running for 6 years, I've only had to make a few updates, and it's the one thing I don't need monitoring for: not only has it been quite stable, I get called immediately if it breaks (e.g. when external services change their endpoints).


Wow, that’s amazing! There’s something really beautiful about software that quietly runs for years — especially when it helps your family or saves real people time.

Your first project really touched me. A 20-year-old app still in use today? That’s not just code, that’s legacy. And the second one sounds like exactly the kind of practical tool that developers dream of building — something that just works and stays out of the way.

Thanks for sharing this, it’s inspiring!


Ephemeral, client-side encrypted sharing of files, text, html, and forms.

Just prototyping at the moment, but the goal is to let users share not only files (even big ones) but also forms, like Google Forms, except encrypted and one-time only (read once).

The use case I have in mind is allowing businesses to create GDPR forms (with private info, consent, etc), share unique urls with specific customers, and once the data is received by the business delete it from the server.

This could be useful to businesses that don't have a customer-facing portal, but have to deal with PII and the customer needs to consent and verify the data and what it's used for.

The data is encrypted client-side (Web Crypto), and the password is either shared in the URL (in the hash fragment, itself encrypted by a key stored on the server) or by other means (e.g. it could be the recipient's DOB or ID number, or some other previously shared or known value).
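
Roughly what generating a share link looks like (simplified sketch; the extra server-side wrapping of the key mentioned above is left out, and the function name is illustrative):

    // Generate a random AES-GCM key, export it, and put it in the URL hash
    // fragment so it never reaches the server in the request path.
    async function buildShareUrl(baseUrl: string): Promise<{ url: string; key: CryptoKey }> {
      const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        true, // extractable so it can be exported into the URL
        ["encrypt", "decrypt"]
      );
      const raw = new Uint8Array(await crypto.subtle.exportKey("raw", key));

      // base64url-encode the raw key bytes for the fragment.
      const b64 = btoa(String.fromCharCode(...raw))
        .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

      // The fragment (#...) is not sent to the server by browsers.
      return { url: `${baseUrl}#${b64}`, key };
    }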

Still trying to figure out the details, use cases, and business value, but the core backend is done and so is the client-side crypto. I managed to get chunked AES-GCM working so that it doesn't load the whole file into memory to encrypt it; it does it in chunks of, say, 2MB. Chrome also has chunked requests (in addition to responses) for sending the file to the server, but I'd probably need some other mechanism to get that working in other browsers (like sending the chunks in multiple requests and appending them to a single file on the server, but that adds more complexity, so I'm still working it out).
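
The chunked part, very roughly (illustrative sketch; a real version would also need to authenticate chunk order and count):

    // Encrypt a large File in 2MB slices with AES-GCM so the whole file
    // never sits in memory at once. Each chunk gets its own random IV.
    const CHUNK_SIZE = 2 * 1024 * 1024;

    async function* encryptInChunks(file: File, key: CryptoKey) {
      for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
        const chunk = await file.slice(offset, offset + CHUNK_SIZE).arrayBuffer();
        const iv = crypto.getRandomValues(new Uint8Array(12));
        const ciphertext = await crypto.subtle.encrypt(
          { name: "AES-GCM", iv },
          key,
          chunk
        );
        // Yield IV + ciphertext so each chunk can be uploaded (or appended
        // server-side) independently.
        yield { iv, ciphertext: new Uint8Array(ciphertext) };
      }
    }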


Don’t want to be too negative.

Just hoping to point out something from experience, but:

It never is “one time”; the number of ways people mess up is huge. Even if you put submit behind 5x confirmations, once a week there will be a new user who acknowledged all 5 times that they filled in everything they needed and know it won't be possible to fill it in again, but… they really need to fix that one thing they messed up when filling it in.


absolutely. even when everything goes smoothly, if you send me a one-time thing, i don't know if i am in the right situation to be able to handle this now. i need to be able to take a look and then decide if i want to deal with this now or later. having to make this decision without looking at it first would raise my anxiety level quite a lot, depending on who this is from.


Great feedback, thanks! Will definitely consider this.


Build the image on the deployment server? Why not build somewhere else once and save time during deployments?

I'm most familiar with on-prem deployments and quickly realised that it's much faster to build once, push to a registry (e.g. GitHub), and docker compose pull during deployments.


I think the idea with unregistry is that you're still building somewhere else once, but then instead of pushing everything to a registry, you push just your unique layers directly to each server you're deploying to.


Yeah. I'm curious what George Carlin and Gil Scott Heron would have to say about all these.


I bet you haven't seen indexes on decimals though! Fun times :)


Just curious, as someone with limited experience on this: what's wrong with it? Decimal is consistent and predictable (compared to float), so it shouldn't be that big of a deal, right? CMIIW


Yeah, not a big deal, but completely useless nonetheless: you'd never really query your table for just the one decimal column (e.g. the price) but for a couple more (e.g. the category and the price), so you'd have a multi-column index on those columns. The index on just the price column never gets used.


What if you wanted to select the "top 100 most expensive products", or the number of products between $0.01 and $10, $10.01 and $100, and $100.01 and $1000? Sure, you could do a full table scan on your products table for both queries, but an index on price would speed both up a lot if you have a lot of products. Of course, you have to determine whether the index would be used enough to make up for the extra index-update time when prices change or products are added or deleted.


Cheap solution, sure, add an index. But you're asking an OLAP question of an OLTP system. Questions like that are best asked of at least an out-of-production read replica, or better, an analytics DB.


I don't really understand this - what is an out of production read replica? Why wouldn't it just go to a production read replica?

And what is an "analytics db" in this context?


In general, it's just about avoiding mixed types of load. Predictable, audited application queries in a user request shouldn't be mixed with potentially extremely expensive, long-running analytics queries. Different replica sets isolate customers from potential performance impacts caused by data analytics.


You stream CDC events to get a 1-to-1 read replica in something like Snowflake/Databricks, where you can run all kinds of OLAP workflows on this analytics DB replica.


Oh, sure, but wouldn't the whole website be served out of a read-friendly database? Why would you have a separate "analytics" database to the main database(s) driving the site?


In the real world, people want cheap solutions and they want it yesterday.


They'd certainly need decimals in the first place, but yeah, I have seen indexes on every column, multiple times. I have seen indexes such that the sum of their sizes was 26 times the size of the original data... data that's actively being written to.


Have you looked into browser-side gzip streams, the File System API, and service workers? Maybe your "edge" could be end users.
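
e.g. something like this with CompressionStream (sketch; it's not available in every browser yet, so feature-detect before relying on it):

    // Pipe a File's stream through gzip without buffering the whole
    // file in memory.
    async function gzipFile(file: File): Promise<Blob> {
      const compressed = file.stream().pipeThrough(new CompressionStream("gzip"));
      return await new Response(compressed).blob();
    }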

