
Since WhatsApp uses TextSecure, can we be sure that they are blind to the content of our messages? Is there any way for them to hold the key, still claim it's e2e encrypted, and hand the key over to various states when they want to?


- we don't have access to the source code, so who knows what they have implemented?

- even if they have implemented it faithfully, you should compare fingerprints. If they don't line up, you might be subject to a MITM attack
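To illustrate what comparing fingerprints means in practice: both parties derive a short value from the public identity keys involved and compare it out-of-band. This is a simplified sketch, not Signal's actual safety-number algorithm (which uses iterated hashing and a decimal encoding), but the principle is the same:

```python
import hashlib

def fingerprint(my_identity_key: bytes, their_identity_key: bytes) -> str:
    """Simplified sketch: hash both identity keys in a canonical order.

    Both parties must arrive at the same value from the same key
    material, regardless of which side computes it.
    """
    digest = hashlib.sha256()
    for key in sorted([my_identity_key, their_identity_key]):
        digest.update(key)
    return digest.hexdigest()

# Both endpoints compute the fingerprint locally and compare it
# out-of-band (in person, over the phone). A mismatch suggests a
# MITM substituted one of the keys in transit.
alice_view = fingerprint(b"alice-key", b"bob-key")
bob_view = fingerprint(b"bob-key", b"alice-key")
assert alice_view == bob_view
```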


Even if you had access to the source code, how would you verify that the app you are running was compiled from that source?


They'd need to be building reproducibly[1], which is somewhat difficult to do on iOS thanks to code signing and App Store encryption[2]. Android is definitely closer[3].

[1] https://reproducible-builds.org

[2] https://github.com/WhisperSystems/Signal-iOS/issues/641

[3] https://f-droid.org/wiki/page/Deterministic,_Reproducible_Bu...


1. Anyone who can read a control flow graph.

2. Yes.


When the difference between correct code and a pretty bad bug can be one instruction[1], I'm not sure if "anyone who can read a control flow graph" is going to be able to find implementation flaws (intentional or not).

[1] https://youtu.be/5pAen7beYNc?t=13m51s


And how are things any different reading source code? The disassembled binary CFG is either going to branch on greater than or greater or equal, exactly like the source code. If you won't see the bug in the CFG, you won't see the bug in the source.
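For concreteness, here is a hypothetical bounds check where the buggy and correct versions differ by a single comparison operator, which after compilation is a single conditional-jump instruction in the CFG:

```python
def read_item_buggy(buf, index):
    # BUG: '<=' admits index == len(buf), one element past the end.
    # In a language without bounds checks this is an out-of-bounds read.
    if index <= len(buf):
        return buf[index]
    raise ValueError("index out of range")

def read_item_fixed(buf, index):
    # Correct: strict '<'. Compiled, the two versions differ only in
    # the comparison (e.g. jle vs jl) -- equally easy to overlook in
    # source and in a disassembled control flow graph.
    if index < len(buf):
        return buf[index]
    raise ValueError("index out of range")
```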


This is a valid point. Both the source code and the CFG can hide bugs pretty easily.

I would argue that reading source code is much easier, though. For example, if you are auditing code written in a memory-safe language, you don't need to look for memory corruption bugs. You also have an audit trail for all source code changes.


It's not like this code was originally written in Haskell. Come on.


I just love reading post-optimisation machine code.


The point was you don't, but plenty of people do. And those people haven't sounded the alarm on any red flags.


So if they implemented a small extra procedure that copies text and sends it on a side channel when a flag is set externally, that would be obvious to many people despite it being closed source? Aren't these apps developed modularly?

Or indeed if it were a patch that could be initiated as a forced update at will from the server side, again, that would be clear to "people" based on the client-side binary?

These are the ways I imagine I'd snoop on supposed end-to-end encrypted communication channels; there's probably something much cleverer, but again, we're saying that can be detected easily?

Genuine questions, not a programmer, not familiar with the state-of-the-art of reading machine code/interpreting network traffic, nor indeed with which watchdogs are guarding against abuse by the TLAs.


That's not really what "side channel" means. You mean "covert channel", a related but different concept.


I was thinking something like using the SMS channel which, IIUC, was originally for carrying solely control information and ergo is a side-channel; more in the telecoms sense than the crypto sense. Yes, perhaps "covert channel" would be better.

Do you have any response to the substantive point, or just quibbles on the semantics?


The bigger point he's making is that your comment "Anyone who can read a control flow graph" can be taken to imply everything is fine here, when it's really not.

It's not hard to imagine how WA could be compromised without anyone knowing for many years.


I don't understand what you are trying to say here, sorry.


A couple of concrete names from this "plenty" would be a good start...

That is, those who read WhatsApp disassembly and did a thorough enough review to warrant, with sufficient confidence, not raising any flags. It would also help to know which specific build they were looking at.


They didn't for the vast majority of problems in open or closed source software. It was black hats or security researchers digging around with intent to use, sell, or fix it. A tiny, tiny subset of the people who can code in or read assembly.

What you and other assembly-verification supporters are proposing is an assembly version of the many-eyeballs hypothesis: the idea that, because many qualified people could see it, it was thoroughly analyzed in a way that implies actual security results. I see little to no evidence of that. Actually, I see the opposite, where malware hits binary systems in many ways that were easily preventable at the source or assembly level. So I don't trust the claim that assembly being verifiable means it was likely verified. More likely it was stockpiled into a 0-day collection.

You also can't trust something to be correct, reliable, and secure from assembly or binary alone. If you could, the high-assurance field would be all over that. Instead, the evidence indicated that the source... especially if requirements were encoded as formal policy... had more information to work with for analyzing potential compromises and information leaks. A lot gets lost in producing assembly. So, they instead analyze source for correctness in all states of execution for various properties, then verify that the assembly does the same thing, with a proof supplied to evaluators and/or customers. Much more trustworthy. Also almost non-existent for proprietary or FOSS software.


You assume that the binary you are using is one those people have already seen. This need not be the case: a rollout might have started hours ago and you might have been one of the first few to get the new version.


But do they? It's a slightly dangerous assumption to make. If everyone assumes that, nobody will bother to check at all.


We had access to the OpenSSL code, and yet Heartbleed happened.


It's about incentives. The heartbleed bug was present in the code for almost two full years before someone who discovered it exercised responsible disclosure. It's possible that others have noticed it before this, but it's highly likely that the only people looking were security researchers, power users (like Google, the ones who first reported it to the authors), or actors looking to exploit it for their own agenda.

As it stands, average people (or average developers) have little incentive to go trawling through the existing body of open source code, mostly because they probably have better things to do with their time. In the commercial world, bug bounties attempt to skew the incentives to encourage the 'more eyes' part of the axiom 'given enough eyeballs, all bugs are shallow'.


Granted, Heartbleed was only exploitable for a few years, but Shellshock was present since '89.

And I'm pretty sure bug bounties would apply to Shellshock and Heartbleed, as long as you can find a company with a bounty program that also used OpenSSL or could be exploited via Bash.


Bugs were also discovered in Cisco IOS and Juniper JunOS that led to security compromises and a leaked NSA rootkit. At least open source code can be audited by independent security experts, forked, rewritten and fixed (see LibreSSL).


Pretty much everyone whose job involves using IDA Pro does this every day.


If people were that diligent, Heartbleed would never have happened.


Yes.

They're not.


There's nothing preventing WhatsApp from releasing an update that checks a flag on your phone number/ID and turns off e2e for messages that you send. If their software were open source we would be able to verify what we are running, but as it stands it's not, and I doubt it ever will be.


Open source is a red herring. You're downloading WhatsApp from an app store (or, at least, the overwhelming majority of users are). If you can't verify what the binary is doing, the source code doesn't make a difference.

Beyond that, despite the repeated claims of open source advocates, there's nothing preventing people from taking the app store versions of things like WhatsApp and reverse engineering them.


If builds were reproducible (i.e. binaries would be identical if recompiled with the same toolchain on a different machine), then all it would take is N independent builders to verify the app store binary matches their locally built binary to greatly decrease the likelihood of tampering.

So, while being open source is not the complete answer, it certainly doesn't hurt.
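Assuming bit-for-bit reproducibility, the verification step each independent builder performs is trivial; a minimal sketch (file paths here are hypothetical):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_local_build(store_binary: str, local_binary: str) -> bool:
    # With reproducible builds, any independent builder can run this
    # check; N matching reports make undetected tampering much less
    # likely than trusting the app store alone.
    return sha256_file(store_binary) == sha256_file(local_binary)
```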


How do you guarantee the App Store doesn't serve limited edition binaries to selected recipients?


An App Store run by a central authority, with complete control over what can even be available and the ability to modify delivery at its own whim, is certainly a big issue in terms of trusting the integrity of the apps running on a device.


If you suspect yourself to be a selected recipient (e.g. you're Edward Snowden) I reckon you should compile your own binaries. Or read 'Reflections on Trusting Trust'.


Now you are the selected recipient of modified source code.


Get it from multiple sources, and do a diff.
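A sketch of that idea, assuming you can fetch the same release from several independent sources: hash each copy and flag any that disagree with the majority (the source names here are hypothetical):

```python
import hashlib

def outliers(copies: dict) -> list:
    """Given {source_name: file_contents}, return the sources whose
    copy differs from the majority. A targeted, modified copy stands
    out as soon as you compare it against independent mirrors."""
    digests = {name: hashlib.sha256(data).hexdigest()
               for name, data in copies.items()}
    counts = {}
    for d in digests.values():
        counts[d] = counts.get(d, 0) + 1
    majority = max(counts, key=counts.get)
    return [name for name, d in digests.items() if d != majority]
```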


Fun fact: The App Store already serves limited edition binaries to everyone because it encrypts them per-account :)


Forget the app store. How do we know Google, Apple, Microsoft, Ubuntu, etc. don't give us a malicious kernel update?

I don't think we have good solutions for the problem of malicious updates in general.

The only one I can think of is a trusted hypervisor that hashes memory in the guest and reports on it. And even then, how do we trust that?


Forget the software, the firmware running on the baseband processor can read system memory and send it over the network without you knowing. But that takes lots of effort to target a specific person.

So what do you do? It comes back to making sure that 'they' can only hack some of the people all the time, and all of the people some of the time. It's preventing them hacking all the people all the time I worry about.


I don't think it hurts! All else being equal, I'd rather have source than not have it. What I don't accept is our supposed helplessness in detecting backdoors in secure messaging software.


Or we could have build systems that weren't even more convoluted and fragile than they were 50 years ago and just release software in the form it's supposed to be in.


Obviously the app maintainer could push an update that simply leaks your keys and stored messages. Doesn't matter if it's open or closed source.

However, open source does give users some real recourse in the event that the project moves in an undesirable direction. If I don't like what's happening, I can fork it without your permission and still have access to the same development environment and build tools the original project had. I think that's important and valuable.


A major problem with your approach is that it assumes analyzing a binary for correctness or security is equivalent to analyzing well-documented, high-level source. It's not. It takes much more work to discover vulnerabilities in assembly or even correctness failures. That's because it lacks the context for how the software is supposed to operate.

I can read a commented Python or BASIC program with almost no effort unless it's very sloppy. I can tell you a MISRA C or SPARK program with associated static checks is immune to entire classes of errors without analyzing the source myself. I can tell you what information flows can or can't happen in a language like SIF implementing information-flow security. I can do all of this while expending almost no effort. So, I'm much more likely to do it than if I had to decompile and reverse-engineer a binary from scratch with analyses using the tiny information in a binary.

So, every time you say that, what you're really saying is: "Anyone could do this if they spent enormous time and energy. Sort of like they could hand-compile their C code each iteration. They probably won't, but I'm countering your request for source because in theory they could do all this with assembly, given enough effort."

It's definitely not true for correctness, as assembly lacks what you need to know it's correct. It's probably not true for security, as correctness is a prerequisite for it. In any case, economics matters: the effort required to achieve a thing determines whether someone will likely spend that effort. In the case of binary analysis, the odds of anyone spending it are apparently a lot lower than for source analysis.



