Call for testing: OpenSSH 8.0 (mindrot.org)
179 points by lelf on March 29, 2019 | 96 comments


It includes an interesting comment on the scp vulnerability.

> This release contains mitigation for a weakness in the scp(1) tool and protocol (CVE-2019-6111) [...] The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead.

I think it's time to "alias scp=sftp". If the developers officially believe that scp should be retired, let's make the switch. Both are part of OpenSSH and the command-line arguments are almost identical.

Also, it has

> ssh(1), sshd(8): Add experimental quantum-computing resistant key exchange method, based on a combination of Streamlined NTRU Prime 4591^761 and X25519.

This is big. Together with the XMSS signature, it means we already have a complete suite of post-quantum cryptography (experimentally) deployed in OpenSSH! It may be the first mass deployment of post-quantum cryptography in a major protocol.

One month ago, I commented that the introduction of the XMSS post-quantum signature was "useless" (https://news.ycombinator.com/item?id=19160739), since breaking the key exchange (even retroactively, on recorded traffic) is a much more realistic threat than spoofing signatures. But now NTRU+X25519 is deployed; great progress here!


Frankly, I had not realized scp and sftp were two distinct protocols, despite using them heavily. I thought sftp was an interactive client while scp was meant for scripting, and never really bothered to dig further. I will now, thank you.

However, I expect plenty of people to be in the same boat.


I have actually been able to separate the external sftp-server from sshd and connect it to stunnel instead. The sftp client binary can also be instructed to use something other than the ssh client to establish connectivity.

A simpler invocation could run it through (unencrypted) inetd/socket activation; see the sketch after the session demo below.

SELinux really throws fits of hysteria when running this, however.

  $ cat sftp-tls.sh
  #!/bin/sh
  exec nc --ssl server 52345


  $ sftp -S ./sftp-tls.sh bogus
  Connected to bogus.
  sftp> cd /nobody
  sftp> get sftp-tls
  Fetching /nobody/sftp-tls to sftp-tls
  /nobody/sftp-tls       100%   15     8.9KB/s   00:00    
  sftp> quit
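
For the unencrypted inetd variant mentioned above, a rough sketch (the "sftp-tcp" service name, port and paths are assumptions for illustration):

  # /etc/services: name a port for the service
  sftp-tcp        5555/tcp

  # /etc/inetd.conf: hand each connection straight to sftp-server (no encryption!)
  sftp-tcp stream tcp nowait nobody /usr/libexec/openssh/sftp-server sftp-server

The client-side wrapper then becomes plain "exec nc server 5555" instead of nc --ssl.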


  # cat /etc/stunnel/sftp-ssl.conf
  #GLOBAL####

  ;sslVersion   = TLSv1.2
  TIMEOUTidle   = 6000
  renegotiation = no
  FIPS          = no
  options       = NO_SSLv2
  options       = NO_SSLv3
  options       = SINGLE_DH_USE
  options       = SINGLE_ECDH_USE
  options       = CIPHER_SERVER_PREFERENCE
  syslog        = yes
  debug         = debug
  setuid        = nobody
  setgid        = nobody
  #chroot       = /var/empty/stunnel

  libwrap       = no
  service       = sftp-ssl
  ; cd /var/empty; mkdir -p stunnel/etc; cd stunnel/etc;
  ; echo 'sftp-ssl: ALL EXCEPT localhost' >> hosts.deny;
  ; chcon -t stunnel_etc_t hosts.deny

  ; https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/

  ciphers       = ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:RSA+AESGCM:!aNULL:!MD5:!DSS
  curve         = secp521r1

  #CREDENTIALS####

  #verify       = 4
  cert          = /etc/stunnel/sftp-ssl.pem

  #ROLE####

  exec          = /usr/libexec/openssh/sftp-server
  #execargs     = sftp-server -d /home/nobody
  #execargs     = sftp-server -l DEBUG3


Don't forget some of your other options, such as piping across ssh commands, which is especially useful with tar output[1]. Piping tar was traditionally useful because tar dealt better than scp with some file types (devices, sockets, named pipes), or at least it did in the past. The other benefit is that once you realize you're just putting a sequence of commands in place to run remotely, you start seeing that you can do other things before the tar command if you need to...
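
A minimal sketch of that pattern (host and paths are placeholders):

  # stream a tree across ssh; tar preserves devices, sockets and FIFOs
  ssh user@remote 'tar -C /src -cf - .' | tar -C /dst -xf -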

Rsync's ssh support is generally good, so it might be about equivalent, but it's not always at hand.

1: https://www.cyberciti.biz/faq/howto-use-tar-command-through-...


I’ve been using Linux and UNIX systems professionally for ~20 years and this is the first time I realised they were distinct protocols too.


I was definitely in the same boat. I guess I'll just progressively phase out my use of scp in favor of rsync.


Some servers, and some clients, only support one of them. I've run into servers without sftp and subsequently stopped using it, since scp always works.

Also, rsync does not exist on Windows; Git Bash, for instance, has only scp, AFAIK.


A while ago I tried making Windows' Remote Differential Compression COM library work in a CLI utility like rsync.

https://docs.microsoft.com/en-us/previous-versions/windows/d...


You can install Cygwin on Windows. This provides sftp, rsync and lftp, which is awesome for doing rsync-like transfers on sftp-only chroot setups.


Actually, sftp is a bit more scriptable than scp. You can pass sftp a "batch file" of commands and control per-command whether a failure should terminate the batch.

scp wins on brevity of command-line syntax.
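
A quick sketch of that batch mode (filenames and host are placeholders):

  $ cat batch.txt
  cd /uploads
  -rm old.log
  put new.log
  $ sftp -b batch.txt user@server

A leading "-" on a batch command tells sftp to carry on even if that command fails.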


The scriptability of sftp is pretty awkward. It doesn't conform to how virtually every other tool works ("command + args") in a shell-scripting context; instead you need to pass it commands through a pipeline. Don't love it.

Would be nice to be able to just type sftp user@domain:/file and have it do the right thing.


Oh, that means you have to open a pipe to the process if you want to control it from another programming language? That would be a bit harder to leverage for automation (though I guess it's just a matter of writing a few helper functions, or you could always use bash and redirections if the language is ill-suited to pipes).


Add lftp on top of sftp and you get extensive scriptability. Working public anonymous demo: [1]

[1] - https://tinyvpn.org/sftp/#lftp


Checking out the mitigation for CVE-2019-6111 [1], how does `sftp` help?

     * scp(1): Relating to the above changes to scp(1); the scp protocol
       relies on the remote shell for wildcard expansion, so there is no
       infallible way for the client's wildcard matching to perfectly
       reflect the server's. If there is a difference between client and
       server wildcard expansion, the client may refuse files from the
       server. For this reason, we have provided a new "-T" flag to scp
       that disables these client-side checks at the risk of
       reintroducing the attack described above.
You could just do a remote `ls` and then `sftp` all the files listed to your client; nothing stops a malicious return from `ls`. Surely this is the risk of running any kind of wildcard copy? If the server is compromised, there's no telling what such a command will return.

    The scp protocol is outdated, inflexible and not readily fixed. We
    recommend the use of more modern protocols like sftp and rsync for
    file transfer instead.
The protocol itself might be outdated, but surely one of the best aspects of `scp` is its simplicity [2]?

[1] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6111

[2] https://github.com/openssh/openssh-portable/blob/master/scp....


IMO sftp is underspecified. The 14 IETF drafts that exist for it have all expired and never reached RFC status.

In practice, this means that the OpenSSH implementation is the de facto standard, and that interoperating a non-OpenSSH client with a non-OpenSSH server is a gamble.

It's interesting that they push for sftp. Personally, if scp is vulnerable to certain types of attacks, I'd rather see an entirely new standard, and an effort to avoid having it sit in draft state for decades.


sftp died in the IETF draft process because they kept tacking on outdated and misguided concepts from the original FTP, like ASCII/binary transfer modes and record file types (really, when was the last time an operating system that used those was popular, the 1980s?), as well as tons of other completely pointless cruft, onto an otherwise very clean protocol. So OpenSSH decided to stay at draft version 3, and since they are by far the dominant implementation, the standardization effort died.

I'm not sure why we need to see an entirely new standard, sftp as implemented by openssh is well conceived, simple, and covers its use case very well.


> I'm not sure why we need to see an entirely new standard, sftp as implemented by openssh is well conceived, simple, and covers its use case very well.

The key to my point here is "as implemented by openssh".

A relevant anecdote: I had to connect from my program to a third party (version 3) SFTP service, IIRC backed by some Oracle software (but don't quote me on that). I had a version 3 client library. The OpenSSH client worked just fine with the remote, and the client library worked just fine with an OpenSSH server. When connecting to the third party, the client gave up.

After some debugging I learned that the third party server didn't include the language tag in its response statuses (as SFTP version 3 software should). OpenSSH was just fine with this and ignored the problem, and the third party software was probably only ever tested with OpenSSH based clients.

It was definitely a bug in the third party service as far as I was concerned, but I think that this is the natural result of relying on behavior "as implemented by ssh" rather than a stable, well defined (and non-expired) standard.


I'm not sure SCP is specified at all, though. I recently wrote a true SCP client for Go (not SFTP, which, oddly enough, all the other "SCP" clients for Go use). The only documentation I could find was a single blog post from a guy who worked at Oracle.


scp only appears simple because it outsources large parts of its functionality (e.g. glob expansion) to the destination host's shell. scp/rcp was a great protocol for 1981 (yes, that's when it was introduced), but not for 2019.


And this leads to the question: OpenBSD is famous for removing unwanted or outdated code. What makes scp a special case? Is it too widely used to be deprecated and removed?


Yeah, pretty much. If someone implemented scp's command line on top of sftp, then we could start the (slow) deprecation process.


It's part of the ssh protocol, and getting rid of cruft from network protocols is almost completely impossible, unfortunately, because it's guaranteed to destroy many people's workflows in a way they cannot fix.

Yes, we all know about xkcd, no need to link, guys.


scp isn't part of the ssh protocol. It's a command that runs over it.


Out of curiosity, since I wasn't sure which one in particular throwaway2048 was talking about, here are my closest guesses:

927 - Standards - https://xkcd.com/927/

1172 - Workflow - https://xkcd.com/1172/

1323 - Protocol - https://xkcd.com/1323/


The only good file-transfer solution has always been rsync over ssh, rather than scp or sftp.

I stopped using scp and sftp many years ago. Besides the fact that sftp can be too slow, both scp and sftp fail to make identical copies of the transferred files: they sometimes lose metadata, e.g. extended attributes or high-resolution timestamps.


What I normally use is:

alias scp='rsync --archive --xattrs --acls --progress --rsh="ssh"'


I've become a huge fan lately of using "--info=progress2 --no-inc-recursive" with rsync. It can use more memory, since it builds the file list ahead of time rather than incrementally, but it gives you an overall progress report for the entire transfer rather than per-file progress reports, which can be much less useful.
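
A sketch of that invocation (paths are placeholders):

  # one overall progress figure for the whole transfer, not per file
  rsync -a --info=progress2 --no-inc-recursive src/ server:dst/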


Aaaah, I was so happy to switch until I saw "The source and destination cannot both be remote."


This gets my vote for nicest alias in the thread.


Using ssh with tar on both ends is my go-to for a new box that doesn't have rsync. The command lines are long, but it retains metadata. Like:

  ssh remote 'cd /src;tar -cf - .' | (cd /dst;tar -xf -)

It doesn't call you out so well on typos, though.


lftp can be handy too, as it can automatically parallelize transfers.

E.g., instead of one connection (as with rsync), lftp can transfer each file using n parallel streams.

Very useful with large files when there's some kind of per-TCP-connection bandwidth limit.
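
A sketch of lftp's segmented transfer (host and path are placeholders):

  # fetch one large file over sftp in 4 parallel segments
  lftp -e 'pget -n 4 /data/bigfile; quit' sftp://user@server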


Yeah, sftp can't see extended attributes. Its flags are also more cumbersome than scp's. For example:

sftp -o IdentityFile=keyfile vs. scp -i keyfile


Too bad there's no rsync for Windows.



Cygwin provides rsync, sftp and lftp for Windows.


Cygwin just feels "heavy" when all I want is rsync. Without any package management (that I'm aware of), I can't just install rsync and whatever dependencies it has; I have to put hundreds more binaries on my system.


There is also a Linux subsystem for Windows 10, which I have never used. Perhaps someone can say whether that has rsync.

[ edit ] If that and Cygwin are too heavy, you could ask the Samba team to create a POSIX Windows build of rsync. They had one long ago but stopped supporting it. That would be a single binary. I suspect they will push back.


This is even heavier. It's a full-blown Ubuntu install.


After a bit of Googling, it looks like a company has compiled and packaged rsync for Windows at a cost of $19 per license @ https://www.itefix.net/cwrsync

I found a GitHub repo with an older (2017) version of cwRsync that may predate the paywall @ https://github.com/billyc/cwrsync-installer


It does.


Check out MobaXterm. It's free as in beer and seems to wrap up Cygwin and X in one console. It's my go-to PuTTY replacement.

You can run a local console and rsync directly from your Windows drives (/drives).


WSL


Hmm... works fine in my MobaXterm window.


Unfortunately, you can't alias scp=sftp, as the command-line syntax is different. I'd love someone to write an scp replacement that attempts the sftp protocol first and optionally falls back to scp if that doesn't work.


This. I've been using rsync for a long time just for the resume feature, even without knowing about some of the other benefits. I tend to use the flags "-avzpP".


Not all ssh servers enable the sftp subsystem by default.


> I think it's time to "alias scp=sftp". If the developers officially believe that scp should be retired, let's make the switch. Both are part of OpenSSH and the command-line arguments are almost identical.

I've never used sftp before. Does it offer equivalents to:

  scp server:'*.pdf' .
or

  scp server:'*(oc[1])' .
for getting the last created file with zsh extended globs, or

  scp server:'$(custom_remote_program)' .
where custom_remote_program is shell code that outputs a list of files using server-side state?

EDIT: Reading:

> * scp(1): Relating to the above changes to scp(1); the scp protocol relies on the remote shell for wildcard expansion, so there is no infallible way for the client's wildcard matching to perfectly reflect the server's. If there is a difference between client and server wildcard expansion, the client may refuse files from the server. For this reason, we have provided a new "-T" flag to scp that disables these client-side checks at the risk of reintroducing the attack described above.

I guess, I'll actually want:

  alias scp='scp -T'
or just use `-T` on an as-needed basis. I admit I don't need this feature every week, but it is a relief that they didn't remove the functionality outright. I think the closest equivalent is something like:

  ssh server 'tar c $(custom_remote_program)' | tar xC destination
which I'd rather not be writing.


> I think it's time to "alias scp=sftp"

May I suggest rsync instead? Its syntax is much closer to scp's than sftp's is, to the point of being compatible for the most trivial use cases. And rsync is actively developed, has much saner defaults, can resume broken transfers, etc.
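
A sketch of how close the trivial invocations are (host and file are placeholders):

  scp   server:somefile .
  rsync server:somefile .   # same shape; modern rsync runs over ssh by default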


PuTTY's pscp has a command-line UI like scp, but (by default) it is implemented using the sftp protocol. It's a pity OpenSSH scp doesn't work the same way.


This works pretty well:

alias scp="rsync --partial --progress --rsh=ssh"

Just keep in mind that it's rsync underneath, because the parameters are different.


I'd recommend using `--partial-dir=.rsync-partial` instead of `--partial`, as the latter doesn't prevent an incomplete transfer from looking like a complete one.
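
A sketch (destination is a placeholder):

  # interrupted transfers park in dst/.rsync-partial/ and resume from there
  rsync --partial-dir=.rsync-partial bigfile server:dst/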


Recommending rsync as a more "modern" replacement for scp made me smile. Rsync is... idiosyncratic, and documented by a single codebase (notwithstanding the fine work the OpenBSD people have done writing their own rsync).


My pet peeve with rsync is that I have to double-check the man page for the trailing-slash rule every time I use it. Otherwise I love it.
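
For reference, a quick sketch of that rule (paths are placeholders):

  rsync -a src  dst/   # copies the directory itself: dst/src/...
  rsync -a src/ dst/   # copies only its contents:    dst/...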


My shell history is sure to show every use of ‘rsync <whatever>’ to be preceded by ‘rsync -n <whatever>’ to make sure I have the paths correct.


I get hit by that one too. But, to be fair, Linux and BSD differ on the behaviour of cp -r with ./thing/ versus ./thing; I found references saying ./thing/. worked on both.

I also find the --include/--exclude behaviour very confusing: does --include='*' outweigh --exclude=thing (or vice versa), and does this reset left to right?


That's why I have cpdup (https://github.com/DragonFlyBSD/cpdup) around on all systems I use. No need to worry about rsync's special syntax.


While you are right, at least rsync works correctly, i.e. it is able to make perfect copies of files regardless of the file systems used at the source and the destination, while scp and sftp have always lost information (file metadata) in certain cases.

Any file-copying program that is unable to make identical copies is garbage, in my opinion.

The Linux tmpfs file system is another bad example: like scp, it can silently lose information. Copying a file to a tmpfs directory and then on to a proper file system can lose file metadata.


So zmodem for simple file transfer and rsync for everything else?


Interesting that they seem to use CVS as the canonical source-control repo. I wonder what the dev team's workflows look like, and whether that tool makes it harder for external contributors to contribute.


For workflows, we have good tooling based around https://github.com/yasuoka/cvs2gitdump to convert OpenBSD CVS commits to git commits in a "pristine upstream" repository (https://github.com/djmdjm/openbsd-openssh-src) and merge them to the portable OpenSSH repository. Merging changes from OpenBSD to portable is usually completely painless and only a hassle when merging something really large (in LoC touched) like a refactoring.

It would certainly be easier for external contributors (and us) if OpenBSD used git natively, but as OpenBSD was, AFAIK, the first open source project to expose a CVS tree of its work to the world, there's a lot of legacy to overcome.


It's part of OpenBSD, which can't use git for licensing reasons, and the switch from cvs to svn isn't really worth it.

There is a git mirror on GitHub; you can work on that and submit diffs to the mailing list if you're worried about having to read the cvs manpage ;)


Let's make something clear: they can, and they use other GPL software in the base system.

They just choose not to, for (as I understand it) a combination of reasons: being averse to relying on new GPL code, being "happy enough" with CVS (while also working on OpenCVS), and CVS being a little more than just source control for them, due to the likes of CVSync[1].

I have patches in Git. If OpenBSD doesn't want to use it, that's fine by me, and I wish them the best of luck. But let's be clear: it's not because we're telling them they can't use it.

1. https://www.openbsd.org/cvsync.html


Aren't "patches in Git" just "patches"? Git can give you a patchfile that can be applied with "patch" regardless of VCS.


from GP: "I have patches in Git. If OpenBSD doesn't want to use it that's fine by me, and I wish them the best of luck. But let's be clear, it's not because we're telling them they can't use it."

I think he means he is a contributor to the source of `git` itself, and thus is justified in saying "we" in terms of not prohibiting OpenBSD's contributors from using the software he helps develop (git).


Correct. Thanks!


>also working on OpenCVS

Are they really, though? I can't remember how long "OpenCVS is to be released soon"[1] has been up, but the page's copyright begins in 2004.

While I respect other technical decisions taken by the OpenBSD team, this one strikes me as pure hubris. Sticking with SVN I could sort of understand, but CVS is just ridiculously frustrating to work with.

[1] https://www.openbsd.org/opencvs/index.html


Looking at archive.org, the page has had "to be released soon" since at least 2005 [1]. So I think your skepticism is in order.

[1] https://web.archive.org/web/20050130011942/https://www.openb...


> (also working on OpenCVS)

Not actively. It has been in hibernation for a long time.

Not counting the recent-ish fixes I committed, not much is happening with it.


DragonFly has git in base, so that, in and of itself, is not the reason why.


DragonFly uses git, but it's only a package, like chromium or xfce.


No, it is in the base system. I just did an install of the base system and it is there.


I know the process of using CVS has prevented me from making drive-by contributions to OpenBSD. If you have a specific change in mind that you spotted in the anonCVS browser while out and about, the pain of setting up CVS and working with it is usually enough to make you give up and read a book when you get home instead.


> and if that tool makes it harder for external contributors to contribute?

I hope so!

OpenSSH is a critical piece of infrastructure in most Unix and Linux environments, so making sure external contributors do some extra diligence seems like a good thing!


Learning CVS is not "diligence." If anything, it takes up mental space you could be using on things like remembering to bounds-check your array accesses.

Asking contributors to learn CVS as a quality gate is like having a tech interview process that requires beating the hiring manager at chess. Sure, you're looking for smart people, and sure, people good at chess are usually smart, but the correlation is so low you'll lose out on good candidates and get candidates who are better at chess than coding.


We don't ask anyone to learn CVS. People send the maintainers (myself and others) their changes (git format-patch is fine) and we integrate them.


Yeah, to be clear, I think you, the OpenSSH maintainers, are quite responsive to people who aren't using CVS (in particular, thank you for digging into https://bugzilla.mindrot.org/show_bug.cgi?id=2863, which didn't even have a patch!). I just want to be clear that people who look at what you're doing and say "OpenSSH is using an obscure process, that's what makes it high-quality" are misguided and should not implement such policies in their own products. It's not what you're doing, and if it were, it wouldn't work.


It may be important not to confuse "diligence" for "encumbrance".


The willingness to jump through outmoded and unnecessary hoops does not correlate to the quality of one's cryptographic code. In fact, many who are deliberately diligent will shy away from teams who care more about tradition than quality, in my experience.


Would love to see FIDO2/WebAuthn in SSH. Working with PKI tokens for key auth works, but has to be set up in the client.


As I understand it, using FIDO (including FIDO2) for SSH requires some pretty heavy lifting at the protocol layer.

FIDO tokens only want to do two things: provide you with a cookie and a public key, and then later, as often as necessary, take a cookie and give you proof that they still know the associated private key. Very narrowly conceived, on purpose.

SSH public key auth has the client start by proposing "OK, I can prove I know key X", and then the server either says "Fine, do that then" or "No, what else do you have?". An out-of-the-box OpenSSH server decides which to do by examining the ~/.ssh/authorized_keys file. A FIDO token needs the _server_ to begin by saying "OK, here's a cookie, can you prove you know the corresponding private key?" so that it can get the cookie; otherwise it can't prove anything.


You should take a look at https://github.com/gravitational/teleport

Disclaimer: I'm one of the contributors.


Sure, it would be nice. But fortunately you can use YubiKeys to store keypairs for use with ssh. And https://github.com/Yubico/yubico-pam can probably be used with SSH too.
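
A sketch of the PKCS#11 route (the module path varies by distro and is an assumption here):

  # point ssh at a PKCS#11 module that fronts the token
  ssh -I /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so user@host
  # or load the token's keys into the agent
  ssh-add -s /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so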


I was able to install the software, but there is no documentation on how to create NTRU+X25519 keys and enable them. I checked man pages, the mailing list, and tried Google. How is this done?


It looks like it's a key-exchange algorithm, not a host-key algorithm, so you don't make keys with it; you just tell your client and server to try it when connecting. You can specify it with the KexAlgorithms config property, for example ssh -o "KexAlgorithms=whatever". Use ssh -Q kex to see which options are available in your installation.
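
A sketch using the algorithm name that appears later in the thread:

  # confirm the new build offers it
  $ ssh -Q kex | grep sntrup
  sntrup4591761x25519-sha512@tinyssh.org
  # then opt in for a single connection
  $ ssh -o KexAlgorithms=sntrup4591761x25519-sha512@tinyssh.org user@host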


Thank you!

So, to help everyone (read the whole post first): you should probably have the line

  KexAlgorithms sntrup4591761x25519-sha512@tinyssh.org,curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256

in /etc/ssh/sshd_config on the server and /etc/ssh/ssh_config on the client (under "Host ").

(The rest of the kex recommendations are from https://stribika.github.io/2015/01/04/secure-secure-shell.ht...)

---

However, for some reason, after running "/usr/sbin/sshd -T" it said

"/etc/ssh/sshd_config line 2: Bad SSH2 KexAlgorithms 'sntrup4591761x25519-sha512@tinyssh.org'."

so I played around. It's hard to retrace everything I tried, but a working solution seemed to be to add the

  KexAlgorithms sntrup4591761x25519-sha512@tinyssh.org,curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256

line to the server's "/usr/local/etc/sshd_config" and to the client's "/usr/local/etc/ssh_config" under "Host ".

You then need to start the server with "sudo /usr/local/sbin/sshd" and use the client binary "/usr/local/bin/ssh".


Seeing a new release, I wish OpenSSH would conform to the XDG Base Directory specification. I do not agree with the proposed rationale [1].

On a side note, I lost my list of applications that do not conform to it. I know there are a couple, and I'd be glad if you shared the ones you know about.

[1]: http://bugzilla.mindrot.org/show_bug.cgi?id=2050


The rationale is pretty strong though:

>OpenSSH (and it's ancestor ssh-1.x) have a 17 year history of using ~/.ssh. This location is baked into innumerable users' brains, millions of happily working configurations and countless tools.

XDG can be considered a case of arbitrary change with weak justification. SSH is definitely a case where people need to know exactly where the configuration is. There is no point including it in the not-very-fun "find the config files" game engendered by XDG.


I agree on the security issues, and making the path depend on an environment variable or per-user config file would open a new can of worms.

However, an issue I have is that while you can configure that path system-wide, there is no way to control it per-user or in a finer-grained way. You could probably use mount namespaces to shadow ~/.ssh with another config, but that seems overkill.

I admit I have no practical use case for this right now (though I probably did in the past) besides "uncluttering $HOME". As for hunting the config files, those would reside in ~/.config/ssh by default. I personally find it more irritating when a program picks a random folder instead of conforming to the spec (and if I set the environment variable to somewhere else, I then know where things are). Go hunt through ~/{.program,.program/conf,Program/conf,Documents/Program/conf,Documents/My\ Games/conf}, or the countless variants I've experienced. Though it is admittedly a much smaller problem with better-established programs such as OpenSSH.


And that was in 2012. I reckon everybody who cares knows that the OpenSSH config uses ~/.ssh, and those who do not care will not cry if they accidentally delete it.



It would sure help encourage testing if we could have it on the AUR for easy installation on Arch-based distros.


Updating OpenSSH has been a frustrating experience recently because of its constant breaking of old clients.

I hope this release doesn't continue that worrying tendency.


Old clients typically break because of security issues, or fun stuff like disabling the SSH 1 protocol.

If your clients are broken, they should be updated.


You are right, of course. If nothing works anymore, that is by definition most secure.


That's not at all what I said.


It is widely acknowledged that hmac-md5 remains secure (if deprecated).

There is no problem using aes128-ctr, as far as I know.

Finally, diffie-hellman-group14-sha1 is not ideal, but breaking SHA-1 requires vast resources.

An sshd allowing these settings can talk to very, very old versions, and their CPU usage is light compared to some of the more modern configurations.

I do have systems that are sensitive to CPU usage, and I retain these settings there, where I am less concerned about ultimate protocol security and more focused on performance and a light footprint.
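
A sketch of what such a legacy-friendly configuration might look like (the "+" appends to the defaults; verify against your OpenSSH version):

  # sshd_config excerpt: additionally allow older, lighter algorithms
  KexAlgorithms +diffie-hellman-group14-sha1
  Ciphers +aes128-ctr
  MACs +hmac-md5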



