>An overlong line is no longer automatically hard-wrapped.
Finally. It was one of the most insane defaults in the history of text editors. Things like this are the reason why it's so hard to recommend nano to beginners.
Oh yes. I got severely burned a couple of times by config files riddled with mysterious errors. php.ini was never truly happy with the second half of every other comment suddenly posing as code.
I like the idea of hard wrap; a line of code shouldn't be longer than 80 characters. Sure, it's confusing for beginners, and sometimes data shouldn't be hard-wrapped, but in general (imho) it does more good than harm.
if [ x$CMDSH = xshell ] ; then
grep -qP $BLKLST <<<"$CMD" && CMD='panic' # skip obviously bad commands like
rm -rf /
# (this isn't a security thing, we just want to catch stupid mistakes)
fi
Have comments containing text that could be interpreted as a call to 'break-the-world(err=/dev/null,filter=const(true))'? Not often, but it only needs to go wrong once.
Checking for obvious problems should happen inside, e.g., rm itself (see `rm --help | grep root`), not in a random shell script, but I didn't want to bother coming up with a better excuse to have rm -rf / inside a comment.
Usually the errors caused by syntax-unaware text mangling are either obvious and harmless (so not as punchy an example) or much more subtle than "destroys literally the entire system".
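Incidentally, the mangling in the snippet above is easy to reproduce outside an editor. This is just a sketch: fold(1) with -s approximates a hard wrap at a fixed column, and the width here is deliberately chosen so the break lands right before the dangerous part of the comment.

```shell
# Approximate an editor's hard wrap with fold(1); the width is picked so the
# break falls inside the comment, turning its tail into a line of its own.
long_line='grep -qP $BLKLST <<<"$CMD" && CMD='\''panic'\'' # skip obviously bad commands like rm -rf /'
printf '%s\n' "$long_line" | fold -s -w 78
# The last output line is now just:  rm -rf /
# i.e. live code instead of a comment, exactly the failure mode described.
```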
Can you explain your thought process here? I believe the idea is that old terminals were 80 characters wide. Also, even with modern monitors it can be nice to have shorter lines so you don't have to move your eyes/head as much. I can't think of why the rule would be specific to source code or why it would apply any differently to other forms of text.
"nano" is an interesting creature - it was originally developed as a replacement for the "pico" command that shipped with "pine" (an amazing curses-based email client) because the license on pico was just not really clear.
I still use nano 50% of the time because it's so easy to jump into a file, make some quick changes, and jump out. Plus the mark/cut/paste keystrokes are seared into my memory after years of use.
I'm confused. The only reason one uses nano is that you ssh'ed into some random machine and you are stuck with whatever is on it. If you're bothered enough to install micro or to fiddle with the settings of nano, you might as well set up something more proper.
I'm a GUI dev/UX/education designer. Nano/pico to me are the only cli editors remotely suitable for ordinary human cognition. I have faint interest in the Emacs ecosystem, but wish it took basic ergonomics into account more seriously. I do understand I could get used to the lack of affordances but life's too short, perhaps. I understand keyboard usage wins you speed but not sure it's worth the general unpleasantness. I suppose I'm in the minority here though.
It is not a human cognition problem. It's a motivation problem.
Lack of motivation to take a few minutes to learn a keyboard command that will save you orders of magnitude more time in the future than you spend learning it.
With the GUI version, what do you have to learn in emacs? In gvim, you have to learn how to switch between normal and insert mode, but I can't think of anything else basic that can't be done in the menus.
You can pick a new keyboard shortcut to learn every day or two, or whenever you get tired of digging through menus to use a feature. In a year you'll wonder why the hell you let negative bias/emotion keep you from learning a real editor for so long.
I believe this discussion is a part of a wider cultural gap and differences in understanding. I have been researching these differences for a decade or two now, and would like to understand them better.
That's why I would like to understand where you're coming from.
First, I'd like to verify: Do you think motivation isn't a human cognition problem?
Second, it appears to me you are making a wide array of assumptions about my preferences and my goals, as well as about what I find difficult or easy.
For example, you appear to assume that were I to choose rationality (?, i.e. not carrying "negative bias/emotion") I should value speed over a host of other qualities of interaction design.
You seem to imply that committing to a specific kind of interesting, but very quaint visual/interaction style, i.e. the interaction constraints of a terminal UI, are a price worth paying for the speed of operation gained. Am I correct that you are implying this?
Keyboard shortcuts are not an exclusive feature of terminal UIs. I have a strong preference for UIs that comply with, for instance, the well-researched heuristics from the field of human-computer interaction. Terminal UIs are generally a lot more challenging to design to comply with them (for general audiences). Also visual hierarchy and a host of other beneficial properties are harder to achieve.
Yet, certain types of expert users tend to prefer - in my point of view - extremely arcane UIs, as they fit those users' style of operation.
They do not seem to fit mine. I want to emphasize that these are my current preferences, and you may well be right that I would enjoy the speed of emacs/vim usage. So far though, no one has bothered to sell them to me in a way that I personally would find convincing.
Arcane cursor moving commands have never saved me orders of magnitude more time. They are in fact a quickly forgotten inconvenience.
Now, things like Emmet and the possibility of having hundreds of simultaneous editing cursors, these things really save time, and work in an intuitive editor like SublimeText.
I'm not convinced this is true. I have spent a few weeks getting used to both emacs and vim and did end up going back to an IDE for proper development and nano for small editing. I'm not convinced that those two editors are worth the time investment that they require to be even remotely usable, even for the average software developer.
JOE (Joe's Own Editor)[1] is actually easier to use than nano, and it can emulate other editors (including nano/pico via jpico). It also has mouse support. It is installed by default in Slackware, and is in the package manager for most other OSes.
Thirding the recommendation for JOE. And as someone who started programming in the '80's, I especially enjoy its WordStar mode which feels like coming home again!
> The only real downside is that it's not installed by default.
This is why I recommend to anyone wanting to try a -nix like OS to learn the basic commands in both vi and nano. One or the other is shipped with every -nix like OS I've ever used. Once they get more used to the OS, go ahead and install whatever editor they want to. But logging into a fresh install of pretty much any system, one can get by with either of the two.
There was a fork/successor called alpine; I recall having to set it up for my wife. But beyond that I think that mutt eventually "won". (Especially when development started picking up again with mutt-ng, etc.)
I wrote a console-based mail-client with lua-based scripting, over at https://lumail.org/ but I don't think I have more than 50 hard-core users. It seems like most people use mutt, and work around its annoyances, or use web-browser for their mail.
(I realize at some point notmuch came out and became popular, but I've never had a strong feeling for how popular. I think I'm a little biased since I read of people using it, but nobody "local" to me does.)
I was used to pine and pico so I migrated to alpine and nano. Alpine works fine for me, alpine is very much like pine. I have spamassassin tagging mail as spam, and I have an alpine filter that moves those tagged messages to a possible spam folder.
I was used to pine when mutt came out, so I always used pine or alpine. You can check both out and see which you prefer.
Being an Emacs user I use Zile for that since it's effectively a trimmed-down version of Emacs. Of course unlike nano it's not usually installed by default on most distros, so I still have to use nano or vi from time to time, but I can't say I enjoy it very much.
If I want a light version of emacs (like on OpenWRT, or a constrained embedded device) I reach for JOVE, or Jonathan's Own Version of Emacs. JOVE was actually the version of emacs I first learned on back in the late 80s.
JOVE is tiny, and has implemented a huge amount of emacs. After you spend 15 minutes or so tweaking your .joverc file you won't even notice it's not GNU.
When I'm not using an IDE, I'm a vi user because it's always there. When my /usr on FreeBSD has issues, vi is in /bin. OpenWrt ships with vi. I honestly have no idea if Emacs is any better, but not being a default makes it a non-starter for me.
Production corporate environments are often firewalled/locked down. To change anything you have to go through layers of dis/approval. You are generally stuck with the defaults till the end of time.
It's amazing how, no matter how old or small a piece of software is, as long as it keeps being used, it remains a living thing and keeps on changing and evolving.
For a stable piece of software the build script(s) somehow turn into a change hotspot. You might have not touched the core of the software in five years, but guess what, every couple months some change to the build script is necessary.
And on how many platforms the software is used. E.g. when people try to compile things on HP-UX, perhaps even an older version, things often get interesting.
There needs to be a console text editor installed by default with CUA keybindings, like notepad, sublime text, dos edit, and borland tools were back in the day.
Nano can get you partially there. Micro is the current best, though it has a few macOS oddities around Home/End that can be configured away. In the past I used ne.
mcedit, the editor of mc, is a nice DOS-like editor, complete with blue background and pulldown menus. The mc suite contains a simple but powerful hex editor, too.
I'm sure nano/pico and the like are nice, but to me there are two things a terminal editor can try to solve for: being nice, or being ubiquitous. The nicest ones (say emacs) are way nicer than nano in terms of features, extensibility, etc. For being ubiquitous, nothing beats vi to date. I can't tell what nano is aiming for?
EDIT
I guess I had assumed that vi/vim was significantly more widespread than nano. Maybe that's an outdated assumption? I feel like I've come across a few instances where vi has been the only choice...
Nano is somewhat ubiquitous in that it's the default editor on many Linux distros, including both Debian and Ubuntu. I often run "crontab -e" or "git commit" and end up in nano because I forgot to set $EDITOR on a new Linux box.
Edit: I suspect it's the default because it launches with visible instructions on how to exit. Launching Vim or Emacs strands a lot of new users.
I use nano because I don't have to remember anything to use it. If you put me in front of nano I can open a config file, go to the line I need, edit in something, save it, and exit without knowing a single thing about nano. If you put me in front of emacs or vi I have absolutely no idea what I'm looking at.
I am sure vi and emacs are much powerful than nano, but the time I'd have to invest in learning them would outweigh the benefit. I just don't edit text files enough for it to pay off.
It seems like nano has started to reach vi-like levels of ubiquity. Every fresh Debian install annoys me by dropping me into nano as the default editor at some point (until I update-alternatives and point it at vim.tiny).
The edge still goes to vi, especially if you log into older/legacy things, or in very resource constricted environments. It looks like busybox, for instance, has a vi but not a nano: https://busybox.net/BusyBox.html
Legacy, more than anything else. Nano has been a thing for decades, and it will likely be a thing for decades more.
Are there better, more powerful editors out there? Sure. But just like vim, you can expect to find nano pretty much anywhere, and if you need to make a quick change, say to a configuration file on a bare-bones system, that's nice.
Of those two (nano/vim), I've also found it easier to introduce people to nano. I personally use vim, but if I'm doing a tutorial for people who aren't very familiar with the unix terminal, and at some point in the tutorial they need to edit a config file, I'd rather not take a big digression into explaining vi's modal-editing concept just to change one line in a config file.
This doesn't help the problem of knowing what to type on any system up-front, but fwiw, FreeBSD ships with a nano-like editor in the base install, ee(1).
Vi's not too bad. "i" to edit text. Esc to get out of editing. ":w" to save changes. ":q" to quit. ":wq" to do both. ":q!" to quit without saving. "hjkl" for navigation in case the arrow keys don't work.
That's enough to get by for quick edits. I haven't used vi a ton in my life, but I've managed to remember that much.
Emacs comes with extensive built in documentation. At the bottom it says hit C-h C-a which takes you to an about page which has various help links.
The GUI app has a menu bar, a standard GUI feature, that includes at the far right an entry titled Help, under which one can find a manual, docs, a tutorial, a FAQ and various other options.
I definitely agree with you that nano is more intuitive and certainly much more beginner-friendly.
Just saying that vi at its most basic level isn't bad at all. The user only has to remember i,esc,:w,:q,:q! for basic editing. That's not much.
But I do agree that it would be nice if vi / vim would simply list those commands when starting the editor for those who have never used it or haven't used it in ages.
Indeed, nano's interface is easily discoverable. Vi(m) gives some hints, and emacs... the tutorial is kinda amazing but takes hours to get through. I can't really imagine a discoverable interface for vim that doesn't take up tons of real estate -- so the alternative is a cheat sheet tacked up next to your monitor
Vim is how much you want to get out of it. I looked at Vim, thought about its value proposition, and decided only to learn the most basic commands from a cheat sheet. That makes it more productive than Nano already.
I know other people who aren't Vim fanatics at all but have occasional work in the terminal, and they too also just learned the most basic Vim commands and have the most basic Vim configurations, if at all.
I don't disagree with you at all and my reply should not be construed as criticism, but I just want to point out how funny the line, "Vim is how much you want to get out of it," is.
Nano is perfect for the casual terminal user, it does its job fine when editing an apache server config file once in a while. But when working on the terminal all day, there's no way around vim, IMO. It's good to memorize some basics (save, quit, search & replace) of vim, because there's always a system where only vi is installed (e.g. ESXi).
If you open vim without a file, it shows a little blurb that includes common operations like exiting. Hitting Ctrl+C (the common hotkey to kill a terminal app) in any vim buffer tells you how to exit. Ctrl+Z puts it in the background.
emacs is a significantly larger installation (disk) than nano, taking longer to install and more system resources consumed. If I'm jumping into a small VPS to tweak something, or auto-installing an editor in every docker container, nano wins over emacs.
As a developer who almost exclusively uses GUI editors (Sublime / IntelliJ) I have lost the emacs skills I gained in college, and I don't use emacs enough to remember the key strokes necessary to make it more useful than nano. It's more difficult than nano because I have to google a cheat-sheet every time I use emacs.
I never liked vim.
Nano is a great terminal editor, has enough commands to make me productive when needed, and I use it exclusively for terminal text editing.
Not on the supercomputer clusters I used to use. You're confusing user-centric desktop distros with the tiny userlands available in production systems.
It's too bad the Windows Store only contains Ubuntu LTS editions. It'll be July 2020 before all the improvements from 2.9.3 on up become available for most WSL users. :(
No need to wait until 2020. From the GNU nano README:
-------------------------------
How to compile and install nano:
Download the nano source code, then:
tar xvzf nano-x.y.z.tar.gz
cd nano-x.y.z
./configure
make
make install
It's that simple. Use --prefix with configure to override the
default installation directory of /usr/local.
If you haven't configured with the --disable-nanorc option, after
installation you may want to copy the doc/sample.nanorc file to
your home directory, rename it to ".nanorc", and then edit it
according to your taste.
This indeed. This is one of my first steps on any new computer/OS install.
I'm a daily Nano user, and in my experience the only real headaches you can run into are missing the right version, or configure not finding where readline and ncurses/ncursesw are. Those are really easy to install on any platform (even from source).
> It's too bad the Windows Store only contains Ubuntu LTS editions. It'll be July 2020 before all the improvements from 2.9.3 on up become available for most WSL users. :(
There are other Linux distros too. I had openSUSE installed from there not too long back. There are even GitHub projects that let you install any ISO onto WSL, in whatever directory on your hard drive you want.
I've run Debian testing and unstable on WSL lately and it's a fine experience. I don't know about Ubuntu, but for Debian, it's a matter of changing your /etc/apt/sources.list to point to your desired version and then:
$ sudo apt update && sudo apt upgrade
Personally, I've ended up back on Debian stable for WSL, because I have reasons to use a VM rather than WSL. So WSL is now just a dumb ssh terminal into a local VM. If you're curious, the reasons are primarily to do with speed of disk access operations. I also run into some issues with several emacs add-ins that I can work around but don't care to.*
I also run vcxsrv on Windows, and ssh X forwarding didn't work immediately, but a low-priority to-do is to use this to run local-VM X clients connecting to the Windows X server.
* I sometimes stop to ponder the things that I find acceptable to troubleshoot and the things I don't want to put the effort into. For reference, see above about local VM and X Forwarding....
That's super easy to get around. Install the LTS, change the release-upgrade setting from LTS to the non-LTS value (a quick google will give the actual value), and run do-release-upgrade. I've been running non-LTS Ubuntu for a while on WSL.
I installed Debian from the store, and set it as default with the ‘wslconfig’ command. I changed Debian to unstable for 99% of things, and use the Ubuntu profile for 1% of things. New packages!
The POSIX definition of a text file actually requires the file to end with \n, so such programs are technically correct.
The reason for this (and also why lines should not be longer than LINE_MAX or contain \0) is that when you simply call fgets() in a loop with a LINE_MAX-sized buffer, all of these things cause perfectly logical, but wrong/surprising behavior. This is why almost any modern small unix tool contains something called myfgets(), getline() or whatever, which wraps fgets() in a loop with realloc() and correctly distinguishes EOF from other errors.
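(For the curious, the LINE_MAX limit mentioned above is queryable at runtime via getconf; POSIX guarantees it is at least 2048.)

```shell
# LINE_MAX is the longest line (including the newline) that POSIX text
# utilities are required to handle; the POSIX minimum is 2048 bytes.
getconf LINE_MAX
```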
I mean, it ought to be possible to use a text editor to specify exactly what I want in a text file. If a newline is silently added at the end, then, well, that isn't happening.
Edit: Which is to say, I don't think l24ztj is using "expect" in the sense of "anticipate" but rather, y'know, the other sense. (Using Wiktionary's definitions: "To consider obligatory or required" or "To consider reasonably due".)
It should be possible, sure, just like it should be possible to forgo utf-8 and save your file with the iso-8859-1 encoding. But it's the less useful case. Many (most) programs do not care whether there is a final newline or not, but among those that do care, missing endlines will run you into more problems than the opposite (e.g.: Git, GCC, wc and cron will either complain or not work as expected).
What do you mean by "exactly" though? What is the "exact" value of "e"? 0x65? How about "é"?
You expect your editor to take what you've written and provide a valid text file with that information in it, which can then be read by something else. Applying the correct text encoding (UTF-8?) and adding a 0x0A as the last byte are part of making it valid.
If you want to specify the exact bytes on disk, you should use a hex/bin editor.
The correct text encoding is the one I specify. UTF-8 is a decent default. Where did that bit about 0x0A come from? That's just an extra byte. There's no text encoding that says "to decode this you should first remove the 0x0A at the end, which is not part of the text".
I should only need a hex editor if I want to make a file that can't be decoded as text under a known encoding, and no encoding I know (and certainly none that I use) has the encoding/decoding rule I mentioned above. I certainly shouldn't need one just to make a validly-encoded UTF-8 file that happens to end in a character other than a newline.
Basically, either it's part of the encoding or it's part of the text, and if it's part of the text, I ought to be able to control it. And it isn't part of the encoding.
> I mean, it ought to be possible to use a text editor to specify exactly what I want in a text file.
I agree. But for me a text file that does not end in newline is badly formed. It is not a text file, it is a binary file that happens to contain some printable characters.
I have the opposite experience: notepad, notepad++, vscode, kwrite/kate, sublime, gedit, and many more I've used... none of them had this unwanted behaviour.
Yes, but now if you do $ wc -l hello.txt, it will say 0. If a program is reading line by line, it won't get a full line if the last line is not terminated - this can lead to unwanted behavior, such as when you're matching end of line with regex.
This is a Unix thing, on Unix lines in text files are terminated and the terminator is supposed to always be there to mark the end of the line, as opposed to Windows where they are separated and a separator at the end of the file means that the text has an empty line at the end.
This is also why tools that have a Unix heritage tend to complain if there is no terminator/separator at the end of the file (even in Windows), like most C compilers.
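A quick terminal sketch of the behavior described above (file names here are arbitrary):

```shell
# A file whose last line lacks the LF terminator vs. a proper POSIX text file.
printf 'hello'   > unterminated.txt   # no final newline
printf 'hello\n' > terminated.txt     # POSIX-style text file

wc -l < unterminated.txt   # prints 0: no terminator, so no complete line
wc -l < terminated.txt     # prints 1

# A `while read` loop also drops the unterminated final line:
lines=0
while IFS= read -r line; do lines=$((lines + 1)); done < unterminated.txt
echo "$lines"              # prints 0

rm -f unterminated.txt terminated.txt
```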
The old behaviour is the sane default, at least for editing text files. POSIX defines a text file as a file consisting of a number of lines, each terminated by an LF character.
Are you thinking about most people who use a computer or most people who use a CLI based editor that's presumably on a remote machine they just sshed into?
Hardly any of my colleagues knows what POSIX is (and surely not in any depth, even those that do), but they still SSH and use editors that include Vim all the time...
I'd say in a company of 50+ SSHing people, around 5-6 know POSIX and its history, and usually the older ones (35+).
Especially a 2019 user, 20+ years removed from the systems, decisions, and rationales behind POSIX.
Just try to get someone (even a seasoned Linux user) to use a POSIX-only userland (as opposed to GNU), and see how fast they'll be pulling their hair out...
I would say it's not just a POSIX demand. Most unix tools are POSIX-compliant, and the user should expect just that and nothing more. Improper text files without an LF at the end may not be processed properly, so to be sure everything works, that final LF is mandatory.
Such an insane default. I'm currently programming a bittorrent client and was editing files to test some functionality. I was confused for a long time until I realized nano was turning my "echo 'this is a test' > test" into "echo 'this is a test\n' > test".
Serious question: how often do you log into a box where it would be impossible or considered rude for you to install a proper editor? I feel like I must have been blessed by the sysadmins in my life (them often being me) but it's never happened in 25 years of logging into *nix boxes.
In any large-ish organization, you probably don't do this on a production machine. You absolutely do not do it if you aren't the person (or on the team) who will get called/paged if it breaks, or you're probably risking your job if the package manager goes sideways or you flub it up and let the package manager throw in a couple of extra upgrades or whatever.
Probably, yes. If you're lucky, it might get rolled into the next deploy/upgrade.
Luckily, you usually don't need to do extensive editing there, because you usually can't change much. Most of the changes you'd want to make to config files, etc. also have to go into the next deploy or through some change management process anyway.
I suppose I have been lucky. If I ever actually had to edit something critical on an unfriendly box, I suspect I'd still just do it in Emacs via tramp because I know I'd be less error prone. But as you suggest, for many years now I've experienced a strict divide between config changes, which are managed properly and not just done as ad-hoc edits, and boxes that are meant to be interactive, in which case the environment is tailored to the users.
A lot of my side work involves fixing things for companies and individuals who use a cheap shared hosting plan from a bottom-barrel reseller. Often this is a virtual host in cPanel/WHM or Plesk and is very limited on what you can install outside of Fantastico, Softaculous, or similar web app stores. In those situations I normally have to deal with extremely restricted SSH access (if any at all) and can't install unapproved software. It's frustrating but it does happen frequently.
Embedded devices running some flavor of *nix, mostly. My main vi nemesis for years was Edgewater Edgemarc hardware. Fortunately these days I've mostly replaced those with pfSense boxes where I can install nano with a single command.