
Looks like Linux syscalls are becoming a de facto standard. They are already implemented by Linux, FreeBSD, Windows and now IncludeOS.


The mind reels.

Not because Linux is a bad kernel, mind you, but because UNIX itself reeks of ideas that were fine when it was originally developed but are clunky at best today. Even Microsoft seems to have given up. The future of computing looks pretty grim to me.


Could you elaborate?


On what? Reasons UNIX isn't so great? There was an entire handbook written about it decades ago (The UNIX-Haters Handbook), much of which is still relevant. Text isn't a universal interface, files are not a good abstraction for everything (not the way UNIX does it, anyway), the permissions system is extremely limited and quite backward for modern problems, the way the file hierarchy is organized hasn't made sense for about 30 years, POSIX has a lot of well-known issues, etc.


Two specific errors that are very well-known:

* Filesystem semantics. What POSIX actually says and what most people think it says in this regard are very different. And both groups acknowledge that the semantics are not very useful for things like high-performance distributed filesystems in HPC.

* Signals. Everything about signals is pretty annoying. You can't install per-thread signal handlers, so you can't do things like "catch" a segmentation fault in one thread without affecting the whole process. And signals aren't files, but poll/epoll only watch file descriptors, so you have to re-expose signals as file descriptors via signalfd...


What does fork() even mean in a modern multi-threaded process?

A lot of new kernel features have simply given up on interacting sanely with it, to the point where the current advice is to never call it unless you plan to exec immediately afterward. But the technical debt this misfeature has accumulated inside the kernel is massive.



