
What is your distinction between "async" and "using threading at a lower level"?


In this case, an async application (via cl-async) served by an async app server (http://wookie.lyonbros.com/), so it's async all the way down using evented I/O, versus an async server (Woo) that farms out all requests to a synchronous thread pool.

I've been asked multiple times why I "didn't just use Woo" by people who don't understand that Turtl's server was async and Woo doesn't support async.


I guess the question is what does that have to do with it? Why not just handle requests that hit the server asynchronously with a library like lparallel and its futures? Isn't it about the response to the request?


> Isn't it about the response to the request?

High level, yes. Mid/low-level, it really depends on what you're doing. If you're serving static files, sure, use Nginx/Woo. If you're running an evented CL application that deals mostly with network I/O, threading is going to be a tank when you need a hummingbird.
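The thread is about Common Lisp, but the same evented model is easy to sketch in Python's asyncio (chosen here purely for illustration): many in-flight I/O waits multiplexed onto one thread, where a thread-per-request design would need one thread per wait.

```python
import asyncio

async def handle(name, delay):
    # Each "request" waits on I/O (simulated with sleep); the event
    # loop runs other handlers while this one is suspended.
    await asyncio.sleep(delay)
    return name

async def main():
    # A hundred concurrent waits share a single thread.
    return await asyncio.gather(*(handle(i, 0.01) for i in range(100)))

results = asyncio.run(main())
print(len(results))  # 100
```

The whole batch finishes in roughly one sleep interval, not a hundred, because no handler ever blocks the thread.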

I built cl-async/wookie as parallels to nodejs/express in Common Lisp.


Oh, are you the developer mentioned in the main post?

If so, I agree a lot with what you said!


I am! Turtl is my baby.


Awesome stuff!


The post talks about using libuv which uses event loops all the way down. I'll be honest, this whole line of questioning feels quite dismissive; OP probably knows exactly _why_ they want an event loop design, you shouldn't try to talk them into using granular thread pools.


I think you're responding to the wrong person as I didn't "try to talk them into using granular thread pools".

I asked for clarification on the distinction between "async" and "using threading at a lower level", which, for me, are equivalent things. The distinction, to the extent there is one, is that "async" often means "an existing library or language feature" versus rolling it myself.


Evented ("async") concurrency, as found in Node, Python, Rust/Tokio, libuv, and OCaml, is based on building chains of events which are waited on by some fast polling mechanism like epoll or kqueue. Any I/O call, say a socket read, tells kqueue/epoll to notify some handler to service the event. The flow of events drives execution.
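A minimal sketch of that readiness-notification pattern, using Python's `selectors` module (which wraps epoll on Linux and kqueue on BSD/macOS); a connected socket pair stands in for real network traffic:

```python
import selectors
import socket

# DefaultSelector picks the best polling mechanism available
# on the platform (epoll, kqueue, etc.).
sel = selectors.DefaultSelector()

reader, writer = socket.socketpair()
reader.setblocking(False)

received = []

def on_readable(sock):
    # The poller woke us up because data is ready, so this
    # non-blocking read succeeds immediately.
    received.append(sock.recv(1024))

# Register interest: when `reader` is readable, run `on_readable`.
sel.register(reader, selectors.EVENT_READ, on_readable)

writer.send(b"hello")

# One turn of the event loop: wait for events, dispatch handlers.
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)

sel.unregister(reader)
reader.close()
writer.close()

print(received)  # [b'hello']
```

The key point is that the handler runs because the event fired, not because a thread sat blocked in `recv` waiting for it.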

This is distinct from thread pool models where you still block the entire thread for an I/O call. A sufficiently smart scheduler can context switch off that thread onto something else while the thread waits for an I/O response, but that is still different from having the event directly wake up a handler.

That's usually what I associate as the difference between an event loop model and a threaded model. You can certainly make your threads highly granular and isolate each distinct blocking operation to its own thread pool, but it's different from actually being notified and woken up for events.
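For contrast, a sketch (again in Python, for illustration only) of the thread-pool model described above: the task simply blocks its worker thread on the read, and the OS scheduler, not an event notification, decides when it resumes.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

reader, writer = socket.socketpair()

def blocking_read(sock):
    # This call blocks the whole worker thread until data arrives;
    # no poller is involved, the OS just parks the thread.
    return sock.recv(1024)

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(blocking_read, reader)
    writer.send(b"hello")          # unblocks the worker
    result = future.result(timeout=5)

reader.close()
writer.close()
print(result)  # b'hello'
```

One parked thread is cheap; the argument in the thread is about what happens when you need thousands of concurrent waits and each one pins a whole thread.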

> I think you're responding to the wrong person as I didn't "try to talk them into using granular thread pools".

Yeah I think my wires got a bit crossed there. Apologies. That's what I get for being snarky while not paying full attention.



