Evented ("async") concurrency, as found in Node, Python, Rust/Tokio, libuv, and OCaml, is based on building chains of events that are waited on by a fast polling mechanism like epoll or kqueue. Any IO call, say a socket read, asks kqueue/epoll to notify some handler to service the event. The flow of events drives execution.
This is distinct from thread pool models, where an IO call still blocks the entire thread. A sufficiently smart scheduler can context-switch away from that thread onto something else while it waits for the IO response, but that is still different from having the event directly wake up a handler.
That's usually what I associate as the difference between an event loop model and a threaded model. You can certainly make your threads highly granular and isolate each distinct blocking operation to its own thread pool, but it's different from actually being notified and woken up for events.
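To make the distinction concrete, here's a minimal sketch of a readiness-driven loop using Python's `selectors` module (which sits on top of epoll/kqueue). The socket pair and the `on_readable` handler are illustrative stand-ins, not anyone's real code: the thread blocks only inside the poller, and the OS-reported event is what wakes the handler, rather than a thread sitting blocked inside `recv()`.

```python
import selectors
import socket

# The poller (epoll on Linux, kqueue on BSD/macOS) reports readiness
# events; registered handlers run only when the OS says data is waiting.
sel = selectors.DefaultSelector()
received = []

def on_readable(conn):
    # Called by the loop when the poller reports `conn` readable.
    received.append(conn.recv(1024))

# A connected socket pair stands in for a real network connection.
a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ, on_readable)

b.send(b"hello")  # makes `a` readable; the poller will report it

# One turn of the event loop: block in the poller, then dispatch.
for key, _mask in sel.select(timeout=1):
    key.data(key.fileobj)

sel.unregister(a)
a.close()
b.close()
print(received)
```

Note that no thread ever blocks in `recv()` here; the only blocking point is `sel.select()`, and the event itself is what routes execution to the handler.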
> I think you're responding to the wrong person as I didn't "try to talk them into using granular thread pools".
Yeah I think my wires got a bit crossed there. Apologies. That's what I get for being snarky while not paying full attention.