Yeah, but Half-Life 2's Source engine was itself a continuation of GoldSrc, which was itself a continuation of the Quake 1 engine. The lineage is there, but beyond a certain point it's not really Quake anymore.
GZDoom/UZDoom is a similar grey area: it's built on the original Doom codebase, but so many features have been added that it's practically its own distinct engine now. Those forks can even render arbitrary 3D models, which OG id Tech couldn't do until Quake.
We'd have to come up with a definition of Quake :-) FTE has a lot bolted onto it, but the focus is on Quake, Quake mods, lifting some limitations, and making mod dev convenient.
But it is the same overall code structure, the same game, etc.
All these OSS Quake engines, are they Quake? Ironwail, QuakeSpasm, vkQuake?
> Half-Life 2 sure won't play quake maps nor will it play hl1 maps.
Not without modifications, but Half-Life: Source is essentially a tech demo to show that they can be ported easily (if you are OK with dropping some pesky features like randomized wall textures).
AFAIK hl1 maps need to be opened in Hammer, tweaked a bit, and then recompiled to function in hl2. You'd also better have the original .rmfs rather than a .map or, even worse, a .bsp :)
games/quake from 9front =). It can run LibreQuake with Malice as a mod, and that's it. Quake, QuakeWorld and everything vanilla; no modern changes like QuakeSpasm or, worse, DarkPlaces. If someone backported HL2 to the original Quake with reduced physics and it still ran fast enough on a Pentium III, that would be something astonishing.
I see impressive stuff with reimplementations such as Surreal Engine, but they will require far more powerful machines.
If Surreal had a software renderer (not requiring AVX or similar) running on an SSE2 machine, that would earn even more respect, because if your reimplemented engine runs on legacy machines, the portability explodes. Just have a look at ScummVM and how many platforms and OSes it can run on. Or the Super Mario port for PC, where some fork even supports 3dfx under DOS, and GL 1.2, thus runnable under TinyGL with no 3D accelerators and even under Plan 9/9front with custom tweaks.
It took me about 15 years (out of 20 in the industry) to arrive at similar ideas. Interestingly, I had heard all the arguments many times before, but somewhat obscured by the way functional programming speaks of things.
For the purposes of this game, splitting things into core/shell makes certain things super easy: saving and restoring state, undo, debugging, testing, etc.
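To make that concrete, here's a minimal sketch of the pattern in Python (the state shape and event names are made up for illustration, not taken from the game):

    def step(state, event):
        """Pure core: old state + event -> new state. No I/O, no mutation."""
        if event == "tick":
            return {**state, "turn": state["turn"] + 1}
        return state

    def run(initial_state, read_event, render):
        """Imperative shell: all I/O (input, drawing, disk) lives out here."""
        history = [initial_state]          # undo/redo is just old states
        state = initial_state
        while (event := read_event()) is not None:
            state = step(state, event)
            history.append(state)          # saving/restoring is trivial too
            render(state)

Because the core maps plain values to plain values, saving, undo, and tests all reduce to comparing data.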
And one more bit, relevant to this new reality we find ourselves in. Having a bunch of pure functions merged into a very focused DSL makes it easy to extend the system through LLMs: a description of well-understood inputs and outputs fits into limited context windows.
By the way, it is true that dedicated languages never arrived, but FCIS is not a language feature; it's more of an architectural paradigm.
When I ported the Mac version of SimCity to Unix using NeWS/HyperLook for the front end, I naturally had to split it into an engine (the C simulator back-end "client") and the user interface (the PostScript/NeWS/HyperLook front-end "server"), and define a network messaging interface (API) between them.
That made it a lot easier to later port it to X11/TCL/Tk, since it then had a well-defined interface between the simulator and UI, by exposing the same API to TCL (via C calls) as I'd exposed to NeWS (via network messages). It was more monolithic though, integrated into TCL/Tk in the same process instead of talking to the user interface over a socket like a modern web client/server (but the web switches the terms client and server).
More recently I compiled the C++ simulator engine with emscripten into a WASM module, so the simulator and the UI can run in the browser client, with the UI being implemented in TypeScript/canvas/WebGL:
That's a fair question. For me, it's about removing the steep learning curves and gatekeeping from computer science and tech. Because the realities of being a developer have all but consumed my career with busywork.
For example, when I first learned about the borrow checker in Rust, it didn't make sense to me, because I had mostly already transitioned to data-driven development (just use immutable objects with copy-on-write and accept using twice the memory which is cheap anyway). I had the same feeling when I saw the syntactic sugar in Ruby, because it's solving problems which I specifically left behind when I abandoned C++. So I feel that those languages resonate with someone currently working with C-style code, but not, say, Lisp or SQL. We should be asking more of our compilers, not changing ourselves to suit them.
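For what it's worth, here's roughly what that style looks like in Python (a hypothetical Player type, purely to illustrate copy-on-write over in-place mutation):

    from dataclasses import dataclass, replace

    # An immutable record: "mutating" it means making a cheap modified copy.
    @dataclass(frozen=True)
    class Player:
        name: str
        score: int

    p1 = Player("ada", 10)
    p2 = replace(p1, score=11)   # copy-on-write: p1 is untouched
    assert p1.score == 10 and p2.score == 11

Since nothing is ever mutated in place, there's no aliasing to check, at the cost of the extra copies mentioned above.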
Which comes down to the academic vs pragmatic debate. Simple vs easy. Except that we've made the simple complex and the easy hard.
So I hold a lot of criticism for functional languages too. They all seem to demand that developers transpile the solution in their minds to stuff like prefix notation. Their syntax usually doesn't even look like equations. Always a heavy emphasis on pedantry, none on ergonomics. So that by the time solutions are written, we can't read them anyway.
I believe that most of these problems would go away if we went back to first principles and wrote a developer-oriented language, but one that's formal with no magic.
For example, I would like to write a language that includes something like gofmt that can transpile a file or code block to prefix/infix/postfix notation, then evolve the parser to the point that it can understand all of them. Which I know sounds crazy, but that would let us step up to a level of abstraction where we aren't so much concerned with syntax anymore. Our solutions would be shaped to the problems, a bit like the DSL you mentioned. And someone else could always reshape the code to what they're used to for their own learning.
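As a toy illustration (a made-up tuple-based tree, nowhere near a real implementation), the emit direction is already easy; the evolved parser would be the part that reads all three notations back into the same tree:

    # One expression tree, three surface syntaxes.
    def emit(node, style):
        if not isinstance(node, tuple):          # leaf: number or variable
            return str(node)
        op, lhs, rhs = node
        l, r = emit(lhs, style), emit(rhs, style)
        if style == "prefix":
            return f"({op} {l} {r})"             # Lisp-ish
        if style == "postfix":
            return f"{l} {r} {op}"               # RPN-ish
        return f"({l} {op} {r})"                 # infix

    tree = ("+", ("*", "a", 2), "b")
    print(emit(tree, "prefix"))    # (+ (* a 2) b)
    print(emit(tree, "infix"))     # ((a * 2) + b)
    print(emit(tree, "postfix"))   # a 2 * b +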
You're right that FCIS is currently more of a pattern than a syntax. So the language would need to codify it. Normally imperative code would have to run in unsafe blocks, but I'd like to ban those, because they inevitably contaminate everything, leaving us with cruft. One way to do that might be to disallow mutability everywhere. Const is what allows imperative code to be transpiled to functional code and vice versa.
Except then we run into the problem of side effects and managing state, which leads us to monads, which leads us to promises/futures/closures and the async/await pattern (today's goto), which brings us full circle to where we started (nondeterminism), so we want to avoid those too. So we'd need to codify execution boundaries. Rather than monads, we'd treat all code as functional sync/blocking, and imagine the imperative shell as outside the flow of execution, at the point where the environment changes state (like a human editing a cell in a spreadsheet). Maybe the imperative shell should use a regular grammar (type 3 in Chomsky's hierarchy) to manage state transitions like Redux, but not be Turing-complete (so more like a state machine than flow control).
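A sketch of what that non-Turing-complete shell could look like (hypothetical states and events): legal transitions are data in a lookup table, i.e. a regular language over event sequences, rather than arbitrary code.

    # Redux-like, but deliberately weaker than flow control:
    TRANSITIONS = {
        ("idle",  "edit"): "dirty",
        ("dirty", "edit"): "dirty",
        ("dirty", "save"): "idle",
    }

    def shell_step(state, event):
        # Anything not listed simply isn't a legal transition.
        return TRANSITIONS.get((state, event), state)

Because the table can't loop or branch on its own, auditing the shell means reading a finite list rather than tracing code paths.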
Except that state machines are hard to reason about above a few dozen states, especially with nested state machines. Thankfully, state machines can be transpiled to coroutines and vice versa. So we can imagine the imperative shell sort of like a shader with const-only variables. An analogy might be using coroutines in Unity for sprite behavior, rather than polluting the main loop with switch() statements based on their state. I've been down both roads, and coroutines are so much easier to reason about that I'll never go back to state machines.
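Here's roughly that comparison with Python generators standing in for Unity coroutines (sprite and waypoint details invented for illustration):

    def patrol(sprite, waypoints):
        # Each yield is one frame; the "current state" is just where we
        # are in this function, so there is no state enum to keep in sync.
        while True:
            for target in waypoints:
                while sprite["pos"] != target:
                    sprite["pos"] += 1 if target > sprite["pos"] else -1
                    yield                  # wait for the next frame
                for _ in range(30):        # linger at the waypoint
                    yield

    sprite = {"pos": 0}
    behavior = patrol(sprite, [5, 0])
    for _frame in range(8):
        next(behavior)                     # the main loop just pumps it

The equivalent switch()-based version needs an explicit state variable, a waypoint index, and a linger counter, which is exactly the bookkeeping that stops scaling past a few dozen states.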
I should add that I realized only recently that monads can be thought of as enumerating every execution path in the logic, so sacrificing them might be premature. For example, if we have a monad that's a boolean or undefined, and we've written a boolean logic function, then it becomes ternary logic with the monad. Which is related to stuff like Prolog, Verilog/VHDL, SPICE, and SAT solvers: we can treat the intermediate code as a tree, since Lisp can be transpiled to a tree and vice versa. Then we can put the tree in a solver with the categories/types of the monads and formally define the solution space for a range of inputs. Sort of like fuzzing, but without the uncertainty. So the language should formalize monads too, not for I/O, but for solving and synthesis, so that we can treat code like logic circuits (spreadsheets).
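A minimal sketch of the boolean-or-undefined case, with None standing in for undefined: plain two-valued AND becomes Kleene's three-valued AND, and enumerating the inputs maps out the whole solution space, which is the fuzzing-without-uncertainty idea.

    def and3(a, b):
        if a is False or b is False:
            return False        # False wins even against an unknown
        if a is None or b is None:
            return None         # can't decide yet
        return True

    for a in (True, False, None):
        for b in (True, False, None):
            print(a, b, "->", and3(a, b))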
Anyway, this is the low-hanging fruit. I haven't gotten into stuff like atomic operators (banning locks and mutexes), content-addressable memories for parallelizing execution without caching, reprogrammable hardware for stuff like loop optimization, etc. All of this represents the "real work" that private industry refuses to do, because it has no incentive to help the competition enter the walled gardens which it profits from. Fixing this stuff is up to academia (which is being constantly undermined), or people who have won the internet lottery (which presents a chicken and egg problem because they can't win without the thing that gets them to the thing).
Note that even though designing this language would be ambitious, the end result would feel familiar, even ubiquitous. I'm imagining something that looks like JavaScript/PHP but with value-only argument passing via const variables to higher-order methods (or automatic conversion from side-effect-free flow control statements), with the parallel code-handling semantics of Octave/MATLAB, and some other frivolities thrown in like pattern matching, destructuring, really all of the bells and whistles that we've come to expect. It would auto-optimize to the fullest extent possible for a high-multicore machine (1000+ cores, optionally distributed on a network or the internet), so it would run millions of times faster (potentially infinitely faster) than most anything today that we're used to. Yes, we'd still hit Amdahl's law, but not resource limits most of the time. And where some people might see a utopian dream, I see something pedestrian, even boring to design. A series of simple steps that are all obvious, but only from the perspective of having wasted a lifetime fighting the existing tools.
Sorry this got so long. Believe it or not, I tried to keep it as short as possible.
Somebody somewhere suggested doing a clone of Tropico called ElPresidente, which is even cooler.
Btw, Lars, you have endlessly more experience in Elisp than I do. Do you maybe have any ideas/directions on how to make the graphical mode look... A bit more decent and snappy?
Actually, this is a copy of the older version of the interface with a few lines dropped. I failed to generate a decent GIF last night; I will add a screenshot.
Admittedly, while working on this, I did consult my LLM advisor through gptel (https://github.com/karthink/gptel) with a few custom tools set up, which I cannot recommend enough.
Heh, based on my limited and probably wrong experience, the Dutch and Swedes are the best non-native English speakers in terms of both accent and fluency.
Those and Icelandic people. But there's a fun correlation: look at how much US media content is played compared to local content per country, and which countries use subs rather than dubs or voiceovers in cinemas and on TV. https://publications.europa.eu/resource/cellar/e4d5cbf4-a839...
If you have exposure to English media from a young age and don't get a translation, you learn pretty quickly.
Yes, but no one is ever talking about PyPy or Jython implicitly. They are always mentioned by name, because they represent <0.1% of all Python usage and are relegated almost exclusively to niche or experimental use cases for power users.
It’s a bit like arguing people should start saying “homo sapiens” when referencing “people” for added precision. It may be useful to anthropologists but the rest of us really don’t need that. Similarly, CPython is really only a sensible level of precision in a discussion directly about alternative Python implementations.
(although in this case the original post is about implementation internals so I’d give it a pass)
This seems to be literally looking at the details of the C implementation of a Python interpreter. Exactly specifying the implementation makes sense here. You wouldn't say "how does the C++ compiler work" then look only at gcc.
If you know enough about Python to look at how the dict is implemented, you also know the difference between Python and CPython. It's not a beginners intro.
I did not downvote, but I'm guessing that it is perceived as disrespectful to call them failures to the point where they don't even qualify as "alternatives".
But they are technically correct. The language is defined by CPython: it is the standard! None of the others fully meets that standard, which includes quirks! There are known trade-offs with them! They are, literally, attempts to adhere to that standard.