Bounds checks have nothing to do with data races. GP is right, you can add bounds checks. Either using macros or (in C++) with templates and operator overloading.
Alas, in C or C++ you have mutable aliasing, so I'm afraid you do incur a potential data race because your bounds might alias. Be careful out there.
Also remember that in C++ you may get a reference in these cases and if you keep that reference rather than using it immediately now you also have a potential TOCTOU race because the reference was only valid when you did the bounds check.
With mutable aliasing the length might change even though the data you care about did not, and so adding the check means incurring a race which did not previously exist and which certainly the naive C programmer cannot see...
We can definitely mitigate this in the type system for most real world scenarios, but you don't mitigate problems you don't know about, so knowing is what's important.
It was made to sound like Hi-Fi, which stands for "high fidelity", combined with "wireless"; but "wireless fidelity" is a meaningless phrase and not what the name was intended to mean.
> Less and less people feel it, because people old-enough to have used branch-powered VCSes have long forgotten about them, and those who didn't forget are under-represented in comparison to the newcomers who never have experienced anything else since git became a monopoly.
I'm old enough to have used SVN (and some CVS) and let me tell you branching was no fun, so much that we didn't really do it.
Just because there is one project apparently using this in a way that suggests someone could perceive it as a weakness, that doesn't mean it's a real weakness (nor that it's serious).
You can just not move branches. But once you can do it, you will like it. And you are going to use
git branch --contains COMMIT
which will tell you ALL the branches a commit is part of.
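A self-contained sketch of what that looks like, using a throwaway repo and a made-up branch name:

```shell
# Create a throwaway repo with two branches pointing at the same commit.
git init -q -b main demo && cd demo
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m "base"
git branch feature

# Every branch whose history includes this commit -- here both of them.
git branch --contains HEAD
#   feature
# * main
```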
Git's model is clean and simple, and makes a whole lot of sense. IMHO.
Most code most people work on isn't about algorithms at all. The most straightforward algorithm will do. Maybe put some clever data structure somewhere in the core. But for the vast majority of code, there isn't any clear algorithmic improvement, and even if there were, it wouldn't make a difference for the typically small workloads that most pieces of code are processing.
I'll take it back a little bit, because there _is_ in fact a lot of algorithmically inefficient code out there, which slows everything down a lot. But after getting the most obvious algorithmic problems out of the way -- even an O(log n) algorithm isn't much of an improvement over a linear scan if n < 1000. It's much more important to get that 100x+ speedup by implementing the algorithm in a straightforward and cache-friendly way.
Now that is interesting too, because git is very fast for all I have ever done. It may not scale to Google monorepo size -- it would be the wrong tool for that. But if you are talking Linux kernel source scale, it absolutely is fast enough even for that.
For everything I've ever done, git was practically instant (except network IO, of course). It's one of the fastest and most reliable tools I know. If it isn't fast for you, chances are you are on a slow Windows filesystem, additionally impeded by a virus scanner.
The fact that Git has an extremely strong preference for storing full and complete history on every machine is a major annoyance! “Except for network IO” is not a valid excuse imho. Cloning the Linux kernel should take only a few seconds. It does not. This is slow and bad.
The mere fact that Git is unable to handle large binary files makes it an unusable tool for literally every project I have ever worked on in my entire career.
Takes 21 seconds on my work laptop, indeed a corporate Windows laptop with antivirus installed. The majority of that time is simply network I/O. The cloned repository is 276 MB.
Actually checking the kernel out takes 90 seconds. This amounts to creating 99195 individual files, totaling 2 GB of data. Expect this to be ~10 times faster on a Linux file system.
--depth=1 is a hack and breaks assorted things. It's irritating. No, I can't tell you what random rakes I've stepped on in the past because of this. Yes, they still exist.
If you’d like to argue that version control should be centralized, shallow, and sparse by default then I agree.
> If you’d like to argue that version control should be centralized, shallow, and sparse by default then I agree.
I get your sentiment, but I know how working with e.g. SVN feels. Just doing "svn log" was a pain when I had to do it. The "distributed" aspect of DVCS doesn't prevent you from keeping central what you need central. E.g. you can have github or your own hosting server that your team is exchanging through.
The main point of distributed is speed and self-sufficiency, which is a huge plus. E.g. occasional network outages and general lack of bandwidth are still a thing in 2026 (and will remain so to some extent for the foreseeable future).
Now, could git improve and allow some things to be staged/tiered/transparently cached better? Probably, and that's where some things like LFS come in. I don't have a large amount of experience in this field though, because what I work with is adequately served by the out-of-the-box git experience.
Then just do git pull --unshallow whenever you see fit. I normally don't do --depth 1 because cloning repositories is rarely my bottleneck. Just saying that when you need a relatively fast clone time, you can have it.
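A self-contained sketch of that workflow, using a throwaway local repo behind a `file://` URL as a stand-in for a real remote (plain local-path clones ignore `--depth`, so the URL form matters here):

```shell
# Stand-in "remote" with two commits.
git init -q -b main src
git -C src -c user.name=t -c user.email=t@t commit -q --allow-empty -m "first"
git -C src -c user.name=t -c user.email=t@t commit -q --allow-empty -m "second"

# Shallow clone: only the newest commit is transferred.
git clone -q --depth=1 "file://$PWD/src" shallow
git -C shallow rev-list --count HEAD    # prints 1

# Later, backfill the full history on demand.
git -C shallow fetch -q --unshallow
git -C shallow rev-list --count HEAD    # prints 2
```

`git pull --unshallow` does the same backfilling fetch plus a merge of the current branch.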
Git LFS is a gross hack that results in pain and suffering. Effectively all games use Perforce because Git and GitLFS suck too much. It’s a necessary evil.
Yeah this is the very first time I am hearing that templates are "extremely cheap". Template instantiation is pretty much where my project spends all of its compilation time.
It depends on what you are instantiating and how often you're doing so. Most people write templates in header files and instantiate them repeatedly in many many TUs.
In many cases it's possible to only declare the template in the header, explicitly instantiate it with a bunch of types in a single TU, and just find those definitions via linker.
On the few occasions that I have looked at clang traces to try to speed up the build (which has never succeeded), the template instantiation mess largely arose from Abseil or libc++, which I can't do much about.
Until you try to add or modify a feature of the software and run into confusing template, operator, or other C++-specific errors, need to deconstruct a larger part of the code to find out (if possible) where they come from, and then spend even more time trying to correct them.
C++ is the opposite of simplicity and clarity in code.
At 150% scaling, one logical pixel maps to 1.5 physical pixels. When a 1px grid line is drawn, the renderer cannot light exactly 1.5 pixels, so it distributes the color across adjacent pixels using anti-aliasing. Depending on where the line falls relative to device-pixel boundaries, one pixel may be fully colored and the next partially colored, or vice versa. This shifts the perceived center of the line slightly. In a repeating grid, these fractional shifts accumulate, making the gaps between lines appear uneven or "vibrating."
Chromium often avoids this by rendering 1px borders as hairlines that snap to a single device pixel, even when a CSS pixel corresponds to 1.5 device pixels at 150% scaling. This keeps lines crisp, but it also means the border remains about one device pixel thick, making it appear slightly thinner relative to the surrounding content.
For some people such artifacts are not noticeable; for others they are.
I'm one of those people who are super sensitive to the issues you describe, and let me tell you this: the scaling value (like 150%) is just a number.
For the most part, non-ancient renderers (3D but also to a large degree 2D renderers), do not care about physical pixels, and when they do, they care the same amount no matter what the DPI is.
Raster data has a fixed number of pixels, but is generally not meant to be displayed at a specific DPI. There are some rare applications where that might be true, and those are designed to work with a specific display of a given size and number of pixels.
It's especially older applications (like from the 90s and 00s) that work in units of physical pixels, where lines are drawn at "1 pixel width" or something like that. That was ok in an age where the targeted displays were all in the range of 70-100 DPI. But that's not true anymore; today the range is more like 100 to 250 or 300 DPI.
One way to "fix" these older applications to work with higher DPIs is to just scale them up by 2 -- each pixel written by the app results in 2x2 pixels set on the HiDPI screen. Of course, a "200%" display, i.e. a display with 192 DPI, would be a good display to do exactly that, but you can just as well use a 160 DPI or a 220 DPI screen and do the same thing.
It's true that a modern OS run with a "scaling" setting of 150% generally scales up older applications using some heuristics. The important thing to notice here is that the old application never considered the DPI value itself. It's up to the OS (or a renderer) how it does the scaling. It could do the 2x2 thing, or it could do the 1.5x thing but increase font sizes internally, to get sharp pixel-blitted fonts when it has control over typesetting. And yeah, some things can come out blurry if the app sends final raster data to the OS, and the OS just does the 1.5x blur thing. But remember, this is an unhappy situation only for old applications, and only where the OS receives raster data from the app instead of drawing commands. Everything else is up to the OS (old apps) or the app itself (newer, DPI-aware apps).
For newer applications, e.g. on Windows, the scaling value influences nothing but the DPI value (e.g. 150% or 144 DPI) reported to the application, everything else is up to the app.
Sorry, none of that makes any sense to me. Go to Best Buy where they have Surface laptops on display. Open the browser and go to a website where a grid with 1px horizontal lines is displayed. I immediately notice that the lines are disproportionately thin. You may not notice it, and that's fine.
I started out with a longer reply, but let's try to condense it a little: I concede you can still find this issue, especially on less-than-4K displays, but it's becoming less and less of an issue because of improving software. Where you see the issue, it's simply a software problem -- CSS or Chrome or the website/app should be fixed.
I don't see much of this issue anymore, on my 27" 4K screen, set to 175% scaling.
It's logical that if you want to do arbitrary scaling or zooming, and want to keep all distance ratios perfectly intact, you will experience some ugliness from antialiasing (less and less noticeable as you go to ~4K and beyond). That will be so regardless of scaling, even when it's set to 100%.
So if it ought to look good, software simply needs to be written more flexibly! 1px in CSS doesn't mean 1 physical pixel but it is a (quite arbitrary) physical distance, defined as 1/96th of an inch. It's all up to the app and the software stack deciding line widths and how they will actually come out on a screen in terms of pixels lit. They should _respect_ the scaling setting (like 150%) but they also are in full control, in principle, to make it look good.
<hr> lines come out perfectly fine on my screen (175% 4K) with Firefox and Chrome.
1.5px width lines will come out quite bad with 100% scaling, but will look perfect with 150% scaling obviously.
Notice that vector fonts generally look better if you have a reasonably high-DPI display. But on average, it doesn't matter whether you test font sizes of 20pt or 21pt or 17pt or whatever. Why is that? Because font rasterizers already snap to pixels. They properly quantize geometric objects. They don't make arbitrary, stubborn choices like "it must be exactly 1/96th of a virtual inch wide"; they are a little flexible in order to fight antialiasing.
And the more high-DPI monitors there are, the less software will be making such stubborn choices.
I've had a keyboard like that, and with it, xterm (and nothing else) felt like it was displaying the characters even slightly before I had pressed them. It was a weird sensation (but a good one).
Yes, I know this feeling, it's like typing on air. The Windows Terminal has this same feeling. 8 years ago I opened this issue https://github.com/microsoft/terminal/issues/327 and the creators of the tool explained how they do it.
xterm in X11 has this feeling; ghostty does not. It's like being stuck in mud, and it's not just ghostty: all GPU-accelerated terminals on Linux I tried have this muddy feel. It's interesting, because moving windows around feels really smooth (much smoother than in X11).
I wish this topic were investigated in more depth, because inputting text is an important part of a terminal. If anyone wants to experience this with Wayland, try booting straight into a tty rather than into your desktop environment, and then type. xterm in X11 and the Windows Terminal feel like this.