Hacker News | ezoe's comments

I guess it's time to consider ditching GitHub. Everything purchased by Microsoft is destined to rot.

Even aside from this, their reliability has been absolutely terrible since they took it over. It's down so often we had to set up Slack notifications directly to the devs to try to take some of the pressure off our ops teams.

they must be migrating it to hyper-v or something. brutal.


Even aside from that, what are we doing centralizing FOSS project hosting on a closed source Microsoft platform?

It was much better than the closed-source SourceForge that existed before it. A lot of small projects don't have the energy to self-host. Plus, for small projects the barrier to entry is an issue. I recently found a typo in an error message in Garage, but since they run their own Forgejo instance and OpenID never really became a thing, I never created a PR.

Only now, with Codeberg, is there a credible alternative. Of course, large projects do not have this issue, but for small projects GitHub delivered a lot of value.


Well - my perspective is the KDE project, which has a team of capable admins who take care of hosting. The project has always been more or less self-hosted (I remember SUSE providing servers) and even provided hosting for at least one barely associated project, Valgrind. I think Valgrind bugs are still on KDE Bugzilla.

It's admittedly not really practical for most projects, but it could be for some large ones - Rust, for example.


I mostly work on PostgreSQL, which has always self-hosted, but PostgreSQL is a big project; for smaller projects it is much less practical. Even for something decently large like Meson, I think the barrier would have been too big.

But, yes, projects like Rust could have self-hosted if they had wanted to.


We still use KDE's Bugzilla. One of the reasons Valgrind was initially developed was to help with KDE, back when many developers didn't really understand how to use new and delete.

These days sourceware.org hosts the Valgrind git repo, buildbot CI and web site. We could also use their bugzilla. There isn't much point migrating as long as KDE can put up with us.


KDE uses Phabricator, or at least did the last time I contributed. Worked pretty well in the collaboration aspect for submitting a change, getting code owners to review and giving feedback. I was able to jump in as a brand new contributor and get a change merged. The kind of change that would have been a PR from a fork in GitHub.

However, I got the distinct feeling the whole stack would not fit as well into an enterprise environment, nor would the tooling around it work well for devs on Windows machines who just want to get commits out there. It's a perfect fit for that kind of project, but I don't think it would be a great GitHub replacement for an enterprise shop that doesn't have software as its core business.


KDE uses GitLab now, the change-over was mostly in 2020 with some less commonly used features staying on Phabricator a while longer.

I use a self-hosted GitLab instance in a commercial setting (with developers on Linux and Windows) as well. It's a software department of a non-software company. Fairly small. The person or persons in charge of GitLab have set up some pretty nifty time-savers regarding CI and multi-repo changes - I'd prefer a monorepo, but the integration makes it bearable.


> Only now, with Codeberg, is there a credible alternative.

There is no credible alternative, because third-party hosting of the canonical repo is a bad idea to start with. By all means use third-party hosting for more public-facing interaction, but it's about time that developers understood that they need to host their own canonical repos.


We understand, and say no thanks. The benefits don’t outweigh the costs

The benefits now don't outweigh the costs. No doubt, totally agree.

The benefits down the road, when your chosen 3rd party host has been enshittified up the wazoo ... they far outweigh the costs.


Strong agree on this. I think a lot of people who've entered software development in the past decade or so don't appreciate just how bad the available options were when Github launched.

If you blanch at the thought of opening a pull request for a one-line change, just wait until you see what SourceForge looked like: release download pages where you had to pay keen attention to what you clicked, because the legit download button was surrounded by banner ads made to look like download buttons that instead took you to a malware installer. They then doubled down on that by wrapping the Windows installers people published in their own installer, which would offer to install a variety of things you didn't want before the thing you did.


What are some good alternatives for closed source codebases that people have been using and enjoying?

I only ask because I already know of good alternatives for FOSS, but it's the private / work projects that keep me tethered to GH for now.


If it's for work, why do you need GitHub at all?

To me, GitHub only makes sense as a social media site for code. If you are publishing to GitHub with no intent to be open in your code, development process, and contributor roster, then I don't see the point of being on GitHub at all.

Because it's not like their issue tracker is particularly good. It's not like their documentation support is particularly good. It's not like their search is particularly good. Its CI/CD system is bonkers. There are so many problems with using GitHub for its own sake that the only reason I can see to be there is the network effects.

So, with that in mind, why not just set up a cheap VPS somewhere with a bare git repo? It'll be cheaper than GitHub, and you don't have to worry about the LLM mind virus taking over management of your VPS and injecting this kind of junk on you.
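For what it's worth, the bare-repo setup really is just a couple of commands. The hostname and paths below are examples; here is a sketch, with the SSH step shown in comments and the same mechanism demonstrated against a local path so it runs anywhere:

```shell
# On the VPS (hostname is an example), create a bare repo to push to:
#   ssh you@vps.example.com 'git init --bare ~/repos/project.git'
# Locally you would then add it as a remote:
#   git remote add origin you@vps.example.com:repos/project.git

# The same mechanism demonstrated with a local path instead of SSH:
cd "$(mktemp -d)"
git init --bare hub.git                    # the "server side" repo
git init -q work && cd work
git config user.email dev@example.com && git config user.name dev
echo hello > README && git add README && git commit -qm "first commit"
git remote add origin ../hub.git
git push -q origin HEAD                    # push works exactly as with SSH
git ls-remote origin                       # the branch now lives in hub.git
```

That's the whole "forge": everything else GitHub adds (issues, CI, reviews) is separate tooling layered on top.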


What do you use for code review and CI/CD then?

You can do it with Forgejo, you just have to self-host the runners.

I am excited about its potential integration with jujutsu: https://codeberg.org/forgejo/discussions/issues/325

Very true. We have a private git repository running on a server that serves as our master. Works fine for us. We back up to GitHub, but it isn't used in any way in the dev workflow.

GitLab is quite good; the organizational features and CI are also mostly on par with GitHub. You can use the gitlab.com SaaS or self-host.

But compared to GitHub it's much more complicated in terms of UX as it covers more enterprise use cases that GitHub doesn't.

I'm a bit confused what you mean. I have to use GitLab for work and don't see much difference. Some UI elements look a bit more complex than on GH but other than that it's working the same way. Less buggy as well.

Personally I host forgejo for my private apps and have had no issues with that either.


Why do you think this? It really isn't.

It really is… I've worked with GitLab for years, and moving to GitHub was like a breath of fresh air; everything is much less cluttered. Not saying it's perfect, but GitHub just feels simpler.

Self-hosting. If you really need to push remotely, push to a bare repo on your own cloud VM or set up Gogs or Forgejo.

I now start with local repos, and whatever I deem OSS-useful I mirror-push from local to GitHub or anywhere else with Forgejo.

GitHub was never really needed to use git for private projects.
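The mirror-push described above is a single command once the remote exists. The GitHub URL is a placeholder; the demo below uses a local bare repo standing in for the hosted mirror:

```shell
# In your real repo you'd add the mirror target once, e.g.:
#   git remote add github git@github.com:you/project.git   # placeholder URL
# and then periodically run `git push --mirror github`, which syncs all
# branches and tags (and deletes refs you removed locally).

# Demonstrated against a local bare repo standing in for GitHub:
cd "$(mktemp -d)"
git init -q src && cd src
git config user.email dev@example.com && git config user.name dev
echo a > a.txt && git add a.txt && git commit -qm "initial"
git tag v0.1
git init --bare ../mirror.git
git remote add mirror ../mirror.git
git push -q --mirror mirror
git ls-remote mirror                       # shows the branch and the v0.1 tag
```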


I've been thinking about this. If you have any kind of home network with attached storage at all, setting your local Git to just use that seems like a logical step.

And then if you're still paranoid do a daily backup to like Dropbox or something.
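If the storage is mounted as a filesystem, git needs nothing else; a remote can be a plain path. The mount point and backup destination below are examples (the demo defaults to a temp dir so it runs anywhere):

```shell
# NAS_DIR stands in for wherever your network storage is mounted,
# e.g. /mnt/nas/git; it defaults to a temp dir here for demonstration.
NAS_DIR="${NAS_DIR:-$(mktemp -d)}"
git init --bare "$NAS_DIR/project.git"     # the repo lives on the NAS
cd "$(mktemp -d)"
git init -q . && git config user.email me@example.com && git config user.name me
echo notes > notes.txt && git add notes.txt && git commit -qm "notes"
git remote add nas "$NAS_DIR/project.git"
git push -q nas HEAD
# For the paranoid daily offsite copy, a crontab entry along these lines:
#   0 3 * * * rsync -a /mnt/nas/git/ "$HOME/Dropbox/git-backup/"
git ls-remote nas
```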


Sourcehut.

Uses the same email-based patch workflow as Linux. Takes an hour to learn, and they have helpful guides: https://git-send-email.io/. No JavaScript.
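The flow is roughly: commit locally, turn commits into patch mails with `git format-patch`, then mail them with `git send-email`. The mailing-list address below is a made-up example:

```shell
# Make a commit in a scratch repo, then format it as a patch mail:
cd "$(mktemp -d)"
git init -q . && git config user.email me@example.com && git config user.name me
echo fix > fix.txt && git add fix.txt && git commit -qm "fix: example change"
git format-patch -1 HEAD      # writes 0001-fix-example-change.patch
# Sending is one more command (not run here; list address is an example):
#   git send-email --to="~someuser/project-devel@lists.sr.ht" 0001-*.patch
```

The patch file is a plain email with the diff inline, which is why the whole workflow degrades gracefully to any mail client.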


Azure DevOps <shudder/>

Forgejo is super easy to set up on a 1-2 core VM. Make a Compose file and put Caddy in front for TLS; the whole thing is less than 50 lines and costs about $10-$15 a month.
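For the curious, a hypothetical minimal layout of such a Compose file (image tags, domain, and volume names here are illustrative, not a vetted deployment; check the Forgejo docs for the settings you actually need):

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin a specific release in practice
    volumes:
      - forgejo-data:/data
    restart: always
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
    restart: always
volumes:
  forgejo-data:
  caddy-data:
```

The Caddyfile can then be as small as `git.example.com { reverse_proxy forgejo:3000 }` (3000 is Forgejo's default web port); Caddy obtains and renews the TLS certificate automatically.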

Self-hosted Gitea is my recommendation. It has everything one needs and is super lean and resource-saving. You can run it easily on a 1GB VPS; I even ran it for a while on 512MB.

I really like the end-user experience when I stumble on Gitea repos online, too.

GitLab. We self-host ours and it's rock solid.

Codeberg seems to have legs. The license is different; best read it.

Gitlab.

For personal stuff I hopped over to source hut and it's fine.

Simple, direct, and I really like the email based workflows.


Funny that people said the exact same thing back when GitHub was originally acquired [0], I wonder how many actually went through with their words and ditched it. I bet GitHub has more users today than ever before though.

[0] https://news.ycombinator.com/item?id=17227286


>Why MS cares your private repositories? give a reason? Maybe using your code to train their programming robot, lol

>Whether they will abuse the trust of having complete and total access to every private repo and all of the code inside or not remains to be seen

>MS is pushing their ads within their own OS more and more, will GitHub get the same treatment[...]?

Funny.


Just speaking anecdotally, Codeberg today feels like the Gitlab of yesteryear, except that Codeberg has projects on it. Someone who is contributing to open source will eventually need to create a Codeberg account.

The top comment of the linked thread ("If Microsoft shares SSL certs with NSA they could do MITM attacks") is something that I find much more likely today than back in 2018.


They can have the new users pushing out sloppy projects.

The serious users leaving will definitely dent profitability. And with GitHub being a social network, that could start a death spiral.


What profitability? I'm pretty sure GitHub is a loss leader to push people to Azure and cloud services. I also don't know anyone who actually uses GitHub as a social network even though it ostensibly has such features.

The social features were GH's early secret sauce that contributed heavily to its stickiness and why it eventually dominated. IMO.

I should have said "will dent whatever profitability." I'm not sure it exists either. From the outside, it would seem crazy that it wouldn't be profitable with all the Enterprise stuff (and it's not like you can throw 10k engineers at whatever GH is doing).


I am surprised it took them this long to destroy GitHub. Usually they manage to turn acquired companies into garbage pretty fast.

Are there any obvious successors to GitHub yet?

There are a few alternatives, but none have the critical mass of users yet.


For open source I would say Codeberg looks the most promising. There is also SourceHut, but it seems like Codeberg has the mind share.

> The native NVMe driver (nvmedisk.sys) replaces the legacy storage path that has routed NVMe commands through a SCSI translation layer since before NVMe SSDs existed.

What? What have Microsoft been doing in the decade since NVMe became available on consumer-grade motherboards?


Seriously, that was my thought too. Even if we were to stretch credibility and suggest that general consumers don't care about this sort of thing, they just released this for Windows Server in the past year?

Windows really is a toy of an OS. It continues to blow my mind that people want to use it as a server OS.


Because it offers VMS niceties that UNIX clones still don't, and stuff like AD and SMB without manually going through configuration files stored somewhere, which differ across UNIX flavours.

Although I do concede UNIX has won the server room, and Windows Servers are mostly about AD, SMB, IIS, SharePoint, Dynamics, SQL Server.

Naturally some of those can be outsourced into Azure services that Microsoft will gladly provide.


And to run Windows-only apps like some embedded toolchains. Although that gives us motivation to move to GCC, because Windows is annoying to use in CI/CD and GCC is good enough compared to that other toolchain.


Which VMS niceties does it offer?


Proper file locking, asynchronous operations across everything, ACL based security, proper ABI.

Not being an OS from C to C as the main programming model.

And then on top, multiple levels of sandboxing, including virtualization of drivers and kernel modules.

Ah and RDP is much nicer than X Windows or VNC.


Other than possibly a proper ABI, and yes, a tiny handful of file operations that could theoretically block and aren't available through io_uring, like ioctl and splice, Linux has the rest.


In security? Not really, unless you are doing immutable deployments with rootless containers, no shell access, which at the end of the day isn't UNIX any longer.

And which Linux exactly? Plus, unless you're doing C or C++, you most likely aren't using those APIs.

Anyway, the differences of bare metal servers don't matter in the days of cloud where the actual nature of the kernel running alongside a type 1 hypervisor hardly matters to userspace.


Your fanboi attitude is very welcome on /.

And billions spent and earned clearly shows where the moniker 'toy' doesn't apply.

BTW year of Linux Desktop when?


  What have Microsoft been doing in the decade since NVMe became available on consumer-grade motherboards?
They were adding Copilot to everything, and implementing advertising tiles, and making sure it won't work without the appropriate TPM DRM, and forcing sign-in with a MS account to install it, and so on.

But they weren't ignoring NVMe entirely, they've got Rohan the intern working on it, and as soon as someone replies to his StackExchange questions he can start coding up the driver.


I am guessing that, like NTFS, it's a huge legacy spaghetti codebase that nobody understands and thus nobody wants to touch.


I hope so. I prefer my evil to be ineffective.


there is so much to get angry about in the world at the moment.. I'm surprised that this one even registered with me.


They all feel like they're parts of a single expansive pattern.


So Weave claims AI-based development increases git conflict frequency.

Given that most git conflicts are easy to solve by a person who wasn't involved in the changes, even by a person who doesn't know the programming language, it's natural to let AI handle them.

Resolving a git conflict is most often simple text manipulation that doesn't need much context. I see no reason current AI models can't do it.
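For anyone who hasn't looked closely, this is all a "conflict" actually is: plain text with markers, exactly the kind of thing a model can read. A minimal reproduction:

```shell
cd "$(mktemp -d)" && git init -q .
git config user.email me@example.com && git config user.name me
printf 'greeting = "hello"\n' > app.txt
git add app.txt && git commit -qm "base"
git checkout -q -b feature
printf 'greeting = "hi there"\n' > app.txt
git commit -qam "feature wording"
git checkout -q -                          # back to the original branch
printf 'greeting = "good day"\n' > app.txt
git commit -qam "main wording"
git merge feature || true                  # both sides touched the same line
cat app.txt   # <<<<<<< HEAD / ======= / >>>>>>> feature markers, plain text
```

Resolving it means editing `app.txt` to the intended text, then `git add app.txt && git commit`; nothing in the format requires understanding the language.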


When you start seeing diffs in terms of entities instead of lines, that is what interests me: you get much better semantic info.

If you have a language-specific parser, you can make a merge algorithm like Weave's. But the bigger win isn't resolving the conflicts git shows you; it's catching the ones git misses entirely. In those cases Weave is much better, and there are also other things like confidence-scored conflict classification. You should try it out; it improves the agent's performance, especially if you are a power user.


Probably the old habit of batch processing.


It seems to me that this is just an issue of diff features. Git can be extended to show semantic diffs of binary files; it doesn't technically need a completely new VCS.

As git is the most popular VCS right now and will continue to be for the foreseeable future, I don't think incompatibility with git is a good design choice.
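Git does in fact already have a hook for this: a `textconv` filter declared in `.gitattributes` renders binary files as text for diffing. A sketch using `od` as a stand-in for a real format-aware converter (projects commonly plug in tools like exiftool or pdftotext here):

```shell
cd "$(mktemp -d)" && git init -q .
git config user.email me@example.com && git config user.name me
# Route *.bin files through od when diffing, instead of "Binary files differ":
echo '*.bin diff=hex' > .gitattributes
git config diff.hex.textconv 'od -An -tx1'
printf '\000\001' > data.bin
git add . && git commit -qm "binary v1"
printf '\000\002' > data.bin
git diff -- data.bin      # a readable hex diff of the changed byte
```

The conversion only affects diff display; the stored objects stay binary, so this composes with any existing repo.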


Indeed, if Lix were to target version-controlling code, incompatibility with git would be a "dead on arrival" situation.

But Lix's use case is not version-controlling code.

It's embedding version control in applications; hence the reason Lix runs within SQL databases. Apps have databases. Lix runs on top of them.

The benefit for the developer is a version control system within their database, and exposing version control to users.


I've been hearing this for as long as I can remember and I'm not young anymore.


If anyone ever wonders why they don't see productivity improvements, they really need to read The Mythical Man-Month.

A garage duo can out-compete a corporation because there is less overhead. But a garage duo can't possibly match the sheer volume of work a corporation outputs.


In my view, the reasons why LLMs may be less effective in a corporate environment are quite different from the human factors in The Mythical Man-Month.

I think the reason LLMs don't work as well in a corporate environment with large codebases and complex business logic, but do work well in greenfield projects, is linked to the amount of context the agents can maintain.

Many types of corporate overhead can be reduced using an LLM. Especially following "well meant but inefficient" process around JIRA tickets, testing evidence, code review, documentation etc.


I've found that something very similar to those "inefficient" processes works incredibly well when applied to LLMs. All of those processes are designed to allow for seamless handoff to different people who may not be familiar with the project or code which is exactly what an LLM behaves like when you clear its context.


The limited LLM context windows could be an argument in favor of a microservices architecture with each service or library in its own repository.


That just moves the complexity to the interactions between repositories, where it’s more difficult to understand and fix.


>>there is less overhead.

There have been ways to reduce overhead throughout the history of our industry. Unfortunately, almost all of them involve using productive tools that would in some way reduce the head count required for large projects.

The way this works is that you eventually end up with languages like Lisp, Perl, Prolog, and then someone comes up with a theory that programming must be optimised for beginners and power tooling must be avoided. Now you are forced to use verbose languages, and writing, maintaining, and troubleshooting take a lot of people.

The thing is, this time around we have a way to make code by asking an AI tool questions. So you get the same effect, but now with languages like JS and Python.


the productivity improvement is the Big Lie


But would you want to run these Win32 programs on Linux for daily use? I don't.


Depends on what task you're doing, and to a certain extent how you prefer to do it. For example, sure, there are plenty of ways to tag/rename media files, but I've yet to find something that matches the power of Mp3tag in a GUI under Linux.


Have you tried kid3 (https://kid3.kde.org)? It has both a GUI and a CLI.

From a quick glance at the feature lists it looks quite comparable.


I just did. Have you actually tried using them side by side? It's hard for me to look favorably on kid3. I gave myself 5-10 minutes to try to learn it, and a lot of what seemed like obvious ways to accomplish a task like "rename these files using their tags" didn't do anything. I even broke out the manual, which didn't help or explain whether there was a different mindset I needed to adopt. I could manage to manually edit tags and rename file by file, but that seems like table stakes for anything that handles media files (even a file manager), let alone an application that is meant to be a specialist in that area, and we're not into any advanced functionality yet.

More generally, though, it's not about one specific type of tool; it's that Windows and Linux have been different ecosystems for decades, and that has encouraged different strengths and weaknesses. Catching up would mean a lot of effort even if you're just aiming to be equivalent, or using projects like WINE to blur the lines and run the Win32 tool as though the specific platform doesn't matter so much.


I get that you wanted to make a general point. In case you're still curious about this specific case:

It's been a long time since I last used Mp3tag, so I tried the latest Mp3tag in WINE (seems to work nicely) for comparison. I think the basic operations (editing tags) actually do work similarly: in both you select file(s), edit the tag you want to in the GUI and changes get applied to any selected file(s) when you press save.

Renaming filenames based on tags also works according to that principle in kid3, you select the files you want to change (rename) and then use the `Format (arrow pointing from tag fields to filename field)` to specify what the filename pattern should look like and then use the `Tag 1` or `Tag 2` button to fill the placeholders from the (e.g.) ID3v1/ID3v2 tag, and click save to apply the changes.

In Mp3tag you'd also highlight the files, but unlike other tag editing operations you use the `convert->tag to filename` menu item/button, which pops up a wizard asking for the pattern and confirmation.

I'm guessing coming from Mp3tag you tried to use kid3's `Tools->Apply filename format` option, which I believe ensures the filename doesn't include special characters by doing string replacements (these are configured in the settings under `Files->Filename format`). I was wondering if that was perhaps confusingly named, so I had a look in Mp3tag to see what this functionality was called there, but I couldn't find it. I'm sure it's possible somehow, but it probably involves scripting [1].

I noticed that Mp3tag seems to be able to automatically fetch album art whereas in kid3 you need to get the image yourself. I suspect more advanced functionality (scripting etc) will work differently in the two tools.

[1] https://community.mp3tag.de/t/character-replacement-for-tag-...


Gamers have no other option, and thanks to Valve, game studios have no reason left to bother with native Linux clients.

Just target Windows, business as usual, and let Valve do the hard work.


> Gamers have no other option, and thanks to Valve, game studios have no reason left to bother with native Linux clients

But they do test their Windows games on Linux now and fix issues as needed. I read that CDProjekt does that, at least.


CDProjekt releases native linux builds.


I don’t think Witcher 3 or Cyberpunk 2077 have Linux builds available for the common folk? Cyberpunk has an ARM64 Mac build, though.


Huh, I could have sworn Witcher 3 did, but maybe I am misremembering it merely releasing without DRM.


Witcher 2 had a Linux native build, but never Witcher 3.


Not really, most leave that to Valve.


> ...game studios have no reason left to bother with native Linux clients.

How many game studios were bothering with native Linux clients before Proton became known?


That's exactly the point. They weren't, so a Linux user didn't have an option to run a native Linux client in preference to a Win32 version.

That goes back to address the original question of "But would you want to run these Win32 software on Linux for daily use?"


More than now, I own a few from the Loki Entertainment days.


Well, not having Proton definitely didn't work to grow gaming on Linux.

Maybe Valve can play the reverse switcheroo out of Microsoft's playbook and, once enough people are on Linux, force the developers' hand by not supporting Proton anymore.


For making music, as much as I love the free audio ecosystem, there are some very unique audio plugins with specific sounds that will never be ported. Thankfully, bridging with Wine works fairly well nowadays.


I knew a guy whose main editor for his day to day was Notepad++ running in Wine.


I use some cool ham radio software, a couple SDR applications, and a lithophane generator for my 3d printer. It all works great, if you have a cool utility or piece of software, why wouldn't you want to?


> In the US, for example, shutdowns would be hard to enforce.

Is it, really? The US government has tanks, bombers, missiles, and tactical nukes, while "a well regulated Militia" has petty rifles and Molotovs.

It's very easy for the US government to cause a state-wide power blackout, effectively shutting down the Internet.


The US hasn't really won any war for the long term since WW2. It turns out it's hard to change people's opinion by bombing them. Equipment is good at destroying the other side's factories and making people afraid of you (though even that's usually done with on-the-ground police boots), but it can't actually make people agree with your side; in fact, it seems to usually have the opposite effect. They can only hold control temporarily, as long as they apply massive military pressure. As soon as they let up the pressure, they lose.

It probably has something to do with the strict top-down control structure. It's a Linux vs Microsoft situation. Large organisations, regardless of type, cannot innovate.


The quote has nothing to do with a well regulated militia. It's about whether the technical ability for internet shutdowns has been built or not.

>A country’s ability to shut down the internet depends a lot on its infrastructure. In the US, for example, shutdowns would be hard to enforce. As we saw when discussions about a potential TikTok ban ramped up two years ago, the complex and multifaceted nature of our internet makes it very difficult to achieve. However, as we’ve seen with total nationwide shutdowns around the world, the ripple effects in all aspects of life are immense. (Remember the effects of just a small outage—CrowdStrike in 2024—which crippled 8.5 million computers and cancelled 2,200 flights in the US alone?)

>The more centralized the internet infrastructure, the easier it is to implement a shutdown. If a country has just one cellphone provider, or only two fiber optic cables connecting the nation to the rest of the world, shutting them down is easy.

Nukes and tanks weren't built for internet shutdowns, and it's a ridiculous idea that if the US government decided to do an internet shutdown that they would decide to use a nuke for that.


Tactical nukes are a big no-go, so don't expect them to be ever used for something like this.


Oh! You don't need any of those. I'm sure that they have enough tactical EMP devices to do the job.

PS: ElectroMagnetic Pulse weapons for the TLA-haters here.


Hey bro! This is the real English bro! No way we can write like that bro! What? - and ;? The words like "furthermore" or "moreever"? All my homies nver use the words like that bro! Look at you. You're using newline! You're using ChatGPT, right bro?


Given the eloquently natural words in this post, I conclude you must be this thread's prompt engineer! Well done, my fellow Netizen. Reading your words was like smelling a rosebud in spring, just after the heavy snow fell.

Now, please, divulge your secret (your verbal nectar, if you wish) so that I too can flower in your tongue!

