Using Gitea does not help if your goal is to allow non-auth'ed read-only access to the repo from a web browser. The scrapers use that access to hit up every individual commit, over and over and over.
We used nginx config to block access to the individual commit pages, while still leaving the rest of what Gitea makes available for non-auth'ed read-only access untouched.
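The shape of it is roughly the following (a simplified sketch, not our exact config; the path patterns assume Gitea's usual /{owner}/{repo}/commit/..., /compare/... and /blame/... URLs, and a Gitea instance proxied on 127.0.0.1:3000, so adjust for your setup):

server {
    # ... other server settings

    # Deny the per-commit, compare, and blame pages that the scrapers hammer.
    # Note this is a blanket 403 at the proxy, for logged-in users too.
    location ~* ^/[^/]+/[^/]+/(commit|compare|blame)/ {
        return 403;
    }

    # Everything else stays open for non-auth'ed read-only browsing.
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

If you want logged-in users to keep seeing those pages, you need something smarter than a blanket 403 in nginx, or you handle it inside Gitea instead.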
Every commit. Every diff between 2 different commits. Every diff with different query parameters. Git blame for each line of each commit.
Imagine a task to enumerate every possible read-only command you could make against a Git repo, and then imagine a farm of scrapers running exactly one of them per IP address.
Ugh Ugh Ugh ... and endless ughs, when all they needed was "git clone" to get the whole thing and spend as much time and energy as they wanted analyzing it.
http {
    # ... other http settings

    # Shared 10 MB zone keyed on client IP, allowing 10 requests/second per address.
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
    # ...
}

server {
    # ... other server settings (the server block also lives inside http {})

    location / {
        # Apply the limit here; bursts of up to 20 requests are allowed and served
        # without added delay (nodelay), anything beyond that is rejected (503 by default).
        limit_req zone=mylimit burst=20 nodelay;
        # ... proxy_pass or other location-specific settings
    }
}
Rate-limit read-only access at the very least. I know this is a hard problem for open source projects that have relied on web access like this for a while. Anubis?
As noted by others, the scrapers do not seem to respond to rate limiting. When you're being hit from 10-100k different IPs per hour, a per-IP rate limit doesn't accomplish much.
Put it all behind an OAuth login using something like Keycloak and integrate that into something like GitLab, Forgejo, or Gitea if you must (a rough proxy-level sketch of that follows below).
However. To host git, all you need is a user and ssh. You don’t need a web ui. You don’t need port 443 or 80.
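For the OAuth-gateway route, one way to sketch it at the proxy layer, rather than configuring Keycloak as an auth source inside the forge itself, is nginx's auth_request module pointed at an oauth2-proxy instance that talks to Keycloak. Purely illustrative; the addresses (oauth2-proxy on 127.0.0.1:4180, the forge on 127.0.0.1:3000) are assumptions, and nginx needs the auth_request module compiled in:

server {
    # ... listen / ssl / server_name settings

    # oauth2-proxy's own endpoints (sign-in page, OAuth callback, etc.).
    location /oauth2/ {
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Subrequest-only check: "does this request carry a valid session?"
    location = /oauth2/auth {
        internal;
        proxy_pass http://127.0.0.1:4180;
        proxy_set_header Host $host;
        proxy_set_header Content-Length "";
        proxy_pass_request_body off;
    }

    # Everything else requires a session; otherwise bounce to the sign-in page.
    location / {
        auth_request /oauth2/auth;
        error_page 401 = /oauth2/sign_in;
        proxy_pass http://127.0.0.1:3000;
    }
}

Keep in mind this gates clones over HTTPS as well; clones over SSH keep working, which fits the point above that a user account and ssh are all git really needs.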