Hacker News | KolmogorovComp's comments

The article skips over the results. Did the design succeed? Which hardware do miners use, and is it evenly distributed? Can I mine Monero on potato hardware?

The design has been working perfectly from 2018 till today.

https://old.reddit.com/r/Monero/comments/1h6e4nk/randomx_5_y...

Most miners use AMD Ryzens. Couldn't tell you the actual breakdown of CPU types in use. Apple's M series CPUs are quite efficient at it too. Bitmain now sells a "Monero RandomX Mining ASIC" which is just a bunch of RISC-V cores, seemingly based on Sophon SG2042 SoCs. There's nothing special or more cost-effective about their product.

You can mine on old smartphones quite easily. I use a bunch of old Android TV boxes myself. Their hashrates are nothing to crow about, but their hashes/watt are still competitive with faster CPUs.

There is a RandomX V2 that will be deployed soon. Its main improvement is even cheaper verification cost.


> [To lib authors] Nobody is obviously in charge in the way a fast-moving production team would mean "in charge," and that creates understandable hesitation around making breaking changes, even when experience has taught us better ways to design these systems.

> This is not a complaint about volunteer maintainers. It is simply one of the ambient risks of building serious systems on a smaller ecosystem.

And so instead of paying the lib authors who already have domain expertise and know their codebase, they chose to rewrite it from scratch/fork without contributing back. So classic.


Now you can develop the lib in the direction that you need, and you have people on payroll who do it; this seems like good risk management.

Author here: I think you are projecting quite a bit. We do in fact hire a lot of people who maintain things, and even pay quite a lot for OSS development on things like the compiler and libraries we care about. But we still have business objectives to achieve, and sometimes it makes more sense to write things that better suit our needs.

Honest question, does Uber need that much R&D? And do they expect the ROI to be positive?

I assume this also includes their self-driving vehicle research and trucking, not just their consumer mobile app dev.

Uber cancelled their self-driving research years ago.

A quick search shows they're not developing their own AVs, but they're heavily investing in them via partnerships and other experimental projects.

Imagine making your product compliant across 100+ countries while regulations, labor laws, tax rules, insurance requirements, and data privacy laws keep changing.

Imagine integrating dozens of payment methods - many of them highly localized - across emerging and developed markets, while dealing with fraud, chargebacks, KYC, AML, and settlement complexities.

Imagine processing trillions of data points every day - rides, location updates, pricing signals, ETAs, traffic conditions, demand forecasts, payments, support events... storing it efficiently, querying it in near real time, generating reports, and keeping the whole pipeline reliable. I have worked in data engineering, and can tell you confidently that this alone requires an enormous R&D budget.

Then there are the apps - not just customer-facing, but driver-facing, courier-facing, merchant-facing, fleet-management, onboarding, support, operations, compliance, finance, and hundreds of internal tools and dashboards.

Then come the integrations. Companies running at Uber's scale generally have hundreds of these - mapping providers, payment processors, banks, identity verification, tax systems, telecoms, customer support platforms, fraud detection, analytics, ERP, CRM, and more.

... And then there are even more...

Real-time routing and dispatch optimization

Dynamic pricing and marketplace balancing

Fraud detection and account security

Driver/rider safety systems

ML models for ETA, demand forecasting, incentives, and churn prevention

Experimentation infrastructure for thousands of A/B tests

Reliability engineering across globally distributed systems

Data centers / cloud optimization at massive scale

Localization across languages, currencies, addresses, and cultural norms

Customer support automation at global scale

Autonomous vehicle research, mapping, and computer vision

... to be fair, this is all I could think of based on my own work experience in related fields... there are definitely many more systems in reality than mentioned above.


Patches / PR

> It’s probably the core reason developers choose GitHub as their main git forge. I get it. It does have its advantages of giving a better experience for reviewing a set of changes. Initially. But what if I told you there was a time when submitting email-based patches was the standard for version control?

The author explains well how you can bear with patches, but not why patches were chosen in the first place. What advantages do they have over PRs? I see none, and I won't waste my precious time working around a process inferior even to GitHub's already subpar PR one.


Here is what email patches are all about:

https://blog.ffwll.ch/2017/08/github-why-cant-host-the-kerne...

I tried email patches with another person myself. The only reason GH won here is that the git people made one fatal mistake: they forgot to include the tree hash and only show the commit hash in the email patch. But the commit hash is useless there. With email patches, commits that people want to treat as "the same" and talk about end up with different hashes: the commit times differ, and there is a committer in addition to the commit author.

We stopped doing email patches because commit hashes became useless for communicating with each other.

GitHub made commit hashes "constant" in a way people care about.

For our purposes, tree hashes would have been much better in practice.

The git user interface is literally "git porcelain". It cuts you for no reason.


That's not the only problem with git send-email by a long way. Even the setup process is extremely painful.

You did not explain why the patch based process is "inferior", neither did you explain why you'd have to "work around" the process!

Learning git format-patch and send-email, configuring SMTP, setting up line wrapping, mailing list etiquette, versioned patch sets...

I believe I've read something by Drew DeVault about it, but I can't find it.

The closest I found is this - https://drewdevault.com/blog/Code-review-with-aerc/ - although it has broken links.


I think there is a strong argument that Gerrit is the current evolution of the patches workflow; many prefer it, and there are a lot of good blog posts explaining why.

I don't know what the justification for emailing patches around is, though; that seems needlessly painful given the alternatives.


Patches allow people to contribute without having an account on the forge.


use tmux.

This is really what LLMs ought to bring in terms of security: the ability to break things faster, now that it is also easier for maintainers to fix them.

This has downsides of course, moving us further into the "everything rots so fast these days" trope, but we live in an adversarial world where the threat is constantly evolving.

Tomorrow (today) the servers and repos won't be scanned by scripts anymore but by increasingly capable models with knowledge of more security issues than many researchers.


You missed the tongue-in-cheek.

And the migrations. Or rather, all the half-started migrations that never get finished, meaning you have to deal with API v1, v2, and v3 all the time.

Those are pervasive in any old and large project, but in my experience especially so in compilers.


Management problem more than anything else, I feel.

Compilers should not have so much churn. You decide on a set of language features, stick to it, and implement. After that, it should only be bugfixes for the foreseeable future until someone can make a solid case for that shiny new feature.

Scope creep is the bane of most projects.


I’ve never understood why insider insight is forbidden; the point of a prediction market is betting on the outcome based on the information you have.

Is that ‘fair’ for everyone? No! Because not everyone has access to the same level of information. But no one forces you to bet either.


Knower. Thank you. The point of all markets is to aggregate information.
