Hacker News | littlecranky67's comments

Same shit happened to me - got my Google account blocked overnight and locked out of most of my digital life. Learned my lesson and ungoogled ASAP.

My CCX13 (dedicated cores) went from 15€ to 20€ now. Looking at Netcup as an alternative, more cores and more RAM for 12€ - does anybody have experience with their root (KVM'ed) servers?

It's OK until you run into support troubles. I would say Scaleway/OVH might be better contenders (but they are French hehehe).

Well, personally I do not expect support for a 12€/month product. Given the cost of labour in Germany/Europe, just talking to a person for 10-20 minutes destroys their profit margin for years. I DO expect uninterrupted service, though.

Implementing lightning (a bitcoin layer-2 solution) is probably easier, or at least on par. And way more accessible. Look at LNURL, lightning addresses.

Transaction fees for bitcoin sent via the lightning network (a layer-2 solution) are in the "less than a cent" category and are settled in a few seconds. This is not fiction; this is, for example, how Trump made his for-the-cameras bitcoin payment during his campaign.

Lightning isn't even a good solution for most diehard bitcoin users. It's a failed project.

It would take 27 years to onboard every internet user to the lightning network unless you start adding level-3 aggregators, and at that point you lose all the benefit of it being on-chain at all.

It would take almost 2 years just to onboard every American, assuming there were zero other bitcoin transactions during that time. Then you need to add the fees for the on- and off-ramps to the individual transaction fees to get the real cost per transaction. Note that these would go up quite a bit, as competition between lightning and non-lightning uses of the block space would drive prices higher.
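As a back-of-the-envelope check of these timelines - all figures here are assumptions for illustration (roughly 7 on-chain transactions per second of capacity, one channel-open transaction per person, and no other on-chain activity), not measurements:

```python
# Rough onboarding-time estimate: one on-chain channel-open per person,
# assuming bitcoin's entire on-chain throughput is dedicated to onboarding.
# TPS and population figures are assumptions for illustration.

TPS = 7                       # assumed on-chain transactions per second
SECONDS_PER_YEAR = 86_400 * 365

def years_to_onboard(users: int, tps: float = TPS) -> float:
    """Years needed if every user makes exactly one on-chain transaction."""
    return users / (tps * SECONDS_PER_YEAR)

us_population = 330_000_000          # assumed US population
internet_users = 5_300_000_000       # assumed global internet users

print(f"US:       {years_to_onboard(us_population):.1f} years")
print(f"Internet: {years_to_onboard(internet_users):.1f} years")
```

With these assumed inputs the sketch lands near the "almost 2 years" and "decades" figures above; higher real-world throughput or fewer users per transaction would shrink them proportionally.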


The throughput is arbitrarily limited by bitcoin's current block size, which hasn't been increased since Satoshi's era.

Most cryptocurrencies have an adaptive block-size mechanism which allows blocks to grow to a reasonable size that could facilitate such an onboarding of users. So it isn't a technical problem; it is just a question of bitcoin's current leadership, which is controlled by companies like Blockstream.


People have been debating the block size for a very long time now, and there doesn't seem to be any large desire to change it. So while the ability to increase it exists, changing anything that fundamental about bitcoin seems to be a non-starter - and as long as that is true, lightning is pointless as a solution for the masses.

Even if you increase the block size 100x, though, you're still not improving the numbers much, since my very generous numbers ignore activity outside of lightning and assume a single on-chain transaction for every user and a perfect network.


It is not the block size. The throughput of transactions on the lightning network is not at all limited by the block size or the bitcoin blockchain.

The ability for users to access the lightning network is limited by the block size, since you need on-chain transactions to open channels.

> you need transactions to open channels

Maybe your understanding of lightning is wrong here. Yes, you open channels, and transactions in lightning need open channels, but you do NOT NEED to open channels for specific transactions. You open channels once and transact over them for years. I run a lightning node with more than 15 channels (each to a different lightning node) that are all older than 1 year (I route payments, so I have way more channels than needed). You can batch-open channels, i.e. I could have opened all my 15 channels with a single on-chain transaction. The Taproot upgrade made those on-chain "commitment transactions" way smaller (in bytes) than they needed to be in the past.
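A rough sketch of why batch-opening helps on-chain footprint - the vbyte figures below are approximations for a single-input segwit transaction with ~43-vbyte channel-funding outputs, assumed here purely for illustration:

```python
# Approximate vbyte comparison: 15 separate channel-funding transactions
# vs. one batched transaction carrying 15 channel outputs.
# All size constants are rough assumptions, not exact serialization sizes.

TX_OVERHEAD = 11    # assumed fixed per-transaction overhead, vbytes
INPUT_SIZE = 68     # assumed single segwit input, vbytes
OUTPUT_SIZE = 43    # assumed channel-funding (or change) output, vbytes

def funding_tx_vbytes(n_channels: int) -> int:
    """Approximate size of one tx funding n_channels, plus 1 change output."""
    return TX_OVERHEAD + INPUT_SIZE + OUTPUT_SIZE * (n_channels + 1)

separate = 15 * funding_tx_vbytes(1)   # 15 individual channel opens
batched = funding_tx_vbytes(15)        # one batched open for all 15

print(separate, batched)  # the batched open is a fraction of the total
```

The point is structural rather than exact: per-transaction overhead and the spending input are paid once instead of 15 times, so block-space use (and fees) per channel drops sharply.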


No misunderstanding at all.

Go read what I've said again. My timelines are based entirely on each user making a single transaction (opening a single channel).


Once channels are open, users on the lightning network can transact back and forth without any new channel opens/closes, and thus no on-chain settlement. Hence: throughput in lightning is not at all limited by bitcoin's transaction throughput.

Throughput depends on users, and the number of users is limited by on-chain transaction limits - that is what I'm saying.

Your mistake is that you overestimate the number of people who want to be self-custodial. You don't need to onboard every human being in the world on-chain.

Given the US example, that would be several years in the absolute best case for lightning to onboard even 5% of individuals. Lightning is doomed from the start.

And if you don't care about self custody then the overhead of using a blockchain is a waste.


It is not black/white. It is OK to have the freedom to become self-custodial at any time, but not everybody needs to transact in self-custody all the time.

Taproot was another major step that enables lightning upgrades in future versions (such as zero-fee channel opens), and it is barely discussed. The figure of X years for onboarding Y people is not accurate, as it disregards all the major developments of the last 5 years.


You need enough users for providers to bother making it a payment option, so it really is black and white. You might get a few niche providers offering it as a payment method without a critical mass of users, but most companies aren't going to invest time and effort into implementing a payment system a tiny percentage of users have access to. And if I need to trade money with my friends, the low percentage means that in the vast majority of cases they aren't going to have lightning available either.

All you are describing is a chicken-and-egg problem of adoption - nothing to do with the technology itself. Adoption IS growing, so we will see.

It will be on bitcoin, and bitcoin only - except the payment will be done with Lightning. And the lightning network will probably be used to send a stablecoin, utilizing taproot assets. But surely not one of the shitcoins that x402 is built on (Ethereum, Solana & Co.) :)

Eager to see how that will work with existing laws. In a lot of EU countries, at least, any advertisement has to be explicitly marked as such - sponsored content, too. So the AI will have to highlight that.

I don't think that will ever happen. All you need is a trivial browser extension with a locally run, very primitive LLM that takes the output of the commercial LLM and removes all advertisement. An ad-blocker AI, so to speak.

Yes, there will be people not using ad blockers, just as there are today. But no ad blocker was ever able to remove SEO spam from Google's results; all they did was hide the obvious ads. They didn't improve the search experience.
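The extension idea above can be sketched as a post-processing filter - everything here is hypothetical, and the keyword check stands in for whatever small local model would do the actual classification:

```python
# Toy "ad-blocker AI": drop paragraphs of LLM output that a local
# classifier flags as advertising. The keyword heuristic below is a
# placeholder for a locally run model.

def looks_like_ad(paragraph: str) -> bool:
    """Placeholder classifier; a real extension would call a local LLM."""
    markers = ("sponsored", "use code", "limited offer")
    return any(m in paragraph.lower() for m in markers)

def strip_ads(llm_output: str) -> str:
    """Keep only the paragraphs the classifier does not flag."""
    paragraphs = llm_output.split("\n\n")
    return "\n\n".join(p for p in paragraphs if not looks_like_ad(p))

clean = strip_ads("Here is your answer.\n\nSponsored: Try AcmeVPN today!")
print(clean)  # only the answer paragraph remains
```

Of course, ads woven into the answer text itself (the LLM equivalent of SEO spam) would be much harder to strip than clearly delimited sponsored blocks - which is the point of the comment above.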


There will always be ad-free solutions, but the most popular (or simply, most market-capturing) tools will be the ones most people flock to. Better services than many market leaders have come along, but the leaders kept growing.

Unless this bubble popping is truly catastrophic, I don't see this ending a different way.


> Basic web search has become so horrible

It is not horrible - it has reached the point of absolute excellence. Not for you, the user, but at making money for its creator. Remember, no one paid for web search, so you are the product. If you are the provider of a web search engine, the point is not to deliver the best search result to the user, but to maximize the amount of money you can make from the sum of the world's population. And Google did very well at maximizing its profits without users turning away.


I don't think that is a fair point - the manipulation was done on a topic for which there are hardly any other sources (a hot-dog-eating-competition winner). If you want to manipulate what an AI tells you the F-150 street price is, you will compete with hundreds of sources. The AI is unlikely to pick yours.

The marketing game is already moving to game LLMs. Somehow you have to get what you want to have into the training data or the context window.

Currently it is probably just mostly quantity that does the trick w.r.t. training data. So e.g. spam the Internet with "product comparisons" featuring your product as the winner.


Shifting the balance on training data seems like the wrong approach vs focusing on showing up in agent search tool results and swaying them there.

It's been a long time since agents couldn't even conduct web searches and could only riff off their model. But the examples in this thread are things an agent would search for immediately, and agents are leaning harder on tool calls and external info over time, not less.


I'm saying it over and over: AI is not killing dev jobs, offshoring is. The AI hype happens to coincide with the end of the pandemic, when lots of companies went work-from-home and are now hiring cheaper devs around the world.
