IBM has more revenue than Oracle, even though we hear far less about it (it's still about five times smaller than Apple). It also has more employees than Microsoft or Alphabet, but it has tighter profit margins than other tech companies.
IBM is not in consumer products nor services so we do not hear about it.
Oracle, TSMC, and SpaceX aren't in consumer products or services either, but we hear about them.
IBM was declining for 10 years while the rest of tech was blowing up, and IBM does not pay well, so other than it being a business in decline, there wasn't much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys, trading on the goodwill of their brand's legacy to help swindle taxpayers, so a dying rent-seeking business, as opposed to a growing innovative one, was the assumption I had.
It's a very different company after the PwC purchase. Around 1/3 of their revenue comes from consulting, which tends to push the valuation down due to its relatively low margins compared to software. It also inflates the employee count.
There are several ways of looking at law and order.
One way is that the law applies to everybody equally. That has been how it works, imperfectly, in democratic countries for many years.
There is another way of working where the law is not blind: laws are applied based on who is affected. This is what big tech and the ultra-rich have been advocating for. The law applies differently to nobility and aristocrats than to the working class.
So, for all these big tech companies the law is clear: I can copy from you, but you cannot copy from me.
(That is horrifying, in case anyone needs me to spell it out.)
A third way of looking at it is that you can't just blindly copy arguments when the situations are clearly different.
Nobody, not even Anthropic, is arguing that they should be able to host other people's paid content for free. The crux of their fair-use defense is that models are transformative works, just like parodies or book reviews, and hence should be treated as fair use.
You can't just take a pile of books (no pun intended) and turn that into Claude in a day with 30 lines of Python, there's a lot of work and know-how on the Anthropic side that goes into making a good LLM.
If lossy-compressed transcodes of ripped movies are not "transformative works" and can even get people jailed, then lossy-compressed text of ripped books and websites isn't either.
There is a lot of know-how going into a good DivX rip too, you know.
And it enabled such novel uses as Popcorn Time, with flourishing business opportunities.
It's an exaggeration for sure, but I don't think it's a stretch to believe Anthropic spends considerably more effort on data scraping and curation than anything else.
> Unfortunately it is their employees that are paying the price of leadership
Neoliberalism at its finest. The world moving towards conservatism has left us with this model: the working class takes the hit of every crisis, small or big.
Consumers have always paid with data, not money. That is just how we are groomed; in fact, data turns out to be more valuable to companies. Sora, though, doesn't work that way: it costs the company a lot while yielding no useful data. It was always a vehicle to raise the company's image and nothing else. The only way it's useful for them is to show the user count to investors in their next funding round. It served no other purpose, but the market changed around them.
Yes. I have noticed that it is close to impossible to get good deals on flights, hotels, or even good discounts online. Sellers have all the consumer information they need to extract the maximum amount from us. Dynamic pricing makes it a personalized experience, so I personally pay the maximum I possibly can.
Consumers never pay for stuff on the internet. FB, Insta, TikTok, Google products, Reddit, Snapchat. This is not a new realization that OpenAI is having.
> Somehow It shifted from users know best to "Product" knows best.
In a world where consumers have less and less power, products are designed to please CEOs.
Money is power, as inequality grows and concentrates the average user/worker/citizen has less power and their voices matter less. Today's Internet is designed for the needs of big corporations, users are there just as another product to be sold.
> they've been cloning features of their API customers and adding them to their core products since day 1
Is this not just the strategy of all platforms? Spy on all customers, see what works for them, and copy the most valuable business models. Amazon does that with all kinds of products.
Platforms will just grow to own the whole market, hike prices, lower quality, and pay employees close to nothing. This is why we used to have monopoly regulations, before being greedy became a virtue.
It is exactly the strategy of all platforms - they get greedy to the point of screwing over their own customers. I've lost count of the number of times I've seen a platform get popular and then expand to offer the same services as its customers, often even undercutting market rates.
Just wait till they offer "Developer Certification" so you have to pay them to get a shiny little badge and a certificate while they go around saying no badge = you're shit.
Billionaire CEOs have silenced the informed sources of information. We live in a time when everybody knows the opinion of billionaires on every aspect of society (and it is bad), while science and journalism are viewed with mistrust.
Marketing and entertainment are supplanting news and knowledge. I hope the people pushing back succeed.
> "The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than its supposed to be."
I can see this kind of survival-bias story distorting reality. Having millions of people ask for "specific tests" because an AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it is "worth doing the test the AI suggests", just in case. But...
> which came back 6 times higher than its supposed to be.
It has been proven that massive testing creates many false positives.
Tests may not be as reliable as thought, but they are good enough when other symptoms are accounted for. Randomly testing people based on AI hallucinations can increase unnecessary medication or even interventions.
> I can see this kind of survival-bias story distorting reality. Having millions of people ask for "specific tests" because an AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it is "worth doing the test the AI suggests", just in case. But...
This is a competition of public and private interests. A sick individual is going to lobby for tests until they discover the cause. From a public perspective, it might be cheaper to just let them die. AI is an advocate for the individual.
For the record, ChatGPT helped me diagnose a lifelong illness. I'm a new man now thanks to AI. Literally life changing. I had spent decades pleading for tests because no one could figure out the cause. I think a likely outcome here is not necessarily 10,000x more tests performed, but similar or even fewer tests, because the diagnosis success rate with AI is higher. It's not subject to bias. People tend to be more honest and reflective with their AI than they are with doctors. They get 5 minutes to give the entire case to the doctor. With an AI they can spend weeks debating and reflecting. This builds a case history far more detailed and accurate than anything we have in modern medicine today. Amplified by an order of magnitude because the AI can extract meaningful insights from the discussion.
In the very near future our AI will contact our GP for us. Soon after that, our GP will be our AI.
I’m not sure how you can come to the conclusion that AI is an advocate for the individual writ large. It seems that AI can just as easily be used to make algorithmic decisions on who receives care (based on symptoms etc). Whether or not that’s an equalizing influence or not depends on the algorithm, training data, etc.
The models could be designed that way, but we don't have evidence that they have been designed that way today. If that were to occur in future, I'm sure people would seek out impartial models.
> From a public perspective, it might be cheaper to just let them die.
You missed the point. More tests can be detrimental to the patient's health, as they increase the risk of unneeded medication or surgery. Also, many tests, like X-rays, carry their own risks. Doing them for the sake of it increases overall mortality.
So, not over-testing is not just cheaper but better for people's health.
Yeah I see that there can be a false positive/negative issue too.
For instance, allergy tests have a false positive rate of ~10% and a false negative rate of ~48%. So you really need an MD (or AI) to help tease things out there.
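To see why those error rates matter at scale, here is a quick Bayes' rule sketch using the allergy-test numbers above (sensitivity = 1 - 0.48, specificity = 1 - 0.10); the 1% prevalence figure is an assumption for illustration, not from the thread:

```python
# Positive predictive value of a mediocre test applied to a broad population.
# Sensitivity/specificity come from the allergy-test example; prevalence is assumed.
sensitivity = 0.52   # true positive rate (1 - false negative rate of ~48%)
specificity = 0.90   # true negative rate (1 - false positive rate of ~10%)
prevalence = 0.01    # assumed: 1 in 100 people tested actually has the condition

true_pos = sensitivity * prevalence            # P(positive AND has condition)
false_pos = (1 - specificity) * (1 - prevalence)  # P(positive AND healthy)
ppv = true_pos / (true_pos + false_pos)        # P(has condition | positive test)
print(f"Chance a positive result is real: {ppv:.1%}")  # roughly 5%
```

With these assumed numbers, roughly 19 out of 20 positives are false: that is the mass-testing problem in one line of arithmetic.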
But I'll push back here a bit. Taking random tests will of course put you at the mercy of statistics. I think this is where AI will actually really help. The tests it'll have you take are no more random than an MD's tests are (okay, maybe a tad more?). Instead, the AI's testing strategy will be broader than an MD's. Combine the experience and physical presence of the MD with the deep 'knowledge' of the AI, and I think that centaur is a lot more potent.
I don't know about survival bias. LLMs are well suited to this task of taking in this cloud of soft data like a description of symptoms and spitting out a potential diagnosis.
They're good at acting as a "reverse dictionary" like this, where you give it a description of something and it knows the word for it. They have approximate knowledge of many things.
> I don't know about survival bias. LLMs are well suited to this task of taking in this cloud of soft data like a description of symptoms and spitting out a potential diagnosis.
And it will do so confidently and incorrectly. A single description of symptoms from a patient is very unlikely to be enough. This is why doctors are there to ask follow-up questions and do examinations. Symptoms alone can describe a dozen different illnesses.
> I can see this kind of survival-bias story distorting reality.
That was my take with the entire report which I think lends to an inherent bias within the data and stories. You have the entrepreneurial stories, then you have the ones where people are both impacted and receiving benefits.
The infographics and charts even call out how countries that are "first-world" with fewer safety nets are more likely to be in "survival" mode compared to countries with them.
The bit from George Carlin's standup routine about how the poor are there just to scare the hell out of the middle class rings true in this reflection. Poorer countries accept their current realities, and the feedback reflects the hustle. Richer countries with safety nets reflect the existential issues of previous industrial revolutions. Richer countries without safety nets reflect the fear that their efforts will be made "replaceable" by AI.
As for the rest (massive testing creating false positives): that is an issue of implementation and the errors introduced by humans, not the data itself. If the process were largely automated, it could screen for a larger panel of issues at lower cost.
From my experience working deep in data and human factors: the issue in quantifying the root cause isn't reality; we live a shared experience, in general. The issue is that the data isn't good enough. What bugs us about it is the psychology: our perceptions differ enough that we will fight to prove an unknown.
It's important to note that doctors are also human, and humans are squishy in every sense of the word. Their brains are squishy: they take in a ton of information and distill it down to decisions we don't understand how we arrived at.
The fact that I'm young-ish and healthy looking, with good skin and hair, leads many doctors to outright dismiss me. Never mind my history of cancer and the undeniable fact that I am obviously not healthy. But I can also use the squishiness to my advantage. I talk confidently, I push back, and that works. It sort of short-circuits a lot of doctors' brains.
> doesn't that apply to flesh-and-bone developers?
No, it does not. If you have a developer that knows C++, Java, Haskell, etc. and you ask that developer to re-implement something from one language to another the result will be good. That is because a developer knows how to generalize from one language (e.g. C++) and then write something concrete in the other (e.g. Haskell).
One language in the same category to another in the same category, yes, "category" here being something roughly like scripting, compiled imperative, or functional. However, my experience is that if you want to translate to another category and the target developer has no experience in it, you can expect very bad results. C++ to Haskell is among the most pessimal such translations. You end up with the "writing X in Y" problem.
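To make the "writing X in Y" problem concrete, here is a toy sketch (a hypothetical illustration, not from this thread): both functions compute the same result, but the first is essentially C written in Python syntax, while the second is idiomatic Python.

```python
# Sum of squares of the even numbers in a list, written two ways.

def sum_even_squares_c_style(xs):
    # "Writing C in Python": manual index bookkeeping and explicit mutation.
    total = 0
    i = 0
    while i < len(xs):
        if xs[i] % 2 == 0:
            total = total + xs[i] * xs[i]
        i = i + 1
    return total

def sum_even_squares_pythonic(xs):
    # Idiomatic Python: a generator expression, no index management at all.
    return sum(x * x for x in xs if x % 2 == 0)

print(sum_even_squares_pythonic([1, 2, 3, 4]))  # 4 + 16 = 20
```

Both are correct, but the first reads like a foreign accent; a translation done by someone fluent only in the source category tends to look like the top version everywhere.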