Feel free to say microslop as much as possible, but it should be noted that many people will automatically dismiss your opinion when you do. I don't know if I agree with doing so or not, but it is more common than you'd think. And no, they aren't just microsoft shills.
You are correct. Thankfully I never said anything of the sort.
I said proteins, plural. The s is a key indicator I was talking about more than one aspect.
Milk is about 5% lactose, 3.5% protein (80% casein, 20% whey, the whey almost entirely two specific sub-proteins), and 3-4% fat, plus a negligible amount of minerals. Nonfat milk is something I think we'd both recognize as milk (unpalatable as it may be). Lactose-free milk is milk, I think. One assumes lactose-free nonfat milk is milk too.
So if one is producing the three primary proteins, you're nearly the whole way there. There are some trace proteins you'd be missing, but if you are 99.99+% of the way to milk, you've got milk. Are you sure the milk you get from the store hasn't denatured those trace proteins?
Yes the cynical tropes are getting tired by now, even though I personally agree with most of them.
But suspicion about the legitimacy of the stars seems reasonable, wouldn't you agree? Look at the rate of new stars, then look at the comments/issues/PRs on the repo. It feels safe to assume that most of the stars are from bots, not organic humans who went out of their way to star a cool project.
PureBasic is very neat. I bought my license almost 20 years ago and I still use it to make small GUI utilities. It's a very nice IDE/editor and the famfamfam icons are always comfy.
It's still alive because it's a passion project for the developer; he doesn't make a lot of money from it. It isn't that the tool declined in quality, but that efficient RAD is a very niche market nowadays, and the licenses are still lifetime licenses (again showing passion for the product rather than optimizing for income).
> We grant the human a “soul” because we cannot see the trillions of calculations happening in the dark of the skull. We deny the machine a soul simply because we can see the code. We are like children who think a puppet is alive until they see the strings, failing to see how we are just products of the strings of evolution.
Much like children grow up to understand that puppets aren't alive, we'll grow up to understand that LLMs aren't alive.
> ethics boards will strictly prohibit a scientist from testing or manipulating a petri dish of human neurons under certain painful or destructive conditions because of the “sanctity” of the biological material.
This doesn't seem to be true. The closest thing I've found is this paper that suggests maybe eventually we should consider discussing the ethical implications of playing with cerebral organoids: https://www.cambridge.org/core/journals/cambridge-quarterly-...
I think you're right about the petri dish thing. As for LLMs, the author doesn't specifically claim they're alive; I don't think even the author believes that. It's more about the possibility of silicon hosting life.
The flood of humorous GPT-generated reviews on Amazon made me stop reading reviews altogether.
I can understand someone using an LLM to extrapolate one sentence into two paragraphs. I don't like it, but I understand: on Amazon the button is right there, and it helps people feel smarter about their literary skills, the way filters help people feel prettier on Instagram.
But the added snark or humorous tone? Why instruct the LLM to do that? To get more likes? On a review?
Is this really how password manager extensions work? They inject arbitrary JavaScript into every page you visit?
I would have naively thought there'd be a better and safer API for it, considering that all browsers already have the infrastructure in place to handle login autocomplete.
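From what I can tell, yes, the content-script approach is the common one: the extension scans each page's DOM for inputs that look like login fields and writes into them. A minimal sketch of that kind of heuristic, with made-up names and plain objects standing in for DOM nodes (not any real extension's code):

```javascript
// Guess whether a form field is a username or password input.
// Real extensions inspect live <input> elements and use far more
// signals; the regex here is purely illustrative.
function classifyField(field) {
  if (field.type === "password") return "password";
  const hint = `${field.name || ""} ${field.id || ""} ${field.autocomplete || ""}`.toLowerCase();
  if (/user|email|login/.test(hint)) return "username";
  return "other";
}

// Fill recognized fields with stored credentials. Plain objects stand
// in for DOM elements so this runs outside a browser.
function fillCredentials(fields, creds) {
  for (const field of fields) {
    const kind = classifyField(field);
    if (kind === "username") field.value = creds.username;
    else if (kind === "password") field.value = creds.password;
  }
  return fields;
}
```

The fragility of that matching step is presumably why injected scripts are needed at all: login forms in the wild don't follow a single convention the browser's built-in autofill API could rely on.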