nextaccountic's comments

> If you added a simple addition to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.

Agreed. More broadly, classical logic isn't the only logic out there. Many logics differ on the meaning of the implication "if x then y". There are multiple ways for x to imply y, those additional meanings show up in natural language all the time, and we actually do have logical systems to describe them; they're just lesser known.

Mapping natural language into logic often requires context that lies outside the words that were written or spoken. We need to capture in formulas what people actually meant, rather than just what they wrote. Indeed, the same sentence can sometimes be ambiguous, while a logical formula never is.

As an aside, I wanna say that material implication (that is, the "if x then y" of classical logic) deeply sucks; or rather, an implication in natural language very rarely maps cleanly onto material implication. Having "if x then y" be vacuously true whenever x is false is the kind of thing usually associated with people who smirk at clever wordplay, rather than something people actually mean when they say "if x then y".
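To make the vacuous-truth point concrete, here's a quick Python sketch (names are mine, just for illustration) of classical material implication's truth table; note the two rows where x is false and the implication comes out true regardless of y:

```python
# Material implication in classical logic: "x -> y" is "(not x) or y".
def implies(x: bool, y: bool) -> bool:
    return (not x) or y

# Enumerate the truth table; the x=False rows are the "vacuously true" ones.
for x in (True, False):
    for y in (True, False):
        print(f"{x!s:5} -> {y!s:5} = {implies(x, y)}")
```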


> Kahneman’s whole framework points the same direction. Most of what people call “reasoning” is fast, associative, pattern-based. The slow, deliberate, step-by-step stuff is effortful and error-prone, and people avoid it when they can. And even when they do engage it, they’re often confabulating a logical-sounding justification for a conclusion they already reached by other means.

Some references on that:

https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

https://thedecisionlab.com/reference-guide/philosophy/system...

System 1 really looks like an LLM (indeed, completing a phrase is an example of what it can do, like "you either die a hero, or you live long enough to become the _"). It's largely unconscious and runs all the time, pattern matching on random stuff.

System 2 is something else and looks like a supervisor system, a higher-level process that can be consciously directed through your own will.

But the two systems run at the same time and reinforce each other.


In my naive understanding, neither requires any will or consciousness.

S1 is “bare” language production, picking words or concepts to say or think via fancy pattern prediction. There’s no reasoning at this level, just blabbering. However, language by itself weeds out the most obvious nonsense purely statistically (some concepts are rarely in the same room), though it does so “mindlessly” - that’s why even early LLMs produced semi-meaningful texts.

S2 is a set of patterns inside the language (“logic”) that biases S1 to produce reasoning-like phrases. It doesn’t require any consciousness or will, just concepts pushing S1 towards a special structure; simply invoking one keeps them “in mind” and throws them into the mix.

I suspect S2 has a spectrum of rigorousness, because one can just throw in some rules (like “if X then Y; not Y; therefore not X”) or may do fancier stuff (imposing a larger structure on it all, like formulating and testing a null hypothesis). Either way, it all falls back on S1 for the ultimate decision-making, a sense of what sounds right (allowing us our favorite logical flaws); thus the fancier the rules (patterns of “thought”), the more likely the reasoning will be sound.

S2 doesn’t just rely on S1-as-language but is a part of it, though, because it’s a phenomenon born out of (and inside) the language.

Whether it’s willfully, “consciously” engaged, or whether it works just because S1 predicts the concept of logical thinking as appropriate for certain lines of thinking and starts to invoke it, probably doesn’t even matter - it mainly depends on whatever definition of “will” we would like to pick (there are many).

LLMs and humans can hypothetically do both just fine, but when it comes to checking, humans currently excel because (I suspect) they have a “wider” language in S1, one that doesn’t only include word-concepts but also sensory concepts (like visuospatial thinking). Hence, as I get it, the world-models idea.


> The rumors we hear have to do with projects inundated with more pull requests than they can review, the pull requests are obviously low quality, and the contributors' motives are selfish.

There's a way to handle this: put an automatic AI review of every PR from new contributors. Fight fire with fire.

(Actually, this was the solution for spam even before LLMs. See "A Plan for Spam" by Paul Graham. Basically, if you have a cheap but accurate filter (especially a filter you can train on your own patterns), it should be enabled as a first line of defense. Anything the filter doesn't catch and the user has to manually mark as spam should become data to improve the filter.)
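The Graham-style scheme can be sketched in a few lines of Python. This is just a toy illustration, not the essay's exact formulas: per-word spam/ham frequencies from hand-labeled messages combine naively (independence assumption) into a spam probability, and anything the user corrects by hand feeds back into the counts. All names and the tiny "corpora" here are made up.

```python
from collections import Counter

def token_spam_prob(word, spam_counts, ham_counts, n_spam, n_ham):
    # Probability that a message containing `word` is spam,
    # with crude clamping/smoothing in the spirit of "A Plan for Spam".
    s = spam_counts.get(word, 0) / max(n_spam, 1)
    h = ham_counts.get(word, 0) / max(n_ham, 1)
    if s + h == 0:
        return 0.4  # neutral-ish default for never-seen words
    return max(0.01, min(0.99, s / (s + h)))

def spam_score(text, spam_counts, ham_counts, n_spam, n_ham):
    # Combine per-token probabilities naively (Bayes with independence).
    p_spam = p_ham = 1.0
    for w in set(text.lower().split()):
        p = token_spam_prob(w, spam_counts, ham_counts, n_spam, n_ham)
        p_spam *= p
        p_ham *= (1 - p)
    return p_spam / (p_spam + p_ham)

# Hand-labeled training data (one spam message, one ham message);
# filter misses that the user marks manually would be added here.
spam_counts = Counter("buy cheap pills now click now".split())
ham_counts = Counter("meeting notes attached see you tomorrow".split())
```

The same shape applies to the PR-review idea: a cheap trained classifier as the first line of defense, with human corrections as new training data.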

Moreover, if the review detects LLM-generated content but the user didn't disclose it, maybe there should be consequences.


Note that the original algorithm is model synthesis. Wave Function Collapse (WFC) is just a minor tweak on it.

> We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it?

If this concern is genuine, I think the first step is to embrace veganism. Because while we don't know the exact threshold, it's pretty obvious a dog or a pig reaches it.

> What are the plans to scale up?

I don't know, slavery on an unimaginable scale? That's where AI is heading too, by the way. Sooner, rather than later, those two things will be one and the same.


I think "MMAcevedo" basically nails it: https://qntm.org/mmacevedo

I don't think it's the best example. MMAcevedo is about running a real human mind on a different substrate (for science, for labor, or to be tortured for fun a million times, I guess, by a bored teenager who got the image from torrents).

Scaling up these neuron cultures is rather something like "head cheese" from Greg Egan's "Rifters" novels (artificial "brains" trained to do network filtering, anti-malware combat etc.).


>Greg Egan's "Rifters"

By Peter Watts actually.


Yes, sorry! I like them both a lot.

Will put it in my list :-)

I had a genuine feeling of dread reading that, wow.

It takes some of the fun out of imagining eternal digital life, doesn't it :-)

Surely you can imagine that there are people who draw their ethical line for permissible suffering with animal farming on the "permissible" side and "slavery on an unimaginable scale" on the "not permissible" side? Imagine you or someone you love duplicated 5 million times and living through 1000 subjective years of pure existential horror (while doing menial industrial cognitive tasks). Some would say this is worse than eating meat.

The duplicated people are imaginary and hypothetical, happening (or not) at an unspecified time in the future; the suffering in animal farming is real and happens today.

Somehow many people are ready to ascribe personhood and extend ethical consideration to computer programs and other digital entities, while not being much concerned about the suffering of animals that actually exist today in the physical world.


Yes, there's a difference between suffering that happens now and potential future suffering caused by research being done now. I see that many in this discussion (me included) have not made this distinction explicit in all their comments, but I also think it should be so obvious that any counter-comment which appears not to understand that this distinction is being made implicitly is best explained as deliberately missing the point.

Anyway, to reiterate my point upthread: there could be people who think a chicken-level entity suffering is permissible and a human-level entity suffering is not, and it is a perfectly consistent moral position for them to say we should not do research into creating new kinds of human-level entities with the potential for suffering. The permissible suffering being in the present and the impermissible suffering being in the future does not really change that.

PS: in this thread we were not only talking about computer programs, but also about artificial brains made from biological human neurons.


> the first step is to embrace veganism

The past 4 billion years of life for prey animals has been "get born, eat, get eaten by a predator." They have never experienced any other environment. Why do we owe them a different one?


For me the issue isn't with the killing/eating of animals. Rather, it's how they are treated during their lifetime by the meat industry - which is essentially optimizing for the minimum conditions that can still provide meat that can be sold legally. I'm not a vegan by the way, but I can appreciate the moral case vegans make.

I don't know about your country, but in mine, whenever there is a power outage there's news of some 10,000 or 80,000 chickens dying because of it.

https://www.facebook.com/nhnoticiasmanoelribas/posts/queda-d... 5 days ago, 20k chickens dead in just twenty minutes without power

But power outages don't kill chickens, at least not directly. The most immediate cause of death is dehydration. It happens because chickens are kept in an environment so confined, so absurdly cramped, that without giant fans blowing 24/7 they overheat, dehydrate, and very quickly die. (In some cases their beaks are trimmed, too, so they don't peck each other to death.)

That's what it takes to have cheap chicken and cheap eggs. That's what happens when we are so detached from animal food production that it becomes a commodity: it doesn't matter how much the animals suffer, as long as the consumer can safely ignore it while eating. And the reason this can happen is that animal well-being isn't worth much. (Veganism is just the position that animal well-being is worth a lot. It isn't merely a diet choice and has far-reaching implications; for example, if you are vegan you ought to be against the destruction of natural habitats, fossil fuels, etc.)

Btw, this "environment" I described, where chickens are raised in hell, looks nothing like the natural environments that chickens and other dinosaurs evolved in over millions of years.


For the same reason that we now consider murder, assault, and other actions that harm people morally wrong. These have also been a part of life ever since humans or other hominids roamed the earth; we just determined later on that they are morally wrong.

Oh? Are you going to do a citizen's arrest on a wolf for traumatically murdering a deer, thereby violating its right to avoid cruel and unusual punishment?

Why should wolves be bound to human rules? These rules were generally agreed upon by human societies, and it's the social contract that gives them legitimacy (and not some universal rule that extends even to wolves).

On the contrary: not only can't wolves be found guilty of murder, they aren't required to pay taxes either.


A wolf has no moral agency and therefore can't be held accountable for its actions. It makes no sense to compare them to humans.

It's a best practice to document things about the code base, so that other devs (even senior devs) don't start doing things differently. This will probably not change.

What I think is short-lived is this insistence on separating LLM instructions from general documentation meant for both humans and AI. LLMs can read human docs, and concerns about context window size will probably disappear.

Or maybe future docs will be LLM-first, but people won't read them directly; they will ask an LLM questions about them.


Too much defensive technology makes an attacker bolder and surer there will be no retaliation, though

why won't they?

Bandwidth costs money; less bandwidth = less money spent. I have noticed that if you use a VPN you will get 1080p at best, 720p more often than not.

For me this is a painting vs photography thing

Painting used to be the main way to make portraits, and photography massively democratized this activity. Now everyone can have as many portraits as they want.

Photography became something so much larger

Painting didn't disappear though


Compared to painting, software allows you to solve the problem once, then distribute the solution to the problem basically for free.

Market frictions cause the problem to be solved multiple times.

LLMs learn the solution patterns and apply them, devaluing coming up with solutions in the first place.


Well, slightly different take: it's like telling an artist the world doesn't need another song about love, since those already exist and can be re-heard as needed. Put more sharply: a CRM or TODO list is a solved problem in theory, right? There are tons of solutions out there, even free ones. Still, look at what people are doing and selling: CRM and TODO-list variations. Because, in fact, it's not solved, and every solution has certain tradeoffs that don't fit some people.

Zed, the editor, is not 18+. It's just the AI offering.
