Hacker News | voxleone's comments

I rank with those who think human-like intelligence will require embeddings grounded in multiple physical sensory domains (vision, touch, audio, chemical sensing, etc.) fused into a shared world representation. That seems much closer to how biological intelligence works than text-only models. But if this path succeeds and produces systems with something like genuine understanding or sentience, there’s a deeper question: what is the moral status of such systems? If they have experiences or agency, treating them purely as tools could start to look uncomfortably close to slavery.

It's interesting that you seem more concerned that we would potentially enslave human-like robots (assuming sentience), when the far more likely outcome is that we end up enslaved to/by our own creations.

I'd say, probability-wise, we won't create sentient-like behavior for a long time (low probability); the second circumstance is much more likely.


Personal agency is a strong characteristic of a personality. AI would have to acquire a personality first. It could probably do this by statistically copying others. In that case, it is only doing what someone else has already done.

There is no such thing as real sentient AI, even theoretically. Our current models are only emulations of humans. Maybe in the future someone will figure out a way for computers to learn how to learn. Then maybe someone will codify ways for computers to acquire base methodologies rather than just implementing any methodology they find in the world.


It's an interesting question. On one hand, we don't worry about this much with animals, the most advanced of which we know have personalities, moods, etc. (pigs, for instance). They really only seem to lack language and higher-order reasoning skills. But where's the line?

We do worry much more about animal well-being than we worry about our "lumps of metal" (as a cousin comment fittingly put it). As we should, and generally I think we should worry much more about animal welfare. I find concerns for AI system welfare voiced by people like Thomas Metzinger wildly misguided.

And while they don't have language like we do, dogs can understand basic commands and they aren't even the smartest animals.

I don't think they will have sentience or agency unless they are designed to:

1) Keep thinking continuously, as opposed to current AIs that stop functioning between prompts.

2) Have permanent memory of their previous experiences.

3) Be able to alter their own weights based on those experiences (a.k.a. learn).


That's the direction the field is already going with "agents". People want autonomous AI agents that are capable of acting independently and that have more and more capabilities. For example, something like Claude Code, but that acts as a sidekick that is constantly running, and able to act without being prompted. That's what people are imagining when they talk about teams of agents. You act as a manager, but your coding agents are off working on various features and only check in periodically.

They won't have sentience because it will be antithetical to capitalist business ideology. There's no good business value proposition for having the AI daydream like humans do, or 'sleep' while 'on', or have inspirational thought that might be seen as 'wrong' or useless. If that behavior ever manifests, it will probably be stamped out in a future release.

You can't justify to the board the wasted money to have the android dream.


Does anyone else see an echo of Severance (Apple TV series) here?

What’s the difference from thinking your brain is a slave to your body or vice versa?

We only think slavery is bad because we have a philosophy and language to describe and evaluate the situation. It's unlikely ant colonies understand the concept of slavery, eunuchs, or feminism. We have the framework to understand these concepts; without it, we'd be oblivious to them.


Lol. A lump of metal can't be sentient.

Yeah, call me when Yann incorporates the four humors and the elemental force of fire, from which we draw life. Metal lacks the nature for this purpose.

Says the bag of lipids and proteins :)

Carbon, Hydrogen, Oxygen, Nitrogen, Phosphorus, Sulfur and a dash of other elements.

$99.85 at Sigma-Aldrich


Mostly water, actually.

Typical. You know they pump the chickens at the grocery store too.


I think the more likely retort will be that we can't be smart, by the AI's standard.

In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.

Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.

One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.
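To make that concrete, here's a minimal sketch of what "design first, then prompt" could look like in Python. The `DesignSpec` fields and the prompt wording are my own illustration, not a prescribed format; the point is that constraints, objects, actors, and flows are fixed before any generation happens.

```python
# Illustrative sketch: assemble a structured code-generation prompt
# from a finished design. Field names and wording are hypothetical.

from dataclasses import dataclass, field


@dataclass
class DesignSpec:
    objects: list = field(default_factory=list)      # domain entities
    actors: list = field(default_factory=list)       # who interacts with the system
    flows: list = field(default_factory=list)        # key use-case flows
    constraints: list = field(default_factory=list)  # invariants the code must respect


def build_prompt(spec: DesignSpec, task: str) -> str:
    """Turn a completed design into a structured prompt for an LLM."""
    sections = [
        "Objects: " + ", ".join(spec.objects),
        "Actors: " + ", ".join(spec.actors),
        "Flows:\n" + "\n".join(f"- {f}" for f in spec.flows),
        "Constraints (must hold in generated code):\n"
        + "\n".join(f"- {c}" for c in spec.constraints),
        "Task: " + task,
    ]
    return "\n\n".join(sections)


spec = DesignSpec(
    objects=["Order", "Invoice"],
    actors=["Customer", "BillingService"],
    flows=["Customer places Order -> BillingService issues Invoice"],
    constraints=["An Invoice is never issued for an empty Order"],
)
print(build_prompt(spec, "Implement the Order and Invoice classes."))
```

The model then fills in implementation inside a frame the human already fixed, rather than inventing the architecture itself.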


I agree with the sentiment but want to point out that the biggest drive behind UML was the enrichment of Rational Software and its founders. I doubt anyone ever succeeded in implementing anything useful with Rational Rose. But the Rational guys did have a phenomenal exit and that's probably the biggest success story of UML.

I'm being slightly facetious of course, I still use sequence diagrams and find them useful. The rest of its legacy though, not so much.


One interesting way to look at projects like this is that they’re essentially tiny universes defined by a functional update rule.

The grid + instruction set + step function form something like:

state(t+1) = F(state(t))

Once you have that, you get the same ingredients that appear in many artificial life systems: local interactions; persistence of information (program code); mutation/recombination; selection via replication efficiency. And suddenly you get emergent “organisms”. What’s interesting is that this structure isn’t unique to artificial life simulations. Functional Universe, a concept framework [0], models all physical evolution in essentially the same way: the universe as a functional state transition system where complex structure emerges from repeated application of simple transformations.
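For a concrete toy instance of the state(t+1) = F(state(t)) pattern, here is an elementary cellular automaton in Python (Rule 110, a classic example of simple local rules producing complex structure; the grid size and seed are arbitrary choices for illustration):

```python
# A tiny universe of the form state(t+1) = F(state(t)): an elementary
# cellular automaton. Rule 110 is encoded as a lookup into the bits of 110.
RULE = 110


def F(state):
    """One global update: apply the local 3-cell rule everywhere (wrap-around)."""
    n = len(state)
    return tuple(
        (RULE >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )


state = tuple(1 if i == 31 else 0 for i in range(64))  # single live cell
for _ in range(20):                                    # repeated application of F
    print("".join("#" if c else "." for c in state))
    state = F(state)
```

All the structure in the printed output comes from iterating one fixed function, which is exactly the "repeated application of simple transformations" idea.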

From that perspective these kinds of experiments aren’t just toys; they’re basically toy universes with slightly different laws. Artificial life systems then become a kind of laboratory for exploring how information maintains itself across transformations; how replication emerges; why efficient replicators tend to dominate the state space. Which is exactly the phenomenon visible in the GIF from the repo: eventually one replicator outcompetes the rest.

It’s fascinating because the same abstract structure appears in very different places: cellular automata, genetic programming, digital evolution systems like Avida, and even some theoretical models of physics.

In all cases the core pattern is the same: simple local rules + iterative functional updates → emergent complexity. This repo is a nice reminder that you don’t need thousands of lines of code to start seeing that happen.

[0] https://voxleone.github.io/FunctionalUniverse/


Not sure I get the downvote. Real person here, maybe too vague and enthusiastic, but not malicious.

Working on Functional Universe (FU), a formal framework for modeling physical reality as functional state evolution, integrating sequential composition with simultaneous aggregation.

https://voxleone.github.io/FunctionalUniverse/


I can relate to some of what you’re describing, though from a different angle. I’ve always been somewhat of a loner, and as I’ve gotten older I’ve grown increasingly dissatisfied with the shallowness of many modern interactions: the constant glance at the screen, that black brick glued to the hand, the strange absence of attention even when you try to do something kind for someone. It often feels like we’re all performing a kind of theater of socialization.

One thing that helped me over the years was cultivating a richer inner life and maintaining some contact with nature. Long walks, quiet time, reading, building things slowly, the kinds of activities that don’t depend on an audience. At first that kind of solitude can feel oppressive, but with time it can also become a kind of freedom.

As you get older, or at least that has been my experience, you begin to realize how precious each moment is, and how little sense it makes to spend too much of it on interactions that feel hollow. Real presence, even if rare, becomes much more valuable.

Your situation is clearly different, and the transition you’re going through sounds genuinely hard. But sometimes these chapters also open space to rediscover parts of yourself that were quiet for a long time. I wish you strength navigating this change, and I hope you eventually find a rhythm that feels meaningful again.


Yes, anyone can generate code, but real engineering remains about judgment and structure. AI amplifies throughput, but the bottleneck is still problem framing, abstraction choice, and trade-off reasoning. Capabilities without these foundations produce fragile, short-lived results. Only those who anchor their work in proper abstractions are actually engineering, no matter who’s writing the code.

I’ve always designed systems along the classic path: requirements → use cases → schematization. With AI, I continue in the same spirit (structure precedes prompting), but now the foundational layer of my systems is axioms and constraints, and the architecture emerges through structured prompts. In this shift, AI is an aide in building systems that are logically grounded. This is where the “all of us as AI engineers” claim becomes subtle.

What’s striking here is the convergence on a minimal axiomatic kernel (Lean) as the only scalable way to guarantee coherent reasoning. Some of us working on foundational physics are exploring the same methodological principle. In the “Functional Universe” framework[0], for example, we start from a small set of axioms and attempt to derive physical structure from that base.

The domains are different, but the strategy is similar: don’t rely on heuristics or empirical patching; define a small trusted core of axioms and generate coherent structure compositionally from there.

[0] https://voxleone.github.io/FunctionalUniverse


I’ve found a workflow that feels both structured and respectful of professional craft, especially in the context of this thread. I don’t just "vibe code" and let an LLM fill in the blanks. I use a classic design discipline (UML and use cases) to document the process:

1. Start with requirements.

2. Define use cases.

3. Implement classes/objects (architecture first, not after-the-fact refactors).

4. Add constraints and invariants (contracts, boundaries, failure modes, etc.).

5. Let the agent work inside that frame, pausing at milestones for human oversight.
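As a small illustration of step 4 (constraints and invariants as contracts), here is a hedged Python sketch. The `Account` class and its single invariant are hypothetical; the point is that design-time constraints become executable checks the agent's output must keep passing at each milestone.

```python
# Illustrative only: a design-doc invariant ("balance is never negative")
# encoded as a runtime contract that bounds what generated code may do.

class Account:
    """Invariant from the design phase: balance >= 0 at all times."""

    def __init__(self, balance: int = 0):
        self.balance = balance
        self._check_invariants()

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("withdrawal would violate the balance invariant")
        self.balance -= amount
        self._check_invariants()

    def _check_invariants(self) -> None:
        # Milestone gate: any AI-generated change must keep this passing.
        assert self.balance >= 0, "invariant violated: negative balance"


acct = Account(100)
acct.withdraw(40)
print(acct.balance)  # 60
```

The contract doubles as documentation of intent, which is exactly the kind of artifact worth committing alongside the code.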

Those UML/use-case/constraint artifacts aren’t committed as session logs per se, but they are part of the author’s intent and reasoning that gets committed alongside the resulting code. That gives future reviewers the why as well as the what, which is far more useful than a raw AI session transcript.

Stepping back, this feels like a decent and dignified position for a programmer in 2026: humans retain architectural judgement --> AI accelerates boilerplate and edge implementation --> version history still reflects intent and accountability rather than chat transcripts. I can’t afford to let go of the productivity gains that flow from using AI as part of a disciplined engineering process, but I also don’t think commit logs should become a dumping ground for unfiltered conversation history.


It is interesting that Google translates the first paragraph of the text like this:

"And the word he spoke was all like this. He was a hired hand, and he was full of malice, and he was in ƿælfæst. He didn't remember the man's name. He was in gefeohte(...)"

It says it's Icelandic. :)

