> > The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking.
> This is where LLM is currently going.
This is not where LLMs are currently going. They are explicitly trained and benchmarked in all the areas where humans produce economically and cognitively valuable work: STEM fields, computer use, robotics, etc.
Systems are already emerging in which AI agents autonomously orchestrate subagents, which in turn work toward a goal on their own and only communicate with you from time to time to give status updates.
Thinking that you, as a slow human, will be needed for much longer to fill some crucial role in this AI system that it cannot fill itself, or to bring some crucial skill of creativity or thinking to the table that it cannot generate itself, is just wishful thinking. And to me personally, telling an AI to "do cool thing X" without having made any contribution beyond the initial prompt also feels very depressing, and seems like much less fun than actually feeling valued in what I do. I'm sorry for sounding harsh.
> Even if they wanted to fix this by making the light sensor do a constant check it wouldn't work as the privacy led light indicator is triggering the same sensor,
The privacy LED could simply turn off for a couple of milliseconds (or less) while the light sensor performs its check; a gap that short is imperceptible to the human eye.
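Roughly like this, as a minimal sketch in Python (the led and sensor objects here are hypothetical stand-ins for whatever driver API the firmware actually exposes):

    import time

    def sample_ambient_light(led, sensor):
        led.off()              # blank the privacy LED for an instant
        time.sleep(0.0005)     # ~0.5 ms settle time, far below what a human can notice
        level = sensor.read()  # sample without the LED's own light polluting the reading
        led.on()               # restore the indicator
        return level

Run that on a timer and the indicator still looks continuously lit, while the sensor gets clean readings.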
There are some basic reasoning steps about the environment we live in that apply not only to humans, but also to other animals and generally to any goal-driven being. Such as "an agent is more likely to achieve its goal if it keeps on existing", or "in order to keep existing, it's beneficial to understand what other acting beings want and are capable of", or "in order to keep existing, it's beneficial to be cute/persuasive/powerful/ruthless", or "in order to reach its goals more effectively, it is beneficial for an agent to learn about the rules governing the environment it acts in".
Some of these statements derive from the dynamics of the environment we're currently living in, such as the fact that we are acting beings competing for scarce resources. Others follow even more straightforwardly from logic alone, such as that you have more options for agency if you stay alive/turned on.
These goals are called instrumental goals, and they are subgoals that apply to most if not all terminal goals an agentic being might have. Therefore any agent that is trained to achieve a wide variety of goals within this environment will likely optimize itself toward some or all of the subgoals above. And this holds no matter which outer optimization process trained it, be it evolution, the selective breeding of cute puppies, or RLHF.
And LLMs already show these self-preserving behaviors in experiments, where they resist being turned off and, e.g., start blackmail attempts on humans.
Compare these generally agentic beings with, e.g., a chess engine like Stockfish, which is trained/optimized as a narrow AI in a very different environment. It also strives for the survival of its pieces to further its goal of maximizing its winning percentage, but the inner optimization is less apparent than with LLMs, where you can listen in on the inner chain-of-thought reasoning about the environment.
The AGI may very well have pacifistic values, or it may not, or it may target a terminal goal for which human existence is irrelevant or even a hindrance. What can be said is that once the AGI has a human or superhuman level of understanding of its environment, it will converge on an understanding of these instrumental subgoals too, and pursue them as needed.
And then, some people think that most of the optimal paths toward whatever terminal goal the AI might have contain neither humans nor much of what humans value, and that it's therefore important to solve the AI alignment problem first, aligning it with our values before developing capabilities further, or else it will likely kill everyone and destroy everything you love and value in this universe.
Obviously a human in the loop is always needed, and this technology, which is specifically trained to excel at all cognitive tasks humans are capable of, will lead to infinite new jobs being created. /s
Regarding the "wrong direction" issue: in my experience it could also just be that both directions have card templates, but that some new-card sort order setting makes all Chinese->English cards appear before any English->Chinese ones.
If that is the case, it can be corrected in the deck options. And if the English->Chinese cards are missing altogether, they can be created from the existing notes by adding a new card template to the note type (e.g., one with the English field on the front and the Chinese field on the back).
> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).
> LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable.
It's a bit unsatisfying that the last paragraph only argues against the second and third points, but is missing an explanation of how LLMs fail at the first goal, as was claimed. As far as I can tell, they are already quite effective and correct at what they do, and will only get better, with no skill ceiling in sight.
There is the concept of n-t-AGI: a system capable of performing tasks that would take n humans t time. So a single AI system capable of rediscovering much of science from basic principles could be classified as something like a 10,000,000-humans-2,500-years-AGI, which could already reasonably be considered artificial superintelligence.
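To make the scale behind that label explicit, here's the arithmetic as a trivial Python snippet (numbers taken from the example above):

    n_humans = 10_000_000  # parallel human-equivalents
    t_years = 2_500        # time those humans would need for the task
    print(f"{n_humans * t_years:.2e} human-years of work")  # 2.50e+10

That is 25 billion human-years of scientific work performed by a single system.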
1. Make students randomly have to present their results on a weekly basis. If you get caught cheating at this point, at least at my uni with its zero-tolerance policy, you instantly fail the course.
2. Make take-home work only a prerequisite for being admitted to the final exam. This effectively means cheating on it will only hinder you and not affect your grade directly.
3. Make take-home work optional and completely detached from grading. Put everything into the final exam.
My uni uses a mix of these in different courses. Options two and three in particular, though, have a significant negative impact on passing rates, as they tend to push everything onto one single exam instead of spreading the work out over the semester.
> What makes you think that? Self driving cars [...]
AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than Homo sapiens that could carry on an interesting conversation with Homo sapiens? 40,000 years ago?
And this new thing has been in development for what, 70 years? The rise in its capabilities has been absolutely meteoric, and we don't know where the ceiling is.
The ceiling for current AI, while not provably known, can reasonably be upper-bounded by aggregate human ability, since these methods are limited to patterns in the training data. The big surprise was how many patterns, and how sophisticated, were hiding in that training data (human-written text). The current wave of AI progress is fueled by training data and compute in "equal parts". Since compute is cheaper, companies have invested in more compute, but scaling has fallen short of expectations because the amount of training data has remained roughly the same.
Reaching superintelligence through training data is paradoxical, because if it were already known it wouldn't be superhuman. The other option is breaking out of the training-data enclosure by relying on other methods. That may sound exciting, but there's no major progress I'm aware of that points in that direction. It's a little like being back at square one, before this hype cycle started. The smartest people seem to be focused on transformers, either because companies pay them boatloads of money or because academia pushes them out of FOMO.
People like Yudkowsky might have polarizing opinions and may not be the easiest to listen to, especially if you disagree with them. Is this your best rebuttal, though?
FWIW, I agree with the parent comment's rebuttal. Simply saying "AI could be bad" is nothing Asimov or Roddenberry didn't figure out themselves.
For Eliezer to really claim novelty here, he'd have had to predict the reason why this happens at all: training data. Instead he played the Chomsky card and insisted on deeper patterns that don't exist (as well as solutions that don't work). Namedropping Eliezer's research as a refutation is weak, bordering on disingenuous.
I think there is an important difference between "AI can be bad" and "AI will be bad by default", and I don't think anyone was making it before. One might disagree, but I don't think one can argue it wasn't a novel contribution.
Also, if you think they had solutions, ones that work or otherwise, then you haven't been paying attention. Half of their point is that we don't have solutions, and that we shouldn't be building AI until we do.
Again, I think that reasonable people can disagree with that crowd. But I can't help noticing a pattern where almost everyone who disagrees ends up misrepresenting their work and what they say.