It doesn't force you to go through risk modelling, because by now most SOC 2 platforms have templates: you just fill in the blanks and sign off. Worse, the auditors are paid by the company, so their incentive is to pass the audit so the client gets what it wants.
Because there's no adversarial pressure acting as a check and balance on the security, and the AICPA is clearly happy just to take the fees, it's an empty suit. It's like this scene from The Big Short. https://youtu.be/mwdo17GT6sg?si=Hzada9JcdIPfdyFN&t=140
As usual, it's only the people who care who force positive change. Companies that want good security will have good security. Customers who want good security will demand it.
It’s not surprising. There has been quite a bit of industrial research into how to get mere apes to behave deterministically within huge software control systems, and they are an unruly bunch, I assure you.
You’re just a bag of meat. That is why “it’s just math” is an unsatisfying argument.
It’s not even an interesting question. Sentience has no definition. It’s meaningless.
People have needs that are being met. That is something we can meaningfully observe and talk about. Is the superstimulus beneficial or harmful? We can measure that.
I submit that there is a difference between me and a corpse. Or between a steak and a cow in the field.
"Well, okay, you're just (living) flesh on bones." There's a difference between me and a zombie (or, if you prefer, brain-dead me). There's a difference between me and lab-grown organs [1], or even between me and my kidney cut out of me.
> It’s not even an interesting question.
Consciousness is an active area of research (ergo, interesting enough for some people to devote research to it): biologically [2] and philosophically [3].
Unless you enjoy nihilism, there are some serious problems with materialism (that is, the view that matter is all there is), and we are running into them. There are also philosophical problems with it; a cursory search turned up this journal article [4].
The point is that if we can simplify LLMs to being "just" a bag of math and discard them because of that, then humans are also "just" a bag of meat and can similarly be discarded. Somewhere in that bag of math, LLMs take on properties that some people find hard to dismiss simply because they are based on matrix multiplication. It's an oversimplification, and when you oversimplify, you lose resolution.
It’s not insane. They are correct that this is the point of civilization, which carries information from generation to generation, outside the oral tradition, in a systematic, organized, reliable way.
The point of civilisation, however loose that idea may be, if it’s anything at all, is determined by people.
Technology today exists in a way that feels like it could be defining its own path, but much like oral tradition, neither is a large enough concept to describe civilisation.
Dubious. AI psychosis is the opposite: it’s about being empowered to explore ideas much further, but with a maladaptive tool trained by reinforcement learning to be an appeaser.