Don't get me wrong, the government requires a high level of scrutiny.
I would be interested to see how this compares to industry standard though, 77% doesn't seem outrageous to me given all the trackers and advertising code I've seen over the years. It wouldn't surprise me if this is in line with many apps people install and don't think twice about.
Anthropic could at least make a compelling case for the copyright.
It becomes legally challenging with regards to ownership if I ever use work equipment for a personal project. If it later takes off they could very well try to claim ownership in its entirety simply because I ran a test once (yes, there's a whole Silicon Valley season about it).
I don't know if they'd win, but Anthropic absolutely would be able to claim the creation of that code was done on their hardware. Obviously we aren't employees of theirs, though we are customers that very likely never read what we agreed to in a signup flow.
Using work equipment for a personal project only matters because you signed a contract giving all of your IP to your employer for anything you did with (or sometimes without) your employer's equipment.
Anthropic's user agreement contains no similar clause.
My point was that they could make a compelling case though, not that they would win.
I don't know of any precedent where the code was literally generated on someone else's system. It's an open question whether that implies any legal right to the work, and I could pretty easily see a court accepting the case.
Pre-LLM, it was much easier for reviewers to discern that. Now, the AI-generated code can look like it was well thought out by somebody competent, when it wasn't.
Have you ever reviewed an AI-generated commit from someone with insufficient competence that was more compelling than their work would be if it was done unassisted? In my experience it’s exactly the opposite. AI generation aggravates existing blindspots. This is because, excluding malicious incompetence, devs will generally try to understand what they’re doing if they’re doing it without AI.
I have. It's always more compelling in a web diff. These guys are the first coworkers for which it became absolutely necessary for me to review their work by pulling down all their code and inspecting every line myself in the context of the full codebase.
I try to understand what the LLM is doing when it generates code. I understand that I'm still responsible for the code I commit even if it's LLM-generated, so I may as well own it.
If you're only expecting to live to 65, you would be trying to time your 401k into a roughly 5 year window (assuming you wait until 59 1/2 to begin withdrawal).
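Back-of-the-envelope, that window works out like this (assuming the standard 59 1/2 penalty-free age and ignoring exceptions like the Rule of 55):

```javascript
// Rough arithmetic only: assumes penalty-free 401(k) withdrawals
// start at 59.5 and an expected lifespan of 65.
const penaltyFreeAge = 59.5;
const expectedLifespan = 65;
const drawdownYears = expectedLifespan - penaltyFreeAge; // 5.5 years
```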
There is a difference, for sure, between hosting your own email server and using it for official government communications, and having your own personal email address used for personal communications.
The issue that seemed to completely disappear, the use of Signal messenger for official White House communications, seems more analogous to the email server issue. It was heavily reported at the time what the reporting requirements were and that they would have to submit the full chat histories within 30 days or something like that to stay within the law. I never heard whether that actually happened; the story just died.
HN is overrun by partisans whose majority does not care about factual interpretations of current events and flags level-headed comments in favor of cheap shots, double standards, hyperbolic misconstructions, and ad hominem. I don't think it's difficult to be critical of the government without resorting to such low-brow commentary, but it is what it is. I once offended some people by comparing HN to Reddit, but the lines are getting more blurred by the day.
The moderators need to take a more active stance on getting these hot button political topic wars off HN. We're seeing some sort of brigading and/or manipulation going on here with behaviors (like flagging) that are not consistent with what I think we want to have on the platform. Certainly no following of the guidelines. Just look at the top comment here.
"Normal" people are stuck in two modes, either they ignore it or they need to descend to the same level. I put normal in double quotes since I honestly don't know what's normal any more. I would like to believe the majority of the kind of community we used to have here on HN does not operate at this level of discussion.
To some extent this is a reflection of broader polarization, tribal behavior, and social media manipulation. Even Reuters IMO has chosen a sensationalist headline and seems to have an agenda here. There's an easy tell: can you tell the political orientation of the author just by reading the article or the comments?
This topic could be an interesting one and we could actually have some good discussions about security. Instead it degenerates into what's essentially a political bashing flame war.
It's being downvoted because "but, her emails..." is not saying it's the same thing, but rather that so much fuss was made about her emails, and then when something similar happens, the right conveniently ignores it. For example, as you mentioned, Signalgate, or the times members of the Trump administration used their own email servers for official government communications and their personal email addresses for personal communications.
It's being downvoted because it's attacking a strawman. No one is saying they are the exact same thing. It's that you will see people actively defending this as a big nothingburger when, in truth, it's still a security breach that has the potential to lower our defenses.
I'd argue that it was all downhill after we moved away from using HTML as the state representation.
Moving state out of HTML and into JS means we now have to walk a ridiculous tightrope trying to force state changes back into the DOM and our styles to keep everything in sync.
Given that problem, reactivity isn't the worst solution in my opinion. It tries to automate that syncing problem with tooling and convention, usually declaratively.
If I had to do it all again though, DOM would still be the source of truth and any custom components in JS would always be working with DOM directly. Custom elements are a great fit for that approach if you stick to using them for basic lifecycle hooks, events, and attribute getters/setters.
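A minimal sketch of that approach (a hypothetical `<x-counter>` element, not any particular library; the tiny stand-in base class is only there so the snippet also runs outside a browser): the `count` attribute in the DOM is the single source of truth, and JS only reads and writes it.

```javascript
// DOM-as-source-of-truth sketch. Hypothetical <x-counter> element:
// all state lives in the "count" attribute; JS only reads/writes it.
const Base = typeof HTMLElement !== 'undefined'
  ? HTMLElement
  : class { // minimal stand-in so the sketch also runs outside a browser
      attrs = new Map();
      getAttribute(k) { return this.attrs.has(k) ? this.attrs.get(k) : null; }
      setAttribute(k, v) { this.attrs.set(k, String(v)); this.attributeChangedCallback?.(); }
    };

class XCounter extends Base {
  static get observedAttributes() { return ['count']; }

  // Attribute getters/setters: the DOM attribute IS the state.
  get count() { return Number(this.getAttribute('count') ?? 0); }
  set count(v) { this.setAttribute('count', String(v)); }

  // Lifecycle hooks just re-render from the DOM state.
  connectedCallback() { this.render(); }
  attributeChangedCallback() { this.render(); }

  render() { this.textContent = `Count: ${this.count}`; }
}

if (typeof customElements !== 'undefined') {
  customElements.define('x-counter', XCounter); // tag names need a dash
}
```

Because the attribute is the state, CSS selectors like `x-counter[count="0"]` can react to it with no extra syncing code.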
Wasn’t that the Lit framework? It was okay. Like a slightly more irritating version of React.
I recall the property passing model being a nasty abstraction breaker. HTML attributes are all strings, so if you wanted to pass objects or functions to children you had to do that via “props” instead of “attributes.”
I also recall the tag names of web components being a pain. Always need a dash, always need to be registered.
None of these problems broke it; they just made it irritating by comparison. There wasn’t really much upside either. No real performance gain or superior feature, and you got fewer features and a smaller ecosystem.
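The string-coercion issue can be shown without any framework (this is a tiny stand-in for `setAttribute`, not Lit's actual API):

```javascript
// Why HTML attributes can't carry rich values: setAttribute stringifies
// everything, exactly like this tiny stand-in does.
const el = { attrs: new Map() };
const setAttr = (name, value) => el.attrs.set(name, String(value));

setAttr('label', 'Save');          // fine: stays 'Save'
setAttr('user', { name: 'Ada' });  // mangled: '[object Object]'

// Rich values survive only as JS properties ("props"):
el.user = { name: 'Ada' };
```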
The point of Lit is not to compete with React itself, but to build interoperable web components. If your app (Hi Beaker!) is only using one library/framework, and will only ever use one for eternity, then interoperability might not be a big concern. But if you're building components for multiple teams, mixing components from multiple teams, or ever dealing with migrations, then interoperability might be hugely important.
Even so, Lit is widely used to build very complex apps (Beaker, as you know, Photoshop, Reddit, Home Assistant, Microsoft App Store, SpaceX things, ...).
Property bindings are just as ergonomic as attributes with the .foo= syntax, and tag name declaration has rarely come up as a big friction point, especially with the declarative @customElement() decorator. The rest is indeed like a faster less proprietary React in many ways.
Kind of? Lit does add some of the types of patterns I'm talking about, but it adds a lot more as well. I always avoided it due to the heavy use of TypeScript decorators required to get a decent DX; in my experience the framework is pretty opinionated about your build system.
I also didn't often see Lit being used in a way that stuck to the idea that the DOM should be your state. That could very well be because most web devs are coming to it with a background in react or similar, but when I did see Lit used it often involved a heavy use of in-memory state tracked inside of components and never making it into the DOM.
Lit is not opinionated about your build system. You can write Lit components in plain JS, going back to ES2015.
Our decorators aren't required - you can use the static properties block. If you think the DX is better with decorators... that's why we support them!
And we support TypeScript's "experimental" decorators and standard TC39 decorators, which are supported in TypeScript, Babel, esbuild, and recently SWC and probably more.
Regarding state: Lit makes it easier to write web components. How you architect those web components and where they store their state is up to you. You can stick to attributes and DOM if that's what you want. Some component sets out there make heavy use of data-only elements: something of a DSL in the DOM, like XML.
It just turns out that most developers and most apps have an easier time representing state in JS, since JS has much richer facilities for that.
Don't get me wrong, I'm a pretty big believer in interop, but in practice I've rarely run into a situation where I need to mix components from multiple frameworks. Especially because React is so dominant.
HTML simply can't represent the complex state of real apps. Moving state to HTML actually means keeping the state on the server and not representing it very well on the client.
That's an ok choice in some cases, but the web clearly moved on from that to be able to have richer interaction, and in a lot of cases, much easier development.
I'm sure you could find examples to prove me wrong here so I'm definitely not saying this is a hard line, but I've always found that if app state is too complex to represent in the UI or isn't needed in the UI at all, that's state that belongs on the back end rather than the frontend.
My usual go-to rule is that business logic belongs where the state lives - almost always on the back end for state of any real complexity.
With true web apps like Figma I consider those entirely different use cases. They're really building what amounts to a native app that leverages the web as a distribution platform; it has nothing to do with HTML at all really.
It's a bit more nuanced than that. State in Qite is held both in HTML and in the JS component. The HTML serialization is sort of a consequence of changing a field (like when you want to update textarea content, for example). You can completely ignore it, or you can also use it for CSS, for example. Another use case is when the user interacts with the page and changes text in said textarea, which also automatically updates the JS component field. Finally, there are also flags, which aren't stored in the DOM. I'd like to point out this architecture isn't random; it came from building apps and realizing how everything interacts.
If the state can't, or shouldn't, be serialized in the client I question whether that state belongs in the client at all.
I'm sure you could find counterexamples, so that isn't a hard line I'm proposing, but it is my opinion that nearly every website or web app built today overuses client state.
> This is a more narrow version of my belief that general AI tools like LLMs fundamentally don't fit as additions to products, but rather subsume products
That seems reasonable; it's just yet to be seen whether LLMs are a form of artificial intelligence in any meaningful sense of the word.
They're impressive ML for sure, but that is in fact different from AI despite how companies building them have tried to merge the terms together.
What I'm saying is not (directly) related to whether or not LLMs are "true AI". It's sufficient that they are fully general problem solvers.
A software product (whether bought or rented as a service) is defined by its boundaries - there's a narrow set of specific problems, and specific ways it can be used to solve those problems, and beyond those, it's not capable (or not allowed) to be used for anything else. The specific choices of what, how, and on what terms, are what companies stick a name to to create a "software product", and those same choices also determine how (and how much) money it will make for them.
Those boundaries are what LLMs, as general-purpose problem solvers, break naturally, and trying to force-fit them within those limits means removing most of the value they offer.
Consider a word processor (like MS Word). It's solving the problem of creating richly-formatted, nice-looking documents. By default it's not going to pick the formatting for you, nor is it going to write your text for you. Now, consider two scenarios of adding LLMs to it:
- On the inside: the LLM will be able to write you a poem or rewrite a piece of document. It could be made to also edit formatting, chat with you about the contents, etc.
- From the outside: all the above, but also the LLM will be able to write you an itinerary based on information collected from maps/planning tool, airline site, hotel site, a list of personal preferences of your partner, etc. It will be able to edit formatting to match your website and presentation made in the competitor's office tools and projected weather for tomorrow.
Most importantly, it will be able to do both of those automatically, just because you set up a recurring daily task of "hey, look at my next week's worth of calendar events and figure out which ones you can do some useful pre-work for me, and then do that".
That's the distinction I'm talking about, that's the threat to software industry, and it doesn't take "true AI" - the LLMs as we have today are enough already. It's about generality that allows them to erase the boundaries that define what products are - which (this is the "mortal wound to software industry" part) devalues software products themselves, reducing them to mere tool calls for "software agents", and destroying all the main ways software companies make money today - i.e. setting up and exploiting tactics like captive audience, taking data hostage, bundled offers, UI as the best marketing/upsale platform, etc.
(To be clear - personally, I'm in favor of this happening, though I worry about consequences of it happening all at once.)
> That's the distinction I'm talking about, that's the threat to software industry, and it doesn't take "true AI" - the LLMs as we have today are enough already.
They most certainly are not. With the current state of LLMs, anyone who puts them in charge of things is being a fool. They have zero intelligence, zero ability to cope with novel situations, and even for things in their training data they do worse than a typical skilled practitioner would. Right now they are usable only for something where you don't care about the quality of the result.
What you lose is control. Even in the case of an actually-intelligent agent, if you task a subordinate with producing a document for you, they are going to come up with something that is different from exactly what you had in mind. If they are really good, they might even surprise you and do a better job than you'd have done yourself, but it still will be their vision, not yours.
Your notion of a "mortal wound" to the software industry seems to assume that today's SaaS portals are the only form that industry can take. Great software is more than "tool calls for agents". Those human agents who care about getting exactly the result they want will not be keen on giving up Photoshop for Photoshop-but-with-an-AI-in-front-of-it.
I hadn't said anything about "true AI" though, and I'm not sure how we would define that.
That's part of the problem: we as an industry have dived straight into the deep end without pausing for even the basics, like agreeing on definitions.
What is intelligence, and how do we recognize it? What is consciousness, if it even exists? How do we measure intelligence - is it really just economic value, as OpenAI argues, and if so, can that only be measured 6+ months after we unleash it on society?
> and it doesn't take "true AI" - the LLMs as we have today are enough already.
I believe that relatively few people would agree with you on that point. LLMs aren’t good enough (yet?), and very obviously so, IMO, to be autonomous problem solvers for the vast majority of problems being solved by software companies today.
I'm surprised GitHub got away with acting fairly independently inside Microsoft for so long. I'm also surprised GitHub employees expected that to last.
The real problem today IMO is that Microsoft waited so long to drop the charade that they felt like they had to rip the bandaid off. From what I've heard the transition hasn't gone smoothly at all, and they've mostly been given tight deadlines with little to no help from their Microsoft counterparts.
Why is Azure DevOps on the floor? I'm having to choose between the client's existing Azure DevOps and our internal GitLab for where to host a pipeline, and I don't know what would be good at all.
It works fine, it just feels like it has been under a kind of maintenance mode for a while.
There's clearly one small team that works on it. There are pros and cons to that.
It hasn't even got an obnoxious Copilot button yet for example, but on the other hand it was only relatively recently you could properly edit comments in markdown.
If the client has existing AzDo Pipelines then I'd suggest keeping them there.
This was after seeing those ridiculous PRs where microsoft engineers patiently deconstructed AI slop PRs they were forced to deal with on the open source repos they maintained.
When he was gone a few months later and github was folded into microsoft's org chart the writing was firmly on the wall.
He was never truly independent though. The org structure was such that the GitHub CEO reported up through a Microsoft VP and Satya. He was never really a CEO after the acquisition, it was in name only.
Also of note is that the Microsoft org chart always showed GitHub in that structure, while the org chart available to GitHub stopped at their CEO. It's not that they were finally rolled into Microsoft's org chart so much as they lifted the veil and stopped pretending.
I never said he was "truly independent" nor meant to imply it.
Nonetheless, it looks like he was both willing and able to push back on a good deal of the AI stupidity raining down from above. Then he was removed, and then, well, this...
You said he was independent, I didn't include "truly" intending to make a distinction there. How could one be an independent CEO while reporting to a VP who reports to another CEO?
I don't personally know him and wouldn't begin to assume what he pushed back on, or how. Though Microsoft had AI in the GitHub org well before the leadership change - the AI leader now in charge of GitHub was previously in charge of an AI org that was moved over in the org chart to dotted-line report as embedded employees, or whatever they would have been called.
I've been confused by this with many LLM products in general. Sometimes infrastructure is part of it so there's that, but often it seems like the product is a magic incantation of markdown files.
Here I'm mostly considering the seemingly countless services that are little more than some markdown files and their own API passing data to/from the LLM provider's API.
By no means is that every AI product today, and I wasn't saying the OP's QA service falls into that bucket, though.
More of a general comment related to the GP, maybe too off topic here though?