Yeah, I was about to comment the same thing. I've noticed a lot of people weaponizing hatred of AI/slop and rage-baiting to drive views. No doubt someone took that "Amazon lost 6M orders due to slop!" entry at face value and came away thinking it was true.
I thought it was a padding scheme, where you use a moving mask to obscure the plaintext and then encrypt that. Since it's being XOR'ed, identical adjacent characters no longer encrypt to the same values, sort of like using CBC instead of ECB for block ciphers. However, because the article is about the math of RSA itself, he probably (and correctly) decided it wasn't relevant to what he was writing about and would just unnecessarily complicate things.
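To illustrate what I mean (this is just my reading of the idea, not anything from the article, and the "cipher" here is a toy keyed XOR purely for demonstration): XOR each plaintext block with the previous ciphertext block before encrypting it, which is exactly the trick CBC mode adds over ECB.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher: a keyed XOR, demonstration only.
    return xor_bytes(block, key)

def cbc_like(plaintext: bytes, key: bytes, iv: bytes, block_size: int = 4) -> list[bytes]:
    prev = iv
    out = []
    for i in range(0, len(plaintext), block_size):
        block = plaintext[i:i + block_size]
        # The "moving mask": XOR with the previous ciphertext block.
        masked = xor_bytes(block, prev[:len(block)])
        ct = toy_encrypt_block(masked, key[:len(block)])
        out.append(ct)
        prev = ct
    return out

# Eight identical plaintext bytes, but the two ciphertext blocks differ,
# because each block was masked by a different previous ciphertext:
blocks = cbc_like(b"AAAAAAAA", key=b"\x13\x37\x42\x99", iv=b"\x01\x02\x03\x04")
```

With plain ECB-style encryption those two blocks would come out identical, leaking the repetition.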
I mean, you shouldn't send code you want to keep private to any SaaS LLM unless you've had them sign some sort of contract saying they won't train on your usage. In fact, it's probably never a good idea to send anything you want kept private off-premises unencrypted.
I feel the same way, but I think my feelings would change if I didn't actually think the person was good enough to deserve having their writing immortalized, like in this case. Of course, we only have his side of the story, but the GP doesn't seem to think his dad was a good person, and the diary contained some hurtful things about someone they cared about, which I feel is justification enough for their actions.
>Knowledge distillation works like this: you take a large model, have it perform tasks with detailed reasoning, then feed those reasoning traces to a smaller model until the student learns to mimic the teacher. The smaller model ends up far more capable than if you’d trained it from scratch on the same data. Apple can now do this with the full Gemini, not just their own in-house models, and the distilled output runs locally. No internet required.
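For anyone who hasn't seen it concretely: the core of the distillation step the quote describes is training the student to match the teacher's softened output distribution instead of hard labels. A minimal numpy sketch of that objective (all logits and temperatures here are invented for illustration):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about relative probabilities of wrong answers.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # the standard knowledge-distillation objective the student minimizes.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher_logits = [4.0, 1.0, 0.5]  # hypothetical teacher output for one example
# A student that mimics the teacher perfectly drives this loss to zero;
# a student with the preference order reversed gets penalized heavily.
perfect = kd_loss([4.0, 1.0, 0.5], teacher_logits)
reversed_pref = kd_loss([0.5, 1.0, 4.0], teacher_logits)
```

Training on reasoning traces, as the quote describes, layers on top of this same idea: the teacher's intermediate text becomes extra supervision for the student.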
No freaking way. AI companies see this as tantamount to pirating their models. There is no way Google isn't explicitly banning this in whatever agreement allows Apple to use their models.
Yeah, it's weird they even included that. It reads like a psych shelf exam question testing whether you know the connection between marijuana use and acute psychosis. Still, it's difficult to completely rule out the AI as a possible catalyst.