I love diagramming, but I genuinely don't understand how people can use these wonky-looking tools. They look off to me, so I made my own[1] to create something that's easy to use and looks good/normal.
I like the wonky, hand-drawn style. I think it fits well because usually when I use a diagram it's not 100% precise and accurate, but more of a high-level illustration. The wonky style conveys the approximate precision of the concept being presented.
I agree 100% that it's personal, and I wasn't trying to imply anything else, but for me the style takes away from the actual content and makes it harder to read/grasp.
Excalidraw has a one-click 'sloppiness' setting. We do drafts and ideation in 'full sloppy' mode to signal to the reader that this is not fully thought through, or a final documented decision. Once we've gotten through discussions and analysis, the final diagram is changed to 'not sloppy', and the font is changed from handwriting to a sans-serif font.
It's pretty effective at immediately communicating 'this is a concept' to folks. Too many people instantly jump to conclusions about diagrams - if it's written down, it must be done / fixed / formal.
Click the AI button in the toolbar to copy the Grafly format reference.
Paste it into any LLM (Claude, ChatGPT, Gemini…) along with a description of the diagram you want.
Copy the JSON the LLM returns.
Click the Import JSON button in the toolbar and paste it in.
Super user friendly as well! I don’t even understand the instructions on how to use it.
I absolutely love that you can import Mermaid. I love Mermaid because I'm a huge fan of anything code-related that I can check into git to track its evolution and the thinking that went behind it.
However, those who don't know Mermaid have to struggle with updating my diagrams. Your approach, at least in theory, should get us the best of both worlds: Mermaid for those who want it, and the mouse for those who don't.
This also addresses the issue that large, complex diagrams can get unwieldy in Mermaid; moving things around with a mouse would fix those edge cases.
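For readers who haven't seen it, what makes Mermaid diffable in git is that the diagram is just plain text. A minimal flowchart, with made-up node names purely for illustration, looks like:

```mermaid
graph TD
    A[Client] -->|HTTP| B[API server]
    B --> C[(Database)]
    B --> D[Cache]
```

Renaming a node or rerouting an edge shows up as a one-line diff, which is exactly the evolution tracking described above.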
In the upper right there is an import/export button that could be used for this. It's stored in localStorage, so you could also dump that to wherever you like.
edit: added a link to the repo in the about modal.
edit2: added import/export of the entire localStorage entry at the bottom of the diagrams (left) panel.
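The dump-it-yourself route is just reading one localStorage key and writing it back later. A sketch of that round trip, where the key name "grafly-diagrams" is a guess rather than the app's actual key (check DevTools → Application → Local Storage for the real one), and a tiny in-memory stand-in replaces the browser's localStorage so the snippet runs anywhere:

```javascript
// In-memory stand-in for the browser's localStorage (same getItem/setItem shape).
const storage = (() => {
  const store = new Map();
  return {
    getItem: (k) => (store.has(k) ? store.get(k) : null),
    setItem: (k, v) => store.set(k, String(v)),
  };
})();

// Export: the entry is already a JSON string, so backing it up is one read.
storage.setItem("grafly-diagrams", JSON.stringify({ diagrams: [] }));
const backup = storage.getItem("grafly-diagrams");

// Import: write the saved string back under the same key to restore it.
storage.setItem("grafly-diagrams", backup);
console.log(backup);
```

In a real browser session you would drop the stand-in and call `localStorage` directly, saving `backup` to a file, gist, or git repo.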
Depends on what you want to achieve with your look. Do you want to scream professionalism, authority, and completeness?
Use a regular UML tool.
Want to say this is a rough draft of a few ideas? Then UML is probably THE wrong look, and Excalidraw should be used instead.
---
Anecdote time. One of my professors told a story: his team showed customers how the prototype would look in action, and the customers were so impressed by the smoke-and-mirrors prototype that they wanted to start using it right away.
In the end, the customer walked away because they thought they were being strung along to pay for something that was already done.
A while ago I made this to pull content from websites for reading as PDFs. With what I use (a Supernote), you can have an automated script pull articles in the morning and put them in a Dropbox folder that automatically syncs with the device.
It's really funny how people can say these things online without giving them a second thought. There are literal weapons being produced that are killing people daily. But no, it's the meme generator that's evil.
Because this is a tech forum, not a weapons forum. I'd wager that a sizeable chunk of folk decrying AI/LLMs in this manner also do, in fact, decry the same weapons you refer to. They just do it elsewhere because it's not typically on-topic here.
Context is tech, I agree. But is there no tech in weapons? Palantir? Drones? Are there developers who are proud when they make the kill machine 1% more precise, more optimized?
I think the evil part is putting it in the hands of the general public. The ability to create propaganda and deep fakes gives everyone a powerful tool for manipulation. The rich and powerful are going to do whatever they want, anyway. Everyone having access to that same tool doesn't make it any less dangerous.
There's nothing inherently evil about a knife. Standing outside of a high school and handing a knife to every kid walking in is pretty evil though.
This has been possible for pretty much the entire history of humanity. The bar has been lowered, but not by a lot imho.
I don't disagree on the rest, and I didn't say there aren't bad uses, but there are many many good uses for AI/Sora. You can't say the same for weapons.
Violence at scale is often facilitated by and preceded by propaganda at scale, which is one of Sora's only applications. Certain things are obvious to normal people, like “propaganda is real, powerful, bad, and of enormous historical significance”.
How well you interact with other members of a society affects your chances of procreation, survival, and knowledge acquisition, i.e., it makes sense as a measure of intelligence.
It's a pretty ambiguous definition. The most powerful man in the world right now is not someone I consider a role model for social cognition and yet there he is with the football for the second time demonstrating grandmaster skill at social cognition to get there.
So in all seriousness with a bit of snark: Do you want a malevolent AGI? Because "good at navigating society" as the only benchmark here is how you get a malevolent AGI...
Evidence: cuckoos and cheaters all the way down the evolutionary ladder as a winning strategy and arms race against the hard workers.
They are objectively similar in that both are a big multi-decade commitment to a living being that you chose for yourself (yes, you did choose to have the kid, unless you live in a country with no access to birth control), but saying something is similar is still not making a value comparison.
Yeah of course you can choose the level of evaluating how things are similar. Yes they both breathe. Yes they have DNA. Both are objectively true.
Also, you keep saying value comparison like it's something I used against OP. I never mentioned anything about dog <=> child, nor did OP. I just meant that the core decision of having either is different, so it's not comparable even though you could boil it down to "you care for both".
Even if you reduce LLMs to complex autocomplete machines, they are still machines trained to emulate a corpus of human knowledge, and they have emergent behaviors based on that. So it's very logical to attribute human characteristics to them, even though they're not human.
I addressed that directly in the comment you’re replying to.
It’s understandable people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses.
It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.
They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.
That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
>It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.
>They are not human, so attributing human characteristics to them is highly illogical
Nothing illogical about it. We attribute human characteristics when we see human-like behavior (that's what "attributing human characteristics" means by definition), not just when we see humans behaving like humans.
Calling them "human" would be illogical, sure. But attributing human characteristics is highly logical. It's a "talks like a duck, walks like a duck" recognition, not essentialism.
After all, human characteristics are a continuum of external behaviors and internal processing, some of which we already share with primates and other animals (non-humans!), and some of which we can just as well share with machines or algorithms.
"Only humans can have human like behavior" is what's illogical. E.g. if we're talking about walking, there are modern robots that can walk like a human. That's human like behavior.
Speaking or reasoning like a human is not out of reach either. To a smaller or larger, or even an "indistinguishable from a human on a Turing test" degree, other things besides humans, whether animals, machines, or algorithms, can do such things too.
>That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
The profit motives are irrelevant. Even a FOSS, not-for-profit hobbyist LLM would exhibit similar behaviors.
>Pretending your MS RDBMS likes you better than Oracles because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
Good thing that we aren't talking about an RDBMS, then…
It's something I commonly see when there's talk about LLM/AI
That humans are some special, ineffable, irreducible, unreproducible magic that a machine could never emulate. It's especially odd to see that when we already have systems now that are doing just that.
> They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.
What? If a human child grew up with ducks, only did duck like things and never did any human things, would you say it would irrational to attribute duck characteristics to them?
> That irrationality should raise biological and engineering red flags. Plus humanization ignores the profit motives directly attached to these text generators, their specialized corpus’s, and product delivery surrounding them.
But thinking they're human is irrational. Attributing human characteristics to them, which is their sole design purpose, is rational.
> Pretending your MS RDBMS likes you better than Oracles because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
Exactly this. Their characteristics are by design constrained to be as human-like as possible, and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior.
Of course, they are not human, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.
I'd love to hear an actual counterpoint; perhaps there is an alternative set of semantics that closely maps to LLMs, because “text prediction” paradigms fail to adequately intuit the behavior of these devices, while anthropomorphic language is a blunt cudgel but at least gets in the ballpark.
If you stop comparing LLMs to the professional class and start comparing them to marginalized or low performing humans, it hits different. It’s an interesting thought experiment. I’ve met a lot of people that are less interesting to talk to than a solid 12b finetune, and would have a lot less utility for most kinds of white collar work than any recent SOTA model.
[1] https://grafly.io