Hacker News | lnenad's comments

I love diagramming, but I genuinely don't understand how people can use these wonky-looking tools. It looks off to me, so I made my own[1] to create something that's easy to use and looks good/normal.

[1] https://grafly.io


I like the wonky, hand-drawn style. I think it fits well because usually when I use a diagram it's not 100% precise and accurate, but more of a high-level illustration. The wonky style conveys the approximate precision of the presented concept.

Also, and that's personal, I think it's cute.


I agree with you. I think the 'wonky' comment was more to serve as justification for the plug than an actual criticism of Excalidraw.

Excalidraw is my favourite thinking tool, and the style it produces is just the right level of limiting, disarming, and professional at the same time.


It's not, I genuinely find the diagrams harder to read. And the plug is very relevant; I wanted to share it. It's not a SaaS, it's a free tool.

I agree 100% it's personal, wasn't trying to imply anything else, but for me the style takes away from the actual content and makes it harder to read/grasp.

I thought they were saying the tool is wonky looking, but <shrug>?

One person's bug is another's feature.

Excalidraw has a one-click 'sloppiness' setting. We do drafts and ideation in 'full sloppy' mode, to indicate to the reader that this is not fully thought through, or a final documented decision. Once we've gotten through discussions and analysis, the final diagram is changed to 'not sloppy', and the font is changed from handwriting to a sans-serif font.

It's pretty effective to immediately communicate to folks that 'this is a concept' approach. Too many people instantly jump to conclusions about diagrams - if it's written down it must be done / fixed / formal.


In Excalidraw, you can reduce (and completely remove) the "sloppiness" in the element properties.

“USING AI TO GENERATE DIAGRAMS

Click the AI button in the toolbar to copy the Grafly format reference. Paste it into any LLM (Claude, ChatGPT, Gemini…) along with a description of the diagram you want. Copy the JSON the LLM returns. Click the Import JSON button in the toolbar and paste it in. ”

Super user friendly as well! I don’t even understand the instructions on how to use it.


This looks really clean, nice work. I’ve had the same issues with most diagramming tools: either they don't look that good, or the pricing is insane.

I went a different route using diagram-as-code with Mermaid instead of manual drawing.

[1] https://graphlet.xyz
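For readers who haven't seen diagram-as-code: a minimal Mermaid flowchart is just a few lines of plain text (the node names below are invented for illustration), which is what makes it easy to diff and review in git:

```mermaid
flowchart LR
    A[User request] --> B{Cache hit?}
    B -- yes --> C[Serve cached page]
    B -- no --> D[Render page]
    D --> C
```

Renaming a node or rerouting an edge shows up as a one-line diff, unlike a binary or JSON canvas export.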


Thanks! I love Mermaid too, so I made it possible to import Mermaid diagrams as well.

The best way to drive adoption to your product is to not shit on someone else's labour of love. Just a little pro-tip.

How did I shit on excalidraw? I don't like how it looks, it's a personal preference. I don't think saying that equates to shitting on it.

I absolutely love that you can import Mermaid. I love Mermaid because I'm a huge fan of anything code-related that I can check into git and use to track its evolution and the thinking that went behind it.

However, those who don't know Mermaid have to struggle with updating my diagrams. Your approach, at least in theory, should get us the best of both worlds: Mermaid for those who want it, and the mouse for those who don't.

This also addresses the issue that large complex diagrams can get unwieldy using Mermaid and moving things around with a mouse would fix those edge cases.


Whimsical is a whiteboard/diagram app that I think looks pretty nice, not too far from how yours looks.

Questions:

1. Will you be making the source code public?

2. How do I export the JSON for SCM, then re-import it for updating/maintenance?


It's open source, I just haven't linked it in the project (my bad).

https://github.com/lnenad/grafly/

In the upper right there is an import/export button that could be used for this. It's stored in localStorage, so you could also dump that to wherever you like.

edit: added a link to the repo in the about modal.

edit2: added import/export of the entire localStorage entry at the bottom of the diagrams (left) panel.
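A minimal sketch of that dump/restore idea, assuming only a generic Web-Storage-shaped object (the actual key names Grafly uses aren't shown here; in a browser you would pass `window.localStorage`):

```javascript
// Serialize a Web-Storage-like object to a JSON string so it can be
// saved to a file, checked into git, and restored later.
function exportStorage(storage) {
  return JSON.stringify({ ...storage });
}

// Restore the entries from a previously exported JSON string.
function importStorage(storage, json) {
  for (const [key, value] of Object.entries(JSON.parse(json))) {
    storage[key] = value;
  }
}
```

In a browser console, `exportStorage(window.localStorage)` yields a string you can paste into a file, and `importStorage(window.localStorage, json)` writes the entries back.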


When a background shape is in focus it comes to the foreground covering the shapes that are on top of it.

That is by design. If you deselect it, it goes back to its layer.

> It looks off

Depends on what you want to achieve with your look. Do you want to scream professionalism, authority, and completeness? Use a regular UML tool.

Want to say this is a rough draft of a few ideas? Then UML is probably THE wrong look, and Excalidraw should be used instead.

--- Anecdote time. According to one of my professors, his team once showed how a prototype would look in action, and the customers were so impressed by the smoke-and-mirrors prototype that they wanted to start using it right away.

In the end, the customer walked away, because they thought they were being strung along to pay for something that was already done.



I prefer excalidraw …

looks awesome man!

A while ago I made this to pull content from websites for reading as PDFs. With what I use (a Supernote), you can have an automated script pull articles in the morning and put them in a Dropbox folder that automatically syncs with the device.

https://github.com/lnenad/newser


It's really funny how people can say these things online without giving them a second thought. There are literal weapons being produced that are killing people daily. But no, it's the meme generator that's evil.

Because this is a tech forum, not a weapons forum. I'd wager that a sizeable chunk of folk decrying AI/LLMs in this manner also do, in fact, decry the same weapons you refer to. They just do it elsewhere because it's not typically on-topic here.

Context is tech, I agree. Is there no tech in weapons? Palantir? Drones? Are there developers who are proud when they make the kill machine 1% more precise, more optimized?

Plenty of HN threads about Palantir and drones also have people commenting about their evil.

Just because one thing is a lesser/different kind doesn't mean we can't also be vigilant about it as well.


I'm not arguing that, OP said

> RIP to one of the most evil products I've seen come out of the tech industry in my lifetime.

I'm saying Sora isn't even in the top 100 of most evil products out of the tech industry.


I think the evil part is putting it in the hands of the general public. The ability to create propaganda and deepfakes gives everyone a powerful tool for manipulation. The rich and powerful are going to do whatever they want anyway. Everyone having access to that same tool doesn't make it any less dangerous.

There's nothing inherently evil about a knife. Standing outside of a high school and handing a knife to every kid walking in is pretty evil though.


> The ability to create propaganda

This has been possible for pretty much the entire history of humanity. The bar has been lowered, but not by a lot imho.

I don't disagree on the rest, and I didn't say there aren't bad uses, but there are many many good uses for AI/Sora. You can't say the same for weapons.


Genuinely curious what the [morally] good use cases for Sora would be.

Violence at scale is often facilitated by and preceded by propaganda at scale, which is one of Sora’s only applications. Certain things are obvious to normal people: propaganda is real, powerful, bad, and of enormous historical significance.

This is textbook whataboutism.

Yes, literal weapons are bad, too. But that's not the current topic.


> one of

It is not. Why is that relevant to social entities?

How well you interact with other members of a society increases your chances of procreation, survival, and knowledge acquisition, i.e. it makes sense as a measure of intelligence.

It's a pretty ambiguous definition. The most powerful man in the world right now is not someone I consider a role model for social cognition, and yet there he is with the football for the second time, having demonstrated grandmaster skill at social cognition to get there.

You don't have to be empathetic and nice, just good at navigating society.

So in all seriousness with a bit of snark: Do you want a malevolent AGI? Because "good at navigating society" as the only benchmark here is how you get a malevolent AGI...

Evidence: cuckoos and cheaters all the way down the evolutionary ladder as a winning strategy and arms race against the hard workers.


I don't like a$$holes, but they do exist and they are part of our species, ergo intelligent. My opinion of them doesn't change that fact.

Yes, but we have a choice about whether the AGI is an a$$h0l3 or not. That's the difference here. You do see that right?

I agree 100%.

Also, I am in the process of fine-tuning a small model on the data so that you'll be able to build diagrams inside the app.


It's really amazing how the stability of platforms has gone down in the last year or so.


If only this was correlated with something else going on in the industry...


Yes, the new normal is crazy. Claude/GitHub et al.

They are dogfooding their own tools and causing so much downtime, all in the spirit of "staying ahead".


> 100% of our code is written by AI

Yeah we can tell...


The schadenfreude is so fucking palpable


Weird take. Will you also look sour at devs who use local LLMs in ~50 years? Or is that different?


The mass immigration probably still taking a toll.


Who is leaving your possession is just as relevant as who is driving your possession?


You're comparing actual humans to a pet?


What are you asking? Nchagnet is just acknowledging the existence of people who regret having kids, not making a value comparison


[flagged]


They are objectively similar in that both are a big multi-decade commitment to a living being that you chose for yourself (yes, you did choose to have the kid, unless you live in a country with no access to birth control), but saying two things are similar is still not making a value comparison.


Yeah, of course you can choose the level at which you evaluate how things are similar. Yes, they both breathe. Yes, they have DNA. Both are objectively true.

Also, you keep saying "value comparison" like it's something I used against OP. I never said anything about dog <=> child, nor did OP. I just meant that the core decision behind having either is different, so they're not comparable, even though you could boil both down to "you care for them".



How is saying that your biological offspring is different to a pet discrimination/unfair treatment?


It's still clunky though. It's a great, cool thing that OP built but just not very practical.


Even if you reduce LLMs to complex autocomplete machines, they are still machines trained to emulate a corpus of human knowledge, and they have emergent behaviors based on that. So it's very logical to attribute human characteristics to them, even though they're not human.


I addressed that directly in the comment you’re replying to.

It’s understandable people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses.

It is not desirable, safe, logical, or rational, since (to paraphrase:) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.

They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).


>It is not desirable, safe, logical, or rational, since (to paraphrase:) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.

>They are not human, so attributing human characteristics to them is highly illogical

Nothing illogical about it. We attribute human characteristics when we see human-like behavior (that's what "attributing human characteristics" means by definition), not just when we see humans behaving like humans.

Calling them "human" would be illogical, sure. But attributing human characteristics is highly logical. It's a "talks like a duck, walks like a duck" recognition, not essentialism.

After all, human characteristics are a continuum of external behaviors and internal processing, some of which we already share with primates and other animals (non-humans!), and some of which we can just as well share with machines or algorithms.

"Only humans can have human like behavior" is what's illogical. E.g. if we're talking about walking, there are modern robots that can walk like a human. That's human like behavior.

Speaking or reasoning like a human is not out of reach either. To a smaller or larger degree, or even to an "indistinguishable from a human on a Turing test" degree, other things besides humans, whether animals, machines, or algorithms, can do such things too.

>That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

The profit motives are irrelevant. Even a FOSS, not-for-profit hobbyist LLM would exhibit similar behaviors.

>Pretending your MS RDBMS likes you better than Oracles because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

Good thing that we aren't talking about RDBMS then....


It's something I commonly see when there's talk about LLM/AI

That humans are some special, ineffable, irreducible, unreproducible magic that a machine could never emulate. It's especially odd to see that now, when we already have systems doing just that.


I agree 100% with everything you wrote.


> They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

What? If a human child grew up with ducks, only did duck like things and never did any human things, would you say it would irrational to attribute duck characteristics to them?

> That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

But thinking they're human would be irrational. Attributing to them the very thing they were designed for, human characteristics, is rational.

> Pretending your MS RDBMS likes you better than Oracles because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

You're moving the goalposts.


Exactly this. Their characteristics are by design constrained to be as human-like as possible, and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior.

Of course, they are not human, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.


I’d love to hear an actual counterpoint. Perhaps there is an alternative set of semantics that closely maps to LLMs, because “text prediction” paradigms fail to adequately intuit the behavior of these devices, while anthropomorphic language is a blunt cudgel but at least gets in the ballpark.

If you stop comparing LLMs to the professional class and start comparing them to marginalized or low-performing humans, it hits different. It’s an interesting thought experiment. I’ve met a lot of people who are less interesting to talk to than a solid 12b finetune, and who would have a lot less utility for most kinds of white-collar work than any recent SOTA model.

