Hacker News | new | past | comments | ask | show | jobs | submit | willietran's comments

Whoa, this is really neat. Does this mean that I can essentially try any LLM on my local machine?


Yep. Right now we've packaged llama2, vicuna, wizardlm, and orca. The idea is to make it crazy easy to get started, though. You do need quite a bit of RAM (16GB should work for the smaller models, 32MB+ for the bigger ones), and for now a newer Mac. We're working on versions for Windows and Linux too.

EDIT: We don't let you run stuff from HF, but we are trying to repackage the popular models. The plan is to let you upload your own in the future to share them.
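A back-of-the-envelope sketch of where RAM figures like those come from: weight memory is roughly parameter count times bits per weight, and local runners typically ship 4-bit quantized models. The function name and defaults below are illustrative, not from any particular tool.

```python
def model_ram_gib(n_params_billion: float, bits_per_weight: int = 4) -> float:
    """Rough RAM needed just to hold the weights, in GiB.

    Real usage is higher: the KV cache, activations, and runtime
    overhead add more on top of this floor.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for params in (7, 13, 70):
    print(f"{params}B @ 4-bit: ~{model_ram_gib(params):.1f} GiB weights")
```

At 4 bits a 7B model needs only ~3.3 GiB for weights, which is why 16GB machines handle the smaller models comfortably, while a 70B model's weights alone land in the low-30s of GiB.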


Awesome! Thanks for this. Trying this out now.


I think you meant 32GB+


Not "any" yet per se, but the groundwork is there. It helped me try out the GGML stuff after failing to get it online previously.


Lovely!! Congratulations on the launch! This seems really neat


This is half true. Nothing will ever trump making a great product. However, it would be pretty foolish to just rely on that alone.

Companies need marketing strategies to serve as a catalyst for their product growth. Again, product is king; without a good product, no amount of marketing will make it stand out.

But not every company is going to be the next Instagram, Snapchat, or Facebook. Focusing only on product may not work out as well.

