
ChatGPT struggled to tell me how to pin the bottom of a div to the bottom of its parent when scrolling (like a chat window does). It gave me the first thing that I tried (which is wrong). Eventually I figured it out, then confronted ChatGPT about it and it insisted on the wrong answer, and blithely dismissed the correct answer.
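For reference, the trick that usually does this is flexbox with `column-reverse`: messages go in reverse order in the DOM, and the browser keeps the scroll position anchored to the bottom as new content is added. A minimal sketch (class name and markup are made up for illustration):

```html
<style>
  .chat {
    display: flex;
    flex-direction: column-reverse; /* visually bottom-up; scroll stays pinned to the bottom */
    overflow-y: auto;
    height: 300px;
  }
</style>

<div class="chat">
  <!-- newest message first in source order, so it renders at the bottom of the pane -->
  <div>newest message</div>
  <div>older message</div>
  <div>oldest message</div>
</div>
```

The tradeoff is reversed source order; if that's a problem, the alternative is normal order plus scrolling to the bottom in JS whenever a message arrives.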


Sounds like it's ready to be promoted to architect.


I've been trying to get ChatGPT to solve pretty basic calculus questions and it is often either totally wrong or wrong in some tiny detail. I got into an argument over dimensional analysis a few weeks ago where it felt like it was gaslighting me.


I find ChatGPT very often hallucinates things and then tries to gaslight me when I present the correct answer. I wonder where it got this habit from.


I like ChatGPT for programming, but I don't like the sound of using a language model for math. I'd rather use Wolfram Alpha.


programming and math are the same thing


In the same way that building bridges and math are the same thing. There is some overlap, but not much for everyday tasks. If you want to build something completely new and unprecedented, you'll need a lot more math, but even that will only get you so far.


An AI struggling with CSS might be the best evidence of intelligence :)


It passed the Turing test since it suggested the thing I already tried




