The new OpenAI "gpt-3.5-turbo" model is cheap, but how does it perform? Check out its summarization style compared to the 10x more expensive "text-davinci-003" model on summaries of Hacker News stories.
Also, how did you decide on those specific prompts?
Lastly, is it accurate that the text being summarized is simply truncated, rather than, for example, processed in chunks to produce a summary of summaries for longer texts? If so, why process the articles that require truncation at all?
https://github.com/jiggy-ai/hn_summary/blob/master/src/summa...
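For context, the "summary of summaries" approach I'm asking about might look something like the sketch below. This is just an illustration, not the repo's actual code: `call_model` is a hypothetical stand-in for whatever completion API call the project uses, and the chunk size is an arbitrary placeholder.

```python
# Hedged sketch of map-reduce ("summary of summaries") summarization:
# split long text into chunks that fit the model context, summarize each
# chunk, then summarize the concatenated partial summaries. `call_model`
# is a hypothetical stand-in for a real completion API call.

def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split text into roughly max_chars-sized pieces on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def summarize(text: str, call_model, max_chars: int = 8000) -> str:
    """Map-reduce summarization: summarize each chunk, then summarize the summaries."""
    chunks = chunk_text(text, max_chars)
    if len(chunks) == 1:
        return call_model(f"Summarize:\n\n{chunks[0]}")
    partials = [call_model(f"Summarize:\n\n{c}") for c in chunks]
    # Recurse in case the joined partial summaries still exceed the limit.
    return summarize("\n\n".join(partials), call_model, max_chars)
```

The trade-off versus truncation is extra API calls and latency per long article, but no content is silently dropped.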