Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.
It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred if it hadn't falsified any quotes.
He's on the AI beat. If he is unaware that a chatbot will fabricate quotes, and didn't verify them, that is a level of reckless incompetence that warrants firing.
The state of California can classify some driving-under-the-influence cases as operating with "implied malice". Not sure it would qualify in this scenario, but there is precedent for arguing that reckless incompetence is malicious when it is exercised without regard for the consequences.
“In any statutory definition of a crime ‘malice’ must be taken not in the old vague sense of ‘wickedness’ in general, but as requiring either (i) an actual intention to do the particular kind of harm that was in fact done, or (ii) recklessness as to whether such harm should occur or not (ie the accused has foreseen that the particular kind of harm might be done, and yet has gone on to take the risk of it).” — R v Cunningham [1957]