I think the reason people refer to LLMs as generative comes from the term GPT, which is short for Generative Pre-trained Transformer. At its core, such a model generates new output conditioned on what came before (the prompt plus the tokens it has already produced), and its whole purpose is to create new content. There are plenty of models that are not generative, like dedicated classifiers (think sentiment analyzers, models that identify what an object in an image is, etc.); those only assign a label to existing input rather than producing anything new.
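To make the contrast concrete, here's a rough sketch using Hugging Face's `transformers` pipelines; the model names are just common defaults I'm assuming for illustration, not anything the original point depends on:

```python
from transformers import pipeline

# Generative: writes new text conditioned on the prompt, one token after another.
generator = pipeline("text-generation", model="gpt2")
out = generator("The reason LLMs are called generative is", max_new_tokens=20)
print(out[0]["generated_text"])

# Discriminative: a dedicated classifier that only assigns a label to existing input,
# it never produces new content.
classifier = pipeline("sentiment-analysis")
print(classifier("I really enjoyed this explanation."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The first pipeline creates text that didn't exist before; the second just maps its input onto a fixed set of labels, which is why we wouldn't call it generative.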