Uh... that's a misconception (or at least a dangerously broad assumption). It's true that large language models are highly unlikely to reproduce memorized sentences verbatim, but that's not the same as avoiding plagiarism. It's entirely possible (even likely) for these models to generate text recounting the ideas of previous thinkers/researchers/writers, and the "no verbatim copying" framing could easily lead a naive writer into thinking they're "safe" from plagiarism. Claiming someone else's idea as your own, explicitly or implicitly, is a more serious form of plagiarism than just copying a bit of text.