Yes, after using ChatGPT for even a short amount of time, it's observable that it doesn't have original thought. However, it doesn't sound like it's just parroting a topic verbatim; it paraphrases an idea in its own words.
From an outside perspective (I am speaking very shallowly here...inner workings aside), this is not wildly different from how humans paraphrase what they've read in the news, watched on YouTube, heard in a colleague's anecdote, or were told in school, etc.
I understand that (by design) it lacks what's required for individual thought beyond the above. E.g. using acquired knowledge to make decisions unrelated to the language model. Or experiencing something personally and being able to either recall it, or repress it (only to create trauma), which may affect one's behaviour and opinions on a given topic in the future.
Or maybe I am wrong. These are just shallow observations.