OpenAI got GPT-3 by training it on 12 years of The New York Times, Reddit, the full text of online books, etc. Why didn't they collect and train on a further 120 years of material to get ChatGPT, instead of adding both supervised and reinforcement learning techniques to get there? Do we have a scaling issue here?
Compared with humans, deep learning is notoriously bad at "small" data anyway.
Everyone gets the same or a similar dictionary, but the trick is how to do the word permutations (arranging words) with it. J. K. Rowling has done it right and everyone buys her books; as for my own word permutations (writings), sadly, even I hate them.
On the other hand, Rowling has made her good word permutations based on very limited reading, compared with ChatGPT's "reading" scope (very big data). Therefore, big data is not a decisive factor.
Furthermore, Rowling got a very simple prompt: write a few books about Harry Potter. If I gave ChatGPT the same prompt, would I get another set of bestsellers? I'll give it a try.
Besides, no human has read that much material. We might read only 1% of it and understand, say, 30% of that, but we really understand it. Humans learn from far less yet still write pretty good articles and hold good conversations.
ChatGPT "reads" the material and memorizes the words and their combinations, which is absolutely different from human understanding.