• Re: More precision of my philosophy about the weakness of Generative Pre-trained Transformer and more of my thoughts..

    From vvvvvvvvvvvvvvvvvvvv11111@mail.ee to comp.programming on Thu Mar 30 08:30:08 2023
    From Newsgroup: comp.programming

    You talk too much......
    Try to be more introverted.



    On Sunday, January 1, 2023 at 10:30:56 PM UTC+2, Amine Moulay Ramdane wrote:
    Hello,


    More precision of my philosophy about the weakness of Generative Pre-trained Transformer and more of my thoughts..

    I am a white Arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms and other algorithms..


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ.
    So I think I am discovering, with my fluid intelligence, the pattern that explains the weakness of a Generative Pre-trained Transformer like ChatGPT: ChatGPT can discover patterns by using the existing patterns in its data or knowledge, so it is like using the smartness of the data. But ChatGPT can not use the smartness of the human brain, which also comes with the human consciousness that optimizes more smartness, so it can not invent highly smart patterns or things the way a highly smart human does from his brain. So I think that ChatGPT will still not be capable of this kind of highly smart creativity, but it remains really powerful and really useful, so I invite you to read my following previous thoughts so that you understand my views:


    More precision of my philosophy about the mechanisms of attention and self-attention of Transformers AI models and more of my thoughts..


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ. I think I understand deep learning, and I say that Transformers are deep learning plus self-attention and attention, and this attention and self-attention permit the model to grasp "context" and "antecedents". For example, take the following sentence:


    "The animal didn't cross the street because it was too tired"


    So we can ask how the artificial intelligence of ChatGPT, which uses a Generative Pre-trained Transformer, will understand that the "it" in the above sentence refers not to the street but to the animal. I say that it is with the self-attention and attention mechanisms, and with training on more and more data, that the transformer can "detect" the pattern that the "it" refers to the "animal" in the above sentence. So the self-attention and attention of the artificial intelligence of ChatGPT, which we call a Generative Pre-trained Transformer, permit it to grasp "context" and "antecedents" too; it is like logically inferring the patterns, using self-attention and attention, from the context of the many, many sentences in the data. And since the data is growing exponentially, and since the artificial intelligence is also generative, I think this will make the artificial intelligence of the transformer much more powerful. So, as you notice, the data is king. The "generative" word in Generative Pre-trained Transformer refers to the model's ability to generate text, and of course we are now noticing that this makes ChatGPT really useful and powerful. Of course, I say that ChatGPT will still improve much more, and I invite you to read my following previous thoughts so that you understand my views about it:
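    To make the attention mechanism concrete, here is a minimal sketch in Python of scaled dot-product self-attention over the example sentence. It assumes numpy, which is not part of the discussion above, and it uses random embeddings and projection matrices as stand-ins for the learned parameters of a real transformer, so it only illustrates the mechanics of how the row of attention weights for "it" can end up pointing at "animal" after training:

    import numpy as np

    # Toy scaled dot-product self-attention over the example sentence.
    # The embeddings and the query/key/value projections are random stand-ins
    # for what a real transformer learns during training.
    rng = np.random.default_rng(0)
    tokens = ["The", "animal", "didn't", "cross", "the", "street",
              "because", "it", "was", "too", "tired"]
    d_model = 4

    X = rng.normal(size=(len(tokens), d_model))   # token embeddings (random here)
    W_q = rng.normal(size=(d_model, d_model))     # query projection
    W_k = rng.normal(size=(d_model, d_model))     # key projection
    W_v = rng.normal(size=(d_model, d_model))     # value projection

    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d_model)           # pairwise token similarities
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
    contexts = weights @ V                        # context-mixed token representations

    # The row of weights for "it" says how strongly "it" attends to every other
    # token; in a trained model, a large weight on "animal" resolves the antecedent.
    it_index = tokens.index("it")
    for token, weight in zip(tokens, weights[it_index]):
        print(f"{token:>8s}: {weight:.2f}")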


    More of my philosophy about transformers and about the next GPT-4 and about ChatGPT and more of my thoughts..



    The capabilities of transformer architectures, as in the GPT of ChatGPT, which is called a Generative Pre-trained Transformer, are truly remarkable, as they allow machine learning models to surpass human reading comprehension and cognitive abilities in many ways. These models are trained on massive amounts of text data, including entire corpora such as the English Wikipedia or large parts of the internet, which enables them to become highly advanced language models (LMs) with a deep understanding of language and the ability to perform complex predictive analytics based on text analysis. The result is a model that is able to approximate human-level text cognition, or reading, to an exceptional degree: not just simple comprehension, but also the ability to make sophisticated connections and interpretations about the text, because the Transformer network pays "attention" to multiple sentences, enabling it to grasp "context" and "antecedents". These transformer models represent a significant advancement in the field of natural language processing and have the potential to revolutionize how we interact with and understand language.

    GPT-4 is expected to be significantly larger and more powerful than GPT-3, with reports claiming as many as 170 trillion parameters compared to GPT-3's 175 billion parameters (and even GPT-3.5 of the new ChatGPT has 175 billion parameters). This would allow GPT-4 to process and generate text with greater accuracy and fluency, so with feedback from users, a more powerful GPT-4 model coming up, and training on a substantially larger amount of data, the ChatGPT that will use GPT-4 may "significantly" improve in the future. So I think ChatGPT will still become much more powerful. And I invite you to read my previous thoughts about my experience with the new ChatGPT:
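    As a small illustration of what it means for such a model to "generate text", here is a minimal sketch in Python using the Hugging Face transformers library and the small, publicly available GPT-2 model as a stand-in (my choice of tooling, not something mentioned above; GPT-2 is far smaller than GPT-3.5 or GPT-4, so this only shows the interface, not their quality):

    from transformers import pipeline

    # Load a small generative pre-trained transformer (GPT-2) as a text generator.
    generator = pipeline("text-generation", model="gpt2")

    # Ask the model to continue the example sentence from earlier in the post.
    result = generator(
        "The animal didn't cross the street because",
        max_new_tokens=20,   # length of the sampled continuation
        do_sample=True,      # sample instead of greedy decoding
        temperature=0.8,     # lower values make the output more deterministic
    )
    print(result[0]["generated_text"])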


    More of my philosophy about my experience with ChatGPT and about artificial intelligence and more of my thoughts..


    I think I am highly smart, since I have passed two certified IQ tests and I have scored "above" 115 IQ, and I mean that it is "above" 115 IQ.
    So in these last two days I have tested ChatGPT to see whether this new artificial intelligence, launched by OpenAI in November 2022, is efficient, and I think that it is really useful. From my testing, I think it can score well against average human smartness, but if you want it to be highly smart by inventing highly smart things, it will not be able to do it. However, if you want ChatGPT to be highly smart on what it has learned from the existing smartness of the human knowledge that it has been trained on, I think it can also score high much of the time, and ChatGPT can often make far fewer errors than humans, so I think that ChatGPT is really useful. I also think that ChatGPT will be improved much more by increasing the size of its transformer (a transformer is a deep learning model that adopts the mechanism of self-attention), and that it will be improved much more when it is trained on a substantially larger amount of data, considering a paper that DeepMind published demonstrating that the performance of these models can be drastically improved by scaling data more aggressively than parameters (read it here: https://arxiv.org/pdf/2203.15556.pdf ), and that is why I am optimistic about the performance of ChatGPT and I think that it will be much more improved.
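    To make that scaling point concrete, here is a rough sketch in Python of the rule of thumb from the DeepMind paper linked above (the "Chinchilla" result): for a fixed compute budget, parameters and training tokens should grow together, with roughly 20 training tokens per parameter being close to compute-optimal, and total training compute commonly estimated as about 6 * N * D floating point operations. The model sizes below are arbitrary examples, not claims about any particular system:

    # Rough illustration of the compute-optimal scaling heuristic from
    # https://arxiv.org/pdf/2203.15556.pdf: about 20 training tokens per
    # parameter, with training compute estimated as C ~ 6 * N * D FLOPs.

    def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
        """Approximate number of training tokens for a compute-optimal model."""
        return n_params * tokens_per_param

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Standard rough estimate of total training compute."""
        return 6.0 * n_params * n_tokens

    # Example model sizes (in parameters); these are illustrative only.
    for n_params in (1e9, 70e9, 175e9):
        n_tokens = compute_optimal_tokens(n_params)
        print(f"{n_params:.0e} params -> ~{n_tokens:.1e} tokens, "
              f"~{training_flops(n_params, n_tokens):.1e} training FLOPs")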


    Thank you,
    Amine Moulay Ramdane.
    --- Synchronet 3.20a-Linux NewsLink 1.114