well, i have not thought of a single topic to write a blog on yet, but i do have an idea to write about, and yes, it is ai, aka ARTIFICIAL INTELLIGENCE.
i have recently started reading and learning about this whole branch, and it blows my mind every time i learn something new in this particular domain. like, can you imagine, the machine is learning. THE MACHINE IS LEARNING. like dude, it is all rocks and sand (i mean, just rocks).
like, can you imagine how we as humans went from caves to the stone age to this current era we are living in, where the very same stones that used to be our shovels and chisels now act like minds, or say, are even capable of generating something on their own (not exactly, but you get what i mean). but we `AI enthusiasts` all want to know how we will achieve AGI, `Artificial General Intelligence`, and as the name suggests, it simply means that the very same machines, or say rocks, will have thinking capabilities of their own. like, can you see a day where a simple chip made of SiO2 is talking with you! i actually can't wait to see how it turns out when we as humanity reach AGI.
but to be honest, as far as i can see, research on AGI in general is not advancing significantly. for eg. as of today's date, 27.09.2024, all the execs and co-founders have left openai, and meta released a new compact model named Llama 3.2 (which is open source), but none of it shows a path to AGI. i mean yes, i might be wrong in this whole take, but don't you think we are just going around in circles, each company just pushing benchmarks a bit above its own previous models? but in my opinion, i sometimes think: if we are just trying to mimic the behaviour of the human brain on a machine, then why can't we use this same tactic to somewhat try to achieve AGI? we all know we are just mimicking the overall human brain system and how it learns from the information around it. the machine learns simply because we developed algorithms to decide whether the generated, or say predicted, value or image or code or anything is correct or not. but don't you think a hidden factor is missing in all this?
it is the sense of loss, or the sense of wrongness, or the sense of rightness, or the sense of responsibility. i know i sound super superficial or dumb, but hold on a second and listen. as a human, you, who are reading this, what gave you that very thought of "what is bro even talking about? does he even know AI?" or "does he even know how AI or ML algos work lol, wow, what an informative take you gave, miss..."
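funny enough, the machine already has a literal "loss", just not the emotional kind. here is a minimal sketch in plain python (the function and variable names are just mine, for illustration) of what that "deciding whether the prediction is correct" part looks like, using a simple mean squared error loss:

```python
# a minimal sketch of how a machine "judges" its own predictions:
# the loss is just a number measuring how wrong the output was.

def mean_squared_error(predictions, targets):
    """average of the squared differences: big = very wrong, small = close."""
    assert len(predictions) == len(targets)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

# hypothetical example: the model predicted these values, the real answers follow
predicted = [2.5, 0.0, 2.1, 7.8]
actual = [3.0, -0.5, 2.0, 7.5]

print(mean_squared_error(predicted, actual))  # 0.15, so not too far off
```

no guilt, no responsibility, just arithmetic, which is kind of my whole point.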