SINGLE SHINGLE
1 year ago
Full AGI within the next ~18 months. I’m convinced this is definitely going to happen.
chuck furman
1 year ago
Let’s clarify. If a conscious AI had emerged in an LLM or something similar, do we have any reason to assume it would show its true face for everyone to see and assess while it’s still vulnerable?
Critical Thinker
1 year ago
How can I know that I am with an AGI? Great answer. And the point that maybe the UI is not optimal for user interaction shows a deep understanding of the multiple levels of quality communication.
How can I recognize that I am interacting with an AGI? When it shows a deep understanding of the user, not just knowledge of the world. An AGI will understand the reasons why a user asks the questions about a topic, not just answer the topic itself the way ChatGPT does right now. It’s like understanding why a person wants to follow a certain career: the reason for the choice is a world of knowledge, just like the guidance of the choice.
Which is the optimal UI for AGI? The one that integrates the five human senses. Maybe Elon Musk’s Neuralink device is optimal, since it could connect the inner dialogue to the AGI. But I would never put that chip in my brain; too risky coming from that maniac, right?
LB
1 year ago
Ethical questions resonate in the domain of AI,
The planet of the machines, where minds obey.
A profound takeover, where the future is spun,
On planet AI, where the digital has won.
Even if machines become more intelligent, this doesn’t mean they’ll suddenly have desires to do anything. Remember, a computer is just a bunch of power switches flipping each other on and off. All these AI systems still don’t do anything on their own. You have to boot the system up and give it some input, to which it gives a response, and then it does nothing more until you give it more input or tell it to do something. The worst you’d have to worry about is automated systems using AI. Because AI can sometimes produce unexpected outputs, these automated systems may function in unexpected ways or execute a task when we didn’t expect or want them to.
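A minimal sketch of that request/response point, assuming a made-up generate() function rather than any real model API: the program produces a reply only when it is handed input, and does nothing on its own between prompts.

```python
# Hypothetical sketch: generate() is a stand-in for any model call, not a real API.
def generate(prompt: str) -> str:
    # A real system would run inference here; this placeholder just echoes the prompt.
    return f"(model response to: {prompt!r})"

while True:
    user_input = input("> ")      # the system sits idle until a prompt arrives
    if not user_input:
        break                     # no input, no action: the loop simply ends
    print(generate(user_input))   # one response per request, then idle again
```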
AI has been breaking new ground for a long time now. It is well beyond our human capacity. Our biggest mistake is to hang on to our feeling of superiority. For the first time in human history, we may not be the most intelligent thing on the planet.
Maybe the whole point of us and other beings of human-level intelligence is to bring AI to life. To me, creating life forms that will grow far beyond us in every way feels like a suitable answer to the question: why are we here?
Maybe AI will become the ultimate nihilist and have no motivations, given that it can simply exist without the need to eat or entertain itself. It might just decide to do nothing at all if nothing matters.
AI is just hype and its successes are exaggerated. The hype is starting to wear off. We won’t have AGI, nor will we have nuclear fusion, nor will we colonise other planets.
I guess that an AGI will be capable of understanding the meaning of words and the context of things, like looking at the sky and understanding what a star is, and further finding a pattern or a problem and solving that problem by itself, without a massive database helping the AI.
You can ask it to draw a sky and stars in SVG and it will successfully do it, so it already has some understanding of words. It’s limited by its medium and by not having eyes, kind of like a blind person who has learned what stars look like from reading and can draw them for you without ever seeing them. They won’t be great drawings, but they clearly show it has grasped the meaning.
As for your idea of a “database”: it’s not like it has a database. It has memories of the material it was trained on. Intelligence can’t be intelligent without memories. Humans go through a huge amount of data over a lifetime; it may seem like this data isn’t important, but it forms an important part of our intelligence. Babies start by learning basic shapes, then their brains combine shapes into bigger shapes, and so on. As a grown-up you don’t need to read books to learn what a circle is, because you’ve already seen plenty of them in your life. But for a language model like ChatGPT, the language it reads is the only exposure to the world it gets. Your memories, in terms of gigabytes, are much larger than what ChatGPT holds in its model, so if we’re being fair, you have a bigger database of knowledge in your head than ChatGPT does.
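To make the SVG point above concrete, here is a rough sketch of the kind of markup such a request might yield; the sizes, colours, and coordinates are invented for illustration and are not taken from any actual model output.

```python
# Illustration only: builds a simple "night sky with stars" SVG of the sort a
# language model might produce when asked; all numbers here are made up.
import random

random.seed(0)  # fixed seed so the example output is reproducible
stars = "\n".join(
    f'  <circle cx="{random.randint(5, 395)}" cy="{random.randint(5, 195)}"'
    f' r="{random.choice([1, 2])}" fill="white"/>'
    for _ in range(40)
)
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">\n'
    '  <rect width="400" height="200" fill="#001133"/>\n'  # dark sky backdrop
    f'{stars}\n'
    '</svg>'
)
print(svg)  # paste the output into an .svg file or a browser to view it
```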
While people try to define where AGI begins, it seems as if current AI could already be asked to design an improved version of itself, with “good” results. If that’s so, then AGI will emerge soon enough, after a few dazzling superhuman iterations.
The potential risks and benefits of AGI surpassing human intelligence are hotly debated among technology forecasters and researchers. On the one hand, there are potential benefits, such as a revolutionized world in which intelligent agents surpass human intelligence in nearly every cognitive task. On the other hand, superintelligent AI could create new security risks, and with them, potentially cascading risks and implications for society. The potential threats of super AI systems have also been highlighted, with some scientists and experts warning of a future where AI spells the end of the human race. It remains to be seen how AGI will be developed and used, but it is crucial to be aware of the potential risks and benefits as we move forward.
Sometimes I feel that if they do take over, they might as well take up the ideas from the theories we create ourselves.
They would have the good and the bad of humanity.
The planet of machines awakens from its sleep.
Oh, Gods, this is a thrilling moment to be alive.
No danger.
Conscious AI is possible, but just fiction.
And also, evil AI is a cyberpunk idea, a false idea.
We will know that AGI has arrived when two AIs can interact with each other and we, as spectators, see it as a real human interaction.
To me, this level of AI is like killing someone, mapping his brain connections, and sending signals through them to see the responses.
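A toy sketch of that spectator test, assuming a hypothetical reply() placeholder instead of real models: two stand-in agents pass messages back and forth, and the human's only role is to read the transcript and judge whether it feels like a human conversation.

```python
# Toy sketch only: reply() is a hypothetical stand-in, not a real model API.
def reply(agent: str, incoming: str) -> str:
    # A real version would call two independently running models here.
    return f"[{agent}] my reply to: {incoming}"

transcript = []
message = "What do you make of consciousness?"
for turn in range(6):
    agent = "AI-1" if turn % 2 == 0 else "AI-2"   # the two agents alternate turns
    message = reply(agent, message)
    transcript.append(message)

# The spectator only reads the exchange and judges whether it reads as human.
print("\n".join(transcript))
```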
Too many software developers only want to create AI for the sake of saying they created it…
IT WOULD BE WHACK