SpyderTracks
We love you Ukraine
So we're all used to basic AI systems like Alexa, Siri, etc., but these are really just clever search systems, although they do "learn" based on our interaction with them.
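To illustrate what I mean by "clever search", here's a toy sketch, with entirely made-up names and nothing like the real systems, of how an assistant might map what you say onto a known intent and then just run a lookup:

```python
# Toy sketch of the "clever search" idea behind assistants like Alexa/Siri.
# Everything here is hypothetical; real assistants use far more sophisticated
# speech recognition, intent models, and ranking, but the rough shape is this:
# match the utterance to a known intent, then run a search for the answer.

INTENTS = {
    "weather": ["weather", "rain", "temperature", "forecast"],
    "timer":   ["timer", "remind", "alarm"],
    "music":   ["play", "song", "music"],
}

def classify_intent(utterance: str) -> str:
    """Pick the intent whose keywords overlap most with the utterance."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & set(kws)) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

def handle(utterance: str) -> str:
    intent = classify_intent(utterance)
    if intent == "weather":
        return "Looking up today's forecast..."  # i.e. a search query
    if intent == "timer":
        return "Setting a timer."
    if intent == "music":
        return "Playing something you've liked before."  # learned preference
    return "Sorry, I didn't catch that."

print(handle("what's the weather like today"))  # -> Looking up today's forecast...
```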
These systems are huge in scope and are literally learning from the data they receive from the many millions of endpoints attached to them around the world, all constantly feeding them.
But behind the scenes, there has been a lot of work on creating much more proficient AI systems.
You may have heard a few years ago that one of Elon Musk's endeavours, OpenAI, had its DALL-E 2 AI start generating its own language, so that its creators couldn't understand it anymore:
An Image Generation AI Created Its Own Secret Language But Skynet Says No Worries
A researcher claims that DALL-E, an OpenAI system that creates images from textual descriptions, is making up its own language.
hothardware.com
Facebook's AI also did the same: two AIs were secretly communicating together, which made them shut down the program (disclaimer: this was later refuted by Facebook, but I wouldn't touch them with a bargepole):
Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future
In a glimpse at what the beginning of the technological singularity might look like, researchers at Facebook shut down an artificial intelligence platform after the bots went off script and developed a unique language that humans could not understand.
www.forbes.com
Google's DeepMind AI similarly ended up becoming highly aggressive:
Google's AI Has Learned to Become "Highly Aggressive" in Stressful Situations
We've all seen the Terminator movies, and the apocalyptic nightmare that the self-aware AI system, Skynet, wrought upon humanity.
www.sciencealert.com
And now there's a brand new endeavour, again by Google: the LaMDA (Language Model for Dialogue Applications) chatbot development system:
Google engineer put on leave after saying AI chatbot has become sentient
Blake Lemoine says system has perception of, and ability to express thoughts and feelings equivalent to a human child
www.theguardian.com
Again, Google have publicly refuted this, but they do a lot of government-related contracts, and if they didn't refute it they'd probably get heavily sued. I strongly suspect this is supposed to be a Top Secret project that the whistleblower has aired. All this stuff needs more transparency to keep people honest.
To me this is progress: the AI learns from the quality of the data it is fed by us humans and, of course, from the quality of its underlying code structure. This one suggests there's been significant improvement in both areas, as it seems genuinely concerned with bettering itself and others, which is a marked departure from the progress of previous AIs.
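To make that concrete, here's a deliberately tiny, hypothetical sketch of the core idea behind these chatbots: the model only learns statistics from whatever text it's fed, so the quality of the data (and of the code around it) caps the quality of the output:

```python
# Minimal sketch of the idea behind language-model chatbots like LaMDA:
# the model learns statistics from the text it is fed, so its output is
# only as good as its data. This toy bigram model is an assumption-laden
# illustration, nothing like the real architecture.

import random
from collections import defaultdict

def train(corpus: list[str]) -> dict:
    """Count which word tends to follow which -- the 'learning' step."""
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows: dict, start: str, length: int = 8) -> str:
    """Walk the learned statistics to produce text."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample from what was seen
        out.append(word)
    return " ".join(out)

# Feed it better data and the same code produces better text: the model
# is bounded by its corpus (and, of course, by its underlying code).
corpus = [
    "the model learns from the data it is fed",
    "the data shapes what the model can say",
]
model = train(corpus)
print(generate(model, "the"))
```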
Mind-blowing stuff, though. The fact that this level of real AI is even possible within the limitations of our current computing architecture is astounding. I didn't believe this kind of thing would be possible until we achieved quantum computing.