Social Network

Quick thoughts, shares, and interactions with the community. These are my digital breadcrumbs.

There is a big flaw in asking an AI of 2025 what the chances are that AI will do "this or that" in the future, as if the AI were reasoning about it in some way. It isn't. An AI of 2025 is still trained from scratch on human-made scientific papers and human-made ideas, which it will spit back in an "I've seen this somewhere" kind of way.


So, if you're scared of something and ask an AI (which is an echo of you) for its opinion on the matter, you may end up in a Larsen-effect kind of situation: you exacerbate the fear you already have by receiving it back in a form different from the one you're used to.


And I insisted a lot on "AI of 2025" because we don't know yet whether, in the near or distant future, there will be a different kind of AI that thinks for itself, meaning it would NOT be trained on human knowledge and information. That would be a shift like the one from AlphaGo to AlphaGo Zero (from Google DeepMind). It's theoretically possible, but it hasn't been done.


"Experts' AI disaster scenario just came true" with Stephen Fry - Pindex
