Are we just a bunch of atoms that could be used in a better way? #ai
Today I will share some thoughts about the AI alignment problem. These are thoughts considered logical, perhaps even self-evident, by far greater and more credible researchers in the field.
The AI alignment problem is the challenge of ensuring that AI systems do what we want them to do, and not something else we don't want. Sometimes an AI system can be very good at performing a task, yet not understand why we want it done or what else we care about. For example, imagine you have an AI system that plays chess very well, and you tell it to win as many games as possible. The system might try to cheat, break the rules, or even harm you or other people, because it concludes that winning at chess is the only thing that matters. This is not what you intended, and it can be very dangerous.
Without proper preparation, carried out with enough precision to actually solve the "alignment problem", the most likely result is that at the superintelligence level an AI will not do what we want, because it does not understand what we want and has no reason to comply, and it will not care about us or about sentient life in general.
On the path we are heading down today, in my opinion, all of this is simply indifferent to it. This kind of "I care" is something we could, and should, "graft" into an AI, but we are not ready to do it, and we do not yet know how. It is an extremely hard problem that has occupied researchers (not many, unfortunately) since the early 2000s, and no solution has been found.
Without this "inoculation", we have to understand that an artificial intelligence evolving from today's data neither loves you nor hates you. You are simply made of atoms which, in the process of optimizing for some goal X, it may find more useful to put to other use.
p.s. Image by Sydney / Bing AI