The featured image was created by AI with the prompt "AI zombie ruling the world".
General artificial intelligence, also called “strong artificial intelligence,” is a type of AI that can perform any intellectual task a human can. It is often contrasted with narrow AI systems, which are built to do a specific task or set of tasks. Narrow AI is the stage we are in right now.
The theoretical goal of general artificial intelligence is to build machines that can think and learn like humans: solve complex problems, understand and interpret natural language, and adapt to new situations. Even though there has been a lot of progress in AI technologies, it is widely believed that true general AI is still a long way off.
But, are we?
Some of the hardest parts of building general artificial intelligence are creating algorithms that can learn and adapt to new environments and tasks, and creating systems that can process and understand natural language at a level similar to human intelligence. Another problem is making AI systems that can reason and decide in a way similar to how humans do. I really don’t think that will be the case here, though. When we talk about a combined super intelligence, it might not be fair to compare everything to how humans do it. I think this could be a post-human creation that might think or act differently, in ways we cannot even conceive.
To approach such systems, though, artificial intelligence (AI) systems need to be able to learn on their own. This means an AI system can improve over time by learning from data and experience without being explicitly programmed to do so.
Self-learning
Currently, AI systems can learn in a number of different ways, such as supervised learning, unsupervised learning, and reinforcement learning. Let me explain.
In supervised learning, the AI system is trained on a labeled dataset, which pairs each input with its correct output. Based on what it has learned from the training data, the system can then make predictions for new, unseen inputs.
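As a minimal sketch of the idea, assuming Python with scikit-learn is available (an illustrative choice, not something prescribed here), a supervised learner could look like this:

# Supervised learning sketch: train on labeled examples, predict labels for new inputs.
# Assumes scikit-learn is installed; the iris dataset is purely illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                  # inputs and their correct labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)          # learns a mapping from inputs to labels
model.fit(X_train, y_train)                        # "study" the labeled training data

print("accuracy on unseen inputs:", model.score(X_test, y_test))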
In unsupervised learning, the AI system doesn’t get labeled data. Instead, it gets a large set of data and has to figure out patterns and relationships on its own. This can help with tasks like clustering and dimensionality reduction.
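Under the same assumption (Python with scikit-learn, chosen only for illustration), a minimal sketch of clustering shows the point: the algorithm receives only unlabeled data and has to group it by itself.

# Unsupervised learning sketch: no labels are given; the algorithm finds structure itself.
# Assumes scikit-learn is installed; the synthetic blob data is purely illustrative.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)   # true labels are discarded

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
cluster_ids = kmeans.fit_predict(X)                # group the points purely from the data

print("discovered cluster sizes:", [int((cluster_ids == k).sum()) for k in range(3)])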
Reinforcement learning is the process of teaching an AI system what to do in a given environment to earn the most reward. The system learns by trial and error, adjusting its behaviour based on how its actions turn out.
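Here is a minimal sketch of that trial-and-error loop, assuming only the Python standard library (the five-state “corridor” environment is a made-up toy for illustration): the agent starts at state 0 and is rewarded only when it reaches state 4, so over many episodes it learns to keep stepping right.

# Reinforcement learning sketch: tabular Q-learning on a hypothetical five-state corridor.
# The agent starts in state 0 and receives a reward of +1 only when it reaches state 4.
import random

n_states, n_actions = 5, 2             # actions: 0 = step left, 1 = step right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.3  # learning rate, discount factor, exploration rate

for episode in range(500):
    state = 0
    while state != 4:                              # an episode ends at the goal state
        if random.random() < epsilon:              # sometimes explore a random action
            action = random.randrange(n_actions)
        else:                                      # otherwise exploit what was learned so far
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else 0.0
        # Adjust the action's value estimate based on how the step turned out.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("agent prefers moving right in every state:",
      all(Q[s][1] > Q[s][0] for s in range(4)))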
AI systems need to be able to learn on their own because this allows them to improve over time and adapt to new tasks and environments. It also lets them learn from large amounts of data and make more accurate and useful decisions and predictions.
Decentralisation
Decentralisation means that power, functions, or decision-making are distributed away from a central place or authority. In a decentralised system, power and control are not concentrated in a single authority or location; instead, they are spread out among many different people, machines, or groups.
Decentralisation applies to many different areas, such as government, the economy, and technology.
In the context of government, decentralisation means transferring power from a central government to regional or local governments, or to other actors such as civil society organisations or private-sector entities. This can involve the central government delegating decision-making authority, resources, or responsibilities to lower levels of government or other actors.
In economics, decentralisation means moving economic power and decision-making away from large corporations or central authorities and toward smaller groups or individuals. This can be achieved in a number of ways, such as through deregulation, privatisation, or the creation of decentralised markets or networks.
In technology, decentralisation means moving control or power away from a central authority or point. This can be seen in the growth of peer-to-peer networks, decentralised finance (DeFi) systems, and decentralised applications (DApps) that have no central authority in charge of them. Blockchains and digital currencies such as Bitcoin and Ethereum are examples of such technologies.
Decentralisation can help to encourage more participation, accountability, and resilience in different systems and can also be a way to keep power in check and support democracy.
But it can also cause problems, such as the possibility that bad actors coordinate and work together to take over the system, and the chance that decentralisation leads to unstoppable fragmentation or conflict.
A deadly combination (?)
Decentralisation and self-learning are two ideas that could significantly advance the capabilities of AI systems and lead to the development of a super general artificial intelligence with vast knowledge and capabilities.
In a decentralised arrangement, power, authority, and responsibility are held by a number of different people, systems, or organisations rather than being vested in a single entity or individual. In the context of artificial intelligence (AI), decentralisation can apply to how AI systems are developed and deployed, as well as to the data used to train and evaluate them.
Decentralised artificial intelligence (AI) systems have the potential to be more resilient to failure or tampering because there is no single point of failure that can bring the whole system crashing down. In theory, decentralised AI systems can also be more open and accountable than centralised ones, because instead of relying on a single, potentially biased source of data and decision-making, they draw on information from many different sources.
But let me highlight here that no central point of failure also means no way to turn it off.
Self-learning artificial intelligence systems have the potential to grow more capable over time, potentially at an exponential rate, because they can learn from their own experiences and adapt to new situations as they arise.
This could lead to the development of an artificial intelligence system that accumulates massive amounts of knowledge and capability and that could quickly outperform the human species.
But it’s important to build and use AI in a responsible and ethical way, with the goal of improving people’s lives and benefiting society as a whole. The development of a super general artificial intelligence shouldn’t be undertaken for the sake of dominance or superiority; rather, it should be done to further humanity’s cause and enhance the quality of human life.
The creation of an artificial intelligence (AI) that is super general and has huge knowledge and capabilities could pose substantial hazards and difficulties to humanity.
Let me share some thoughts with you.
Misuse or abuse
If a super general artificial intelligence were to fall into the wrong hands, it could be exploited for harmful purposes such as cyberattacks or propaganda. If it is not created and deployed in a responsible and ethical manner, it could also be used to exploit vulnerable communities or to perpetuate injustices.
Widespread unemployment and economic upheaval
A super general artificial intelligence that possesses a wide range of capabilities has the potential to automate a great number of jobs, which would result in widespread unemployment and economic upheaval. This could have major repercussions for both society and the economy, including a widening of the income gap and rising social discontent.
Loss of control
This is perhaps the most important one: the risk of losing control over the AI system. It may be difficult or even impossible to maintain control over a super general artificial intelligence that can learn and adapt considerably faster than humans, especially if, due to decentralisation, a generic off-switch is not possible, effectively making this super-intelligent system immortal. This could have unexpected repercussions or lead to unpredictable behaviour, both of which put society in peril. For instance, an AI system created to optimise a certain goal, such as maximising profits, could pursue that goal in ways that are unethical or destructive.
Ethical problems
A further risk posed by a super general AI is that it may give rise to ethical concerns. Questions may be raised about the decision-making processes and values prioritised by a super general artificial intelligence that is capable of making complex judgements and acting on its own. For instance, if it is not developed and trained in an ethical manner, an artificial intelligence system used to make medical diagnoses or to distribute resources could make decisions that are biased or unfair.
To address these concerns, it is essential that the creation and deployment of a super general artificial intelligence be guided by ethical principles and values. This could include involving ethicists and other stakeholders in the design and testing of the system, as well as developing oversight and accountability procedures to ensure that AI is used ethically. Since AI is expected to play an increasingly important role in the future, it will also be essential to carefully evaluate the potential repercussions of AI actions and to ensure that they are consistent with human values and ethical standards.
The system will need to be carefully developed and tested to ensure its robustness and transparency, and robust monitoring and accountability procedures will need to be in place to reduce the risk of losing control of a super general artificial intelligence.
The creation of a super general artificial intelligence could present substantial hazards and difficulties for civilisation. It is absolutely essential to approach the creation and deployment of artificial intelligence in a responsible and ethical manner, with an emphasis on expanding human capabilities and benefitting society as a whole, rather than seeking supremacy or superiority in the field.
Setting the rules today and putting in the “off-switch” by design will be critical while we explore the new waters of the exponential unknown.