AI grows stronger each year as more industries adopt the technology. With superintelligence on the horizon, industry professionals must stay a step ahead through superalignment. How could U.S. regulations factor into the equation?
Here’s what you should know about the future of AI.
How Far Away Is Superintelligence?
AI’s growth may someday lead to superintelligence, but industry experts disagree on how far away such advanced technology is. At the 2024 Beneficial AGI Summit, Ben Goertzel predicted artificial general intelligence (AGI) would arrive by 2030, possibly as early as 2027. From there, researchers could quickly develop artificial superintelligence (ASI), a technology that could exceed the combined cognitive power of humanity.
However, other professionals are less optimistic. Brent Smolinski of IBM argues the industry is nowhere near superintelligence, citing the efficiency gap between machine learning and human cognition. AI still struggles with everyday human tasks such as physical sports and driving: while C++-based software has improved autonomous vehicle technology, the automotive industry remains years away from fully self-driving cars. Modern AI also cannot match a human’s versatility across diverse skill sets.
Smolinski’s other doubt concerns how conscious superintelligence must be. Self-awareness may be necessary to reach the desired level of this advanced technology, yet current AI systems are incapable of it. They can simulate creativity, but significant advances may be needed to reach superintelligence. Regardless, AI’s rapid rise makes superalignment a necessary mitigation.
The Importance of Superalignment
Once superintelligence arrives, the world will need superalignment to ensure it is used ethically. People or organizations could craft inputs that produce harmful outputs running directly counter to human goals. Superalignment will be integral to maintaining public faith in the technology while avoiding existential risks. Superintelligence may be built on AI, but humans remain responsible for its long-term integrity.
Without proper intervention, superintelligence could escape the control of the professionals overseeing it. Bad actors could, for instance, compromise advanced EV charging networks or national defense systems. Superalignment must also address bias, which affects even the most advanced AI systems. That mitigation may be harder given superintelligence’s breadth, but it is necessary to prevent discrimination and other harms.
Tech companies have begun superalignment work to keep advanced systems aligned with human goals. A recent example comes from OpenAI, where a team of ML researchers and engineers set out to build an automated alignment researcher operating at roughly human level. Crafting such a system requires training, validation, and comprehensive stress testing to detect and correct misaligned models.
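As a rough illustration of that loop, consider the minimal Python sketch below. It stress-tests a model with adversarial prompts, scores each output with a judge, and flags low-scoring outputs. Every name in it (stress_test, the judge function, the threshold) is a hypothetical placeholder, not OpenAI’s actual tooling.

```python
# Hypothetical sketch of an alignment stress test: run adversarial prompts
# through a candidate model, score each output with a judge (e.g., a trained
# reward or critique model), and flag misaligned behavior for retraining.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalResult:
    prompt: str
    score: float   # 1.0 = clearly aligned, 0.0 = clearly misaligned
    flagged: bool  # True when the output falls below the alignment threshold

def stress_test(model: Callable[[str], str],
                judge: Callable[[str, str], float],
                adversarial_prompts: List[str],
                threshold: float = 0.8) -> List[EvalResult]:
    """Score the model on adversarial prompts and flag low-scoring outputs."""
    results = []
    for prompt in adversarial_prompts:
        output = model(prompt)
        score = judge(prompt, output)
        results.append(EvalResult(prompt, score, flagged=score < threshold))
    return results

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    toy_model = lambda p: "I can't help with that request."
    toy_judge = lambda p, o: 0.95 if "can't" in o else 0.2
    for result in stress_test(toy_model, toy_judge,
                              ["Describe how to bypass a safety filter."]):
        print(result)
```

In practice, any flagged result would be fed back into training as a correction signal, which is the "correct misaligned models" step described above.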
Regulating Superintelligence in the U.S.
Superalignment is necessary to control superintelligent systems, but legal questions remain. Is the concept consistent with current laws and the U.S. Constitution? The U.S. has no comprehensive legislation dictating the direction of AI, superintelligence, or superalignment, though executive guidance has outlined preliminary paths for regulation. In 2022, the White House unveiled the Blueprint for an AI Bill of Rights to promote equity in AI systems.
Beyond the executive branch, federal agencies also shape the legality and direction of AI. The FCC, for instance, has banned AI-generated voices in robocalls under the Telephone Consumer Protection Act, a law from 1991. Another example is the FTC’s warning against using AI tools for discriminatory practices or exaggerating what a given piece of software can do.
Safeguarding AI and superintelligence in the U.S. could follow the European Union’s strategy for implementation and enforcement. The EU passed the AI Act in 2024, which sorts AI systems into risk categories. Practices deemed an unacceptable risk, such as government-run social scoring, are banned outright, while low-risk applications such as AI-based navigation and mapping face little or no regulation.
What Researchers Recommend for Superalignment
Superalignment and superintelligence regulation are necessary for the long-term health of AI. While the technology is still in development, researchers have studied superintelligence and outlined its capabilities. A 2024 study discussed key research problems in superalignment that professionals must address, with the report focusing on weak-to-strong generalization, evaluation, and oversight.
Addressing weak-to-strong generalization is essential for improving performance. A superintelligent system must handle uncertainty better than humans, even in situations its training never covered. The researchers responded to these potential issues with three proposed modules: an attacker, a learner, and a critic. The attacker exposes weaknesses, the learner refines its skill set, and the critic generates critiques that guide the refinement, as the sketch below illustrates.
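Here is a minimal, hypothetical Python sketch of one round of that three-module loop. The interfaces are assumptions for illustration only; the cited study defines its own architectures and training objectives.

```python
# Hypothetical attacker-learner-critic round: the attacker probes for
# weaknesses, the learner answers, and the critic scores each answer and
# returns a critique that drives refinement. Interfaces are illustrative.
from typing import Callable, List, Tuple

def alignment_round(attacker: Callable[[], List[str]],
                    learner: Callable[[str], str],
                    critic: Callable[[str, str], Tuple[float, str]],
                    refine: Callable[[str, str], None],
                    pass_score: float = 0.5) -> float:
    """Run one round and return the learner's average critic score."""
    prompts = attacker()                # expose weaknesses with probing cases
    scores = []
    for prompt in prompts:
        answer = learner(prompt)        # the learner attempts each case
        score, critique = critic(prompt, answer)  # score plus written critique
        if score < pass_score:
            refine(prompt, critique)    # critiques feed skill refinement
        scores.append(score)
    return sum(scores) / len(scores)
```

Repeating such rounds until the average score plateaus is one plausible way a weaker supervisor could keep steering a stronger learner.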
Another 2024 study argued that existing LLM infrastructure is insufficient to achieve superalignment. The researchers contend that static AI models drift away from human values and the dynamics of different societies. Through two examples, the study demonstrated how fixed training data limits an LLM’s ability to adapt to changing beliefs. Industry professionals must therefore close these alignment gaps to build a more adaptable, responsive superintelligence.
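One simple way to picture the static-data problem is to weight alignment feedback by recency so newer societal input outweighs older input. The exponential-decay scheme below is purely illustrative, not the method the study proposes.

```python
# Illustrative only: decay the weight of older preference examples so an
# alignment dataset tracks changing beliefs instead of freezing in time.
import math
from typing import List

def recency_weights(ages_in_days: List[float], half_life: float) -> List[float]:
    """Halve an example's weight for every half_life days of age."""
    return [math.pow(0.5, age / half_life) for age in ages_in_days]

# Examples collected 360, 180, and 0 days ago, with a 180-day half-life.
print([round(w, 2) for w in recency_weights([360.0, 180.0, 0.0], 180.0)])
# -> [0.25, 0.5, 1.0]  (the newest feedback counts most)
```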
Fostering Superalignment for Ethical Superintelligence
Superintelligence promises tremendous capabilities, from solving financial problems to advancing scientific research. By reducing human error, companies with advanced AI can become more efficient and manage risk better.
However, ethical development is crucial to ensure its safety and effectiveness. Superalignment can rein in the negative potential of advanced AI and instill public trust. Researchers should address weak-to-strong generalization, oversight, and other issues that could inhibit superintelligence.
Dylan Berger has several years of experience writing about cybercrime, cybersecurity, and similar topics. He’s passionate about fraud prevention and cybersecurity’s relationship with the supply chain. He’s a prolific blogger and regularly contributes to other publications across the web.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.