#ChatGPT and Web3
Last week, ChatGPT, the dialogue-based AI chatbot capable of understanding natural human language, took the world by storm. Gaining over 1 million registered users in just 5 days, it became the fastest-growing tech platform ever. ChatGPT generates impressively detailed, human-like written text and thoughtful prose from a simple text prompt. It also writes code. The Web3 community was intrigued, curious and shocked by the power of this AI chatbot.
Now that #ChatGPT can write, scan and hack smart contracts, where do we go next?
The ChatGPT AI code writer is a game changer for Web3, and it can cut two ways:
- Near-instant security audits of smart contract code to find vulnerabilities and exploits, both in contracts already deployed and in code prior to deployment (a prompt sketch follows this list).
- On the flip side, bad actors can direct AI to find vulnerabilities to exploit in smart contract code; thousands of existing smart contracts could suddenly find themselves exposed.
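To make the first point concrete, here is a minimal sketch of how a dev or auditor might ask a model to review contract source before deployment. It assumes the official OpenAI Python client; the model name, the prompt wording and the deliberately vulnerable example contract are all illustrative, not a recommendation.

```python
# Minimal sketch of an AI-assisted smart contract audit.
# Assumes the official OpenAI Python client (openai>=1.0); the model
# name and prompt are placeholders for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

contract_source = """
pragma solidity ^0.8.0;
contract Vault {
    mapping(address => uint256) public balances;
    function withdraw() external {
        uint256 amount = balances[msg.sender];
        // External call before the state update: classic reentrancy risk
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;
    }
}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable model
    messages=[
        {"role": "system",
         "content": "You are a smart contract security auditor. "
                    "List vulnerabilities with severity and a suggested fix."},
        {"role": "user", "content": contract_source},
    ],
)
print(response.choices[0].message.content)
```

A review like this is cheap enough to run on every commit, which is what makes the near-instant audit scenario plausible.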
The Naoris Protocol POV:
- In the long term, this will be a net positive for the future of Web3 security.
- In the short term, AI will expose vulnerabilities that need to be addressed, and we could see a spike in breaches.
- AI will illuminate where humans need to improve.
For Web3 devs & development (pre-deployment)
Web3 developers and auditors will be in lower demand. The future may look like this:
- Devs will instruct AI and use it to write and generate code.
- Devs will read and critique the AI's output, learning its patterns and looking for weak spots.
- Auditors will need to understand the AI's characteristic errors, mistakes and code patterns.
- Auditors will need to learn the limitations of AI.
- AI will work in tandem with dev teams to strengthen future code and systems.
- AI will be part of the development-to-production pipeline (see the gate sketch after this list).
- For devs and auditors, it will be survival of the fittest.
- Only the best, those who can work with, instruct and evaluate AI, will survive.
- Dev teams will shrink once an AI joins the team.
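One way AI could sit in the development-to-production pipeline is as a review gate that blocks a deploy when the model flags critical issues. This is a sketch under assumptions: the OpenAI Python client, a git-based workflow, and the PASS/BLOCK reply convention and helper names (staged_diff, ai_gate) are invented for the example.

```python
# Sketch of an AI review gate in a deploy pipeline (illustrative only).
import subprocess
import sys

from openai import OpenAI

client = OpenAI()

def staged_diff() -> str:
    """Collect the changes about to ship, here via git."""
    return subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

def ai_gate(diff: str) -> bool:
    """Ask the model for a verdict on the diff; True means safe to deploy."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Review this smart contract diff for security "
                        "issues. Answer PASS if none are critical, "
                        "otherwise BLOCK followed by the reasons."},
            {"role": "user", "content": diff},
        ],
    ).choices[0].message.content
    print(reply)
    return reply.strip().upper().startswith("PASS")

if __name__ == "__main__":
    # A non-zero exit stops the CI job, keeping flagged code out of production.
    sys.exit(0 if ai_gate(staged_diff()) else 1)
```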
For Web3 security (post-deployment)
- Swarm AI will be used to scan the status of smart contracts in near real time.
- Code will be monitored for anomalies, code injections and hacks (a monitoring sketch follows this list).
- The attacker's position shifts to finding bugs and blind spots in the AI itself, instead of in the code.
- This will hugely improve Web3 smart contract security (over $3 billion hacked in 2022 to date).
- This will also improve CISOs' and IT teams' ability to monitor in real time.
- Security budgets will be reduced, and cybersecurity teams will shrink in number.
- Only those who can work with and interpret AI will be in demand.
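For a flavour of what near-real-time contract monitoring might look like, here is a toy watcher that checks two simple signals: whether the deployed bytecode changes (as in a malicious proxy upgrade) and whether transaction volume spikes. It assumes web3.py v6 and a JSON-RPC endpoint; the RPC URL, contract address and alert threshold are placeholders, and a production system would use far richer signals.

```python
# Sketch of near-real-time smart contract monitoring (illustrative).
import hashlib
import time

from web3 import Web3

RPC_URL = "https://rpc.example.org"                       # placeholder endpoint
CONTRACT = "0x0000000000000000000000000000000000000000"   # placeholder address

w3 = Web3(Web3.HTTPProvider(RPC_URL))

def code_fingerprint(address: str) -> str:
    """Hash the deployed bytecode; proxies can swap logic, so watch it."""
    code = w3.eth.get_code(Web3.to_checksum_address(address))
    return hashlib.sha256(code).hexdigest()

baseline = code_fingerprint(CONTRACT)
last_block = w3.eth.block_number

while True:
    time.sleep(12)  # roughly one Ethereum block
    if code_fingerprint(CONTRACT) != baseline:
        print("ALERT: deployed bytecode changed; possible malicious upgrade")
    head = w3.eth.block_number
    # Count transactions touching the contract in new blocks; a sudden
    # burst can indicate an exploit in progress.
    txs = sum(
        1
        for n in range(last_block + 1, head + 1)
        for tx in w3.eth.get_block(n, full_transactions=True).transactions
        if tx["to"] and tx["to"].lower() == CONTRACT.lower()
    )
    if txs > 50:  # arbitrary threshold for the sketch
        print(f"ALERT: {txs} transactions in {head - last_block} blocks")
    last_block = head
```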
Conclusion
AI is not a human being. It will miss basic context, knowledge and subtleties that only humans see. It is a tool that will catch vulnerabilities that humans have coded in error, and it will seriously improve the quality of smart contract code. But we can never trust its output 100%.
#ChatGPT / Web2 and Enterprise
Last week saw the release of ChatGPT, the dialogue-based AI chatbot capable of understanding natural human language. It took the world by storm, gaining over 1 million registered users in just 5 days and becoming the fastest-growing tech platform ever.
ChatGPT generates impressively detailed, human-like written text and thoughtful prose following a text prompt. In addition, ChatGPT can write and hack code, which is a potentially major issue from an infosec point of view. The AI can analyse code and find the answer in seconds, as in this tested example: https://twitter.com/gf_256/status/1598104835848798208
- Is the genie out of the bottle that will threaten traditional infosec and the enterprise?
- Is centralised AI a risk to the world?
- What if it were programmed with biases that tilt the AI's output towards evil?
- Remember Tay, the Microsoft AI bot that became a racist misogynist?
- Will AI aid hackers in phishing attacks, e.g. by shaping the language around social engineering, making attacks more powerful than they already are?
- Will adding safeguards be self-defeating?
The Naoris Protocol POV:
Artificial intelligence that writes and hacks code could spell trouble for enterprises, systems and networks. Current cybersecurity is already failing, with hacks rising exponentially across every sector in recent years; 2022 is reportedly already 50% up on 2021.
With ChatGPT now here, it can be used positively within an enterprise's security and development workflow, raising defence capabilities above current security standards. However, bad actors can also expand the attack surface, working smarter and much faster by instructing AI to look for exploits in well-established code and systems. Heavily regulated enterprises, in FSI spaces for example, would not be able to react or recover in time, given how current cybersecurity and regulation are configured.
For example, the current breach detection time, as measured in IBM's 2020 Cost of a Data Breach Report, is 280 days on average. With AI as part of the enterprise defence-in-depth posture, breach detection time could be reduced to less than one second, as the sketch below illustrates, and that changes the game.
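To illustrate the order-of-magnitude change, here is a toy sketch of real-time detection: each event is scored the moment it arrives instead of being found in a log review months later. It uses scikit-learn's IsolationForest as a stand-in for a production model; the features, baseline data and thresholds are invented for the example.

```python
# Toy sketch of streaming breach detection: score events as they arrive.
# The features, baseline and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of "normal" events: (bytes transferred, failed logins, hour of day)
normal = np.array([[500, 0, 9], [620, 1, 10], [480, 0, 14], [550, 0, 16]] * 50)
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

def score_event(event: list[float]) -> None:
    """Flag the event in real time; a prediction of -1 means anomalous."""
    if model.predict([event])[0] == -1:
        print(f"ALERT within milliseconds of the event: {event}")

score_event([500, 0, 10])        # normal traffic, stays silent
score_event([9_000_000, 40, 3])  # exfiltration-like burst at 3am, alerts
```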
The advent of AI platforms like ChatGPT will require enterprises to up their game: they will have to implement AI services within their security QA workflows before launching any new code or programmes.
Conclusion
Once the genie is out of the bottle, any side that isn't using the latest technology is in a losing position. So if there is offensive AI out there, enterprises will need the best defensive AI to answer it. It's an arms race over who has the best tools.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.