Following news that threat analysts have discovered ten malicious Python packages on the PyPI repository, used to infect developers' systems with password-stealing malware, cyber security experts reacted below.
Researchers have discovered 10 new malicious Python packages distributed via the #Python Package Index (PyPI) to harvest critical data points, such as users' passwords and API tokens. Read details: https://t.co/GaFQmVhOGF #infosec #cybersecurity #hacking #malware — The Hacker News (@TheHackersNews) August 9, 2022
Attacks where shared developer resources contain malicious instructions have been around since the beginning of computing. They are generally known as watering hole or poisoned well attacks. In 1984, Ken Thompson, one of the creators of Unix and the C language, described how he had embedded a Trojan horse into a compiler that others then unknowingly trusted, and in his Turing Award lecture and accompanying paper (https://dl.acm.org/doi/10.1145/358198.358210) warned that people should not simply trust other people's code. So, this has been a problem for a long, long time. It's a bit sad that it has persisted this long: not only do we keep making the same mistakes, but we seem doomed to keep making them forevermore.
Some of the biggest and most impactful companies of our time, including Microsoft, Google, and Apple, have been hit by these types of attacks. No organization that allows its developers to obtain and use other people's code is immune. All developers should be educated about watering hole attacks, which are not uncommon, and instructed either to inspect and verify every bit of code they download or not to use other people's code at all. It's just too risky to rely on other people's unverified code.
The benefits of leveraging community-driven open source software packages in corporate software projects are widely known and accepted today (reduced tech debt, shorter development time and time to market, etc.). To minimize the risks that accompany these benefits, it is important that organizations stay disciplined and take the necessary security precautions when introducing OSS packages into their projects. Organizations should have a security governance practice in place that scans OSS packages prior to use, rather than placing false trust in the public repositories that host these packages to weed out malicious code for them. Secondly, organizations should employ detection mechanisms that can immediately flag malicious code active within their digital supply chain or application ecosystem.
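One concrete precaution of the kind described above is pinning each downloaded package artifact to a known-good hash before use, the same idea behind pip's `--require-hashes` mode. The sketch below is a minimal illustration of that check; the file path and digest an organization would pin are placeholders, not values from this article.

```python
# Minimal sketch: verify a downloaded package artifact against a pinned
# SHA-256 digest before installing or importing it. Any path/digest used
# with these functions is an assumption supplied by the caller.
import hashlib


def sha256_of(path: str) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: str, expected_digest: str) -> bool:
    """Return True only if the artifact matches the pinned digest."""
    return sha256_of(path) == expected_digest
```

In practice this is what a requirements file with `--require-hashes` enforces automatically, so a tampered or typosquatted artifact fails the install instead of reaching developer machines.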
This is a recurring problem: we have seen incidents based on this form of typosquatting attack as well as incidents based on intentional or accidental breaches of security in open source supply chains. Essentially, it is a risk native to the idea of using someone else's code as an integral part of your own. You invest trust in an unknown individual or group, and pass that trust on to those who in turn depend on you. Most of the time it works out great, but you must always judge the risk and, as this case shows, be thorough even after doing so.
Using fake packages to distribute malware has been around for a long time, since package distribution sites, especially for Python, typically lack proper control mechanisms: any user can upload a new package as long as the name hasn't been used before.
In a talk at RootedCon, two analysts infected more than 800 devices in a white-hat exercise by doing exactly what the article describes: uploading modified packages with slight name changes (for example, creating the package “reqeusts” to trick users looking for the package “requests”).
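The “reqeusts”/“requests” trick above can be caught mechanically by measuring how close a candidate package name is to well-known ones. The sketch below uses plain Levenshtein edit distance against a small illustrative list of popular names; the list, threshold, and function names are assumptions for the example, not part of any real tooling mentioned in the article.

```python
# Sketch: flag package names within a small edit distance of popular
# packages. POPULAR and the max_dist threshold are illustrative choices.
POPULAR = ["requests", "numpy", "pandas", "urllib3"]


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def possible_typosquat(name: str, popular=POPULAR, max_dist: int = 2):
    """Return the popular package a name may be squatting on, else None.

    Exact matches (distance 0) are the legitimate packages themselves.
    """
    for p in popular:
        d = levenshtein(name, p)
        if 0 < d <= max_dist:
            return p
    return None
```

With this threshold, `possible_typosquat("reqeusts")` maps back to `"requests"` (the swapped letters cost two edits), while `"requests"` itself and unrelated names pass clean. Real defenses would also consider keyboard adjacency and homoglyphs, which plain edit distance misses.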
It’s not surprising though that this technique is becoming more “mainstream” after recent incidents came to light where actual developers of these packages sabotaged them in a hacktivist attempt to bring attention to social or political issues: https://www.bleepingcomputer.com/news/security/big-sabotage-famous-npm-package-deletes-files-to-protest-ukraine-war/
Interestingly enough, some companies are starting to take measures to prevent these types of incidents. For example, GitHub, a Microsoft company, is implementing code signing to help prevent supply chain attacks that try to infect repositories hosted on the platform: https://www.wired.com/story/github-code-signing-sigstore/
Malware is a common tool threat actors use to steal credentials and sensitive information. There is a broad range of malware families out there that do everything from secretly capturing users' movements to locking up systems. Organizations must mitigate such risks through constant backups, so data can be restored rapidly if it is locked, and through proven data-centric security to foil the attack itself. If data is neutralized using modern data-centric techniques, such as tokenization or format-preserving encryption, which keep it protected while still enabling its use and analysis in the enterprise and restrict access to live data to a minimum, attackers will get the equivalent of digital coal, not data gold.
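To make the tokenization idea concrete, here is a toy in-memory vault that swaps a sensitive value for a random, non-reversible token while keeping the mapping private. The class name, token format, and dictionary-backed storage are assumptions for illustration only; production tokenization uses hardened, audited vault services or format-preserving encryption, not a Python dict.

```python
# Toy tokenization sketch: sensitive values never leave the vault,
# downstream systems only ever see opaque tokens.
import secrets


class TokenVault:
    """Illustrative in-memory vault; real deployments use a hardened
    external tokenization service with access controls and auditing."""

    def __init__(self):
        self._forward = {}  # sensitive value -> token
        self._reverse = {}  # token -> sensitive value

    def tokenize(self, value: str) -> str:
        """Return a stable random token for the value (deterministic
        per vault, so analytics can still group and join on tokens)."""
        if value in self._forward:
            return self._forward[value]
        token = "tok_" + secrets.token_hex(8)
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Recover the live value; only trusted callers should reach this."""
        return self._reverse[token]
```

Because the same input always yields the same token within a vault, counts, joins, and other analytics keep working on tokenized data, while an attacker who exfiltrates only the tokenized dataset holds the "digital coal" described above.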
Information Security Buzz (aka ISBuzz News) is an independent resource that provides expert comments, analysis and opinion on the latest information security news and topics.