In addition, CISA is working with Juniper Networks to develop a patch for the vulnerabilities associated with the exploit chain. CISA is also working with other vendors to ensure that their products are not vulnerable to the exploit chain.
Category: Vulnerabilities
The State of Knowledge and Risk Management in Industrial Cybersecurity (ISA/IEC-62443-3-2)
The state of knowledge in industrial cybersecurity has matured considerably over the past decade, built on a vast body of practical experience, and there is a lot more to come soon.
Industrial PLCs worldwide impacted by CODESYS V3 RCE flaws
Industrial PLCs around the world are vulnerable to CODESYS V3 RCE flaws, potentially leading to serious security risks. Learn more about the potential impacts and how to protect your systems.
Cisco warns customers of a high-severity Cisco switch vulnerability.
Cisco has recently warned customers of a high-severity vulnerability impacting some of its switch models. This vulnerability could allow attackers to tamper with encrypted traffic, potentially leading to data theft or other malicious activities. Cisco has released a security advisory to address the issue and is urging customers to update their systems as soon as possible.
CISA Warns of Flaws in Siemens, GE Digital, and Contec Industrial Control Systems
CISA has issued a warning about critical vulnerabilities in Siemens, GE Digital, and Contec industrial control systems. These flaws could allow attackers to gain access to and manipulate the systems.
CISA Alert: Veeam Backup and Replication Vulnerabilities Being Exploited in Attacks
CISA has issued an alert warning of active exploitation of vulnerabilities in Veeam Backup and Replication. Organizations should take steps to protect their systems from potential attacks.
New attacks use Windows security bypass zero-day to drop Qbot malware
New phishing attacks use a Windows zero-day vulnerability to drop the Qbot malware without displaying Mark of the Web security warnings.
Serious Security: Linux Kernel Bugs That Emerged After 15 Years
Researchers from cybersecurity company GRIMM recently published an interesting trio of bugs they found in the Linux kernel… in code that had sat there without attracting attention for about 15 years.
Fortunately, it seems that no one else had looked at that code in all that time, at least not with enough diligence to spot the bugs, so they have now been patched and the three CVEs assigned to them are fixed:
- CVE-2021-27365. Exploitable heap buffer overflow caused by the use of sprintf().
- CVE-2021-27363. Kernel address leak caused by using a pointer as a unique ID.
- CVE-2021-27364. Buffer over-read leading to data leakage or denial of service (kernel panic).
The bugs were found in the kernel code that implements iSCSI, a component that implements the venerable SCSI data interface over the network, so you can talk to SCSI devices like tape and disk drives that aren’t directly connected to your own computer.
Of course, if you no longer use SCSI or iSCSI anywhere on your network, you’re probably shrugging your shoulders right now and thinking, “Don’t worry about me, I don’t have any of the iSCSI kernel drivers loaded because I’m just not using them.”
After all, buggy kernel code can’t be exploited while it merely sits on disk; it has to be loaded into memory and actively used before it can cause any problems. Except, of course, that most (or at least many) Linux systems not only ship with hundreds or even thousands of kernel modules in the /lib/modules directory tree, ready for use in case they are ever needed, but also come configured to allow suitably authorized applications to trigger automatic loading of modules on demand.
Note. As far as we know, these bugs were fixed in the following officially maintained Linux kernels, all dated 2021-03-07: 5.11.4, 5.10.21, 5.4.103, 4.19.179, 4.14.224, 4.9.260 and 4.4.260. If you have a vendor-modified kernel or an unofficially built kernel that is not on this list, check with your distribution maker. To check your kernel version, run uname -r at a command prompt.
For example, my own Linux system comes with almost 4500 kernel modules, just in case they are ever needed:
root@slack:/lib/modules/5.10.23# find . -name '*.ko'
./kernel/arch/x86/crypto/aegis128-aesni.ko
./kernel/arch/x86/crypto/blake2s-x86_64.ko
./kernel/arch/x86/crypto/blowfish-x86_64.ko
[… 4472 lines removed …]
./kernel/sound/usb/usx2y/snd-usb-usx2y.ko
./kernel/sound/x86/snd-hdmi-lpe-audio.ko
./kernel/virt/lib/irqbypass.ko
root@slack:/lib/modules/5.10.23#
I guess one day I might need the Blowfish encryption module, but since I don’t have any software that expects to use it, I could probably do without the blowfish-x86_64.ko driver.
And while I wouldn’t really mind having one of Tascam’s cool USX2Y sound cards (e.g. US122, US224, US428), I don’t have room for one, so I doubt I’ll ever need the snd-usb-usx2y.ko driver either.
However, there they are, and by accident or design, any of those drivers could end up loading automatically, depending on the software I use, even if I’m not running as a root user at the time.
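To make that “loaded automatically” point concrete, here is a minimal, hypothetical userland sketch (not taken from the GRIMM research) showing one common trigger: on many stock distribution kernels, merely asking for a socket family that isn’t currently registered prompts the kernel to load the matching module on demand, even from an unprivileged process. AF_APPLETALK is just an arbitrary example here; whether anything actually gets loaded depends on your distribution’s configuration.

/* autoload_demo.c - build with: gcc -O2 -o autoload_demo autoload_demo.c */
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    /* If appletalk.ko is installed but not loaded, this request may cause
     * the kernel to load it on demand via its internal module loader.   */
    int fd = socket(AF_APPLETALK, SOCK_DGRAM, 0);

    if (fd < 0)
        perror("socket");   /* e.g. EAFNOSUPPORT if autoloading is blocked */
    else
        printf("socket created; check lsmod for a newly loaded module\n");

    return 0;
}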
Worth a second look
The potential risk posed by unloved, unused and largely forgotten drivers is what prompted GRIMM to take a second look at the bugs mentioned above. The researchers were able to find software that an unprivileged attacker could run to activate the faulty driver code, and they were able to produce working exploits that could do any of the following:
- Escalate privileges so that a regular user ends up with kernel-level superpowers.
- Extract addresses from kernel memory to facilitate other attacks that need to know where kernel code is loaded into memory.
- Crash the kernel, and therefore the entire system.
- Read pieces of data from kernel memory that were supposed to be off limits.
As uncertain and limited in scope as that last exploit sounds, it seems that the data an unprivileged user might be able to see could include chunks of data transferred during access to genuine iSCSI devices. If so, this means, in theory, that a criminal with an unprivileged account on a server where iSCSI was in use could run an innocent-looking program in the background, sniffing out a random selection of privileged data from memory. Even a fragmented, unstructured stream of confidential data extracted intermittently from a privileged process (remember the infamous Heartbleed bug?) could allow dangerous secrets to escape. Don’t forget how easy it is for software to recognize and “scrape” data patterns, such as credit card numbers and email addresses, as they fly past in RAM.
The bugs revisited
Above, we mentioned that the first bug in this set was due to the “use of sprintf()”. That’s a C function whose name is short for “print formatted into a string”, and it’s a way of printing a text message into a block of memory so you can use it later.
For example, this code…
char buf[64];             /* Reserve a 64-byte block of memory                   */
char *str = "42";         /* Actually occupies 3 bytes, thus: '4' '2' NUL        */
                          /* Zero terminator added automatically: 0x34 0x32 0x00 */
sprintf(buf, "The answer is %s", str);
… It would leave the 16-character text string “The answer is 42” in the memory block, followed by a zero-byte terminator (ASCII NUL), followed by 47 untouched bytes at the end of the 64-byte buffer.
However, sprintf() is always dangerous and should never be used, because it does not check whether there is enough space in the destination memory block to fit the printed data.
In the example above, if the string stored in the variable str is more than 50 bytes long, including the zero byte at the end, it will not fit together with the additional text “The answer is ”.
Worse, if the text data in str does not have a zero byte at the end, which is how C indicates where a string stops, you could accidentally copy thousands or even millions of bytes beyond str in memory until you happen to hit a zero byte, by which point the kernel has almost certainly crashed.
Modern code should not use C functions that can make memory copies of unlimited length. Use snprintf(), which formats and prints at most N bytes into a string, and its relatives instead.
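As a standalone illustration (a userland sketch, not the kernel code in question), here is the same formatting done with snprintf(), which never writes more than the stated buffer size and whose return value tells you whether the output had to be truncated:

#include <stdio.h>

int main(void)
{
    char buf[64];
    const char *str = "42";

    /* snprintf() writes at most sizeof(buf) bytes, including the final
     * NUL, so an oversized str is truncated instead of overflowing buf. */
    int needed = snprintf(buf, sizeof(buf), "The answer is %s", str);

    if (needed >= (int)sizeof(buf))
        printf("output truncated (full text needs %d characters)\n", needed);
    else
        printf("%s\n", buf);

    return 0;
}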
Don’t give out your address
The second bug above arose from the use of memory addresses as unique identifiers. That sounds like a good idea: if you need to refer to a data object in your kernel code with an identification number that won’t collide with any other object, you could use the numbers 1, 2, 3 and so on, adding one each time, and the problem is solved.
But if you want a unique identifier that doesn’t clash with any other numbered object in the kernel, you might think, “Why not use the memory address where my object is stored? It’s obviously unique, since two objects can’t occupy the same place in kernel RAM at the same time (not unless memory use is already in crisis).”
The problem is that if your object ID is ever visible outside the kernel, for example so that untrusted programs in userland can refer to it, you have just given away information about the internal layout of kernel memory, and that is not supposed to happen.
Modern kernels use what’s called KASLR, short for kernel address space layout randomization, specifically to prevent unprivileged users from figuring out the exact internal layout of the kernel. If you’ve ever tried picking locks (it’s a popular and surprisingly relaxing pastime among hackers and cybersecurity researchers; you can even buy transparent locks for educational fun), you’ll know that it’s much easier if you already know how the mechanism inside the lock is laid out. Similarly, knowing exactly what is loaded where inside the kernel almost always makes other bugs, such as buffer overflows, much easier to exploit.
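As a purely illustrative userland sketch (unrelated to the iSCSI code), the snippet below shows why handing out a pointer-derived “ID” is a giveaway: the number reveals exactly where the object lives in memory, whereas a plain counter reveals nothing about layout.

#include <stdio.h>
#include <stdlib.h>

struct session { int user; };

int main(void)
{
    struct session *s = malloc(sizeof *s);
    if (s == NULL)
        return 1;

    /* Using the raw address as a "unique ID" works, but anyone who sees
     * this ID learns where the object sits in memory, which weakens ASLR. */
    unsigned long leaky_id = (unsigned long)s;
    printf("pointer-derived id: %#lx\n", leaky_id);

    /* A simple incrementing counter is just as unique and gives nothing away. */
    static unsigned long next_id = 1;
    printf("counter id        : %lu\n", next_id++);

    free(s);
    return 0;
}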
What to do?
- Update your kernel. If you rely on your distribution maker for new kernels, make sure you get the latest updates. See above for the version numbers in which these bugs were fixed.
- Do not use C programming functions that are known to be problematic. Avoid any memory-access function that does not keep track of the maximum amount of data it may use. Find out the officially documented “safe string functions” for your chosen operating system or programming environment and use them whenever you can. This gives you a better chance of avoiding memory overflows.
- Do not use memory addresses as “unique” handles or identifiers. If you can’t use a counter that simply increases by 1 each time, use a random number of at least 128 bits instead. These are sometimes referred to as UUIDs, short for universally unique identifiers. Use a high-quality random source, such as /dev/urandom on Linux and macOS, or BCryptGenRandom() on Windows (a minimal sketch appears after this list).
- Consider blocking kernel module loading to avoid surprises. If you set the Linux system variable kernel.modules_disable=1 once your server has booted and is working properly, no more modules can be loaded, whether by accident or by design, and this setting cannot be turned off again without rebooting. Use sysctl -w kernel.modules_disable=1 or echo 1 > /proc/sys/kernel/modules_disable.
- Consider identifying and keeping only the kernel modules you need. You can build a static kernel with only the required modules compiled in, or create a kernel package for your servers with all the unnecessary modules removed. With a static kernel, you can disable module loading completely if you want.
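Following on from the “random 128-bit identifier” advice above, here is a minimal sketch of the idea on Linux; the helper name make_random_id() is purely illustrative, not a standard API:

#include <stdio.h>

/* Illustrative helper (not a standard API): fill 'id' with 16 random
 * bytes (128 bits) read from /dev/urandom.  Returns 0 on success.    */
static int make_random_id(unsigned char id[16])
{
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL)
        return -1;

    size_t got = fread(id, 1, 16, f);
    fclose(f);
    return (got == 16) ? 0 : -1;
}

int main(void)
{
    unsigned char id[16];

    if (make_random_id(id) != 0) {
        fprintf(stderr, "could not read /dev/urandom\n");
        return 1;
    }

    /* Print the identifier as 32 hex digits, UUID-style but unformatted. */
    for (size_t i = 0; i < 16; i++)
        printf("%02x", id[i]);
    putchar('\n');

    return 0;
}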
Source: Link
The U.S. food supply is not cyber-secure or safe from threats to control systems
The U.S. Food and Drug Administration (FDA) issued the final rule under the Food Safety Modernization Act (FSMA) in November 2015 and, according to the FDA’s website, it remains in effect as of 10/21/2020. The rule aims to prevent intentional adulteration arising from acts intended to cause wide-scale harm to public health, including acts of terrorism targeting the food supply.