One could argue that August 2010 was The Month of 0-dayz with the release of the Windows Application DLL Loading Hijacking Thing.
I am sure you know about it by now. Originally discovered by Acros and published as Remote Binary Planting in Apple iTunes for Windows [txt], it was soon after re-researched by HD Moore to uncover the full extent of the problem. The issue lies in the order in which Windows searches the file system when an application loads any DLL not listed in the KnownDLLs registry key. Bojan Zdrnja wrote an in-depth and suitably technical ISC Diary post explaining the bug in detail.
Check also the Rapid7 blog post and the Metasploit blog post by HD Moore for details and updates. Reportedly hundreds of Windows applications are affected. The 0-day of all time, maybe? In any case it is as huge as the media coverage suggests.
The 0-day stream definitely continues in September, as the Abyssec Security Team is running the Month of Abyssec Undisclosed Bugs in collaboration with exploit-db.com.
The first four are out, and they are very detailed advisories. Adobe Reader and Flash Player (advisory with an unstable exploit), QuickTime (advisory with a Proof-of-Concept), CPanel (advisory) and Rainbowportal (XSS and SQL injection) are affected so far. If Abyssec keeps up the pace set during the first two days, there will be about 56 more 0-dayz out before the month is over.
It is actually already the second QuickTime 0-day this week. A very curious one was published on Monday; see the Bugtraq post by Reversemode about it. Gotta love the naming of the properties here, btw.
It is full disclosure with Proof-of-Concept code, apparently quite trivial to exploit, works via Internet Explorer (in case any QuickTime components are installed on the system) and allows code execution in the context of the web browser or QuickTime Player with user privileges.
Enough bugs around. A good time to give QuickTime a rest at least. On Windows workstations you may want to set the kill bit for the following class IDs in order to prevent drive-by exploitation via Internet Explorer.
{02BF25D5-8C17-4B23-BC80-D3488ABDDC6B}
{4063BE15-3B08-470D-A0D5-B37161CFFD69}
Or, even better, temporarily block all the different file types (the attack vectors) associated with the vulnerable QuickTime Player by _deleting_ ALL registry keys matching
HKEY_CLASSES_ROOT\QuickTime.*
Have them backed up in order to restore QuickTime functionality after the vulnerability has been fixed. There are multiple such keys.
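For reference, the kill bit mechanism boils down to a single registry value per class ID: a DWORD named Compatibility Flags set to 0x400 under Internet Explorer's ActiveX Compatibility key. A sketch of a .reg file covering the two class IDs above (apply at your own risk and back up first):

```
Windows Registry Editor Version 5.00

; Kill bit (Compatibility Flags = 0x400) for the QuickTime ActiveX controls
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{02BF25D5-8C17-4B23-BC80-D3488ABDDC6B}]
"Compatibility Flags"=dword:00000400

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{4063BE15-3B08-470D-A0D5-B37161CFFD69}]
"Compatibility Flags"=dword:00000400
```

The registry keys to be deleted can be backed up beforehand with the built-in reg export command, one matching key at a time.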
September 2, 2010
July 28, 2010
SCADA Hard-Coded Admin Credentials Win The Information Security Candy of the Summer Award
The winner of the InfoSec Candy of The Summer Award has been announced. The 2010 award goes to "The Hard-Coded Admin Credential Issue in Various SCADA Systems".
There has been a looot of talk about the supervisory control and data acquisition (SCADA) system security and compliance during the past few years, no? I am sure you have stumbled across it and possibly also wondered about the lack of concrete details and depth regarding the vulnerabilities in and attacks against the SCADA systems. You know...the "ok, I agree, we need SCADA compliance, but what does that really mean?" feeling many articles about the issue leave you with. What can we do to harden these obviously vulnerable systems?
Mr. James Arlen seems to feel it too, as he has invited the community at BlackHat USA 2010 to sit down, discuss openly and have a little "fireside chat" this afternoon (Las Vegas time) to define the baseline in SCADA security. I feel this really is needed.
We now have one concrete SCADA compliance issue at hand. Various SCADA systems apparently lack reliable database access control. These systems really have to be run in isolated environments accessible only via well protected and properly authenticating jump hosts.
The database admin credentials for Simatic WinCC SCADA systems are hard-coded, should not be changed according to the vendor, and are by now very publicly known and maliciously used in the wild. Curiously, the password was initially leaked to the public in April 2008, but by now even your grandmother knows the credentials. Yep. She is just not telling them to you.
A very serious issue indeed, but still maybe not The Candy without checking out how it was re-disclosed publicly this summer. Quite ironically, we were linked to the WinCC default configuration flaw by a very curious Windows LNK file 0-day detected in the wild this summer by the Belarusian security company VirusBlokAda.
See their initial advisory here.
They discovered a worm now commonly known as Stuxnet. It exploited a previously unknown vulnerability in the Windows Shell affecting all Windows versions. A malware analyst named Frank Boldewin soon released the results of his initial decrypting/unpacking of the code and showed that in a later attack stage the Stuxnet sample with MD5 hash 016169ebebf1cec2aad6c7f0d0ee9026 utilized the hard-coded database admin credentials of the Simatic WinCC SCADA systems to run SQL queries against the databases. His "original advisory" is still cached by Google here.
In my opinion it is "the next Operation Aurora": the next incident in the long lineage of targeted attacks, or Advanced Persistent Threats (APT) as the latest term goes. Siemens initially reported that only one customer had been attacked, and later studies show that a notable majority of the USB key distributed malware detections are in the Middle East. Now that the automated attack is out in the wild in the form of a worm, there will undoubtedly be more attacks to follow.
For further reading, enjoy the KrebsOnSecurity.com article about the discovery of the malware and the Windows Shell vulnerability. The vulnerability remains unpatched, but Microsoft has released Security Advisory KB2286198 to address the issue, and CVE-2010-2568 has been assigned to it.
There is also an interesting Microsoft Threat Research & Response blog post about Stuxnet, and Wired.com has a good Threat Level blog post about the Simatic WinCC hard-coded credentials issue.
July 8, 2010
Security Distro Roundup
There have been some interesting computer security related ISO images released recently.
As already mentioned in an earlier blog post, the Metasploit project has made the Metasploitable server image available. It is a vulnerable server image based on Ubuntu Server 8.04 that comes with various vulnerable applications and configuration flaws by default. Check the Metasploit blog post for further information and download the image [torrent] for hours of legal exploitation fun.
Guy Bruneau from SANS ISC recently released ISO images (32-bit and 64-bit) for a properly preconfigured DNS Sinkhole server. Check the ISC Diary blog post by the author himself for the full description, but in essence DNS Sinkhole is a standalone server image based on the Slackware Linux operating system. You have a choice of using either the BIND DNS server or the PowerDNS server.
The magic in DNS sinkholing lies in the blacklists. Bruneau's DNS Sinkhole server parses its blacklist from three different sources (namely the Malware Domain Blocklist, ZeuS Tracker and Malware Threat Center SRI) and then replies to all DNS queries involving any of the blacklisted malware domains with a non-routed or simply non-existent internal network address, thus effectively disabling any communication to these domains.
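As an illustration of the underlying idea (not the actual configuration shipped on the image), a minimal BIND sinkhole boils down to declaring each blacklisted domain as a locally authoritative zone and answering everything with a non-routed address. The domain name and the address below are placeholders:

```
// named.conf fragment: claim authority over a blacklisted domain
zone "evil-malware-domain.example" {
    type master;
    file "/etc/bind/db.sinkhole";   // one shared zone file for all sinkholed domains
};

// db.sinkhole: answer every name in the zone with a non-routed internal address
// $TTL 600
// @   IN SOA  localhost. root.localhost. ( 1 3600 600 86400 600 )
// @   IN NS   localhost.
// @   IN A    10.255.255.1
// *   IN A    10.255.255.1
```

Any client resolving the sinkholed domain through this server gets 10.255.255.1 back and never reaches the real malware infrastructure, while the sinkhole address itself makes infected clients easy to spot in the logs.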
IPFire is a recently released Linux firewall / internet gateway server image. It is a stateful inspection firewall based on the Linux netfilter framework, complete with filters for bad packets and full IDS integration via the Guardian IPS add-on. IPFire can also act as a VPN endpoint for secure remote access, and it can be deployed as a proxy for FTP, HTTP and DNS traffic or as a DHCP server for local clients.
It kind of reminds me of Devil-Linux, an older Linux firewall distribution which by now has also expanded into a multiserver distribution that can practically be used to implement any (or all?) common DMZ or LAN servers securely.
Via another recent SANS ISC blog post comes a very interesting paper [PDF] about creating a Live CD specifically for incident response purposes. The paper was written by Bert Hayes for his SANS Gold certification process and offers very detailed instructions on how to compile a Knoppix based Live CD to be used when remotely investigating possibly compromised systems. The paper details how to set up secure connectivity to a remote administration point (called the Mothership), but it is perhaps worth noting that the actual Live CD should be run locally on the system being investigated. There is no ISO image available for the Live CD yet, and it seems the project is still a work in progress as new tools are planned to be integrated into the compilation.
In other custom distribution related news we have the recent release of the Ubuntu Customization Kit (UCK), a tool for customizing any of the available Ubuntu distributions. It allows you to add and remove packages, tweak various configuration items and boot behaviour, and then create Live CD ISO images of these customized systems.
The IDS Incident Handling 101
We came up with a decent short description of Intrusion Detection System (IDS) incident handling during a recent lunch discussion.
Handling IDS incidents is not very different from assembling a puzzle. You first gather and identify the individual pieces and then you put them together correctly. While traditional puzzles, once put together, present you with a complete picture, IDS incidents present you with a possible attack against one of your IP addresses.
The alerts in the current generation of IDS systems come in five pieces. There are always...
the source IP address
the source port
the destination IP address
the destination port
...and the IDS signature, which was triggered by a pattern detected in the network traffic by the IDS sensors.
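The five pieces map naturally onto a small record. A hypothetical sketch in Python, purely to make the structure concrete (the field names and sample values are my own, not from any specific IDS product):

```python
from dataclasses import dataclass

@dataclass
class IDSAlert:
    """One IDS alert: the five pieces of the puzzle."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    signature: str  # the rule or pattern that was triggered

# A made-up example alert using documentation address ranges
alert = IDSAlert("198.51.100.23", 49201, "192.0.2.10", 80,
                 "WEB-MISC generic SQL injection attempt")
print(alert.dst_port)  # prints 80
```

Every analysis step below examines one or more of these five fields before the final verdict.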
As with traditional puzzles, you have to examine each given piece thoroughly before attempting to put them together. Some preliminary questions regarding the involved systems concern the installed operating systems and applications. A large part of IDS signatures detect exploits against specific vulnerabilities in software, so the patch level of the involved systems, and of the destination especially, is one of the initial things to check.
Ideally the incident handler is able to securely connect to the monitored environment and verify all of the above from the systems themselves, but in case this is not possible, ensure the incident response team has a system database of some type available to them. ALL monitored IP addresses really should be listed. I know it may be a daunting task in large environments to identify the various switch port addresses and the virtual addresses used for load balancing and clustering et cetera, but I guarantee you that simply assessing and properly documenting your computing resources is a big step in hardening your environment against misuse.
Among many other things, a proper system database enables your incident response team to react fast and properly to any possibly true positive attacks. The database should include at least the following properties for each monitored IP address:
-DESCRIPTION of the system usage
(as in web server, internet gateway router, LAN server, workstation etc.)
-OPERATING SYSTEM and the patching level
(including the network device firmware etc.)
-APPLICATIONS installed on the systems
(the exact version numbers)
-CONTACT DETAILS of the system administrators
(for running commands on the systems etc.)
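The four properties above can be sketched as a minimal lookup table keyed by monitored IP address. This is a hypothetical illustration (the structure, names and sample data are mine, not a real inventory product):

```python
# Hypothetical minimal system database keyed by monitored IP address
system_db = {
    "192.0.2.10": {
        "description": "public web server",
        "os": "Windows Server 2008, fully patched",
        "applications": ["IIS 7.0", "ASP.NET 3.5 SP1"],
        "contact": "webadmin@example.com",
    },
}

def lookup(ip):
    """Return the inventory record for an alerted IP, or None if unknown."""
    return system_db.get(ip)

print(lookup("192.0.2.10")["description"])  # prints public web server
```

Even something this simple lets the handler answer the first questions (what is this system, is it patched, who do I call) in seconds instead of hours.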
Do not overlook the system information carried by the IPv4 addresses. One of the steps to full system identification is identifying the IP address ranges involved. In case the monitored environment includes publicly available servers, pay attention to the source addresses. Any addresses in the private IPv4 address ranges are most likely internal systems and as such possibly an indication of a more serious compromise.
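The private range check is trivial to automate. A quick sketch using Python's standard library (the function name is my own; note that Python's ipaddress module also counts loopback, link-local and documentation ranges as non-global):

```python
import ipaddress

def is_internal(addr: str) -> bool:
    """True for source addresses in non-public ranges (RFC 1918 and friends)."""
    return ipaddress.ip_address(addr).is_private

print(is_internal("10.20.30.40"))  # prints True
print(is_internal("8.8.8.8"))      # prints False
```

A private source address hitting a public server's alert queue deserves an immediate second look.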
The next piece to look at is the TCP/UDP ports. Do multiple searches to identify all possible services known to use the involved ports on the given platform. The IANA Port Numbers list is the official resource for figuring out the legitimate use of the Well Known Ports and the Registered Ports (namely the ports between 0 and 49151).
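The IANA ranges themselves are simple enough to encode. A small helper, as a sketch (the function name is my own):

```python
def port_class(port: int) -> str:
    """Classify a TCP/UDP port into the three IANA ranges."""
    if 0 <= port <= 1023:
        return "well-known"
    if 1024 <= port <= 49151:
        return "registered"
    if 49152 <= port <= 65535:
        return "dynamic/private"
    raise ValueError("not a valid TCP/UDP port")

print(port_class(80))     # prints well-known
print(port_class(49201))  # prints dynamic/private
```

A high source port with a well-known destination port is ordinary client traffic; a well-known source port calling into the dynamic range often deserves a closer look.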
The SANS Internet Storm Center (ISC) provides a quality port search in the upper right corner of their diary page, often also listing the malware known to use the given port. DefaultPorts.com is another good database to check, and don't forget the frequently updated list of known and detected port usage hosted at nmap.org.
The Trojan List hosted at simovits.com can also be sorted by port number. I am not sure if the list is still actively maintained, but it is a worthy information source nevertheless.
Here again access to the monitored environment comes in very handy, as the incident handler can quickly copy the running processes and the open network connections to the case log for later analysis. You will notice in time that these two listings from the destination are constantly needed.
Take the network design into consideration also when doing preliminary analysis on the ports. It is often an easy way to identify false positives and to further implement filters on the sensors. Ensure you know which firewalls and routers examine the monitored traffic AFTER the sensor examines it. Depending on the implementation, the IDS sensor may see possible attacks before the traffic gets filtered by a firewall or router. There may even be high volumes of attacks against the environment, but if the destination port of the attacks is closed by the firewall, no permanent harm can be caused.
After some innovative system identification and port analysis exercises, the incident handler is left with only the last piece: the IDS signature. This is the piece that tells you what actually was detected. Note that the signature does not tell you what the attack was, as the often used description goes. It tells you what was detected in the traffic between the two end points.
Study the signature that got triggered. Have the relevant IDS signature description library available to you at all times when monitoring. It is THE starting point. I believe all IDS vendors now provide online access to such libraries. Study the description, check all related advisories and references, and do some background study if necessary to fully appreciate what the signature detects. With IDS alerts, this piece really completes the puzzle, and this is most often where the next steps are revealed. They may be as simple as verifying that a patch is installed or that the software has been updated on the destination, but whatever the next steps may be, they are to be found by analysing the signature as far as possible.
In my opinion the incident handler should have access to the attack packets, at least in order to copy them to the case log for later analysis. The Cisco IDS, for example, often also shows you the exact character pattern detected by the signature in the same window with the attack packet data.
The attack packet data is priceless when investigating alerts against unpatched vulnerabilities, such as the ones in custom web applications, where there is not actually even any specific description of the vulnerability available apart from the generic XSS or SQL injection papers.
In case you use the much loved Snort IDS, make sure you can read the Snort rule language effortlessly, as Snort uses an open signature format which gives you a detailed view into the signature's construction. It uses a specific syntax, but there are not too many keywords to learn.
Emerging Threats has a good wiki entry titled Snort Signatures 101 to get you over the fundamentals.
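To give a flavour of the format, here is a hypothetical rule I made up for illustration (the CGI path, the thresholds and the SID are invented, not from any real ruleset):

```
# Hypothetical example rule, annotated:
# header: action, protocol, source addr/port, direction, dest addr/port
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 \
    (msg:"WEB-APP example.cgi parameter overflow attempt"; \
    flow:to_server,established; \
    content:"/cgi-bin/example.cgi?"; nocase; \
    content:"param="; distance:0; \
    pcre:"/param=[^&]{200,}/"; \
    classtype:web-application-attack; sid:1000001; rev:1;)
```

The header tells you which of the four address/port puzzle pieces the rule cares about, and the options in parentheses spell out the exact pattern that fired, which is precisely what you need when deciding whether the destination was actually vulnerable.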
In my opinion the IDS incident handling process is completed by identifying and disregarding the obvious false positive alerts (and later maybe permanently filtering the recurring false positives from the IDS monitoring). After this point the incident handling is over and the incident response begins, although the two often should overlap and occur simultaneously.
I have been the dude introducing various junior agents to the IDS incident handling. Kind of like giving them the first idea of the science here and showing them their first real alerts. I find it very challenging to give them any type of short description or introductory note to start with. The puzzle comparison is the current favourite here by far.
July 5, 2010
Environment Hardening As a Training Course
I have been giving some very rewarding trainings in the past few weeks, and I want to publicly thank all the attendees. I truly enjoyed both experiences. I was blessed with very knowledgeable and motivated participants, which turned the classes into enjoyable peer-to-peer experiences instead of hours of monologue in front of clearly absent-minded people.
I led two very different courses. The first was aimed at the internal IT staff of a company that is not exactly in the computer business, but whose business is very dependent on computers and which is very concerned about its IT administrators' performance. I believe we had a full hand of application, system and network administrators participating.
I did my Security by OSI Layers course for them, which goes through the entire Open Systems Interconnection (OSI) model, covering the threats past and present affecting each layer. I actually find the OSI model very convenient for this type of generic computer security training. It gives the material structure.
As usual, we spent a good part of the course reviewing application layer security, but in the end there is a lot to talk about on each layer. The layered approach also allows me to inject some history of hacking into the course and thus illustrate the progress of the game and some of the original reasoning behind concepts like Public Key Infrastructure (PKI) and other computer security mechanisms nowadays often taken for granted.
We obviously also covered various mitigation techniques and defensive measures against the reviewed threats. With this type of course it is relatively easy to provide concrete additional value to the customer, as the example material can all be drawn from the customer's real life environment, as we did now. In this particular case all our course exercises were actually related to assessing and hardening their own environment.
The other course I gave was a bit different and somewhat more theoretical. It was for a group of junior members of a Computer Security Incident Response Team (CSIRT): people who have been working from zero to a few years responding to corporate computer security incidents and IDS alerts. The course was built around the CompTIA Security+ certification, as passing the exam was one of the internal requirements for a senior seat in the team.
I personally find the Security+ exam to serve perfectly for this type of junior agent graduation. These people often do not possess the full five years of work experience needed for the (ISC)2 CISSP exam, so the CompTIA alternative serves them well. While consisting of only six knowledge domains, the Security+ manages in my opinion to test the applicant's knowledge of the fundamentals rather well.
On both courses we also had a little isolated lab environment for some hands-on exercises. I find this essential. Many of the central concepts in computer security are challenging to teach by word only. Check the original buffer overflow article by Aleph One for proof. While it is extremely important to understand the science behind security vulnerabilities and computer attacks, I find it equally essential to have hands-on experience launching such attacks, witnessing them happen and examining the results of a successful attack. Especially for the system and network administrators who may not be directly involved with computer security research, but are very much affected by it.
The central point in our little labs was the Metasploitable server image [torrent] recently released by the good man HD Moore and his associates. It is an extremely vulnerable Ubuntu 8.04 server that by default comes with various outdated versions of applications and services, weak account credentials and multiple configuration flaws. A happy trainer toy for demonstrating what this thing called computer compromise is. Big up Metasploit crew once again.
Check the Metasploit blog post linked above to get started with the image, but I encourage you to explore further as well. There is much, much more insecurity to be found beyond the few attacks outlined in the post. It is good for brute-force exercises too. We were not actually able to break into the root account during the course, but we did get root in later attack stages through some privilege escalation exploitation. Lots of fun included.
We attacked the Metasploitable server with various BackTrack 4 systems. Another reason to give thanks and praise, this time to the Offensive Security crew. I am sure you are aware of the BackTrack distro by now, so I will only testify that it is very suitable for training lab use as well.
There have been some major changes in BackTrack version 4, by the way. BackTrack is now based on Debian. I find it only nice to have the Advanced Packaging Tool (APT) handling the update and install procedures, among some other things now included as well. BackTrack 4 also comes to the network very quietly. The network interfaces seem to be disabled by default and not even the DHCP client runs automatically. You therefore have to run ifup eth0 (or whatever your connected interface may be) to enable the interface, and run dhclient to get the DHCP configuration manually from the server, which in our case was the Metasploitable server.
So... I am available for trainings : ) Feel free to contact me by email with any queries regarding the courses I am able to lead. I am also willing to create custom courses aimed at very specific audiences, such as web application developers or IDS incident handlers, if needed.
June 3, 2010
Poetry, Broken Arrows and Much More In The SOURCE Boston 2010
The presentations and videos from the recent SOURCE Conference held in Boston are online. SOURCE seems to be finding the difficult balancing point between the technical and the business. I have not had time to go through many of the presentations yet, but there is plenty to check, despite the somewhat noticeable (and by now already common) replaying of some older material in the schedule as well.
For those running Snort and/or having the ability to implement custom IDS/IPS signatures (and having the attack packets available for analysis), I recommend going through at least Windows File Pseudonyms: Pwnage and Poetry [.pptx] by Dan Crowley and Reverse Engineering Broken Arrows [.pdf] by Adam Meyers. Both offer top quality concrete advice for the day-to-day of incident response.
I found How to Detect Penetration Testers [.pptx] by Ron Gula to be a spot-on and "funny" wake-up call to (our) incident response industry, and Drinking from the Firehose: Ten Years of Vulnerabilities through the CVE Lens [.pptx] by Steve Christey to be a definitely due homage to the CVE project and a very educational listen. Bullseye on Your Back - Life on the Adobe Product Incident Response Team [.pptx] by Wendy Poland and David Lenoe is also a must for anybody working in this industry IMO.
Dan Kaminsky published interesting looking stuff with The Fine Art of Hari Kari (.JS), And Other Approaches For The Strange Reality Of Web Defense [.pptx], but I haven't found time to go through it yet. One to check also is Managed Code Rootkits – Hooking into Runtime Environments [.ppt] by Erez Metula, and another is Linux Kernel Exploitation - Earning Its Pwnie a Vuln at a Time [.pdf] by Jon Oberheide. Anonymity, Privacy, and Circumvention with Tor in the Real World [.pdf] by Jake Appelbaum seems interesting as well... check for ya self. Too many to mention, get it right from the SOURCE.
PS. For the Please SOURCE Publish These list I have Neurosurgery With Meterpreter by Colin Ames, Rooting Out the Bad Actors by Alex Lanstein and Cracking the Foundation: Attacking WCF Web Services by Brian Holyfield, among a few other things.
May 26, 2010
System Scanning with SHODAN
Whether named after the first black belt degree in Japanese martial arts or after the evil AI in the System Shock games, SHODAN The Computer Search Engine is a very interesting experiment indeed. In short, SHODAN provides a web-based interface for data mining various details about computers and services on the public network. Think Google for server banners.
While various NMAP-like scanners with a web interface are already available in the internetz, SHODAN takes the game to the next level. According to the authors, SHODAN runs a custom-built distributed port scanner currently querying publicly available HTTP, FTP, SSH and Telnet services (more ports will possibly be added later) and indexing the banner data returned by the servers. SHODAN also provides various clever filters for sorting the search results, including a world map showing the geolocations, and standard CIDR notation can be used to focus the searches on desired IP address ranges only.
Simple, but ah so devastating. Have you ever wondered whether there are still any pre-1993 versions of Cisco IOS running in the public networks? Or any open anonymous FTP servers? Surely there are no Microsoft IIS 4.0 web servers in production anymore? If you see only three pages worth of results, it is probably because you are not logged in to SHODAN.
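Banner grabbing itself is nothing magical, SHODAN just does it at Internet scale and indexes the results. A minimal sketch of the idea in Python, run here against a throwaway local stand-in service so it is self-contained (the banner string and port handling are my own illustration, not anything from SHODAN):

```python
import socket
import threading

def serve_banner(banner=b"SSH-2.0-OpenSSH_5.1\r\n"):
    """Tiny stand-in service that greets one client with a banner, like sshd or an FTP daemon would."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))      # let the OS pick a free port
    srv.listen(1)
    def _run():
        conn, _ = srv.accept()
        conn.sendall(banner)        # volunteer the banner on connect
        conn.close()
    threading.Thread(target=_run, daemon=True).start()
    return srv.getsockname()[1]     # the port actually bound

def grab_banner(host, port, timeout=3.0):
    """Connect, read whatever the service volunteers first, and return it as text."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

port = serve_banner()
print(grab_banner("127.0.0.1", port))  # SSH-2.0-OpenSSH_5.1
```

Point grab_banner at a real FTP or SSH port you own and you get the same kind of version string SHODAN indexes by the million.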
Labels:
banner,
data mining,
fingerprinting,
FTP,
HTTP,
port scan,
reconnaissance,
SHODAN,
SSH,
Telnet
May 18, 2010
KHOBE Cause Temblor
Matousec did cause some temblor in the infosec community with the KHOBE attack paper published last week. See my earlier post Windows TOCTTOU Attacks with KHOBE for the initial material.
KHOBE has gotten a lot of publicity and has already generated active response and commentary from the anti-virus industry. The temblor reached the internet s0c1ety also due to some "juicy" details published in a GData blog post about their recent communication with Matousec while trying to get more information and evaluate the effect on their software correctly.
It is the art of vulnerability disclosure, no? The difficulty of handling all the aspects of disclosing delicate information. Considering that Matousec apparently did privately disclose their full research to their "clients and other software vendors" already in August 2008, and (especially) considering that the KHOBE code is the result of some years of research and development, I personally find it only correct that Matousec now sell the full details and offer audit services to paying customers only. On the other hand, helping out an affected vendor a bit beyond the public paper is not too much to ask either IMO, so let's call it 1-1.
The technical talk about the attack has been somewhat limited due to all this. Anti-virus vendors want to see code before assessing the threat further and have for now concentrated on responding only to the facts detailed in the Matousec paper. See Paul Ducklin's blog post from Sophos for a thorough write-up on the issue, which in my opinion also does a good job of summarizing the initial vendor stance on KHOBE across the field.
While I completely agree with the point about layered defense providing security beyond the system call and parameter checks (and find the point about unknown malware bypassing the protection with or without KHOBE logical), I think the discussion is far from over yet. Let's assume for the rest of the post that we are running a multicore/multiprocessor system: a system where the threads are not competing for the clock cycles of a single processor, but have multiple parallel clock cycles to choose from and are able to actually run parallel in time.
I suspect various security software on this type of Windows system to be highly vulnerable to the attack. I am limited to the information available in the KHOBE paper about Matousec's findings, but studying the earlier papers published about TOCTTOU attacks on Windows leaves me feeling that possibly every validation check done on the Windows platform is vulnerable.
The problem is not really the SSDT hooking dominating the public discussion at the moment. As far as I can see, the root cause of the vulnerability lies somewhere in the way data is validated on Windows: in the way it is referenced in the validation process, and especially in the alarming detail that in some cases the memory areas being validated can actually be manipulated while they are being validated.
The KHOBE code seems to focus on exploiting the vulnerability in software which uses SSDT hooks to intercept system calls and validate the parameters, but I doubt the exploitation is limited to checks initiated by SSDT hooks. The real problem is the accessibility of the memory areas under validation, not the way the checks are initiated. Any type of validation check requires multiple clock cycles, which may give a parallel thread running on parallel clock cycles plenty of time to manipulate the values in memory while they are under examination, and possibly cause unvalidated malicious parameters to actually be passed to the processor for execution.
In my opinion the anti-virus vendors rushed a bit in declaring that any known malware would be detected regardless of KHOBE, due to the various alterations monitored in the system. While obviously true for the big zoos of known malicious code, it does not exactly address the issue sufficiently in enterprise environments.
Imagine an installer exploiting TOCTTOU vulnerabilities, used in a staged attack as the initial payload for bypassing the security checks when installing further compromise tools, including a malicious communication component again utilizing the technique to bypass the firewall for stealth communications. The race condition exists as long as the user memory objects can be manipulated while the values are being examined.
It is not the end of the world by any means, but definitely something to keep an eye on. Quite possibly a real threat (at least until more details about KHOBE are published), and in any case a serious vulnerability which apparently exists and which probably requires changes both to the Windows kernel and to the security software functionality in order to get solved completely.
One more reason for enterprises to ensure they have adequate incident response capabilities in addition to the preventive security mechanisms. Obviously not all hope should be placed on the anti-virus vendor and endpoint protection. Preventive security measures will be circumvented repeatedly and intrusions do happen. Just as trusted systems need hardening, they need constant intrusion and integrity monitoring throughout their lifetime.
Labels:
anti-virus,
intrusion detection,
KHOBE,
Matousec,
race condition,
SSDT,
TOCTTOU,
vulnerabilities,
Windows
May 13, 2010
Disabling Broadcast Domains With PVLAN
Yep. I am a fan boy. I have been following the Internet Storm Center (ISC) Diary almost daily for years now. I have learned a lot and have been inspired by the diary to look deeper into various things over the years. Big up them incident handlers @ ISC.
I am also a firm believer that the broadcast domain concept in Ethernet and Token Ring design (and in whatever other network technology that implements it) is a security vulnerability.
Gaining a man-in-the-middle (MITM) position in an Ethernet broadcast domain is a trivial task with Ettercap (and similar tools), and MITM is about as close as you can get to complete system compromise in the networks. MITM in an Ethernet broadcast domain allows complete compromise of all network traffic to/from a victim system, so any efforts to mitigate and complicate MITM attacks are fully endorsed here.
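To see why the broadcast domain itself is the vulnerability, consider how little it takes to poison a victim's ARP cache. The sketch below (my own illustration with made-up addresses, not Ettercap code, and the frame is only built, never sent) constructs the raw Ethernet frame for the forged "is-at" ARP reply that ARP poisoning tools fire off continuously:

```python
import socket
import struct

def mac_bytes(mac):
    return bytes.fromhex(mac.replace(":", ""))

def forge_arp_reply(attacker_mac, spoofed_ip, victim_mac, victim_ip):
    """Raw Ethernet frame carrying an ARP 'is-at' reply that binds spoofed_ip
    (e.g. the gateway) to attacker_mac in the victim's ARP cache."""
    # Ethernet header: destination, source, EtherType 0x0806 (ARP)
    eth = mac_bytes(victim_mac) + mac_bytes(attacker_mac) + struct.pack("!H", 0x0806)
    # ARP header: hw type 1 (Ethernet), proto 0x0800 (IPv4), lengths 6/4, opcode 2 (reply)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    arp += mac_bytes(attacker_mac) + socket.inet_aton(spoofed_ip)  # "sender": the lie
    arp += mac_bytes(victim_mac) + socket.inet_aton(victim_ip)     # target: the victim
    return eth + arp

frame = forge_arp_reply("de:ad:be:ef:00:01", "192.168.1.1",
                        "aa:bb:cc:dd:ee:ff", "192.168.1.10")
print(len(frame))  # 42 bytes: 14-byte Ethernet header + 28-byte ARP payload
```

Nothing in the Ethernet broadcast domain authenticates that "sender" field, which is the whole problem.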
Rob Van Den Brink pointed out an effective technique to disable the Ethernet broadcast domains in his ISC post yesterday.
Private Virtual Local Area Network (PVLAN) is a commonly implemented feature in switches. It isolates the access ports by blocking all traffic from one port to another unless it is specifically addressed by the source to another system in the same PVLAN (using the MAC destination in the Ethernet frame). Uplink is the term used in PVLAN talk for the mighty port forwarding traffic to/from other networks. Any PVLAN port/host can send traffic ONLY to the uplink port or to another specific port/host in the same PVLAN.
The feature seems to be supported by both of the big players, Cisco and Juniper, but apparently Cisco does not support PVLANs on the 1xxx or 2xxx series. You have to go all the way up to the Cisco Catalyst 3560 models to have the technology supported. As far as I can see, all Juniper EX switches support PVLANs.
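For reference, configuring an isolated PVLAN on a supporting Catalyst looks roughly like the sketch below. This is written from memory against the Catalyst 3560 family, so treat the exact commands, VLAN numbers and interface names as assumptions and verify them against the Cisco implementation guides mentioned at the end of this post:

```
! Sketch only: isolated secondary VLAN 101 carried inside primary VLAN 100.
! PVLAN configuration typically requires VTP transparent mode first.
vtp mode transparent
!
vlan 101
 private-vlan isolated
vlan 100
 private-vlan primary
 private-vlan association 101
!
interface GigabitEthernet0/1
 description uplink towards the router/firewall (the promiscuous port)
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101
!
interface GigabitEthernet0/2
 description isolated host port
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
```

With this in place a host on Gi0/2 can talk to the uplink on Gi0/1 but not to any other isolated host port, which is exactly the broadcast domain lockdown discussed above.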
Ensure that your datacenter or cloud provider and your network administrators have PVLAN correctly implemented (as suitable) on the switches, especially if you are operating in any Infrastructure-as-a-Service (IaaS) clouds shared by multiple clients. My testing possibilities are very limited (and virtual only), so I would really love to hear about any issues caused by a PVLAN implementation in whatever type of testing environment. Quick testing on a workstation access switch in a small Windows 2003 Active Directory domain did not reveal any immediate problems.
Note that attacks against PVLANs have been published, so the normal post-installation hardening routines are needed here as well. There was the @stake Security Assessment in 2002 on Cisco Catalyst switches mentioning Layer 2 proxy attacks against PVLANs, and later in 2005 Arhont Ltd. detailed a MAC spoofing attack allowing PVLAN jumping. Check the SecuriTeam article for the details and the Cisco response to Arhont Ltd.
Cisco has published some excellent papers on VLAN security and Layer 2 attacks. I recommend the VLAN Security White Paper and the SAFE Layer 2 Security In-Depth (PDF) for further reading. Check also the Securing Networks with Private VLANs and VLAN Access Control Lists for correct implementation guidance.
Labels:
broadcast domain,
Ethernet,
Ettercap,
Layer 2,
man-in-the-middle,
MITM,
Token Ring,
VLAN
May 11, 2010
Windows TOCTTOU Attacks with KHOBE
Matousec has been one of these unsung Internet heroes for some time already. I know them from actively testing Windows software firewalls and openly sharing the test results as well as the testing methods on their website. But what may have started in 2006 as a small security software testing group has by now truly matured into a cutting edge research crew.
They published a somewhat groundbreaking vulnerability advisory, 2010-05-05.01, on their website last week. The vulnerability and the attack are explained in the accompanying article entitled KHOBE – 8.0 Earthquake For Windows Desktop Security Software.
Matousec did not publicly release the KHOBE engine code with all the research implemented, but they have apparently created a tool that successfully bypasses the majority, if not almost all, of the kernel mode security checks performed by current Windows security software. Think malware checks by the anti-virus software or traffic content checks by the software firewall, all bypassed at the final frontier in kernel mode.
In short, the attack exploits a specific type of race condition known as a time-of-check-to-time-of-use (TOCTTOU) bug, which apparently occurs almost constantly when Windows security software is performing its various checks on application behavior. The attack was documented already in 1996 in the Checking for Race Conditions in File Accesses (PDF) paper by Matt Bishop and Michael Dilger, and the vulnerability was detailed further by Andrey Kolishak at the end of 2003 in his Bugtraq mailing list post entitled TOCTOU with NT System Service Hooking.
The attack happens at the thread level in the system, in the grey area between user mode and kernel mode where application threads call various operating system services in order to install and execute correctly. Modern security software hooks into this boundary as additional functionality, usually adding some type of mandatory access control for calls touching the Windows registry, running processes and files, among other things.
The security applications usually modify the System Service Descriptor Table (SSDT) in Windows replacing various entries in the table and thus causing the calls and the parameters passed to these services to be examined by the security application. Matousec presented calls to load system drivers and calls to terminate processes as examples, but there are multiple calls that get intercepted by similar methods.
The vulnerability is largely due to the fact that although the hooks may live in kernel mode, the actual memory buffers and the parameters of the calls reside in the user mode address space and are therefore accessible to the attacker. The attacker needs to run two threads, but can then manipulate the buffer or parameter content concurrently while it is being checked by the security thread: a legitimate value is passed to the security thread and validated as acceptable, while the concurrently swapped-in malicious value is what actually gets passed to and processed by the called system service.
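The check-then-use window described above can be sketched in a few lines of Python. This is a toy model only: the dictionary stands in for attacker-writable user mode memory, the function names are illustrative and not real Windows APIs, and the sleeps just widen the race window so the demo is reliable.

```python
import threading
import time

# "User mode memory" the attacker controls; the hook and the service
# both read it, at different points in time.
buffer = {"target": "benign.exe"}
result = {}

def security_hook_and_service():
    # 1. Time of check: the hook inspects the parameter.
    checked = buffer["target"]
    if checked != "benign.exe":
        result["outcome"] = "blocked"
        return
    time.sleep(0.05)                  # the race window: checks take time
    # 2. Time of use: the service reads the SAME memory again.
    result["outcome"] = "executed " + buffer["target"]

def attacker():
    time.sleep(0.02)                  # land inside the race window
    buffer["target"] = "malware.exe"  # swap the content after the check

t1 = threading.Thread(target=security_hook_and_service)
t2 = threading.Thread(target=attacker)
t1.start(); t2.start(); t1.join(); t2.join()
print(result["outcome"])              # the check validated "benign.exe",
                                      # the service ran "malware.exe"
```

The checked value and the used value are read from the same writable memory at two different moments, which is exactly the gap the KHOBE engine reportedly exploits.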
Sounds very theoretical, applicable only with good luck and under the famous "specific conditions"? According to Matousec the current version of the KHOBE engine successfully and reliably bypassed the tested security checks in ALL tested software on Windows XP SP3 and Windows Vista SP1 systems running on 32-bit hardware. They point out that with some "smart manipulation" of thread priorities, and with the ever more common multicore/multiprocessor hardware allowing them to literally run their attack threads in parallel with the security threads, they are able to create the necessary conditions for a successful attack in a matter of seconds.
Do not sleep on the bolded comment Matousec made when listing the known affected products: due to "time limitation" only a limited number of products were tested, but they suspect that the majority of Windows security software is/was vulnerable to the attack. Matousec also states that the KHOBE engine should work equally on Windows 7 and on 64-bit hardware, but this has not been tested yet. Apparently the methods currently used to hook security software functionality into both user mode and kernel mode are vulnerable by design, regardless of platform version.
Matousec did not publish their suggested solution for the attack publicly, but my guess is this will be hard to fix. The first thing that comes to mind is limiting the time the security checks take in order to narrow the race condition window, but obviously this would be mitigation only, not a solution. Maybe the memory areas under examination could be locked for the time it takes to verify them. In any case there is very little a system administrator or a user can do. The changes needed here have to happen in the operating systems or in the security software.
Symantec by the way has acknowledged the validity of the attack in a communication sent to their enterprise customers. They do not, however, consider it a vulnerability in their products for now, but rather (a bit confusingly) a problem present in "any product that implements kernel-mode hooking". As mitigation they recommend hardening the other layers of defense in order to prevent this type of malicious code from getting into the system.
Labels:
anti-virus,
firewall,
KHOBE,
malware protection,
Matousec,
Matt Bishop,
Michael Dilger,
race condition,
SSDT,
TOCTTOU,
Windows
May 5, 2010
Hijacking Emails with Microsoft SMTP Service
It is the spring of 2010, not the summer of 2008, but in vulnerability management things sometimes happen with some delay. After publishing the first post, I went for my usual daily browse of the various infosec news sites. There was news about Adobe now having more vulnerabilities in their products than Microsoft, there was some talk about another new instant messaging worm, but what really blew me away was an advisory published yesterday by Core.
The Microsoft SMTP Service and the Microsoft Exchange Server were severely vulnerable to DNS poisoning attacks until April 13th, 2010.
Microsoft released the patch 981832 on that Tuesday. The patch actually fixed multiple issues, although only two of them got documented. The Microsoft Security Bulletin MS10-024 states that the patch fixes the vulnerabilities documented in CVE-2010-0024 and CVE-2010-0025. CVE-2010-0024 especially was interesting. The unpatched Microsoft SMTP component in multiple Microsoft server versions "does not properly parse MX records, which allows remote DNS servers to cause a denial of service (service outage) via a crafted response to a DNS MX record query" according to the CVE. Hmm.
It is a curious patch. Does the Microsoft SMTP component really parse the DNS responses independently? How does it exactly resolve the unknown domain names?
Mister Nicolás Economou from Core got to investigating the issue a bit further and found out some very interesting things. The Microsoft SMTP component indeed does resolve unknown domain names and parse DNS responses independently; it does not use the DNS service offered by the Windows operating systems. Nicolás reverse engineered different versions of the Microsoft SMTP component and found that the DNS resolver in the SMTP component DID NOT randomize the DNS message ID (TXID) in its queries, but instead only incremented it by one for each subsequent query sent. In a sense that did not even matter, since Nicolás also verified that the Microsoft SMTP component DID NOT verify the TXID of the received DNS responses. Apparently any DNS response coming to the correct port and containing an MX record for any pending query got accepted as the definitive one prior to MS10-024. Hmm.
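To make the flaw concrete, here is a toy model of the pre-MS10-024 behaviour described above: a resolver that increments its TXID per query and then never compares the TXID on the responses it receives. This is an illustrative sketch only, not the actual Microsoft code; all class and method names are made up.

```python
import random

class NaiveResolver:
    """Toy resolver with the two flaws Core described."""
    def __init__(self):
        self.txid = random.randrange(65536)
        self.pending = {}                       # question -> expected TXID

    def send_query(self, qname):
        self.txid = (self.txid + 1) & 0xFFFF    # flaw 1: sequential, not random
        self.pending[qname] = self.txid
        return self.txid

    def receive(self, qname, txid, mx_host):
        # Flaw 2: txid is never compared against self.pending[qname],
        # so any spoofed reply for a pending question is accepted.
        if qname in self.pending:
            del self.pending[qname]
            return qname + " MX -> " + mx_host  # cached as authoritative
        return None

r = NaiveResolver()
r.send_query("victim.example")
# The attacker spoofs a response with an arbitrary TXID; it is accepted.
print(r.receive("victim.example", txid=0xDEAD, mx_host="evil.example"))
```

With the TXID check missing, the only thing the off-path attacker has to hit is the destination port, which is exactly why the advisory reads so badly.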
I wonder how does the Microsoft SMTP service cache the DNS entries?
The DNS resolver of the Microsoft SMTP component clearly got forgotten during the summer of 2008, when Dan Kaminsky's research triggered the (previously unseen?) mass patching for DNS cache poisoning vulnerabilities. Microsoft fixed the Windows DNS resolver with the Microsoft Security Bulletin MS08-037. Microsoft did admit to Core that in addition to fixing the two documented vulnerabilities, MS10-024 also added heavier source port randomization for the DNS queries sent out, but classified the changes as "defense-in-depth changes".
The two undocumented vulnerabilities Nicolás Economou discovered got documented in CVE-2010-1689 and CVE-2010-1690. I very much agree with Nicolás and Core that the belatedly documented vulnerabilities fixed with MS10-024 greatly increase the criticality of the patch. It is definitely beyond Important. I would say it is in the infamous Your Servers Are Under Attack category now. In case you have not yet, install this one fast.
Summer of 2008
Let’s start this thing by stepping back a few years in time.
The summer of 2008 was a BIG one for us aspiring network security headz. It definitely deserves a revisit. By that time I personally had reached a sufficient level of technical understanding to truly appreciate and enjoy the science and the art of the research published that summer. It was intellectually a very inspiring season for me. A true eye opener to the possibilities of elite network hacking.
Some of the research summarized below got somewhat written off in the mainstream press as already known issues (you know the “network protocols were not designed with security in mind” response), but actually two very foundational network attacks got major updates published during the long hot summer of 2008.
The summer kicked off in grand manner with the public release of the Simple Network Management Protocol (SNMP) version 3 HMAC Authentication Bypass vulnerability in the beginning of June. It got documented as CVE-2008-0960. The vulnerability allowed an attacker to authenticate arbitrary SNMP messages by getting only the first byte of the HMAC code correct for a valid username. Yep. That serious. Even without any knowledge of the correct HMAC, the attacker had a 1 in 256 chance to get it right with any byte sent. Fair.
Here the protocol was not broken, but rather the implementations of the protocol turned out to be vulnerable and as is common with this type of infrastructure software, the same SNMP implementation code is used by multiple vendors. The list of affected devices and systems in the US-CERT Vulnerability Note VU#878044 took time to scroll.
SNMP version 3 was considered a major upgrade of the protocol. It introduced security to the SNMP definitions. Version 3 was defined by RFCs 3411 through 3418. The Internet Engineering Task Force (IETF) later declared it an Internet Standard (STD0062), recognizing the full maturity of the RFCs. The older versions of the protocol have been considered "obsolete" since the full release of SNMPv3 in 2004.
The security additions in version 3 largely centered on the use of Hash-based Message Authentication Code (HMAC) with SNMP messages. As is usual with keyed hash function output, HMAC can be used to verify both the integrity and the authenticity of the messages. Both MD5 and SHA-1 are widely used to calculate HMAC codes, but practically any cryptographic hash function can be used, as long as both participating entities know the chosen function and the secret key used in the keyed hash computation.
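For reference, this is roughly what a correct full-length HMAC check looks like, here in Python with the standard library. A minimal sketch, not actual SNMP agent code; the key, message and function names are made up for illustration.

```python
import hmac
import hashlib

key = b"shared-secret"
msg = b"SNMPv3 message body"

# Sender computes a 20-byte SHA-1 HMAC over the message.
tag = hmac.new(key, msg, hashlib.sha1).digest()

def verify(message, received_tag):
    """Recompute the full HMAC and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha1).digest()
    return hmac.compare_digest(expected, received_tag)

print(verify(msg, tag))                   # True: intact and authentic
print(verify(b"tampered message", tag))   # False: integrity check fails
```

The crucial property is that the verifier compares the full expected tag, with a length it decides itself. That detail is exactly what the vulnerable implementations got wrong, as described next.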
So far all good. We have well secured SNMP messages that can actually be trusted to deliver the delicate service expected of them. But not quite. Turns out that a vast majority of the actual implementations of the new SNMP version included a peculiar detail in their functionality.
The clients were allowed to explicitly specify the applied HMAC length, and HMAC codes with a minimum length of 1 byte were happily accepted for authentication. I do not know all the details behind the design error, but it would make an interesting study without a doubt. I find it curious that so many different development teams repeated a mistake that, frankly speaking, sounds incomprehensible to a n00b. As far as I can see the involved RFC documents can not be blamed here.
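The bug itself fits in a few lines. A sketch of the flawed comparison, with the brute force that follows from it; the names and the key are illustrative, not taken from any real agent.

```python
import hmac
import hashlib

# Only the agent knows this key; the attacker does not.
key = b"secret-only-the-agent-knows"
msg = b"set sysContact evil"

def broken_verify(message, received_tag):
    """The flaw: truncate OUR tag to the length the CLIENT supplied."""
    expected = hmac.new(key, message, hashlib.sha1).digest()
    return expected[:len(received_tag)] == received_tag

# The attacker sends a 1-byte tag and just tries all 256 values.
for guess in range(256):
    if broken_verify(msg, bytes([guess])):
        print("authenticated with 1-byte tag 0x%02x" % guess)
        break
```

Because the attacker chooses the tag length, the 160-bit SHA-1 HMAC collapses into an 8-bit secret, and 256 tries is nothing over a network.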
An exploit for the vulnerability was quickly published and is still available, although this one should serve perfectly for private exploit development practice. The vulnerability is strikingly simple and straightforward despite the gravity of the threat it presents.
Maybe something to bounce back to in a future blog entry. The intention is to cover the full spectrum of IT security topics, although the focus of the blog may be set somewhere closer to the attack and response tactics for now.
As the summer of 2008 got hotter, so did the heat on the internet infrastructure. Suspicions rose around July 8th when multiple vendors suddenly released very similar looking patches to address an issue in the source port assignment of Domain Name System (DNS) queries. The patches all randomized the source port choice further. Hmm. Ok. US-CERT released the Vulnerability Note VU#800113 to publicly acknowledge the issue alongside the patches, and the vulnerability was later documented as CVE-2008-1447.
Then a dude named Dan Kaminsky appears, publicly asking everybody to install the patches. Install them fast. The most critical DNS vulnerability ever had been found. One that gives you anything you want with DNS in the internet.
The big one, but to know the details, the community had to wait until Kaminsky's presentation at Black Hat USA 2008, scheduled for August 6th.
The noise was loud. A lot of people had issues with Kaminsky's chosen method of disclosure, there was the “network protocols were not designed with security in mind” all over the mainstream media for a moment, but there was also private disclosure by Kaminsky to three fellow researchers, which eventually led to a public leak of the exploit description on July 21st.
In fact Kaminsky's method was even used in the wild against an AT&T server in Texas, USA, before Kaminsky gave the presentation at Black Hat. The google.com DNS entry for local AT&T Internet subscribers was poisoned to point traffic to the attackers' lookalike Google, which on the side hosted some ad-clicking services.
Despite all the excessive press prior to the presentation, I personally found Kaminsky's research awesome when it was published in the beginning of August. Chilling out at home with his laptop, trying to break things, concentrating this time on DNS, he almost accidentally stumbles upon an issue, but is also capable of analysing it to the point of understanding the underlying root cause and imagining the potential of the findings. He really had the global DNS administration privately handed over to him. King-in-the-middle position, anyone? Pwning traffic at will, almost.
Kaminsky noticed that not only can a singular DNS Resource Record be poisoned by DNS flooding; entire zone authorities can be hijacked using a similar method. He proved that the NS record, AKA the authoritative name server field in the replies, can also be poisoned, allowing him complete DNS control of the affected zone.
It would be quickly noticed by the network administrators due to the lack of service? Not if you do not deny the service but just proxy it instead. The vulnerability allows a true man-in-the-middle position for ALL desired DNS dependent traffic in the affected domain.
The attack is fairly simple. Once you can determine the DNS message ID (TXID) used by a recursive DNS server in its outbound queries, you can attack it by flooding it with DNS replies, flooding until you get the TXID correct before the legitimate source does and get your data cached by the recursive DNS server or the client. You also get to determine the Time To Live (TTL) for the record in cache. The maximum specified in the RFCs is over 68 years, but servers in public networks seem to generally cache entries for days or weeks at most.
DNS is largely defined in RFC 1034 and RFC 1035. RFC 2181 was later published to clarify some details, and "there might be some others as well" (as is the norm with RFCs). The message ID aka TXID is defined in RFC 1035, section 4.1.1, as a 16-bit field in the DNS message header. Due to the 16-bit size there is a limited supply (64k) of different message IDs and limited possibilities to randomize the TXIDs to provide a level of integrity for query sessions when using the User Datagram Protocol (UDP).
It’s a known weakness in the DNS protocol. DNS security has been boosted in server implementations by having the DNS server preallocate multiple UDP ports for DNS queries and then use them randomly, adding an extra randomization layer to "secure" the sessions.
This logically forces the attacker to predict an additional value in order to successfully forge a response. He needs to get both the TXID _and_ the source port correct to get his response treated as the trusted one.
The patches released for Kaminsky's DNS bug reinforced the source port randomization. The DNS message headers obviously could not be redesigned and reimplemented successfully in one summer, so only the additional layers of defense could be updated for the time being. The actual vulnerability Kaminsky detected was that the randomization used by the majority of DNS servers was not random enough to resist an attack for many seconds. An update was needed indeed. Microsoft's DNS update is said to have increased the source port variety from 64k to 134M.
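The 64k versus 134M figure is easy to sanity-check: a pool of 2048 randomized source ports multiplied by the 64k TXID space lands exactly on 134M. The port pool size here is my own reverse-engineered assumption from the quoted numbers, not something stated by Microsoft.

```python
# Search space the off-path attacker must beat, before and after
# source port randomization.
txids = 2 ** 16            # 16-bit TXID: 65,536 values
ports_fixed = 1            # pre-patch: a single predictable source port
ports_random = 2048        # assumed randomized port pool (2048 * 64k = 134M)

print("TXID only:        1 in {:,}".format(txids * ports_fixed))
print("TXID + port pool: 1 in {:,}".format(txids * ports_random))
```

Three to four extra decimal orders of magnitude is what turned a seconds-scale attack back into an impractical one, at least for the time being.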
There were some interesting discussions about moving the public DNS servers to use the Transmission Control Protocol (TCP) for communications instead of UDP, but apparently this would cause too much load on the current global network infrastructure. There are apparently simply so many DNS queries performed constantly that adding TCP handshake traffic to each query would overwhelm the infra, according to a lot of network administrators. As far as I can see, in order to implement DNSSEC successfully we need to move DNS traffic to TCP anyway.
Using TCP would in any case add a third random variable to the queries. The sessions would additionally be controlled by the 32-bit TCP sequence numbers. The DNS protocol specifications allow both TCP and UDP queries and the majority of DNS server applications support both, so using TCP could be considered a mitigating factor to further secure local recursive DNS server caches.
The various open DNS services available in the public networks scored well in defense, by the way. I believe Kaminsky himself confirmed that OpenDNS already had the mitigation implemented and was never vulnerable to the attack. PowerDNS and MaraDNS have also stated that they had both the query ID _and_ the source port heavily randomized already before Kaminsky's findings and were never vulnerable.
Interestingly, the DJBDNS server authored by mister Daniel J. Bernstein always had the full mitigation in place, even before Kaminsky discovered the vulnerability. Respect. Pretty much every other widely used DNS server was vulnerable.
The summer was not over yet. In fact the August holiday season was only beginning. There were many sunny days left still in 2008.
I didn't know DEFCON originally meant "defense readiness condition" of the US armed forces. For me it has always been the biggest hacker convention of them all held annually in Las Vegas, USA. The Chaos Communication Congress held in Berlin annually seems to be the oldest one by the way. Chaos Computer Club has been organizing the Congress continuously since 1984.
DEF CON 2008 was held in August as usual. That year the conference finished off with an unscheduled presentation given by Alex Pilosov and Anton Kapela. The talk was titled Stealing The Internet - An Internet-Scale Man In The Middle Attack. The community was just getting to grips with the near complete pwnage capabilities of Kaminsky's DNS cache poisoning attacks. We certainly did not expect another complete network pwnage to surface so soon.
Border Gateway Protocol (BGP) is one of the core routing protocols in the Internet. You got this blog post delivered to your browser largely due to BGP being used as the de facto standard globally. BGP enables the core routers to decide which routes to use (among some other things) when delivering Internet traffic from one system to another. The BGP routers advertise which IP address spaces they deliver to and cache similar advertisements from other BGP routers. In BGP talk the networks announcing these address ranges are known as Autonomous Systems (AS).
The route advertisements are not authenticated nor verified in any way. Whoever pwns a BGP speaking device can advertise networks at will. This has been a known issue for a long time already. The hope has been placed on easy detection of possible attacks and on the (frankly impressive) recovery capabilities of BGP itself.
So I could very well advertise myself as owning the route to the Google IP ranges, but obviously routing traffic successfully to the Google servers and back to the clients would be difficult for me in the core networks. The common understanding was that I would only end up black holing the Google addresses by advertising their prefixes.
The attack has been unintentionally proven multiple times in the public networks, but the mitigation has actually also been verified various times. There was the AS7007 incident in 1997, and there was Pakistan Telecom (a local ISP) accidentally hijacking traffic to YouTube, causing the service to become unavailable for a few hours on the 24th of February 2008.
They intended to block YouTube for their own subscribers due to government orders, but ended up advertising a more specific prefix than YouTube's legitimate announcement to the global BGP routers and eventually hijacked the traffic to YouTube. The issue obviously was quickly detected and those responsible were notified. Once Pakistan Telecom stopped the false announcements, it took less than 5 minutes for the global BGP routers to recover and return to routing traffic to YouTube correctly. The incident served as a type of Business Continuity Planning (BCP) audit for the protocol, among other things, and it indeed verified the awesome auto recovery capabilities.
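The mechanism that made the hijack work is longest prefix match: routers prefer the most specific matching prefix, so a stray /24 beats the legitimate /22 for the addresses it covers. A toy sketch of that selection rule in Python, using the prefixes reported for the incident; real BGP best path selection compares many more attributes, so treat this as illustration only.

```python
import ipaddress

# Routing table with the legitimate /22 and the hijacking /24.
routes = {
    ipaddress.ip_network("208.65.152.0/22"): "YouTube (legitimate)",
    ipaddress.ip_network("208.65.153.0/24"): "Pakistan Telecom (hijack)",
}

def best_route(dst):
    """Pick the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(best_route("208.65.153.238"))   # inside the /24: hijack wins
print(best_route("208.65.152.10"))    # only the /22 matches: stays legit
```

Note that addresses outside the announced /24 keep flowing to YouTube, which is also why such incidents can stay partial and take a moment to diagnose.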
More recently, on April 8th, 2010, a Chinese ISP called IDC China Telecommunication hijacked about 10% of the internet traffic with a similar "configuration error". Various major telecommunications companies including AT&T, Deutsche Telekom and Telefonica were affected, but the entire issue was over in 15 minutes.
Back at DEF CON 2008, Pilosov and Kapela upgraded the attack and presented a method to successfully bypass this denial-of-service effect of the BGP hijacking attacks and gain a nearly stealth man-in-the-middle (MITM) position for all traffic of the entire affected AS.
They used a legitimate BGP attribute called AS-PATH. They were able to define the path the traffic should take, route it successfully to the correct destination and inject their own device into the path, thus being able to freely examine, manipulate and store any unencrypted traffic destined for the hijacked AS.
There is one limitation though. Only traffic _destined_ to the victim AS can be intercepted by the methods presented by Pilosov and Kapela. Traffic sourced from the victim could only be intercepted in some unspecified cases.
As a proof-of-concept (PoC), Pilosov and Kapela successfully hijacked ALL Internet traffic passing to the DEF CON conference itself. Renesys monitored Pilosov's and Kapela's attack on the DEF CON AS number and confirmed that it took only slightly over 80 seconds before the DEF CON traffic was completely hijacked in the monitored environment.
As stated earlier, the issue itself is not new. There were papers published on the security weaknesses of BGP already at the end of the eighties. As far as I am aware, Pilosov and Kapela performed the first ever public demonstration of the attack. Apparently the issue had already been disclosed and demonstrated privately to US government officials earlier.
Summer of 2008 was a beautiful one :) Set some spirit, no doubt, both personally and professionally for me. Lit the fiya. All that. There was much more to it than the above, but let's leave some of that for later posts. I think this one is stretching the length limits already a bit. There was the issue with the OpenSSH Pseudo Random Number Generator (PRNG) in Debian Linux and Ubuntu creating predictable keys, aka CVE-2008-0166, there was a lot of research published about compromising hypervisors, there was the GSM-cracking-made-cheap making the rounds, the buffer overflow in the Citect CitectSCADA ODBC service documented in CVE-2008-2639, and of course soon after the summer the networks got seriously hit by a botnet called Conficker. But let's leave that for some time later.
Welcome to the blog! Expect to find a bit more focused and compact (or not) musings on all things information security in this one. Due to professional engagements I will probably record whole lotta stuff related to intrusion detection and attacks against corporate environments for now, but finally wish to provide coverage of the entire intriguing research output provided by the global information security community and keep you updated on all things related to network security.
With the agenda set, I quote Pep Guardiola. Fasten your seatbelts, let´s have a good ride. Let's go beyond.
The summer of 2008 was a BIG one for us aspiring network security headz. It definitely deservers a revisit. By that time I personally had reached the sufficient level of technical understanding to truly appreciate and enjoy the science and the art of the research published that summer. It was intellectually a very inspiring season for me. True eye opener to the possibilities of elite network hacking.
Some of the research summarized below got somewhat written off in the mainstream press as already known issues (you know the “network protocols were not designed with security in mind” response), but actually two very foundational network attacks got major updates published during the long hot summer of 2008.
The summer kicked off in grand manner with the public release of the Simple Network Management Protocol (SNMP) version 3 HMAC Authentication Bypass vulnerability in the beginning of June. It got documented with the CVE-2008-0960. The vulnerability allowed the attacker to possibly authenticate their arbitrary SNMP messages by getting only the first byte correct of any HMAC code of a valid username. Yep. That serious. Even without any knowledge of a valid username, the attacker had 1 in 256 chances to get it right with any byte sent. Fair.
Here the protocol was not broken, but rather the implementations of the protocol turned out to be vulnerable and as is common with this type of infrastructure software, the same SNMP implementation code is used by multiple vendors. The list of affected devices and systems in the US-CERT Vulnerability Note VU#878044 took time to scroll.
The SNMP version 3 was considered as a major upgrade of the protocol. It introduced security to the SNMP definitions. Version 3 was defined by the RFC 3411 and the RFC 3418. The Internet Engineering Task Force (IETF) later declared it an Internet Standard (STD0062) recognizing the full maturity of the RFC. The older versions of the protocol are considered as “obsolete” after the full release of SNMPv3 in 2004.
The security additions in version 3 largely centered on the use of Hash-based Message Authentication Code (HMAC) with SNMP messages. As is usual with the keyed hash function output, HMAC can be used to verify both the integrity and the authenticity of the messages. Both MD5 and SHA-1 are widely used to calculate HMAC codes, but practically any cryptographic hash function can be used as long as both participating entities know the chosen function and the secret key used in the hash encryption.
This far all good. We have a well secured SNMP messages that actually can be trusted to deliver the delicate service expected from them. But not quite. Turns out that a vast majority of the actual implementations of the new SNMP version included a peculiar detail in their functionality.
The clients were allowed to explicitly specify the applied HMAC length and HMAC codes with the minimum length of 1 byte were happily accepted for authentication. I do not know all the details behind the design error, but it would make an interesting study without a doubt. I find it curious that so many different development teams repeated the mistake that frankly speaking sounds incomprehensible to a n00b. As far as I can see the involved RFC documents can not be blamed here.
Exploit for the vulnerability was quickly published and is still available although this one should serve perfect for any private exploit development practices. The vulnerability is strikingly simple and straight forward despite the graveness of the threat it presents.
Maybe something to bounce back to with a future blog entry. The intention is to cover the full spectrum of IT security topics although the focus of the blog may be set somewhere closer to the attack and response tactiques for now.
As the summer of 2008 got hotter, so increased the heat on the internet infrastructure. The suspicions rose around July 8th when multiple vendors suddenly released very similar looking patches to address an issue in the source port assignment of the Domain Name System (DNS) queries. The patches all randomized the source port choice further. Hmm. Ok. US-CERT released the Vulnerability Note VU#800113 to publicly acknowledge the issue alongside the patches and the vulnerability was later documented with CVE-2008-1447.
Then appears a dude named Dan Kaminsky publicly asking for everybody to install the patches. Install them fast. The most critical DNS vulnerability ever had been found. One that gives you anything you want with DNS in the internet.
The big one. But to learn the details, the community had to wait for Kaminsky's presentation at Black Hat USA 2008, scheduled for August 6th.
The noise was loud. A lot of people had issues with Kaminsky's chosen method of disclosure, and for a moment the "network protocols were not designed with security in mind" line was all over the mainstream media. There was also private disclosure by Kaminsky to three fellow researchers, which eventually led to a public leak of the exploit description on July 21st.
In fact, Kaminsky's method was used in the wild against an AT&T server in Texas, USA, before Kaminsky even gave his Black Hat presentation. The google.com DNS entry for local AT&T Internet subscribers was poisoned to point the traffic to the attackers' lookalike Google, which on the side hosted some ad-clicking services.
Despite all the excessive press prior to the presentation, I personally found Kaminsky's research awesome when it was published in the beginning of August. Chilling out at home with his laptop, trying to break things, concentrating this time on DNS. He almost accidentally stumbled upon an issue, but he was also capable of analysing it to the point that he understood the underlying root cause and could imagine the potential of the findings. He really had global DNS administration privately handed over to him. King-in-the-middle position, anyone? Pwning traffic at will, almost.
Kaminsky noticed that not only can a single DNS Resource Record be poisoned by DNS flooding; entire zone authorities can be hijacked using a similar method. He proved that the NS record, AKA the authoritative name server field in the replies, can also be poisoned, giving him complete DNS control of the affected zone.
Would it be quickly noticed by the network administrators due to the lack of service? Not if you do not deny the service but just proxy it instead. The vulnerability allows a true man-in-the-middle position for ALL desired DNS-dependent traffic in the affected domain.
The attack is fairly simple. Once you can determine the DNS message ID (TXID) used by a recursive DNS server in its outbound queries, you can attack it by flooding it with DNS replies. Flooding, until you get the TXID correct before the legitimate source does and get your data cached by the recursive DNS server or the client. You also get to determine the Time To Live (TTL) for the record in the cache. The maximum specified in the RFCs is over 68 years, but servers in the public networks seem to generally cache entries for days or weeks at most.
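That 68-year figure follows straight from the size of the TTL field. A quick sanity check, assuming the unsigned 31-bit interpretation given in RFC 2181:

```python
# RFC 2181 limits the TTL to an unsigned 31-bit value (seconds).
max_ttl_seconds = 2**31 - 1
years = max_ttl_seconds / (365.25 * 24 * 3600)
print(round(years, 2))  # about 68 years
```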
DNS is largely defined in RFC 1034 and RFC 1035. RFC 2181 was later published to clarify some details, and "there might be some others as well" (as is the norm with RFCs). The message ID, aka TXID, is defined in RFC 1035, section 4.1.1, as a 16-bit field in the DNS message header. Due to the 16-bit size there is a limited supply (64k) of different message IDs and limited possibilities to randomize the TXIDs to provide a level of integrity to the query sessions when using the User Datagram Protocol (UDP).
It is a known weakness in the DNS protocol. DNS security has been boosted in the server implementations by having the DNS server preallocate multiple UDP ports for the DNS queries and then use them randomly, adding an extra randomization layer to "secure" the sessions.
This logically forces the attacker to predict an additional value in order to successfully forge a response. He needs to get both the TXID _and_ the source port correct to get his response treated as the trusted one.
The patches released for Kaminsky's DNS bug reinforced the source port randomization. The DNS message headers obviously could not be redesigned and reimplemented successfully in one summer, so only the additional layers of defense could be updated for the time being. The actual vulnerability Kaminsky detected was that the randomization used by the majority of the DNS servers was not random enough to resist an attack for more than a few seconds. An update was needed indeed. Microsoft's DNS update is said to have increased the source port variety from 64k to 134M.
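Those numbers are easy to sanity-check. A back-of-the-envelope sketch, assuming uniform randomization; the 2,048-port pool below is my own hypothetical figure, chosen because it reproduces the ~134M quoted above:

```python
txids = 2**16   # 16-bit DNS message ID: 65,536 possible values
ports = 2**11   # hypothetical pool of 2,048 random UDP source ports
print(txids)          # 65536 -- the search space with a fixed source port
print(txids * ports)  # 134217728 -- roughly the 134M figure quoted above
```

With a fixed source port the attacker can statistically win within seconds of flooding; multiplying the space by a couple of thousand ports pushes the expected attack time up by the same factor.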
There were some interesting discussions about moving the public DNS servers to use the Transmission Control Protocol (TCP) for communications instead of UDP, but apparently this would cause too much load on the current global network infrastructure. There are simply so many DNS queries performed constantly that adding the TCP handshake traffic to each query would overwhelm the infra, according to a lot of network administrators. As far as I can see, in order to implement DNSSEC successfully we need to move the DNS traffic to TCP anyway.
Using TCP would in any case add a third random variable to the queries. The sessions would additionally be controlled by the 32-bit TCP sequence numbers. The DNS protocol specifications allow both TCP and UDP queries, and the majority of DNS server applications support both, so using TCP could be considered a mitigating factor to further secure the local recursive DNS server caches.
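Continuing the same back-of-the-envelope math (again with my hypothetical 11-bit port pool), a blindly spoofed TCP reply would additionally have to hit a 32-bit sequence number:

```python
txid_bits, port_bits, seq_bits = 16, 11, 32
combinations = 2 ** (txid_bits + port_bits + seq_bits)
print(combinations)  # 576460752303423488, a hopeless space for blind flooding
```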
The various open DNS services available in the public networks scored well in defense, by the way. I believe Kaminsky himself confirmed that OpenDNS already had the mitigation implemented and was never vulnerable to the attack. PowerDNS and MaraDNS have also stated that they had both the Query ID _and_ the source port heavily randomized already before Kaminsky's findings and were never vulnerable.
Interestingly, among the traditional DNS server applications only one seemed to be non-vulnerable: the DJBDNS server, authored by mister Daniel J. Bernstein, always had the full mitigation in place, even before Kaminsky discovered the vulnerability. Respect. Pretty much all other widely used DNS servers were vulnerable.
The summer was not over yet. In fact, the August holiday season was only beginning. There were many sunny days still left in 2008.
I didn't know DEFCON originally meant the "defense readiness condition" of the US armed forces. For me it has always been the biggest hacker convention of them all, held annually in Las Vegas, USA. The Chaos Communication Congress, held annually in Berlin, seems to be the oldest one, by the way. The Chaos Computer Club has been organizing the Congress continuously since 1984.
DEF CON 2008 was held, as usual, in early August. That year the conference finished off with an unscheduled presentation given by Alex Pilosov and Anton Kapela. The talk was titled Stealing The Internet - An Internet-Scale Man In The Middle Attack. The community was just getting to grips with the near-complete pwnage capabilities of Kaminsky's DNS cache poisoning attacks. We certainly did not expect another complete network pwnage to surface so soon.
Border Gateway Protocol (BGP) is one of the core routing protocols of the Internet. You got this blog post delivered to your browser largely thanks to BGP being used as the de facto standard globally. BGP enables the core routers to decide which routes to use (among some other things) when delivering Internet traffic from one system to another. The BGP routers advertise which IP address ranges they deliver to and cache similar advertisements from other BGP routers. In BGP talk the networks announcing these ranges are known as Autonomous Systems (AS).
The route advertisements are not authenticated or verified in any way. Whoever pwns a BGP-speaking device can advertise networks at will. This has been a known issue for a long time. The hope has been placed on the easy detection of possible attacks and the (frankly impressive) recovery capabilities of BGP itself.
So I could very well advertise myself as owning the route to the Google IP ranges, but obviously routing the traffic successfully onward to the Google servers and back to the clients would be difficult for me in the core networks. The common understanding was that I would only end up black-holing the Google addresses by advertising their prefixes.
The attack has been unintentionally proven multiple times in the public networks, but the mitigation has also been verified various times. There was the AS7007 incident in 1997, and there was Pakistan Telecom (a local ISP) accidentally hijacking the traffic to YouTube, causing the service to become unavailable for a few hours on the 24th of February 2008.
They intended to block YouTube from their own subscribers due to government orders, but ended up advertising BGP routes more specific than the legitimate ones for the YouTube prefix and eventually hijacked the traffic to YouTube. The issue obviously was quickly detected and the responsible parties were notified. Once Pakistan Telecom stopped the false announcements, it took less than 5 minutes for the global BGP routers to recover and route traffic to YouTube correctly again. The incident served as a type of Business Continuity Planning (BCP) audit for the protocol, among other things, and it indeed verified the awesome auto-recovery capabilities.
More recently, on April 8th, 2010, a Chinese ISP called IDC China Telecommunication hijacked about 10% of the Internet traffic with a similar "configuration error". Various major telecommunications companies, including AT&T, Deutsche Telekom and Telefonica, were affected, but the entire issue was over in 15 minutes.
Back at DEF CON 2008, Pilosov and Kapela upgraded the attack and presented a method to successfully bypass this denial-of-service effect of the BGP hijacking attacks and to gain a nearly stealth man-in-the-middle (MITM) position for all traffic of the entire affected AS.
They used a legitimate BGP attribute called AS-PATH. They were able to define the path the traffic should take, route it successfully to the correct destination, and inject their own device into the path, thus being able to freely examine, manipulate and store any unencrypted traffic destined to the hijacked AS.
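The trick hinges on standard BGP loop prevention: a router drops any announcement whose AS-PATH already contains its own AS number. By prepending the ASes of his intended return path to the hijacked announcement, the attacker keeps exactly those routers on the legitimate route while everyone else follows the hijack. A toy sketch of that acceptance logic (all AS numbers below are hypothetical):

```python
def accepts_route(router_asn: int, as_path: list) -> bool:
    # Standard BGP loop prevention: reject any announcement whose
    # AS-PATH already contains our own AS number.
    return router_asn not in as_path

# Attacker AS 666 hijacks a prefix of victim AS 100 and prepends
# AS 10 and AS 20, the upstreams he needs for the return path.
hijack_path = [666, 10, 20, 100]

print(accepts_route(10, hijack_path))  # False: AS 10 keeps the real route
print(accepts_route(20, hijack_path))  # False: AS 20 keeps the real route
print(accepts_route(30, hijack_path))  # True: everyone else follows the hijack
```

Because AS 10 and AS 20 still route the prefix toward the real AS 100, the attacker can forward the intercepted traffic through them to its legitimate destination, which is what turns a plain blackhole into a man-in-the-middle.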
There is one limitation, though. Only the traffic _destined_ to the victim AS can be intercepted with the methods presented by Pilosov and Kapela. The traffic sourced from the victim could only be intercepted in some unspecified cases.
As a proof of concept (PoC), Pilosov and Kapela successfully hijacked ALL Internet traffic destined to the DEF CON conference itself. Renesys monitored Pilosov's and Kapela's attack on the DEF CON AS number and confirmed that it took only slightly over 80 seconds before the DEF CON traffic was completely hijacked in the monitored environment.
As stated earlier, the issue itself is not new. Papers on the security weaknesses of BGP were published already at the end of the eighties. As far as I am aware, Pilosov and Kapela performed the first ever public demonstration of the attack. Apparently the issue had been disclosed and demonstrated privately to US government officials earlier.
The summer of 2008 was a beautiful one :) It set some spirit, no doubt, both personally and professionally for me. Lit the fiya. All that. There was much more to it than the above, but let's leave some of that for later posts; I think this one is stretching the length limits a bit already. There was the issue with the Pseudo Random Number Generator (PRNG) of the OpenSSL package in Debian Linux and Ubuntu creating predictable keys, aka CVE-2008-0166, there was a lot of research published about compromising hypervisors, there was GSM-cracking-made-cheap making the rounds, the buffer overflow in the Citect CitectSCADA ODBC service documented in CVE-2008-2639, and of course soon after the summer the networks got seriously hit by a botnet called Conficker. But let's leave that for some time later.
Welcome to the blog! Expect to find somewhat more focused and compact (or not) musings on all things information security in this one. Due to professional engagements I will probably record a whole lotta stuff related to intrusion detection and attacks against corporate environments for now, but I ultimately wish to cover the entire intriguing research output of the global information security community and keep you updated on all things network security.
With the agenda set, I quote Pep Guardiola: fasten your seatbelts, let's have a good ride. Let's go beyond.
Labels:
Alex Pilosov,
Anton Kapela,
BGP,
Black Hat 2008,
CVE-2008-0960,
CVE-2008-1447,
Dan Kaminsky,
Def Con 2008,
DNS,
SNMPv3,
VU#800113,
VU#878044