May 26, 2010
System Scanning with SHODAN
Whether named after the first black belt degree in Japanese martial arts or after the evil AI in the System Shock games, SHODAN The Computer Search Engine is a very interesting experiment indeed. In short, SHODAN provides a web based interface for data mining various details about computers and services in the public network. Think Google for server banners.
While there are various NMAP-like scanners with a web interface already available in the internetz, SHODAN takes the game to the next level. According to the authors, SHODAN runs a custom-built distributed port scanner that currently queries publicly available HTTP, FTP, SSH and Telnet services (more ports may be added later) and indexes the banner data returned by the servers. SHODAN also provides various clever filters for sorting the search results, including a world map showing the geolocations, and standard CIDR notation can be used to limit the searches to desired IP address ranges only.
Simple, but ah so devastating. Have you ever wondered whether there are still pre-1993 versions of Cisco IOS running in public networks? Or any open anonymous FTP servers? Surely there are no Microsoft IIS 4.0 web servers in production anymore? If you only see three pages worth of results, it is probably because you are not logged in to SHODAN.
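As a quick illustration of the kind of queries this enables, here is a minimal sketch using the shodan Python module. The module, the API key and the exact query syntax are assumptions for the example rather than confirmed details of the service, so treat it as pseudocode for the idea.

import shodan  # assumed third-party module

API_KEY = "YOUR_API_KEY"  # hypothetical key; a registered account is required
api = shodan.Shodan(API_KEY)

# Look for old IIS banners inside an example address range (RFC 5737 documentation space).
results = api.search('Microsoft-IIS/4.0 net:192.0.2.0/24')
print("Total results: %s" % results['total'])
for match in results['matches'][:10]:
    print(match['ip_str'], match['data'].splitlines()[0])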
Labels:
banner,
data mining,
fingerprinting,
FTP,
HTTP,
port scan,
reconnaissance,
SHODAN,
SSH,
Telnet
May 18, 2010
KHOBE Causes a Temblor
Matousec caused quite a temblor in the infosec community with the KHOBE attack paper published last week. See my earlier post Windows TOCTTOU Attacks with KHOBE for the initial material.
KHOBE has gotten a lot of publicity and has already generated active responses and commentary from the anti-virus industry. The temblor also reached the internet s0c1ety thanks to some "juicy" details published in a GData blog post about their recent communication with Matousec while trying to get more information and evaluate the effect on their own software.
It is the art of vulnerability disclosure, no? The difficulty of handling all the aspects of disclosing delicate information. Considering that Matousec apparently disclosed their full research privately to their "clients and other software vendors" already in August 2008, and (especially) considering that the KHOBE code is the result of some years of research and development, I personally find it only correct that Matousec now sells the full details and offers audit services to paying customers only. On the other hand, helping an affected vendor out a bit beyond the public paper is not too much to ask either IMO, so let's call it 1-1.
The technical talk about the attack has been somewhat limited due to all this. Anti-virus vendors want to see code before assessing the threat further and have concentrated on responding only to the facts detailed in the Matousec paper for now. See Paul Ducklin's blog post from Sophos for a thorough write-up on the issue, which in my opinion also does a good job of summarizing the initial vendor stance on KHOBE across the field.
While I completely agree with the point about layered defense providing security beyond the system call and parameter checks (and find the point about unknown malware bypassing the protection with or without KHOBE to be logical), I think the discussion is far from over yet. For the rest of the post, let's assume we are running a multicore/multiprocessor system: a system where the threads are not competing for the clock cycles of a single processor, but have multiple cores to choose from and can actually run in parallel in time.
I suspect that various security software products on this type of Windows system are highly vulnerable to the attack. I am limited to the information available in the KHOBE paper about Matousec's findings, but studying the earlier papers published on TOCTTOU attacks against Windows leaves me feeling that possibly every validation check done on the Windows platform is vulnerable.
The problem is not really the SSDT hooking that dominates the public discussion at the moment. As far as I can see, the root cause of the vulnerability lies in the way the data is validated on Windows: in the way it is referenced during the validation process and, especially, in the alarming detail that the memory areas being validated can in some cases be manipulated while they are being validated.
It seems the KHOBE code focuses on exploiting the vulnerability in software that uses SSDT hooks to intercept system calls and validate their parameters, but I doubt the exploitation is limited to checks initiated by SSDT hooks. The real problem is the accessibility of the memory areas under validation, not the way the checks are initiated. Any validation check takes multiple clock cycles, which may give a parallel thread running on another core plenty of time to manipulate the values in memory while they are under examination, causing unvalidated malicious parameters to actually be passed on for execution.
In my opinion the anti-virus vendors rushed a bit in declaring that any known malware would be detected regardless of KHOBE thanks to the various alterations monitored in the system. While obviously true for the big zoos of known malicious code, it does not exactly address the issue sufficiently in enterprise environments.
Imagine an installer exploiting TOCTTOU vulnerabilities, used in a staged attack as the initial payload to bypass the security checks while installing further compromise tools, including a malicious communication component that again uses the technique to bypass the firewall for stealth communications. The race condition exists as long as the user-mode memory objects can be manipulated while the values are being examined.
It is not the end of the world by any means, but definitely something to keep an eye on. Possibly a real threat (at least until more details about KHOBE are published), but in any case a serious vulnerability which apparently exists and which will probably require changes both to the Windows kernel and to the security software functionality in order to be solved completely.
One more reason for enterprises to ensure they have adequate incident response capabilities available in addition to the preventive security mechanisms. Obviously, all hope should not be placed on the anti-virus vendor and endpoint protection. Preventive security measures will be circumvented repeatedly and intrusions do happen. Just as trusted systems need hardening, they need constant intrusion and integrity monitoring throughout their lifetime.
Labels:
anti-virus,
intrusion detection,
KHOBE,
Matousec,
race condition,
SSDT,
TOCTTOU,
vulnerabilities,
Windows
May 13, 2010
Disabling Broadcast Domains With PVLAN
Yep. I am a fan boy. Been following the Internet Storm Center Diary (ISC) almost daily for years now. Have learned a lot and have been inspired to look deeper into various things by the diary over the years. Big up them incident handlers @ ISC.
I am also a firm believer that the broadcast domain concept in Ethernet and Token Ring design (and in whatever other network technology implements it) is a security vulnerability.
Gaining a man-in-the-middle (MITM) position in an Ethernet broadcast domain is a trivial task with Ettercap (and similar tools), and MITM is about as close as you can get to complete system compromise on the network. MITM in an Ethernet broadcast domain allows complete compromise of all network traffic to/from a victim system, so any effort to mitigate and complicate MITM attacks is fully endorsed here.
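For the monitoring-minded, here is a minimal sketch of the classic arpwatch idea: warn when an IP address suddenly maps to a new MAC address, which is what an Ettercap-style ARP poisoning run looks like on the wire. It assumes the third-party scapy library, needs root privileges, and leaves interface selection to scapy's defaults.

from scapy.all import ARP, sniff  # assumed third-party library

seen = {}  # remembered IP -> MAC mappings

def check(pkt):
    # Watch ARP replies (is-at) and complain when a known mapping changes.
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print("Possible ARP spoofing: %s moved from %s to %s" % (ip, seen[ip], mac))
        seen[ip] = mac

sniff(filter="arp", prn=check, store=0)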
Rob VandenBrink pointed out an effective technique to disable Ethernet broadcast domains in his ISC post yesterday.
Private Virtual Local Area Network (PVLAN) is a commonly implemented feature in switches. It isolates the access ports by blocking traffic from one port to another: an isolated port can exchange traffic only with the uplink, while ports placed in the same community PVLAN can additionally talk to each other (based on the destination MAC in the Ethernet frame). Uplink, or promiscuous port, is the term used in PVLAN talk for the mighty port forwarding traffic to/from other networks. Any PVLAN port/host can send traffic ONLY to the uplink port or to another specific port/host in the same community PVLAN.
The feature seems to be supported by both of the big players, Cisco and Juniper, but apparently Cisco does not support PVLANs on the 1xxx or 2xxx series. You have to go all the way up to the Cisco Catalyst 3560 models to get the technology supported. As far as I can see, all Juniper EX switches support PVLANs.
Ensure your datacenter or cloud provider and your network administrators have PVLANs correctly implemented (where suitable) on the switches, especially if you are operating in an Infrastructure-as-a-Service (IaaS) cloud shared by multiple clients. My testing possibilities are very limited (and virtual only), so I would really love to hear about any issues caused by a PVLAN implementation in whatever type of testing environment. Quick testing on a workstation access switch in a small Windows 2003 Active Directory domain did not reveal any immediate problems.
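For a quick and dirty reachability check from a host inside the PVLAN, something like the following sketch can be used. The subnet, the gateway address and the probed ports are hypothetical examples; in a correctly isolated PVLAN every neighbor-to-neighbor connection attempt should fail while the uplink stays reachable.

import socket

SUBNET = "192.168.1."        # example subnet of the PVLAN under test
GATEWAY = "192.168.1.1"      # example uplink/gateway, expected to remain reachable
PORTS = (135, 139, 445, 80)  # example ports likely open on Windows neighbors

def reachable(host, port, timeout=1.0):
    # Plain TCP connect attempt; no answer means the port isolation is doing its job.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for i in range(2, 255):
    host = SUBNET + str(i)
    if host == GATEWAY:
        continue
    open_ports = [p for p in PORTS if reachable(host, p)]
    if open_ports:
        print("Unexpected neighbor reachability: %s ports %s" % (host, open_ports))

print("Gateway reachable:", any(reachable(GATEWAY, p) for p in PORTS))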
Note that there have been attacks published against PVLANs, so the normal post-installation hardening routines are needed here as well. There was the @stake security assessment of Cisco Catalyst switches in 2002 mentioning Layer 2 proxy attacks against PVLANs, and later in 2005 Arhont Ltd. detailed a MAC spoofing attack allowing PVLAN jumping. Check the SecuriTeam article for the details and for the Cisco response to Arhont Ltd.
Cisco has published some excellent papers on VLAN security and Layer 2 attacks. I recommend the VLAN Security White Paper and the SAFE Layer 2 Security In-Depth (PDF) for further reading. Check also the Securing Networks with Private VLANs and VLAN Access Control Lists for correct implementation guidance.
Labels:
broadcast domain,
Ethernet,
Ettercap,
Layer 2,
man-in-the-middle,
MITM,
Token Ring,
VLAN
May 11, 2010
Windows TOCTTOU Attacks with KHOBE
Matousec has been one of those unsung Internet heroes for some time already. I know them from actively testing Windows software firewalls and openly sharing the test results as well as the testing methods on their website. But what may have started in 2006 as a small security software testing group has by now truly matured into a cutting-edge research crew.
They published a somewhat groundbreaking vulnerability advisory 2010-05-05.01 on their website last week. The vulnerability and the attack are explained in the accompanying article entitled KHOBE – 8.0 Earthquake For Windows Desktop Security Software.
Matousec did not publicly release the KHOBE engine code with all the research implemented, but apparently they have created a tool that successfully bypasses the majority, if not almost all, of the kernel-mode security checks performed by current Windows security software. Think malware checks by the anti-virus software, traffic content checks by the software firewall, all bypassed at the final frontier, in kernel mode.
In short, the attack exploits a specific type of race condition known as a time-of-check-to-time-of-use (TOCTTOU) bug, which (apparently almost constantly) occurs when Windows security software performs its various checks on application behavior. The attack class was documented already in 1996 in the Checking for Race Conditions in File Accesses (PDF) paper by Matt Bishop and Michael Dilger, and the vulnerability was detailed further by Andrey Kolishak at the end of 2003 in his Bugtraq mailing list post entitled TOCTOU with NT System Service Hooking.
The attack happens at the thread level in the system, in the grey area between user mode and kernel mode where application threads call various operating system services in order to install and execute correctly. There, modern security software appears as additional or hooked functionality of the operating system, usually adding some type of mandatory access control to calls touching the Windows registry, running processes and files, among other things.
The security applications usually modify the System Service Descriptor Table (SSDT) in Windows, replacing various entries in the table and thus causing the calls and the parameters passed to these services to be examined by the security application first. Matousec presented calls to load system drivers and calls to terminate processes as examples, but multiple other calls get intercepted by similar methods.
The vulnerability is largely due to the fact that although the hooks may live in kernel mode, the actual memory buffers and call parameters reside in the user-mode address space and are therefore accessible to the attacker. The attacker needs to run two threads, but he can then manipulate the buffer or the parameter content concurrently while it is being checked by the security hook. He passes a legitimate value to the check and has it validated as acceptable, then gets the concurrently swapped-in malicious value actually passed to and processed by the called system service.
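To make the race concrete, here is a minimal user-mode sketch of the check-then-use window, not Matousec's kernel-level engine: one thread plays the role of the security check that validates a shared parameter and then fetches it again for use, while a second thread keeps flipping the parameter between a benign and a malicious value. All names and values are invented for the illustration.

import sys
import threading

sys.setswitchinterval(1e-6)   # switch threads aggressively so the narrow window shows up in CPython

param = {"value": "benign"}   # shared "call parameter" living in user-mode memory
ROUNDS = 200000
wins = 0

def flipper(stop):
    # Attacker thread: keeps swapping the parameter while the checks run.
    while not stop.is_set():
        param["value"] = "malicious"
        param["value"] = "benign"

def checked_call():
    # "Security hook": validate first, then fetch the value again a moment later for actual use.
    if param["value"] != "benign":
        return None               # rejected, never reaches the service
    return param["value"]         # second fetch: this is what the service would actually get

stop = threading.Event()
attacker = threading.Thread(target=flipper, args=(stop,))
attacker.start()
for _ in range(ROUNDS):
    if checked_call() == "malicious":
        wins += 1
stop.set()
attacker.join()
print("malicious value slipped past the check %d times out of %d calls" % (wins, ROUNDS))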
Sounds very theoretical and applicable only with good luck and under the famous specific conditions? According to Matousec, the current version of the KHOBE engine successfully and reliably bypassed the tested security checks in ALL tested software on Windows XP SP3 and Windows Vista SP1 systems running on 32-bit hardware. They point out that with some "smart manipulation" of the thread priorities, and with the ever more common multicore/multiprocessor hardware allowing them to literally run their attack threads parallel in time to the security threads, they are able to create the necessary conditions for a successful attack in a matter of seconds.
Do not sleep on the bolded comment Matousec makes when listing the known affected products: due to "time limitation" only a limited number of products have been tested, but they suspect that the majority of Windows security software is/was vulnerable to the attack. Matousec also states that the KHOBE engine should work equally well on Windows 7 and on 64-bit hardware, although this has not been tested yet. Apparently the methods currently used to hook security software functionality into both user mode and kernel mode are vulnerable by design regardless of platform version.
Matousec did not publish their suggested solution publicly, but my guess is this will be hard to fix. The first thing that comes to mind is limiting the time the security checks take in order to narrow the race condition window, but obviously that would only be a mitigation, not a solution. Maybe the memory areas under examination could be locked for the time it takes to verify them. In any case there is very little a system administrator or a user can do; the changes needed here have to happen in the operating system or in the security software.
Symantec, by the way, has acknowledged the validity of the attack in a communication sent to their enterprise customers. They do not, however, consider it a vulnerability in their products for now, but rather (a bit confusingly) a problem present in "any product that implements kernel-mode hooking". As mitigation they recommend hardening the other layers of defense in order to prevent this type of malicious code from getting into the system in the first place.
Labels:
anti-virus,
firewall,
KHOBE,
malware protection,
Matousec,
Matt Bishop,
Michael Dilger,
race condition,
SSDT,
TOCTTOU,
Windows
May 5, 2010
Hijacking Emails with Microsoft SMTP Service
It is the spring of 2010, not the summer of 2008, but in vulnerability management things sometimes happen with some delay. After publishing the first post, I went for my usual daily browsage of the various infosec news sites. There was the news about Adobe now having more vulnerabilities in their products than Microsoft, there was some talk about another new instant messaging worm, but what really blew me away was an advisory published yesterday by Core.
The Microsoft SMTP Service and the Microsoft Exchange Server were severely vulnerable to DNS poisoning attacks until April 13th, 2010.
Microsoft released patch 981832 on that Tuesday. The patch actually fixed multiple issues, although initially only two of them got documented. The Microsoft Security Bulletin MS10-024 states that the patch fixes the vulnerabilities documented in CVE-2010-0024 and CVE-2010-0025. CVE-2010-0024 especially was interesting. An unpatched Microsoft SMTP component in multiple Microsoft server versions "does not properly parse MX records, which allows remote DNS servers to cause a denial of service (service outage) via a crafted response to a DNS MX record query" according to the CVE. Hmm.
It is a curious patch. Does the Microsoft SMTP component really parse the DNS responses independently? How exactly does it resolve unknown domain names?
Mister Nicolás Economou from Core got into investigating the issue a bit further and found out some very interesting things. The Microsoft SMTP component indeed resolves unknown domain names and parses the DNS responses independently; it does not use the DNS service offered by the Windows operating system. Nicolás reverse engineered different versions of the Microsoft SMTP component and found out that the DNS resolver in the SMTP component DID NOT randomize the DNS message ID (TXID) in its queries, but instead only incremented it by one for each subsequent query sent. In a sense even that did not matter, since Nicolás also verified that the Microsoft SMTP component DID NOT verify the TXID of the received DNS responses. Apparently, prior to MS10-024, any DNS response arriving at the correct port and containing an MX record for a pending query got accepted as the definitive one. Hmm.
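To illustrate why skipping the TXID check is so fatal, here is a rough sketch that builds a forged DNS response for an MX query using only the Python standard library. The transaction ID, the domain and the mail exchanger below are placeholders, and a real off-path attack would still have to hit the right UDP port and win the race against the legitimate answer.

import struct

def encode_name(name):
    # Encode a dotted domain name into DNS wire-format labels.
    out = b""
    for label in name.split("."):
        out += struct.pack("B", len(label)) + label.encode("ascii")
    return out + b"\x00"

def forged_mx_response(txid, qname, mx_host, ttl=86400):
    # Header: ID, flags (standard response, RD+RA set), 1 question, 1 answer, 0 authority, 0 additional.
    header = struct.pack(">HHHHHH", txid, 0x8180, 1, 1, 0, 0)
    question = encode_name(qname) + struct.pack(">HH", 15, 1)   # QTYPE=MX, QCLASS=IN
    rdata = struct.pack(">H", 10) + encode_name(mx_host)        # preference 10 + exchange name
    answer = (b"\xc0\x0c"                                       # compression pointer back to the qname
              + struct.pack(">HHIH", 15, 1, ttl, len(rdata))
              + rdata)
    return header + question + answer

# With no TXID verification on the receiving side, the value passed here is irrelevant;
# any of the 65536 possible IDs would have been accepted before MS10-024.
packet = forged_mx_response(0x0000, "example.com", "mail.attacker.example")
print("%d bytes of spoofed MX response" % len(packet))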
I wonder how the Microsoft SMTP service caches the DNS entries?
The DNS resolver of the Microsoft SMTP component clearly got forgotten during the summer of 2008 when Dan Kaminsky's research triggered the (previously unseen?) mass patching for DNS cache poisoning vulnerabilities. Microsoft fixed the Windows DNS resolver with the Microsoft Security Bulletin MS08-037. Microsoft did admit to Core that in addition to fixing the two documented vulnerabilities, MS10-024 also added heavier source port randomization for the outgoing DNS queries, but classified these fixes as "defense-in-depth changes".
The two previously undocumented vulnerabilities Nicolás Economou discovered have now been documented as CVE-2010-1689 and CVE-2010-1690. I very much agree with Nicolás and Core that the belatedly documented vulnerabilities fixed by MS10-024 greatly increase the criticality of the patch. It is definitely beyond Important; I would say it is in the infamous Your Servers Are Under Attack category now. In case you have not yet, install this one fast.
Summer of 2008
Let’s start this thing by stepping back a few years in time.
The summer of 2008 was a BIG one for us aspiring network security headz. It definitely deserves a revisit. By that time I personally had reached a sufficient level of technical understanding to truly appreciate and enjoy the science and the art of the research published that summer. It was an intellectually very inspiring season for me. A true eye opener to the possibilities of elite network hacking.
Some of the research summarized below got somewhat written off in the mainstream press as already known issues (you know the “network protocols were not designed with security in mind” response), but actually two very foundational network attacks got major updates published during the long hot summer of 2008.
The summer kicked off in grand manner with the public release of the Simple Network Management Protocol (SNMP) version 3 HMAC authentication bypass vulnerability in the beginning of June. It got documented as CVE-2008-0960. The vulnerability allowed an attacker to get arbitrary SNMP messages authenticated by getting only the first byte of the HMAC code for a valid username correct. Yep. That serious. Even without any knowledge of the authentication key, the attacker had a 1 in 256 chance of getting it right with any single byte sent. Fair.
Here the protocol itself was not broken; rather, the implementations of the protocol turned out to be vulnerable, and as is common with this type of infrastructure software, the same SNMP implementation code is used by multiple vendors. The list of affected devices and systems in the US-CERT Vulnerability Note VU#878044 takes a while to scroll through.
SNMP version 3 was considered a major upgrade of the protocol: it introduced security into the SNMP definitions. Version 3 is defined by RFC 3411 through RFC 3418, which the Internet Engineering Task Force (IETF) later declared an Internet Standard (STD 62), recognizing the full maturity of the specifications. The older versions of the protocol have been considered "obsolete" since the full release of SNMPv3 in 2004.
The security additions in version 3 largely centered on the use of Hash-based Message Authentication Codes (HMAC) with SNMP messages. As is usual with keyed hash function output, HMAC can be used to verify both the integrity and the authenticity of the messages. Both MD5 and SHA-1 are widely used to calculate HMAC codes, but practically any cryptographic hash function can be used as long as both participating entities know the chosen function and the shared secret key.
So far all good. We have well secured SNMP messages that can actually be trusted to deliver the delicate service expected of them. Or not quite. It turns out that a vast majority of the actual implementations of the new SNMP version included a peculiar detail in their functionality.
Clients were allowed to explicitly specify the applied HMAC length, and HMAC codes with a minimum length of 1 byte were happily accepted for authentication. I do not know all the details behind the design error, but it would without a doubt make an interesting study. I find it curious that so many different development teams repeated a mistake that, frankly speaking, sounds incomprehensible even to a n00b. As far as I can see the involved RFC documents cannot be blamed here.
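A rough sketch of the broken verification logic, with invented function names, keys and message contents, and with all the actual SNMPv3 packet handling omitted: when the receiver truncates its own HMAC to whatever length the client supplied, a one-byte "authenticator" only has to match 1 of 256 possible values.

import hmac
import hashlib
import os

key = os.urandom(16)            # the agent's secret authentication key, unknown to the attacker
message = b"forged SNMPv3 PDU"  # placeholder for the attacker's request

def broken_verify(msg, client_mac):
    # Flawed logic: truncate the locally computed HMAC to the length chosen by the *client*.
    expected = hmac.new(key, msg, hashlib.sha1).digest()[:len(client_mac)]
    return hmac.compare_digest(expected, client_mac)

# Brute forcing the single byte takes at most 256 attempts, about 1 in 256 per forged message.
for guess in range(256):
    if broken_verify(message, bytes([guess])):
        print("authenticated with the one-byte HMAC 0x%02x" % guess)
        break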
An exploit for the vulnerability was quickly published and is still available, although this one would also serve perfectly for private exploit development practice. The vulnerability is strikingly simple and straightforward despite the gravity of the threat it presents.
Maybe something to come back to in a future blog entry. The intention is to cover the full spectrum of IT security topics, although the focus of the blog may sit somewhere closer to attack and response tactics for now.
As the summer of 2008 got hotter, so did the heat on the internet infrastructure. Suspicions rose around July 8th when multiple vendors suddenly released very similar looking patches to address an issue in the source port assignment of Domain Name System (DNS) queries. The patches all randomized the source port choice further. Hmm. Ok. US-CERT released Vulnerability Note VU#800113 to publicly acknowledge the issue alongside the patches, and the vulnerability was later documented as CVE-2008-1447.
Then a dude named Dan Kaminsky appears, publicly asking everybody to install the patches. Install them fast. The most critical DNS vulnerability ever had been found: one that gives you anything you want with DNS on the internet.
The big one. But for the details, the community had to wait until Kaminsky's presentation at Black Hat USA 2008, scheduled for August 6th.
The noise was loud. A lot of people had issues with Kaminsky's chosen method of disclosure, the "network protocols were not designed with security in mind" line was all over the mainstream media for a moment, and there was also a private disclosure by Kaminsky to three fellow researchers, which eventually led to a public leak of the exploit description on July 21st.
In fact, Kaminsky's method was used in the wild against an AT&T server in Texas, USA, before Kaminsky even gave the Black Hat presentation. The google.com DNS entry for local AT&T Internet subscribers was poisoned to point traffic to the attackers' lookalike Google, which on the side hosted some ad-clicking services.
Despite all the excessive press prior to the presentation, I personally found Kaminsky's research awesome when it was published in the beginning of August. Chilling out at home with his laptop, trying to break things, concentrating this time on DNS. He almost accidentally stumbles upon an issue, but he is also capable of analysing it to the point where he understands the underlying root cause and can imagine the potential of the findings. He really had the global DNS administration privately handed over to him. King-in-the-middle position, anyone? Pwning traffic almost at will.
Kaminsky noticed that not only a single DNS resource record can be poisoned by DNS flooding; entire zone authorities can be hijacked using a similar method. He proved that the NS record, AKA the authoritative name server field in the replies, can also be poisoned, giving him complete DNS control of the affected zone.
Would it be quickly noticed by the network administrators due to lack of service? Not if you do not deny the service but just proxy it instead. The vulnerability allows a true man-in-the-middle position for ALL desired DNS dependent traffic in the affected domain.
The attack is fairly simple. Once you can determine the DNS message ID (TXID) used by a recursive DNS server in its outbound queries, you can attack it by flooding it with DNS replies, flooding until you get the TXID correct before the legitimate source does and get your data cached by the recursive DNS server or the client. You also get to determine the Time To Live (TTL) for the record in the cache. The maximum specified in the RFCs is over 68 years, but the servers in the public networks seem to generally cache entries for days or weeks at most.
DNS is largely defined in RFC 1034 and RFC 1035. RFC 2181 was later published to clarify some details, and "there might be some others as well" (as is the norm with RFCs). The message ID, aka TXID, is defined in RFC 1035 section 4.1.1 as a 16-bit field in the DNS message header. Due to the 16-bit size there is a limited supply (64k) of different message IDs and thus limited possibilities to randomize the TXIDs to provide a level of integrity for the query sessions when using User Datagram Protocol (UDP).
It is a known weakness in the DNS protocol. DNS security has been boosted in the server implementations by having the DNS server preallocate multiple UDP ports for its queries and pick among them randomly, adding an extra randomization layer to "secure" the sessions.
This logically forces the attacker to predict an additional entry in order to successfully forge a response. He needs to get both the TXID _and_ the source port correct to get his response treated as the trusted one.
The patches released for Kaminsky's DNS bug reinforced the source port randomization. The DNS message header obviously could not be redesigned and reimplemented in one summer, so only the additional layers of defense could be updated for the time being. The actual vulnerability Kaminsky detected was that the randomization used by the majority of DNS servers was not random enough to resist an attack for more than a few seconds. An update was needed indeed. Microsoft's DNS update is said to have increased the effective randomization from 64k to 134M combinations.
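A quick back-of-the-envelope check of those numbers, assuming the commonly cited figure of roughly 2,048 usable source ports after the patch; the per-packet success probabilities below are simplified estimates that ignore the race against the legitimate answer and any birthday-style optimizations.

import math

TXIDS = 2 ** 16          # 16-bit transaction ID space
PORTS_BEFORE = 1         # effectively fixed or predictable source port before the patches
PORTS_AFTER = 2048       # assumed number of randomized source ports after the update

space_before = TXIDS * PORTS_BEFORE
space_after = TXIDS * PORTS_AFTER
print("guess space before: %d (1 in %d per spoofed packet)" % (space_before, space_before))
print("guess space after:  %d (1 in %d per spoofed packet)" % (space_after, space_after))

# Expected number of forged responses needed for a ~50% chance of one match.
for label, space in (("before", space_before), ("after", space_after)):
    attempts = math.log(0.5) / math.log(1 - 1.0 / space)
    print("~%d forged responses for a 50%% chance (%s the patches)" % (attempts, label))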
There were some interesting discussions about moving the public DNS servers to use Transmission Control Protocol (TCP) for communications instead of UDP, but apparently this would cause too much load on the current global network infrastructure. There are simply so many DNS queries performed constantly that adding the TCP handshake traffic to each query would overwhelm the infra, according to a lot of network administrators. As far as I can see, in order to implement DNSSEC successfully we need to move the DNS traffic to TCP anyway.
Using TCP would in any case add a third random variable to the queries: the sessions would additionally be protected by the 32-bit TCP sequence numbers. The DNS protocol specifications allow both TCP and UDP queries and the majority of DNS server applications support both, so using TCP could be considered a mitigating factor to further secure the local recursive DNS server caches.
The various open DNS services available in the public networks scored well in defense, by the way. I believe Kaminsky himself confirmed that OpenDNS already had the mitigation implemented and was never vulnerable to the attack. PowerDNS and MaraDNS have also stated that they both had the query ID _and_ the source port heavily randomized already before Kaminsky's findings and were never vulnerable.
Interestingly, among the traditional DNS server applications only one seemed to be non-vulnerable: the DJBDNS server authored by mister Daniel J. Bernstein always had the full mitigation in place, even before Kaminsky discovered the vulnerability. Respect. Pretty much all the other widely used DNS servers were vulnerable.
The summer was not over yet. In fact the August holiday season was only beginning. There were many sunny days left still in 2008.
I didn't know DEFCON originally meant "defense readiness condition" of the US armed forces. For me it has always been the biggest hacker convention of them all held annually in Las Vegas, USA. The Chaos Communication Congress held in Berlin annually seems to be the oldest one by the way. Chaos Computer Club has been organizing the Congress continuously since 1984.
DEF CON 2008 was held, as usual, right after Black Hat in August. That year the conference finished off with an unscheduled presentation given by Alex Pilosov and Anton Kapela. The talk was titled Stealing The Internet - An Internet-Scale Man In The Middle Attack. The community was just getting to grips with the near complete pwnage capabilities of Kaminsky's DNS cache poisoning attack. We certainly did not expect another complete network pwnage to surface so soon.
Border Gateway Protocol (BGP) is one of the core routing protocols of the Internet. You got this blog post delivered to your browser largely thanks to BGP being used as the de facto standard globally. BGP enables the core routers to decide which routes to use (among some other things) when delivering Internet traffic from one system to another. The BGP routers advertise which IP address prefixes they deliver to and cache similar advertisements from other BGP routers. In BGP talk the networks originating these advertisements are known as Autonomous Systems (AS).
The route advertisements are neither authenticated nor verified in any way. Whoever pwns a BGP speaking device can advertise networks at will. This has been a known issue for a long time. The hope has been placed on the easy detection of possible attacks and on the (frankly impressive) recovery capabilities of BGP itself.
So I could very well advertise myself as owning the route to the Google IP ranges, but obviously routing traffic successfully to the Google servers and back to the clients would be difficult for me in the core networks. The common understanding was that I would only end up black holing the Google addresses by advertising their prefixes.
The attack has been unintentionally proven multiple times in the public networks, but the mitigation has actually been verified various times as well. There was the AS7007 incident in 1997, and there was Pakistan Telecom (a local ISP) accidentally hijacking the traffic to YouTube, making the service unavailable for a few hours on February 24th, 2008.
They intended to block YouTube from their own subscribers due to government orders, but ended up advertising a more specific prefix than the legitimate YouTube announcement, which won the route selection globally and eventually hijacked the traffic to YouTube. The issue obviously was quickly detected and those responsible were notified. Once Pakistan Telecom stopped the false announcements, it took less than 5 minutes for the global BGP routers to recover and return to routing traffic to YouTube correctly. The incident served as a type of Business Continuity Planning (BCP) audit for the protocol, among other things, and it indeed verified the awesome auto recovery capabilities.
More recently, on April 8th, 2010, a Chinese ISP called IDC China Telecommunication hijacked roughly 10% of the internet's routes with a similar "configuration error". Various major telecommunications companies including AT&T, Deutsche Telekom and Telefonica were affected, but the entire issue was over in about 15 minutes.
Back at DEF CON 2008, Pilosov and Kapela upgraded the attack and presented a method to bypass this denial-of-service effect of the BGP hijacking attacks and to gain a nearly stealthy man-in-the-middle (MITM) position for all traffic destined to the affected AS.
They used a legitimate BGP attribute, the AS_PATH. By prepending the AS numbers of a chosen return path into their hijacking announcement, those networks rejected the announcement as a routing loop and kept using the legitimate route, giving the attackers a clean path to forward the intercepted traffic onward to the correct destination. They could thus inject their own device into the path and freely examine, manipulate and store any unencrypted traffic destined to the hijacked AS.
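A toy simulation of that loop-prevention trick, with made-up prefixes and AS numbers: each router here prefers the most specific prefix but drops any announcement whose AS_PATH already contains its own ASN, so the ASes prepended by the attacker keep delivering towards the victim and provide the return path.

# Toy model of BGP route selection for the purpose of this post only.
announcements = [
    {"prefix": "203.0.113.0/24", "as_path": [64500], "origin": "victim"},
    # The attacker (AS64666) announces a more specific /25 and prepends AS64501 and AS64502,
    # the providers it wants to keep as its clean return path towards the victim.
    {"prefix": "203.0.113.0/25", "as_path": [64666, 64501, 64502, 64500], "origin": "attacker"},
]

def best_route(asn):
    # Loop prevention: ignore announcements that already contain our own ASN.
    usable = [a for a in announcements if asn not in a["as_path"]]
    # Longest prefix match: the most specific announcement wins.
    return max(usable, key=lambda a: int(a["prefix"].split("/")[1]))

for asn in (64510, 64501, 64502):
    route = best_route(asn)
    print("AS%d sends traffic for 203.0.113.1 towards the %s announcement" % (asn, route["origin"]))
# AS64510 (the rest of the internet) follows the attacker's /25, while AS64501 and AS64502
# ignore it and still deliver to the victim, completing the man-in-the-middle loop.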
There is one limitation though. Only the traffic _destined_ to the victim AS can be intercepted by the methods presented by Pilosov and Kapela. The traffic sourced from the victim could only be intercepted in some unspecified cases.
As a proof-of-concept (PoC), Pilosov and Kapela successfully hijacked ALL Internet traffic destined to the DEF CON conference itself. Renesys monitored Pilosov's and Kapela's attack on the DEF CON AS number and confirmed that it took only slightly over 80 seconds before the DEF CON traffic was completely hijacked in the monitored environment.
As stated earlier, the issue itself is not new. Papers on the security weaknesses of BGP were published already at the end of the eighties. As far as I am aware, Pilosov and Kapela performed the first ever public demonstration of the attack. Apparently the issue had been disclosed and demonstrated privately to US government officials even earlier.
The summer of 2008 was a beautiful one :) It set some spirit, no doubt, both personally and professionally for me. Lit the fiya. All that. There was much more to it than the above, but let's leave some of that for later posts; I think this one is stretching the length limits already a bit. There was the issue with the OpenSSL Pseudo Random Number Generator (PRNG) in Debian Linux and Ubuntu creating predictable keys (OpenSSH keys included), aka CVE-2008-0166, there was a lot of research published about compromising hypervisors, there was the GSM-cracking-made-cheap making the rounds, there was the buffer overflow in the Citect CitectSCADA ODBC service documented in CVE-2008-2639, and of course soon after the summer the networks got seriously hit by a botnet called Conficker. But let's leave that for some time later.
Welcome to the blog! Expect to find a bit more focused and compact (or not) musings on all things information security in this one. Due to professional engagements I will probably record a whole lotta stuff related to intrusion detection and attacks against corporate environments for now, but ultimately I wish to cover the entire intriguing research output of the global information security community and keep you updated on all things network security.
With the agenda set, I quote Pep Guardiola: fasten your seatbelts, let's have a good ride. Let's go beyond.
Labels:
Alex Pilosov,
Anton Kapela,
BGP,
Black Hat 2008,
CVE-2008-0960,
CVE-2008-1447,
Dan Kaminsky,
Def Con 2008,
DNS,
SNMPv3,
VU#800113,
VU#878044