Monday, April 14, 2014

Heartbleed: A postmortem

I saw a newly posted article about Heartbleed, with no comments yet, on Hacker News as I was about to go to sleep on Monday night.  I skimmed it quickly, figured it was sensationalist crap, and went to sleep.  When I got to work on Tuesday morning, it was clear that Heartbleed was in fact serious.

By Tuesday morning Red Hat had already released an update for openssl.  We don't really take proper advantage of RHN/Satellite, so I used Chef's knife tool to update the vulnerable servers.  Since only RHEL 6.5 was affected:
 knife ssh "platform_version:6.5" "yum update openssl -y && /etc/init.d/httpd restart"  
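
To confirm the patched package actually landed everywhere, a quick follow-up check can be run the same way.  This is just a sketch; it assumes the packaged changelog references the CVE, which Red Hat's security errata normally do:
 knife ssh "platform_version:6.5" "rpm -q --changelog openssl | grep CVE-2014-0160"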

I used one of the online scanners to verify that our external network was secure.  Only a lone McAfee server was still vulnerable, and it was patched as soon as McAfee released an update.
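
For spot-checking individual hosts by hand, openssl's s_client can at least show whether a server advertises the heartbeat extension at all.  A sketch, with mail.example.com standing in for any host; this only shows the extension is enabled, not that the server is actually exploitable:
 echo | openssl s_client -connect mail.example.com:443 -tlsextdebug 2>&1 | grep -i heartbeat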

We use a wildcard SSL certificate for most of our public-facing sites.  It is due to expire later this year, so I decided to purchase a new certificate rather than re-key the old one.  Our previous certificate was issued by Comodo, but their web site is infuriating, so I purchased the new certificate from DigiCert.  Compared to Comodo, this was a much more pleasant experience.

Installing the new certificate on the Chef-controlled Linux servers was a simple cookbook modification.  However, many devices and Windows servers controlled by other groups needed updating as well.  To find the servers still using the old certificate, I pulled a list of all subdomains out of the DNS zone file (obviously this is highly specific to our zone file and wouldn't work elsewhere as written):
 cat example.com.zone | grep -v static- | grep -v ";" \
 | grep -e '^$' -v | grep -v SRV | grep IN | grep -v NS \
 | tail -n+5 | awk '{print $1 ".example.com"}' >> examplesites

I then looped through the list of sites, using curl to connect to each one on port 443 and looking for a certificate whose expiration date matched the old certificate's:
 #!/bin/bash
 while read p; do
   headers=`curl -Ivs --connect-timeout 2 https://$p 2>&1`
   if [[ $headers == *2014-10-21* ]]
   then
     echo $p >> badsites
   fi
 done < "examplesites"
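
The date string curl prints depends on which TLS library it was built against, so a variation that reads the expiration straight off the certificate with openssl may be more portable.  A sketch along the same lines; -servername sends SNI in case any of the sites are name-based virtual hosts:
 #!/bin/bash
 # print each site's certificate expiration date directly
 while read p; do
   expiry=$(echo | openssl s_client -connect "$p:443" -servername "$p" 2>/dev/null | openssl x509 -noout -enddate)
   echo "$p $expiry"
 done < "examplesites"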

Interestingly, we later ran into an issue where an application running on a RHEL 5.7 system was unable to connect to a system using the new DigiCert-issued certificate.  The application was using curl, and that system's /etc/pki/tls/certs/ca-bundle.crt file did not trust DigiCert.  After updating openssl to get an updated ca-bundle.crt, the issue went away.  I ended up just updating openssl on every system we have with:
 knife ssh "name:*" "yum update openssl -y && /etc/init.d/httpd restart"  
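
To confirm that a given box trusts the new chain after the update, openssl verify against the system bundle works as a spot check.  A sketch; server.pem and intermediate.pem are hypothetical files holding the site certificate and its DigiCert intermediate:
 # succeeds only if ca-bundle.crt contains a root that anchors the DigiCert chain
 openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt -untrusted intermediate.pem server.pem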

Next came the task of finding all the other vulnerable systems on the internal network.  The author of masscan added Heartbleed support, and I was able to scan our entire /8 network in just a few minutes.  I also wanted the memory contents of each system, since I didn't know what most of them were and wanted to see what Heartbleed would reveal.  I took the output of masscan and looped it through heartbleed.c:
 ./masscan 10.0.0.0/8 -p443 -S 10.200.200.200 --rate 100000 --heartbleed > bleeders  
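 # field 6 of each HEARTBLEED line is the IP (with a trailing colon); dump each host's memory to <ip>.bleed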
 grep HEARTBLEED bleeders | awk '{print $6}' | sed 's/://' | xargs -I {} ./heartbleed -p 443 -f {}.bleed -t 1 -s {}  

It was then easy to use strings to examine each memory dump.  I found a ton of interesting things such as PHP session cookies and even a Cisco VCS server that revealed its admin username and password.
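
A first pass over the dumps can be as simple as grepping the strings output for likely secrets.  A sketch; the patterns are just examples of the sort of things that turned up:
 strings *.bleed | grep -iE 'phpsessid|password|authorization'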
