First off: Whoa. This year’s ICE games were a significant departure from last year’s games (in a good way, of course). Last year, Paul and I MC’ed; this year we ran teams. The scoring engine was much slicker as well, and the educational experience for the defenders was immense.
There’s nothing like walking into a network where you must begin incident response with little to no system documentation, patches, or time. The team had no access to patches, a firewall on which we could only examine logs (with default ip permit any any rules), and every service under the sun running on the systems: Windows 2000 and an older version of Fedora. This is exactly what I would come to expect as a third-party consultant taking over a customer’s network during an incident response engagement.
It didn’t help that we had to keep the “business” up and running while we responded, either. It was also typical that Tim, our game host and “CE-Oh no,” showed up immediately to demand that our e-commerce site be up. Did I mention that it was not up, or even configured, when we walked in the door? Yeah, time to learn MySQL and ZenCart on the fly.
In other words, this was an almost perfect real world scenario.

Night 1 – Where to Start?

The first night only delivered us a few participants. We didn’t even have enough folks to fully staff both teams – team 2 had one person. We made a decision early on to focus our efforts on team 1, and eventually we got some more help. With that strategy, a few hours into the game we were able to use team 2 as a “test” environment to see what worked, and apply those lessons to team 1.
The first order of business: change those bad default passwords. Once we completed that, we started to harden systems: turn off unneeded services, change passwords, begin hardening, change passwords, discover an intrusion and mitigate it, change passwords…
Then you begin finding out more about the systems. Did I mention default usernames and passwords on the Asterisk web interface? Yep, they’d re-routed all of the phone traffic. Change the password on the SCADA box? What password on the SCADA box! In fact, no authentication at all! No wonder the re-routed phones kept powering down.
At least we put up some “calling cards” handed out by some gentlemen on the street in front of the web cams.
The guys did a fantastic job dealing with all of my harassing about the status of the systems while I acted as incident commander. They developed some hardening guidelines, created some scripts to disconnect suspicious incoming sessions, and implemented some host-based firewalling – most had to learn Windows IPsec policies and iptables rules in-game.
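To give a flavor of what that host-based firewalling looked like, here is a minimal sketch of a default-deny iptables ruleset. The ports and policy choices are my assumptions for illustration, not the actual rules the team wrote in-game:

```shell
#!/bin/sh
# Sketch: generate a default-deny ruleset and stage it for iptables-restore.
# Ports (80/443 for the e-commerce site, 22 for admin SSH) are assumptions.
cat > /tmp/hardening.rules <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF
# iptables-restore < /tmp/hardening.rules   # requires root; run on the host itself
```

Staging the rules in a file and loading them atomically with iptables-restore beats typing rules one at a time when you are racing an attacker – one typo in a live rule can lock you out mid-game.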
Eventually, the folks at Fortinet came over and gave us the ability to manage our firewall. At that point the game turned a bit in our favor, and we were fairly successful at keeping the attackers out and restoring business services.

Night 2 – Game On!

This night we already had some significant game plans from the previous night. We also had some more help, and alas, some defectors to the dark side. John Strand came by, and we put together a new strategy: take two of the team “leaders” from the night before and make them incident commanders.
Why is this a big deal? The natural reaction for the two new incident commanders was to pull up a keyboard and start remediating. This is the wrong thing to do as an incident commander, and they – and the team – quickly learned that having someone act as middle management, rather than doing hands-on technical work, was a big help. The incident commanders could offer some limited technical advice, but they could spend more time with the “customers.”
Enter social engineering. We had attempts to gain credentials to the system via phone, asking for a password reset and for a new account. Fortunately, the incident commanders responded appropriately, and the attacker (whom they recognized as the voice of the Lt. Col. from the Air Force) could not provide the appropriate information to validate the request. However, we had the tables slightly turned. During the day, one of the 560 students came to me to see if a mole on the attacker’s side would be permitted. It was, and the mole was able to SMS me throughout the event that evening, letting the teams know where attacks were originating from, what was compromised, and that the attackers could hear us on the microphones. I’m still not sure why they didn’t pick out the really big guy SMS-ing all night.
A few times the teams needed to enact their disaster recovery plan and have some of the systems restored to a last known working point. The systems became so hosed by the attackers, or the defenders were no longer able to log in, that the only option was to “restore from tape.”
This night also gave us a wireless access point. That was an easy fix: reconfigure the username and password and set a known strong WPA2 key. The problem was our “remote” worker, whom we appropriately reconfigured to use the new wireless settings. The attackers were able to gain physical access to the remote worker’s machine and compromise it to use as a pivot point. It just proves the point about needing to protect your remote workers…
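For illustration, a locked-down WPA2 setup in hostapd style looks something like the following. This is an assumption about the shape of the config – the game AP was reconfigured through its own interface – and the interface name, SSID, and passphrase are placeholders:

```shell
#!/bin/sh
# Sketch of a WPA2-only AP configuration (hostapd style).
# SSID, interface, and passphrase are placeholders, not the game's values.
cat > /tmp/hostapd.conf <<'EOF'
interface=wlan0
ssid=ICE-defense
hw_mode=g
channel=6
# WPA2-PSK only -- no WEP or WPA1 fallback for legacy clients
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=ReplaceWithALongRandomKey
EOF
# hostapd /tmp/hostapd.conf   # requires root and a wireless interface
```

The important choices are `wpa=2` with CCMP only (no TKIP/WPA1 fallback) and a long random passphrase – a strong key on the AP does nothing if a legacy fallback or a short dictionary word is still accepted.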

Night 3 – The Filthy Rogues!

On the third night, most of the teams had everything coming together pretty well. The typical scramble ensued in the beginning; however, this time the defenders were given 5 minutes to disconnect from the network and begin locking down. We also assigned some different incident commanders, and had one of the star incident commanders from the previous night take on the role of upper management to manage the commanders.
Everything went well. The teams came together quickly, and the new commanders suffered the same problems as the previous ones, which was to be expected. The hardening scripts and firewall rules were easy to re-do from the knowledge gained, and the configuration of the e-commerce site got easier. We also found ourselves without a wireless network this time. Heck, we even got one of the teams (team 2, the former “lab” from night one) to do some serious poking at the webcams and phones, changing the passwords and such on the devices themselves.
From there, the game was pretty much the same.
Or was it?
The team members kept seeing attacks coming from an unusual network address range, but couldn’t determine what it was. Upper management made some suggestions well into the exercise that might otherwise have been overlooked, and the game progressed in the usual fashion.
Enter the rogue AP.
The teams made the assumption that because no wireless hardware was obviously present, it didn’t exist. Except that it did. There was an AP deployed under the conference table, and no one spotted it, either physically or via a wireless assessment. Soon, a call came in from Tim, the “CE-Oh no”: there was a rogue AP, and the game had ended.

Lessons Learned

Incident Response is hard. Don’t expect everything to be working when you show up.
The drive to do hands-on technical work while serving as incident commander can be detrimental. Resisting it is certainly uncomfortable, but it is a skill that can be learned.
Remote workers need to be secured appropriately. This means physical access too.
Don’t underestimate the power of the Rogue AP. Sure, your policies may say no wireless, but you can’t be sure that it isn’t there until you test for it.
Social Engineering can be defeated with the proper staff training.
Windows 2000 without any host-based protection is almost impossible to defend, even behind a firewall. Go for the crunchy-on-the-inside, crunchy-on-the-outside security model: once an attacker is inside and there is no host-based protection, the game is almost assuredly over.
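On the rogue AP point: even a quick scripted sweep comparing visible SSIDs against an approved list would have flagged the device under the table. Here is a sketch of that check; the SSIDs are made up, and the canned scan output stands in for live data (which would come from something like `iw dev wlan0 scan`):

```shell
#!/bin/sh
# Sketch: flag any visible SSID that is not on the approved list.
# Canned sample data; in a real sweep you would capture live scan output:
#   iw dev wlan0 scan | grep 'SSID:' > /tmp/scan.txt
cat > /tmp/scan.txt <<'EOF'
SSID: ICE-defense
SSID: FreePublicWiFi
EOF

APPROVED="ICE-defense"   # the networks policy actually allows

grep 'SSID:' /tmp/scan.txt | sed 's/.*SSID: //' | while read ssid; do
    case " $APPROVED " in
        *" $ssid "*) : ;;                      # known-good network, ignore
        *) echo "possible rogue AP: $ssid" ;;  # not on the list, investigate
    esac
done > /tmp/rogues.txt
cat /tmp/rogues.txt
```

A periodic sweep like this only catches APs that beacon their SSID, so it complements – not replaces – a physical walk-through, but it takes seconds to run and would have ended night three very differently.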
A big thanks to Tim, Dwight, Joe, Justin, Alex, Anthony, and the sponsors (Immunity, CORE, Think Geek, and Fortinet), and all those who came out (both the attackers and defenders). I can’t wait to do this one again!
- Larry “haxorthematrix” Pesce
