You weren’t hacked; you’re just stupid.

By Blake R. Swopes


In the two and a half years that I’ve been running Linux boxen with dedicated internet connections, there have been several occasions when I was convinced I had been hacked. I would notice some strange behaviour that I couldn’t otherwise explain… Only to have it turn out to be a misconfiguration.


The security community is paranoid by nature; generally, this is a good thing. But I’m here to temper that paranoia a bit. The consequences of deciding that you’ve been hacked can be very real, depending on your situation. Not everyone can cleanly wipe their drives and start over without a lot of work, especially in the business world. The man-hours lost by the IT team alone are significant, but the losses from resources being unavailable to other employees can be huge.


I’d like to share a couple of examples, in the hope that others will remember to be patient when dealing with unexpected system behaviour.


Empty Logs


Many moons ago, when I was still a neophyte Linux admin/user, I was picking up my security knowledge from a hax0r friend of mine at work. I knew that a default install was far from secure, and that I would need to shut down a number of services my system was offering. I didn’t quite know how to do this, though.


Luckily, someone pointed me to the nifty /etc/services file, which contained a huge number of entries that I didn’t want anywhere near my system… I set to work commenting them out.


A few days later, I noticed that my logs were empty! Dear God, it was too late! I had been hacked already! So, I quickly reinstalled my OS… Only this time, I knew about the services file, so I made sure to clean it up before I put my system online… But it happened again!


Now, many of you will recognize the source of my problem(s) already. I had commented out the syslog entry in /etc/services, so syslogd was unable to write to its log files (don’t ask me, that’s just how syslog works)… The logs were being output to the console instead, along with some warnings about what the deal was, but I didn’t see that, because the system was just a 486 I was using as a gateway, sitting in another room with no monitor attached. I had been flipping out and reformatting for nothing.
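For the curious, the lookup that broke is visible from any language’s socket library. Here’s a quick sketch in Python (assuming a stock /etc/services that still maps the syslog service to UDP port 514):

```python
import socket

# Classic syslogd resolved its port with getservbyname(3), which
# reads /etc/services. Comment out the "syslog 514/udp" line there
# and this lookup raises an error instead of returning a port --
# which is roughly what happened to my logging.
port = socket.getservbyname("syslog", "udp")
print(port)  # 514 on a stock /etc/services
```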


This experience taught me the importance of not jumping to the conclusion that I had been hacked.


Filesystem Usage


Now a sysadmin professionally, I recently had a system I’d been given administrative control of max out its /var filesystem, much to the dismay of qmail and lprng, as well as a few users. df showed that the partition was at 100% usage, but something strange happened when I went looking for the printer log file I was sure had filled the system (it happens so often that I even have a cron job to handle it): I found only normal-sized logs, and du told me I should still have 500 megs available on the partition!


Oh my god, I thought, someone must have gotten into the system and installed an LKM to hide certain files on /var from me (a sniffer log, I was sure). I’ve been hacked! Friends I shared this problem with agreed that it sounded highly suspect, and I think they were a bit surprised at how slow I was to respond with a full reinstall.


I got the system up and running again until I could bring it down after hours for a more thorough look. When that time came, I brought it down and fsck’d the disk, hoping for bad blocks or some other explanation, but I couldn’t find one. I contemplated dropping the disk into another Linux system to examine it, but for reasons I won’t go into, that wasn’t practical (plus, I’m lazy).


So, I decided that if root couldn’t see it, it shouldn’t be there; I put the system in single-user mode, backed up /var, and mke2fs -j’d it… ext3 and 30% usage; hacker or filesystem error, whatever the problem was, it was solved. Or so I thought…


The next day, usage was growing again, up to 40%… That pretty well finished off any hopes I had that the problem was caused by just some filesystem hiccup. So I started hunting for an explanation… That’s when I found a number of threads on du and df inconsistencies, and one of them held my answer: logrotate.


It seemed that bind wasn’t happy with the way logrotate was handling named.log. It still held the old inode open, and it wasn’t about to stop writing to it just because the directory entry was gone… I restarted named, and for my troubles received 10% of my /var filesystem back. There was no hack; just a misconfiguration. A lot of people might have flipped out and wiped their system. Small businesses on tight budgets and tight deadlines can’t afford to lose their servers for a few days.
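The mechanism is easy to reproduce in miniature. This isn’t what logrotate or named actually do, just a sketch of the underlying Unix behaviour: unlinking a file removes its directory entry (so du stops counting it), but the inode and its disk blocks (what df counts) survive as long as any process still holds the file open.

```python
import os
import tempfile

# A writer keeps a log file open while "rotation" unlinks it.
fd, path = tempfile.mkstemp()
os.unlink(path)                      # the directory entry is gone...

os.write(fd, b"x" * 4096)            # ...but the writer carries on
assert not os.path.exists(path)      # du will never see this file
assert os.fstat(fd).st_size == 4096  # yet the inode keeps growing

os.close(fd)                         # only now is the space freed
```

On Linux, `lsof +L1` lists open files whose link count has dropped to zero, which is the quickest way to find the process holding your “missing” disk space; restarting that process, as I did with named, releases it.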


Taking down the company’s central server during the week to rebuild it isn’t the kind of thing you can do on a whim. It’s very costly. Luckily, I remembered my missing log files from my neophyte days, and I reserved judgment on the hacker question.


So what’s the point?


Am I saying that no one ever gets hacked? Or that you shouldn’t be careful? No. Definitely not. I’m just saying to be patient as well. People say “where there’s smoke there’s fire”, but what they forget is that sometimes steam looks a hell of a lot like smoke.