We all know it happens. Bitter old IT guy leaves a back door into the system and network in order to have fun with the new guys and show the company how bad things are without him.
I've never personally experienced this. The most I've experienced is somebody who broke and stole stuff right before leaving. I'm sure this happens though.
So, when taking over a network that can't quite be trusted, what steps should be gone through to ensure everything is safe and secure?
-
Delete everything, start again ;)
James Lawrie : +1 - if a server is root-level compromised you have to start again from scratch. If the last admin couldn't be trusted, assume root-level compromise.
Jason Berg : Well... yes, that's the best solution. It's also kind of hard to convince management to redo everything: Active Directory, Exchange, SQL, SharePoint. Even for 50 users this is no small task, much less when it's for 300+ users.
jscott : @danp: Heck yes, OVERTIME PAY AND NO WEEKENDS OFF. :(
danp : aww, sysadmins being miserable, who could have predicted :p
John Gardeniers : aww, sysadmins being sensible, who could have predicted. While your idea has technical merit, it is rarely practical or feasible.
Bayonian : In most cases (90% at least) you can't just come in and wipe out the old system. Business continuity must be ensured, there are time and cost constraints plus other factors, and you've only just been hired.
From danp -
I would say it is a balance of how much concern you have vs the money you are willing to pay.
Very concerned:
If you are very concerned, then you may want to hire an outside security consultant to do a complete scan of everything from both an external and an internal perspective. If the departed admin was particularly smart, you could still be in trouble: they might have left something that will stay dormant for a while. The other option is to simply rebuild everything. This may sound very excessive, but you will learn the environment well and you get a disaster recovery project out of it as well.

Mildly concerned:
If you are only mildly concerned, you might just want to do:
- A port scan from the outside.
- A virus/spyware scan, and a rootkit scan on Linux machines.
- Look over the firewall configuration for anything you don't understand.
- Change all passwords and look for any unknown accounts (make sure they didn't reactivate the account of someone who is no longer with the company so they could use it later, etc.).
- This might also be a good time to look into installing an Intrusion Detection System (IDS).
- Watch the logs more closely than you normally do.
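A minimal sketch of the unknown-account part of that list, assuming Linux hosts with a standard `/etc/passwd` layout (pass a copied passwd file as the argument to audit another host's copy):

```shell
#!/bin/sh
# Flag every account that has a real login shell, so unknown or
# re-enabled accounts stand out. Defaults to the local /etc/passwd.
f="${1:-/etc/passwd}"
awk -F: '$7 !~ /(nologin|false)$/ {print $1 "\t" $7}' "$f"
```

Pair this with the outside port scan (e.g. `nmap -sS -p- your.public.range`, run from a machine outside the network) to cover both halves of the check.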
For the Future:
Going forward, when an admin leaves, give him a nice party and then, when he's drunk, just offer him a ride home -- then dispose of him in the nearest river, marsh, or lake. More seriously, this is one of the good reasons to give admins generous severance pay. You want them to feel as okay about leaving as possible. Even if they shouldn't feel good, who cares? Suck it up and make them happy. Pretend it is your fault and not theirs. The increased cost of unemployment insurance and the severance package don't compare to the damage they could do. This is all about the path of least resistance and creating as little drama as possible.
Jason Berg : Answers that do not include murder would probably be preferred :-)
jscott : +1 for the BOFH suggestion.
Kyle Brandt : @Jason: Updated with a more serious suggestion of having a severance policy.
Jason Berg : +1 for giving me an alternative to murder
GregD : @Kyle: That was supposed to be our little secret...
Bill Weiss : Dead-man switches, Kyle. We put them there in case we go away for a while :) By "we", I mean, uh, they?
Mr Shoubs : In a lot of cases rebuilding may be totally infeasible... do this at your own risk!
Evan Anderson : +1 - It's a practical answer, and I like the discussion based on a risk/cost analysis (because that's what it is). Sysadmin1138's answer is a little more comprehensive where the rubber meets the road, but doesn't necessarily go into the risk/cost analysis and the fact that, much of the time, you have to set some assumptions aside as being "too remote". (That may be the wrong decision, but nobody has infinite time/money.)
From Kyle Brandt -
If you can't redo the server, the next best thing is probably to lock down your firewalls as much as you can. Follow every single possible inbound connection and make sure it is reduced to the absolute minimum.
Change all passwords.
Replace all ssh keys.
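A sketch of the host-side half of the key replacement, assuming Linux with OpenSSH; it needs root, and clients will see a changed host fingerprint afterwards (which is the point):

```shell
#!/bin/sh
# Throw away this host's SSH host keys (assume the old admin copied the
# private halves) and generate fresh ones.
if [ "$(id -u)" -eq 0 ]; then
    rm -f /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub
    ssh-keygen -A    # regenerate all host key types that are now missing
    systemctl restart sshd 2>/dev/null || service ssh restart 2>/dev/null || true
else
    echo "run as root to replace host keys" >&2
fi
```

Remember the other half too: every `authorized_keys` file the old admin could write to needs its entries reviewed, not just the host keys.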
From wolfgangsz -
I suggest you start at the perimeter. Verify your firewall configurations and make sure you do not have unexpected entry points into the network. Make sure the network is physically secure against him re-entering and getting access to any computers.
Verify that you have fully working and restorable backups. Good backups will keep you from losing data if he does do something destructive.
Check any services that are allowed through the perimeter and make sure he has been denied access. Make sure those systems have good working logging mechanisms in place.
From Zoredache -
Unless you're really, really paranoid, my suggestion would simply be to run several TCP/IP scanning tools (TCPView, Wireshark, etc.) to see if anything suspicious is attempting to contact the outside world.
Change the administrator passwords and make sure there are no 'additional' administrator accounts that don't need to be there.
Also, don't forget to change the wireless access passwords and check over your security software settings (AV and firewall in particular).
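For the "anything suspicious" check, a snapshot-and-diff of listening sockets is a cheap starting point on Linux (this assumes `ss` from iproute2, with `netstat` as a fallback):

```shell
#!/bin/sh
# Record listening TCP/UDP sockets with their owning processes. Keep a
# snapshot from a known-good moment and diff later snapshots against it.
{ ss -tulpn 2>/dev/null || netstat -tulpn 2>/dev/null; } | sort > "sockets-$(date +%F).txt"
cat "sockets-$(date +%F).txt"
# later: diff sockets-<known-good>.txt sockets-<today>.txt
```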
PP : +1 for change the administrator passwords
Khai : OK, but beware of passively listening for weird stuff, because you could be blinking when that `TRUNCATE TABLE customer` is run :P
From emtunc -
First things first - get a backup of everything on off-site storage (e.g. tape, or HDD that you disconnect and put in storage). That way, if something malicious takes place, you may be able to recover a little.
Next, comb through your firewall rules. Any suspicious open ports should be closed. If there is a back door then preventing access to it would be a good thing.
User accounts - look for your disgruntled user and ensure their access is removed as soon as possible. SSH keys, /etc/passwd entries, LDAP entries, even .htaccess files - all should be scanned.
On your important servers look for applications and active listening ports. Ensure the running processes attached to them appear sensible.
Ultimately a determined disgruntled employee can do anything - after all, they have knowledge of all the internal systems. One hopes that they have the integrity not to take negative action.
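The SSH-key part of that scan can be sketched like this, assuming conventional home directory layout (the function name is made up for this example):

```shell
#!/bin/sh
# list_authorized_keys DIR: print every authorized_keys entry found in
# the home directories under DIR, prefixed with the owning directory,
# so unknown keys can be spotted and removed.
list_authorized_keys() {
    for d in "$1"/*; do
        f="$d/.ssh/authorized_keys"
        [ -f "$f" ] && sed "s|^|$d: |" "$f"
    done
    return 0
}

# Typical use on a live host (root's keys live outside /home):
list_authorized_keys /home
[ -f /root/.ssh/authorized_keys ] && sed 's|^|/root: |' /root/.ssh/authorized_keys || true
```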
Joe H. : Backups may also be important if something does happen and you decide to go the prosecution route, so you may want to find out what the rules for evidence handling are and make sure you follow 'em, just in case.
Shannon Nelson : But don't forget that what you have just backed up may include rooted apps/config/data etc.
From PP -
- Audit and disable the employee's account(s).
- Audit VPN access. Scrutinize VPN access logs closely.
- Change all "known" shared account passwords.
- Audit Server admin/user lists. Prune as required.
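The first and last items might look like this on a Linux host ("jdoe" is a placeholder username; `usermod` needs root, so this sketch skips that part when unprivileged):

```shell
#!/bin/sh
# Disable the departed admin's account and see who can still escalate.
if id jdoe >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    usermod -L jdoe                      # lock the password
    usermod -s /usr/sbin/nologin jdoe    # remove the login shell
fi
getent group sudo wheel adm 2>/dev/null  # prune anyone who shouldn't be here
true
```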
Short of burning it to the ground and rebuilding, any access he gets after the above is likely breaking the law in many/all jurisdictions. Inform management of the possibility of his unauthorized access.
From jscott -
A well-run infrastructure is going to have the tools, monitoring, and controls in place to largely prevent this. These include:
- Proper network segmentation and firewalling
- Host based IDS
- Network based IDS
- Central logging
- Access control
- Change control
If these tools are in place properly, you will have an audit trail. Otherwise, you're going to have to perform a full penetration test.
First step would be to audit all access and change all passwords. Focus on external access and potential entry points-- this is where your time is best spent. If the external footprint is not justified, eliminate it or shrink it. This will allow you time to focus on more of the details internally. Be aware of all outbound traffic as well, as programmatic solutions could be transferring restricted data externally.
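For the outbound side, a quick look at established connections and their owning processes (Linux; `ss` assumed, `netstat` as a fallback) can surface unexpected destinations:

```shell
#!/bin/sh
# Established TCP connections and the processes that own them. Anything
# talking to an address you can't explain deserves a closer look.
ss -tnp state established 2>/dev/null || netstat -tnp 2>/dev/null || true
```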
Ultimately, being a systems and network administrator grants full access to most, if not all, things. With this comes a high degree of responsibility. Hiring at this level of responsibility should not be taken lightly, and steps should be taken to minimize risk from the start. A professional, even one leaving on bad terms, would not take actions that are unprofessional or illegal.
There are many detailed posts on Server Fault that cover proper system auditing for security as well as what to do in case of someone's termination. This situation is not unique from those.
From Warner -
It's really, really, really hard. It requires a very complete audit. If you're very sure the old person left something behind that'll go boom, or require their re-hire because they're the only one who can put a fire out, then it's time to assume you've been rooted by a hostile party. Treat it like a group of hackers came in and stole stuff, and you have to clean up after their mess. Because that's what it is.
- Audit every account on every system to ensure it is associated with a specific entity.
- Accounts that seem associated with systems but that no one can account for are to be mistrusted.
- Accounts that aren't associated with anything need to be purged (this needs to be done anyway, but it is especially important in this case)
- Change any and all passwords they might conceivably have come into contact with.
- This can be a real problem for utility accounts as those passwords tend to get hard-coded into things.
- If they were a helpdesk type responding to end-user calls, assume they have the password of anyone they worked with.
- If they had Enterprise Admin or Domain Admin to Active Directory, assume they grabbed a copy of the password hashes before they left.
- If they had root access to any *nix boxes assume they walked off with the password hashes. Also reset any public-key SSH keys that may be in use for root-login SSH (don't do that at all, but if you have it, clear 'em).
- If they had access to any telecom gear, change any router/switch/gateway/PBX passwords. This can be a really royal pain.
- Fully audit your perimeter security arrangements.
- Ensure all firewall holes trace to known authorized devices and ports
- Ensure all remote access methods (VPN, SSH, BlackBerry, ActiveSync, Citrix, SMTP, IMAP, WebMail, whatever) have no extra authentication tacked on, and fully vet them for unauthorized access methods.
- Ensure remote WAN links trace to fully employed people, and verify it. Especially wireless connections. You don't want them walking off with a company paid cell-modem or smart-phone. Contact all such users to ensure they have the right device.
- Fully audit internal privileged-access arrangements. These are things like SSH/VNC/RDP access to servers that general users don't have, or any access to sensitive systems like payroll.
- Start hunting for logic bombs.
- Check all automation (task schedulers, cron jobs, or anything that runs on a schedule) for signs of evil. By "All" I mean all. Check every single crontab. Check every single Windows Task Scheduler. Even workstations.
- Validate key system binaries on every server to ensure they are what they should be. This is tricky.
- Start hunting for rootkits. By definition they're hard to find, but there are scanners for this.
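A sketch of the cron and binary checks on one Linux host (distribution tooling varies: `debsums` is Debian-family, `rpm -Va` is RPM-family, and both are assumptions about what's installed):

```shell
#!/bin/sh
# Dump every user crontab plus the system cron entries for review; diff
# the output against a known-good copy if one exists.
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | sed "s|^|$u: |"
done
cat /etc/crontab /etc/cron.d/* 2>/dev/null

# Verify package-managed binaries against the package database.
command -v debsums >/dev/null 2>&1 && debsums -cs
command -v rpm >/dev/null 2>&1 && rpm -Va
true
```

Note this only covers package-managed files; anything installed outside the package manager still needs the checksum-baseline treatment.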
Not easy in the least. Justifying the expense of all of that can be really hard without definite proof that the now-ex admin was in fact evil. The entirety of the above may not even be doable with company assets, which will require hiring security consultants to do some of this work.
If actual evil is detected, especially if the evil is in some kind of software, trained security professionals are the best to determine the breadth of the problem. This is also the point when a criminal case can start being built, and you really want people who are trained in handling evidence to be doing this analysis.
But, really, how far do you have to go? For routine admin departures where expectation of evil is very slight, the full circus is probably not required; changing admin-level passwords and re-keying any external-facing SSH hosts is probably sufficient. Again, corporate security posture determines this.
For admins who were terminated for cause, or whose evil cropped up after their otherwise normal departure, the circus becomes more necessary. The worst-case scenario is a paranoid BOFH-type who has been notified that their position will be made redundant in 2 weeks, as that gives them plenty of time to get ready; in circumstances like these, Kyle's idea of a generous severance package can mitigate all kinds of problems. Even paranoids can forgive a lot of sins after a check containing four months' pay arrives. That check will probably cost less than the security consultants needed to ferret out their evil.
But ultimately, how deep you have to dig is determined by:
- The expectation that evil was done
- The expected skill level of any evil being done
- The systems potentially exposed to the evil
- The potential damage of any evil
- Regulatory requirements for reporting perpetrated evil vs. preemptively found evil. Generally you have to report the former, but not the latter.
But ultimately, it comes down to the cost of determining if evil was done versus the potential cost of any evil actually being done.
Evan Anderson : +1 - The state of the art with respect to auditing system binaries is pretty bad today. Computer forensics tools can help you verify signatures on binaries, but with the proliferation of different binary versions (particularly on Windows, what with all the updates happening every month) it's pretty hard to come up with a convincing scenario where you could approach 100% binary verification. (I'd +10 you if I could, because you've summed up the entire problem pretty well. It's a hard problem, especially if there wasn't compartmentalization and separation of job duties.)
sysadmin1138 : @evan The binary problem is REALLY bad. There are just so many library files, in so many locations, it's hard to keep up.
Adam Robinson : +1. EXCELLENT answer.
Kara Marfia : +1 re: changing service account passwords. This should be thoroughly documented anyway, so this process is doubly important if you're to be expected to do your job.
Joe H. : And pull a full backup of everything *now* and take it out of rotation -- you may need it in case a logic bomb does go off, either for restoration or as evidence.
Evan Anderson : @Joe H.: Don't forget verifying the contents of said backup independently from the production infrastructure. The backup software could be trojanized. (One of my customers had a third party with an independent installation of their LOB application who was contracted to restore backups, load them into the app, and verify that financial statements generated from the backup matched those generated by the production system. Pretty wild...)
Mox : Great answer. Also, don't forget to remove the departed employee as an authorized point of contact for service providers and vendors: domain registrars, internet service providers, telecommunications companies. Ensure all these external parties get the word that the employee is no longer authorized to make any changes or discuss the company's accounts.
gWaldo : Additionally, an audit of scheduled tasks on all servers would be prudent. This is best scripted. Nothing like the potential of a time bomb to keep you awake at night...
MightyE : "The entirety of the above may not even be doable with company assets, which will require hiring security consultants to do some of this work." - of course, it may be *this* exposure which leads to compromise. This level of audit requires extremely low-level system access - and by individuals who *know* how to hide things.
neoneye : Also remember to change the locks on all doors so the person can't get physical access while and after you clean up the mess.
Greeblesnort : Another +1 for adding the (what some would consider obvious) idea that making sure your outgoing admin isn't psychotically pissed off at you may allow you to save the business potentially millions of dollars in consultants.
Jess : This is another reason why having Linux boxes can be dangerous. It's extremely easy for a knowledgeable Linux admin to build a back door into the kernel or another core part of the system, and it's far more difficult to find than on Windows.
From sysadmin1138 -
-
In essence, make the knowledge of the previous IT people worthless.
Change everything you can change without impacting the IT infrastructure.
Changing or diversifying suppliers is another good practice.
John Gardeniers : I can't understand the relevance of the suppliers to the question.
lrosa : Because the supplier could be a friend of, or connected to, the previous IT team. If you keep the same supplier and change everything else, you risk informing the old IT team and making everything worthless. I wrote this based on previous experience.
Piskvor : Well, unless you've handed your private keys to the supplier, I'm not sure what the previous IT team stands to gain by this: "So as you say, Bob, they generated new keys, new passwords, and closed all access from outside? Hmm. [opens a Mac laptop, runs nmap; types for two seconds] OK, I'm in." (CUT!)
lrosa : It's not only a matter of perimeter access, but a matter of internal IT infrastructure. Say you want to carry out an attack based on social engineering: knowing the internal structure is very handy (Mitnick rules).
From lrosa -
Generally it's quite hard...
But if it's a website, have a look at the code just behind the login button.
We found an `if username='admin'` type thing once...
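A crude but effective first pass is to grep the codebase for exactly that kind of thing. The path and patterns here are only placeholders, not an exhaustive list:

```shell
#!/bin/sh
# Hunt for hard-coded credential checks and backdoor-ish strings.
dir="${1:-.}"
grep -rniE "username *==? *['\"]admin['\"]|backdoor|magic_pass" "$dir" \
    || echo "no obvious matches (which proves nothing)"
```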
From Mark Redman -
Don't forget the likes of TeamViewer, LogMeIn, etc. I know this was already mentioned, but a software audit of every server/workstation (there are many such apps out there) wouldn't hurt, including subnet scans with nmap's NSE scripts.
From Mars -
Basically, I'd say that if you have a competent BOFH, you're doomed... there are plenty of ways of installing bombs that would go unnoticed. And if your company is in the habit of forcibly ejecting those who are fired, be sure that the bomb will be planted well before the layoff!
The best way is to minimize the risk of having an angry admin. Avoid the "layoff for cost cutting" (if he is a competent and vicious BOFH, the losses you may incur will probably be way bigger than what you'll save from the layoff). If he made some unacceptable mistake, it's better to have him fix it (unpaid) as an alternative to layoff; he'll be more careful next time not to repeat the mistake (which is an increase in his value). But be sure to hit the right target (it's common for incompetent people with good charisma to shift their own faults onto the competent but less social ones).
And if you're facing a true BOFH in the worst sense (and that behaviour is the reason for the layoff), you'd better be prepared to reinstall from scratch every system he has been in contact with (which will probably mean every single computer).
Don't forget that a single bit change may make the whole system go haywire (a setuid bit, a Jump if Carry turned into a Jump if No Carry, ...) and that even the compilation tools may have been compromised.
From Denis -
It strikes me that the problem exists even before the admin leaves. It's just that one notices the problem more at that time.
-> One needs a process to audit every change, and part of the process is that changes are only applied through it.
Mr. Shiny and New : I'm curious about how you enforce this sort of process?
From Stephan Wehner -
Check logs on your servers (and computers they directly work on). Look not only for their account, but also accounts that are not known administrators. Look for holes in your logs. If an event log was cleared on a server recently, it is suspect.
Check the modified date on files on your web servers. Run a quick script to list all the recently changed files and review them.
Check the last updated date on all of your group policy and user objects in AD.
Verify all of your backups are working and that the existing backups still exist.
Check servers running Volume Shadow Copy services for missing previous history.
I already see lots of good things listed and just wanted to add these other things you can quickly check. It would be worth it to do a full review of everything. But start with the places with the most recent changes. Some of these things can be quickly checked and can raise some early red flags to help you out.
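The recently-changed-files check above is easy to script; this sketch assumes GNU find on Linux and uses /var/www only as a placeholder web root:

```shell
#!/bin/sh
# Files modified in the last 14 days under a tree, newest first, so
# recent tampering floats to the top.
root="${1:-/var/www}"
find "$root" -type f -mtime -14 -printf '%T@ %p\n' 2>/dev/null \
    | sort -rn | head -50
```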
From Kevin Marquette -
Burn it.... burn it all.
It is the only way to be sure.
Then, burn all of your external interests, domain registrars, credit card payment providers the lot.
On second thought, perhaps it is easier to ask any Bikie mates to convince the individual that it is healthier for them not to bother you.
Piskvor : Eh, great. So if an admin is fired, just wipe out the entire company altogether? Okay, let me explain that to the shareholders.
Hubert Kario : The *only* way to be sure is to nuke it from orbit.
From David -
Presumably, a competent admin somewhere along the way made what is called a BACKUP of the base system configuration. It would also be safe to assume there are backups done with some reasonable level of frequency allowing for a known safe backup to be restored from.
Given that some things do change, it is a good idea to run your backup virtualized, if possible, until you can ensure the primary installation is not compromised.
Assuming the worst becomes evident, you merge what you are able to, and input by hand the remainder.
I'm shocked no one has mentioned using a safe backup, prior to myself. Does that mean I should submit my resume to your HR departments?
-
Make sure to tell everyone in the company once they have left. This will eliminate the social engineering attack vector. If the company is big, then make sure the people who need to know, know.
If the admin was also responsible for code written (corporate website etc) then you will need to do a code audit as well.
From Jason -
I don't understand why a pen test would be useful. Assuming the former employee in question did his job, it should reveal nothing (in a real world environment it probably will anyway, but ignoring that). The former admin would (presumably) have passwords that the pen test team wouldn't have, therefore the pen test is not going to model the admin's attack. If the pen testers are given passwords, the exercise is futile because one cannot determine which ones are compromised. What is needed is a password change and key rotation, as described by others.
From John -
Good luck if he really knows anything and set anything up in advance. Even a dimwit can call/email/fax the telco with disconnects or even ask them to run full test patterns on the circuits during the day.
Seriously, showing a little love and a few grand on departure really lessens the risk.
Oh yeah, in case they call to "get a password or something", remind them of your 1099 rate, with a 1-hour minimum and 100 in travel expenses per call, regardless of whether you have to go anywhere...
Hey, that's the same as my luggage! 1,2,3,4!
From PointedHead Kid -
There's a big one that everyone's left out.
Remember that there aren't just systems.
- Do vendors know that person isn't on staff, and shouldn't be allowed access (colo, telco)
- Are there any external hosted services that may have separate passwords (exchange, crm)
- Could they have blackmail material on anyone? (Alright, this is starting to reach a bit...)
From LapTop006 -
A clever BOFH could do any of the following:
A periodic program that initiates a netcat outbound connection on a well-known port (e.g. port 80) to pick up commands. If well done, the back-and-forth traffic would have the appearance of legitimate traffic for that port: on port 80 it would have HTTP headers, and the payload would be chunks embedded in images.
Aperiodic command that looks in specific places for files to execute. Places can be on users computers, network computers, extra tables in databases, temporary spool file directories.
Programs that check whether one or more of the other backdoors is still in place. If one is not, a variant of it is installed and the details are emailed to the BOFH.
Since much in the way of backups is now done to disk, modify the backups to contain at least some of your rootkits.
Ways to protect yourself from this sort of thing:
When a BOFH-class employee leaves, install a new box in the DMZ that gets a copy of all traffic passing the firewall. Look for anomalies in this traffic. The latter is non-trivial, especially if the BOFH is good at mimicking normal traffic patterns.
Redo your servers so that critical binaries are stored on read-only media. That is, if you want to modify /bin/ps, you have to go to the machine, physically move a switch from RO to RW, reboot single user, remount that partition rw, install your new copy of ps, sync, reboot, toggle switch. A system done this way has at least some trusted programs and a trusted kernel for doing further work.
Of course, if you are using Windows, you're hosed.
- Compartmentalize your infrastructure. This is not reasonable for small to medium-sized firms.
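Short of read-only media, a poor man's version of the trusted-binaries idea is a checksum baseline taken while the system is still trusted and stored offline (the paths here are illustrative):

```shell
#!/bin/sh
# While trusted: record checksums of critical binaries, then move the
# baseline somewhere the BOFH can't rewrite it.
sha256sum /bin/* /sbin/* 2>/dev/null > baseline.sha256
# Later, running tools from known-good media: report any changed binary.
sha256sum --quiet -c baseline.sha256
```

The catch, as noted in the comments above, is that the verifying tools themselves must come from trusted media, or the comparison proves nothing.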
Ways to prevent this sort of thing.
Vet applicants carefully.
Find out if these people are disgruntled and fix the personnel problems ahead of time.
When you dismiss an admin with these sorts of powers sweeten the pie:
a. His salary, or a fraction of his salary, continues for a period of time, or until there is a major change in system behaviour that is unexplained by the IT staff. This could be on an exponential decay, e.g. he gets full pay for 6 months, 80% of that for the next 6 months, and 80% of that for the 6 months after.
b. Part of his pay is in the form of stock options that don't take effect for one to five years after he leaves. These options are not removed when he leaves. He has an incentive to make sure that the company will be running well in 5 years.
From sherwood botsford -