Saturday, January 29, 2011

What is the failure rate of servers being moved physically from one location to another?

We are moving 55 servers, mostly Dell PowerEdge 2850s, 30 miles over a mountain road.

  • Not enough data. How are they being packed? How bumpy is the road?

    If you're using reasonable caution and the servers are reasonably new, you probably won't see any problems, but you can always lose a hard drive. I'd make sure my backups were up to date and the servers were well padded and secured.

    @lombardm

    I'd be worried too. Almost all of those things can shorten the life of your HDDs. Definitely make sure of your backups. Still, in all likelihood, this will only (again) shorten the life of those drives. Most of them are going to weather it fine.

    lombardm : The servers will be moved on air ride trucks, but the road is bumpy. These servers are not new, and have been in a sketchy environment. There are regular power events and no UPS. The room's AC is not balanced very well either. I am a consultant project manager and am hoping that a 10% loss rate is reasonable.
    Bart Silverstrim : If budget and time allow, you could try getting some new drives and add them to the array to rebuild them, then you'd have newer drives in the array to guard against unexpected failure. It's generally a bad idea to have multiple drives in an array of the same age anyway, as they may all fail near the same time...but that's just another idea.
    Farseeker : Hmm, I doubt even the worst corrugated roads in outback Australia are any match for the delivery guy I saw literally chucking Dell branded boxes from the back of his truck onto the loading dock, or the day I saw a courier do an underarm throw over a fence to a front door. If it can survive that, it can survive anything (with the correct packaging, of course)!
  • How old are the servers? We had some old (5 year) video storage servers that had about a 10% hard drive failure rate when we had to just shut them down for a few hours for power maintenance. We figured the 24x7 fatigue on the hard drives finally caught up when they were spun down and then fired back up. Something we definitely didn't expect from performing a soft shutdown/startup.

    Michael Kohne : Anecdotally, this doesn't seem uncommon - there are lots of stories of systems that have been running fine for long periods of time losing drives on a routine power cycle.
    einstiien : yeah, definitely something I will add into the budget for a power distribution upgrade in the future.
    From einstiien
  • In all likelihood if you're not really beating the servers up during the move you probably won't see any failures, but to be safe I would say expect a few hard drive failures, so have spare drives available to swap in for RAID rebuilds (your drives are in RAID arrays, right?) and make sure your backups are up to date (you are running and testing backups, right?).

    Also like Satanicpuppy said, make sure the servers are packed well for transport - If you're just chucking them in the back of a pick-up truck and doing 80 over potholes and gravel roads your failure rate will obviously be higher, to say nothing of the servers that might bounce out of the bed along the way :)

    From voretaq7
  • You don't mention how long the drives have been in use, how old they are, etc. The longer they've been running 24/7 the greater the chance they won't spin back up.

    The transport could be rough on them if you're talking about a long haul over potholed roads.

    If you're very worried...

    Have backups to tape. Backups that can restore from bare metal.

    Buy some spare drives. You will most likely have a few die. If not now, definitely down the road :-)

    Pack everything very well, but label and remove each and every drive, pack them separately, and transport them separately from the servers/racks. Hand-transport them in your car in foam-lined cartons (they'll hopefully take less punishment there than in the back of a transport truck), then plug them all back into the proper server and drive slot. Some would probably argue this is overkill, however.

    Travel with the backups and servers separately. An accident or incident shouldn't kill both your data systems and your backups.

    mh : +1, like you read my mind!
    Oskar Duveborn : +1 Especially like the part about separating the data from the server transport
    Michael Kohne : Eek. While separate packing of the HDs does make some sense, balance that against the likelihood of something going wrong during re-assembly. With all those servers I'd be more worried about things getting put back together wrong than with losing a drive due to it being shipped in the chassis.
    Bart Silverstrim : @Michael: it's a risk, yes. That's why I said it could be overkill. But at the same time, the drives could fail due to being loosened from the interfaces during transport, or even temp and pressure changes during transport on a mountain road. Personally, I'd say it's a wash.
    Bart Silverstrim : @Michael: I'd also think that transporting within the servers themselves will generate more vibration and inertial wear, as there's not much "give" in most transport trucks, racks, and server mounting hardware. That means every shake and rattle gets transferred to the drives. If you remove the drives and pack them in a cushy, warm car, with foam or padding around them, chances are they'll take less of a beating in the haul compared to those poor CPU's.
    mh : "With all those servers I'd be more worried about things getting put back together wrong" - that's why you label them properly. I'd consider it a wash too.
    Satanicpuppy : Agreeing with Michael. No way would I disassemble the machines. Not only do you have potential re-assembly problems, but the drives are not going to like being pulled out of the machines, jammed in the boxes, moved, then pulled out of the boxes and shoved back into the machines any more than just riding along in the machines.
    Bart Silverstrim : I'd also think it's a wash, but it's an idea that's balanced by the fact that we don't know what the terrain is like where they're travelling, the trucks being used to transport them, how old the servers are, or how long the drives have been in use. If they're marginal enough that removal and replacement (or spindown/powerup) causes a failure, he's got marginal server reliability as it is and a problem waiting to be exposed. Backups and spare drives are definitely more important in my book than debating drive removal and whether the conditions warrant babying them on a trip.
    mh : @Satanicpuppy: Have you seen a 2850? They're designed for easy drive removal. No substantive concerns there.
    Satanicpuppy : @Mh: Sure, in fact I have a couple downstairs. Still it's motion outside of their normal range, conditions outside their normal conditions, and I'd feel safer (were they mine) with the drives still locked in place in the server.
  • The most likely components to have trouble are the ones with moving parts - HDs, PSUs and CPU fans. It's also possible for cables and even PCI cards to work loose in transit.

    What I'd do is pop out the components in question, box them separately (using lots of bubble wrap) and transport them in the boot of someone's car, making sure to clearly label what belongs where, of course (goes without saying). When reassembling, all cables and cards then get an extra nudge to ensure that they're seated correctly.

    Having said all of that, I've placed Dell servers at the tender mercies of courier companies before, and on longer trips down twisty narrow and hilly countryside roads, and they've always come up smiling.

    From mh
  • I can't speak towards the 2850s specifically, but the 1950s are quite resilient and have survived far more than some bumps on a road.

  • Is the mountain made of magnetite?

    Farseeker : I'm not going to upvote the answer because it's not actually helpful, but it did make me laugh out loud for real
    Nathan Long : Thanks for letting me know. :)
  • If you care about the data (and it sounds like you do) do NOT move them. Buy new hosts somewhere else and migrate the data. Those machines are beyond their usable life expectancy. Do some calculations on the power savings you'll get by purchasing new R710s. You'll likely only need to buy 7 of them to replace all 55 of those 2850s. 2850s were power HOGS; Dell's capacity planner says they draw over 500 watts each. In my real-world experience, R710s (dual L5520, 72GB RAM and eight 73G 15k 2.5" drives) draw 220 watts when in use (120 is common when idle). 55x500 = 27,500 watts, 7x220 = 1,540 watts. If power is $.15/kWh, you'll pay $2,970/mo for the 55 servers or $166.32/mo for the 7. Plus the cooling factor (which can easily double that cost). If you can get a decent Dell lease on those new servers, you'll come out ahead over the life of the R710s.

    joeqwerty : A for effort, but really? Advising the OP to buy all new hardware and migrate all of the data, applications, etc. is hardly a suggestion I would take seriously.
    toppledwagon : I've had to make similar justifications to C-level management, they love this sort of thing. If you have 55 servers, you have some sort of configuration management in place so re-creating everything will be next to trivial. What part of this would you not take seriously?
  • We recently moved 50+ Dell servers (1650, 1850, 2850, 1950, 2950, etc.) and various other components (switches, firewalls, etc.) in two racks to a new data center 30 miles away (no mountains involved). We secured all of the equipment in both racks, wrapped the racks with moving blankets, strapped them in the back of a moving truck equipped with air shocks and had 6 movers move each one for us. When the racks were placed on the data center floor we reseated all hard drives, recabled all of the equipment and, knock on wood, 2 months later we're still running without a blip.

    From joeqwerty

1 Linux Backup server - 4 Linux servers, 3 Windows servers - how to back up?

I have:

  • 1 Linux "backup" server (Ubuntu 9.10 server) (plenty of disk, new server)
  • 2 Gentoo servers
  • 2 CentOS servers
  • 3 Windows 2003 servers

I'd like to backup all of the servers as disk-to-disk backups to the Ubuntu backup server.

HOW?

Please be gentle as I know Windows but not Linux. I've looked at Bacula, BackupPC, and Amanda. All seem to be a little too complex for me. I'm tasked with doing this cheap, so I can't simply load Windows on the backup server and put something like BackupExec on it.

MY REQUIREMENTS:

  • simple to setup on each client
  • simple to setup on the server
  • disk to disk backup
  • easy to monitor/check backup status
  • easy to restore files
  • email me results of backups
  • scheduled weekly full backups and nightly differentials
  • free/open source

Your help is much appreciated and I would think this kind of question would help others in the future if you are thorough in your answer.

Thanks!

  • I would probably go with rsync. For Linux, it can be set up with cron jobs. On Windows, there is a great rsync front end called DeltaCopy.

    In both cases, you will get incremental backups, so you aren't eating up disk space too fast.
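
    A minimal sketch of what the server-side cron entries could look like (host names and paths here are invented, and this assumes passwordless SSH keys from the backup server to each client):

    # /etc/cron.d/backups on the Ubuntu backup server: pull each client nightly
    0 1 * * * root rsync -a --delete -e ssh root@gentoo1:/etc/     /backups/gentoo1/etc/
    0 2 * * * root rsync -a --delete -e ssh root@gentoo1:/var/www/ /backups/gentoo1/www/

    Note that a plain rsync mirror only keeps the latest copy; for weekly fulls plus nightly differentials you would combine it with something like --link-dest, or use one of the rsync-based tools in the other answers.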

  • Have another look at BackupPC.
    BackupPC has a very intuitive web interface, works for both Linux and Windows hosts, and although it claims to be for PCs, it would work fine for backing up servers too!

    If you're really intent on doing it on the cheap, you will probably find you have to make some sacrifices.

    TheCleaner : I went ahead and dug further into Backuppc. Guess my virtualADD wanted it to simply work with wizards, etc. without RTFM.
  • How about adding a virtualization layer (ESX, Xen, ...)? Then you can simply back them up as disk files.

    Tom O'Connor : Waaaayyy too complex. Crazy!
    Lee B : No, you can't. Take a low-level backup of a database without considering its currently changing data or using some sort of atomic operation, and that data will be corrupted. In other words, you think you have a backup, and you might, but you might not. SOME virtualisation tools support snapshots, which may be what you're thinking of. However, snapshots are the thing that get you simple reliable image backups, not virtualization. Splitting a RAID mirror or using windows shadow copies will get you the same thing.
    From Dyno Fu
  • A simple and effective tool I've used is rsnapshot. It's easy to set up, and it looks like there's even a Windows HOWTO on their website (though I've never used it with Windows).

    You could also share out your snapshots directory with NFS and/or CIFS, and then mount it on the clients for easy restores. Be sure to share it read only, that way the clients can't delete or change the actual backup files.
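
    For illustration, a rough excerpt of what the configuration might look like (the paths and host names are assumptions, and rsnapshot.conf requires tabs between fields):

    # /etc/rsnapshot.conf (excerpt)
    snapshot_root   /backups/snapshots/
    interval        daily   7
    interval        weekly  4
    backup          root@gentoo1:/etc/        gentoo1/
    backup          root@centos1:/var/www/    centos1/

    # crontab entries on the backup server to drive the rotations
    30 2 * * *   /usr/bin/rsnapshot daily
    0  4 * * 0   /usr/bin/rsnapshot weekly

    (Newer versions call the interval directive "retain"; the remote backup lines assume passwordless SSH keys from the backup server to each client.)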

    devon : I just reread your requirements, and this won't provide weekly full + daily incremental. It's basically incremental forever, so if that's a hard requirement then this probably won't work.
    Lee B : Continually incremental/dedupped/linked (what's the term?) tools like rsnapshot, rdiff-backup, duplicity, dirvish, and faubackup are even better than full backups, imho, so long as you use a proper, reliable, RAID storage solution for your backup system store.
    From devon

What service do you use to manage DNS?

What service do you use to manage your DNS? I don't really want to use the DNS manager provided by my website hosting company. Is there a reliable and fast DNS management service?

I found a lot while Googling, but I don't know which one I should use. Any recommendations?

Thanks

  • www.afraid.org gets my vote every time. And it's free for most uses.

    Charles Stewart : +1 I was just looking for backup DNS options, and this looks perfect.
  • I'm a fan of vi, myself.

    Keith Stokes : We run our own BIND 9 servers and vi + restarting BIND is what does it for me.
    womble : "restarting bind"... `rndc reload ` ftw!
    Charles Stewart : +1 s/vi/emacs/. I *like* the ~ files: they give me something to diff.
    From womble
  • We use DNS Made Easy. We had another provider for several years with less than enticing service. After switching to these guys, with their 100% SLA, we've not ever had a single problem. High priority tickets were responded to < 5 mins, 'normal' tickets ~ 20 minutes.

    They've always been courteous and helpful, even when I'm having a blonde day and ask some downright stupid questions.

    Steve Wortham : They seem to be one of the cheaper DNS providers with global Anycast Routing... http://www.dnsmadeeasy.com/s0306/res/ipanycast.html
    EarthMind : All their nameservers seem to be located in the USA, which is not recommended if you want low-latency global reach. Also, people shouldn't rely on the DNS failover feature, as it's not as reliable as it seems. However, it should help in minimizing downtime rather than eliminating it.
    Steve Wortham : Most of their nameservers are in the USA. But they also have locations in UK, Germany, and Hong Kong.
    From Farseeker
  • The best option is to go with geographically dispersed nameservers (with the use of anycast) if speed is the most important part, otherwise just add a few free nameservers (use google) and share secondary DNS with a few other sysadmins.

    As for a commercial DNS provider, easydns.com has always been my first choice.

    Jason : +1 easyDNS They are not cheap, but they are pro. I also personally like their admin interface a lot.
    EarthMind : They are not really expensive either when you compare them to domain registrars asking $30 (or even euros!) for a com/net/org domain and just offering some local nameservers. Of course you pay for this kind of quality.
    From EarthMind
  • I second DNS Made Easy, Have used them for a few years now and have not had a single issue.

    From davemac
  • easydns.com works fine too.

    From aspitzer
  • I use [Zonomi's DNS hosting service][1] for some domains (and in fact wrote the app).

    I think you're actually fine using your hosting provider's DNS service (if they offer one), since it can be easier for them to help you if, for example, your IP address changes.

    I would, however, recommend you keep your domain registrar separate from your hosting company, since the registrar is where you control the name server records. That lets you move away from your hosting company should it become necessary, and (depending on how unethical your hosting provider is) you don't want them being your registrar and holding your domain hostage.

    [1]: http://zonomi.com "Zonomi's DNS hosting service"

Windows Service: Can I configure the current working directory?

By default, Windows services start in the system32 directory (usually C:\WINDOWS\system32).

Is there a way to set up a different working directory? I am thinking of some registry parameter beneath HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SomeService.

So - can this be done?

  • Like MattB, I don't know of any way to change the service's working directory w/o access to the source code. For this specific scenario, it's likely that the extra directory checks don't impose that much unnecessary disk activity relative to the amount of i/o required for the full text indexing operation. Even if you could optimize them away, the full text index will be disk intensive by the nature of the beast.

    From Fred
  • You could use DLL injection to call SetCurrentDirectory after the process has already launched. This would require you to build an injector application, plus the DLL to inject. Some tutorials exist; probably the two best ones I've found are:

    You'll need a decent amount of C++ programming background (and a working build environment) to get through that.

    However, this assumes that the service is looking at the current directory. Another possibility is that it's using %path%. You say that it "starts at system32, tries a few more locations, and eventually its own directory", so this seems more likely to me.

    Compare the directories you see in procmon with your %path%. If they're the same, consider modifying either the SYSTEM %path% or the %path% of the user running the service, so that the directory you want it to search is first.
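
    If you do go the %path% route, a rough sketch from an elevated command prompt (the folder name here is hypothetical; substitute the service's real install directory):

    rem Prepend the service's folder to the machine-wide PATH.
    rem Caution: setx truncates values longer than 1024 characters, and %PATH% here
    rem expands to the combined user+machine PATH, so review the result afterwards.
    setx /M PATH "C:\MyService;%PATH%"

    The service then needs to be restarted to pick up the new environment.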

    I believe Fred is right, though -- you're unlikely to see any significant performance benefit by doing any of this, unless it's happening very frequently. Simple file open operations are not particularly expensive, especially if it's a local path and the file doesn't actually exist.

    Marnix van Valen : The system PATH environment variable was the first thing that came to mind for me. Inserting the service's path at the start of the PATH variable will however have a negative effect on the performance of just about every other application so I wouldn't advise that.
    fission : I don't have hard numbers to back this up either way, but my intuition tells me that no practical performance gain or loss would occur from modifying the path. This is a fairly common scenario; nobody blames, say, the Windows Support Tools, or SQL Server, for negatively impacting system performance when it modifies the path during installation. This isn't the first time I've seen someone look at procmon and go "omg, look at all those file accesses!", not realizing that it's typical for most applications.
    Tomalak : +1 for creativity. :-) I fully understand that these file operations do not impact performance measurably, so I'm not going to actually bother writing a DLL injection solution. Modifying `%PATH%` for the user account the service runs under is a decent idea, though.
    fission : Thanks! Does this mean you're accepting my answer? :D
    Sunny : Creating a special user to run this service only and modifying the %PATH% for this user sounds as a very good way to go. +1
    Tomalak : @fission: Yes, it does mean I accept your answer. ;) It's not what I had hoped for, but it is as close as it gets, I guess.
    From fission

How do I create a database in MySQL and add a user?

To start working with MySQL from PHP I need to connect to the MySQL server using "mysql_connect". Doing so, I need to specify a username and password. But for that I first need to create a user with a password. How can I do it?

After I connect to the MySQL server I need to select a database. But for that the DB should exist. How do I create a DB? Can I do it from PHP?

  • UPDATE: Since you don't have MySQL installed, here are instructions for installing PHP, Apache and MySQL. howtoforge.com


    You'll need a user set up before you create the DB. This will probably have to be done through mysql itself or through your ISP's control panel.

    The MySQL command syntax for adding a user is as follows:

    CREATE USER user [IDENTIFIED BY [PASSWORD] 'password']
        [, user [IDENTIFIED BY [PASSWORD] 'password']] ...
    

    You can then use the GRANT command to allow particular access. Here's the syntax:

    GRANT
        priv_type [(column_list)]
          [, priv_type [(column_list)]] ...
        ON [object_type] priv_level
        TO user [IDENTIFIED BY [PASSWORD] 'password']
            [, user [IDENTIFIED BY [PASSWORD] 'password']] ...
        [REQUIRE {NONE | ssl_option [[AND] ssl_option] ...}]
        [WITH with_option ...]
    
    object_type:
        TABLE
      | FUNCTION
      | PROCEDURE
    
    priv_level:
        *
      | *.*
      | db_name.*
      | db_name.tbl_name
      | tbl_name
      | db_name.routine_name
    
    ssl_option:
        SSL
      | X509
      | CIPHER 'cipher'
      | ISSUER 'issuer'
      | SUBJECT 'subject'
    
    with_option:
        GRANT OPTION
      | MAX_QUERIES_PER_HOUR count
      | MAX_UPDATES_PER_HOUR count
      | MAX_CONNECTIONS_PER_HOUR count
      | MAX_USER_CONNECTIONS count
    

    Here's the method for using mysql_connect to connect to a database: php.net
    Here's the method for creating a db with php: php.net

    Brendan Long : The question is how to create a user, not how to connect to the server.
    Roman : I did it before through control panel (when I used commercial web server). But now I am working on my computer and I have no control panel. So, I think I need to do it via command line in MySQL. And the question is how to do it.
    From Griffo
  • http://dev.mysql.com/doc/refman/5.1/en/create-user.html

    There's a command to set the root user's password, but I don't remember what it is.

  • Have you installed the MySQL server yet? If not try going here:

    http://dev.mysql.com/doc/refman/5.1/en/installing.html

    If you have then you can skip to this section:

    http://dev.mysql.com/doc/refman/5.1/en/unix-post-installation.html

    Roman : When I type mysql in the command line (in Ubuntu) I get: The program 'mysql' is currently not installed. It is also written: You can install it by typing: sudo apt-get install mysql-client-5.0. I think it will be the easiest way if I just type "sudo apt-get install ...". Will it work this way?
    From malonso
  • You can do the following with the mysql command-line client:

    -- create the database
    CREATE DATABASE mydbname;

    -- create the user
    CREATE USER 'myuser'@'localhost' IDENTIFIED BY 'mypassword';

    -- give that user privileges on the database
    GRANT ALL PRIVILEGES ON mydbname.* TO 'myuser'@'localhost';

    Your PHP connect script would be something like this:

    $dbuser = "myuser";
    $dbpass = "mypassword";
    $dbhost = 'localhost';
    $dbname = "mydbname";
    
    $db = mysql_connect ($dbhost,$dbuser,$dbpass);
    mysql_select_db ($dbname,$db);
    
  • There are lots of good tools providing you with the details of how to do it from the command line. Another method you might want to consider is setting up phpMyAdmin or the MySQL GUI Tools. These interfaces will give you an easier starting point.

    From Zoredache

Is it possible to add a random variable to the querystring using mod-rewrite?

We're having a very odd problem with a proxy that a client uses.

In short, their proxy is caching information that it should not be caching. We have the appropriate information in the header that tells the proxy server not to cache AND it's over SSL, but it's still happening.

I can prevent this / remedy this by appending a random variable to the end of their querystring in the URL.

For example:

/information.php may show cached information, whereas /information.php?randomvariable=12345 will not.

Is there a mod rewrite rule that will accommodate something like this?

Thanks!

Edit -

Per Squillman's request, here's the meta data that we send for caching (I misspoke, it's metadata, not HTTP header information):

<meta http-equiv="CACHE-CONTROL" CONTENT="NO-CACHE">
<meta http-equiv="PRAGMA" CONTENT="NO-CACHE">
<meta http-equiv="Expires" content="Mon, 26 Jul 1997 05:00:00 GMT"/>
<meta http-equiv="Pragma" content="no-cache" />

Hope this helps! Thanks.

Edit 2 -

I've implemented a fix at the application level. I append a random variable (seed=random md5) to the query string for each request. It's dirty -- but it works.

I'll post an update once I figure out why this problem is happening. Thanks for the responses!

  • Couple of questions:

    • Are you sure that it's the proxy that's doing the caching?
    • If so, what have you done to confirm that?
    • Do you know what proxy product the client is using?

    The fact that it's going over SSL means that the proxy should not cache it, period (sorry, missed the SSL bit before I posted my comment). If it's a big name proxy product, then I'd more suspect that it's really the clients misbehaving.

    I'm not sure if it's possible in mod_rewrite. Couldn't you just generate it from within PHP?

    Ian P : The idea that it's a proxy that's causing the problem comes from the client. They stated that they had just implemented a new proxy server and the timing of the issue coincides with their implementation. They have been using our particular product for about a year with no issues, and this certainly would have shown up prior to this. There have been no changes to the product on our end either.. I'm baffled. Thanks again for the info.
    squillman : @Ian P - Yeah, I would really question them on the proxy front. Normal proxies should just straight-up NOT be able to get inside a request within SSL.
    fission : +1 for SSL and no-cache. I agree in principle with what you say here, but it's becoming ever more common for web filters to have the capability to inspect SSL (via explicit proxy and on-the-fly certificate generation), though I can't think of any that can force caching.
    From squillman
  • mod_rewrite has a MapType of rnd which may be able to do what you want.

    See this page under randomized plain text.
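
    A rough, untested sketch of how that could look in the server or virtual host config (the map file name and key are invented):

    # /etc/apache2/cachebust.map: one key, several values; "rnd" picks one at random
    #   buster  a1|b2|c3|d4|e5

    RewriteEngine On
    RewriteMap cachebust rnd:/etc/apache2/cachebust.map
    # only rewrite requests that don't already carry the parameter, to avoid loops
    RewriteCond %{QUERY_STRING} !(^|&)randomvariable=
    RewriteRule ^/information\.php$ /information.php?randomvariable=${cachebust:buster} [QSA,L]

    Keep in mind that RewriteMap is only valid in server or virtual host context, and the rnd type just cycles through the fixed values in the map file, so it is not truly random per request.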

Why does any .php page give me a 404 error on Server 2008 with IIS 7.0?

I am using FastCGI to set up PHP. I've followed the instructions on the iis.net website. I added the handler mapping and edited the php.ini file as specified. None of it works; I just get a 404.0 error saying "The page you are looking for has been removed", even though the physical path displayed on the error page exists. After trying this manual method (unzipping PHP, manually adding the handler mapping, etc.), I removed everything and tried the Web Platform Installer (ugh), but I still have the same issue.

A little more information:
The Detailed Error page says the handler is my StaticFile handler (not PHP FastCGI). It also gives error code 0x80070002.

When I look at the logs, it shows "GET /php.ini" as giving the 404 error. Why is IIS looking for that?

  • I dealt with a similar issue; all replies I could find on the web were of no help whatsoever. First off, I assume you have tried running http://yourservernamehere/check.php and everything seems to come back correct. If this is the case I would suggest checking the following setting in your php.ini file:

    [Date]
    ; Defines the default timezone used by the date functions
    date.timezone =

    Most likely it is blank like mine was; this generates a silly error with newer versions of PHP, and for whatever reason you will be dead in the water until you have a timezone specified (mine is America/Chicago).

    I may be assuming a lot about your situation but it sounds very similar to mine and I spent several days frustratedly searching for an answer.

    scotty2012 : It was blank, but adding the timezone did not fix the issue.
    Charles : Sorry to hear that, but maybe you saved yourself some more headache further down the road at least. Reading your question again I see you went back and reinstalled with the installer; in my experience that does not typically work, so I would suggest going back to the manual install method before even beginning to continue troubleshooting. Also, did you add the location of the install to your environment variables?
    scotty2012 : I have started over with the manual installation. It's now showing that it's using my 'PHP via FastCGI' mapping, but still results in a 404 page.
    From Charles
  • Is there any reason you aren't using the installer to do the work for you?

    Also, can you confirm that the .php handler mapping is correctly set?

    Path : *.php
    State : Enabled
    Path Type : File or Folder
    Handler : FastCgiModule
    
    scotty2012 : The installer resulted in the same problem. When I browse to the page on the server, it shows that it's using the FastCGI module. I am positive the mapping is correct.
  • Well, I'm not sure what I did, but somehow I fixed it. I removed the website and re-added it, then checked my FastCGI Mapping settings, everything looked just like before, but this time it works. I'd still like to know why I was getting the error if possible.

    From scotty2012
  • By default IIS will not serve any file for which it does not have a valid MIME type mapping, and will 404 the response.

    If the .php extension does not have a MIME type defined for the website on which you're trying to run PHP, then IIS will not serve the file even if there is a relevant handler for that file type.

    I just checked the IIS 7 Manager on my server and there is no mapping for PHP by default in the MIME Types list. I suspect that if your website existed before you installed FastCGI, the mapping was not automatically added to the existing website, whereas when you created the new website FastCGI was already installed.

    I could of course be completely wrong about that last bit, but the file extension to MIME type mapping issue is a security feature of IIS: no mapping = no files served with that extension.

    From RobV

Do background processes get a SIGHUP when logging off?

This is a followup to this question.

I've run some more tests; it looks like it really doesn't matter whether this is done at the physical console or via SSH, nor does this happen only with scp; I also tested it with cat /dev/zero > /dev/null. The behaviour is exactly the same:

  • Start a process in the background using & (or put it in background after it's started using CTRL-Z and bg); this is done without using nohup.
  • Log off.
  • Log on again.
  • The process is still there, running happily, and is now a direct child of init.

I can confirm both scp and cat quit immediately if sent a SIGHUP; I tested this using kill -HUP.

So, it really looks like SIGHUP is not sent upon logoff, at least to background processes (can't test with a foreground one for obvious reasons).

This happened to me initially with the service console of VMware ESX 3.5 (which is based on RedHat), but I was able to replicate it exactly on CentOS 5.4.

The question is, again: shouldn't a SIGHUP be sent to processes, even if they're running in background, upon logging off? Why is this not happening?


Edit

I checked with strace, as per Kyle's answer.
As I was expecting, the process doesn't get any signal when logging off from the shell where it was launched. This happens both when using the server's console and via SSH.

  • It will be sent SIGHUP in my tests:

    Shell1:

    [kbrandt@kbrandt-opadmin: ~] ssh localhost
    [kbrandt@kbrandt-opadmin: ~] perl -e sleep & 
    [1] 1121
    [kbrandt@kbrandt-opadmin: ~] ps
      PID TTY          TIME CMD
     1034 pts/46   00:00:00 zsh
     1121 pts/46   00:00:00 perl
     1123 pts/46   00:00:00 ps
    

    Shell2:

    strace -e trace=signal -p1121
    

    Shell1 Again:

    [kbrandt@kbrandt-opadmin: ~] exit
    zsh: you have running jobs.
    [kbrandt@kbrandt-opadmin: ~] exit
    zsh: warning: 1 jobs SIGHUPed
    Connection to localhost closed.
    

    Shell2 Again:

    strace -e trace=signal -p1121
    Process 1121 attached - interrupt to quit
    pause()                                 = ? ERESTARTNOHAND (To be restarted)
    --- SIGHUP (Hangup) @ 0 (0) ---
    Process 1121 detached
    

    Why does it still run?:
    Advanced Programming in the Unix Environment by Stevens covers this under section 9.10, Orphaned Process Groups. The most relevant section being:

    Since the process group is orphaned when the parent terminates, POSIX.1 requires that every process in the newly orphaned process group that is stopped (as our child is) be sent the hang-up signal (SIGHUP) followed by the continue signal (SIGCONT).

    This causes the child to be continued, after processing the hang-up signal. The default action for the hang-up signal is to terminate the process, so we have to provide a signal handler to catch the signal. We therefore expect the printf in the sig_hup function to appear before the printf in the pr_ids function.

    Massimo : But you explicitly sent a SIGHUP to it here; I was talking about what happens when you log off from the shell where you started the process.
    Kyle Brandt : Same results when I type exit, although I get a warning about jobs, but then type exit again. I tested this with ZSH.
    Massimo : I'm using BASH, and this probably depends on the shell. But BASH *should* send SIGHUP to child processes when logging off...
    Kyle Brandt : Bash sends SIGCONT apparently if the job is stopped, but I confirm it doesn't send anything if the job was not stopped.
  • Answer found.

    For BASH, this depends on the huponexit shell option, which can be viewed and/or set using the built-in shopt command.

    Looks like this option is off by default, at least on RedHat-based systems.

    More info on the BASH man page:

    The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h.

    If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits.
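
    For reference, the option can be checked and toggled like this in an interactive bash session:

    shopt huponexit      # show the current setting (off by default on RedHat-based systems)
    shopt -s huponexit   # enable: background jobs now get SIGHUP when the login shell exits
    shopt -u huponexit   # disable again

    To make it permanent, put the "shopt -s huponexit" line in ~/.bash_profile; note that it only applies to interactive login shells.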

    CarpeNoctem : Verified. When I performed an "exit", "logout", or CTL-D the child proc (job) would not receive a sighup (both root and reg user). However when I did "kill -HUP $$" to kill the current instance of bash the child processes DID receive a sighup. I then set huponexit and the child process did receive SIGHUP upon exit.
    Warner : Good stuff, man.
    From Massimo
  • I use csh, and background processes continue running along when I log off.

    From Chris S

Copy permissions to identical tree on linux / unix

I have a tree of files with correct permissions. Then I have a (file-wise) identical tree (with different file contents, though) with the wrong permissions.

How can I transfer the permissions layout from one tree to the other?

  • If you have the source and dest, you can synchronize your permissions with rsync -ar --perms source/ dest

    It will not transfer the data, just permissions...

    yawniek : nope, it will copy files if timestamps differ
    From Dom
  • One thing you could do is use the find command to build a script with the commands you need to copy the permissions. Here is a quick example; you could do a lot more with the various printf options, including getting the owner, group ID, and so on.

    $ find /var/log -type d -printf "chmod %m %p \n" > reset_perms
    $ cat reset_perms
    chmod 755 /var/log
    chmod 755 /var/log/apt
    chmod 750 /var/log/apache2
    chmod 755 /var/log/fsck
    chmod 755 /var/log/gdm
    chmod 755 /var/log/cups
    chmod 2750 /var/log/exim4
    ...
    
    David : I suspect the -printf argument to find is a GNU extension? HP-UX find doesn't have it.
    mpez0 : Even without the printf option to find, one can use the ls option (or, at worst, pipe to xargs ls -l) and save in a file. A minute or two of search and replace, and one will have a script with chmod for each file.
    From Zoredache
  • It can be done with the following shell line:

    D1=foo; D2=foo2; for entry in $(find $D1  -exec stat -f "%N:%Mp%Lp" {} \;); do $(echo $entry | sed 's#'$D1'#'$D2'#' | awk -F: '{printf ("chmod %s %s\n", $2, $1)}') ; done
    

    Simply set the right values for the D1 and D2 variables, pointing them at the source and destination directories; run it and the directories will have their permissions in sync.

    Thomas : Find, sed and awk in one line. I love that (I do it too).
    yawniek : bit long but perfect! note: foo should be $D1
    David : This assumes that stat is present. I've found, regrettably, that the command stat is often not present.
    AlberT : @nemtester sure, corrected thanks :)
    AlberT : @David, I don't know of such a system lacking of stat. But it is quite trivial to use the following "octal ls" version and accommodate the given solution accordingly: alias ols="ls -la | awk '{k=0;for(i=0;i<=8;i++)k+=((substr(\$1,i+2,1)~/[rwx]/)*2^(8-i));if(k)printf(\" %0o \",k);print}'"
    From AlberT
  • Two ways:

    1. If it works on your brand of UNIX: cp -ax /src /dest
    2. Or if not, this is the portable version: (cd /src && tar cpf - .) | (cd /dst && tar xpf -)

    (in the latter case /dst must exist)

    Edit: sorry, I misread. Not what you asked.

    theotherreceive : It's worth mentioning that -a (for archive) is a GNU addition to cp; I've never seen it on any other system. It's just short for -dpR (no dereference, recursive, preserve permissions). The R and p options should be in any version of cp.
    From Thomas
  • I think I'd write a perl script to do it. Something like:

    #!/usr/bin/perl -nw
    
    chomp(my $dir = $_);                 # strip the newline that -n leaves on $_
    my $mode = (stat($dir))[2] & 07777;  # permission bits only, not the file-type bits
    my $pathfix = "/some/path/to/fix/";
    chmod $mode, $pathfix . $dir;
    

    Then do something like this:

    cd /some/old/orig/path/ ; find . -type d | perlscript
    

    I wrote this off the top of my head, and it has not been tested; so check it before you let it run rampant. This only fixes permissions on directories that exist; it won't change permissions on files, nor will it create missing directories.

    From David
  • I just learned a new and simple way to accomplish this:

    getfacl -R /path/to/source > /root/perms.acl
    

    This will generate a list with all permissions and ownerships.

    Then go to one level above the destination and restore the permissions with

    setfacl --restore=/root/perms.acl
    

    The reason you have to be one level above is that all paths in perms.acl are relative.

    Should be done as root.

    From marlar

Should I use TCP or UDP to run a web server?

I have just installed the Apache web server on my computer. I have managed to use it locally (I can open index.php from my computer using my web browser), but I would like to make my web site available publicly. I found out that for that I need to open port 80. I started to do it, and now I have to specify which protocol these rules should apply to (TCP or UDP). Can anybody please help me?

  • Web servers work with the HTTP (and HTTPS) protocol which is TCP based.

    As a general rule, if people neglect to specify whether they mean TCP/UDP/SomethingElse then they probably mean TCP.
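
    If the firewall you're configuring happens to be iptables on a Linux host, the rule would be TCP only; a minimal sketch (it assumes the default INPUT chain with no earlier rule rejecting the traffic):

    iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # HTTP
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # HTTPS, if you serve it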

    joschi : While your answer is correct from a practical point of view, you might want to note that the HTTP specification actually doesn't specify which transport protocol has to be used. So HTTP over UDP or HTTP over SCTP is as valid as HTTP over TCP, which is commonly used.
    David Spillett : Interesting thought. UDP could make small requests (where the request and response each fit into a packet or two like many AJAX requests, small gifs and css files, ...) quicker by reducing latency, possibly making more difference than keep-alive connections. The presence of unreliability in the network could quickly kill the bonus though, and browsers would have to deal with the added complication of re-requesting lost packets. I wonder if anyone has tried it...
    Daniel Papasian : No, HTTP over UDP is not as valid as HTTP over TCP. From RFC2616 "HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used" UDP does not presume a reliable transport.
    David Spillett : @Daniel: Aye, that is why the browser (and server) would need to be aware of and account for packet loss in my thought-dump (essentially reinventing part of what TCP does for you, but perhaps in a way that might be more efficient in the average case for small requests/responses)
    Daniel Papasian : David, in which case you'd be implementing HTTP over [whatever protocol you invented over UDP] and not merely HTTP over UDP.
  • TCP establishes a connection and UDP just sends packets.

    You will have packet loss with UDP. Sites like youtube.com use UDP for video streaming because it's faster than TCP (no connection is established) and it doesn't matter much if you miss a few frames; you probably wouldn't notice them missing anyway.

    You want to use TCP because you don't want packet loss.

    grawity : When a video stream uses UDP, it also does so with a completely different protocol (such as RTSP). And HTTP - the website itself - always runs over TCP.

Generating and capturing Netflow on a Linux router

We currently have a dual-NIC Ubuntu server at our data centre acting as the gateway router between our public networks and our ISP. We have a /30 cross connect network on the ISP-facing NIC, and one IP from each of the three networks attached to our interior-facing NIC.

I would like to configure network traffic statistics generation and collection on this server, using Cisco's NetFlow protocol. This will allow me to confirm our ISP's billing, as well as break down data flow within our network.

What tools or packages would you recommend to passively capture traffic statistics and record them for later processing? Extra points if the Netflow collector has a MySQL data-store connector.

  • nprobe netflow generator

    And I personally use flow-tools to store flows on disk, generate reports.

    Regards K

    Edit: here are many more tools for logging to mysql, charting, etc.

    From Khb
  • I know argus can read and process netflow data and it is quite good at collecting and processing network flow data by itself.

    I've never used it to create netflow data as I usually just use it to collect and process the data natively, or use it to take a variety of flow types (tcpdump captures, netflows, etc) and use argus for the aggregation and summarization and analysis.

    monomyth : I used argus to generate netflows for tapped/mirrored ports; the flows were sent to flow-tools for processing. It worked great; I vaguely remember some data missing from flows, but nothing I needed.
    From chris
  • I suggest looking at argus, as chris says. From my experience it's the best-behaving flow collector, but there are good alternatives like flowd and pfflowd that might work for you too. If you have any decent load (terabytes per day), forget about storing your flows in any SQL database :) Oh, and yes, flow-tools are great once you learn all the filtering magic and such.

    From monomyth
  • For generating flows, a tool like nprobe or fprobe will work fine, as others have mentioned.

    For collecting, I like nfdump/nfsen. It doesn't use MySQL, but it is really easy to work with and to get data out of in a machine-readable form.

    You probably don't want the full netflow data in mysql, instead it usually makes more sense to run an aggregation query and load the summary into mysql. Having 10,000,000 records in mysql is not going to be fun to work with, but inserting a daily or hourly summary of (ip,total flows,total bytes,total packets) works a lot better.
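
    As a rough sketch of such a pipeline (interface names, ports and directories are assumptions; adjust to your setup):

    fprobe -i eth1 127.0.0.1:9995                  # generate flows from the inside-facing NIC
    nfcapd -D -w -l /var/cache/nfdump -p 9995      # collect them into 5-minute files
    nfdump -R /var/cache/nfdump -s ip/bytes -n 20  # e.g. top 20 talkers by bytes

    The aggregated nfdump output can then be parsed and loaded into MySQL on whatever schedule suits your billing checks.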

    David Mackintosh : +1 for SQL summaries. I store (ip, flows, in bytes, outbytes) every five minutes and the database is reasonably usable. I tried dumping netflow data into a database, and found it storage-intensive and incredibly slow to access. For specific flow information, linear searches through netflow files turned out to be faster in all cases (of the scenarios we tried anyways).
    From Justin

What impact does full hard drive encryption have on performance?

We have HP notebooks here at work, and it is policy to have HP's hard drive encryption turned on to protect client databases and IP in the case of loss/theft.

I was wondering if there was any evidence of a performance hit in this situation? The machines are primarily used as development workstations. Anecdotal evidence here suggests that the machines are slower.

Should we be using another approach (i.e. only encrypting the sensitive data as opposed to the entire disk)?

  • The only proof is to measure. Take timings on a laptop with no encryption and compare them with one that has it. Of course there will be overhead from the encryption, but don't rely on a subjective "feels slower". What encryption are you using? BitLocker? A 3rd-party app?

    As to the final question, it is too easy to accidentally miss (or even define) what is sensitive data. So I would keep the whole disk encryption if possible.

    csjohnst : We're using the built in HP encryption - "Drive Encryption for HP Protect Tools"
  • The "HP Protect Tools" is a rebadged McAfee/Safeboot FDE product. The performance impact shouldn't be too bad -- I'm assuming that you're using AES.

    We encrypted about 5,000 laptops three years ago, and our folks didn't report any significant performance issues. A few older boxes blue-screened, that's about it. You may be experiencing slowdowns immediately after enabling encryption... encrypting the disk can take 8-20 hours depending on the vintage of the equipment and size of the disk.

  • To answer this question, we need to know: is your app disk bound, CPU bound, or something else? Traditionally disk encryption involves a minor hit to performance; disk is usually so slow that the decryption overhead is minuscule. However, if CPU is a concern, this can get hairy.

    Development workstations usually have plenty of CPU to improve productivity: faster build times, autocompletion/IntelliSense, automated unit tests, etc. A laptop's compromises in the name of portability normally work against that, but if you're already giving developers laptops, it suggests you have spare CPU cycles and can probably afford disk encryption.

    What you need to do as an IT professional is build a model of what developers need computational power for, and benchmark how those tasks fare under proposed conditions: no encryption, full disk encryption, and partial encryption.

    From jldugger
  • We've used Safeguard Easy for years and Truecrypt's whole disk encryption since it came out, and neither has caused a big performance hit; even the older notebooks run development and database software without a noticeable difference in speed. Some people will even tell you that whole disk encryption software makes some operations run considerably faster due to compression, improved drive read routines, pipelining and the like. I wouldn't go that far, but as with most things, the truth is probably somewhere in between.

    The peace of mind from encrypting your disk, particularly if you have any kind of regulatory/compliance threshold in your industry (or are just paranoid) is worth the minimal hit of the encryption software we've used for this purpose.

    Thomas Denton : I have been using TrueCrypt for about a year now and don't really even know it is there. I have also had my sales team tell me their machines are faster after we encrypted them.
    From nedm
    My own experience is that about 30% of the CPU will be dedicated to crypto, with a 50% hit in disk performance. I've tried several encryption alternatives (SafeGuard, OS X FileVault, PGP WholeDisk) and the same rule of thumb seems to apply. The CPU use is particularly annoying though, as it affects battery time too.

    A quick google search revealed this test which seems to verify my gut-feeling: http://www.isyougeekedup.com/full-hard-disk-drive-encryption-benchmarks-and-performance/

    sleske : Note that the benchmark on isyougeekedup.com (which has a different address) is a synthetic benchmark. It's not clear how this relates to real usage.
  • For the love of God and all that is pure and right, stay away from Credant!!

    It is used in our (non-technology) company, and virtually all the developers have a special security waiver to not have it installed on their PCs or laptops. It suffers from horrible performance when accessing many files at once, like when compiling code. We also had an issue with a service that would read the registry and other configuration files and that started up before user login. Well, as the files were not decrypted until after the user logged in, the service would die an early, horrible death.

    Also, once this steaming pile of code is installed, it is supposedly as hard to uninstall as IE, but this has not been verified in a non-lab environment because it has usually resulted in hosing the system, requiring a reimage. YMMV.

    From Ed Griebel
  • Not sure if you can do this (maybe in a lab?), but can you try rerunning these tests with AV uninstalled (or at least disabled)? The reason I suggest this is that we had a customer with a similar issue to yours (except they had more delay writing to the drive than deleting; they also had the write cache support issues that you have). We reran our benchmark tests with AV removed from the system and found that Win2008 (before R2 was released) outperformed Win2003 by a lot. It turned out that the AV was responsible and we had to find a different AV provider. Not sure whether it will help you, but it's something to check if you have the option.

    From Phillip