
Server Discussion


SDS


So, I'm getting serious about getting a new rig. Unfortunately, I have the personality that renders buying socks a serious task as I look to maximize every aspect of the purchase. Ugh.

 

Currently, we have:

 

dual Xeon 5345 2.3 GHz quadcores

12 GB of 667 RAM

dual 147 GB 15K SCSI drives (RAID 1)

 

I'm looking at:

 

Dual hex-core Xeon E5-2620 - 2.00 GHz (Sandy Bridge) - 2 x 15 MB cache

16 GB Registered DDR3 1333 RAM

200GB SSD drive

 

 

Now, the last part is significant and maybe someone can help me determine what we really want. I'm currently paying for 2 SCSI drives and a RAID controller, with the idea that if one disk crashes, the other one steps up to the plate. I'm not sure that is what I really want. What I really want (I think) is:

 

A full disk backup that can run the server (even if at reduced performance) with limited (not necessarily zero) data loss. The point being that the server doesn't need to be fully reconfigured if the hardware fails, and we are up and limping along with limited downtime. I THINK I'm fine with losing a day of data. If so, that lops $100 off per month and puts me in the same ballpark as the current server.
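One common way to meet that kind of requirement on a LAMP box is a nightly database dump shipped to a second machine. A minimal sketch only - the database name, paths, and backup host are placeholders rather than details from this server, and it assumes MySQL credentials live in ~/.my.cnf:

```bash
#!/bin/bash
# nightly-backup.sh - dump the forum database and copy it off the box.
# "forum_db" and "backuphost" are hypothetical names.
set -euo pipefail

STAMP=$(date +%F)
DUMP=/var/backups/forum_db-$STAMP.sql.gz

# Consistent dump without long table locks (for InnoDB tables)
mysqldump --single-transaction --quick forum_db | gzip > "$DUMP"

# Ship it to another machine so a dead disk doesn't take the backup with it
rsync -a "$DUMP" backup@backuphost:/backups/forum/

# Keep a week of local copies
find /var/backups -name 'forum_db-*.sql.gz' -mtime +7 -delete
```

Run that from cron once a night and the worst case is roughly a day of lost posts, which matches the stated tolerance.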

 

Does anyone with server knowledge want to chime in here? I could use some help with determining storage architecture too. Comedians - please find another thread - I need a productive conversation here before the world of youth soccer sucks me back in.

 

Lastly, server is currently in Dallas. Have option for Wash DC. Thinking that might be better for Buffalo based peeps.




Is the backend MS SQL Server or something else? Those specs look good, although I don't know where the system is bound up - at the database level or somewhere else.

 

I wish I had more time to geek out on this with you, because I agree that cloud would seem the way to go; however, figuring out pricing (it is done by the hour, right?) would be confusing. Seems to me cloud would be a better option than owning a server, as you pay for capacity and don't need to own a monster that gets used for 16 weeks, 4 hours a week.


Lastly, server is currently in Dallas. Have option for Wash DC. Thinking that might be better for Buffalo based peeps.

Don't know much about the hardware stuff, but unless the site in DC boasts much higher bandwidth, I doubt moving the server there would have much of an impact. This is just text; we aren't downloading 4 GB DVD images or anything.


Is the backend MS SQL Server or something else? Those specs look good, although I don't know where the system is bound up - at the database level or somewhere else.

 

I wish I had more time to geek out on this with you, because I agree that cloud would seem the way to go; however, figuring out pricing (it is done by the hour, right?) would be confusing. Seems to me cloud would be a better option than owning a server, as you pay for capacity and don't need to own a monster that gets used for 16 weeks, 4 hours a week.

 

MySQL.

 

On game days, it is I/O. Too many people are accessing the exact same content (usually a single game day thread). I have to turn off a topic-marking feature because the high volume of writes locks everything up.
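That symptom - writes stalling everyone reading the same thread - is what MyISAM's table-level locking tends to look like, so it's worth checking which storage engine the busy tables use. A hedged sketch, assuming credentials in ~/.my.cnf; "forum_db" and "topic_markers" are made-up names, not actual IPB tables:

```bash
# Which storage engine does each forum table use?
mysql -e "SELECT table_name, engine
          FROM information_schema.tables
          WHERE table_schema = 'forum_db';"

# If a write-heavy table is MyISAM, converting it to InnoDB swaps table-level
# locks for row-level locks ('topic_markers' is a hypothetical table name):
mysql -e "ALTER TABLE forum_db.topic_markers ENGINE=InnoDB;"
```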


If you want a full backup, don't you need 2 drives in a RAID 1 configuration? Maybe that's how you save $100/mo - by not having the RAID?

 

SSD is definitely the way to go for drives; however, I'm not sure the speed of the drive is the cause of the slowdown. I think... you want as much free memory as possible for your server cache settings, so the question is whether 200 GB is enough versus the trade-off for higher speed.

 

I'm not up to date on the latest server processors, but hasn't Sandy Bridge been replaced by Ivy Bridge? There might even be advances since Ivy Bridge. I can look into that, if needed. I think you get better performance with the newer generation stuff, but of course it costs more. So, my initial thought is: what's more important, the latest processor board or big, fast hard drives?

 

As far as Dallas vs DC goes, I don't think it matters where the machine sits. As long as it has a good connection to the net and isn't going to be taken out by a hurricane/tornado/disaster, then I wouldn't worry too much about that, personally.


My observation is that during game day threads, when a big or controversial play happens, 10 people are trying to submit a post at the same time. That's when the site just stops working and freezes.

 

You can tell it's going to happen before it happens - when there is a big pick six or something.

 

As to how hardware relates to this problem, I don't know. I'm not a computer expert.


I think you're looking in the wrong direction.

I suppose the goal is to handle many connections, and I suppose you have a LAMP software/hardware setup, with IPB as the PHP application.

 

You should look at NGINX instead of Apache.

What is your PHP version? 5.5 has a major built-in cache optimizer (the native Zend OPcache), but you can't run it with old scripts (like IPB), as some functions are obsolete.

I don't know IPB well (I prefer open source solutions), but you should look at your database table configuration.

 

Hardware is fine and is the last thing you should consider changing.
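Checking where the current stack stands only takes a minute. A rough sketch of what could be run on the box (assumes shell access; nginx is likely not installed yet, so the last command may just report that):

```bash
php -v                      # PHP version; 5.5+ bundles the Zend OPcache
php -m | grep -i opcache    # is the opcode cache module loaded?
httpd -v                    # current Apache version (CentOS package name)
nginx -v 2>&1 || echo "nginx not installed"
```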

Edited by Repulsif

Very quickly...

 

I'm not a big fan of SSDs for server use, as they degrade over time depending on the number of reads/writes. When I got mine, the expected lifespan as a boot drive was 2 years; I'm sure they've improved since then, but it's just part of their nature to degrade over time. What's more, as that degradation is progressive, you might see server performance slow over time. It may make logical sense anyway, from a price-performance standpoint, but I'm not sure of the performance difference between RAID 1 and SATA 3 or 3.2 (I assume).

 

It seems to me that you might have more processor power than you need, too. If database I/O is your bottleneck, I'd guess you're probably not fully using the processors you have now. Going from 8 cores to 12 won't do much if the I/O is still maxed out (and if you go with an SSD, I'd expect the processors to eventually be underutilized). From a cost-benefit point of view, it might be better to spend more on disk I/O (or MySQL licenses) than processing power. Can't tell without hard numbers, but it's something to consider.

 

Also, someone above mentioned the cloud. The way you've described the server, it sounds like a hosted physical box rather than a VM. One of the benefits you'd get from a cloud VM is some ability to scale according to demand - rather than spec'ing server hardware to your expected peak load, you can provision it seasonally (e.g. increase resources in July, increase them more in January when the Bills make it to the Super Bowl, decrease them in February after the Bills win, increase them in April for the draft...). It might be more cost-effective. I'd look into AWS, if you haven't.
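As a rough illustration of what seasonal provisioning looks like on EC2 - the instance ID and instance type below are placeholders, and resizing requires a short stop/start window rather than a live change:

```bash
# Bump the box up before the season, shrink it again afterwards.
aws ec2 stop-instances --instance-ids i-0123abcd
aws ec2 wait instance-stopped --instance-ids i-0123abcd
aws ec2 modify-instance-attribute --instance-id i-0123abcd \
    --instance-type "{\"Value\": \"m3.xlarge\"}"
aws ec2 start-instances --instance-ids i-0123abcd
```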


I think you're looking in the wrong direction.

I suppose the goal is to handle many connections, and I suppose you have a LAMP software/hardware setup, with IPB as the PHP application.

 

You should look at NGINX instead of Apache.

What is your PHP version? 5.5 has a major built-in cache optimizer (the native Zend OPcache), but you can't run it with old scripts (like IPB), as some functions are obsolete.

I don't know IPB well (I prefer open source solutions), but you should look at your database table configuration.

 

Hardware is fine and is the last thing you should consider changing.

Very good points. I'm no expert, but I've read in many places that nginx is the way to go.

 

The other advice I was just given by the guy that runs our web server is... go big on the RAM. SQL uses that to cache the data. Generally you want at least twice as much memory as the total size of the MDF files on the hard drive (I'm not exactly sure what MDF files are :) but that was his advice). Then he started suggesting tweaks based on the page life counter and... that's when I said good enough for now.

 


Very quickly...

 

I'm not a big fan of SSDs for server use, as they degrade over time depending on the number of reads/writes. When I got mine, the expected lifespan as a boot drive was 2 years; I'm sure they've improved since then, but it's just part of their nature to degrade over time. What's more, as that degradation is progressive, you might see server performance slow over time. It may make logical sense anyway, from a price-performance standpoint, but I'm not sure of the performance difference between RAID 1 and SATA 3 or 3.2 (I assume).

 

It seems to me that you might have more processor power than you need, too. If database I/O is your bottleneck, I'd guess you're probably not fully using the processors you have now. Going from 8 cores to 12 won't do much if the I/O is still maxed out (and if you go with an SSD, I'd expect the processors to eventually be underutilized). From a cost-benefit point of view, it might be better to spend more on disk I/O (or MySQL licenses) than processing power. Can't tell without hard numbers, but it's something to consider.

 

Also, someone above mentioned the cloud. The way you've described the server, it sounds like a hosted physical box rather than a VM. One of the benefits you'd get from a cloud VM is some ability to scale according to demand - rather than spec'ing server hardware to your expected peak load, you can provision it seasonally (e.g. increase resources in July, increase them more in January when the Bills make it to the Super Bowl, decrease them in February after the Bills win, increase them in April for the draft...). It might be more cost-effective. I'd look into AWS, if you haven't.

 

The problem with the cloud, as I mentioned in the previous thread in Nov - I don't feel capable of spec'ing what I need beyond "what is my cost for my current rig at peak loads" and then going down from there during non-peak times. And although it is really attractive on paper, I can't predict when a coach gets fired or Mario Williams is going to be pursued, etc... I'm not sure how quickly resources can be given to me and what level of intervention is required on my part.

 

I need a massive OS upgrade anyway (along with all the components in the LAMP stack). The cost of doing that in my time (the disk gets formatted and a fresh OS is installed, meaning the entire server reconfig process needs to be completed) means the time is right to look at hardware. If I can get the price to be equivalent, then it is a free hardware upgrade (with newer "older" hardware), because my monthly cost isn't going down by keeping the same hardware going.


So, I'm getting serious about getting a new rig. Unfortunately, I have the personality that renders buying socks a serious task as I look to maximize every aspect of the purchase. Ugh.

 

Currently, we have:

 

dual Xeon 5345 2.3 GHz quadcores

12 GB of 667 RAM

dual 147 GB 15K SCSI drives (RAID 1)

 

I'm looking at:

 

Dual hex-core Xeon E5-2620 - 2.00 GHz (Sandy Bridge) - 2 x 15 MB cache

16 GB Registered DDR3 1333 RAM

200GB SSD drive

 

 

Now, the last part is significant and maybe someone can help me determine what we really want. I'm currently paying for 2 SCSI drives and a RAID controller, with the idea that if one disk crashes, the other one steps up to the plate. I'm not sure that is what I really want. What I really want (I think) is:

 

A full disk backup that can run the server (even if at reduced performance) with limited (not necessarily zero) data loss. The point being that the server doesn't need to be fully reconfigured if the hardware fails, and we are up and limping along with limited downtime. I THINK I'm fine with losing a day of data. If so, that lops $100 off per month and puts me in the same ballpark as the current server.

 

Does anyone with server knowledge want to chime in here? I could use some help with determining storage architecture too. Comedians - please find another thread - I need a productive conversation here before the world of youth soccer sucks me back in.

 

Lastly, server is currently in Dallas. Have option for Wash DC. Thinking that might be better for Buffalo based peeps.

 

Okay, I am typing on a tablet so forgive me if I mistype some stuff. First and foremost, the location of the server is inconsequential as long as the internet connection is adequate. The specs you are looking at are okay but not really enough for what you are doing. The SSD is great for your OS installation. It's lightning quick with boot-ups vs. a standard hard drive.

 

The biggest issue I see is the memory. You're going to be running IIS and most likely a version of SQL Server Express or MySQL for any of the backend databases, so you want at least 32 GB of RAM. I would recommend 48, though. My rule of thumb is you can never have enough RAM. What SDS was saying about your hard drives is correct. I/O, or IOPS, for the hard drives is going to be the most critical part in regards to performance, especially on game day. The more reads and writes your drives can perform, the less lag will be noticed. SSDs would be great for this but are very cost-prohibitive; the larger drives get VERY expensive. SCSI drives are not necessarily the way to go any more since SATA drives became available. They're cheaper than SCSI drives as well. Also, while two drives running on a RAID controller are technically considered RAID, I don't count it. A pair of drives in RAID 1 are mirrored, and any data that is corrupt is usually corrupted on both drives because of this. I would recommend buying 3 or 4 drives and running them as RAID 5. If you buy 3 drives, one drive's worth of capacity is used for redundancy: buy 3 3 TB drives and you get 6 TB of storage, and the array keeps running if one of the drives fails. It's the same theory if you were to buy 4 drives.
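A rented box would normally set this up in the hardware RAID controller rather than in software, but Linux's mdadm shows the same arithmetic if anyone wants to see it - a sketch only, with placeholder device names:

```bash
# 3-disk RAID 5: usable capacity is (n-1) disks, so 3 x 3 TB gives about 6 TB,
# with one disk's worth of parity spread across all three. The array keeps
# running (degraded) if a single drive fails.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Watch array state and rebuild progress
cat /proc/mdstat
```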

 

As for backups, there are literally dozens of options for you. That said, all Microsoft servers come with MS backup. It's a decent product, more than adequate for what you're doing. For backup storage I would recommend a massive external hard drive to store your backups. New ones are USB 3 and are very fast. I would not recommend cloud storage, especially if you have large amounts of data, because the restore process will be much slower than a local external storage device. Uploads of backups to the cloud can take days because upload speeds are usually restricted to 1.5, 5, or 10 Mbps; 1.5 or 5 Mbps are the norm.
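Those upload times are easy to sanity-check with back-of-the-envelope math. A quick sketch, using a hypothetical 50 GB backup as the example size:

```bash
# Rough transfer time: gigabytes * 8 bits, divided by the uplink in Mbps
SIZE_GB=50   # hypothetical backup size
for MBPS in 1.5 5 10; do
    awk -v gb="$SIZE_GB" -v mbps="$MBPS" \
        'BEGIN { printf "%s GB at %s Mbps = about %.1f hours\n", gb, mbps, gb*8*1000/mbps/3600 }'
done
```

At 1.5 Mbps a 50 GB backup really is about three days; a database in the 5 GB range would be more like several hours.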

 

Hope this helps. If it means anything, this is what I do and have been doing for almost 20 years. If you have any other questions or want to talk in more detail, you can send me a PM.

 

BigPappy


Okay, I am typing on a tablet so forgive me if I mistype some stuff. First and foremost, the location of the server is inconsequential as long as the internet connection is adequate. The specs you are looking at are okay but not really enough for what you are doing. The SSD is great for your OS installation. It's lightning quick with boot-ups vs. a standard hard drive.

 

The biggest issue I see is the memory. You're going to be running IIS and most likely a version of SQL Server Express or MySQL for any of the backend databases, so you want at least 32 GB of RAM. I would recommend 48, though. My rule of thumb is you can never have enough RAM. What SDS was saying about your hard drives is correct. I/O, or IOPS, for the hard drives is going to be the most critical part in regards to performance, especially on game day. The more reads and writes your drives can perform, the less lag will be noticed. SSDs would be great for this but are very cost-prohibitive; the larger drives get VERY expensive. SCSI drives are not necessarily the way to go any more since SATA drives became available. They're cheaper than SCSI drives as well. Also, while two drives running on a RAID controller are technically considered RAID, I don't count it. A pair of drives in RAID 1 are mirrored, and any data that is corrupt is usually corrupted on both drives because of this. I would recommend buying 3 or 4 drives and running them as RAID 5. If you buy 3 drives, one drive's worth of capacity is used for redundancy: buy 3 3 TB drives and you get 6 TB of storage, and the array keeps running if one of the drives fails. It's the same theory if you were to buy 4 drives.

 

As for backups, there are literally dozens of options for you. That said, all Microsoft servers come with MS backup. It's a decent product, more than adequate for what you're doing. For backup storage I would recommend a massive external hard drive to store your backups. New ones are USB 3 and are very fast. I would not recommend cloud storage, especially if you have large amounts of data, because the restore process will be much slower than a local external storage device. Uploads of backups to the cloud can take days because upload speeds are usually restricted to 1.5, 5, or 10 Mbps; 1.5 or 5 Mbps are the norm.

 

Hope this helps. If it means anything, this is what I do and have been doing for almost 20 years. If you have any other questions or want to talk in more detail, you can send me a PM.

 

BigPappy

 

haha. uh, yeah.

 

48 GB of RAM is an extra $180 per month. Those extra disks are about another $100 per month. I'm not buying anything - this is all rented monthly, so the extra costs are forever.

 

I'm not looking to foot a $7200 per year server bill. ;)


The biggest issue I see is the memory. You're going to be running IIS and most likely a version of SQL Server Express or MySQL for any of the backend databases, so you want at least 32 GB of RAM.

 

LAMP stack. No Microsoft apps. More memory's never a bad idea, but 32 GB might be overkill.


LAMP stack. No Microsoft apps. More memory's never a bad idea, but 32 GB might be overkill.

 

Unfortunately, my choices are 4, 8, 12, 16, or 32. Nothing in between 16 and 32. I'm actually at 5.2 GB for the total forum database size (not including anything else on this server), so 16 is probably where I need to go. It's probably a little small, but clearly better than what I have now - 32 is way over. I also think the faster memory should give a performance boost. 667 to 1333 should help - no?
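For what it's worth, the working-set math is easy to double-check from the database itself. A minimal sketch, assuming MySQL credentials in ~/.my.cnf; the 8G buffer pool figure is just an example for a 16 GB box, not a recommendation from this thread:

```bash
# Total data + index size per database, in GB
mysql -e "SELECT table_schema,
                 ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb
          FROM information_schema.tables
          GROUP BY table_schema;"

# With a ~5 GB database on a 16 GB box, the whole working set can sit in the
# InnoDB buffer pool. Example /etc/my.cnf entry (value is illustrative only):
#   [mysqld]
#   innodb_buffer_pool_size = 8G
```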


 

 

haha. uh, yeah.

 

48 GB of RAM is an extra $180 per month. Those extra disks are about another $100 per month. I'm not buying anything - this is all rented monthly, so the extra costs are forever.

 

I'm not looking to foot a $7200 per year server bill. ;)

I certainly understand that. I/O is going to be your bottleneck. I guess I would say go as big as you can afford, then. The other option is, while I enjoy this board as a free discussion board, I'm not opposed to paying a reasonable membership fee. It would help offset some or all of your costs. As I said, I enjoy this site, but one person should not have to be the sole bearer of the cost to support the site if it's costing even half of 7200 bucks.

 

Fwiw

BigPappy


The problem with the cloud, as I mentioned in the previous thread in Nov - I don't feel capable of spec'ing what I need beyond "what is my cost for my current rig at peak loads" and then going down from there during non-peak times. And although it is really attractive on paper, I can't predict when a coach gets fired or Mario Williams is going to be pursued, etc... I'm not sure how quickly resources can be given to me and what level of intervention is required on my part.

 

That had occurred to me as I was suggesting it. You can't predict when Doug Marrone and Jim Schwartz are going to be caught getting lap dances at Chippendales. It's why I said "worth a look" and not "you should."

 

I'm familiar with cloud architecture (certified cloud architect, actually - your post is timely, as I'm spec'ing out a cloud server architecture literally right this minute), but not specific commercial offerings and service levels. I've been meaning to do this anyway, so when I get some free time, I'll take a look and let you know what I find - it doesn't hurt to find out. I suspect the pricing is slightly higher, anyway - you'd be paying for increased reliability and service. What's the approximate bandwidth of the site? (My sorry ass is probably responsible for two gig of data alone).

 

Another benefit of the cloud is that Rosen can't crash it. Probably. Not easily, at least.


 

 

Unfortunately, my choices are 4, 8, 12, 16, or 32. Nothing in between 16 and 32. I'm actually at 5.2 GB for the total forum database size (not including anything else on this server), so 16 is probably where I need to go. It's probably a little small, but clearly better than what I have now - 32 is way over. I also think the faster memory should give a performance boost. 667 to 1333 should help - no?

Yes, that is correct. The new memory will be much faster. 16 GB will get you by with that database size. What OS are you planning on running?


That had occurred to me as I was suggesting it. You can't predict when Doug Marrone and Jim Schwartz are going to be caught getting lap dances at Chippendales. It's why I said "worth a look" and not "you should."

 

I'm familiar with cloud architecture (certified cloud architect, actually - your post is timely, as I'm spec'ing out a cloud server architecture literally right this minute), but not specific commercial offerings and service levels. I've been meaning to do this anyway, so when I get some free time, I'll take a look and let you know what I find - it doesn't hurt to find out. I suspect the pricing is slightly higher, anyway - you'd be paying for increased reliability and service. What's the approximate bandwidth of the site? (My sorry ass is probably responsible for two gig of data alone).

 

Another benefit of the cloud is that Rosen can't crash it. Probably. Not easily, at least.

 

October is the peak: I stay under 5 Mbps total all year, with game day spikes of 4.5 Mbps. 500 GB total throughput in October.

 

Physical storage is negligible. I doubt I need 40 GB of disk space. If so, that is the ballpark I need.

 

Yes, that is correct. The new memory will be much faster. 16 GB will get you by with that database size. What OS are you planning on running?

 

CentOS


Unfortunately, my choices are 4, 8, 12, 16, or 32. Nothing in between 16 and 32. I'm actually at 5.2 GB for the total forum database size (not including anything else on this server), so 16 is probably where I need to go. It's probably a little small, but clearly better than what I have now - 32 is way over. I also think the faster memory should give a performance boost. 667 to 1333 should help - no?

 

16 is probably good. 24 would be better, if it were available. 32 is a waste of money. If it were the Microsoft stack, 32 would probably be the minimum.

 

I don't know that the difference between 667 and 1333 is going to be meaningful - I've never seen a meaningful difference in applications, at least. I think what's more important is matching the memory speed to the processor: the memory can't provide data to the Xeon 2620 faster than the chip will accept it, and the processor can only process what the memory provides. I don't know what the Xeon 2620 calls for, though. That's just a question of cost and efficiency - no need to spend money on performance you can't use. It ultimately shouldn't affect board performance, because your bottleneck is still database I/O.
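If anyone wants to check what the installed memory and CPU actually support, a quick sketch (needs root; output format varies by vendor):

```bash
# Speed and size of the installed DIMMs, as reported by the BIOS
dmidecode --type memory | grep -i -E 'speed|size|type:'

# CPU model, to look up its rated memory speed (the E5-2620 tops out at DDR3-1333)
grep -m1 'model name' /proc/cpuinfo
```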

