Server for FreeBSD 8.2 (cores-cores-cores)

Hello,

We are planning to move parts of our infrastructure to a new machine (from an old, rusty Core 2 Duo server) and I'm looking at those nice and shiny 12-core Opteron CPUs, specifically two Opteron 6172s. I wonder whether FreeBSD will scale correctly (24 cores, who could have imagined it 5-7 years ago...) and, if not, whether this can be fixed by tweaks (like those suggested here and here).
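For context, I'm assuming a stock GENERIC amd64 kernel; as a first sanity check once the box arrives I'd simply confirm that all 24 cores are detected and actually getting work scheduled on them, with something like:

[CMD=]sysctl hw.ncpu kern.smp.cpus[/CMD]
[CMD=]top -P[/CMD]

(top -P shows per-CPU usage, so it's easy to spot cores sitting idle under load.)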

Planned usage (multiple jails):
* web (heavy)
* DB (heavy)
* mail (light to medium)
* random junk that doesn't really use anything.

Suggestions?

Thanks in advance.
 
Instead of investing in one big machine, I'd get several smaller ones. That will allow you to split functionality up, which will make tuning a whole lot easier. Different functionality (DB vs. web, for instance) will have different requirements. You'll also have a chance to load balance and, at the same time, create an HA solution.
 
The server will be located at our ISP and we pay per U, so that's not a reasonable solution in our case. And that's the only place here where we can get a cheap 100 Mbit connection.

HA is not required at this time (regular 3-4 h maintenance at night is acceptable, even once a week). Maybe another server will be added later, but for now we will have to live with just one.
 
If you're using MySQL, just avoid ZFS. I had serious performance issues (slowdown) a couple of months after putting everything into production, with a relatively small dataset (~1 GB).

Moving MySQL files to UFS resolved the issue.
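In case it helps anyone doing the same move, it boils down to roughly the following (paths and the device are just examples; mysql_dbdir is, as far as I remember, the rc.conf knob the mysql-server port's startup script honours):

[CMD=]/usr/local/etc/rc.d/mysql-server stop[/CMD]
[CMD=]mount /dev/ada2p1 /ufs/mysql[/CMD]
[CMD=]cp -Rp /var/db/mysql/ /ufs/mysql/[/CMD]
[CMD=]chown -R mysql:mysql /ufs/mysql[/CMD]
[CMD=]echo 'mysql_dbdir="/ufs/mysql"' >> /etc/rc.conf[/CMD]
[CMD=]/usr/local/etc/rc.d/mysql-server start[/CMD]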
 
Hi,

I think the maximum number of CPUs on amd64 is 32, so as long as you have enough going on to occupy 24 cores, you will use 24 cores. If the server has many different heavy-usage roles, it is common to find disk I/O a bottleneck, so consider that when imagining all the things you'd like the box to run. I.e. make sure you have enough physical disks, or enough RAM to cache disk data, or some happy medium between the two, or your many CPU cores will spend most of their lives sitting around waiting for disk I/O...
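To see whether the disks really are the limiting factor once the box is loaded, watching the %busy column is usually enough, e.g. with:

[CMD=]gstat -a[/CMD]
[CMD=]iostat -x -w 1[/CMD]

If the disks sit near 100% busy while the CPUs are mostly idle, more spindles (or more RAM for caching) is where the money should go.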

ta Andy.
 
I couldn't agree more with AndyUKG!

You should seriously consider using fast SAS drives. Memory could also be a puzzle; it really depends on how much load you plan to have there, but I guess 32 GB would be a minimum. You should also consider jail resource allocation, which will be implemented in 9-RELEASE; work is being done in CURRENT.
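For reference, the resource limits work in CURRENT is exposed through rctl(8) (it needs a kernel built with the RACCT/RCTL options); a purely hypothetical rule set for a jail named www could look like:

[CMD=]rctl -a jail:www:memoryuse:deny=4g[/CMD]
[CMD=]rctl -a jail:www:maxproc:deny=500[/CMD]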
 
I'm planning 48 GB of memory and 15k SAS disks in RAID 1+0 for disk-I/O-hungry applications, plus regular 7200 rpm SATA drives, also in RAID 1+0, for everything else.

Thanks for answers everyone.

Last question: which SATA/SAS RAID controller has drivers and performs without issues? I've used gmirror before and it was more than enough for my needs, but now its poor performance is a concern. The hardware list in the Handbook helped, but personal experience is way better.
 
r_t_f_m said:
If you're using MySQL, just avoid ZFS. I had serious performance issues (slowdown) a couple of months after putting everything into production, with a relatively small dataset (~1 GB).

Moving MySQL files to UFS resolved the issue.

Hi,

Were you using InnoDB? Did you use any of the tips in this document from Sun/Oracle/MySQL?

http://blogs.oracle.com/realneel/entry/mysql_innodb_zfs_best_practices

Do you have any more info on what the issue was out of curiosity?

ta Andy.
 
AndyUKG said:
Hi,

Were you using InnoDB? Did you use any of the tips in this document from Sun/Oracle/MySQL?

http://blogs.oracle.com/realneel/entry/mysql_innodb_zfs_best_practices

Do you have any more info on what the issue was out of curiosity?

ta Andy.

Here is some more info.

Server : INTEL S3420GPC
CPU : Xeon(R) CPU X3430 @ 2.40GHz
Memory : 6GB
Hard drives : 2 x WDC WD1002FBYS-02A6B0 03.00C06 (ZFS mirror)
HDD controller : Intel 5 Series/3400 Series AHCI SATA controller (using AHCI driver)
ZFS filesystem version 4
ZFS storage pool version 15

~100 tables, each table in a separate file.

Due to the nature of the dataset we have a lot of queries with full table scans, and the tables (a mix of InnoDB and MyISAM) are constantly updated. For the same queries, the time went from under 1 sec to >20 sec.

At the beginning I thought it was related to file fragmentation, due to the COW nature of ZFS and the heavy updating we have, so I recopied all the tables on the same filesystem. No change. Created a separate filesystem: still no change.

During slow queries, [CMD=]gstat -a[/CMD] was showing an average reading speed of around 3-4 MB/s.

Reading all the files with [CMD=]cat * | dd of=/dev/null[/CMD] gave variable speeds, between 25 MB/s and 85 MB/s.

The recordsize is set to 16k, as suggested in the "MySQL InnoDB ZFS Best Practices" blog post. Other record sizes didn't change anything.

Moving all the files to a separate disk, but on UFS, restored the expected performance, and a direct read with [CMD=]cat * | dd of=/dev/null[/CMD] gave a reading speed of over 100 MB/s.

Server uptime was around 120 days when I did all tests.

I found some more info here: http://tinyurl.com/35ao32
 
r_t_f_m said:
(mix of InnoDB and MyISAM)

Did you split the MyISAM and InnoDB data into different volumes so you could turn ARC caching to metadata only for the InnoDB data to avoid double buffering?
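Something along these lines, I mean (dataset name made up), so the ARC keeps only metadata for the InnoDB files and the InnoDB buffer pool does the data caching:

[CMD=]zfs create tank/mysql-innodb[/CMD]
[CMD=]zfs set primarycache=metadata tank/mysql-innodb[/CMD]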

r_t_f_m said:
During slow queries, [CMD=]gstat -a[/CMD] was showing an average reading speed of around 3-4 MB/s.

gstat, of course, only shows you the data I/O to the physical disks; it doesn't tell you anything about what is being read from the ARC. How about %busy on the disks at this time: was that OK, or maxing out?
Were you monitoring the memory usage, i.e. MySQL, swap, ARC size? Were you able to rule out memory-related issues as a possible cause?
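(For the ARC side of it, the current size and the configured ceiling can be checked with something like [CMD=]sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max[/CMD], and [CMD=]swapinfo -h[/CMD] will quickly show whether the box was dipping into swap.)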

cheers Andy.
 
Hety said:
Last question: which SATA/SAS RAID controller has drivers and performs without issues? I've used gmirror before and it was more than enough for my needs, but now its poor performance is a concern. The hardware list in the Handbook helped, but personal experience is way better.

I'd be inclined to recommend ZFS, with at least two pools: one for the slow drives and one for the 15k drives. The comments from r_t_f_m are interesting, but I think that with the RAM you propose and a decent number of drives you will be fine with ZFS. ZFS can be very fast relative to UFS if you give it a decent amount of RAM; the ARC is a very intelligent caching mechanism which will speed up reads and in turn free up disk I/O time for writes, allowing good write performance too.
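As a rough sketch of what I mean (device names are placeholders only), a pair of striped mirrors per pool gives you the same RAID 1+0 layout you were planning:

[CMD=]zpool create fast mirror da0 da1 mirror da2 da3[/CMD]
[CMD=]zpool create slow mirror ada0 ada1 mirror ada2 ada3[/CMD]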

A hardware RAID solution with UFS would of course be another legitimate option (you will lose a number of important features by not using ZFS ;)). I can't advise on what controller to use, though, so I'll leave that for others to comment on...

ta Andy.
 
One thing to note about using ZFS with databases: you should set the ZFS recordsize property to match the database's record size, so that one record update in the database equates to one block write in ZFS. If you do not lock in the record size, ZFS tries to get too fancy by coalescing small writes into large writes, which will kill database performance.

And if you are using different table types, which will have different record sizes, you should split your database directory into separate ZFS filesystems with separate record sizes.
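As a sketch (dataset names and the MyISAM value are just illustrative; 16 KB matches the default InnoDB page size):

[CMD=]zfs create -o recordsize=16k tank/db/innodb[/CMD]
[CMD=]zfs create -o recordsize=8k tank/db/myisam[/CMD]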

This is covered in the best practices guide listed above.
 