Swap partitions on multiple SCSI drives

Hello all.

I plan to install FreeBSD on a PC with 2 SCSI drives. The Handbook says:

On larger systems with multiple SCSI disks (or multiple IDE disks operating on different controllers), it is recommended that a swap is configured on each drive (up to four drives). The swap partitions should be approximately the same size.

What I don't see is a recommendation for this size. Should each be twice as large as the system's RAM? I would find this somewhat excessive but then again, I don't know. Can anyone share knowledge and/or experience on this?
If you ask me, I'd say it's obsolete.

I have 2 GB of RAM and only 256 MB of swap.
I was thinking of 64 MB of swap, but HDD space is so cheap.
No point in swap = 2-3x RAM; it's never used (well, for me).

I use FreeBSD as a desktop.

The best thing I have read about swap is in the documentation on the OpenBSD.org homepage.

I suggest you find it ;)
It's still recommended to make swap twice the size of your RAM...
I don't know exactly why that is still so, but the swap partition is used, for example, for kernel dumps...
I think having swap partitions split across multiple hard drives is just for performance. If your system is not going to use swap extensively, I wouldn't add swap partitions on other drives; it's a waste of space. Today's computers have enough RAM to not use swap at all...
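On the kernel-dump point: a minimal sketch of wiring a swap partition up as the dump device via FreeBSD's standard rc.conf knobs (the device name here is only an example):

```shell
# /etc/rc.conf -- enable crash dumps to a swap device (example device name)
dumpdev="/dev/da0s1b"   # swap partition used as the dump device
dumpdir="/var/crash"    # where savecore(8) writes the dump at boot
```

For a full, uncompressed dump the device needs to be at least as large as RAM, which is one reason the old sizing rules stick around.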
Danger, do you think anyone would be interested in digging through my 2 GB memory dump today?
I don't think so.

Besides, on my machine I have never seen more than about 16 MB of swap used [and that was extreme]; usually it's a few KB.

I tried a swap-less setup and within a few days I returned to swap, because once I used ImageMagick to convert a few images and ran out of RAM.
X crashed (or the entire system, I don't remember anymore).
More info

Thanks for the replies!

I should have provided more information and background, I think...

Here goes: the PC in question has been put together from obsolete spare parts that friends and colleagues donated :). It has 512MB of RAM and 2 SCSI hard disks of 17GB and 9GB, respectively. Reserving 1GB of swap on each is not something I would make a fuss over.

My question, put in a more concrete context, is:
If only one of the two disks has a swap partition, will the performance of the other disk suffer? If so, do I remedy this by having swap on both disks?
You should make your swap, put BSD on the machine, and then test the machine for a while to see how much it actually writes to swap.

If it uses swap very little [like mine, rarely more than a few KB of swap], make a few hundred MB of swap on one partition; if it uses swap a lot, make a bigger swap and divide it between both disks.

Remember: the bigger the swap and the more it's used, the slower the system can get.

For example, if you have 512 MB of RAM and 1 GB of swap, and all 1 GB of swap is in use, then the system will probably run only as fast as disk I/O allows.

Again, I suggest you look in the OpenBSD.org FAQ;
there's good info on swap there.

I found it:
read about the b partition and swap.
From tuning(1):
You should typically size your swap space to approximately 2x main memory. If you do not have a lot of RAM, though, you will generally want a lot more swap. It is not recommended that you configure any less than 256M of swap on a system and you should keep in mind future memory expansion when sizing the swap partition. The kernel's VM paging algorithms are tuned to perform best when there is at least 2x swap versus main memory. Configuring too little swap can lead to inefficiencies in the VM page scanning code as well as create issues later on if you add more memory to your machine. Finally, on larger systems with multiple SCSI disks (or multiple IDE disks operating on different controllers), we strongly recommend that you configure swap on each drive. The swap partitions on the drives should be approximately the same size. The kernel can handle arbitrary sizes but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across the N disks. Do not worry about overdoing it a little, swap space is the saving grace of UNIX and even if you do not normally use much swap, it can give you more time to recover from a runaway program before being forced to reboot.
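The sizing rule quoted above boils down to simple arithmetic; a toy sketch using the 2x guideline and the 256M floor (the RAM figure is just an example value):

```shell
# Toy swap-sizing sketch per the quoted guideline (example RAM size).
ram_mb=512                      # example: 512 MB of RAM
swap_mb=$(( ram_mb * 2 ))       # "approximately 2x main memory"
if [ "$swap_mb" -lt 256 ]; then # "not ... any less than 256M"
    swap_mb=256
fi
echo "suggested swap: ${swap_mb}M"
```

For the 512 MB machine in question, that comes out to 1 GB of swap, which matches what's being discussed below.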
It's man tuning(7), not 1.
And I still believe it's VERY out of date.

BTW, at the moment I'm running Transmission, rebuilding the system, running scp, etc.,
and only 112K of swap is used.

I suggest testing the system for a month, then adjusting swap. However, if you don't care about wasted disk space, use the traditional method.
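A hypothetical sketch of that "test for a month" approach: append `swapinfo -k` output to a log now and then (from cron, say), then pull the peak "Used" figure out afterwards. Sample log lines stand in for real output here:

```shell
# Hypothetical: find peak swap use from accumulated `swapinfo -k`
# "Total" lines; the three sample lines below stand in for a real log.
peak=$(printf '%s\n' \
    'Total 1048576 12 1048564 0%' \
    'Total 1048576 48 1048528 0%' \
    'Total 1048576 16 1048560 0%' |
  awk '/^Total/ { if ($3 + 0 > max) max = $3 + 0 }
       END { print max }')
echo "peak swap used: ${peak} KB"
```

If the peak stays in the KB range after a month of normal use, a few hundred MB on one disk is plenty; if it climbs into the hundreds of MB, split a larger swap across both disks.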
And I still believe it's VERY out of date

And I'm with you. At one point in time, the biggest advantage of a hard drive over a floppy drive was not speed, it was capacity. Later, as processors and disks improved, there was a very big difference in speed; the hard drive responded nearly as fast as RAM. The mitigating factor up to that point was the processor.

I recall the president of Sun confidently stating at a COMDEX keynote, "The 486 is the fastest CISC processor that will ever be made." That was before he knew anything about it. His reasoning was that no processor with variable-length instructions could ever be pipelined. A few months later the 486 revealed his ignorance and could do far more at 25 MHz than the hottest AMD 386 could do at 40 MHz. A CISC instruction that required 6 clock cycles could accomplish the same work that took the RISCs 200, which is why CISC, even with pipeline flushes, is still viable today for more than just Windows compatibility. Our ?NIX server often ran better on the Intel architecture.

Ever since the 486, feeding the processor from RAM has been a major problem. For the most part, though, all the way through the Pentium III, 512 MB of RAM was all they would support, and 256 was more common. Things have changed a lot since then. A few years ago AMD gave Intel a thrashing when Intel couldn't adequately feed their processors from memory. Intel has only recovered from most of that in the past year or so.

So if feeding the processor at RAM speeds is such a huge problem today, how is it that we can somehow make use of gigabytes of disk space to compensate for RAM shortages? Either I'm missing something here, or the hard drive could only help for small snippets that need to be accessed very infrequently. Forget using the hard drive for swap for a moment: can you even imagine today's processors getting anything done without using substantial RAM for disk caching?
Swap would soon end up being a performance governor since no matter how much physical RAM you have, the amount of virtual memory you can use will be limited by the speed at which you can move memory contents back and forth between disk and RAM. With the processor speeds vs. disk speeds today, the amount of useful virtual memory you could actively swap would be a pretty low number. Is it possible that 2 times RAM was meant for the Pentium III and earlier where memory sizes and CPU speeds enabled virtual memory on the hard drive to be of practical benefit?

I'd like to hear from a FreeBSD developer where the gigabytes of dead wood are coming from that can be swapped out of RAM, because they surely cannot be participating in active processes where they need to be swapped in and out, let alone be part of the kernel. The best use for swap that I can think of today for a server is as an indicator that you need more RAM, because the kernel decided that moving stuff out was more efficient than giving up any more disk caching buffers. Where I can see it working is on a desktop where you have two dozen programs in the background doing nothing while you are working in one or editing a big file. In that environment switching occurs every 20 seconds or more instead of 1,000 times a second.
I use FreeBSD 7.x on a Phenom 9550 with 4 GB as a workstation, doing multiple heavy text-processing jobs and inline graphic creation/conversion using convert from ImageMagick. X and SeaMonkey (web and email) are always running, and I have a dozen or so xterms open to other servers.

The most swap I've ever seen used has been 2.5 MB. A few months ago, when adding a new SATA system disk, I set my swap partition at 1 GB and it has been more than adequate.

Of course, YMMV.. It really comes down to what *you* will be doing on your system (and as someone else mentioned, FreeBSD has been known to panic if it runs out of memory and swap.)
The most swap I've ever seen used has been 2.5 MB... FreeBSD has been known to panic if it runs out of memory and swap.

The only time I've heard of virtual memory trouble is with no swap. I just logged into one of our servers, which serves between 6,000 and 15,000 pages per minute outbound. A quarter to a third of our traffic is getting information from around the world to assemble the pages, and we need to create .png image maps. Swap currently stands at 316K, but it's Saturday, so traffic will be around 6,000 pages per minute. The most I've ever seen used is 1,016K with a server load of 167. Your 1 GB recommendation is over 1,000 times that, which makes it valid in my book.

I find it interesting that every time these discussions come up, there will be comments about what someone else said. I haven't seen the same Oracle recommendation twice. The only rational reason I've heard is core dumps. One might better ask: where are all the horror stories saying, "When I went from 1 GB to 4 GB of RAM, I ran out of swap and my kernel panicked"? I haven't seen one.
Disk space is cheap && better safe than sorry.

A simple workaround is to use a swap file, so that if you find you need a larger or smaller swap you can change it rather easily, although I hardly think crash dumps would work well there.
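A sketch of that swap-file route on FreeBSD of this era, assuming the rc.conf `swapfile` knob (path and size are example values):

```shell
# One-time: create the backing file (example: 1 GB at /usr/swap0)
#   dd if=/dev/zero of=/usr/swap0 bs=1m count=1024
#   chmod 0600 /usr/swap0
# Then in /etc/rc.conf, so it is attached via md(4) at boot:
swapfile="/usr/swap0"
```

Resizing later is just recreating the file with a different count, which is the flexibility argument made above.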
Maybe they should practice what they preach. I just did a FreeBSD 7.1 64-bit setup a few minutes ago. The machine has 4 GB. For grins, I pressed "A", expecting auto-partition to give me 8 GB for the swap. It surprised me by allocating 4 GB for swap. Curious, I checked how much disk space was being used on one of our well-used servers that had been upgraded through 7.0 from 6.0 and had lots of web server software on it. The /boot and /var partitions were allocated at almost exactly twice what was being used, and we have 2 engineering companies who are mail hogs and at least one DB that I loaded for someone from a 1.5 GB SQL file, plus normal sites. Plus there are logs and AWStats for the 6,000-to-15,000-page-per-minute web site. So their sizing seemed about right for those, and who's to say that the only reason for it being as large as 4 GB is a core dump.

Everybody wants a reliable server, but from time to time procedures need to be reevaluated to ensure they are still appropriate. The rule of thumb dates back to the 5 1/4" floppy days, and I just caught FreeBSD not using it themselves in a 7.1 install.

In the Linux world a lot of focus is on getting the Third World nations into computing, as well as those of us who live in, say, America.
I would be wealthy by Third World standards, but in America I simply could not afford new computer equipment. Knowing the nature of corporate marketing, I'd be very surprised if they offered any kind of discount on computer equipment just because people can't afford it.

There's a few shining examples, the AMD-India connection. But those examples shine brightly in a really dark Corporate Universe.

My machine is what's technically referred to as a "dinosaur": the BIOS date is 1999. No APIC, ACPI, SATA support or any of those other groovy initials. I got it as salvage because somebody who had more money than brains didn't know how to reseat a memory stick, got a blank screen and a beep code, and threw it away.

The maximum RAM I could put into this is 1.5 GB, and that assumes I could afford 3 512 MB sticks. I can't even afford one. I have 2 256s and one 128. I also have some legacy 68-pin SCSI drives, two of them, also salvaged. A fairly fast SCSI controller and a video card with 32 MB of video RAM. Also salvaged. I did pay for the IDE DVD burner and the second IDE hard drive. The rest of my expenses are in labor and pure skill.

There's a big issue now as to what exactly to do with the "legacy" machines that don't belong in landfills but that the American market doesn't have much use for.

The suggestion, and it's a good one, has been made that they be given or sold cheaply to school systems in Third World nations to teach the kids, and the adults as well, to repair, condition, configure and use the computers.

It worked in India and China because the governments (just not most of the citizens) had the advantage of a marketplace where companies like AMD could import the facilities for building the components, and facilities and equipment to train the people.

One of the suggestions, hotly contested by our friends at Microsoft, is to have senior-citizen and dinosaur computers run as workstations off more modern but still legacy computers. Giving the computers to these nations as an outright gift would be a lot cheaper than storing them indefinitely in toxic-waste facilities.

Which is what they would be. It costs more to store them than to simply give them away.

The reason Microsoft objects is the client computers wouldn't run any supported version of Windows reliably. Some in the Linux community suggest that NO computer will run Windows reliably.

That essentially leaves Unix clones.

So suggesting that "nobody" would benefit from the hardware tweaks that make such computing possible is kind of missing a Huge demographic in the world.

I can't even afford ONE 512 stick. In the richest country in the world, the poor can't afford modern computers. And I'm not even the poorest of the poor here.

If you look at it from my viewpoint (try it sometime), the notion that "simply EVERYBODY" has a modern computer or can obtain one is really insulting, derogatory and elitist.

If we're to build a new elite based on talent (meritocracy, it's called), then we have to dispose of the really obsolete notion driving the First World economies, that of "simply EVERYBODY can afford"...

It's not even the case in America and certainly doesn't extend beyond our borders. That being said, read the next post.
danger@ said:
I think having swap partitions on split on more hard drives is just because of performance. If your system is not going to use swap extensively, I wouldn't add swap partitions on other drives, it's a waste of space. Today's computers have enough RAM to not use swap at all....

What about gstripe?
Which method is better: three 1 GB swap partitions on 3 identical disks, or one 3 GB swap on a gstripe volume over those partitions?
Applying it...

Now, that being said, I've got two SCSI hard drives and two IDE HDs on this computer. I'm not running BSD, but it is a Unix-like system: Ubuntu 9.10 Karmic.
What drew me to this discussion was googling for some kind of utility that would increase the speed of paging to a SCSI drive the way hdparm did for IDE.

I do have a swap partition on each drive, and it enables me to do things that I'm really not supposed to be able to do given the processor speed (500 MHz), the video RAM and, of course, the system RAM.

For instance, I can run Google Earth and certain flight simulators, and graphics applications like xsane, the Gimp, lives, and similar applications. I scanned an image last week that was like looking through a microscope and had a raw size of 1 GB. Obviously 1 GB is more than 650 MB, which is the limit of my RAM as it stands today. I could fit it into what the board would support with the three 512 sticks, but then I wouldn't have enough RAM left to run my OS, now would I? Loading the image file and then loading the Gimp to resize it would cost me more RAM than I would have at best, let alone with only about a third of that.

So, to the meat of the matter: if I could modify the swap usage parameters so that I start swapping earlier (done) yet still not burn up my drives (the part sdparm would help with, if it has such a function), I could see an immediate personal advantage. Maybe be able to compile in less than an hour or two...

And Free Software being what it's supposed to be, recontribute that to the community.

And it wouldn't be just me that would be helped.
It would also be extra nice if somebody could figure out a way to quickly explain the process in terms readily understandable to the people for whom I rebuild and sell computers, mostly first-time computer customers and just about all of them first-time *nix users.
vermaden said:
What about gstripe?
Which method is better: three 1 GB swap partitions on 3 identical disks, or one 3 GB swap on a gstripe volume over those partitions?

The three separate swap partitions would probably be best. And if two of them are IDE, have them on separate controllers.

I get my personal best performance that way.

I'd like to try it out on a SATA enabled system too.
Don't use hardware/software RAID0 for swap, for one very simple reason: if one disk dies, you lose all your swap space in one go. Not sure what would happen, but you might end up with a nice panic ;)

It's best to just set multiple swap entries in /etc/fstab and let the system figure it out.

Device          1K-blocks     Used    Avail Capacity
/dev/ad4s1b        262144       12   262132     0%
/dev/ad5s1b        262144        8   262136     0%
/dev/ad6s1b        262144       16   262128     0%
/dev/ad7s1b        262144       12   262132     0%
Total             1048576       48  1048528     0%
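For reference, the /etc/fstab entries behind a listing like the one above would look roughly like this (device names taken from the output; a sketch):

```shell
# /etc/fstab -- one swap entry per disk; the kernel interleaves
# page-outs across them on its own, no striping layer needed.
/dev/ad4s1b   none   swap   sw   0   0
/dev/ad5s1b   none   swap   sw   0   0
/dev/ad6s1b   none   swap   sw   0   0
/dev/ad7s1b   none   swap   sw   0   0
```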