Squid on FreeBSD

Hi

I want to set up Squid on FreeBSD 7-STABLE for about 50 Mbit/s of web traffic.
Please advise on hardware spec and Squid version.

Is the following spec OK?:
--------------------------
CPU : E2200 Intel
RAM : 4 GB DDR2 800 Mhz
HDD : 2 × 160 GB SATA
--------------------------

Regards
 
For 50 Mbit/s sustained this looks under-dimensioned. You will need more disks, preferably 15K RPM SCSI disks as a JBOD, and certainly not any form of RAID; even RAID0 will get trashed by the amount of disk activity. This setup is a bit too 'desktoppy' to cope with hundreds of disk accesses per second, and Squid runs much better with much more RAM: much faster responses thanks to in-memory caching, and fewer disk accesses.

To give you an idea of what to expect: I'm currently about 12 hours into a 30-45 Mbit/s day (the busiest period is yet to come), and Squid has already served 8 million URLs. Even with six 15K SCSI disks (six separate cache_dirs), disk writes sometimes burst to 500 tps per disk, and that is with a 32 GB cache_mem serving up most of the hits; without it there would be many more disk reads and writes.

Don't underestimate the brute force of 50 Mbit/sec ;)
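To put those numbers in perspective, here is a rough back-of-envelope calculation based only on the figures quoted above (8 million URLs in ~12 hours at 30-45 Mbit/s); it is an estimate, not a measurement:

```python
# Rough sizing from the numbers quoted above -- an estimate, not a measurement.
urls_served = 8_000_000          # URLs served in roughly 12 hours
seconds = 12 * 3600
req_per_sec = urls_served / seconds
print(round(req_per_sec))        # ~185 requests/second sustained

# Taking 40 Mbit/s as the middle of the 30-45 Mbit/s range,
# the implied average object size is:
bits_per_sec = 40_000_000
avg_object_bytes = bits_per_sec / 8 / req_per_sec
print(round(avg_object_bytes))   # ~27000 bytes per object
```

Close to 200 requests per second, every second, all day: that is the "brute force" a desktop-class disk subsystem has to absorb.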
 
Thanks for your help.
I will check this out and try to upgrade my hardware.
Do you know about tproxy on FreeBSD? Is it possible to run tproxy on FreeBSD, and with a single NIC?
 
Olav, this is not about reverse proxying, but about 'regular' proxying (local users accessing the Internet through a proxy).

Andre, I normally use diskd on several UFS2-formatted disks.
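For reference, each diskd cache directory is declared per disk in squid.conf; a minimal sketch (the paths and sizes here are made up, not the actual config):

```
# One diskd cache_dir per UFS2-formatted disk; 120000 is the cache size
# in MB, 16 and 256 are the first- and second-level directory counts.
cache_dir diskd /cache1 120000 16 256 Q1=64 Q2=72
cache_dir diskd /cache2 120000 16 256 Q1=64 Q2=72
```

The optional Q1/Q2 values bound the diskd message queues, so Squid backs off before a disk is hopelessly overloaded.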
 
Well, 500 tps is not that much for a 15K drive. Have you considered ZFS? It may help here, as I/O can be better optimized at the filesystem level, and you could benefit greatly from a separate ZIL. I haven't played with Squid for a few years and just got curious :)

I am also a bit worried about your statement that Squid will trash RAID0. Of course Squid does not need RAID for redundancy, but why would it trash RAID0?
 
Every read/write touches every disk unnecessarily, instead of separate disks sharing the load by each handling separate files. There is also an administrative penalty for striping every file on write and reassembling it on every read. It's better to create multiple Squid cache directories on multiple disks and assign whole files to them; as an added bonus, you don't lose your entire cache when one cache disk fails.
 
I was wondering if someone has tested Squid on ZFS, not necessarily with redundancy, and probably with a lot more RAM. The idea is that ZFS can (and will) reorder and group writes, and thus achieve better utilization of the disk bandwidth.

You are probably considering the case of raidz, which indeed behaves this way. But there is no reason to use redundancy of any sort for disposable data such as the Squid cache.
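If one did try ZFS for a disposable cache, the pool would simply be a plain stripe with no raidz or mirror; a sketch with hypothetical device names:

```
# Plain striped pool across the cache disks -- no raidz/mirror,
# since the Squid cache is disposable anyway:
zpool create squidcache da1 da2 da3

# Avoid extra metadata writes on every cache read:
zfs set atime=off squidcache
```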

To my surprise, ZFS on a USB flash drive results in much more responsive I/O than UFS. USB flash drives are usually severely limited in write IOPS.
 