Transparent proxy + Squid 3 + pf not working

Hi,
I'm setting up a transparent proxy with pf and Squid 3 on FreeBSD 8.
When I configure the proxy in my browser, the internet works, but without that configuration it doesn't.


I believe my squid.conf and pf.conf are almost correct.

This is my pf.conf:

Code:
EXTIF="bge0" #recebe a internet
INTIF="bge1" #compartilha..rede interna

set skip on lo0
scrub in all

nat on $EXTIF from !($EXTIF)->($EXTIF:0)

# rdr rules

rdr on $INTIF inet proto tcp from any to any port www -> 127.0.0.1 port 3128

pass in on $INTIF inet proto tcp from any to 127.0.0.1 port 3128 keep state
pass out on $EXTIF inet proto tcp from any to any port www keep state

pass in quick on { lo0 $INTIF } all
pass out quick on $EXTIF inet proto {tcp,udp} from any to any keep state

# allow ssh and http from outside to this machine
pass in quick on $EXTIF inet proto tcp to $EXTIF port { http ssh } flags S/SA keep state

#pass out all

pass out quick on $EXTIF inet proto { tcp,udp,icmp} all


All permissions were configured, in /etc/devfs.conf and on /dev/pf.
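
For reference, this is roughly the kind of setup I mean (assuming the port created a 'squid' user and group; adjust the names and modes to whatever your installation uses):

Code:
# /etc/devfs.conf - let the squid group access /dev/pf for the transparent rdr lookups
own     pf      root:squid
perm    pf      0660

# /etc/rc.conf
pf_enable="YES"
gateway_enable="YES"    # forward packets between bge0 and bge1
squid_enable="YES"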

This is my squid.conf:

Code:
http_port 3128 transparent

cache_mem 1000 MB 
cache_swap_low 90
cache_swap_high 95
cache_dir ufs /var/spool/squid 45000 16 256

maximum_object_size 30000 KB
maximum_object_size_in_memory 40 KB

access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
pid_filename /var/log/squid/squid.pid 

memory_pools off

diskd_program /usr/local/squid/diskd
unlinkd_program /usr/local/libexec/squid/unlinkd

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
quick_abort_max 16 KB
quick_abort_pct 95
quick_abort_min 16 KB
request_header_max_size 20 KB
reply_header_max_size 20 KB
request_body_max_size 0 KB

#Defaults:
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 1025-65535
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
acl minharede src 192.168.1.0/255.255.255.0
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

acl NOCACHE url_regex "/usr/local/etc/squid/direto"\?
no_cache deny NOCACHE

acl negapalavra url_regex "/usr/local/etc/squid/proibidos"
acl liberapalavra url_regex "/usr/local/etc/squid/livres"
http_access allow liberapalavra all
http_access deny all
http_access deny negapalavra all
http_access allow minharede
http_access deny all

cache_mgr *@*
cache_effective_user squid
cache_effective_group squid

Thanks.
Orige
 
It should be:
Code:
http_port 127.0.0.1:3128 transparent
Look at /usr/local/squid/logs/cache.log for errors.
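
If it starts cleanly, you can confirm what is actually listening on the port with the base sockstat(1), for example:

Code:
# sockstat -4 -l | grep 3128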
 
Almost there..

OK, now there is another error.
The browser doesn't work now, even when I configure the proxy in it.
Look at the cache.log:

Code:
2010/02/01 14:42:37| Starting Squid Cache version 3.0.STABLE19 for amd64-portbld-freebsd8.0...
2010/02/01 14:42:37| Process ID 1091
2010/02/01 14:42:37| With 11072 file descriptors available
2010/02/01 14:42:37| Performing DNS Tests...
2010/02/01 14:42:37| Successful DNS name lookup tests...
2010/02/01 14:42:37| DNS Socket created at 0.0.0.0, port 42871, FD 7
2010/02/01 14:42:37| Adding domain store from /etc/resolv.conf
2010/02/01 14:42:37| Adding nameserver 10.1.1.1 from /etc/resolv.conf
2010/02/01 14:42:38| Unlinkd pipe opened on FD 12
2010/02/01 14:42:38| Swap maxSize 46080000 + 1024000 KB, estimated 3623384 objects
2010/02/01 14:42:38| Target number of buckets: 181169
2010/02/01 14:42:38| Using 262144 Store buckets
2010/02/01 14:42:38| Max Mem  size: 1024000 KB
2010/02/01 14:42:38| Max Swap size: 46080000 KB
2010/02/01 14:42:38| Version 1 of swap file without LFS support detected... 
2010/02/01 14:42:38| Rebuilding storage in /var/spool/squid (DIRTY)
2010/02/01 14:42:38| Using Least Load store dir selection
2010/02/01 14:42:38| Current Directory is /usr/local/etc/squid
2010/02/01 14:42:38| Loaded Icons.
2010/02/01 14:42:38| commBind: Cannot bind socket FD 14 to 127.0.0.1:3128: (48) Address already in use
FATAL: Cannot open HTTP Port
Squid Cache (Version 3.0.STABLE19): Terminated abnormally.
CPU Usage: 0.011 seconds = 0.006 user + 0.006 sys
Maximum Resident Size: 9696 KB
Page faults with physical i/o: 0

Thanks.
 
You've probably started Squid twice. Use # squid -k shutdown or # /usr/local/etc/rc.d/squid stop and wait at least a minute, because Squid shuts down its operation in the background. You can [cmd=]tail -f /usr/local/squid/logs/cache.log[/cmd] to see when it has actually stopped.
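
For example, something like this will show whether a leftover Squid process is still holding port 3128:

Code:
# pgrep -l squid

Wait until it returns nothing (or kill the stray PID) and only then start Squid again.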

And if you want to use Squid as a transparent proxy, stop setting the proxy in the browser! It has no added value.
 
I'd use something like this for the listen parameter. I found people were able to use my Squid as a springboard from the external interface when it was not excluded by this rule.

Code:
# from squid.conf
# Squid normally listens to port 3128
http_port 192.168.1.2:3128

# pf rule

#squid
rdr on $int_if inet proto tcp from any to any port www -> 127.0.0.1 port 3128
 
So how is Squid supposed to listen on localhost when you don't tell it to listen there? Transparent proxies belong on localhost.
 
Almost there..
The cache.log seems normal, but my internal network still can't reach the internet.
See the cache.log:

Code:
2010/02/01 15:56:40| Starting Squid Cache version 3.0.STABLE19 for amd64-portbld-freebsd8.0...
2010/02/01 15:56:40| Process ID 1592
2010/02/01 15:56:40| With 11072 file descriptors available
2010/02/01 15:56:40| DNS Socket created at 0.0.0.0, port 61942, FD 7
2010/02/01 15:56:40| Adding nameserver 201.10.120.3 from squid.conf
2010/02/01 15:56:40| Adding nameserver 201.10.1.2 from squid.conf
2010/02/01 15:56:41| Unlinkd pipe opened on FD 12
2010/02/01 15:56:41| Swap maxSize 46080000 + 1024000 KB, estimated 3623384 objects
2010/02/01 15:56:41| Target number of buckets: 181169
2010/02/01 15:56:41| Using 262144 Store buckets
2010/02/01 15:56:41| Max Mem  size: 1024000 KB
2010/02/01 15:56:41| Max Swap size: 46080000 KB
2010/02/01 15:56:41| Version 1 of swap file without LFS support detected... 
2010/02/01 15:56:41| Rebuilding storage in /var/spool/squid (DIRTY)
2010/02/01 15:56:41| Using Least Load store dir selection
2010/02/01 15:56:41| Current Directory is /usr/local/squid/logs
2010/02/01 15:56:41| Loaded Icons.
2010/02/01 15:56:41| Accepting transparently proxied HTTP connections at 127.0.0.1, port 3128, FD 14.
2010/02/01 15:56:41| HTCP Disabled.
2010/02/01 15:56:41| Ready to serve requests.
2010/02/01 15:56:41| Done reading /var/spool/squid swaplog (92 entries)
2010/02/01 15:56:41| Finished rebuilding storage from disk.
2010/02/01 15:56:41|        92 Entries scanned
2010/02/01 15:56:41|         0 Invalid entries.
2010/02/01 15:56:41|         0 With invalid flags.
2010/02/01 15:56:41|        92 Objects loaded.
2010/02/01 15:56:41|         0 Objects expired.
2010/02/01 15:56:41|         0 Objects cancelled.
2010/02/01 15:56:41|         0 Duplicate URLs purged.
2010/02/01 15:56:41|         0 Swapfile clashes avoided.
2010/02/01 15:56:41|   Took 0.03 seconds (3406.02 objects/sec).
2010/02/01 15:56:41| Beginning Validation Procedure
2010/02/01 15:56:41|   Completed Validation Procedure
2010/02/01 15:56:41|   Validated 209 Entries
2010/02/01 15:56:41|   store_swap_size = 730
2010/02/01 15:56:42| storeLateRelease: released 0 objects
 
Any errors in the web browser or in the log files? Have you tried tcpdump(1) on your internal and loopback interfaces to see where requests go (or don't go)? Did you build Squid with the correct options in 'make config'?
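
For example (assuming bge1 is still the internal interface from your pf.conf):

Code:
# tcpdump -ni bge1 tcp port 80 or tcp port 3128
# tcpdump -ni lo0 tcp port 3128

If the client requests show up on bge1 but nothing ever appears in Squid's access.log, the rdr rule is not matching.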
 
I think the problem is the redirection.

When I set up my client browser to use my Squid proxy (192.168.1.1:3128), it works and I can connect to the internet.

On the server machine I use tcpdump -i em1 to view the packets.
I see entries like:

Code:
192.168.1.10.55493 > 192.168.1.1.3128
192.168.1.1.3128 > 192.168.1.10.55493

I can assume that it works, since my client browser has the server address and port number in its proxy settings.

OK, now, when I configure the client browser not to use any proxy, and therefore test the transparent part, I can't connect to the internet. On the server machine I again use tcpdump -i em1:

Code:
192.168.1.10 > mypublicIP.domain: 34202+ A? www.google.com.mydomain
and so I wasn't able to connect to the internet.

I'm thinking that it is about the pf.conf. Here is mine:
Code:
i="em1"
x="em0"
lan="192.168.1.0/24"
gw="192.168.1.1"
squid="3128"

rdr on $i inet proto tcp from $lan to any port www -> $gw port $squid
pass in on $i inet proto tcp from $lan to $gw port $squid keep state
pass out on $x inet proto tcp from any to any port www keep state

Is there something wrong with my pf?

Thanks

By the way, I checked Squid's cache.log and it shows a line like this:
Code:
Accepting transparently proxied HTTP connections at 192.168.1.1, port 3128


My squid.conf:
Code:
http_port 192.168.1.1:3128 transparent
 
I have it working now...

The problem was name resolution; transparency works after the setup above. I wasn't able to see that it works because name resolution was failing in my setup.
 
Dutch,
tcpdump shows me everything is normal, but now the proxy is almost working..
Sites that I did not put in my rules load, yet others that are also not in the rules do not work.
When the proxy comes back to work, the sites it can reach sometimes load and sometimes do not.

Make config is all right: I selected the pf integration and otherwise the default installation options.
 
I found that when it stops, this message is logged:
Code:
Limiting icmp unreach response from 202 to 200 packets/sec
So the proxy has stopped.
How do I resolve this?

When it is working and I enter a new site, for example bunalti.com,
it does not load; then I wait a bit and refresh,
and then it works.
Why is this happening?
Can I fix this?



Sorry for my confusion..
Regards.
 
Try:

Code:
http_access allow liberapalavra all
http_access deny negapalavra all
http_access allow minharede
http_access deny all

If 'minharede' is the only network on the internal interface and you want to apply the 'libera/nega' rules to the entire network, use:

Code:
http_access allow liberapalavra minharede
http_access deny all

If that doesn't work, explain what exactly you are trying to achieve with these rules. Do you want to allow everyone out to 'safe sites'? Do you want to allow a separate privileged network access to every site, blocking everyone else from bad sites while allowing access to good sites? This is totally unclear to me.

Simply start out with

Code:
http_access allow minharede
http_access deny all

and insert/remove extra rules until you get what you want, e.g.

Code:
http_access deny negapalavra minharede
http_access allow minharede
http_access deny all

will allow minharede to access every site, except ones listed in negapalavra.
 
Alright..

The rules are in this order:
Code:
http_access allow liberapalavra minharede
http_access deny all
http_access deny negapalavra minharede
http_access allow minharede
http_access deny all
Everything that is not in 'liberapalavra' is blocked... so that makes the 'negapalavra' rule pointless, right?

It is already running, but there is still this little problem of having to refresh a page before the site loads.
That message 'Limiting icmp unreach response from 202 to 200 packets/sec' appears even though I do not block anything except through the 'liberapalavra' rule.
Can this be resolved?
 
I still do not understand exactly what you are trying to achieve, but I'll try some scenarios:

1. You want 'minharede' to access only the sites in 'liberapalavra', and nothing else:

Code:
http_access allow liberapalavra minharede
http_access deny all

2. You want 'minharede' to access every website, except the ones in 'negapalavra':

Code:
http_access deny negapalavra minharede
http_access allow minharede
http_access deny all

3. You want 'minharede' to access the sites in 'liberapalavra', not the sites in 'negapalavra' (this assumes some overlap); no other sites are allowed:

Code:
http_access deny negapalavra minharede
http_access allow liberapalavra minharede
http_access deny all

4. You want 'minharede' not to access the sites in 'negapalavra', yet allow the sites in 'liberapalavra'; no other sites are allowed:

Code:
http_access allow liberapalavra minharede
http_access deny negapalavra minharede
http_access deny all

If I look at your current rules, only access to sites in 'liberapalavra' is allowed:
Code:
http_access allow liberapalavra minharede
[B]http_access deny all[/B] <---- this stops everything else, the rest is ignored
http_access deny negapalavra minharede
http_access allow minharede
http_access deny all


Hope this helps.
 
OK, I configured it like the first scenario..
And the proxy is working..
But there is this error:
Code:
Limiting icmp unreach response from 202 to 200 packets/sec
Why does it happen?

And why, when I enter a site, does it not load at first, until I refresh the page and then it loads?

Thanks for your attention!
 
Sometimes the proxy stops working, while cache.log, tcpdump and top show that everything is normal.
 
If you use the first scenario then a visit to an allowed website will succeed, but if that website contains elements (like pictures) from other websites, these will not load. This is extremely limiting.

I don't know what the "ICMP unreachable" packets are about. Squid does not generate them as far as I know. You could try running a 'tcpdump' for 'proto ICMP' on your interfaces and see which IP addresses cause this, and work from there.
 
OK, I understand.
I chose the second scenario and it works fine.

But why does the proxy sometimes stop?
I guess the 'Limiting icmp unreach response from 202 to 200 packets/sec' message may be causing it.
Do you have any idea?

Thanks..
 
No idea. You appear not to have any block rules in your pf, so I don't know what else causes those messages, unless there's firewalling going on elsewhere. Again, tcpdump should show you the source/cause of those messages.

# tcpdump -s 0 -pnli bge0 'icmp[icmptype] = icmp-unreach'
# tcpdump -s 0 -pnli bge1 'icmp[icmptype] = icmp-unreach'

Example output:

Code:
18:18:09.203166 IP 11.22.33.44 > 44.33.22.11: ICMP 11.22.33.44 udp port 9095 unreachable, length 36

IP 11.22.33.44 tells IP 44.33.22.11 that its attempt to reach 11.22.33.44's UDP port 9095 failed because that port is unreachable.
 
Yes, I have set almost nothing in my pf.conf because I just want to test the transparent proxy before deploying it for real.

tcpdump -s 0 -pnli bge1 proto ICMP shows this:

Code:
15:49:17.458519 IP 192.168.1.1 > 192.168.1.2: ICMP 192.168.1.1 udp port 53 unreachable, length 36

Should I open port 53 in my pf rules?
 
Do you have a nameserver running on 192.168.1.1? You have no blocking rules in pf.conf, so there's no reason to assume the port needs to be opened. It's more likely that there's no nameserver running on 192.168.1.1.
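
You can verify that from the server itself; host(1) and sockstat(1) are in the base system, for example:

Code:
# sockstat -4 -l | grep :53
# host www.freebsd.org 192.168.1.1

If nothing is listening on port 53 and the lookup times out, your clients are pointing their DNS queries at a resolver that does not exist.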
 
Hmm, OK..
Now tcpdump -i bge1 shows me:
Code:
16:33:31.584866 IP 192.168.1.1 > 192.168.1.2: ICMP 192.168.1.1 udp port domain unreachable, length 36

You are right, I guess.
But how can I solve this problem?
 
Put your ISP's nameserver(s) in /etc/resolv.conf and restart Squid.
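
For example (the addresses below are placeholders; use your ISP's actual resolvers):

Code:
# /etc/resolv.conf
nameserver 203.0.113.1
nameserver 203.0.113.2

# /usr/local/etc/rc.d/squid restart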
 