[Rabbit-dev] [solved] Re: strange 403 response

Romain Godefroy rgodefroy at thalos.fr
Tue Nov 29 11:44:40 CET 2011


Now we are sure: we must set the default cache handler to BaseHandler,
otherwise, after the first resource that uses our forbidden default handler,
everything that comes from the cache also uses the forbidden handler.
So it was just a misconfiguration...
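
In other words, the working combination keeps ForbiddenHandler as the default for content fetched from origin servers, while cached entries fall back to BaseHandler. A minimal sketch of the two relevant sections (extracted from the full configuration quoted further down):

```
[Handlers]
# default policy for content fetched from origin servers:
defaulthandler=rabbit.handler.ForbiddenHandler

[CacheHandlers]
# without this, resources served from the cache inherit the
# forbidden default above and get a 403:
defaulthandler=rabbit.handler.BaseHandler
```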

NB: we wrote a ClientIpTrafficLogger to log traffic by client IP. Once 
our validation is finished, we will send you the code.
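
That ClientIpTrafficLogger has not been posted yet; as a rough illustration of the idea only, here is a minimal standalone sketch of per-client-IP byte accounting. The class and method names are hypothetical and this is not the rabbit API — a real logger would hook into the proxy's connection handling:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Hypothetical sketch: aggregate transferred bytes per client IP.
 *  Not part of rabbit; shown only to illustrate the bookkeeping. */
public class ClientIpTraffic {
    private final Map<String, LongAdder> bytesByIp = new ConcurrentHashMap<>();

    /** Record n bytes transferred for the given client IP. */
    public void record(String clientIp, long n) {
        bytesByIp.computeIfAbsent(clientIp, ip -> new LongAdder()).add(n);
    }

    /** Total bytes seen so far for one client IP. */
    public long totalFor(String clientIp) {
        LongAdder a = bytesByIp.get(clientIp);
        return a == null ? 0L : a.sum();
    }

    public static void main(String[] args) {
        ClientIpTraffic t = new ClientIpTraffic();
        t.record("10.0.0.1", 512);
        t.record("10.0.0.1", 256);
        t.record("10.0.0.2", 100);
        System.out.println(t.totalFor("10.0.0.1")); // 768
        System.out.println(t.totalFor("10.0.0.2")); // 100
    }
}
```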

Thanks for all your help
Romain

On 29/11/2011 11:35, Romain Godefroy wrote:
> I found that, when the problem occurs, the forbidden handler is set on 
> the handler of the resource served from cache (SCC > proxy.getCacheHandlerFactory).
> It could come from the combination of our default handler being set to 
> forbidden and an empty cache handlers configuration.
> I added:
> ...
> [CacheHandlers]
> defaulthandler=rabbit.handler.BaseHandler
> ...
> And it looks like we don't have the error any more!
>
> We will confirm this with new tests...
>
>> On 29/11/2011 09:41, Romain Godefroy wrote:
>> Hi,
>>
>> Many thanks for your help.
>>
>> Here is the configuration:
>> ------------------------------------------------------------------
>> ########## General configuration
>> [rabbit.proxy.HttpProxy]
>> listen_ip=0.0.0.0
>> port=28082
>> logo=http://$proxy/FileSender/public/logo_thalos.png
>> serverIdentity=ocb_proxy
>> StrictHTTP=false
>> http_generator_factory=rabbit.proxy.StandardHttpGeneratorFactory
>>
>> [logging]
>> access_log_level=FINEST
>> access_log=logs/access_log.%g
>> access_size_limit=1000 # in MB
>> access_num_files=1
>> error_log_level=FINEST
>> error_log=logs/error_log.%g
>> error_size_limit=1000 # in MB
>> error_num_files=1
>>
>> [data_sources]
>>
>> [dns]
>> dnsHandler=rabbit.dns.DNSSunHandler
>>
>> [rabbit.dns.DNSJavaHandler]
>> dnscachetime=8
>>
>> [rabbit.io.ConnectionHandler]
>> keepalivetime=30000
>> usepipelining=false
>>
>> [rabbit.proxy.StandardHttpGeneratorFactory]
>>
>> [rabbit.proxy.FileTemplateHttpGeneratorFactory]
>> error_pages=htdocs/error_pages
>>
>> [sslhandler]
>> allowSSL=443
>>
>> ######### Enable the HTTP filters
>> [Filters]
>> accessfilters=rabbit.filter.AccessFilter
>> httpinfilters=rabbit.filter.HttpBaseFilter,rabbit.filter.ProxyAuth,rabbit.filter.ReplaceHeaderFilter,rabbit.filter.RevalidateFilter 
>>
>> httpoutfilters=rabbit.filter.HttpBaseFilter
>> connectfilters=
>>
>>
>> ######### HTTP filter settings
>> [rabbit.filter.AccessFilter]
>> accessfile=conf/access
>>
>> [rabbit.filter.HttpBaseFilter]
>> remove=Connection,Proxy-Connection,Keep-Alive,Public,Transfer-Encoding,Upgrade,Proxy-Authorization,TE,Proxy-Authenticate,Trailer 
>>
>> userfile=conf/users
>> # cache contents that carry a cookie (sites must return a don't-cache
>> # directive for sensitive data)
>> cookieid=false
>>
>> [rabbit.filter.ProxyAuth]
>> one_ip_only=false
>> cachetime=5
>> authenticator=plain
>> userfile=conf/allowed
>>
>> [rabbit.filter.ReplaceHeaderFilter]
>> # pose as a mobile device
>> request.User-Agent=Mozilla/5.0 (BlackBerry; U; BlackBerry 9800; en-US) AppleWebKit/534.1+ (KHTML, like Gecko) Version/6.0.0.246 Mobile Safari/534.1+
>>
>> [rabbit.filter.RevalidateFilter]
>> # whatever matches revalidate is always requested from the server
>> # (cache verification)
>> alwaysrevalidate=false
>> # a regexp matching sites to re-validate
>> #revalidate=freshmeat.net/$|slashdot.org/$|http://www/$|newsforge.com/$
>> revalidate=
>>
>>
>> ########## Handler declarations by content-type
>> [Handlers]
>> # recompress images as webp
>> image/.*=rabbit.handler.ImageHandler*webp
>> # filter the html
>> text/html(;(charset\=.*)?)?=rabbit.handler.FilterHandler
>> application/xhtml.*=rabbit.handler.FilterHandler
>> text/xhtml(;(charset\=.*)?)?=rabbit.handler.FilterHandler
>> # Only compress these MIME types:
>> text/plain(;(charset\=.*)?)?=rabbit.handler.GZipHandler
>> text/xml(;(charset\=.*)?)?=rabbit.handler.GZipHandler
>> application/xml(;(charset\=.*)?)?=rabbit.handler.GZipHandler
>> application/postscript(;(charset\=.*)?)?=rabbit.handler.GZipHandler
>> text/css(;(charset\=.*)?)?=rabbit.handler.GZipHandler
>> # default policy:
>> defaulthandler=rabbit.handler.ForbiddenHandler
>>
>> [CacheHandlers]
>>
>> [rabbit.cache.NCache]
>> directory=cache
>> # without an explicit lifetime, cache entries expire after (in hours):
>> cachetime=720
>> # cache size in MB
>> maxsize=10000
>> # cache cleanup loop interval (in seconds)
>> cleanloop=300
>>
>>
>> ########## Handler settings by content-type
>> # the one for the HTML filter
>> [rabbit.handler.FilterHandler]
>>
>> filters=rabbit.filter.ScriptFilter,rabbit.filter.AdFilter
>> #filters=rabbit.filter.AdFilter
>> compress=true
>> repack=true
>>
>> [rabbit.handler.ImageHandler*webp]
>> convert=/usr/bin/gm
>> convertargs=convert -quality 100 -resize "800>" -flatten $filename +profile "*" jpeg:$filename && /opt/webp/libwebp/cwebp -m 6 -q 10 $filename -o $filename.c
>> min_size=0
>> force_converted=true
>>
>> [rabbit.handler.GZipHandler]
>> compress=true
>>
>> [rabbit.handler.ForbiddenHandler]
>>
>>
>> ########## HTML filter settings
>> #XXX RGO/GHU: ad list?
>> # The list of evils. A regexp.
>> [rabbit.filter.AdFilter]
>> adlinks=[/.]((c|net|ns|surf|page|imag)?ad([svq]|fu|srv|[sz]erver|log|bannercenter|_?click|verts|finity|force|click|tech)?\d*|banner|linkexchange|acc_clickthru|action|vertising)[/.]|gen_addframe|event.ng|/m=|/ad(num|vert|name)?=|/site_id=|support.net|/redir\.|\?assoc= 
>>
>> adreplacer=http://$proxy/FileSender/public/stop.png
>> --------------------------------------------------------------------------------- 
>>
>>
>> We tried to replace defaulthandler=rabbit.handler.ForbiddenHandler 
>> with BaseHandler, but the 403 still appears sometimes.
>>
>> Romain
>>
>> On 28/11/2011 22:09, Robert Olofsson wrote:
>>> Hi!
>>>
>>> On Mon, 2011-11-28 at 17:40 +0100, Romain Godefroy wrote:
>>>> For the same resource, without any change to the configuration, Rabbit
>>>> sometimes returns a 403, and not to all clients at the same time.
>>> That sounds odd.
>>> I know of only a few different reasons why rabbit starts dealing out
>>> 403 results:
>>> 1) SSL traffic that is not allowed (by the configuration).
>>> 2) BlockFilter / SQLBlockFilter
>>> 3) A path for status pages when you run rabbit as a reverse proxy
>>>
>>> You did not say anything about which filters you have enabled, so
>>> it is very hard for me to diagnose it like this.
>>>
>>> Can you provide the configuration?
>>>
>>> /robo
>>>
>>>
>>>
>>
>
