If you have an 80 GB HD in a Squid (MikroTik) box and use it to cache websites, that is too big: when a user goes to, for example, http://www.google.com, the web proxy has to search a very big disk. I know the algorithm in Squid, BUT!!!
Now, in your experience, what is the magic number for the web proxy “Maximum cache size”? I am thinking of setting, for example, 2 GB for faster searching on the hard disk, and not using “unlimited”; only the RAM cache would be “unlimited”.
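For comparison, on a full Squid install (not the MT built-in proxy) the sizes being discussed here map onto a couple of real squid.conf directives. This is only an illustrative fragment; the 2 GB figure is the example number from the question, and the path is a common default, not a recommendation:

```
# Illustrative squid.conf fragment; the 2 GB figure matches the
# example discussed above, and the path is a typical default.

# 2 GB (2048 MB) on-disk cache at /var/spool/squid, with Squid's
# default 16 first-level and 256 second-level cache directories
cache_dir ufs /var/spool/squid 2048 16 256

# RAM cache for hot objects; Squid has no true "unlimited" here,
# so in practice you pick a figure that fits the box's memory
cache_mem 256 MB
```

Note that `cache_dir` takes its size in megabytes, so the sizing question in this thread is literally the third argument on that line.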
Personally, I would disagree with that. It’s much more complicated than simply x GB per x MB/s of bandwidth. The number of users, how frequently the same sites are visited, and the cacheability of the frequently visited web sites will play a much bigger role.
8 GB on one T1 line will mean nothing if 99% of site visits go to sites coded in PHP or ASP (whose output is only rarely cacheable)…
As always, I recommend proper Squid boxes, with proper access to the log files. These logs can then be analysed, and recommendations and improvements made as required, based on your users’ browsing habits. On one 1 Mb line with 5 users, for example, it’s very easy to get a cache hit ratio (the share of objects served out of the proxy’s cache) of well over 60% - but this requires massive tweaking and customisation of the proxy configuration - something that is not possible in MT…
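As a sketch of the kind of log analysis meant here, assuming Squid’s default native access.log format (where field 4 carries the result code, e.g. `TCP_HIT/200`), a tiny self-contained example with a fabricated log; on a real box you would point awk at /var/log/squid/access.log instead:

```shell
#!/bin/sh
# Write a small sample access.log in Squid's native format so the
# example runs anywhere (hypothetical entries, for illustration only).
cat > /tmp/access.log <<'EOF'
1000000001.000    12 10.0.0.5 TCP_HIT/200 4512 GET http://www.google.com/ - NONE/- text/html
1000000002.000   340 10.0.0.5 TCP_MISS/200 9120 GET http://example.com/page.php - DIRECT/93.184.216.34 text/html
1000000003.000     8 10.0.0.6 TCP_MEM_HIT/200 4512 GET http://www.google.com/ - NONE/- text/html
1000000004.000   410 10.0.0.6 TCP_MISS/200 1800 GET http://example.com/other.asp - DIRECT/93.184.216.34 text/html
EOF

# Field 4 is the Squid result code; count any *HIT* code as a cache
# hit and report the overall hit ratio.
awk '{ total++; if ($4 ~ /HIT/) hits++ }
     END { printf "hit ratio: %d%%\n", 100 * hits / total }' /tmp/access.log
# prints: hit ratio: 50%
```

The two PHP/ASP lines in the sample are misses, which is exactly the pattern described above: dynamic pages drag the ratio down no matter how big the cache is.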