Hi guys,
Is anyone using CacheMARA in your network? How does it work? Does it perform better than Thundercache?
Thanks
Regarding YouTube, at least: it should not be difficult to be much more effective than Thundercache.
That code seems quite outdated. YouTube has made several changes that are not properly handled in Thundercache.
I have a special Squid setup to cache YouTube videos: between 30% and 35% daily byte hit rate, with daily YouTube traffic of about 20 GB (including cached data). I have about 2x2 TB of disk space allocated for it.
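For reference, the usual way to build such a setup with Squid 2.7 is a `storeurl_rewrite_program` helper that maps the volatile CDN URLs onto one stable cache key per video. A minimal sketch in Python, assuming the `videoplayback` URLs still carry an `id` query parameter (YouTube changes this regularly) and using a made-up internal key host:

```python
#!/usr/bin/env python3
"""Sketch of a Squid 2.7 storeurl_rewrite_program helper for YouTube.

Squid feeds one request per line on stdin and reads the rewritten
"store URL" back on stdout; only the cache key changes, the fetch
still goes to the original URL. The 'id' parameter and the internal
key host below are assumptions for illustration.
"""
import sys
from urllib.parse import urlparse, parse_qs

def store_url(url: str) -> str:
    """Map a volatile videoplayback URL onto a stable cache key."""
    parsed = urlparse(url)
    if "videoplayback" in parsed.path:
        vid = parse_qs(parsed.query).get("id", [None])[0]
        if vid:
            # Hypothetical internal key; never fetched, only used as index.
            return "http://youtube.squid.internal/id=" + vid
    return url  # everything else keeps its original URL

def main() -> None:
    for line in sys.stdin:          # Squid sends: URL ip/fqdn user method
        sys.stdout.write(store_url(line.split()[0]) + "\n")
        sys.stdout.flush()          # helpers must answer line by line

if __name__ == "__main__":
    main()
```

With this, two requests for the same video from different CDN mirrors collapse onto one cached object.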
Hi reinerotto,
Do you mean the code seems pretty old in Thundercache 7.1, or in an older version?
Thanks,
I could only look at the version from sourceforge, which is dated 11.3.2009
I think the SourceForge program is completely different from http://www.thundercache.com.br, which is updated.
Your thinking is correct.
I found it after some more searching. However, the source is hidden, so I cannot compare, and I do not want to do the complex setup in a lab just for benchmarking.
So I still consider my “German Engineering” to be superior, until somebody gives some performance info on Thundercache.
If you have money to spend, get yourself a Bluecoat CacheFlow.
If you are tight on budget but still want to deploy something semi-professional, go for Cachemara. But I assure you that you won't see more than 35% bandwidth savings from Cachemara. Don't believe anything the marketing says about Cachemara.
If you need more info, tell me and I can provide it.
Thanks for sharing the info. That means it does not beat my “self-made” solution for Squid/YouTube regarding bandwidth savings.
Good to know!
Yeah. I would like to know a bit more about your performance experience with CacheMARA. Approximately how much is the basic license? They advertise an awesome number of concurrent connections compared with Thundercache or Squid.
Thanks
Doush
Any idea of the costs of Cachemara and CacheFlow? How can I contact you?
I am seriously questioning whether CacheMARA is so much better compared to Squid. Assuming identical hardware resources, of course.
However, Squid needs some tweaking and tailoring of the installation according to the workload.
The only deficit I see here is that Squid is not able to cache torrents.
Upgrading an overloaded Squid by adding more RAM to the machine, another CPU with more cores, or more disk space is straightforward, with some knowledge. Or consultation.
Hi all,
CacheMARA vs. Squid is quite an interesting question. As you all may know, the MARA developers did almost all the “funny” things Squid does today. So I really think MARA is better than Squid. But it depends on what you are looking for. MARA should be more stable, more robust, more hardware-efficient. But when it comes to caching, the picture changes. MARA, Squid, Lusca, etc. all cache based on the URL of the content. If you want to cache “dynamic” content, you'll have to write a URL rewriter, as ThunderCache 3 and 3.1 did. MARA must have its own rewriters.
When developing ThunderCache 7.1, I was really unhappy about relying on URLs to cache objects; it's really tedious to have to make “plugins” for dynamic content. So we developed a way of caching without relying on URLs in ThunderCache 7.1. Some months later, I discovered that PeerApp does a similar thing. This allows the system to hit a file that was uploaded to different file-sharing systems with completely different names and URLs. I don't know Bluecoat, never saw it running, and there's not much info about it on the web, so I can't say anything about it.
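The source is closed, so the real mechanism is unknown, but the URL-independent idea can be sketched: key cached objects by a fingerprint of their bytes instead of their URL, so the same file uploaded to two different hosts resolves to one cache entry. The chunk size and keying scheme below are assumptions for illustration only:

```python
# Conceptual sketch of URL-independent caching: objects are indexed by a
# fingerprint of their content, not by the URL they were fetched from.
import hashlib

class ContentCache:
    def __init__(self):
        self._store = {}  # fingerprint -> cached object bytes

    @staticmethod
    def fingerprint(first_chunk: bytes, total_size: int) -> str:
        # Hash the leading bytes plus the declared size: two uploads of
        # the same file to different hosts yield the same key, while the
        # URL (hostname, path, query) plays no role at all.
        h = hashlib.sha256()
        h.update(first_chunk)
        h.update(str(total_size).encode())
        return h.hexdigest()

    def lookup(self, first_chunk: bytes, total_size: int):
        """Return the cached body, or None on a miss."""
        return self._store.get(self.fingerprint(first_chunk, total_size))

    def insert(self, first_chunk: bytes, total_size: int, body: bytes):
        self._store[self.fingerprint(first_chunk, total_size)] = body
```

A real system would have to handle partial downloads and hash collisions; this only shows why a rename or re-upload does not defeat the cache.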
I know this is not quite the subject here, but as ThunderCache got dragged into the middle of it… I noticed some friends here are getting the wrong picture about it. So, nothing better than the poor developer to “clarify” some things.
I apologize in advance to Normis and all the other Mikrotik guys, as this may sound like “publicity”. Well, there's no way I could explain it without doing so.
First of all, I can't explain everything you would need to know about the caching systems mentioned here. But I can tell you we are seeing migrations even from PeerApp to Thunder, and really, I didn't think we could do better than the highly known cache names in the market. So I asked: why? The answer was: “Your YouTube caching is far more efficient than PeerApp's”. This client has 1.4 Gbps of traffic, and almost 300 Mbps of it is YouTube. He gets 50% link saving on YouTube traffic with ThunderCache 7.1. His words, not mine: “PeerApp gives 20 to 30% link saving on youtube traffic”.
Well, I know the software, I know exactly what it does. So I realized that I have no information about any other cache system that fully handles caching of HTTP 206 responses. I also noticed that I have no information about any other cache system that can “resume” cached downloads.
So here I am. I could write a huge response and talk about all the new features we've implemented in ThunderCache 7.1, which is a totally new proxy and caching system with barely 5% of the code from its previous versions.
But I'll just leave here a hotsite with explanations of those new features, and you are free to read it if you want: http://bmsoftware.org/new/hotsite_en.html.
And last, but not least: words and pretty images are not what matters. So, if anyone wants a trial period at no cost, I'll be glad to show you what we can do.
If you have any questions, you can just ask here or send me an email.
Thanks for your time.
Interesting response. Hopefully, not to be deleted.
Some comments:
If you want to cache “dynamic” content, you'll have to write a URL rewriter, <
Yes, you need rewriters with Squid. Otherwise, relying on any type of DB for indexing the cached objects generates a performance hit.
That allows the system to hit a file that was uploaded to different file sharing systems, with completely different names and URLs.<
This is a PLUS for TC. Practically impossible in Squid.
This client has 1.4Gbps traffic and almost 300Mbps is from youtube. It has 50% link saving from youtube traffic with ThunderCache 7.1. His words, not mine: “PeerApp gives 20 to 30% link saving on youtube traffic”.<
First of all: the same amount of disk space?
As I said already, I have about 30% byte hit rate with Squid, using 2x2 TB disks. So, how much disk space does the mentioned client have?
How recent are your results? YouTube changed a few things just last month, which makes caching more difficult. My byte hit rate dropped to 30% because of that. Yesterday's data.
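For anyone who wants to check their own numbers: byte hit rate can be computed straight from a Squid access.log, assuming the default native log format (timestamp, elapsed, client, code/status, bytes, method, URL, …). A rough sketch:

```python
# Compute the byte hit rate from Squid access.log lines in the default
# native format: field 3 is the result code (e.g. TCP_HIT/200) and
# field 4 is the number of bytes delivered to the client.
def byte_hit_rate(lines):
    hit = total = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue  # skip malformed or truncated lines
        status, nbytes = fields[3], int(fields[4])
        total += nbytes
        if "HIT" in status:  # TCP_HIT, TCP_MEM_HIT, TCP_IMS_HIT, ...
            hit += nbytes
    return 100.0 * hit / total if total else 0.0
```

Feed it a day's log (e.g. `byte_hit_rate(open("/var/log/squid/access.log"))`) to get the percentage the figures above refer to.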
So i realized that i have no information of any other cache system that fully handle HTTP 206 responses caching..<
A small PLUS for TC. However, for YouTube traffic this is not important any more, at least in my region. Besides, Squid can cache the full video and then serve the parts out of it.
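Serving parts out of a fully cached video amounts to slicing the stored object for each 206 response. A simplified sketch (single ranges only; suffix ranges like `bytes=-500` and multi-range requests are omitted):

```python
# Answer an HTTP range request from a fully cached object.
# Handles "bytes=start-end" and open-ended "bytes=start-" forms only.
def serve_range(cached: bytes, range_header: str) -> dict:
    spec = range_header.split("=", 1)[1]        # strip "bytes="
    start_s, _, end_s = spec.partition("-")
    start = int(start_s)
    end = int(end_s) if end_s else len(cached) - 1
    body = cached[start:end + 1]                # slice out of the cache
    return {
        "status": 206,
        "Content-Range": f"bytes {start}-{start + len(body) - 1}/{len(cached)}",
        "body": body,
    }
```

Once the whole object is on disk, any client seeking to any offset is served from this slice, which is the behavior described for real range requests.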
I suddenly noticed i neither have information of any other cache system that can “resume” cached downloads…<
(Maybe a) PLUS for TC. However, the practical advantage is questionable. Some real logs would need to be analyzed to determine how often this feature is really used.
Honest answer, please: Is/was your current TC affected by the YouTube change (the video ID is not fixed any more), or did it not affect you?
He had an appliance with 24 TB of disk space, using almost 15 TB. Thunder was running with 7 TB of space, using 5 TB (a RAID array with 12 600 GB 15k RPM SAS disks, although Thunder does better with separate disks).
Actually it is not more difficult to cache. They just split audio and video into 2 different files. Since YouTube changed to ranged requests in the URL more than a year ago, it needs a special “treatment”. All we had to do was add this treatment to the audio files too, as they come fragmented as well now. And the 50% figure is from a couple of weeks ago, before this change. When I told him about the change, and that almost all videos he had in his cache would not be hit anymore, he used this “moment” to upgrade to another machine, and… formatted the cache disks, hehe. It has been running for 6 days, has about 400 thousand objects, and is now at 29.64% link saving and rising.
I don't know how Squid actually does it. Does it answer the fragmented request, and make another request for the full video for when a client asks for it again? Another Brazilian cache does this. Well, what if another client just never watches this video again? Another question: you start watching a video, jump to past the middle, and watch to the end. If another client does the same, will Squid deliver the last part from cache without having the full video in cache? Thunder does. And not just video: how many clients do you think use a download accelerator while downloading some big file from the web, splitting it into 10 ranged 206 responses? How does Squid treat this? Thunder caches all 10 responses.
Well, how many times do you skip a YouTube ad when it plays before the video? Or how many times do you stop watching a video you just opened because you find out it's not what you want? Or how many times do you cancel a download, or even lose the connection or power?
As I told you before, we don't rely on the URL. So the ID of the video was never important, from the beginning.
Maybe your product is very good, but I never liked the idea of paying a monthly fee for using it.
For instance, a Plan T15400 to serve about 3000 customers would cost about $350 monthly. In 5 years you would end up paying about the cost of a Bluecoat CacheFlow, but without receiving any hardware.
As we are a bit off the mikrotik track here, I have to make it short:
It has 6 days running, about 400 thousand objects and is now at 29.64% link saving and rising<
My Squid only has space for 200,000 objects; 26.6% byte hit yesterday.
However, I expect this to improve, because I just got an idea how to handle the new, varying IDs, which have had a negative impact on byte hit rate since their introduction about a month ago. Probably the same time video and audio were split on some videos.
Does it answer the fragmented request, and make another request for the full video for when a client asks it again?<
No. The full video is requested; as soon as the necessary part is available, it is transmitted to the client. The next part will be extracted from the incoming full video.
Valid for real range requests only, of course.
You start watching a video, jumps to after the middle and watch to the end. If another client does the same, will squid deliver the cache of the last part without having the full video in cache?<
For real range requests: no.
However, as YouTube now uses the “self-made range requests”, the answer is YES.
And not video anymore, how many clients do you think that uses a download accelerator while download some big file from the web, splitting it in 10 206 ranged responses? How does squid treat this?<
Just the same: the whole file is cached, but delivery starts as soon as the first part is available.
As I told you before, we don't rely on the URL. So the ID of the video was never important, from the beginning.<
Then that is the BIG PLUS. Possible to do using a DB, for example, plus quite some coding.
That's the reason you deserve some $ for it.
Actually, for 3k customers you would need a 6400 Threads plan, which costs $185.
If you have 3k customers, you must have traffic of about 200 Mbps, I presume (based on the clients we have running here). I don't know the costs you have now for your link, but let's suppose you pay $20/Mbps. Thunder will easily get you a 50 Mbps gain in your network (I'm being conservative here). 50 Mbps x $20 = $1000 saved monthly. I really don't think it's a bad deal.
And let's be honest, how much REAL link saving do you think you can get with a Bluecoat? PeerApp commits to 20% saving in the contract. I don't have that kind of contract, but I don't have any client with less than 30% link saving now, and 80% of my clients are between 35% and 45%.
I would love to know how that would go. You pay $20,000 for a Bluecoat. Do they offer updates forever? What do they do in the case of a change like YouTube made a couple of weeks ago?
If you like, we could start a thread at http://www.overnix.com, which is our official support forum, where I can answer your questions without being afraid of breaking the rules here with the MK guys (although I surely must have done so already).
So, any ranged request involves fetching the entire file, even if it will never be requested again. That’s lame, hehe.
Edit: And, from what I noticed, if a client asks for bytes 104857600 (100 MB) to 105906176 (101 MB) of a 200 MB file, does Squid only start sending to the client after it has downloaded 100 MB?
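For context, this behavior is governed by Squid's range-handling directives. With `range_offset_limit -1`, Squid fetches the object from byte 0 regardless of the requested range, so the client does wait while the bytes before its offset stream in; `quick_abort_min -1 KB` keeps the fetch going even if the client gives up. A sketch of the relevant squid.conf lines (Squid 2.7/3.x-era syntax):

```
# Fetch the whole object even when the client sends a Range header,
# so the complete file ends up in the cache.
range_offset_limit -1

# Never abort an in-progress fetch when the client disconnects;
# -1 KB means "always finish downloading the object".
quick_abort_min -1 KB
```

The trade-off the poster describes is real: every ranged request costs a full-object download, in exchange for full cacheability afterwards.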
Good idea. I would like to continue the discussion.
Maybe you can copy a few posts from here, so they will be easy to find, and there's no need to repeat them.