I have recently installed a pair of CCRs into a data centre in central
London.
The data centre we are using bills colo per U and per VA: £5.00 per U plus £0.70 per VA per month. As a result, I have discovered
that the power factor of the CCRs is awful!
0.35 A, 36.1 W, 234 V, power factor 42-45%, i.e. 81.9 VA, i.e. £57.33 per month for the power.
I replaced the PSU in the CCR with a Power Factor-corrected power supply
and the results are MUCH better:
0.19 A, 41 W, 234 V, power factor 89%, i.e. 44.46 VA, i.e. £31.12 per month for the power.
This has made a significant reduction in my colo costs, despite the actual power draw increasing slightly.
With the new PSUs I am saving approximately £630 per year just by improving the power factor of the two CCRs! The payback period for the new PSUs was a couple of months.
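The arithmetic behind those figures can be checked directly: apparent power (VA) is volts times amps, power factor is real power (W) divided by apparent power, and the charge here is the £0.70 per VA per month from the billing model above. (A minimal sketch using the measurements from this post; note the straight W/(V×A) ratio for the new PSU comes out a touch higher than the 89% my meter reported, presumably down to meter rounding.)

```python
RATE_PER_VA = 0.70  # GBP per VA per month, from the data centre's billing

def colo_power_cost(volts, amps, watts):
    va = volts * amps          # apparent power drawn from the feed
    pf = watts / va            # power factor = real / apparent power
    cost = va * RATE_PER_VA    # monthly charge for this load
    return va, pf, cost

# Original PSU: 234 V, 0.35 A, 36.1 W
va_old, pf_old, cost_old = colo_power_cost(234, 0.35, 36.1)
# Power-factor-corrected PSU: 234 V, 0.19 A, 41 W
va_new, pf_new, cost_new = colo_power_cost(234, 0.19, 41)

print(f"old: {va_old:.2f} VA, PF {pf_old:.0%}, £{cost_old:.2f}/month")
print(f"new: {va_new:.2f} VA, PF {pf_new:.0%}, £{cost_new:.2f}/month")

# Annual saving across two CCRs
print(f"saving: £{(cost_old - cost_new) * 2 * 12:.2f}/year")
```

This reproduces the £57.33 and £31.12 monthly charges and the roughly £630 annual saving for the pair.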
Has anyone else noticed this? Would be good to get some comment from MT on whether they can improve the power factor.
The ‘standard’ represents the minimum requirements for compliance.
Performing better than the standard is acceptable, and highly desirable.
Please don’t hide behind standards. By all means tell us it will cost more, but standards are not the reason.
Just because the standards don’t require it doesn’t mean it’s not beneficial for customers in data centres.
From the example above, the difference in cost is ~£26 per month per router, which adds up to a considerable amount, especially in an area where margins are already tight. The problem is compounded by lots of people having multiple MikroTiks in their racks.