And here is some more fuel for this discussion:
A pro for having only the root server handle all requests could be the following:
Let's presume we call the root server level 1; under it you have level 2 servers, and under those level 3, etc.
But actually, under level 1 there can be two level 2 families, each with its own level 3, level 4, etc.
Let's call these level 2A and level 2B, their next levels 3A and 3B, and so on. Families A and B only come across each other at level 1.
In case we handle all DNS requests as close to the requester as possible: when a client under level 4A requests a page, resolving it will fill the cache in every server on the path up to level 1.
Now this resolution is stored in level 4A, and the client requests it regularly. After a while the 'higher' cache levels drain this info, since it is not being requested from them anymore; the request is handled in level 4A each time.
Now look at a client machine under level 4B. If it requests the same resolution relatively shortly after 4A did, it will be given the result from 3B, which got it from 2B, and that one from the root at level 1, which happened to have this info waiting on the shelf: it had just been delivered to 2A, who gave it to 3A, etc.
But now, the next day: the 4B client wants to go to that same page again and finds the info has bled from 4B, 3B, 2B, and even the root at level 1!
Meanwhile the 4A client has kept using it, so 4A still stocks it.
In this case the 4B client would have been better off if the 4A client had skipped 3A and 2A all along and simply asked the root at level 1 each time.
It would still have been on the shelf waiting for the 4B client too!
So, this is a plus for the system as explained by changeip!
[A downside of this policy is that the root's cache would be much bigger than in a distributed network, which would in time slow down resolving by the root server.]
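The scenario above can be sketched as a toy simulation. This is my own model, not real DNS software: each cache simply drops an entry once nobody has asked it for that entry for `IDLE_LIMIT` ticks, standing in for the "draining" described above, and names like `IDLE_LIMIT` and `authoritative_lookups` are illustrative assumptions.

```python
# Toy model of the two cache families described in the post.
# An entry "drains" from a cache after IDLE_LIMIT ticks without requests.

IDLE_LIMIT = 10            # ticks of silence before an entry is dropped
authoritative_lookups = 0  # counts full resolves done by the root

class Cache:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.last_seen = {}  # qname -> tick of the last request for it

    def lookup(self, qname, now):
        global authoritative_lookups
        last = self.last_seen.get(qname)
        if last is not None and now - last < IDLE_LIMIT:
            self.last_seen[qname] = now       # entry stays warm here
            return f"hit at {self.name}"
        # miss (or drained): ask the next level up, then keep a copy
        if self.parent is not None:
            result = self.parent.lookup(qname, now)
        else:
            authoritative_lookups += 1        # root resolves from scratch
            result = "full authoritative resolve"
        self.last_seen[qname] = now
        return result

# level 1 (root) with the two families A and B under it
root = Cache("level 1")
l4a = Cache("4A", Cache("3A", Cache("2A", root)))
l4b = Cache("4B", Cache("3B", Cache("2B", root)))

l4a.lookup("example.com", 0)        # first resolve fills 4A up to the root
l4b.lookup("example.com", 2)        # 4B is served from the root's warm copy
for t in (5, 10, 15):               # 4A keeps asking: only 4A stays warm
    assert l4a.lookup("example.com", t) == "hit at 4A"
# "the next day": 3A, 2A and the root have all drained the entry by now
next_day = l4b.lookup("example.com", 20)
print(next_day, "| root resolves so far:", authoritative_lookups)
```

If, instead, 4A's queries at ticks 5, 10 and 15 had gone straight to the root (`root.lookup(...)`), the root's copy would still have been warm at tick 20 and 4B would have gotten a hit at level 1, which is exactly the trade-off described above.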
Another brain wave:
What happens if the client stores the info on his machine itself? (Plenty of browsing accelerators do DNS caching, like Windows does in Explorer itself too.) Now the info on the root will still bleed over time if no new requests are received: the client simply doesn't ask, because he already knows the answer himself!
So, yet again, I would suggest that a best practice would probably be to do some caching in a few important 'nodes' in your network, together with the lowest level (client CPE, or even the PC itself).
It also gives some more redundancy in case the root server has problems.
Rudy