A long story short ...
The evolution of mobile networks, driven by usage growth and technology development, brought certain design decisions for each new generation.
- 1G was FDMA technology using analogue (FM) modulation. FDMA stands for Frequency Division Multiple Access, which means that every base station used a set of frequency channels ("frequencies") and would assign one of those frequencies to each active user for the full duration of the call. The capacity of a base station was therefore equal to the number of frequencies allocated to it.
As the total number of frequencies was low, frequencies had to be reused across different base stations. However, where signals from different base stations overlapped, it was not possible to use the same frequency on both base stations, as there was no interference cancellation built into this standard. Radio network planners thus had a hard time doing proper "frequency planning".
An example of a 1G network is NMT (in Europe), with 25 kHz wide frequency channels (the whole 450 MHz band allocation was divided into 180 such channels), while AMPS was used in the US.
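To put numbers on the FDMA arithmetic above, here is a minimal sketch; the band allocation and channel width are the NMT figures from the text, while the reuse factor is a made-up illustration:

```python
# Toy FDMA capacity arithmetic for a 1G-style network (illustrative only).

def fdma_channels(band_width_khz: float, channel_width_khz: float) -> int:
    """Number of frequency channels that fit into the allocated band."""
    return int(band_width_khz // channel_width_khz)

# NMT-450 figures from the text: 4.5 MHz split into 25 kHz channels.
total_channels = fdma_channels(4500, 25)
print(f"NMT channels in the band: {total_channels}")   # -> 180

# With a hypothetical reuse factor of 7 (each frequency reused only in
# every 7th cell, to keep co-channel cells far apart), one base station
# can serve at most:
reuse_factor = 7
per_site = total_channels // reuse_factor
print(f"Concurrent calls per base station (reuse {reuse_factor}): {per_site}")
```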
- 2G was usually TDMA technology using digital (e.g. PSK) modulation. TDMA stands for Time Division Multiple Access, which means that every base station used a single (or a few) frequency channels and would assign a "time slot" to each active user. The prime example of such a network is GSM, where the 200 kHz wide frequency channels are divided into 8 time slots, each with a duration of 576.92 μs.
Still, there was no interference cancellation mechanism built in, which means the same frequency re-use considerations as in 1G networks apply. Overall capacity increased compared to 1G: GSM900 (band 8, used in Europe) had 124 channels (later expanded with an additional 48 channels), giving a capacity of more than 1300 concurrent active users (compare that to 180 in NMT).
As demand for data transmission grew, GPRS was added on top of GSM. GPRS allowed around 20 kbps per timeslot, and multiple timeslots could be bundled per device. Later, EDGE came with its 8PSK modulation, raising the maximum throughput to roughly 60 kbps per timeslot.
2G could also be implemented as CDMA (IS-95 in the US). CDMA stands for Code Division Multiple Access, which means that every base station used a single (or a few) frequency channels and would assign a "code" to each active user (full time). The codes used are orthogonal in the sense that the receiver, when demodulating the received signal, multiplies it with a special wave function selected by the code and thus filters the useful signal out of the other signals and noise.
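To make the "orthogonal codes" idea concrete, here is a minimal sketch (a toy example, not IS-95 itself) using length-4 Walsh codes: two users' symbols are spread with different codes, added together on the air, and each receiver recovers its own symbol by correlating the sum with its own code:

```python
# Toy CDMA spreading/despreading with Walsh codes (illustrative, not IS-95).

# Length-4 Walsh (Hadamard) codes -- mutually orthogonal chip sequences.
walsh = {
    "user_a": [+1, +1, +1, +1],
    "user_b": [+1, -1, +1, -1],
}

def spread(symbol: int, code: list[int]) -> list[int]:
    """Multiply one data symbol (+1/-1) by every chip of the code."""
    return [symbol * chip for chip in code]

def despread(rx: list[int], code: list[int]) -> float:
    """Correlate the received chips with the code and normalise."""
    return sum(r * c for r, c in zip(rx, code)) / len(code)

# Each user transmits one symbol; the channel simply adds the signals.
tx_a = spread(+1, walsh["user_a"])   # user A sends +1
tx_b = spread(-1, walsh["user_b"])   # user B sends -1
on_air = [a + b for a, b in zip(tx_a, tx_b)]

# Each receiver multiplies by its own code: the other user's signal
# integrates to zero because the codes are orthogonal.
print(despread(on_air, walsh["user_a"]))  # -> 1.0  (user A's symbol)
print(despread(on_air, walsh["user_b"]))  # -> -1.0 (user B's symbol)
```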
- While 1G and 2G were primarily voice networks, data became more and more important, so 3G networks were standardized with high data throughput in mind. Due to scarce frequency resources, the designers decided to go with the SFN (Single Frequency Network) concept. In Europe the example of 3G is UMTS, whose air interface is called WCDMA (Wideband CDMA). It used 5 MHz frequency channels (the occupied signal is actually about 3.84 MHz wide, corresponding to the chip rate, with guard bands on the sides against inter-channel interference). In this network every base station used the same frequency, but a different scrambling code. At first, a completely new (high) frequency band at 2100 MHz was standardized for UMTS (in Europe), with a width of 60 MHz allowing 12 carriers at the standard 5 MHz channel width. Later, the 900 MHz frequency band (already used for GSM) was also standardized, with a limited width of 35 MHz giving a maximum of 7 UMTS channels; but most MNOs still operate GSM in that band, hence limited UMTS capacity (a short carrier-count sketch is given below).
The use of a single frequency introduced the problem of inter-base-station interference, and many mechanisms to fight it were deployed. One such mechanism (soft handover) is a kind of spatial diversity: if a device (phone) was receiving signals from two base stations at comparable signal strength, both base stations would transmit the same data to this terminal and the terminal would combine both data streams to improve reception. The same technique was used in the UL, where both base stations would receive the signal from the device and both data streams were combined in the radio network controller (RNC). The bad thing about this technique is that the capacity of two (or even three) base stations is consumed by a single device.
The SFN concept brought a good side effect as well: in mobility, when a device moves from one base station to another, it needs to measure the signals from other base stations and send measurement reports to the RNC. The RNC then decides whether the device needs to hand over to another base station (if that base station's signal becomes better than the signal of the currently used one). In MFN (multi-frequency) networks, the device has to perform measurements on different frequency channels, which means using an additional receiver (in the case of FDMA or CDMA) or re-tuning the receiver to another channel during idle periods (in the case of TDMA, during non-assigned timeslots). In an SFN, the receiver can receive signals from other base stations at the same time as receiving data from its own base station, without any re-tuning, making mobility slightly smoother.
In cases where the interference between different base stations was simply too bad, another frequency channel was deployed, reducing the interference on both channels. However, mobility between the two frequency channels sucks, as the receiver again has to perform measurements on both frequencies. As transmission in both directions (UL and DL) is normally continuous in time, a new technique (compressed mode) was introduced. In this mode, transmission only happens half of the time (at double speed), leaving measurement windows during the inactive half. The "double speed" part causes trouble especially for devices in poor radio conditions, but in good conditions as well.
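As referenced above, here is the short carrier-count sketch; the band widths and the 5 MHz channel raster are the figures from the text:

```python
# How many 5 MHz UMTS carriers fit into a band allocation (illustrative).

def umts_carriers(band_width_mhz: float, channel_width_mhz: float = 5.0) -> int:
    """Carriers that fit into the allocation at the standard channel raster."""
    return int(band_width_mhz // channel_width_mhz)

print(umts_carriers(60))   # 2100 MHz band: 60 MHz -> 12 carriers
print(umts_carriers(35))   # 900 MHz band:  35 MHz -> 7 carriers
# In practice fewer than 7 are usable at 900 MHz, because most MNOs still
# keep GSM running in part of that band (as noted above).
```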
- 4G is an OFDM network, meaning all base stations use the same frequency (as in 3G), but instead of a spread-spectrum carrier it uses OFDM sub-carriers (tones). OFDM in 4G (LTE) means that the base-band unit (the brain of the base station) runs a scheduler which assigns OFDM sub-carriers to different concurrent devices. The time resolution of an assignment is the TTI (Transmission Time Interval) of 1 ms, and in each TTI the same set of OFDM sub-carriers can be assigned to a different device (a toy scheduler sketch is given below). LTE also came with flexible frequency channel widths (20, 15, 10, 5, 3, 1.4 MHz) and a number of standardized frequency bands (making it possible to re-farm frequencies used by legacy networks, such as the 1800 MHz band used by GSM).
To maximize the utilization of sub-carriers, devices provide feedback in the form of CQI (Channel Quality Indicator). Better SINR means a higher CQI; a higher CQI means the cell can use higher-order modulation (64QAM vs. QPSK), higher-order MIMO (2x2 MIMO vs. plain space diversity) and less robust FEC (7/8 vs. 1/2 coding), all of which allow higher throughput using the same number of OFDM sub-carriers and TTIs.
Interference issues are mostly the same as with WCDMA, but the techniques to overcome them are different. Cells (transmitters) controlled by the same base-band unit (most of the time an LTE base station features 3 cells, using antennae pointed in different directions and 3 distinct transmitters and receivers) can coordinate interference between users in different cells. An example would be one cell transmitting on the lower half of the OFDM sub-carriers to one device while the other cell transmits on the upper half to another device during the same TTI (a small numeric illustration is given below). Without coordination, both cells would use all OFDM sub-carriers, but those would interfere with each other in the devices' receivers, giving lower CQI (SINR) and consequently equal or even lower overall throughput. The same technique can also be applied between different base stations (controlled by different base-band units), however things are more complicated: the base-band units need to be absolutely time-synchronized (this is not a problem in a typical US network, as CDMA required GPS sync for proper operation, but a typical EU network doesn't have GPS installed; IEEE1588v2 time sync is usually used instead, but that has its own prerequisites in the backhaul network), they need good inter-base-station connectivity (delay of less than a few milliseconds), and devices have to support it as well (they have to report interfering cell identities so that the serving base-band unit knows which other base-band unit it needs to coordinate with).
All of the above description focuses on the FDD (Frequency Division Duplex) mode of operation. 4G can also operate in TDD (Time Division Duplex) mode, which is similar to how WiFi operates. However, due to the SFN nature, it is absolutely required that all base stations are well synchronized (either using GPS or IEEE1588v2), while this is not a basic requirement for FDD networks (although it does help, as briefly described above). If base stations are not precisely synchronized, interference problems become much worse, as unsynchronized base stations might transmit during the reception frames of neighbouring base stations. As base stations use much higher Tx power than devices (up to 100 W base stations vs. 100 mW devices), and due to the better Rx sensitivity of base stations, this kind of unsynchronized operation would simply kill the whole network.
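Here is the toy scheduler sketch referenced above. The 50 resource blocks per 10 MHz carrier and the 1 ms TTI are real LTE figures, but the CQI thresholds and bits-per-resource-block values are simplified, made-up illustrations rather than the actual 3GPP tables; the point is only to show how the scheduler hands out sub-carrier groups every TTI and how a better CQI translates into more bits per assignment:

```python
# Toy LTE downlink scheduler: hands out resource blocks (groups of OFDM
# sub-carriers) to devices every 1 ms TTI and picks a modulation/coding
# level from the reported CQI.  Numbers are simplified illustrations.

NUM_RBS = 50          # a 10 MHz LTE carrier has 50 resource blocks
TTI_MS = 1            # scheduling decisions are taken every 1 ms

def bits_per_rb(cqi: int) -> int:
    """Hypothetical mapping: higher CQI -> higher-order modulation and
    weaker FEC -> more useful bits per resource block per TTI."""
    if cqi >= 12:   # e.g. 64QAM, light coding
        return 712
    if cqi >= 7:    # e.g. 16QAM, medium coding
        return 384
    return 152      # e.g. QPSK, heavy coding

def schedule_tti(devices: dict[str, int]) -> dict[str, int]:
    """Spread the available RBs round-robin over the active devices and
    return the useful bits each one gets in this TTI."""
    names = list(devices)
    rb_share = {name: 0 for name in names}
    for rb in range(NUM_RBS):
        rb_share[names[rb % len(names)]] += 1
    return {name: rb_share[name] * bits_per_rb(cqi)
            for name, cqi in devices.items()}

# Two devices: one near the cell (good SINR -> high CQI), one at the edge.
devices = {"near_ue": 14, "edge_ue": 4}
for name, bits in schedule_tti(devices).items():
    # bits per 1 ms TTI is numerically equal to kbps
    print(f"{name}: {bits} bits/TTI  ~ {bits / TTI_MS:.0f} kbps")
```

The same set of resource blocks is re-assigned every TTI, so the split between devices can change every millisecond as their CQIs and buffer states change.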
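And the small numeric illustration of inter-cell coordination referenced above: two neighbouring cells each serve one cell-edge device. Without coordination, both cells use all sub-carriers and each device sees the neighbouring cell as interference; with coordination, each cell keeps to its own half of the sub-carriers. The signal, interference and noise levels and the Shannon-style capacity estimate are illustrative assumptions, not measured values:

```python
# Toy illustration of inter-cell interference coordination (ICIC).
from math import log2

SUBCARRIERS = 600          # sub-carriers on one 10 MHz LTE carrier
SUBCARRIER_BW_HZ = 15_000  # LTE sub-carrier spacing

# Hypothetical link budget for a cell-edge device (linear power units):
signal = 1.0    # power received from the serving cell
interf = 0.8    # power received from the neighbouring cell
noise = 0.05

def throughput_mbps(subcarriers: int, sinr: float) -> float:
    """Very rough Shannon-style throughput estimate in Mbit/s."""
    return subcarriers * SUBCARRIER_BW_HZ * log2(1 + sinr) / 1e6

# Without coordination: both cells use all sub-carriers, so the edge
# device sees the neighbouring cell as interference everywhere.
no_icic = throughput_mbps(SUBCARRIERS, signal / (interf + noise))

# With coordination: each cell keeps to its own half of the sub-carriers
# for its edge device, so there is no inter-cell interference on them.
with_icic = throughput_mbps(SUBCARRIERS // 2, signal / noise)

print(f"no coordination  : {no_icic:.1f} Mbit/s on all sub-carriers")
print(f"with coordination: {with_icic:.1f} Mbit/s on half the sub-carriers")
```

With these (made-up) numbers the coordinated case wins even though it only uses half the sub-carriers, which is exactly the "equal or even lower overall throughput" point made above.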
The above is really short, things are actually much more complicated.
Anyhow, mountainous areas are a big problem, which can only be solved by proper planning of the network (placement of base stations) and, of course, by building more base stations, so that in areas with users there's a single base station providing a dominant signal. This can be aided by proper selection of the devices' antennae and their proper alignment in the case of static use (FWA type of use).
Even if the OP's use of an LTE device falls into the FWA category and it would seem that locking the device to a particular base station would be beneficial, it's not recommended due to LTE's nature (of being a mobile network). Signal levels vary slightly over short times (seconds), and if many signals from different base stations have comparable levels, it can easily happen that the signal of the base station to which the device is locked becomes worse than a few other signals, and the degradation due to interference outweighs the lower utilization of that base station.
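A tiny simulation of that effect (all power levels and the fading spread are made-up illustrative numbers): three cells are received at comparable mean levels that wobble by a few dB, and a device locked to one of them is quite often camped on a cell that is momentarily not the strongest:

```python
# Toy simulation: lock to one cell vs. always using the currently best cell.
import random

random.seed(1)

MEAN_RSRP_DBM = {"cell_A": -95.0, "cell_B": -96.0, "cell_C": -97.0}
LOCKED = "cell_A"     # the cell a hypothetical FWA user might lock to
FADING_STD_DB = 3.0   # slow per-sample wobble of each cell's level

locked_worse = 0
samples = 1000
for _ in range(samples):
    # Each cell's level fluctuates independently by a few dB.
    rsrp = {cell: mean + random.gauss(0, FADING_STD_DB)
            for cell, mean in MEAN_RSRP_DBM.items()}
    if max(rsrp, key=rsrp.get) != LOCKED:
        locked_worse += 1

print(f"locked cell was not the strongest in "
      f"{100 * locked_worse / samples:.0f}% of samples")
```

In a real network the penalty for being on the weaker cell depends on load and geometry, but the mechanism is the same: with comparable signals, a lock regularly leaves the device on a momentarily worse server.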