Dynamic Heterogeneous Learning Games for Opportunistic Access in LTE-Based Macro/Femtocell Deployments

Interference is one of the main factors limiting the spectral efficiency achievable in heterogeneous network (HN) deployments. In this paper, the HN is modeled as a layer of closed-access LTE femtocells (FCs) overlaid upon an LTE radio access network. Within the context of dynamic learning games, this work proposes a novel heterogeneous, multiobjective, fully distributed strategy for FC self-configuration and self-optimization, based on a reinforcement learning (RL) model (CODIPAS-HRL). This self-organization capability enables the FCs to autonomously and opportunistically sense the radio environment using different learning strategies and to tune their parameters accordingly, so that they avoid causing interference to either network tier while satisfying given quality-of-service requirements. The proposed model reduces the learning cost associated with each learning strategy. We also study the convergence behavior under different learning rates and derive a new accuracy metric to compare the different learning strategies. Simulation results show that the learning model converges to a solution concept based on satisfaction equilibrium despite the uncertainty of the HN environment. We show that intra- and inter-tier interference can be significantly reduced, resulting in higher cell throughputs.
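To give an intuition for the kind of combined payoff-and-strategy (CODIPAS-style) reinforcement learning update such a self-organizing femtocell might run, the sketch below shows a single FC learning a mixed strategy over candidate channels from noisy, interference-dependent rewards. The channel count, reward model, learning rates, and Boltzmann-Gibbs temperature are illustrative assumptions; this is a minimal generic sketch, not the paper's actual CODIPAS-HRL formulation or parameterization.

```python
import numpy as np

# Hypothetical, simplified CODIPAS-style learner for one femtocell: it jointly
# updates (i) an estimated payoff per channel and (ii) a mixed strategy over
# channels via a Boltzmann-Gibbs rule. All numbers below are assumptions made
# for illustration only.

rng = np.random.default_rng(0)

N_CHANNELS = 4          # assumed number of sub-bands the FC can choose from
TEMPERATURE = 0.1       # Boltzmann-Gibbs temperature (assumed)
LAMBDA = 0.05           # strategy learning rate (assumed)
MU = 0.1                # payoff-estimation learning rate (assumed)

payoff_est = np.zeros(N_CHANNELS)                   # estimated payoff per channel
strategy = np.full(N_CHANNELS, 1.0 / N_CHANNELS)    # mixed strategy over channels


def boltzmann_gibbs(estimates, temperature):
    """Softmax over payoff estimates (Boltzmann-Gibbs choice distribution)."""
    z = np.exp((estimates - estimates.max()) / temperature)
    return z / z.sum()


def observed_reward(channel):
    """Stand-in radio environment: noisy reward, higher on less-interfered channels."""
    interference = np.array([0.8, 0.5, 0.2, 0.6])   # assumed interference levels
    return (1.0 - interference[channel]) + 0.05 * rng.standard_normal()


for t in range(2000):
    # 1. Sample an action (channel) from the current mixed strategy.
    channel = rng.choice(N_CHANNELS, p=strategy)

    # 2. Observe the noisy numerical payoff for the chosen channel only.
    reward = observed_reward(channel)

    # 3. Payoff learning: update the estimate of the played channel.
    payoff_est[channel] += MU * (reward - payoff_est[channel])

    # 4. Strategy learning: move the mixed strategy toward the
    #    Boltzmann-Gibbs distribution induced by the payoff estimates.
    target = boltzmann_gibbs(payoff_est, TEMPERATURE)
    strategy += LAMBDA * (target - strategy)
    strategy /= strategy.sum()  # guard against numerical drift

print("estimated payoffs:", np.round(payoff_est, 3))
print("final strategy   :", np.round(strategy, 3))
```

In the heterogeneous setting described in the abstract, different FCs would presumably run different learning strategies of this general form (with different update rules and rates), each converging toward a configuration that satisfies its own QoS constraints while limiting the interference it causes.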