Large-scale deep neural networks are expensive to train, yet the training produces weight matrices that remain hard to interpret. Here, we propose mode decomposition learning, which interprets the weight matrices as a hierarchy of latent modes. These modes are akin to the patterns studied in the physics of memory networks, but the minimal number of modes increases only logarithmically with the network width and even saturates to a constant as the width grows further. Mode decomposition learning not only saves a significant amount of training cost but also explains the network performance in terms of the leading modes, which display a striking piecewise power-law behavior. The modes specify a progressively compact latent space across the network hierarchy, yielding a more disentangled subspace than standard training.
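As a minimal sketch (our illustrative notation, not necessarily the exact parameterization used here), each weight matrix $W$ may be written as a sum over a small number $m$ of latent modes,
\[
W \approx \sum_{\mu=1}^{m} \lambda_\mu \, \xi_\mu^{\mathrm{out}} \big(\xi_\mu^{\mathrm{in}}\big)^{\top},
\]
where the input and output mode vectors $\xi_\mu^{\mathrm{in}}$, $\xi_\mu^{\mathrm{out}}$ and the mode strengths $\lambda_\mu$ are the learned parameters, in analogy with stored patterns in associative-memory models.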
We also study mode decomposition learning in an analytic online-learning setting, which reveals multiple stages of learning dynamics with a continuous specialization of hidden nodes. The proposed mode decomposition learning therefore points to a cheap and interpretable route toward the magic of deep learning.