Shrinking massive neural networks used to model language

Deep learning models can be massive, demanding major computing power. In a test of the 'lottery ticket hypothesis,' researchers have found leaner, more efficient subnetworks hidden within BERT models. The discovery could make natural language processing more accessible.
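
The core operation behind the lottery ticket hypothesis is magnitude pruning: keep only the largest-magnitude weights of a trained network and rewind the survivors to their initial values before retraining. The article does not give code, so the following is a minimal sketch of that idea in PyTorch; the layer size, sparsity level, and helper name `magnitude_prune_mask` are illustrative assumptions, not details from the study.

```python
# Sketch of one-shot magnitude pruning, as used in lottery-ticket experiments.
# All sizes and the 60% sparsity level are hypothetical choices for illustration.
import torch
import torch.nn as nn

def magnitude_prune_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask that keeps the largest-magnitude fraction of weights."""
    k = int(weight.numel() * sparsity)                 # number of weights to drop
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

# Toy stand-in for a single linear layer inside a BERT-sized model.
layer = nn.Linear(768, 768)
init_weight = layer.weight.detach().clone()            # snapshot of initialization

# ... the full model would be trained here ...

mask = magnitude_prune_mask(layer.weight.detach(), sparsity=0.6)
with torch.no_grad():
    # "Rewind": surviving weights return to their initial values; the rest are zeroed.
    layer.weight.copy_(init_weight * mask)

print(f"kept {int(mask.sum())}/{mask.numel()} weights "
      f"({mask.mean().item():.0%} density)")
```

If the pruned-and-rewound subnetwork retrains to match the full model's accuracy, it is a "winning ticket": a leaner subnetwork that was hidden in the original model all along.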

Source: ScienceDaily