Deep learning languages: a key fundamental shift from probabilities to weights?

2019-08-02

François Coste


Abstract

Recent successes in language modeling, notably with deep learning methods, coincide with a shift from probabilistic to weighted representations. We raise the question of the importance of this evolution, in light of the practical limitations of a classical and simple probabilistic modeling approach to the classification of protein sequences, and in relation to the need for principled methods for learning non-probabilistic models.
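To make the contrast concrete, here is a minimal illustrative sketch (not from the paper; the toy alphabet and scores are invented for illustration). It scores a short protein-like sequence under two formalisms: a probabilistic model, whose outgoing transition probabilities from each symbol must sum to 1, and a weighted model with the same structure but unnormalized, unconstrained positive scores.

```python
import math

# Tiny amino-acid alphabet and first-order (bigram) transition tables.
# Hypothetical values for illustration only.

# Probabilistic model: each row is a distribution (sums to 1).
probs = {
    "A": {"A": 0.5, "G": 0.3, "V": 0.2},
    "G": {"A": 0.1, "G": 0.6, "V": 0.3},
    "V": {"A": 0.4, "G": 0.4, "V": 0.2},
}

# Weighted model: same structure, arbitrary positive weights,
# no normalization constraint on the rows.
weights = {
    "A": {"A": 2.0, "G": 1.5, "V": 0.1},
    "G": {"A": 0.2, "G": 3.0, "V": 1.0},
    "V": {"A": 1.2, "G": 0.8, "V": 0.5},
}

def log_score(seq, table):
    """Sum of log transition scores over consecutive symbol pairs."""
    return sum(math.log(table[a][b]) for a, b in zip(seq, seq[1:]))

seq = "AGGVA"
print("log-probability:", log_score(seq, probs))   # always <= 0
print("log-weight     :", log_score(seq, weights)) # unconstrained sign

# Only the probabilistic table defines a distribution: every row sums
# to 1, so the scores of all sequences of a fixed length sum to 1 too.
for row in probs.values():
    assert abs(sum(row.values()) - 1.0) < 1e-9
```

The practical difference this sketch points at: probabilities are globally constrained (normalization couples all parameters), while weights can be tuned freely, e.g. discriminatively for classification, at the cost of losing a direct probabilistic interpretation.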
