Two-dimensional text: visualising contextualized language models

Input a phrase or a sentence (5-15 words):

Choose the model:

Choose the model layer:

We will use ELMo models trained on the respective corpora to infer contextualized embeddings for the words in your query. Then, for each embedding, our ELMoViz function will find the most similar words among the most frequent words in the model's vocabulary. Since contextualized architectures do not store non-contextual word embeddings, we generated them beforehand by averaging the contextualized token embeddings of all occurrences of these words in the training corpus.
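The sketch below illustrates this lookup, not the code behind this page: it assumes the simple_elmo package for ELMo inference and hypothetical precomputed files holding the averaged "static" vectors and the frequent-word list.

```python
# Minimal sketch of the nearest-neighbour lookup described above.
# Assumptions: simple_elmo for inference; "frequent_word_vectors.npy" and
# "frequent_words.txt" are hypothetical precomputed files with one averaged
# vector per frequent vocabulary word.
import numpy as np
from simple_elmo import ElmoModel

model = ElmoModel()
model.load("path/to/elmo_model")  # hypothetical model directory

sentence = "The bank raised its interest rates".split()
# Contextualized token vectors for the query, shape (n_tokens, dim)
token_vectors = model.get_elmo_vectors([sentence])[0]

# Precomputed static vectors for the most frequent words (see text above)
static_vectors = np.load("frequent_word_vectors.npy")      # (n_words, dim)
vocab = open("frequent_words.txt", encoding="utf-8").read().split()

# Normalise once so that dot products equal cosine similarities
static_norm = static_vectors / np.linalg.norm(static_vectors, axis=1, keepdims=True)

for word, vec in zip(sentence, token_vectors):
    vec = vec / np.linalg.norm(vec)
    sims = static_norm @ vec
    top = np.argsort(-sims)[:5]
    print(word, "->", [(vocab[i], round(float(sims[i]), 3)) for i in top])
```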

Lexical substitutes are also known as paradigmatic replacements: these are words that could, in principle, replace the corresponding word in your input sentence.

Substitutes will change depending on the context. The larger a substitute's font size, the more confident ELMo is about that word.
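As a toy illustration (not the page's actual implementation), substitute certainties could be rescaled into font sizes with a simple linear mapping:

```python
# Hypothetical helper: linearly rescale similarity/certainty scores
# into a font-size range in pixels, so the most certain substitute
# is rendered largest.
def font_sizes(similarities, min_px=10, max_px=40):
    lo, hi = min(similarities), max(similarities)
    if hi == lo:
        return [max_px for _ in similarities]
    return [min_px + (s - lo) / (hi - lo) * (max_px - min_px) for s in similarities]

print(font_sizes([0.35, 0.52, 0.78]))  # roughly [10.0, 21.9, 40.0]
```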