
torchtext and fasttext vectorization


I found an example that uses ready-made GloVe vectors (the same example applies to FastText):

```python
# set up fields
TEXT = data.Field(lower=True, include_lengths=True, batch_first=True)
LABEL = data.Field(sequential=False)

# make splits for data
train, test = datasets.IMDB.splits(TEXT, LABEL)

# build the vocabulary
TEXT.build_vocab(train, vectors=GloVe(name='6B', dim=300))
LABEL.build_vocab(train)

# make iterator for splits
train_iter, test_iter = data.BucketIterator.splits(
    (train, test), batch_size=3, device=0)
```

I try to use my own pretrained FastText model and its vectors instead, but I get errors:

```python
vectors = get_vectors(model_ft, result_df['text_processed'])
max_size = 30000
TEXT.build_vocab(train_data, vectors=vectors, max_size=max_size)
LABEL.build_vocab(train_data)
```

But how can I use my own FastText model to vectorize my text in torchtext?

I couldn't find this explained in the docs :(
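If I understand the API correctly, `build_vocab` expects a `torchtext.vocab.Vectors` object rather than raw arrays, so maybe something like the rough sketch below is the intended path (the file name `my_fasttext.vec` and the gensim export call are my assumptions, and I haven't verified this works):

```python
# Rough sketch, not verified -- assumes model_ft is a gensim FastText model,
# and that TEXT, LABEL, train_data are the fields/dataset built above
# (classic torchtext.data API).
from torchtext.vocab import Vectors

# Export the trained embeddings to word2vec text (.vec) format;
# the file name is arbitrary.
model_ft.wv.save_word2vec_format('my_fasttext.vec')

# Wrap the exported file so build_vocab can copy its rows into the vocab.
custom_vectors = Vectors(name='my_fasttext.vec', cache='.')

TEXT.build_vocab(train_data, vectors=custom_vectors, max_size=30000)
LABEL.build_vocab(train_data)
```

One thing I'm unsure about is that exporting to a `.vec` file would freeze the vocabulary and lose FastText's subword handling for out-of-vocabulary words, so maybe torchtext has a better hook for this.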

