
Google Colab unable to run Hugging Face model


I would like to tag parts of speech using a BERT model, and I used the Hugging Face library for this purpose.

When I run the model through the hosted Hugging Face API, I get the expected output (screenshot omitted).

However, when I run the code on Google Colab, I get errors.

My code:

    from transformers import AutoModelWithHeads
    from transformers import pipeline
    from transformers import AutoTokenizer

    model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
    adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_pos", source="hf")
    model.active_adapters = adapter_name
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    token_classification = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="NONE")
    res = token_classification("Take out the trash bag from the bin and replace it.")
    print(res)

The error is:

    The model 'BertModelWithHeads' is not supported for token-classification. Supported models are ['AlbertForTokenClassification', 'BertForTokenClassification', 'BigBirdForTokenClassification', 'BloomForTokenClassification', 'CamembertForTokenClassification', 'CanineForTokenClassification', 'ConvBertForTokenClassification', 'Data2VecTextForTokenClassification', 'DebertaForTokenClassification', 'DebertaV2ForTokenClassification', 'DistilBertForTokenClassification', 'ElectraForTokenClassification', 'ErnieForTokenClassification', 'EsmForTokenClassification', 'FlaubertForTokenClassification', 'FNetForTokenClassification', 'FunnelForTokenClassification', 'GPT2ForTokenClassification', 'GPT2ForTokenClassification', 'IBertForTokenClassification', 'LayoutLMForTokenClassification', 'LayoutLMv2ForTokenClassification', 'LayoutLMv3ForTokenClassification', 'LiltForTokenClassification', 'LongformerForTokenClassification', 'LukeForTokenClassification', 'MarkupLMForTokenClassification', 'MegatronBertForTokenClassification', 'MobileBertForTokenClassification', 'MPNetForTokenClassification', 'NezhaForTokenClassification', 'NystromformerForTokenClassification', 'QDQBertForTokenClassification', 'RemBertForTokenClassification', 'RobertaForTokenClassification', 'RobertaPreLayerNormForTokenClassification', 'RoCBertForTokenClassification', 'RoFormerForTokenClassification', 'SqueezeBertForTokenClassification', 'XLMForTokenClassification', 'XLMRobertaForTokenClassification', 'XLMRobertaXLForTokenClassification', 'XLNetForTokenClassification', 'YosoForTokenClassification', 'XLMRobertaAdapterModel', 'RobertaAdapterModel', 'AlbertAdapterModel', 'BeitAdapterModel', 'BertAdapterModel', 'BertGenerationAdapterModel', 'DistilBertAdapterModel', 'DebertaV2AdapterModel', 'DebertaAdapterModel', 'BartAdapterModel', 'MBartAdapterModel', 'GPT2AdapterModel', 'GPTJAdapterModel', 'T5AdapterModel', 'ViTAdapterModel'].
    ---------------------------------------------------------------------------
    KeyError                                  Traceback (most recent call last)
    <ipython-input-18-79b43720402e> in <cell line: 12>()
         10 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
         11 token_classification = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="NONE")
    ---> 12 res = token_classification("Take out the trash bag from the bin and replace it.")
         13 print(res)

    4 frames
    /usr/local/lib/python3.10/dist-packages/transformers/pipelines/token_classification.py in aggregate(self, pre_entities, aggregation_strategy)
        346                 score = pre_entity["scores"][entity_idx]
        347                 entity = {
    --> 348                     "entity": self.model.config.id2label[entity_idx],
        349                     "score": score,
        350                     "index": pre_entity["index"],

KeyError: 16
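My reading of the traceback (an assumption on my part, not verified against the `transformers` source): the pipeline translates each predicted class index through `model.config.id2label`, and the plain `bert-base-uncased` config only defines the generic default labels, so an index produced by the POS adapter head has no entry. A minimal self-contained reconstruction of that failure mode, with a hypothetical two-entry `id2label`:

```python
# Hypothetical sketch of the failing lookup, NOT the actual transformers code:
# the default bert-base-uncased config ships only generic placeholder labels.
id2label = {0: "LABEL_0", 1: "LABEL_1"}

# Index assumed to come from the adapter's classification head, which knows
# more classes than the base config does.
predicted_index = 16

try:
    label = id2label[predicted_index]
except KeyError as exc:
    print(f"KeyError: {exc}")  # → KeyError: 16, matching the traceback
```

If this reading is right, the fix would be loading the model with a class that carries the adapter head's label mapping in its config (e.g. one of the `*AdapterModel` classes listed as supported above) rather than `AutoModelWithHeads`.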

I don't understand: if the model runs fine through the Hugging Face API, why does it fail on Google Colab?

Thank you in advance.

