# KeyError When Loading Custom Model in Transformers Pipeline
I'm trying to use the transformers library to load a custom model named Mihaiii/Llama-3.1-8B-Omni-abliterated for text generation, but I'm encountering a KeyError. Below are the details of my issue.
## Steps Taken
I imported the necessary libraries and attempted to create a text generation pipeline with the specified model.
```python
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline(
    "text-generation",
    model="Mihaiii/Llama-3.1-8B-Omni-abliterated",
    trust_remote_code=True,
)
response = pipe(messages)
```
## Error Message
When I run the code, I receive the following error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[4], line 7
2 from transformers import pipeline
4 messages = [
5 {"role": "user", "content": "Who are you?"},
6 ]
----> 7 pipe = pipeline("text-generation", model="Mihaiii/Llama-3.1-8B-Omni-abliterated", trust_remote_code=True)
8 pipe(messages)
File ~\AppData\Roaming\Python\Python311\site-packages\transformers\pipelines\__init__.py:724, in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
722 hub_kwargs["_commit_hash"] = config._commit_hash
723 elif config is None and isinstance(model, str):
--> 724 config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)
725 hub_kwargs["_commit_hash"] = config._commit_hash
727 custom_tasks = {}
File ~\AppData\Roaming\Python\Python311\site-packages\transformers\models\auto\configuration_auto.py:1022, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
1020 return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
1021 elif "model_type" in config_dict:
-> 1022 config_class = CONFIG_MAPPING[config_dict["model_type"]]
1023 return config_class.from_dict(config_dict, **unused_kwargs)
1024 else:
1025 # Fallback: use pattern matching on the string.
1026 # We go from longer names to shorter names to catch roberta before bert (for instance)
File ~\AppData\Roaming\Python\Python311\site-packages\transformers\models\auto\configuration_auto.py:723, in _LazyConfigMapping.__getitem__(self, key)
721 return self._extra_content[key]
722 if key not in self._mapping:
--> 723 raise KeyError(key)
724 value = self._mapping[key]
725 module_name = model_type_to_module_name(key)
KeyError: 'omni_speech2s_llama'
```
This indicates that the model's config.json declares a model_type of omni_speech2s_llama, which is not registered in my installed version of transformers, so the CONFIG_MAPPING lookup fails even though I passed trust_remote_code=True.
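To double-check, the declared model_type can be inspected without downloading the weights. A minimal sketch using huggingface_hub (a dependency of transformers); only the config file is fetched:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch only config.json from the Hub, not the multi-GB weights.
config_path = hf_hub_download(
    repo_id="Mihaiii/Llama-3.1-8B-Omni-abliterated",
    filename="config.json",
)
with open(config_path) as f:
    config = json.load(f)

# Prints 'omni_speech2s_llama', the key that CONFIG_MAPPING rejects.
print(config["model_type"])
```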
## Environment Details
- Transformers version: 4.21.1
- Python version: 3.11.0
## What I've Tried
- I ensured that the model name is spelled correctly and is available on the Hugging Face Model Hub.
- I attempted to load a different model (e.g., "gpt2") to verify that the `pipeline` function itself is working correctly (see the sketch after this list).
## Request for Help
Can anyone help me diagnose why I'm getting this KeyError when loading the specified model, or suggest how I can resolve this issue?
Hello!
This is an adaptation of https://huggingface.co/ICTNLP/Llama-3.1-8B-Omni, which has its own code for inference.
To use this model, follow the installation steps at https://github.com/ictnlp/LLaMA-Omni?tab=readme-ov-file#install (the Install section of the LLaMA-Omni README).
Please be aware that this model is only an experiment.
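For background on the error itself: AutoConfig resolves config_dict["model_type"] against an internal registry, and omni_speech2s_llama is not a built-in key. As an illustration only (a sketch, not a supported way to run this model), the registry can be extended with AutoConfig.register; this silences the KeyError but does not supply the speech-to-speech architecture, which is why following the LLaMA-Omni instructions above is the actual fix:

```python
from transformers import AutoConfig, LlamaConfig

# Illustrative sketch only: a config subclass whose model_type matches the
# unknown key, registered so AutoConfig.from_pretrained can resolve it.
# The real OmniSpeech model classes live in the LLaMA-Omni repository, so
# this alone will not make generation work.
class OmniSpeech2SLlamaConfig(LlamaConfig):
    model_type = "omni_speech2s_llama"

AutoConfig.register("omni_speech2s_llama", OmniSpeech2SLlamaConfig)
```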