I'm trying to run the language-model fine-tuning script (run_language_modeling.py) from the Hugging Face examples, using my own tokenizer (I just added a few tokens, see the commented-out lines). The problem occurs when loading the tokenizer. I think the issue is with AutoTokenizer.from_pretrained('local/path/to/directory').
Code:
from transformers import *
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
# special_tokens = ['<HASHTAG>', '<URL>', '<AT_USER>', '<EMOTICON-HAPPY>', '<EMOTICON-SAD>']
# tokenizer.add_tokens(special_tokens)
tokenizer.save_pretrained('../twitter/twittertokenizer/')
tmp = AutoTokenizer.from_pretrained('../twitter/twittertokenizer/')

Error message:
OSError Traceback (most recent call last)
/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
248 resume_download=resume_download,
--> 249 local_files_only=local_files_only,
250 )
/z/huggingface_venv/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)
265 # File, but it doesn't exist.
--> 266 raise EnvironmentError("file {} not found".format(url_or_filename))
267 else:
OSError: file ../twitter/twittertokenizer/config.json not found
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-32-662067cb1297> in <module>
----> 1 tmp = AutoTokenizer.from_pretrained('../twitter/twittertokenizer/')
/z/huggingface_venv/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
190 config = kwargs.pop("config", None)
191 if not isinstance(config, PretrainedConfig):
--> 192 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
193
194 if "bert-base-japanese" in pretrained_model_name_or_path:
/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
192 """
193 config_dict, _ = PretrainedConfig.get_config_dict(
--> 194 pretrained_model_name_or_path, pretrained_config_archive_map=ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, **kwargs
195 )
196
/z/huggingface_venv/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
270 )
271 )
--> 272 raise EnvironmentError(msg)
273
274 except json.JSONDecodeError:
OSError: Can't load '../twitter/twittertokenizer/'. Make sure that:
- '../twitter/twittertokenizer/' is a correct model identifier listed on 'https://huggingface.co/models'
- or '../twitter/twittertokenizer/' is the correct path to a directory containing a 'config.json' file

If I change AutoTokenizer to BertTokenizer, the code above works. Also, the script runs without any problem when I load by shortcut name instead of a path. But the script run_language_modeling.py uses AutoTokenizer, and I'm looking for a way to make it work.
Any ideas? Thanks!
Posted on 2020-05-22 15:03:30
The problem is that you're not using anything that indicates the correct tokenizer class to instantiate.
Please refer to the rules defined in the Hugging Face docs. Specifically, since you're using BERT:
contains bert: BertTokenizer (BERT model)
Otherwise, you have to specify the exact tokenizer class yourself, as you mentioned.
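The substring rule from the docs can be sketched as follows. This is a simplified illustration only, not the actual transformers implementation; the function name and the reduced mapping are hypothetical:

```python
# Simplified sketch of the name-based dispatch described in the docs;
# the real mapping inside transformers is larger. Illustrative only.
def guess_tokenizer_class(name_or_path: str) -> str:
    # Order matters: 'roberta' must be checked before 'bert',
    # since 'roberta-base' also contains the substring 'bert'.
    mapping = [
        ("roberta", "RobertaTokenizer"),
        ("bert", "BertTokenizer"),
        ("gpt2", "GPT2Tokenizer"),
    ]
    for key, cls in mapping:
        if key in name_or_path:
            return cls
    raise ValueError(f"Cannot infer tokenizer class from {name_or_path!r}")

print(guess_tokenizer_class("bert-base-uncased"))  # BertTokenizer
```

A local path like '../twitter/twittertokenizer/' contains none of these substrings, so the library has to fall back to reading a config.json from the directory, which is consistent with the OSError above.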
Posted on 2020-11-12 06:45:57
AutoTokenizer.from_pretrained fails if the given path does not contain the model configuration file, which is needed only so that the tokenizer class can be instantiated.
In the context of run_language_modeling.py, this use of AutoTokenizer is buggy (or at least leaky).
There is no point in specifying the optional tokenizer_name parameter if it is identical to the model name or path. So, as far as I can tell, it simply does not support the case of a modified tokenizer. I found this issue very confusing as well.
The best workaround I found is to add a config.json to the tokenizer directory containing only the "missing" configuration:
{
"model_type": "bert"
}

Posted on 2022-02-10 15:12:50
When loading a modified tokenizer or a pretrained tokenizer, you should load it as follows:
tokenizer = AutoTokenizer.from_pretrained(path_to_json_file_of_tokenizer, config=AutoConfig.from_pretrained('path/to/folder/containing/model/config/file'))
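Putting the pieces together, the config.json workaround from the earlier answer can be sketched end to end. Paths here are illustrative; a temporary directory stands in for your tokenizer folder:

```python
import json
import os
import tempfile

# Stand-in for your tokenizer directory (e.g. '../twitter/twittertokenizer/').
tokenizer_dir = tempfile.mkdtemp()

# Write a minimal config.json containing only the "missing" model_type,
# so AutoConfig/AutoTokenizer can infer which tokenizer class to build.
config_path = os.path.join(tokenizer_dir, "config.json")
with open(config_path, "w") as f:
    json.dump({"model_type": "bert"}, f)

# With the file in place, this call should no longer raise OSError:
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(tokenizer_dir)
```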
https://stackoverflow.com/questions/61947796