fix: textual and env variable adjustment (#12038)
This commit is contained in:
parent c94032f97e
commit 649390c464
1 changed file with 9 additions and 9 deletions
@@ -41,13 +41,13 @@ python make_table.py <input_dir>
```
## Known Issues
-### 1.Detected model is a low-bit(sym int4) model, Please use `load_low_bit` to load this model
-Harness evaluation is meant for unquantified models and by passing the argument precision can the model be converted to target precision. If you load the quantified models, you may encounter the following error:
+### 1. Detected model is a low-bit(sym int4) model, please use `load_low_bit` to load this model
+Harness evaluation is meant for unquantized models; passing the `--precision` argument converts the model to the target precision. If you load an already-quantized model, you may encounter the following error:
```bash
********************************Usage Error********************************
Detected model is a low-bit(sym int4) model, Please use load_low_bit to load this model.
```
-However, you can replace the following code in [this line](https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/dev/benchmark/harness/ipexllm.py#L52)
+However, you can replace the following code in [this line](https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/dev/benchmark/harness/ipexllm.py#L52):
```python
AutoModelForCausalLM.from_pretrained = partial(AutoModelForCausalLM.from_pretrained, **self.bigdl_llm_kwargs)
```
@@ -62,14 +62,14 @@ class ModifiedAutoModelForCausalLM(AutoModelForCausalLM):
AutoModelForCausalLM.from_pretrained = partial(ModifiedAutoModelForCausalLM.load_low_bit, **self.bigdl_llm_kwargs)
```
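The pattern used above — rebinding a loader to a `functools.partial` that pre-fills keyword arguments — can be sketched in isolation. This is a minimal, self-contained illustration; the `Loader` class and the `precision` argument are stand-ins, not the actual ipex-llm API:

```python
from functools import partial

class Loader:
    @classmethod
    def from_pretrained(cls, name, **kwargs):
        # Stand-in loader: just report what it was called with.
        return (name, kwargs)

# Rebind the loader to a partial that pre-fills keyword arguments,
# mirroring how the harness injects its low-bit loading options.
Loader.from_pretrained = partial(Loader.from_pretrained, precision="sym_int4")

# Every subsequent call now receives the pre-filled arguments automatically.
model = Loader.from_pretrained("demo-model")
```

Swapping `from_pretrained` for `load_low_bit` in the partial, as the snippet above does, redirects every load through the low-bit code path without touching the harness's call sites.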
-### 2.please pass the argument `trust_remote_code=True` to allow custom code to be run.
-`lm-evaluation-harness` doesn't pass `trust_remote_code=true` to datasets. This may cause errors similar to the following error:
+### 2. Please pass the argument `trust_remote_code=True` to allow custom code to be run.
+`lm-evaluation-harness` doesn't pass the `trust_remote_code=True` argument to `datasets`. This may cause errors similar to the following one:
```
RuntimeError: Job config of task=winogrande, precision=sym_int4 failed.
-Error Message: The repository for winogrande con tains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https: //hf. co/datasets/winogrande.
-please pass the argument trust_remote_code=True' to allow custom code to be run.
+Error Message: The repository for winogrande contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/winogrande.
+please pass the argument trust_remote_code=True to allow custom code to be run.
```
-Please Refer to these:
+Please refer to these:
- [trust_remote_code error in simple evaluate for hellaswag · Issue #2222 · EleutherAI/lm-evaluation-harness (github.com)](https://github.com/EleutherAI/lm-evaluation-harness/issues/2222)
@@ -77,4 +77,4 @@ Please Refer to these:
- [Security features from the Hugging Face datasets library · Issue #1135 · EleutherAI/lm-evaluation-harness (github.com)](https://github.com/EleutherAI/lm-evaluation-harness/issues/1135#issuecomment-1961928695)
-You have to manually add `datasets.config.HF_DATASETS_TRUST_REMOTE_CODE=True` in your pypi dataset package directory.
+You have to manually run `export HF_DATASETS_TRUST_REMOTE_CODE=1` to solve the problem.
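Equivalently, the variable can be set from inside Python before the `datasets` library first reads it, which is convenient when you cannot modify the launch shell. A minimal sketch, assuming the flag is consulted at dataset-load time:

```python
import os

# Must run before datasets checks the flag, so place it at the very top
# of the evaluation script (or export the variable in the shell instead).
os.environ["HF_DATASETS_TRUST_REMOTE_CODE"] = "1"
```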