Display demo.jpg in the README.md of HuggingFace Transformers Agent (#9293)

* Display demo.jpg

* remove demo.jpg
Author: Zheng, Yi · 2023-10-27 18:00:03 +08:00 · committed by GitHub
parent a4a1dec064
commit 1bff54a378
3 changed files with 7 additions and 2 deletions

README.md

@@ -24,7 +24,7 @@ python ./run_agent.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --image-path
 Arguments info:
 - `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Vicuna model (e.g. `lmsys/vicuna-7b-v1.5`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'lmsys/vicuna-7b-v1.5'`.
-- `--image-path IMAGE_PATH`: argument defining the image to be inferred. It defaults to `demo.jpg`.
+- `--image-path IMAGE_PATH`: argument defining the image to be inferred.
 > **Note**: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, an *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference.
 >
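To put the note's rule of thumb in concrete terms, here is a minimal sketch; the value *X* = 7 is an assumption based on the 7B Vicuna model, and the figures are rough estimates, not measurements:

```python
# Back-of-the-envelope memory estimate from the note above.
# Assumption: X = 7 for lmsys/vicuna-7b-v1.5 (roughly 7 billion parameters).
X = 7
fp16_load_gb = 2 * X      # ~2 bytes/parameter in 16-bit  -> ~14 GB to load
int4_infer_gb = 0.5 * X   # ~0.5 bytes/parameter in INT4  -> ~3.5 GB for inference
print(f"16-bit load:    ~{fp16_load_gb} GB")
print(f"INT4 inference: ~{int4_infer_gb} GB")
```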
@@ -50,6 +50,11 @@ numactl -C 0-47 -m 0 python ./run_agent.py
 ```
 #### 2.3 Sample Output
+#### [demo.jpg](https://cocodataset.org/#explore?id=264959)
+<p align="center">
+  <img src="http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg" alt="demo.jpg" width="400"/>
+</p>
 #### [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
 ```log
 Image path: demo.jpg
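Since this commit also deletes `demo.jpg` from the repository (see below), the image now has to be supplied by the user via `--image-path`. A minimal sketch of fetching the COCO image linked above to a local file before running the example; saving it as `demo.jpg` is just a convenient assumption, not a file the repo ships:

```python
# Illustrative only: download the image referenced in the README so it can be
# passed to run_agent.py via --image-path. Not part of the example's code.
import urllib.request

IMAGE_URL = "http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg"
urllib.request.urlretrieve(IMAGE_URL, "demo.jpg")

# Then, following the README's usage:
#   python ./run_agent.py --repo-id-or-model-path lmsys/vicuna-7b-v1.5 --image-path demo.jpg
```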

demo.jpg (binary file removed, 122 KiB; contents not shown)

run_agent.py

@@ -26,7 +26,7 @@ if __name__ == "__main__":
     parser.add_argument("--repo-id-or-model-path", type=str, default="lmsys/vicuna-7b-v1.5",
                         help="The huggingface repo id for the Vicuna model to be downloaded"
                              ", or the path to the huggingface checkpoint folder")
-    parser.add_argument("--image-path", type=str, default="demo.jpg",
+    parser.add_argument("--image-path", type=str, required=True,
                         help="Image to generate caption")
     args = parser.parse_args()
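For reference, a standalone sketch (not the repo's actual `run_agent.py`) of what the switch from `default="demo.jpg"` to `required=True` means in practice: argparse now exits with an error when `--image-path` is omitted, instead of silently falling back to the removed `demo.jpg`.

```python
# Standalone illustration of the changed argparse behavior; not the repo's run_agent.py.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--image-path", type=str, required=True,
                    help="Image to generate caption")

# Running without the flag now fails fast, e.g.:
#   error: the following arguments are required: --image-path
args = parser.parse_args(["--image-path", "demo.jpg"])
print(args.image_path)  # demo.jpg
```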