ipex-llm/python/llm
Latest commit 7f8c5b410b by Shaojun Liu:
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970)
Squashed commit messages:
* add entrypoint.sh
* add quickstart
* remove entrypoint
* install related benchmarking libraries
* print out results
* update docs
* add chat & example section
* add more details
* rename quickstart
* update config.yaml
* update readme
* use --gpu
* add tips
* assorted minor updates
2024-05-14 12:58:31 +08:00
dev           Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970)  2024-05-14 12:58:31 +08:00
example       Add cohere example (#10954)                                                             2024-05-08 17:19:59 +08:00
portable-zip  Fix baichuan-13b issue on portable zip under transformers 4.36 (#10746)                 2024-04-12 16:27:01 -07:00
scripts       Add driver related packages version check in env script (#10977)                        2024-05-10 15:02:58 +08:00
src/ipex_llm  Update README.md (#11003)                                                               2024-05-13 16:44:48 +08:00
test          Fix Langchain upstream ut (#10985)                                                      2024-05-11 14:40:37 +08:00
.gitignore    [LLM] add chatglm pybinding binary file release (#8677)                                 2023-08-04 11:45:27 +08:00
setup.py      Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 (#10986)      2024-05-11 12:33:35 +08:00
version.txt   Update setup.py and add new actions and add compatible mode (#25)                       2024-03-22 15:44:59 +08:00