ipex-llm/docs/readthedocs/source/doc/LLM
Latest commit 7f8c5b410b by Shaojun Liu:
Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970)
* add entrypoint.sh
* add quickstart
* remove entrypoint
* update
* Install related library of benchmarking
* update
* print out results
* update docs
* minor update
* update
* update quickstart
* update
* update
* update
* update
* update
* update
* add chat & example section
* add more details
* minor update
* rename quickstart
* update
* minor update
* update
* update config.yaml
* update readme
* use --gpu
* add tips
* minor update
* update
Committed 2024-05-14 12:58:31 +08:00
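
The Quickstart added by this commit walks through running PyTorch inference with ipex-llm inside a Docker container on an Intel GPU. For orientation only, below is a minimal sketch of the kind of inference script such a container would run; it is not the quickstart's exact script, and it assumes a placeholder model path plus the ipex-llm transformers-style API (AutoModelForCausalLM with load_in_4bit=True). The container is typically launched with the host's /dev/dri devices mapped in so PyTorch can see the "xpu" device.

    # Minimal sketch: 4-bit LLM inference on an Intel GPU ("xpu" device) with ipex-llm.
    # The model path is a placeholder; run inside a container that ships ipex-llm.
    import torch
    from transformers import AutoTokenizer
    from ipex_llm.transformers import AutoModelForCausalLM

    model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model path

    # Load the model with ipex-llm's 4-bit optimization, then move it to the Intel GPU.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True)
    model = model.to("xpu")

    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    with torch.inference_mode():
        input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
        output = model.generate(input_ids, max_new_tokens=32)
        print(tokenizer.decode(output[0], skip_special_tokens=True))
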
Name       | Latest commit                                                                           | Last commit date
Inference  | Update_document by heyang (#30)                                                         | 2024-03-25 10:06:02 +08:00
Overview   | Deprecate support for pytorch 2.0 on Linux for ipex-llm >= 2.1.0b20240511 (#10986)     | 2024-05-11 12:33:35 +08:00
Quickstart | Quickstart: Run PyTorch Inference on Intel GPU using Docker (on Linux or WSL) (#10970) | 2024-05-14 12:58:31 +08:00
index.rst  | Update_document by heyang (#30)                                                         | 2024-03-25 10:06:02 +08:00