IPEX-LLM Transformers INT4 Optimization for Large Language Models on Intel GPUs

You can use IPEX-LLM to run almost every Hugging Face Transformers model with INT4 optimizations on laptops with Intel GPUs. This directory contains example scripts to help you quickly get started using IPEX-LLM to run some popular open-source models in the community. Each model has its own dedicated folder with detailed instructions on how to install and run it.

The following model example folders are available in this directory:

aquila, aquila2, baichuan, baichuan2, bluelm, chatglm2, chatglm3, chinese-llama2, codegeex2, codegemma, codellama, codeshell, cohere, deciLM-7b, deepseek, dolly-v1, dolly-v2, falcon, flan-t5, gemma, gemma2, glm-edge, glm4, gpt-j, internlm, internlm2, llama2, llama3, llama3.1, llama3.2, minicpm, minicpm3, mistral, mixtral, mpt, phi-1_5, phi-2, phi-3, phixtral, qwen, qwen1.5, qwen2, qwen2.5, redpajama, replit, rwkv4, rwkv5, solar, stablelm, starcoder, vicuna, yi, yuan2
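
Most of these examples follow the same basic pattern: load the checkpoint through ipex_llm.transformers with INT4 optimization enabled, then move the model to the xpu device for inference. The snippet below is a minimal sketch of that pattern; the model id meta-llama/Llama-2-7b-chat-hf, the prompt, and the generation settings are placeholders, and each model's folder documents the environment setup and arguments verified for that model.

```python
# Minimal sketch of the common IPEX-LLM INT4 loading pattern on an Intel GPU.
# Assumes the GPU environment (oneAPI, drivers, ipex-llm[xpu]) is already set up
# as described in the per-model folders; the model id and prompt are placeholders.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # replace with the model you want to run

# Load the Hugging Face checkpoint with IPEX-LLM INT4 optimizations applied.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True,
                                             use_cache=True)
model = model.half().to("xpu")  # run the optimized model on the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```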