diff --git a/README.md b/README.md
index 8a06ec54..a1ed4078 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,3 @@
-> [!IMPORTANT]
-> ***`ipex-llm` will soon move to https://github.com/intel/ipex-llm***
-
----
-
 # 💫 Intel® LLM Library for PyTorch*
 
 < English | 中文 >
@@ -11,7 +6,7 @@
 **`IPEX-LLM`** is an LLM acceleration library for Intel [GPU](docs/mddocs/Quickstart/install_windows_gpu.md) *(e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)*, [NPU](docs/mddocs/Quickstart/npu_quickstart.md) and CPU [^1].
 > [!NOTE]
 > - *`IPEX-LLM` provides seamless integration with [llama.cpp](docs/mddocs/Quickstart/llama_cpp_quickstart.md), [Ollama](docs/mddocs/Quickstart/ollama_quickstart.md), [HuggingFace transformers](python/llm/example/GPU/HuggingFace), [LangChain](python/llm/example/GPU/LangChain), [LlamaIndex](python/llm/example/GPU/LlamaIndex), [vLLM](docs/mddocs/Quickstart/vLLM_quickstart.md), [Text-Generation-WebUI](docs/mddocs/Quickstart/webui_quickstart.md), [DeepSpeed-AutoTP](python/llm/example/GPU/Deepspeed-AutoTP), [FastChat](docs/mddocs/Quickstart/fastchat_quickstart.md), [Axolotl](docs/mddocs/Quickstart/axolotl_quickstart.md), [HuggingFace PEFT](python/llm/example/GPU/LLM-Finetuning), [HuggingFace TRL](python/llm/example/GPU/LLM-Finetuning/DPO), [AutoGen](python/llm/example/CPU/Applications/autogen), [ModelScope](python/llm/example/GPU/ModelScope-Models), etc.*
-> - ***70+ models** have been optimized/verified on `ipex-llm` (e.g., Llama, Phi, Mistral, Mixtral, Whisper, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-art **LLM optimizations**, **XPU acceleration** and **low-bit (FP8/FP6/FP4/INT4) support**; see the complete list [here](#verified-models).*
+> - ***70+ models** have been optimized/verified on `ipex-llm` (e.g., Llama, Phi, Mistral, Mixtral, Whisper, DeepSeek, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-the-art **LLM optimizations**, **XPU acceleration** and **low-bit (FP8/FP6/FP4/INT4) support**; see the complete list [here](#verified-models).*
 
 ## Latest Update 🔥
 - [2025/01] We added the guide for running `ipex-llm` on Intel Arc [B580](docs/mddocs/Quickstart/bmg_quickstart.md) GPU
@@ -61,8 +56,8 @@ See demos of running local LLMs *on Intel Core Ultra iGPU, Intel Core Ultra NPU,
 <table width="100%">
   <tr>
-    <td align="center"><strong>Intel Core Ultra (Series 1) iGPU</strong></td>
-    <td align="center"><strong>Intel Core Ultra (Series 2) NPU</strong></td>
+    <td align="center"><strong>Intel Core Ultra iGPU</strong></td>
+    <td align="center"><strong>Intel Core Ultra NPU</strong></td>
     <td align="center"><strong>Intel Arc dGPU</strong></td>
     <td align="center"><strong>2-Card Intel Arc dGPUs</strong></td>
   </tr>
@@ -83,23 +78,23 @@
   <tr>
-      [demo video embed]
-      [demo video embed]
+      [demo video embed]
+      [demo video embed]
   </tr>
   <tr>
     <td align="center" width="25%">
-      Ollama <br> (Mistral-7B Q4_K)
+      Ollama <br> (Mistral-7B, Q4_K)
     </td>
     <td align="center" width="25%">
-      HuggingFace <br> (Llama3.2-3B SYM_INT4)
+      HuggingFace <br> (Llama3.2-3B, SYM_INT4)
     </td>
     <td align="center" width="25%">
-      TextGeneration-WebUI <br> (Llama3-8B FP8)
+      TextGeneration-WebUI <br> (Llama3-8B, FP8)
     </td>
     <td align="center" width="25%">
-      FastChat <br> (QWen1.5-32B FP6)
+      llama.cpp <br> (DeepSeek-R1-Distill-Qwen-32B, Q4_K)
    </td>
   </tr>
 </table>
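The HuggingFace demo caption above (Llama3.2-3B, SYM_INT4) corresponds to `ipex-llm`'s drop-in `transformers`-style API, where low-bit quantization is applied while the checkpoint loads. A minimal sketch of that flow, assuming a working Intel GPU (XPU) environment; the checkpoint path and prompt are placeholders:

```python
# Hedged sketch: load a HuggingFace checkpoint in symmetric INT4 (SYM_INT4)
# with ipex-llm, then generate on an Intel GPU. Paths and prompt are placeholders.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-3.2-3B-Instruct"  # placeholder checkpoint

# load_in_4bit=True quantizes the weights to sym_int4 as they are loaded.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    trust_remote_code=True,
).to("xpu")  # move the quantized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The design point is that only the import changes relative to stock `transformers`; the GPU examples linked in the NOTE above follow the same pattern.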
diff --git a/README.zh-CN.md b/README.zh-CN.md
index 850adb94..0b35e8d6 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -1,8 +1,3 @@
-> [!IMPORTANT]
-> ***`ipex-llm` 将会迁移至 https://github.com/intel/ipex-llm***
-
----
-
 # Intel® LLM Library for PyTorch*
 
 < English | 中文 >
@@ -11,7 +6,7 @@
 **`ipex-llm`** 是一个将大语言模型高效地运行于 Intel [GPU](docs/mddocs/Quickstart/install_windows_gpu.md) *(如搭载集成显卡的个人电脑,Arc 独立显卡、Flex 及 Max 数据中心 GPU 等)*、[NPU](docs/mddocs/Quickstart/npu_quickstart.md) 和 CPU 上的大模型 XPU 加速库[^1]。
 > [!NOTE]
 > - *`ipex-llm`可以与 [llama.cpp](docs/mddocs/Quickstart/llama_cpp_quickstart.zh-CN.md), [Ollama](docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md), [HuggingFace transformers](python/llm/example/GPU/HuggingFace), [LangChain](python/llm/example/GPU/LangChain), [LlamaIndex](python/llm/example/GPU/LlamaIndex), [vLLM](docs/mddocs/Quickstart/vLLM_quickstart.md), [Text-Generation-WebUI](docs/mddocs/Quickstart/webui_quickstart.md), [DeepSpeed-AutoTP](python/llm/example/GPU/Deepspeed-AutoTP), [FastChat](docs/mddocs/Quickstart/fastchat_quickstart.md), [Axolotl](docs/mddocs/Quickstart/axolotl_quickstart.md), [HuggingFace PEFT](python/llm/example/GPU/LLM-Finetuning), [HuggingFace TRL](python/llm/example/GPU/LLM-Finetuning/DPO), [AutoGen](python/llm/example/CPU/Applications/autogen), [ModelScope](python/llm/example/GPU/ModelScope-Models) 等无缝衔接。*
-> - ***70+** 模型已经在 `ipex-llm` 上得到优化和验证(如 Llama, Phi, Mistral, Mixtral, Whisper, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V 等), 以获得先进的 **大模型算法优化**, **XPU 加速** 以及 **低比特(FP8FP8/FP6/FP4/INT4) 支持**;更多模型信息请参阅[这里](#模型验证)。*
+> - ***70+** 模型已经在 `ipex-llm` 上得到优化和验证(如 Llama, Phi, Mistral, Mixtral, Whisper, DeepSeek, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V 等), 以获得先进的 **大模型算法优化**, **XPU 加速** 以及 **低比特(FP8/FP6/FP4/INT4) 支持**;更多模型信息请参阅[这里](#模型验证)。*
 
 ## 最新更新 🔥
 - [2025/01] 新增在 Intel Arc [B580](docs/mddocs/Quickstart/bmg_quickstart.md) GPU 上运行 `ipex-llm` 的指南。
@@ -61,8 +56,8 @@
 <table width="100%">
   <tr>
-    <td align="center"><strong>Intel Core Ultra (Series 1) iGPU</strong></td>
-    <td align="center"><strong>Intel Core Ultra (Series 2) NPU</strong></td>
+    <td align="center"><strong>Intel Core Ultra iGPU</strong></td>
+    <td align="center"><strong>Intel Core Ultra NPU</strong></td>
     <td align="center"><strong>Intel Arc dGPU</strong></td>
     <td align="center"><strong>2-Card Intel Arc dGPUs</strong></td>
   </tr>
@@ -83,23 +78,23 @@
   <tr>
-      [demo video embed]
-      [demo video embed]
+      [demo video embed]
+      [demo video embed]
   </tr>
   <tr>
     <td align="center" width="25%">
-      Ollama <br> (Mistral-7B Q4_K)
+      Ollama <br> (Mistral-7B, Q4_K)
     </td>
     <td align="center" width="25%">
-      HuggingFace <br> (Llama3.2-3B SYM_INT4)
+      HuggingFace <br> (Llama3.2-3B, SYM_INT4)
     </td>
     <td align="center" width="25%">
-      TextGeneration-WebUI <br> (Llama3-8B FP8)
+      TextGeneration-WebUI <br> (Llama3-8B, FP8)
     </td>
     <td align="center" width="25%">
-      FastChat <br> (QWen1.5-32B FP6)
+      llama.cpp <br> (DeepSeek-R1-Distill-Qwen-32B, Q4_K)
     </td>
   </tr>
 </table>
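Both READMEs advertise low-bit (FP8/FP6/FP4/INT4) support, and the demo captions mix formats (SYM_INT4, FP8, Q4_K). In the same `transformers`-style API the format is a per-model, load-time choice; a hedged sketch, again with placeholder paths:

```python
# Hedged sketch: pick one of the low-bit formats named in the README via
# load_in_low_bit, then cache the converted weights so later runs skip the
# quantization step. Checkpoint and save directories are placeholders.
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder checkpoint

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_low_bit="fp8",  # or "fp6", "fp4", "sym_int4"
    trust_remote_code=True,
).to("xpu")

# Save the already-quantized weights once, then reload them directly later.
model.save_low_bit("./llama3-8b-fp8")  # placeholder directory
model = AutoModelForCausalLM.load_low_bit("./llama3-8b-fp8").to("xpu")
```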