diff --git a/README.md b/README.md
index a44aef8f..2be0b999 100644
--- a/README.md
+++ b/README.md
@@ -11,6 +11,54 @@
 > - ***50+ models** have been optimized/verified on `ipex-llm` (including LLaMA2, Mistral, Mixtral, Gemma, LLaVA, Whisper, ChatGLM, Baichuan, Qwen, RWKV, and more); see the complete list [here](#verified-models).*
 
 ## `ipex-llm` Demo
+
+See demos of running local LLMs *on Intel Iris iGPU, Intel Core Ultra iGPU, single-card Arc GPU, or multi-card Arc GPUs* using `ipex-llm` below.
+
+<table width="100%">
+  <tr>
+    <td>Intel Iris iGPU</td>
+    <td>Intel Core Ultra iGPU</td>
+    <td>Intel Arc dGPU</td>
+    <td>2-Card Intel Arc dGPUs</td>
+  </tr>
+  <tr>
+    <!-- demo videos (links not recoverable from this extract) -->
+    <td></td>
+    <td></td>
+    <td></td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>llama.cpp (Phi-3-mini Q4_0)</td>
+    <td>Ollama (Mistral-7B Q4_K)</td>
+    <td>TextGeneration-WebUI (Llama3-8B FP8)</td>
+    <td>FastChat (QWen1.5-32B FP6)</td>
+  </tr>
+</table>