# BigDL-LLM INT4 Optimization for Large Language Models

You can use the `optimize_model` API to accelerate general PyTorch models on Intel servers and PCs. This directory contains example scripts to help you quickly get started with BigDL-LLM and run some popular open-source models from the community. Each model has its own dedicated folder with detailed instructions on how to install and run it.
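As a minimal sketch of the pattern the example folders follow, a Hugging Face model can be accelerated with `optimize_model` as shown below (the model id and generation parameters are illustrative; `bigdl-llm` and `transformers` are assumed to be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

model_path = "meta-llama/Llama-2-7b-chat-hf"  # illustrative model id

# Load the model as usual, then apply BigDL-LLM INT4 optimization
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = optimize_model(model)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
inputs = tokenizer("What is AI?", return_tensors="pt")

# Generate with the optimized model exactly as with the original one
output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The optimized model keeps the standard `generate` interface, so existing PyTorch inference code needs no further changes.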

## Verified Models

| Model | Example |
|-------|---------|
| LLaMA 2 | [link](llama2) |
| ChatGLM | [link](chatglm) |
| OpenAI Whisper | [link](openai-whisper) |
| BERT | [link](bert) |
| Bark | [link](bark) |
| Mistral | [link](mistral) |
| Flan-t5 | [link](flan-t5) |
| Phi-1_5 | [link](phi-1_5) |
| Qwen-VL | [link](qwen-vl) |

To run the examples, we recommend using Intel® Xeon® processors (server) or 12th Gen or later Intel® Core™ processors (client).

For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

## Best Known Configuration on Linux

For better performance, it is recommended to set environment variables on Linux with the help of BigDL-Nano:

```bash
pip install bigdl-nano
source bigdl-nano-init
```