Example folders in this directory:

bark
bert
chatglm
chatglm3
flan-t5
llama2
llava
mistral
openai-whisper
phi-1_5
qwen-vl

BigDL-LLM INT4 Optimization for Large Language Models

You can use the optimize_model API to accelerate general PyTorch models on Intel servers and PCs. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models from the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
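As a rough sketch of the general pattern the per-model examples follow, you load a model with Hugging Face transformers and then pass it through optimize_model; the model path below is a hypothetical placeholder, and the exact loading arguments vary per model (see each folder's README). Note that actually running this requires bigdl-llm installed and the model weights downloaded.

```python
# Minimal sketch, assuming bigdl-llm and transformers are installed.
# "path/to/model" is a placeholder for a local checkpoint or a
# Hugging Face model id; substitute the model you want to run.
from bigdl.llm import optimize_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/model"  # hypothetical placeholder
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Apply BigDL-LLM INT4 optimization in place; the optimized model
# keeps the standard generate() interface.
model = optimize_model(model)

inputs = tokenizer("What is AI?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The per-model example scripts add model-specific prompt formats and generation settings on top of this skeleton.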

To run the examples, we recommend using Intel® Xeon® processors (server) or 12th Gen and later Intel® Core™ processors (client).

For the operating system, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

Best Known Configuration on Linux

For better performance, it is recommended to set environment variables on Linux with the help of BigDL-Nano:

pip install bigdl-nano
source bigdl-nano-init