Low-Bit Streaming LLM using IPEX-LLM

In this example, we apply low-bit optimizations to Streaming-LLM using IPEX-LLM, which can deploy low-bit (including FP4/INT4/FP8/INT8) LLMs for infinite-length inputs. Only one code change is needed to load the model with ipex-llm, as follows:

from ipex_llm.transformers import AutoModelForCausalLM
# the only change: swap in the ipex-llm AutoModelForCausalLM and load the weights in 4-bit
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, load_in_4bit=True, trust_remote_code=True, optimize_model=False)
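
Once loaded this way, the model exposes the regular Hugging Face Transformers interface. The snippet below is a minimal usage sketch, not part of this example: it assumes the checkpoint ships a standard tokenizer, while the actual streaming generation loop lives in run_streaming_llama.py.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
inputs = tokenizer("What is AI?", return_tensors="pt")
# the low-bit weights are used transparently during generation
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))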

Prepare Environment

We suggest using conda to manage the environment:

On Linux:

conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu

On Windows:

conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]
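
As an optional sanity check (not part of the original instructions), you can confirm that the import used by this example resolves in the new environment:

python -c "from ipex_llm.transformers import AutoModelForCausalLM"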

Run Example

python ./run_streaming_llama.py  --repo-id-or-model-path REPO_ID_OR_MODEL_PATH  --enable-streaming
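
For instance, to run the default model with streaming enabled (the window sizes below are illustrative values, not defaults taken from the script):

python ./run_streaming_llama.py --repo-id-or-model-path meta-llama/Llama-2-7b-chat-hf --start-size 4 --recent-size 2000 --enable-streaming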

arguments info:

  • --repo-id-or-model-path: str value, the Hugging Face repo id of the large language model to be downloaded, or the path to a Hugging Face checkpoint folder. It is 'meta-llama/Llama-2-7b-chat-hf' by default.
  • --data-root: str value, the directory in which to save the downloaded question data.
  • --enable-streaming: flag to enable efficient streaming while computing.
  • --start-size: int value, the number of initial attention-sink tokens kept at the start of the KV cache.
  • --recent-size: int value, the number of most recent tokens kept in the KV cache. A sketch of the eviction policy controlled by these two flags follows this list.
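
Together, --start-size and --recent-size describe the Streaming-LLM attention-sink policy: the first start_size tokens and the most recent recent_size tokens of the KV cache are kept, and everything in between is evicted. The sketch below illustrates that policy only; it is not the implementation used by run_streaming_llama.py, and it assumes the legacy tuple KV-cache layout of shape (batch, heads, seq_len, head_dim).

import torch

def evict_kv_cache(past_key_values, start_size, recent_size):
    # keep the first `start_size` (attention-sink) entries and the
    # last `recent_size` entries along the sequence dimension
    seq_len = past_key_values[0][0].size(2)
    if seq_len <= start_size + recent_size:
        return past_key_values
    return tuple(
        (torch.cat([k[:, :, :start_size], k[:, :, -recent_size:]], dim=2),
         torch.cat([v[:, :, :start_size], v[:, :, -recent_size:]], dim=2))
        for k, v in past_key_values
    )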

Sample Output for Inference

'decapoda-research/llama-7b-hf' Model

USER: Draft a professional email seeking your supervisor's feedback on the 'Quarterly Financial Report' you prepared. Ask specifically about the data analysis, presentation style, and the clarity of conclusions drawn. Keep the email short and to the point.

ASSISTANT: Dear Mr. Smith,

I am writing to seek your feedback on the 'Quarterly Financial Report' I prepared for the company. I have attached the report for your reference.
The report contains data analysis of the company's performance during the quarter ending 31st March 2019...