Running Long-Context Generation Using IPEX-LLM on Intel Arc™ A770 Graphics

Long-context generation is critical in applications such as document summarization, extended conversation handling, and complex question answering. Effective long-context generation leads to more coherent and contextually relevant responses, improving both user experience and model utility.

This folder contains examples of running long-context generation with IPEX-LLM on Intel Arc™ A770 Graphics (16 GB GPU memory); both examples follow the loading pattern sketched after this list:

  • LLaMA2-32K: examples of running LLaMA2-32K models with INT4/FP8 precision.
  • ChatGLM3-32K: examples of running ChatGLM3-32K models with INT4/FP8 precision.
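
The following is a minimal sketch of that pattern, not a verbatim copy of either example: it assumes the togethercomputer/LLaMA-2-7B-32K checkpoint as a stand-in model id and uses the ipex_llm.transformers AutoModelForCausalLM wrapper; for FP8, swap load_in_4bit=True for load_in_low_bit="fp8".

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

# Stand-in model id; replace with a local path or another 32K checkpoint.
model_path = "togethercomputer/LLaMA-2-7B-32K"

# Load with IPEX-LLM INT4 optimizations (for FP8, pass
# load_in_low_bit="fp8" instead of load_in_4bit=True).
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    optimize_model=True,
    trust_remote_code=True,
    use_cache=True,
)
model = model.to("xpu")  # move the model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "Summarize the following document: ..."  # long-context input goes here
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```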

Maximum Input Length for Different Models with INT4/FP8 Precision

  • INT4

    | Model Name      | Low Memory Mode | Maximum Input Length | Output Length |
    |-----------------|-----------------|----------------------|---------------|
    | LLaMA2-7B-32K   | Disabled        | 10K                  | 512           |
    | LLaMA2-7B-32K   | Enabled         | 12K                  | 512           |
    | ChatGLM3-6B-32K | Disabled        | 9K                   | 512           |
    | ChatGLM3-6B-32K | Enabled         | 10K                  | 512           |

  • FP8

    | Model Name      | Low Memory Mode | Maximum Input Length | Output Length |
    |-----------------|-----------------|----------------------|---------------|
    | LLaMA2-7B-32K   | Disabled        | 7K                   | 512           |
    | LLaMA2-7B-32K   | Enabled         | 9K                   | 512           |
    | ChatGLM3-6B-32K | Disabled        | 8K                   | 512           |
    | ChatGLM3-6B-32K | Enabled         | 9K                   | 512           |

Note: To run longer inputs or reduce memory usage, set IPEX_LLM_LOW_MEM=1 to enable low memory mode; this turns on additional memory optimizations and may slightly reduce performance.
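
As a minimal sketch, assuming the variable is read when the model is loaded, low memory mode can be enabled either by exporting IPEX_LLM_LOW_MEM=1 in the shell before launching the script, or from Python before loading:

```python
import os

# Enable IPEX-LLM low memory mode; set before loading the model so the
# memory optimizations take effect (assumption: the flag is read at load time).
os.environ["IPEX_LLM_LOW_MEM"] = "1"

from ipex_llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/LLaMA-2-7B-32K",  # stand-in model id, as above
    load_in_4bit=True,
    trust_remote_code=True,
)
model = model.to("xpu")
```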