BigDL-LLM Transformers INT4 Optimization for Large Language Models

You can use BigDL-LLM to run any Hugging Face Transformers model with INT4 optimizations on either servers or laptops. This directory contains example scripts that help you quickly get started using BigDL-LLM with some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
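INT4 optimization stores model weights as 4-bit integers instead of 16- or 32-bit floats, which cuts memory use and speeds up inference on CPUs. As a rough illustration of the idea only (this is not BigDL-LLM's actual quantization kernel), a symmetric per-tensor 4-bit quantizer can be sketched as:

```python
# Illustrative sketch of symmetric INT4 quantization -- the kind of low-bit
# weight compression BigDL-LLM applies -- not BigDL-LLM's real implementation.

def quantize_int4(weights):
    """Map floats to signed 4-bit integers (-8..7) with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from the 4-bit values."""
    return [v * scale for v in q]

weights = [0.12, -0.7, 0.33, 0.06, -0.21]
q, scale = quantize_int4(weights)
approx = dequantize_int4(q, scale)
```

Each quantized value fits in 4 bits, and the dequantized values approximate the originals to within the quantization step, which is why INT4 models keep most of their accuracy while using a fraction of the memory.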

To run the examples, we recommend using Intel® Xeon® processors (server) or 12th Gen or later Intel® Core™ processors (client).

For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

Best Known Configuration

For better performance, it is recommended to set environment variables with the help of BigDL-Nano:

pip install bigdl-nano

followed by:

Linux:
    source bigdl-nano-init

Windows (PowerShell):
    bigdl-nano-init
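On Linux, the steps above can be combined into a short session. This is a sketch of one possible flow, assuming the example scripts shipped in this directory; check each model's folder for the exact install and run commands:

```shell
# Install BigDL-Nano and load its recommended performance environment
# variables into the current shell (Linux)
pip install bigdl-nano
source bigdl-nano-init

# Run one of the example scripts in this directory; see
# transformers_int4_pipeline_readme.md for the supported arguments
python transformers_int4_pipeline.py
```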