# BigDL-LLM Examples: GPU

Here, we provide some examples of how you can apply BigDL-LLM INT4 optimizations to popular open-source models in the community.

To run these examples, please first refer to [here](./install_gpu.html) for information on how to install ``bigdl-llm``, its requirements, and best practices for setting up your environment.

```eval_rst
.. important::

   Only Linux is currently supported; Ubuntu 22.04 is preferred.
```

The following models have been verified on either servers or laptops with Intel GPUs.

| Model     | Example                                                                                  |
|-----------|------------------------------------------------------------------------------------------|
| LLaMA 2   | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/llama2) |
| MPT       | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/mpt) |
| Falcon    | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/falcon) |
| ChatGLM2  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/chatglm2) |
| Qwen      | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/qwen) |
| Baichuan  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/baichuan) |
| StarCoder | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/starcoder) |
| InternLM  | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/internlm) |
| Whisper   | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/whisper) |
| GPT-J     | [link](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu/gpt-j) |
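
The linked examples all follow a similar pattern: load a Hugging Face model with INT4 optimizations via BigDL-LLM's `transformers`-style API, then move it to the Intel GPU (`xpu`) device for inference. A minimal sketch of that pattern is below; the model path and prompt are placeholders, and the script assumes ``bigdl-llm`` is installed with GPU support on a machine with an Intel GPU, so it is not runnable elsewhere as-is:

```python
import torch
from bigdl.llm.transformers import AutoModelForCausalLM  # drop-in replacement for transformers' class
from transformers import AutoTokenizer

MODEL_PATH = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: any verified model from the table above

# load_in_4bit=True applies BigDL-LLM INT4 optimizations at load time
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

prompt = "What is AI?"  # placeholder prompt
with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Apart from the `bigdl.llm.transformers` import and the `load_in_4bit=True` / `to("xpu")` calls, this is standard Hugging Face `transformers` usage; each linked example adapts the same skeleton to its model's tokenizer and prompt format.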