
LangChain examples

The examples here show how to use LangChain with bigdl-llm.

Install bigdl-llm

Follow the instructions in Install.

Install required dependencies for the LangChain examples

pip install langchain==0.0.184
pip install -U chromadb==0.3.25
pip install -U typing_extensions==4.5.0

Note that typing_extensions==4.5.0 is required; otherwise you may encounter the error TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers' when running the examples.
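If you want to confirm the pinned versions are actually in place before running the examples, a small check like the following works (a minimal sketch; the package names and versions mirror the pip commands above):

```python
# Sketch: report whether each pinned dependency matches the version
# the examples were tested against.
from importlib.metadata import version, PackageNotFoundError

PINS = {
    "langchain": "0.0.184",
    "chromadb": "0.3.25",
    "typing_extensions": "4.5.0",
}

def pin_status(package, pinned):
    """Return 'ok', 'mismatch (<installed>)', or 'missing' for a pinned package."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return "missing"
    return "ok" if installed == pinned else "mismatch (%s)" % installed

for package, pinned in PINS.items():
    print("%s==%s: %s" % (package, pinned, pin_status(package, pinned)))
```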

Convert Models using bigdl-llm

Follow the instructions in Convert model.

Run the examples

1. Streaming Chat

python ./streamchat.py -m CONVERTED_MODEL_PATH -x MODEL_FAMILY -q QUESTION -t THREAD_NUM

arguments info:

  • -m CONVERTED_MODEL_PATH: required, path to the converted model
  • -x MODEL_FAMILY: required, the model family of the model specified in -m, available options are llama, gptneox and bloom
  • -q QUESTION: question to ask. Default is What is AI?.
  • -t THREAD_NUM: specify the number of threads to use for inference. Default is 2.
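The flags above map onto a standard argparse setup. The following is an illustrative sketch only — argument names, choices, and defaults are taken from the list above; the script's actual implementation may differ:

```python
import argparse

# Sketch of an argument parser matching the documented streamchat.py flags.
def build_parser():
    parser = argparse.ArgumentParser(description="Streaming chat over a converted model")
    parser.add_argument("-m", "--model-path", required=True,
                        help="path to the converted model")
    parser.add_argument("-x", "--model-family", required=True,
                        choices=["llama", "gptneox", "bloom"],
                        help="model family of the converted model")
    parser.add_argument("-q", "--question", default="What is AI?",
                        help="question to ask")
    parser.add_argument("-t", "--thread-num", type=int, default=2,
                        help="number of threads to use for inference")
    return parser

# Only -m and -x are required; -q and -t fall back to their defaults.
args = build_parser().parse_args(["-m", "./CONVERTED_MODEL_PATH", "-x", "llama"])
print(args.question, args.thread_num)  # → What is AI? 2
```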

2. Question Answering over Docs

python ./docqa.py -m CONVERTED_MODEL_PATH -x MODEL_FAMILY -i DOC_PATH -q QUESTION -c CONTEXT_SIZE -t THREAD_NUM

arguments info:

  • -m CONVERTED_MODEL_PATH: required, path to the converted model from the step above
  • -x MODEL_FAMILY: required, the model family of the model specified in -m, available options are llama, gptneox and bloom
  • -i DOC_PATH: required, path to the input document
  • -q QUESTION: question to ask. Default is What is AI?.
  • -c CONTEXT_SIZE: specify the maximum context size. Default is 2048.
  • -t THREAD_NUM: specify the number of threads to use for inference. Default is 2.
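Under the hood, a "QA over docs" pipeline splits the input document into chunks, stores them in a vector store (here, chromadb), retrieves the chunks most relevant to the question, and passes them to the model. The following is a deliberately simplified stand-in for the retrieval step — plain word overlap instead of embeddings — just to illustrate the flow, not the actual docqa.py implementation:

```python
import re

# Tokenize to lowercase words, ignoring punctuation.
def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

# Split a document into fixed-size word chunks.
def split_chunks(text, chunk_size=50):
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

# Pick the chunk sharing the most words with the question
# (a real pipeline would rank by embedding similarity instead).
def top_chunk(question, chunks):
    q = tokens(question)
    return max(chunks, key=lambda c: len(q & tokens(c)))

doc = ("AI is the simulation of human intelligence by machines. "
       "Bread is baked from flour and water.")
chunks = split_chunks(doc, chunk_size=10)
print(top_chunk("What is AI?", chunks))
```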