* update convert
* change output name
* add description for input_path, add check for input_values
* basic support for command line
* fix style
* update based on comment
* update based on comment
* Init commit for bigdl.llm.transformers.AutoModelForCausalLM (see the loading sketch after this list)
* Temp change to avoid name conflicts with external transformers lib
* Support downloading model from huggingface
* Small python style fix
* Change location of transformers to avoid library conflicts
* Add return value for converted ggml binary ckpt path for convert_model
* Avoid repeated loading of shared library and add some comments
* Small fix
* Path type fix and docstring fix
* Small fix
* Small fix
* Change cache dir to pwd
* Renamed all bloomz to bloom in ggml/model & utils/convert_util.py
* Add an optional parameter for specifying the model conversion path to avoid running out of disk space
* add convert_model api (see the convert_model sketch after this list)
* change the model_path to input_path
* map int4 to q4_0
* fix blank line
* change bloomz to bloom
* remove default model_family
* change dtype to lowercase first
* Add dev wheel building script for LLM package on Windows
* delete conda
* delete python version check
* minor adjustments
* wheel name fixed
* test check
* test fix
* change wheel name
* first commit of CMakeLists.txt to include llama & gptneox
* initial support of quantize
* update cmake to only consider Linux for now
* support quantize interface (see the quantize sketch after this list)
* update based on comment
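
Loading sketch: a minimal, hypothetical illustration of the `bigdl.llm.transformers.AutoModelForCausalLM` entry point introduced above. The keyword arguments and the model id shown here are assumptions inferred from the commit messages (Hugging Face download support, `model_family`, `dtype`, cache dir defaulting to the working directory), not a confirmed signature.

```python
# Hypothetical sketch of bigdl.llm.transformers.AutoModelForCausalLM usage;
# argument names are assumptions inferred from the commit messages above.
from bigdl.llm.transformers import AutoModelForCausalLM

# Download the model from Hugging Face (or load a local checkpoint),
# convert it to a ggml binary, and load it for inference.
model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # placeholder model id or local path
    model_family="llama",             # supported family, e.g. llama/bloom/gptneox
    dtype="int4",                     # int4 is mapped to ggml q4_0
    cache_dir="./",                   # cache defaults to the current working directory
)
```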
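convert_model sketch: a hedged example of how the `convert_model` API might be called. The import path and exact parameter names are assumptions pieced together from the commit messages (`input_path`, output name, `model_family`, `dtype`, and the returned ggml binary checkpoint path).

```python
# Hypothetical usage sketch of the convert_model API; the import path and
# parameter names are assumptions inferred from the commit messages above.
from bigdl.llm.ggml import convert_model

# Convert an original checkpoint into a ggml binary checkpoint.
# dtype="int4" is mapped to the ggml q4_0 quantization type and is
# lower-cased internally.
converted_ckpt_path = convert_model(
    input_path="/path/to/original/llama/checkpoint",  # original model directory
    output_path="/path/to/output/dir",                # where the ggml binary is written
    model_family="llama",                             # e.g. llama/bloom/gptneox
    dtype="int4",
)

# convert_model returns the path of the converted ggml binary checkpoint.
print(converted_ckpt_path)
```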
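quantize sketch: a hedged illustration of the quantize interface added in the last few commits, which presumably wraps the native quantization tools built from the llama & gptneox CMake targets (Linux only for now). The function path and parameters below are assumptions, not a confirmed API.

```python
# Hypothetical sketch of the quantize interface; the import path and parameter
# names are assumptions based on the commit messages above.
from bigdl.llm.ggml import quantize

# Quantize a full-precision ggml checkpoint into a 4-bit (q4_0) ggml binary
# using the native quantization tool built for the given model family.
quantize(
    input_path="/path/to/ggml-model-f16.bin",
    output_path="/path/to/ggml-model-q4_0.bin",
    model_family="llama",  # llama or gptneox, matching the CMake targets
    dtype="int4",          # mapped to q4_0
)
```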