Rename all serving doc containing word: zoo (#4024)

parent cd06909e0f
commit 1422e99b56

8 changed files with 39 additions and 39 deletions

@@ -263,12 +263,12 @@
      "output_type": "stream",
      "text": [
       "Cluster Serving has been properly set up.\n",
-      "You did not specify ANALYTICS_ZOO_VERSION, will download 0.9.0\n",
-      "ANALYTICS_ZOO_VERSION is 0.9.0\n",
+      "You did not specify BIGDL_VERSION, will download 0.9.0\n",
+      "BIGDL_VERSION is 0.9.0\n",
       "BIGDL_VERSION is 0.12.1\n",
       "SPARK_VERSION is 2.4.3\n",
       "2.4\n",
-      "--2021-02-07 10:01:46--  https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-bigdl_0.12.1-spark_2.4.3/0.9.0/bigdl-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar\n",
+      "--2021-02-07 10:01:46--  https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar\n",
       "Resolving child-prc.intel.com (child-prc.intel.com)... You are installing Cluster Serving by pip, downloading...\n",
       "\n",
       "SIGHUP received.\n",
@@ -316,7 +316,7 @@
    "outputs": [],
    "source": [
     "# if you encounter slow download issue like above, you can just use following command to download\n",
-    "# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-bigdl_0.12.1-spark_2.4.3/0.9.0/bigdl-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar\n",
+    "# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar\n",
     "\n",
     "# if you are using wget to download, call mv *serving.jar bigdl.jar again after downloaded."
    ]
@@ -442,7 +442,7 @@
      "OK\n",
      "OK\n",
      "SLF4J: Class path contains multiple SLF4J bindings.\n",
-      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-bigdl_0.12.0-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
+      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\n",
@@ -454,7 +454,7 @@
      "Cluster Serving job submitted, check log in log-cluster_serving-serving_stream.txt\n",
      "To list Cluster Serving job status, use cluster-serving-cli list\n",
      "SLF4J: Class path contains multiple SLF4J bindings.\n",
-      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-bigdl_0.12.0-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
+      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\n",
@@ -691,7 +691,7 @@
    "outputs": [],
    "source": [
     "# start the http server via jar\n",
-    "# ! java -jar bigdl-bigdl_0.10.0-spark_2.4.3-0.9.0-SNAPSHOT-http.jar"
+    "# ! java -jar bigdl-spark_2.4.3-0.9.0-SNAPSHOT-http.jar"
    ]
   },
   {

@@ -98,7 +98,7 @@ model = tf.keras.models.load_model("./model.h5")
 tf.saved_model.save(model, "saved_model")
 ```
 ### Model - ckpt to Frozen Graph
-[freeze checkpoint example](https://github.com/intel-analytics/bigdl/tree/master/pyzoo/bigdl/examples/tensorflow/freeze_checkpoint)
+[freeze checkpoint example](https://github.com/intel-analytics/bigdl/tree/master/python/orca/example/freeze_checkpoint)
 ### Notes - Use SavedModel
 If model has single tensor input, then nothing to notice.
 

@@ -263,12 +263,12 @@
      "output_type": "stream",
      "text": [
       "Cluster Serving has been properly set up.\n",
-      "You did not specify ANALYTICS_ZOO_VERSION, will download 0.9.0\n",
-      "ANALYTICS_ZOO_VERSION is 0.9.0\n",
+      "You did not specify BIGDL_VERSION, will download 0.9.0\n",
+      "BIGDL_VERSION is 0.9.0\n",
       "BIGDL_VERSION is 0.12.1\n",
       "SPARK_VERSION is 2.4.3\n",
       "2.4\n",
-      "--2021-02-07 10:01:46--  https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-bigdl_0.12.1-spark_2.4.3/0.9.0/bigdl-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar\n",
+      "--2021-02-07 10:01:46--  https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar\n",
       "Resolving child-prc.intel.com (child-prc.intel.com)... You are installing Cluster Serving by pip, downloading...\n",
       "\n",
       "SIGHUP received.\n",
@@ -316,7 +316,7 @@
    "outputs": [],
    "source": [
     "# if you encounter slow download issue like above, you can just use following command to download\n",
-    "# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-bigdl_0.12.1-spark_2.4.3/0.9.0/bigdl-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar\n",
+    "# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar\n",
     "\n",
     "# if you are using wget to download, or get \"bigdl-xxx-serving.jar\" after \"ls\", please call mv *serving.jar bigdl.jar after downloaded."
    ]
@@ -442,7 +442,7 @@
      "OK\n",
      "OK\n",
      "SLF4J: Class path contains multiple SLF4J bindings.\n",
-      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-bigdl_0.12.0-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
+      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\n",
@@ -454,7 +454,7 @@
      "Cluster Serving job submitted, check log in log-cluster_serving-serving_stream.txt\n",
      "To list Cluster Serving job status, use cluster-serving-cli list\n",
      "SLF4J: Class path contains multiple SLF4J bindings.\n",
-      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-bigdl_0.12.0-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
+      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/bigdl-spark_2.4.3-0.9.0-SNAPSHOT-serving.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: Found binding in [jar:file:/home/user/dep/flink-1.11.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]\n",
      "SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\n",

@@ -252,8 +252,8 @@
      "Config file found in pip package, copying...\r\n",
      "Config file ready.\r\n",
      "Cluster Serving has been properly set up.\r\n",
-      "You did not specify ANALYTICS_ZOO_VERSION, will download 0.9.0\r\n",
-      "ANALYTICS_ZOO_VERSION is 0.9.0\r\n",
+      "You did not specify BIGDL_VERSION, will download 0.9.0\r\n",
+      "BIGDL_VERSION is 0.9.0\r\n",
       "BIGDL_VERSION is 0.12.1\r\n",
       "SPARK_VERSION is 2.4.3\r\n",
       "2.4\r\n",
@@ -303,7 +303,7 @@
    "outputs": [],
    "source": [
     "# if you encounter slow download issue like above, you can just use following command to download\n",
-    "# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-bigdl_0.12.1-spark_2.4.3/0.9.0/bigdl-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar\n",
+    "# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar\n",
     "\n",
     "# if you are using wget to download, or get \"bigdl-xxx-serving.jar\" after \"ls\", please call mv *serving.jar bigdl.jar after downloaded."
    ]
@@ -318,7 +318,7 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-       "bigdl-bigdl_0.12.1-spark_2.4.3-0.9.0-serving.jar  config.yaml  wget-log\r\n"
+       "bigdl-spark_2.4.3-0.9.0-serving.jar  config.yaml  wget-log\r\n"
      ]
     }
    ],

@@ -23,9 +23,9 @@ $ ./src/redis-server
 ```
 in IDE, embedded Flink would be used so that no dependency is needed.
 
-Once set up, you could copy the `/path/to/bigdl/scripts/cluster-serving/config.yaml` to `/path/to/bigdl/config.yaml`, and run `zoo/src/main/scala/com/intel/analytics/zoo/serving/ClusterServing.scala` in IDE. Since IDE consider `/path/to/bigdl/` as the current directory, it would read the config file in it.
+Once set up, you could copy the `/path/to/bigdl/scripts/cluster-serving/config.yaml` to `/path/to/bigdl/config.yaml`, and run `scala/serving/src/main/com/intel/analytics/bigdl/serving/ClusterServing.scala` in IDE. Since IDE consider `/path/to/bigdl/` as the current directory, it would read the config file in it.
 
-Run `zoo/src/main/scala/com/intel/analytics/zoo/serving/http/Frontend2.scala` if you use HTTP frontend.
+Run `scala/serving/src/main/com/intel/analytics/bigdl/serving/http/Frontend2.scala` if you use HTTP frontend.
 
 Once started, you could run python client code to finish an end-to-end test just as you run Cluster Serving in [Programming Guide](https://github.com/intel-analytics/bigdl/blob/master/docs/docs/ClusterServingGuide/ProgrammingGuide.md#4-model-inference).
 ### Test Package
@@ -33,14 +33,14 @@ Once you write the code and complete the test in IDE, you can package the jar an
 
 To package,
 ```
-cd /path/to/bigdl/zoo
+cd /path/to/bigdl/scala
 ./make-dist.sh
 ```
-Then, in `target` folder, copy `bigdl-xxx-flink-udf.jar` to your test directory, and rename it as `zoo.jar`, and also copy the `config.yaml` to your test directory.
+Then, in `target` folder, copy `bigdl-xxx-flink-udf.jar` to your test directory, and rename it as `bigdl.jar`, and also copy the `config.yaml` to your test directory.
 
-You could copy `/path/to/bigdl/scripts/cluster-serving/cluster-serving-start` to start Cluster Serving, this scripts will start Redis server for you and submit Flink job. If you prefer not to control Redis, you could use the command in it `${FLINK_HOME}/bin/flink run -c com.intel.analytics.zoo.serving.ClusterServing zoo.jar` to start Cluster Serving.
+You could copy `/path/to/bigdl/scripts/cluster-serving/cluster-serving-start` to start Cluster Serving, this scripts will start Redis server for you and submit Flink job. If you prefer not to control Redis, you could use the command in it `${FLINK_HOME}/bin/flink run -c com.intel.analytics.bigdl.serving.ClusterServing bigdl.jar` to start Cluster Serving.
 
-To run frontend, call `java -cp zoo.jar com.intel.analytics.zoo.serving.http.Frontend2`.
+To run frontend, call `java -cp bigdl.jar com.intel.analytics.bigdl.serving.http.Frontend2`.
 
 The rest are the same with test in IDE.
 
@@ -51,7 +51,7 @@ Data connector is the producer of Cluster Serving. The remote clients put data i
 
 To define a new data connector to, e.g. Kafka, Redis, or other database, you have to define a Flink Source first.
 
-You could refer to `com/intel/analytics/zoo/serving/engine/FlinkRedisSource.scala` as an example.
+You could refer to `com/intel/analytics/bigdl/serving/engine/FlinkRedisSource.scala` as an example.
 
 ```
 class FlinkRedisSource(params: ClusterServingHelper)
@@ -72,11 +72,11 @@ class FlinkRedisSource(params: ClusterServingHelper)
   }
 }
 ```
-Then you could refer to `com/intel/analytics/zoo/serving/engine/FlinkInference.scala` as the inference method to your new connector. Usually it could be directly used without new implementation. However, you could still define your new method if you need.
+Then you could refer to `com/intel/analytics/bigdl/serving/engine/FlinkInference.scala` as the inference method to your new connector. Usually it could be directly used without new implementation. However, you could still define your new method if you need.
 
 Finally, you have to define a Flink Sink, to write data back to data pipeline.
 
-You could refer to `com/intel/analytics/zoo/serving/engine/FlinkRedisSink.scala` as an example.
+You could refer to `com/intel/analytics/bigdl/serving/engine/FlinkRedisSink.scala` as an example.
 
 ```
 class FlinkRedisSink(params: ClusterServingHelper)
@@ -98,20 +98,20 @@ class FlinkRedisSink(params: ClusterServingHelper)
 Please note that normally you should do the space (memory or disk) control of your data pipeline in your code.
 
 
-Please locate Flink Source and Flink Sink code to `com/intel/analytics/zoo/serving/engine/`
+Please locate Flink Source and Flink Sink code to `com/intel/analytics/bigdl/serving/engine/`
 
-If you have some method which need to be wrapped as a class, you could locate them in `com/intel/analytics/zoo/serving/pipeline/`
+If you have some method which need to be wrapped as a class, you could locate them in `com/intel/analytics/bigdl/serving/pipeline/`
 #### Python Code (The Client)
-You could refer to `pyzoo/zoo/serving/client.py` to define your client code according to your data connector.
+You could refer to `python/serving/src/bigdl/serving/client.py` to define your client code according to your data connector.
 
-Please locate this part of code in `pyzoo/zoo/serving/data_pipeline_name/`, e.g. `pyzoo/zoo/serving/kafka/` if you create a Kafka connector.
+Please locate this part of code in `python/serving/src/bigdl/serving/data_pipeline_name/`, e.g. `python/serving/src/bigdl/serving/kafka/` if you create a Kafka connector.
 ##### put to data pipeline
 It is recommended to refer to `InputQueue.enqueue()` and `InputQueue.predict()` method. This method calls `self.data_to_b64` method first and add data to data pipeline. You could define a similar enqueue method to work with your data connector.
 ##### get from data pipeline
 It is recommended to refer to `OutputQueue.query()` and `OutputQueue.dequeue()` method. This method gets result from data pipeline and calls `self.get_ndarray_from_b64` method to decode. You could define a similar dequeue method to work with your data connector.
 
 ## Benchmark Test
-You could use `zoo/src/main/scala/com/intel/analytics/zoo/serving/engine/Operations.scala` to test the inference time of your model.
+You could use `scala/serving/src/main/com/intel/analytics/bigdl/serving/engine/Operations.scala` to test the inference time of your model.
 
 The script takes two arguments, run it with `-m modelPath` and `-j jsonPath` to indicate the path to the model and the path to the prepared json format operation template of the model.
 
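As context for the `enqueue`/`dequeue` pairing this hunk describes (encode to base64 on put, decode on get), here is a minimal stdlib-only round-trip sketch. It is illustrative, not the client's actual wire format: the real `client.py` works on numpy ndarrays over Redis, and the payload field names below are assumptions.

```python
import base64
import json
import struct

def data_to_b64(values, shape):
    # Pack a flat list of float32 values plus its shape into a base64 JSON
    # payload, in the spirit of the client's data_to_b64 helper.
    raw = struct.pack("%df" % len(values), *values)
    return json.dumps({"shape": shape, "data": base64.b64encode(raw).decode("ascii")})

def get_ndarray_from_b64(payload):
    # Inverse of data_to_b64: recover the flat value list and the shape.
    obj = json.loads(payload)
    raw = base64.b64decode(obj["data"])
    return list(struct.unpack("%df" % (len(raw) // 4), raw)), obj["shape"]

# Round trip: what an enqueue would send is what a dequeue would recover.
payload = data_to_b64([1.0, 2.0, 3.0, 4.0], [2, 2])
values, shape = get_ndarray_from_b64(payload)
```

A connector for another pipeline (e.g. Kafka) would keep this encode/decode symmetry and swap only the transport.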
@@ -50,7 +50,7 @@ pip install bigdl-serving
 #### Install nightly version
 Download package from [here](https://sourceforge.net/projects/bigdl/files/cluster-serving-py/), run following command to install Cluster Serving
 ```
-pip install analytics_zoo_serving-*.whl
+pip install bigdl_serving-*.whl
 ```
 For users who need to deploy and start Cluster Serving, run `cluster-serving-init` to download and prepare dependencies.
 
@@ -99,7 +99,7 @@ You need to put your model file into a directory with layout like following acco
 **note:** `.pb` is the weight file which name must be `frozen_inference_graph.pb`, `.json` is the inputs and outputs definition file which name must be `graph_meta.json`, with contents like `{"input_names":["input:0"],"output_names":["output:0"]}`
 
 ***Tensorflow Checkpoint***
-Please refer to [freeze checkpoint example](https://github.com/intel-analytics/bigdl/tree/master/pyzoo/bigdl/examples/tensorflow/freeze_checkpoint)
+Please refer to [freeze checkpoint example](https://github.com/intel-analytics/bigdl/tree/master/python/orca/example/freeze_checkpoint)
 
 **Pytorch**
 
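The `graph_meta.json` file required by the layout note in this hunk can be generated with a few lines of Python. A sketch under the stated format (only the two required keys); the tensor names `input:0`/`output:0` are the doc's example values and must match your model's actual graph tensors.

```python
import json

# The two keys the serving doc requires; tensor names are placeholders.
graph_meta = {
    "input_names": ["input:0"],
    "output_names": ["output:0"],
}

# Write next to frozen_inference_graph.pb; the file name must be graph_meta.json.
with open("graph_meta.json", "w") as f:
    json.dump(graph_meta, f)

# Read it back to confirm the contents survive the round trip.
with open("graph_meta.json") as f:
    loaded = json.load(f)
```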
@@ -107,7 +107,7 @@ Please refer to [freeze checkpoint example](https://github.com/intel-analytics/b
 |-- model
    |-- xx.pt
 ```
-Running Pytorch model needs extra dependency and config. Refer to [here](https://github.com/intel-analytics/bigdl/blob/master/pyzoo/bigdl/examples/pytorch/train/README.md) to install dependencies, and set environment variable `$PYTHONHOME` to your python, e.g. python could be run by `$PYTHONHOME/bin/python` and library is at `$PYTHONHOME/lib/`.
+Running Pytorch model needs extra dependency and config. Refer to [here](https://github.com/intel-analytics/bigdl/blob/master/python/orca/example/torchmodel/train/README.md) to install dependencies, and set environment variable `$PYTHONHOME` to your python, e.g. python could be run by `$PYTHONHOME/bin/python` and library is at `$PYTHONHOME/lib/`.
 
 **OpenVINO**
 

@@ -49,7 +49,7 @@ User can download a bigdl-${VERSION}-http.jar from the Nexus Repository with GAV
 ```
 <groupId>com.intel.analytics.bigdl</groupId>
 <artifactId>bigdl-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}</artifactId>
-<version>${ZOO_VERSION}</version>
+<version>${BIGDL_VERSION}</version>
 ```
 User can also build from the source code:
 ```
@@ -58,7 +58,7 @@ mvn clean package -P spark_2.4+ -Dmaven.test.skip=true
 #### Start the HTTP Server
 User can start the HTTP server with following command.
 ```
-java -jar bigdl-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ZOO_VERSION}-http.jar
+java -jar bigdl-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${BIGDL_VERSION}-http.jar
 ```
 And check the status of the HTTP server with:
 ```
@@ -68,7 +68,7 @@ If you get a response like "welcome to BigDL web serving frontend", that means t
 #### Start options
 User can pass options to the HTTP server when start it:
 ```
-java -jar bigdl-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${ZOO_VERSION}-http.jar --redisHost="172.16.0.109"
+java -jar bigdl-bigdl_${BIGDL_VERSION}-spark_${SPARK_VERSION}-${BIGDL_VERSION}-http.jar --redisHost="172.16.0.109"
 ```
 All the supported parameter are listed here:
 * **interface**: the binded server interface, default is "0.0.0.0"

@@ -10,7 +10,7 @@ This section provides a quick start example for you to run BigDL Cluster Serving
 Use one command to run Cluster Serving container. (We provide quick start model in older version of docker image, for newest version, please refer to following sections and we remove the model to reduce the docker image size).
 ```
 (bigdl-cluster-serving publish is in progress, so use following for now)
-docker run --name cluster-serving -itd --net=host intelanalytics/zoo-cluster-serving:0.9.1
+docker run --name cluster-serving -itd --net=host intelanalytics/bigdl-cluster-serving:0.9.1
 ```
 Log into the container using `docker exec -it cluster-serving bash`, and run
 ```