GE_USE_STATIC_MEMORY
Description
Sets the memory allocation mode when the network is running. The options are as follows:
- 0: dynamic memory allocation. Memory is allocated dynamically based on the actual size required.
- 2: dynamic memory expansion, supported only for static shapes. When the network is running, this setting enables memory reuse across multiple graphs in the same session: memory is allocated to fit the largest graph. If the memory required by the current graph exceeds that of the previous graph, the previous graph's memory is released and memory is reallocated based on the current graph's requirement.
- 3: dynamic memory expansion, supported only for dynamic shapes. This mode mitigates memory fragmentation during dynamic allocation and reduces the memory usage of dynamic-shape networks.
- 4: dynamic memory expansion, supported for both static and dynamic shapes.
The default value is 0. For compatibility with earlier versions, setting this environment variable to 1 has the same effect as setting it to 2: the system still performs dynamic memory expansion.
- This environment variable will be deprecated in later versions.
- In training and online inference scenarios, if multiple graphs are executed concurrently, this environment variable cannot be set to 2 or 4.
- In TensorFlow training and online inference scenarios, this environment variable cannot be used together with the static_memory_policy option; otherwise, conflicts occur while the network is running. You are advised to configure the memory allocation mode through the static_memory_policy parameter of the TF Adapter instead.
Example
export GE_USE_STATIC_MEMORY=2
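As an alternative to exporting the variable in the shell, it can be set from within the launch script. The sketch below only manipulates the process environment with the Python standard library; it assumes the variable is read when the framework initializes, so it must run before any GE or TF Adapter setup code.

```python
import os

# Select static-shape dynamic memory expansion (value "2"), enabling memory
# reuse across multiple graphs in the same session. This must happen before
# the framework initializes, since the variable is read at startup.
os.environ["GE_USE_STATIC_MEMORY"] = "2"

print(os.environ["GE_USE_STATIC_MEMORY"])
```

Child processes spawned after this point inherit the setting, so launching the training job from the same script has the same effect as the `export` command above.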
Restrictions
None