mindspore.hal
The hal module encapsulates interfaces for devices, streams, events, and memory. MindSpore abstracts these modules from the different backends, allowing users to schedule hardware resources at the Python layer.
Device
API Name | Description
mindspore.hal.device_count | Returns the device count of the specified backend.
mindspore.hal.get_arch_list | Get the architecture list this MindSpore was compiled for.
mindspore.hal.get_device_capability | Get the specified device's capability.
mindspore.hal.get_device_name | Get the specified device's name.
mindspore.hal.get_device_properties | Get the specified device's properties.
mindspore.hal.is_available | Returns whether the specified backend is available.
mindspore.hal.is_initialized | Returns whether the specified backend is initialized.
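As a minimal sketch of how these device queries can be combined, the snippet below probes the Ascend backend before configuring the context; this assumes an Ascend environment, and the same calls also accept "GPU" or "CPU" as the backend name.

```python
import mindspore as ms

# Probe the backend before configuring it; fall back to CPU if it is unavailable.
if ms.hal.is_available("Ascend"):
    ms.set_context(device_target="Ascend")
    print("device count:", ms.hal.device_count("Ascend"))
    print("device 0 name:", ms.hal.get_device_name(0, "Ascend"))
    print("device 0 properties:", ms.hal.get_device_properties(0, "Ascend"))
else:
    ms.set_context(device_target="CPU")
```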
Stream
API Name | Description
mindspore.hal.current_stream | Return the current stream used on this device.
mindspore.hal.default_stream | Return the default stream on this device.
mindspore.hal.set_cur_stream | Sets the current stream. This is a wrapper API to set the stream.
mindspore.hal.synchronize | Synchronize all streams on the current device. (Each MindSpore process occupies only one device.)
mindspore.hal.Stream | Wrapper around a device stream.
mindspore.hal.StreamCtx | Context-manager that selects a given stream.
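A minimal sketch of launching work on a user-created stream and then synchronizing the device; it assumes PyNative execution on an Ascend or GPU backend.

```python
import mindspore as ms
from mindspore import Tensor, ops

s = ms.hal.Stream()            # create a new device stream
a = Tensor(2.0, ms.float32)

with ms.hal.StreamCtx(s):      # operators launched inside the context run on stream `s`
    b = ops.sin(a)

ms.hal.synchronize()           # block until all streams on this device have finished
print(b)
print(ms.hal.current_stream() == ms.hal.default_stream())
```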
Event
API Name | Description
mindspore.hal.Event | Wrapper around a device event.
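A minimal sketch of timing device work with a pair of events, assuming an Ascend or GPU backend and that timing is enabled on both events.

```python
import mindspore as ms
from mindspore import Tensor, ops

start = ms.hal.Event(enable_timing=True)
end = ms.hal.Event(enable_timing=True)

x = Tensor(3.0, ms.float32)
start.record()                 # mark the start point on the current stream
y = ops.exp(x)
end.record()                   # mark the end point on the current stream

end.synchronize()              # wait until the end event has completed
print("elapsed ms:", start.elapsed_time(end))
```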
Memory
API Name | Description
mindspore.hal.max_memory_allocated | Returns the peak memory size of the memory pool actually occupied by Tensor since the process was started.
mindspore.hal.max_memory_reserved | Returns the peak value of the total memory managed by the memory pool since the process was started.
mindspore.hal.memory_allocated | Returns the actual memory size currently occupied by Tensor.
mindspore.hal.memory_reserved | Returns the total amount of memory currently managed by the memory pool.
mindspore.hal.memory_stats | Returns status information queried from the memory pool.
mindspore.hal.memory_summary | Returns readable memory pool status information.
mindspore.hal.reset_max_memory_reserved | Reset the peak memory size managed by the memory pool.
mindspore.hal.reset_max_memory_allocated | Reset the peak memory size of the memory pool actually occupied by Tensor.
mindspore.hal.reset_peak_memory_stats | Reset the "peak" stats tracked by the memory manager.
mindspore.hal.empty_cache | Release all memory fragments in the memory pool, so that memory arrangement will be optimized.
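A minimal sketch of inspecting the memory pool around a small allocation, assuming a device backend (Ascend or GPU) has already been selected via the context.

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor

x = Tensor(np.ones((1024, 1024), dtype=np.float32))
y = x * 2                                    # materializes a result on the device

print("allocated bytes:", ms.hal.memory_allocated())
print("reserved bytes:", ms.hal.memory_reserved())
print("peak allocated bytes:", ms.hal.max_memory_allocated())
print(ms.hal.memory_summary())               # human-readable memory pool report

ms.hal.empty_cache()                         # release idle fragments back to the device
ms.hal.reset_peak_memory_stats()             # restart peak tracking from current usage
```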