torch_geometric.profile¶
- profileit(): A decorator to facilitate profiling a function, e.g., obtaining training runtime and memory statistics of a specific model on a specific dataset.
- timeit(): A decorator to facilitate timing a function, e.g., obtaining the runtime of a specific model on a specific dataset.
- get_stats_summary(): Creates a summary of collected runtime and memory statistics.
- count_parameters(): Given a torch.nn.Module, count its trainable parameters.
- get_model_size(): Given a torch.nn.Module, get its actual disk size in bytes.
- get_data_size(): Given a torch_geometric.data.Data object, get its theoretical memory usage in bytes.
- get_cpu_memory_from_gc(): Returns the used CPU memory in bytes, as reported by the Python garbage collector.
- get_gpu_memory_from_gc(): Returns the used GPU memory in bytes, as reported by the Python garbage collector.
- get_gpu_memory_from_nvidia_smi(): Returns the free and used GPU memory in megabytes, as reported by nvidia-smi.
- profileit()[source]¶
A decorator to facilitate profiling a function, e.g., obtaining training runtime and memory statistics of a specific model on a specific dataset. Returns a Stats object with the attributes time, max_allocated_cuda, max_reserved_cuda, max_active_cuda, nvidia_smi_free_cuda and nvidia_smi_used_cuda.

```python
@profileit()
def train(model, optimizer, x, edge_index, y):
    optimizer.zero_grad()
    out = model(x, edge_index)
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()
    return float(loss)

loss, stats = train(model, optimizer, x, edge_index, y)
```
- timeit()[source]¶
A decorator to facilitate timing a function, e.g., obtaining the runtime of a specific model on a specific dataset.

```python
@timeit()
@torch.no_grad()
def test(model, x, edge_index):
    return model(x, edge_index)

z, time = test(model, x, edge_index)
```
- get_stats_summary(stats_list: List[torch_geometric.profile.profile.Stats])[source]¶
Creates a summary of collected runtime and memory statistics. Returns a StatsSummary object with the attributes time_mean, time_std, max_allocated_cuda, max_reserved_cuda, max_active_cuda, min_nvidia_smi_free_cuda and max_nvidia_smi_used_cuda.
  - Parameters
    stats_list (List[Stats]) – A list of Stats objects, as returned by profileit().
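As a hedged sketch of what the summary aggregates (assuming per-run `time` values collected from a `profileit()`-decorated function), the `time_mean`/`time_std` computation looks like:

```python
import statistics

# Hypothetical per-epoch runtimes, standing in for the `time`
# attribute of each collected Stats object:
times = [0.52, 0.48, 0.50]

time_mean = statistics.mean(times)  # arithmetic mean of the runtimes
time_std = statistics.stdev(times)  # sample standard deviation
print(time_mean, time_std)
```

The CUDA memory attributes are aggregated analogously (maxima or minima over the collected runs).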
- count_parameters(model: torch.nn.modules.module.Module) → int[source]¶
Given a torch.nn.Module, count its trainable parameters.
  - Parameters
    model (torch.nn.Module) – The model.
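A minimal sketch of this computation, assuming the usual definition of trainable parameters as those with `requires_grad=True`:

```python
import torch

def count_trainable_parameters(model: torch.nn.Module) -> int:
    # Sum the element counts of all parameters that require gradients.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# A Linear(16, 4) layer has 16*4 weights + 4 biases = 68 parameters.
model = torch.nn.Linear(16, 4)
print(count_trainable_parameters(model))  # 68
```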
- get_model_size(model: torch.nn.modules.module.Module) → int[source]¶
Given a torch.nn.Module, get its actual disk size in bytes.
  - Parameters
    model (torch.nn.Module) – The model.
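"Actual disk size" suggests serializing the model and measuring the resulting file; a sketch under that assumption (the real helper may save the full module rather than the state dict):

```python
import os
import tempfile

import torch

def model_size_bytes(model: torch.nn.Module) -> int:
    # Assumption: disk size is measured by saving the model with
    # torch.save and reading back the file size in bytes.
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, 'model.pt')
        torch.save(model.state_dict(), path)
        return os.path.getsize(path)

size = model_size_bytes(torch.nn.Linear(16, 4))
print(size)  # somewhat larger than the raw 68 * 4 parameter bytes
```

The file is larger than the raw parameter bytes because of pickle and metadata overhead, which is why disk size differs from the theoretical memory usage reported by get_data_size().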
- get_data_size(data: Union[torch_geometric.data.data.Data, torch_geometric.data.hetero_data.HeteroData]) → int[source]¶
Given a torch_geometric.data.Data object, get its theoretical memory usage in bytes.
  - Parameters
    data (torch_geometric.data.Data or torch_geometric.data.HeteroData) – The Data or HeteroData graph object.
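The theoretical memory usage of a graph object is the sum of the byte sizes of its tensor attributes. A sketch using a plain dict as a hypothetical stand-in for a Data object:

```python
import torch

# Hypothetical stand-in for the tensor attributes of a
# torch_geometric.data.Data object (node features + edge index):
data = {
    'x': torch.randn(100, 16),                            # float32: 4 bytes/elem
    'edge_index': torch.zeros(2, 300, dtype=torch.long),  # int64: 8 bytes/elem
}

def data_size_bytes(tensors):
    # Theoretical usage: numel * element_size, summed over all tensors.
    return sum(t.numel() * t.element_size() for t in tensors.values())

print(data_size_bytes(data))  # 100*16*4 + 2*300*8 = 11200
```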
- get_cpu_memory_from_gc() → int[source]¶
Returns the used CPU memory in bytes, as reported by the Python garbage collector.
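A hedged sketch of how a garbage-collector-based measurement can work: walk the objects tracked by `gc` and sum the storage of CPU tensors found there (an assumption about the helper's mechanism, not a guaranteed match for its implementation):

```python
import gc

import torch

def cpu_memory_from_gc() -> int:
    # Assumption: iterate objects tracked by the garbage collector
    # and accumulate the byte sizes of CPU-resident tensors.
    total = 0
    for obj in gc.get_objects():
        try:
            if isinstance(obj, torch.Tensor) and not obj.is_cuda:
                total += obj.numel() * obj.element_size()
        except Exception:  # some tracked objects raise on attribute access
            continue
    return total

x = torch.randn(1024)  # 4096 bytes of float32, visible to the collector
print(cpu_memory_from_gc() >= x.numel() * x.element_size())  # True
```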
- get_gpu_memory_from_gc(device: int = 0) → int[source]¶
Returns the used GPU memory in bytes, as reported by the Python garbage collector.
  - Parameters
    device (int, optional) – The GPU device identifier. (default: 0)