torch_geometric.data.InMemoryDataset

class InMemoryDataset(root: Optional[str] = None, transform: Optional[Callable] = None, pre_transform: Optional[Callable] = None, pre_filter: Optional[Callable] = None, log: bool = True, force_reload: bool = False)[source]

Bases: Dataset

Dataset base class for creating graph datasets which easily fit into CPU memory. See here for the accompanying tutorial.

Parameters:
  • root (str, optional) – Root directory where the dataset should be saved. (default: None)

  • transform (callable, optional) – A function/transform that takes in a Data or HeteroData object and returns a transformed version. The data object will be transformed before every access. (default: None)

  • pre_transform (callable, optional) – A function/transform that takes in a Data or HeteroData object and returns a transformed version. The data object will be transformed before being saved to disk. (default: None)

  • pre_filter (callable, optional) – A function that takes in a Data or HeteroData object and returns a boolean value, indicating whether the data object should be included in the final dataset. (default: None)

  • log (bool, optional) – Whether to print any console output while downloading and processing the dataset. (default: True)

  • force_reload (bool, optional) – Whether to re-process the dataset. (default: False)
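
In practice, InMemoryDataset is subclassed: raw_file_names, processed_file_names, download() and process() are implemented, and pre_filter/pre_transform are applied once inside process(). The following is a minimal sketch; the class name, file names, and the toy graphs it generates are placeholders, not part of the API.

import torch
from torch_geometric.data import Data, InMemoryDataset


class MyOwnDataset(InMemoryDataset):
    def __init__(self, root, transform=None, pre_transform=None,
                 pre_filter=None):
        super().__init__(root, transform, pre_transform, pre_filter)
        self.load(self.processed_paths[0])

    @property
    def raw_file_names(self):
        return ['raw_graphs.pt']  # hypothetical raw file name

    @property
    def processed_file_names(self):
        return ['data.pt']

    def download(self):
        # Download raw files to `self.raw_dir` here (omitted in this sketch).
        pass

    def process(self):
        # Build a list of Data objects; toy random graphs for illustration.
        data_list = [
            Data(x=torch.randn(4, 3),
                 edge_index=torch.tensor([[0, 1, 2], [1, 2, 3]]))
            for _ in range(10)
        ]

        if self.pre_filter is not None:
            data_list = [d for d in data_list if self.pre_filter(d)]
        if self.pre_transform is not None:
            data_list = [self.pre_transform(d) for d in data_list]

        self.save(data_list, self.processed_paths[0])

Instantiating the class, e.g. MyOwnDataset(root='./data/myset'), runs process() once and caches its output under root/processed; subsequent instantiations load the cached file via load().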

property raw_file_names: Union[str, List[str], Tuple[str, ...]]

The names of the files in the self.raw_dir folder that must be present in order to skip downloading.

property processed_file_names: Union[str, List[str], Tuple[str, ...]]

The names of the files in the self.processed_dir folder that must be present in order to skip processing.

property num_classes: int

Returns the number of classes in the dataset.

len() int[source]

Returns the number of data objects stored in the dataset.

get(idx: int) BaseData[source]

Gets the data object at index idx.
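
A short usage sketch for len() and get(); it uses TUDataset (ENZYMES) purely as a convenient built-in InMemoryDataset, which is an assumption of this example rather than part of the API described here:

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')  # downloads on first use

print(len(dataset))    # number of graphs stored in the dataset
print(dataset.get(0))  # raw access to the first graph; `transform` is not applied
print(dataset[0])      # indexed access; applies `transform`, if one was given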

classmethod save(data_list: Sequence[BaseData], path: str) None[source]

Saves a list of data objects to the file path path.

load(path: str, data_cls: Type[BaseData] = Data) None[source]

Loads the dataset from the file path path.
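
Since save() is a classmethod, it can be called without instantiating a dataset, which is convenient inside process() (as in the sketch above) or in one-off scripts. A hedged sketch with toy data and a placeholder file name:

import torch
from torch_geometric.data import Data, InMemoryDataset

data_list = [
    Data(x=torch.randn(4, 3),
         edge_index=torch.tensor([[0, 1, 2], [1, 2, 3]]))
    for _ in range(8)
]

# Collates `data_list` internally and writes it to the given path:
InMemoryDataset.save(data_list, 'toy_dataset.pt')

# `load()` is an instance method; in a subclass it is typically called as
# `self.load(self.processed_paths[0])` from `__init__`.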

static collate(data_list: Sequence[BaseData]) Tuple[BaseData, Optional[Dict[str, Tensor]]][source]

Collates a list of Data or HeteroData objects to the internal storage format of InMemoryDataset.
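
collate() is a static method, so it can also be used directly to inspect the internal storage format. In the sketch below, data is a single concatenated object and slices maps each attribute name to the slice boundaries needed to recover the individual examples (toy graphs, for illustration only):

import torch
from torch_geometric.data import Data, InMemoryDataset

data_list = [
    Data(x=torch.randn(3, 2),
         edge_index=torch.tensor([[0, 1], [1, 2]]))
    for _ in range(5)
]

data, slices = InMemoryDataset.collate(data_list)
print(data)          # one big `Data` object holding all five graphs
print(slices['x'])   # tensor of offsets delimiting each example's node features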

copy(idx: Optional[Union[slice, Tensor, ndarray, Sequence]] = None) InMemoryDataset[source]

Performs a deep-copy of the dataset. If idx is not given, will clone the full dataset. Otherwise, will only clone a subset of the dataset from indices idx. Indices can be slices, lists, tuples, and a torch.Tensor or np.ndarray of type long or bool.
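
A sketch of copy(), again using TUDataset (ENZYMES) only as a convenient concrete dataset; unlike index_select() below, copy() clones the underlying storage:

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')

full_clone = dataset.copy()             # deep copy of the whole dataset
subset_clone = dataset.copy([0, 1, 2])  # deep copy of the first three graphs
print(len(subset_clone))                # 3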

to_on_disk_dataset(root: Optional[str] = None, backend: str = 'sqlite', log: bool = True) OnDiskDataset[source]

Converts the InMemoryDataset to an OnDiskDataset variant. Useful for distributed training and hardware instances with a limited amount of shared memory.

Parameters:
  • root (str, optional) – Root directory where the dataset should be saved. If set to None, will save the dataset in root/on_disk. Note that it is important to specify root to account for different dataset splits. (default: None)

  • backend (str) – The Database backend to use. (default: "sqlite")

  • log (bool, optional) – Whether to print any console output while processing the dataset. (default: True)
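
A conversion sketch; the dataset choice and paths are illustrative assumptions. With root left unset, the SQLite database is written under the dataset's root/on_disk directory, per the root parameter above:

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')

on_disk_dataset = dataset.to_on_disk_dataset(backend='sqlite')
print(on_disk_dataset[0])  # examples are now read from the database on access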

to(device: Union[int, str]) InMemoryDataset[source]

Performs device conversion of the whole dataset.

cpu(*args: str) InMemoryDataset[source]

Moves the dataset to CPU memory.

cuda(device: Optional[Union[int, str]] = None) InMemoryDataset[source]

Moves the dataset to CUDA memory.
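
A device-conversion sketch (TUDataset again used only as a convenient example); to(), cpu() and cuda() all return the converted dataset:

import torch
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')

dataset = dataset.to('cpu')        # explicit device string
if torch.cuda.is_available():
    dataset = dataset.cuda()       # equivalent to dataset.to('cuda')
dataset = dataset.cpu()            # back to CPU memory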

download() None

Downloads the dataset to the self.raw_dir folder.

get_summary() Any

Collects summary statistics for the dataset.

property has_download: bool

Checks whether the dataset defines a download() method.

property has_process: bool

Checks whether the dataset defines a process() method.

index_select(idx: Union[slice, Tensor, ndarray, Sequence]) Dataset

Creates a subset of the dataset from specified indices idx. Indices idx can be a slicing object, e.g., [2:5], a list, a tuple, or a torch.Tensor or np.ndarray of type long or bool.
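
A sketch of the accepted index types; index_select() only records the chosen indices, so the underlying storage is shared with the original dataset (use copy() above for an actual clone):

import torch
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')

subset = dataset.index_select(slice(2, 5))              # same as dataset[2:5]
subset = dataset.index_select([0, 2, 4])                # list of indices
subset = dataset.index_select(torch.tensor([0, 2, 4]))  # long tensor
mask = torch.zeros(len(dataset), dtype=torch.bool)
mask[:10] = True
subset = dataset.index_select(mask)                     # boolean mask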

property num_edge_features: int

Returns the number of features per edge in the dataset.

property num_features: int

Returns the number of features per node in the dataset. Alias for num_node_features.

property num_node_features: int

Returns the number of features per node in the dataset.
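
A quick sketch exercising the dataset-level properties documented above (num_classes, num_edge_features, num_features, num_node_features):

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')

print(dataset.num_classes)        # number of graph classes
print(dataset.num_node_features)  # node feature dimensionality
print(dataset.num_features)       # alias for num_node_features
print(dataset.num_edge_features)  # 0 if the graphs carry no edge attributes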

print_summary() None

Prints summary statistics of the dataset to the console.
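
A sketch of the two summary helpers; get_summary() (documented above) returns a summary object that can also be printed directly, while print_summary() writes the same statistics to the console:

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')

summary = dataset.get_summary()
print(summary)            # e.g. number of graphs, node/edge count statistics
dataset.print_summary()   # same output, printed directly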

process() None

Processes the dataset to the self.processed_dir folder.

property processed_paths: List[str]

The absolute filepaths that must be present in order to skip processing.

property raw_paths: List[str]

The absolute filepaths that must be present in order to skip downloading.

shuffle(return_perm: bool = False) Union[Dataset, Tuple[Dataset, Tensor]]

Randomly shuffles the examples in the dataset.

Parameters:

return_perm (bool, optional) – If set to True, will also return the random permutation used to shuffle the dataset. (default: False)
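
A shuffling sketch; with return_perm=True the permutation tensor is returned alongside the shuffled dataset, e.g. to apply the same ordering to external labels:

from torch_geometric.datasets import TUDataset

dataset = TUDataset(root='./data/ENZYMES', name='ENZYMES')

shuffled = dataset.shuffle()
shuffled, perm = dataset.shuffle(return_perm=True)
print(perm[:5])  # original indices of the first five shuffled examples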

to_datapipe() Any

Converts the dataset into a torch.utils.data.DataPipe.

The returned instance can then be used with built-in DataPipes for batching graphs as follows:

from torch_geometric.datasets import QM9

dp = QM9(root='./data/QM9/').to_datapipe()
dp = dp.batch_graphs(batch_size=2, drop_last=True)

for batch in dp:
    pass

See the PyTorch tutorial for further background on DataPipes.