deepmd.pt.model.descriptor.repformer_layer

Module Contents

Classes

Atten2Map

Compute multi-head attention weights from the pair-wise invariant (g2) and equivariant (h2) representations.

Atten2MultiHeadApply

Apply multi-head attention weights to the pair-wise invariant representation g2.

Atten2EquiVarApply

Apply attention weights to the pair-wise equivariant representation h2.

LocalAtten

Local attention over an atom's neighborhood, updating the atomic invariant representation g1.

RepformerLayer

A single repformer layer that updates the g1, g2, and h2 representations.

Functions

get_residual(→ torch.Tensor)

Get residual tensor for one update vector.

_make_nei_g1(→ torch.Tensor)

Make neighbor-wise atomic invariant rep.

_apply_nlist_mask(→ torch.Tensor)

Apply nlist mask to neighbor-wise rep tensors.

_apply_switch(→ torch.Tensor)

Apply switch function to neighbor-wise rep tensors.

deepmd.pt.model.descriptor.repformer_layer.get_residual(_dim: int, _scale: float, _mode: str = 'norm', trainable: bool = True, precision: str = 'float64') → torch.Tensor[source]

Get residual tensor for one update vector.

Parameters:
_dim : int

The dimension of the update vector.

_scale

The initial scale of the residual tensor. See _mode for details.

_mode

The mode of residual initialization for the residual tensor.

- "norm" (default): init residual using normal with _scale std.
- "const": init residual using element-wise constants of _scale.

trainable

Whether the residual tensor is trainable.

precision

The precision of the residual tensor.
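
A minimal usage sketch, assuming a deepmd-kit installation with the PyTorch backend (the call follows the signature above; the output shape is assumed to be (_dim,)):

from deepmd.pt.model.descriptor.repformer_layer import get_residual

# Residual scaling for a 16-dimensional update vector, drawn from a
# normal distribution with std 0.001 ("norm" mode).
res = get_residual(16, 0.001, _mode="norm", trainable=True, precision="float32")
print(res.shape)  # expected: torch.Size([16])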

deepmd.pt.model.descriptor.repformer_layer._make_nei_g1(g1_ext: torch.Tensor, nlist: torch.Tensor) → torch.Tensor[source]

Make neighbor-wise atomic invariant rep.

Parameters:
g1_ext

Extended atomic invariant rep, with shape nb x nall x ng1.

nlist

Neighbor list, with shape nb x nloc x nnei.

Returns:
gg1: torch.Tensor

Neighbor-wise atomic invariant rep, with shape nb x nloc x nnei x ng1.
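
The gather semantics can be reproduced with plain torch ops; a sketch of the equivalent indexing, not necessarily the library's exact implementation:

import torch

nb, nall, nloc, nnei, ng1 = 2, 10, 6, 4, 8
g1_ext = torch.randn(nb, nall, ng1)
nlist = torch.randint(0, nall, (nb, nloc, nnei))

# Gather each neighbor's invariant rep: gg1[b, i, j] = g1_ext[b, nlist[b, i, j]]
index = nlist.reshape(nb, nloc * nnei, 1).expand(-1, -1, ng1)
gg1 = torch.gather(g1_ext, dim=1, index=index).view(nb, nloc, nnei, ng1)
print(gg1.shape)  # torch.Size([2, 6, 4, 8])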

deepmd.pt.model.descriptor.repformer_layer._apply_nlist_mask(gg: torch.Tensor, nlist_mask: torch.Tensor) → torch.Tensor[source]

Apply nlist mask to neighbor-wise rep tensors.

Parameters:
gg

Neighbor-wise rep tensors, with shape nf x nloc x nnei x d.

nlist_mask

Neighbor list mask, where zero means no neighbor, with shape nf x nloc x nnei.
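
The masking is a broadcast fill; a sketch of the equivalent operation:

import torch

nf, nloc, nnei, d = 1, 3, 4, 5
gg = torch.randn(nf, nloc, nnei, d)
nlist_mask = torch.randint(0, 2, (nf, nloc, nnei), dtype=torch.bool)

# Zero out the rep entries belonging to padded (non-existent) neighbors.
gg_masked = gg.masked_fill(~nlist_mask.unsqueeze(-1), 0.0)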

deepmd.pt.model.descriptor.repformer_layer._apply_switch(gg: torch.Tensor, sw: torch.Tensor) → torch.Tensor[source]

Apply switch function to neighbor-wise rep tensors.

Parameters:
gg

Neighbor-wise rep tensors, with shape nf x nloc x nnei x d.

sw

The switch function, which equals 1 within the rcut_smth range, smoothly decays from 1 to 0 between rcut_smth and rcut, and remains 0 beyond rcut, with shape nf x nloc x nnei.
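
Equivalently, a broadcast multiplication by the switch values (a sketch; the real function may handle additional masking details):

import torch

nf, nloc, nnei, d = 1, 3, 4, 5
gg = torch.randn(nf, nloc, nnei, d)
sw = torch.rand(nf, nloc, nnei)  # 1 within rcut_smth, decaying to 0 at rcut

# Scale each neighbor's rep by its switch value.
gg_switched = gg * sw.unsqueeze(-1)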

class deepmd.pt.model.descriptor.repformer_layer.Atten2Map(input_dim: int, hidden_dim: int, head_num: int, has_gate: bool = False, smooth: bool = True, attnw_shift: float = 20.0, precision: str = 'float64')[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

forward(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) → torch.Tensor[source]
serialize() → dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) → Atten2Map[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.
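
A usage sketch for Atten2Map, using the constructor keywords from the signature above; the input shapes follow the conventions documented elsewhere in this module, and the output shape is not asserted since it is not documented here:

import torch

from deepmd.pt.model.descriptor.repformer_layer import Atten2Map

nb, nloc, nnei, ng2 = 1, 4, 6, 16
attn = Atten2Map(input_dim=ng2, hidden_dim=8, head_num=2, precision="float32")

g2 = torch.randn(nb, nloc, nnei, ng2)
h2 = torch.randn(nb, nloc, nnei, 3)
nlist_mask = torch.ones(nb, nloc, nnei, dtype=torch.bool)
sw = torch.ones(nb, nloc, nnei)

aa = attn(g2, h2, nlist_mask, sw)  # attention weights over neighbor pairs

# Round-trip through the dict representation.
attn_restored = Atten2Map.deserialize(attn.serialize())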

class deepmd.pt.model.descriptor.repformer_layer.Atten2MultiHeadApply(input_dim: int, head_num: int, precision: str = 'float64')[source]

Bases: torch.nn.Module

Inherits the standard torch.nn.Module behavior (see the notes under Atten2Map above).

forward(AA: torch.Tensor, g2: torch.Tensor) → torch.Tensor[source]
serialize() → dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) → Atten2MultiHeadApply[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.

class deepmd.pt.model.descriptor.repformer_layer.Atten2EquiVarApply(input_dim: int, head_num: int, precision: str = 'float64')[source]

Bases: torch.nn.Module

Inherits the standard torch.nn.Module behavior (see the notes under Atten2Map above).

forward(AA: torch.Tensor, h2: torch.Tensor) → torch.Tensor[source]
serialize() → dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) → Atten2EquiVarApply[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.

class deepmd.pt.model.descriptor.repformer_layer.LocalAtten(input_dim: int, hidden_dim: int, head_num: int, smooth: bool = True, attnw_shift: float = 20.0, precision: str = 'float64')[source]

Bases: torch.nn.Module

Inherits the standard torch.nn.Module behavior (see the notes under Atten2Map above).

forward(g1: torch.Tensor, gg1: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) → torch.Tensor[source]
serialize() → dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) → LocalAtten[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.

class deepmd.pt.model.descriptor.repformer_layer.RepformerLayer(rcut, rcut_smth, sel: int, ntypes: int, g1_dim=128, g2_dim=16, axis_neuron: int = 4, update_chnnl_2: bool = True, update_g1_has_conv: bool = True, update_g1_has_drrd: bool = True, update_g1_has_grrg: bool = True, update_g1_has_attn: bool = True, update_g2_has_g1g1: bool = True, update_g2_has_attn: bool = True, update_h2: bool = False, attn1_hidden: int = 64, attn1_nhead: int = 4, attn2_hidden: int = 16, attn2_nhead: int = 4, attn2_has_gate: bool = False, activation_function: str = 'tanh', update_style: str = 'res_avg', update_residual: float = 0.001, update_residual_init: str = 'norm', smooth: bool = True, precision: str = 'float64', trainable_ln: bool = True, ln_eps: float | None = 1e-05)[source]

Bases: torch.nn.Module

Inherits the standard torch.nn.Module behavior (see the notes under Atten2Map above).

cal_1_dim(g1d: int, g2d: int, ax: int) → int[source]
_update_h2(h2: torch.Tensor, attn: torch.Tensor) → torch.Tensor[source]

Calculate the attention weights update for pair-wise equivariant rep.

Parameters:
h2

Pair-wise equivariant rep tensors, with shape nf x nloc x nnei x 3.

attn

Attention weights from g2 attention, with shape nf x nloc x nnei x nnei x nh2.

_update_g1_conv(gg1: torch.Tensor, g2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) → torch.Tensor[source]

Calculate the convolution update for atomic invariant rep.

Parameters:
gg1

Neighbor-wise atomic invariant rep, with shape nb x nloc x nnei x ng1.

g2

Pair invariant rep, with shape nb x nloc x nnei x ng2.

nlist_mask

Neighbor list mask, where zero means no neighbor, with shape nb x nloc x nnei.

sw

The switch function, which equals 1 within the rcut_smth range, smoothly decays from 1 to 0 between rcut_smth and rcut, and remains 0 beyond rcut, with shape nb x nloc x nnei.

static _cal_hg(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor, smooth: bool = True, epsilon: float = 0.0001) → torch.Tensor[source]

Calculate the transposed rotation matrix.

Parameters:
g2

Neighbor-wise/Pair-wise invariant rep tensors, with shape nb x nloc x nnei x ng2.

h2

Neighbor-wise/Pair-wise equivariant rep tensors, with shape nb x nloc x nnei x 3.

nlist_mask

Neighbor list mask, where zero means no neighbor, with shape nb x nloc x nnei.

sw

The switch function, which equals 1 within the rcut_smth range, smoothly decays from 1 to 0 between rcut_smth and rcut, and remains 0 beyond rcut, with shape nb x nloc x nnei.

smooth

Whether to use smoothness in processes such as attention weights calculation.

epsilon

Small protection constant for the 1/nnei normalization, guarding against division by zero when an atom has no real neighbors.

Returns:
hg

The transposed rotation matrix, with shape nb x nloc x 3 x ng2.
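
In plain torch, the core contraction (ignoring the mask, switch, and epsilon handling) reduces each neighborhood to a normalized sum of outer products; a sketch:

import torch

nb, nloc, nnei, ng2 = 1, 4, 6, 16
g2 = torch.randn(nb, nloc, nnei, ng2)
h2 = torch.randn(nb, nloc, nnei, 3)

# hg[b, i] = (1 / nnei) * sum_j outer(h2[b, i, j], g2[b, i, j])
hg = torch.einsum("blnh,blng->blhg", h2, g2) / nnei
print(hg.shape)  # torch.Size([1, 4, 3, 16])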

static _cal_grrg(h2g2: torch.Tensor, axis_neuron: int) → torch.Tensor[source]

Calculate the atomic invariant rep.

Parameters:
h2g2

The transposed rotation matrix, with shape nb x nloc x 3 x ng2.

axis_neuron

Size of the submatrix.

Returns:
grrg

Atomic invariant rep, with shape nb x nloc x (axis_neuron x ng2)
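
Continuing the sketch above, the invariant is obtained by contracting an axis_neuron-column sub-block of hg against the full matrix; normalization constants are omitted here as an assumption:

import torch

nb, nloc, ng2, axis_neuron = 1, 4, 16, 4
hg = torch.randn(nb, nloc, 3, ng2)                 # output of _cal_hg

hg_sub = hg[..., :axis_neuron]                     # nb x nloc x 3 x axis_neuron
grrg = torch.matmul(hg_sub.transpose(-1, -2), hg)  # nb x nloc x axis_neuron x ng2
grrg = grrg.reshape(nb, nloc, axis_neuron * ng2)   # rotationally invariant rep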

symmetrization_op(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor, axis_neuron: int, smooth: bool = True, epsilon: float = 0.0001) → torch.Tensor[source]

Symmetrization operator to obtain atomic invariant rep.

Parameters:
g2

Neighbor-wise/Pair-wise invariant rep tensors, with shape nb x nloc x nnei x ng2.

h2

Neighbor-wise/Pair-wise equivariant rep tensors, with shape nb x nloc x nnei x 3.

nlist_mask

Neighbor list mask, where zero means no neighbor, with shape nb x nloc x nnei.

sw

The switch function, which equals 1 within the rcut_smth range, smoothly decays from 1 to 0 between rcut_smth and rcut, and remains 0 beyond rcut, with shape nb x nloc x nnei.

axis_neuron

Size of the submatrix.

smooth

Whether to use smoothness in processes such as attention weights calculation.

epsilon

Small protection constant for the 1/nnei normalization, guarding against division by zero when an atom has no real neighbors.

Returns:
grrg

Atomic invariant rep, with shape nb x nloc x (axis_neuron x ng2)

_update_g2_g1g1(g1: torch.Tensor, gg1: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) → torch.Tensor[source]

Update the g2 using element-wise dot g1_i * g1_j.

Parameters:
g1

Atomic invariant rep, with shape nb x nloc x ng1.

gg1

Neighbor-wise atomic invariant rep, with shape nb x nloc x nnei x ng1.

nlist_mask

Neighbor list mask, where zero means no neighbor, with shape nb x nloc x nnei.

sw

The switch function, which equals 1 within the rcut_smth range, smoothly decays from 1 to 0 between rcut_smth and rcut, and remains 0 beyond rcut, with shape nb x nloc x nnei.
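
The pairing itself is a broadcast elementwise product of the central atom's g1 with each neighbor's g1 (a sketch; mask and switch handling omitted):

import torch

nb, nloc, nnei, ng1 = 1, 4, 6, 32
g1 = torch.randn(nb, nloc, ng1)
gg1 = torch.randn(nb, nloc, nnei, ng1)

# ret[b, i, j] = g1[b, i] * gg1[b, i, j]  (elementwise over ng1)
ret = g1.unsqueeze(-2) * gg1  # nb x nloc x nnei x ng1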

forward(g1_ext: torch.Tensor, g2: torch.Tensor, h2: torch.Tensor, nlist: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor)[source]

Parameters:
g1_ext

Extended single-atom channel, with shape nf x nall x ng1.

g2

Pair-atom invariant channel, with shape nf x nloc x nnei x ng2.

h2

Pair-atom equivariant channel, with shape nf x nloc x nnei x 3.

nlist

Neighbor list (padded neighbors are set to 0), with shape nf x nloc x nnei.

nlist_mask

Masks of the neighbor list: 1 for a real neighbor, 0 otherwise, with shape nf x nloc x nnei.

sw

Switch function, with shape nf x nloc x nnei.

Returns:
g1: torch.Tensor

Updated single-atom channel, with shape nf x nloc x ng1.

g2: torch.Tensor

Updated pair-atom invariant channel, with shape nf x nloc x nnei x ng2.

h2: torch.Tensor

Updated pair-atom equivariant channel, with shape nf x nloc x nnei x 3.
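
A forward-pass sketch with random inputs; the constructor keywords come from the signature above, and the tensor shapes follow the parameter documentation (in practice these inputs are produced by the descriptor's neighbor-list machinery, so treat this only as a shape-level illustration):

import torch

from deepmd.pt.model.descriptor.repformer_layer import RepformerLayer

nf, nloc, nall, nnei, ng1, ng2 = 1, 4, 10, 6, 32, 16
layer = RepformerLayer(
    rcut=6.0,
    rcut_smth=0.5,
    sel=nnei,
    ntypes=2,
    g1_dim=ng1,
    g2_dim=ng2,
    precision="float32",
)

g1_ext = torch.randn(nf, nall, ng1)
g2 = torch.randn(nf, nloc, nnei, ng2)
h2 = torch.randn(nf, nloc, nnei, 3)
nlist = torch.randint(0, nall, (nf, nloc, nnei))
nlist_mask = torch.ones(nf, nloc, nnei, dtype=torch.bool)
sw = torch.ones(nf, nloc, nnei)

g1_new, g2_new, h2_new = layer(g1_ext, g2, h2, nlist, nlist_mask, sw)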
list_update_res_avg(update_list: List[torch.Tensor]) → torch.Tensor[source]
list_update_res_incr(update_list: List[torch.Tensor]) → torch.Tensor[source]
list_update_res_residual(update_list: List[torch.Tensor], update_name: str = 'g1') → torch.Tensor[source]
list_update(update_list: List[torch.Tensor], update_name: str = 'g1') → torch.Tensor[source]
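
These helpers combine the per-branch updates according to update_style. A sketch of a "res_avg"-style aggregation, assuming the updates are summed and rescaled by 1/sqrt(n) (an assumption about the exact scaling, not a quote of the implementation):

import torch
from typing import List

def res_avg(update_list: List[torch.Tensor]) -> torch.Tensor:
    # Sum the collected update tensors and damp by sqrt of their count.
    out = update_list[0]
    for uu in update_list[1:]:
        out = out + uu
    return out / (len(update_list) ** 0.5)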
serialize() → dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) → RepformerLayer[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.