# Post-ops

Post-ops are operations that are appended after a primitive. They are implemented using the Attributes mechanism. If there are multiple post-ops, they are executed in the order they were appended.

The post-ops are represented by dnnl::post_ops, which is copied once it is attached to the attributes using the dnnl::primitive_attr::set_post_ops() function. The attributes then need to be passed to a primitive descriptor creation function to take effect. Below is a simple sketch:

```cpp
dnnl::post_ops po; // default empty post-ops
assert(po.len() == 0); // no post-ops attached

po.append_SOMETHING(params); // append some particular post-op
po.append_SOMETHING_ELSE(other_params); // append one more post-op

// (!) Note that the order in which post-ops are appended matters!
assert(po.len() == 2);

dnnl::primitive_attr attr; // default attributes
attr.set_post_ops(po); // attach the post-ops to the attr
// any changes to po after this point don't affect the value stored in attr

primitive::primitive_desc op_pd(params, attr); // create a pd with the attr
```


Note

Different primitives may have different post-ops support. Moreover, the support might also depend on the actual implementation of a primitive. Robust code should therefore be able to handle errors accordingly. See Attribute Related Error Handling.

Note

Post-ops do not change memory format of the operation destination memory object.

The post-op objects can be inspected using the dnnl::post_ops::kind() function, which takes the index of the post-op to inspect (the index must be less than the value returned by dnnl::post_ops::len()) and returns its kind.

## Supported Post-ops

### Eltwise Post-op

The eltwise post-op is appended using the dnnl::post_ops::append_eltwise() function. For such a post-op, dnnl::post_ops::kind() returns dnnl::primitive::kind::eltwise.

The eltwise post-op replaces:

$\dst[:] = \operatorname{Op}(...)$

with

$\dst[:] = scale \cdot \operatorname{eltwise}(\operatorname{Op}(...))$

The intermediate result of $\operatorname{Op}(...)$ is not preserved.

The $scale$ factor is supported in int8 inference only. For all other cases the scale must be 1.0.

### Sum Post-op

The sum post-op accumulates the result of a primitive with the existing data and is appended using the dnnl::post_ops::append_sum() function. For such a post-op, dnnl::post_ops::kind() returns dnnl::primitive::kind::sum.

Prior to accumulating the result, the existing value is multiplied by scale. The $scale$ factor is supported in int8 inference only and should be used only when the result and the existing data have different magnitudes. For all other cases the scale must be 1.0.

The sum post-op replaces

$\dst[:] = \operatorname{Op}(...)$

with

$\dst[:] = scale \cdot \dst[:] + \operatorname{Op}(...)$

### Examples of Chained Post-ops

Post-ops can be chained together by appending one after another. Note that the order matters: the post-ops are executed in the order they have been appended.

#### Sum -> ReLU

This pattern is common in CNN topologies of the ResNet family.

```cpp
dnnl::post_ops po;
po.append_sum(
        /* scale = */ 1.f);
po.append_eltwise(
        /* scale     = */ 1.f,
        /* algorithm = */ dnnl::algorithm::eltwise_relu,
        /* neg slope = */ 0.f,
        /* unused for ReLU */ 0.f);

dnnl::primitive_attr attr;
attr.set_post_ops(po);

auto conv_pd = convolution_forward::primitive_desc(conv_d, attr, engine);
```


This will lead to the following computations:

$\dst[:] = \operatorname{ReLU}(\dst[:] + \operatorname{conv}(\src[:], \weights[:]))$

## API

struct dnnl::post_ops

Post-ops.

Post-ops are computations executed after the main primitive computations and are attached to the primitive via primitive attributes.

Public Functions

post_ops()

Constructs an empty sequence of post-ops.

int len() const

Returns the number of post-ops entries.

primitive::kind kind(int index) const

Returns the primitive kind of the post-op at the entry with the given index.

Return

Primitive kind of the post-op at the specified index.

Parameters
• index: Index of the post-op to return the kind for.

void append_sum(float scale = 1.)

Appends an accumulation (sum) post-op. Prior to accumulating the result, the previous value is multiplied by the scaling factor scale.

The kind of this post-op is dnnl::primitive::kind::sum.

This feature may improve performance for cases like residual learning blocks, where the result of convolution is accumulated to the previously computed activations. The parameter scale may be used for the integer-based computations when the result and previous activations have different logical scaling factors.

In the simplest case when the accumulation is the only post-op, the computations would be dst[:] := scale * dst[:] + op(...) instead of dst[:] := op(...).

Note

This post-op executes in-place and does not change the destination layout.

Parameters
• scale: Scaling factor.

void get_params_sum(int index, float &scale) const

Returns the parameters of an accumulation (sum) post-op.

Parameters
• index: Index of the sum post-op.

• scale: Scaling factor of the sum post-op.

void append_eltwise(float scale, algorithm aalgorithm, float alpha, float beta)

Appends an elementwise post-op.

The kind of this post-op is dnnl::primitive::kind::eltwise.

In the simplest case when the elementwise operation is the only post-op, the computations would be dst[:] := scale * eltwise_op(op(...)) instead of dst[:] := op(...), where eltwise_op is configured with the given parameters.

Parameters
• scale: Scaling factor.

• aalgorithm: Elementwise algorithm.

• alpha: Alpha parameter for the elementwise algorithm.

• beta: Beta parameter for the elementwise algorithm.

void get_params_eltwise(int index, float &scale, algorithm &aalgorithm, float &alpha, float &beta) const

Returns the parameters of an elementwise post-op.

Parameters
• index: Index of the post-op.

• scale: Output scaling factor.

• aalgorithm: Output elementwise algorithm kind.

• alpha: Output alpha parameter for the elementwise algorithm.

• beta: Output beta parameter for the elementwise algorithm.