LostTech.TensorFlow : API Documentation

Type tf.keras.activations

Namespace tensorflow

Public static methods

object deserialize(IDictionary<string, object> name, IDictionary<string, object> custom_objects)

Inverse of the `serialize` function.
Parameters
IDictionary<string, object> name
IDictionary<string, object> custom_objects
Optional dictionary mapping names (strings) to custom objects (classes and functions) to be considered during deserialization.
Returns
object
A Keras activation function.

object deserialize(string name, IDictionary<string, object> custom_objects)

Returns an activation function from its config.

object deserialize(PythonClassContainer name, IDictionary<string, object> custom_objects)

Instantiates an activation function from a config dictionary.
Parameters
PythonClassContainer name
IDictionary<string, object> custom_objects
Optional dictionary mapping class names (or function names) of custom (non-Keras) objects to the corresponding classes/functions, to be considered during deserialization.
Returns
object
The deserialized activation function.
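
As a rough sketch of how the `deserialize` overloads are typically used (written, like the other examples on this page, against the Python `tf.keras` API that this binding wraps), the snippet below recovers a built-in activation from its serialized name; `clipped_linear` is a hypothetical custom activation used only to illustrate `custom_objects`.

import tensorflow as tf

# Recover the built-in ReLU activation from its serialized name.
relu_fn = tf.keras.activations.deserialize('relu')
print(relu_fn(tf.constant([-1.0, 2.0])).numpy())  # [0. 2.]

# A user-defined activation has to be listed in `custom_objects`
# so that deserialization can resolve its name.
def clipped_linear(x):  # hypothetical custom activation
    return tf.clip_by_value(x, -1.0, 1.0)

custom_fn = tf.keras.activations.deserialize(
    'clipped_linear', custom_objects={'clipped_linear': clipped_linear})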

Tensor elu(IGraphNodeBase x, double alpha)

Exponential linear unit.
Parameters
IGraphNodeBase x
A tensor or variable to compute the activation function for.
double alpha
A scalar, slope of negative section.
Returns
Tensor
A tensor.
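
A minimal sketch of `elu` behaviour (using the Python `tf.keras` API, as in the other examples on this page); `alpha` scales the saturated negative branch, and the printed values are approximate.

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 1.0])
# alpha * (exp(x) - 1) for x < 0, x unchanged otherwise.
print(tf.keras.activations.elu(x, alpha=1.0).numpy())
# approximately [-0.8647 -0.3935  0.      1.    ]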

object elu_dyn(object x, ImplicitContainer<T> alpha)

Exponential linear unit.
Parameters
object x
A tensor or variable to compute the activation function for.
ImplicitContainer<T> alpha
A scalar, slope of negative section.
Returns
object
A tensor.

object exponential(IGraphNodeBase x)

Exponential activation function.
Parameters
IGraphNodeBase x
Input tensor.
Returns
object
The exponential activation: `exp(x)`.
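
For illustration, a small Python sketch (output values are approximate):

import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
print(tf.keras.activations.exponential(x).numpy())
# approximately [0.3679 1.     2.7183]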

object exponential_dyn(object x)

Exponential activation function.
Parameters
object x
Input tensor.
Returns
object
The exponential activation: `exp(x)`.

object get(PythonFunctionContainer identifier)

object hard_sigmoid(IGraphNodeBase x)

Segment-wise linear approximation of sigmoid.

Faster than sigmoid. Returns `0.` if `x < -2.5`, `1.` if `x > 2.5`. In `-2.5 <= x <= 2.5`, returns `0.2 * x + 0.5`.
Parameters
IGraphNodeBase x
A tensor or variable.
Returns
object
A tensor.
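
A small Python sketch of the piecewise definition above (output clipped below `-2.5` and above `2.5`, linear in between):

import tensorflow as tf

x = tf.constant([-3.0, -2.5, 0.0, 1.0, 3.0])
print(tf.keras.activations.hard_sigmoid(x).numpy())
# [0.  0.  0.5 0.7 1. ]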

object hard_sigmoid_dyn(object x)

Segment-wise linear approximation of sigmoid.

Faster than sigmoid. Returns `0.` if `x < -2.5`, `1.` if `x > 2.5`. In `-2.5 <= x <= 2.5`, returns `0.2 * x + 0.5`.
Parameters
object x
A tensor or variable.
Returns
object
A tensor.

Tensor linear(IGraphNodeBase x)

Linear activation function.
Parameters
IGraphNodeBase x
Input tensor.
Returns
Tensor
The linear activation: `x`.
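
Since `linear` is simply the identity, it is mostly useful as an explicit pass-through activation, for example on a regression output layer. A brief Python sketch:

import tensorflow as tf

x = tf.constant([-1.0, 0.0, 2.5])
print(tf.keras.activations.linear(x).numpy())  # [-1.   0.   2.5]

# Commonly left implicit, e.g. as the output activation of a regression head:
regression_head = tf.keras.layers.Dense(1, activation='linear')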

object linear_dyn(object x)

Linear activation function.
Parameters
object x
Input tensor.
Returns
object
The linear activation: `x`.

Tensor relu(IGraphNodeBase x, double alpha, object max_value, int threshold)

Rectified Linear Unit.

With default values, it returns element-wise `max(x, 0)`.

Otherwise, it follows: `f(x) = max_value` for `x >= max_value`, `f(x) = x` for `threshold <= x < max_value`, `f(x) = alpha * (x - threshold)` otherwise.
Parameters
IGraphNodeBase x
A tensor or variable.
double alpha
A scalar, slope of negative section (default=`0.`).
object max_value
float. Saturation threshold.
int threshold
float. Threshold value for thresholded activation.
Returns
Tensor
A tensor.
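
A hedged Python sketch of how `alpha`, `max_value`, and `threshold` change the piecewise behaviour described above (printed values are approximate):

import tensorflow as tf

x = tf.constant([-3.0, -1.0, 0.5, 2.0, 10.0])

# Default: element-wise max(x, 0).
print(tf.keras.activations.relu(x).numpy())  # [ 0.   0.   0.5  2.  10. ]

# Leaky negative slope, saturation at 6, threshold at 1:
# f(x) = 6 for x >= 6, x for 1 <= x < 6, 0.1 * (x - 1) otherwise.
print(tf.keras.activations.relu(x, alpha=0.1, max_value=6.0, threshold=1.0).numpy())
# approximately [-0.4  -0.2  -0.05  2.    6.  ]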

object relu_dyn(object x, ImplicitContainer<T> alpha, object max_value, ImplicitContainer<T> threshold)

Rectified linear unit.

With default values, it returns element-wise `max(x, 0)`.

Otherwise, it follows: `f(x) = max_value` for `x >= max_value`, `f(x) = x` for `threshold <= x < max_value`, `f(x) = alpha * (x - threshold)` otherwise.
Parameters
object x
A tensor or variable.
ImplicitContainer<T> alpha
A scalar, slope of negative section (default=`0.`).
object max_value
float. Saturation threshold.
ImplicitContainer<T> threshold
float. Threshold value for thresholded activation.
Returns
object
A tensor.

object selu(IGraphNodeBase x)

Scaled Exponential Linear Unit (SELU).

The Scaled Exponential Linear Unit (SELU) activation function is: `scale * x` if `x > 0` and `scale * alpha * (exp(x) - 1)` if `x < 0`, where `alpha` and `scale` are pre-defined constants (`alpha = 1.67326324` and `scale = 1.05070098`). The SELU activation multiplies `scale` (> 1) with the output of the [elu](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/activations/elu) (Exponential Linear Unit, ELU) activation to ensure a slope larger than one for positive net inputs.

The values of `alpha` and `scale` are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see [`lecun_normal` initialization](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal)) and the number of inputs is "large enough" (see references for more information).

![](https://cdn-images-1.medium.com/max/1600/1*m0e8lZU_Zrkh4ESfQkY2Pw.png) (Courtesy: Blog on Towards DataScience at https://towardsdatascience.com/selu-make-fnns-great-again-snn-8d61526802a9)

Example Usage:
Parameters
IGraphNodeBase x
A tensor or variable to compute the activation function for.
Returns
object
The scaled exponential unit activation: `scale * elu(x, alpha)`.

Note: to be used together with the [lecun_normal](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal) initialization and the dropout variant [AlphaDropout](https://www.tensorflow.org/api_docs/python/tf/keras/layers/AlphaDropout).

References: [Self-Normalizing Neural Networks (Klambauer et al., 2017)](https://arxiv.org/abs/1706.02515)
Show Example
from tensorflow.keras import models
from tensorflow.keras.layers import Dense

n_classes = 10  # 10-class problem
model = models.Sequential()
model.add(Dense(64, kernel_initializer='lecun_normal', activation='selu',
                input_shape=(28, 28, 1)))
model.add(Dense(32, kernel_initializer='lecun_normal', activation='selu'))
model.add(Dense(16, kernel_initializer='lecun_normal', activation='selu'))
model.add(Dense(n_classes, activation='softmax'))

object selu_dyn(object x)

Scaled Exponential Linear Unit (SELU).

The Scaled Exponential Linear Unit (SELU) activation function is: `scale * x` if `x > 0` and `scale * alpha * (exp(x) - 1)` if `x < 0`, where `alpha` and `scale` are pre-defined constants (`alpha = 1.67326324` and `scale = 1.05070098`). The SELU activation multiplies `scale` (> 1) with the output of the [elu](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/activations/elu) (Exponential Linear Unit, ELU) activation to ensure a slope larger than one for positive net inputs.

The values of `alpha` and `scale` are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see [`lecun_normal` initialization](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal)) and the number of inputs is "large enough" (see references for more information).

![](https://cdn-images-1.medium.com/max/1600/1*m0e8lZU_Zrkh4ESfQkY2Pw.png) (Courtesy: Blog on Towards DataScience at https://towardsdatascience.com/selu-make-fnns-great-again-snn-8d61526802a9)

Example Usage:
Parameters
object x
A tensor or variable to compute the activation function for.
Returns
object
The scaled exponential unit activation: `scale * elu(x, alpha)`.

Note: to be used together with the [lecun_normal](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal) initialization and the dropout variant [AlphaDropout](https://www.tensorflow.org/api_docs/python/tf/keras/layers/AlphaDropout).

References: [Self-Normalizing Neural Networks (Klambauer et al., 2017)](https://arxiv.org/abs/1706.02515)
Show Example
from tensorflow.keras import models
from tensorflow.keras.layers import Dense

n_classes = 10  # 10-class problem
model = models.Sequential()
model.add(Dense(64, kernel_initializer='lecun_normal', activation='selu',
                input_shape=(28, 28, 1)))
model.add(Dense(32, kernel_initializer='lecun_normal', activation='selu'))
model.add(Dense(16, kernel_initializer='lecun_normal', activation='selu'))
model.add(Dense(n_classes, activation='softmax'))

object serialize(string activation)

object serialize(IDictionary<object, object> activation)
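
A short Python sketch of the serialize/deserialize round trip (as elsewhere on this page, this uses the Python `tf.keras` API that the binding wraps):

import tensorflow as tf

# Serializing a built-in activation yields its name.
name = tf.keras.activations.serialize(tf.keras.activations.softplus)
print(name)  # 'softplus'

# The name deserializes back to the original function.
fn = tf.keras.activations.deserialize(name)
print(fn is tf.keras.activations.softplus)  # expected: True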

Tensor softmax(IGraphNodeBase x, int axis)

Softmax of a tensor.
Parameters
IGraphNodeBase x
A tensor or variable.
int axis
The dimension along which the softmax is computed. The default is -1, which indicates the last dimension.
Returns
Tensor
A tensor.
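
A Python sketch showing that softmax normalises along the last axis by default, and along a chosen axis when `axis` is given (printed values are approximate):

import tensorflow as tf

logits = tf.constant([[1.0, 2.0, 3.0],
                      [1.0, 1.0, 1.0]])

# Default axis=-1: each row sums to 1.
print(tf.keras.activations.softmax(logits).numpy())
# approximately [[0.090 0.245 0.665]
#                [0.333 0.333 0.333]]

# axis=0: each column sums to 1 instead.
print(tf.keras.activations.softmax(logits, axis=0).numpy())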

object softmax_dyn(object x, ImplicitContainer<T> axis)

Softmax of a tensor.
Parameters
object x
A tensor or variable.
ImplicitContainer<T> axis
The dimension along which the softmax is computed. The default is -1, which indicates the last dimension.
Returns
object
A tensor.

Tensor softplus(IGraphNodeBase x)

Softplus activation function.
Parameters
IGraphNodeBase x
Input tensor.
Returns
Tensor
The softplus activation: `log(exp(x) + 1)`.
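
A small Python sketch; softplus is a smooth approximation of relu and approaches `max(x, 0)` for large `|x|` (printed values are approximate):

import tensorflow as tf

x = tf.constant([-10.0, -1.0, 0.0, 1.0, 10.0])
print(tf.keras.activations.softplus(x).numpy())
# approximately [0.0000454 0.3133 0.6931 1.3133 10.0000454]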

object softplus_dyn(object x)

Softplus of a tensor.
Parameters
object x
A tensor or variable.
Returns
object
A tensor.

Tensor softsign(IGraphNodeBase x)

Softsign activation function.
Parameters
IGraphNodeBase x
Input tensor.
Returns
Tensor
The softsign activation: `x / (abs(x) + 1)`.
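
A small Python sketch; softsign saturates toward -1 and 1 like tanh, but only polynomially fast (printed values are approximate):

import tensorflow as tf

x = tf.constant([-10.0, -1.0, 0.0, 1.0, 10.0])
print(tf.keras.activations.softsign(x).numpy())
# approximately [-0.909 -0.5  0.  0.5  0.909]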

object softsign_dyn(object x)

Softsign of a tensor.
Parameters
object x
A tensor or variable.
Returns
object
A tensor.

object tanh(IGraphNodeBase x)

Hyperbolic Tangent (tanh) activation function.
Parameters
IGraphNodeBase x
Input tensor.
Returns
object
A tensor of same shape and dtype of input `x`. The tanh activation: `tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x)))`.
Show Example
import tensorflow as tf

# Constant 1-D tensor populated with a value list.
a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
b = tf.keras.activations.tanh(a)  # [-0.9950547 -0.7615942  0.  0.7615942  0.9950547]

Public properties

PythonFunctionContainer deserialize_fn get;

PythonFunctionContainer exponential_fn get;

PythonFunctionContainer hard_sigmoid_fn get;

PythonFunctionContainer linear_fn get;

PythonFunctionContainer serialize_fn get;

PythonFunctionContainer sigmoid_fn get;

PythonFunctionContainer softmax_fn get;

PythonFunctionContainer softplus_fn get;

PythonFunctionContainer softsign_fn get;