Last Updated on July 6, 2022
Activation functions play an integral role in neural networks by introducing non-linearity. This non-linearity allows neural networks to develop complex representations and functions of the inputs that would not be possible with a simple linear regression model.
Many different non-linear activation functions have been proposed throughout the history of neural networks. In this post, we will explore three popular ones: sigmoid, tanh, and ReLU.
After reading this article, you will learn:
- Why non-linearity is important in a neural network
- How different activation functions can contribute to the vanishing gradient problem
- Sigmoid, tanh, and ReLU activation functions
- How to use different activation functions in your TensorFlow model
Let's get started.

Using Activation Functions in TensorFlow
Photo by Victor Freitas. Some rights reserved.
Overview
This article is split into five sections; they are:
- Why do we need nonlinear activation functions
- Sigmoid function and vanishing gradient
- Hyperbolic tangent function
- Rectified Linear Unit (ReLU)
- Using the activation functions in practice
Why Do We Need Nonlinear Activation Functions
You might be wondering: why all this hype about non-linear activation functions? Why can't we just use an identity function after the weighted linear combination of activations from the previous layer? The reason is that stacking multiple linear layers is essentially the same as using a single linear layer. This can be seen through a simple example. Let's say we have a neural network with one hidden layer containing two hidden neurons.
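As a minimal sketch of this argument (the matrices $W_1$, $W_2$ and biases $b_1$, $b_2$ below are introduced here just for illustration): if the hidden layer is linear and computes $h = W_1 x + b_1$, and the output layer computes $y = W_2 h + b_2$, then substituting one into the other gives $y = W_2(W_1 x + b_1) + b_2 = (W_2 W_1)x + (W_2 b_1 + b_2)$, which is again a single linear layer with weight matrix $W_2 W_1$ and bias $W_2 b_1 + b_2$.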
In other words, with a linear hidden layer, we can rewrite the output as a linear combination of the original input variables. If we had more neurons and weights, the equation would be much longer, with more nesting and more multiplications between successive layer weights, but the idea remains the same: we can represent the entire network as a single linear layer. To make the network represent more complex functions, we need nonlinear activation functions. Let's start with a popular example, the sigmoid function.
Sigmoid Function and Vanishing Gradient
The sigmoid activation function is a popular choice of non-linear activation function for neural networks. One reason for its popularity is that its output values lie between 0 and 1, mimicking probability values, so it is often used to convert the real-valued output of a linear layer into a probability. This has also made it an important part of logistic regression methods, which can be used directly for binary classification.
The sigmoid function is commonly denoted by $\sigma$ and has the form $\sigma(x) = \frac{1}{1 + e^{-x}}$. In TensorFlow, we can call the sigmoid function from the Keras library as follows:
import tensorflow as tf
from tensorflow.keras.activations import sigmoid

input_array = tf.constant([-1, 0, 1], dtype=tf.float32)
print(sigmoid(input_array))
This gives us the output:
tf.Tensor([0.26894143 0.5        0.7310586 ], shape=(3,), dtype=float32)
We can also plot the sigmoid function as a function of $x$.
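The original figure is not reproduced here, but as a rough sketch of how such a plot could be generated (this assumes numpy and matplotlib are available; they are not used elsewhere in this post):

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.activations import sigmoid

# Evaluate the sigmoid over a range of inputs and plot the curve
x = tf.constant(np.linspace(-10, 10, 200), dtype=tf.float32)
plt.plot(x.numpy(), sigmoid(x).numpy())
plt.xlabel("x")
plt.ylabel("sigmoid(x)")
plt.title("Sigmoid function")
plt.show()

The resulting curve is the familiar S-shape, rising from 0 towards 1.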
When looking at the activation function for the neurons in a neural network, we should also be interested in its derivative, because backpropagation and the chain rule determine how the neural network learns from data.
Here, we can observe that the gradient of the sigmoid function is always between 0 and 0.25, and as $x$ tends to positive or negative infinity, the gradient tends to zero. This can contribute to the vanishing gradient problem: when the inputs have a large magnitude of $x$ (e.g., due to the output from earlier layers), the gradient is too small to drive any meaningful correction.
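As a quick, illustrative check of this claim (not part of the original post), we can use tf.GradientTape to evaluate the derivative of the sigmoid at a few points:

import tensorflow as tf
from tensorflow.keras.activations import sigmoid

# Differentiate the sigmoid at several inputs using automatic differentiation
x = tf.constant([-10.0, -2.0, 0.0, 2.0, 10.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = sigmoid(x)
print(tape.gradient(y, x))

The largest value, 0.25, occurs at $x = 0$, and the gradient is essentially zero once the magnitude of $x$ is large.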
Vanishing gradients are a problem because we use the chain rule in backpropagation in deep neural networks. Recall that in neural networks, the gradient (of the loss function) at each layer is the gradient at the subsequent layer multiplied by the gradient of its activation function. Since there are many layers in the network, if the gradients of the activation functions are less than 1, the gradient at a layer far away from the output will be close to zero; for example, ten layers each contributing a factor of at most 0.25 give a combined factor of at most $0.25^{10} \approx 10^{-6}$. Any layer with a gradient close to zero will stop the gradient from propagating further back to the earlier layers.
Since the gradient of the sigmoid function is always less than 1, a network with more layers would exacerbate the vanishing gradient problem. Furthermore, there is a saturation region, where the gradient of the sigmoid tends to 0, namely wherever the magnitude of $x$ is large. So, if the output of the weighted sum of activations from previous layers is large, we would have a very small gradient propagating through this neuron, because the derivative of the activation $a$ with respect to the input of the activation function would be small (in the saturation region).
Granted, there is also the derivative of the linear term with respect to the previous layer's activations, which can be greater than 1 for the layer, since the weights can be large and the gradient is a sum of contributions from the different neurons. Nonetheless, this may still raise concern at the start of training, when weights are usually initialized to be small.
Hyperbolic Tangent Function
Another activation function to consider is the tanh activation function, also known as the hyperbolic tangent function. It has a larger range of output values than the sigmoid function and a larger maximum gradient. The tanh function is the hyperbolic analogue of the ordinary tangent function for circles that most people are familiar with.
Plotting out the tanh function:
Let's look at its gradient as well:
Notice that the gradient now has a maximum value of 1, compared to the sigmoid function, where the largest gradient value is 0.25. This makes a network with tanh activations less susceptible to the vanishing gradient problem. However, the tanh function also has a saturation region, where the value of the gradient tends towards 0 as the magnitude of the input $x$ gets larger.
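To make this concrete, here is a small illustrative check (not from the original post) that evaluates tanh and its analytic derivative $1 - \tanh^2(x)$ at a few points:

import tensorflow as tf
from tensorflow.keras.activations import tanh

# tanh outputs lie in (-1, 1); its derivative 1 - tanh(x)^2 peaks at 1 when x = 0
x = tf.constant([-5.0, -1.0, 0.0, 1.0, 5.0])
y = tanh(x)
print(y)
print(1 - y**2)  # close to 0 in the saturation region, exactly 1 at x = 0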
In TensorFlow, we can apply the tanh activation to a tensor using the tanh function in Keras' activations module:
import tensorflow as tf
from tensorflow.keras.activations import tanh

input_array = tf.constant([-1, 0, 1], dtype=tf.float32)
print(tanh(input_array))
which gives the output:
tf.Tensor([-0.7615942  0.         0.7615942], shape=(3,), dtype=float32)
Rectified Linear Unit (ReLU)
The last activation function we will look at in detail is the Rectified Linear Unit, popularly known as ReLU. It has become popular recently because of its relatively simple computation, which helps speed up neural networks, and because it tends to perform well empirically, making it a good default choice of activation function.
The ReLU function is simply $\max(0, x)$, which can also be thought of as a piecewise function in which all inputs less than 0 map to 0 and all inputs greater than or equal to 0 map back to themselves (i.e., the identity function). Graphically:
Next, we can also look at the gradient of the ReLU function:
Notice that the gradient of ReLU is 1 whenever the input is positive, which helps address the vanishing gradient problem. However, whenever the input is negative, the gradient is 0, which can cause another problem: the dead neuron (or dying ReLU) problem, in which a neuron remains persistently inactive. In that case, the neuron can never learn and its weights are never updated, because by the chain rule it contributes a 0 gradient as one of the factors. If this happens for every sample in your dataset, it becomes very difficult for this neuron to learn from your dataset unless the activations in the previous layer change so that the neuron is no longer "dead".
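To illustrate this 1-or-0 gradient behavior, here is a minimal sketch (illustrative only, not from the original post) using tf.GradientTape:

import tensorflow as tf
from tensorflow.keras.activations import relu

# The gradient of ReLU is 1 for positive inputs and 0 for negative inputs
x = tf.constant([-2.0, -0.5, 0.5, 2.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = relu(x)
print(tape.gradient(y, x))

A neuron whose pre-activation is negative for every sample therefore receives no gradient signal at all, which is exactly the dying ReLU situation described above.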
To use the ReLU activation in TensorFlow:
import tensorflow as tf
from tensorflow.keras.activations import relu

input_array = tf.constant([-1, 0, 1], dtype=tf.float32)
print(relu(input_array))
which gives us the output:
tf.Tensor([0. 0. 1.], shape=(3,), dtype=float32)
Looking across the three activation functions reviewed above, we see that they are all monotonically increasing functions. This is required; otherwise, we could not apply the gradient descent algorithm.
Now that we have explored some common activation functions and how to use them in TensorFlow, let's take a look at how we can use them in practice in an actual model.
Using Activation Functions in Practice
Before we explore the use of activation functions in practice, let's look at another common way of combining an activation function with a Keras layer. Say we want to add a ReLU activation on top of a Dense layer. One way to do this, following the approach shown above, is:
x = Dense(units=10)(input_layer)
x = relu(x)
However, for many Keras layers, we can also use a more compact notation to add the activation on top of the layer:
x = Dense(units=10, activation="relu")(input_layer)
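As a side note (this goes slightly beyond the post, but it is a standard part of the Keras API), the same result can also be written with the tf.keras.layers.Activation layer, which makes the activation show up as its own layer in the model summary:

from tensorflow.keras.layers import Dense, Activation

x = Dense(units=10)(input_layer)
x = Activation("relu")(x)

All of these forms compute the same thing; which one to use is mostly a matter of readability.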
Using the compact notation, let's now build our LeNet5 model using Keras:
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.layers import Dense, Input, Flatten, Conv2D, BatchNormalization, MaxPool2D
from tensorflow.keras.models import Model

# Load the CIFAR-10 dataset
(trainX, trainY), (testX, testY) = keras.datasets.cifar10.load_data()

# LeNet5-style model with ReLU activations on the hidden layers and a softmax output
input_layer = Input(shape=(32,32,3,))
x = Conv2D(filters=6, kernel_size=(5,5), padding="same", activation="relu")(input_layer)
x = MaxPool2D(pool_size=(2,2))(x)
x = Conv2D(filters=16, kernel_size=(5,5), padding="same", activation="relu")(x)
x = MaxPool2D(pool_size=(2,2))(x)
x = Conv2D(filters=120, kernel_size=(5,5), padding="same", activation="relu")(x)
x = Flatten()(x)
x = Dense(units=84, activation="relu")(x)
x = Dense(units=10, activation="softmax")(x)

model = Model(inputs=input_layer, outputs=x)

print(model.summary())

model.compile(optimizer="adam", loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics="acc")

history = model.fit(x=trainX, y=trainY, batch_size=256, epochs=10, validation_data=(testX, testY))
Running this code gives us the following output:
Model: "model"
_________________________________________________________________
 Layer (type)                   Output Shape              Param #
=================================================================
 input_1 (InputLayer)           [(None, 32, 32, 3)]       0

 conv2d (Conv2D)                (None, 32, 32, 6)         456

 max_pooling2d (MaxPooling2D)   (None, 16, 16, 6)         0

 conv2d_1 (Conv2D)              (None, 16, 16, 16)        2416

 max_pooling2d_1 (MaxPooling2D) (None, 8, 8, 16)          0

 conv2d_2 (Conv2D)              (None, 8, 8, 120)         48120

 flatten (Flatten)              (None, 7680)              0

 dense (Dense)                  (None, 84)                645204

 dense_1 (Dense)                (None, 10)                850

=================================================================
Total params: 697,046
Trainable params: 697,046
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/10
196/196 [==============================] - 14s 11ms/step - loss: 2.9758 - acc: 0.3390 - val_loss: 1.5530 - val_acc: 0.4513
Epoch 2/10
196/196 [==============================] - 2s 8ms/step - loss: 1.4319 - acc: 0.4927 - val_loss: 1.3814 - val_acc: 0.5106
Epoch 3/10
196/196 [==============================] - 2s 8ms/step - loss: 1.2505 - acc: 0.5583 - val_loss: 1.3595 - val_acc: 0.5170
Epoch 4/10
196/196 [==============================] - 2s 8ms/step - loss: 1.1127 - acc: 0.6094 - val_loss: 1.2892 - val_acc: 0.5534
Epoch 5/10
196/196 [==============================] - 2s 8ms/step - loss: 0.9763 - acc: 0.6594 - val_loss: 1.3228 - val_acc: 0.5513
Epoch 6/10
196/196 [==============================] - 2s 8ms/step - loss: 0.8510 - acc: 0.7017 - val_loss: 1.3953 - val_acc: 0.5494
Epoch 7/10
196/196 [==============================] - 2s 8ms/step - loss: 0.7361 - acc: 0.7426 - val_loss: 1.4123 - val_acc: 0.5488
Epoch 8/10
196/196 [==============================] - 2s 8ms/step - loss: 0.6060 - acc: 0.7894 - val_loss: 1.5356 - val_acc: 0.5435
Epoch 9/10
196/196 [==============================] - 2s 8ms/step - loss: 0.5020 - acc: 0.8265 - val_loss: 1.7801 - val_acc: 0.5333
Epoch 10/10
196/196 [==============================] - 2s 8ms/step - loss: 0.4013 - acc: 0.8605 - val_loss: 1.8308 - val_acc: 0.5417
And that is how we can use different activation functions in our TensorFlow models!
Further Reading
Other examples of activation functions:
Summary
In this post, you have seen why activation functions are important for building the complex neural networks that are common in deep learning today. You have also seen some popular activation functions, their derivatives, and how to integrate them into your TensorFlow models.
Specifically, you learned:
- Why non-linearity is important in a neural network
- How different activation functions can contribute to the vanishing gradient problem
- Sigmoid, tanh, and ReLU activation functions
- How to use different activation functions in your TensorFlow model