The Basic Weight Calculation Behind Deep Learning

Understanding the weight calculation with dot products and biases behind deep learning

ade sueb
2 min read · Sep 23, 2020
X = features, w = weights, b = bias, and Y = label

Weight

First of all, if you use Keras as your library, Keras initializes the weights with random values. These weights are multiplied by the features supplied during training, and all the results are summed.

The weight values are optimized during the training process by the optimizer, depending on the result of each epoch.
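A minimal sketch of this, assuming the TensorFlow 2.x Keras API: a single Dense layer whose weights start out random (Glorot uniform by default). The layer sizes here are just illustrative choices.

```python
from tensorflow import keras

# One Dense layer: 1 output, 3 input features (illustrative sizes).
model = keras.Sequential([
    keras.layers.Dense(1, input_shape=(3,)),
])

weights, bias = model.layers[0].get_weights()
print(weights)  # random initial weights, shape (3, 1)
print(bias)     # the bias, initialized to zero by default
```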

Dot Product

The calculation between the weights and the features uses the dot product.

Why the dot product? Because it is a very simple way to measure the similarity between two vectors.

As we can see in the image above, if two unit vectors point in a similar direction, the result of the dot product is between 0 and 1; if they point in opposite directions, it is between -1 and 0.

The more similar the two vectors are, the higher the number. And if the two vectors are identical, the value is exactly 1.
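A quick sketch in NumPy of the dot product as a similarity score, using a few hypothetical unit vectors:

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.8, 0.6])   # unit vector pointing in a similar direction
c = np.array([-1.0, 0.0])  # unit vector pointing the opposite way

print(np.dot(a, a))  # 1.0  -> identical vectors
print(np.dot(a, b))  # 0.8  -> similar direction, between 0 and 1
print(np.dot(a, c))  # -1.0 -> opposite direction, between -1 and 0
```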

The idea also comes from the best-fit line, which we call linear regression. When linear regression has multiple input dimensions, the simple formula mx + b becomes the dot-product formula.
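To make that concrete, here is a small sketch of the generalization, with made-up feature and weight values: in one dimension the output is m*x + b, and with several features it becomes Y = w · X + b.

```python
import numpy as np

X = np.array([2.0, 3.0, 5.0])   # three input features (hypothetical values)
w = np.array([0.4, -0.1, 0.2])  # one weight per feature
b = 0.5                         # bias

# Multiply each feature by its weight, sum, then add the bias.
Y = np.dot(w, X) + b            # 2*0.4 + 3*(-0.1) + 5*0.2 + 0.5 = 2.0
print(Y)
```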

Biases

Since the weight values are random, the weight calculation (dot product) sometimes gives a very high result, outside the expected range. The next epochs then give us more and more wrong values, and the training fails. Biases can help to fix that.

Some reasons why we use bias values:

  • To keep the error value from growing ever higher.
  • To make the calculation affine rather than strictly linear, so the line does not have to pass through the origin.
  • To bring the calculation back on track.

In the Keras library, the default initial bias value is zero, and it changes during the training process depending on the training result.
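A minimal sketch of that default, again assuming TensorFlow 2.x Keras; the explicit `bias_initializer` argument below only makes the default visible:

```python
from tensorflow import keras

# Dense layers use bias_initializer="zeros" by default.
layer = keras.layers.Dense(1, bias_initializer="zeros")
layer.build(input_shape=(None, 3))  # 3 input features, illustrative

kernel, bias = layer.get_weights()
print(bias)  # [0.] -- the optimizer updates this value during training
```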

As we can see in the first image above, the bias value is added after the dot-product calculation.

Biases shift the line; they do not change its slope.

