Artificial Neuron in neural networks

Hirunika Karunathilaka
3 min read · Nov 19, 2019


In this chapter, let’s get to know the basic building block of a neural network, which is nothing but the “Artificial Neuron”. Along the way, we’ll also see what connection weights, biases and activation functions are :-D .

Basically, an artificial neuron is a computational unit that performs a calculation based on the inputs connected to it.

An artificial neuron is connected to some input description from which we want to extract information. As in the diagram above, the inputs directly connected to the neuron are x1, x2, x3, …, xn, along with a constant value b (for now, just ignore this value). In other words, there are n features (or dimensions) of input information, and the neuron performs a calculation on them to produce some new output information.

The two stages of the neuron’s computation are:

Neuron Pre-activation (input activation)

Pre-activation equation: a(X) = b + Σᵢ wᵢxᵢ = b + WᵀX

X is the vector comprising the set of scalar input features (x1, x2, x3, …); xi is the i-th element of the input vector X.

W is the vector of connection weights; each wi represents the strength of the connection between the input scalar xi and the neuron.

b is called the bias, and it is usually a constant value we assign before training a network. If a neuron receives no inputs, the pre-activation value a(X) defaults to the value of the bias. As input values are observed, a(X) moves away from this initial value. In this sense, the bias acts as a constant offset that helps the model fit the given data.

Basically, a(X) is the sum of the scalar bias b and the dot product of the weight vector W and the input vector X. From this pre-activation value a(X), the neuron then calculates its output value h(X).
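To make the pre-activation concrete, here is a minimal sketch in NumPy. The feature, weight and bias values below are made up purely for illustration; any three-element vectors would do.

```python
import numpy as np

# Hypothetical example values (not from the article): three input features
X = np.array([0.5, -1.2, 3.0])   # input vector (x1, x2, x3)
W = np.array([0.4, 0.1, -0.7])   # connection weights (w1, w2, w3)
b = 0.2                          # bias

# Pre-activation: a(X) = b + sum_i(w_i * x_i) = b + W . X
a = b + np.dot(W, X)
```

With no inputs (or all-zero inputs), the dot product vanishes and a(X) is just b, matching the bias behaviour described above.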

Neuron (output) activation

Output activation equation: h(X) = g(a(X))

The output activation value h(X) is calculated by passing the a(X) value through an activation function g.

Basically, activation functions can be divided into two types:

  1. Linear or Identity Activation Function
  2. Non-linear Activation Functions

Some commonly used non-linear activation functions are sigmoid, tanh, ReLU, etc. We’ll talk more about these in the next chapter.
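As a quick preview of those non-linear functions, here is one possible definition of each in NumPy (a sketch, not the only formulation):

```python
import numpy as np

def sigmoid(z):
    # Maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Maps any real value into (-1, 1)
    return np.tanh(z)

def relu(z):
    # Zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, z)
```

For example, at z = 0 the sigmoid gives 0.5 and tanh gives 0, while ReLU simply passes positive values through unchanged.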

That’s it for this chapter. Enjoy!!!
