Implementation of Linear Regression

G Vishnu Vardhan Reddy
5 min read · Jun 19, 2022


Photo by Pietro Jeng on Unsplash

Hi everyone, in this post I will go into detail about the practical implementation of linear regression, both by coding the mathematics from scratch and by using the Sklearn library in Python.

Before diving into the details, here is a short introduction to linear regression from my previous blog.

Linear regression is a supervised learning algorithm that models the relationship between the independent and dependent variables of the given data and predicts the output as continuous values.

This blog mainly focuses on the mathematical implementation of linear regression, using Python as the programming language.

Without wasting time, let’s dive into the mathematical implementation of linear regression.

Mathematical implementation of the algorithm

Photo by ThisisEngineering RAEng on Unsplash

Don’t be scared of the maths; I will keep the topic simple and present it in an easily understandable form 😁😃.

Let’s create a machine learning model to learn a linear equation y=2x+3 using linear regression.

The above equation is of the form y=w*x+b, where w and b are parameters and x is the input variable.

Generating w and b values

We can initialize the weight (w) and bias (b) with zero or with random values. Here I’m initializing them with random values using the random.randn() function from the NumPy library.

numpy.random.randn returns a sample (or samples) from the “standard normal” distribution.

initialize() is a user-defined function which initializes w and b with random values and returns both.

import numpy as np

def initialize():
    w=np.random.randn()
    b=np.random.randn()
    return w,b

Hypothesis function

Once the w and b values are generated, it’s time to create the hypothesis function, which is defined as h=w*x+b and returns the value of h. To make this possible, let’s create a user-defined function called hypothesis() which takes x, w and b as arguments.

def hypothesis(x,w,b):
    h=w*x+b
    return h
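Because the arithmetic inside hypothesis() is element-wise, NumPy broadcasting lets the same function score a whole array of inputs at once. A quick sanity check, using the target values w=2 and b=3 from our equation:

```python
import numpy as np

def hypothesis(x, w, b):
    h = w * x + b   # broadcasts over every element when x is an array
    return h

h = hypothesis(np.array([0, 1, 2]), 2.0, 3.0)
print(h)  # [3. 5. 7.], exactly y = 2x + 3
```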

Calculating the cost

The hypothesis function returns the current predictions of the linear regression model, but they may be wrong, so we need to correct the algorithm so that it moves along the right path towards good predictions. To make this possible, we need a measure of the loss. Here we’re using the mean squared error to calculate the loss, implemented in a user-defined function named cost_func which takes the size of the input dataset (m), the predicted values (h) and the original values (y) as arguments.

def cost_func(m,h,y):
    J=(1/(2*m))*np.sum((h-y)**2)
    return J
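To build some intuition for this cost: perfect predictions give a cost of 0, and predictions that are each off by exactly 1 give a cost of 1/2, since J = (1/(2m))·m = 1/2. A quick check:

```python
import numpy as np

def cost_func(m, h, y):
    J = (1 / (2 * m)) * np.sum((h - y) ** 2)
    return J

y = np.array([3.0, 5.0, 7.0])
print(cost_func(3, y.copy(), y))   # 0.0 (perfect predictions)
print(cost_func(3, y + 1.0, y))    # 0.5 (each prediction off by 1)
```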

Gradient Descent — correcting the error

The gradient descent algorithm tries to decrease the cost on the data, to achieve greater accuracy.

For more details about gradient descent, please refer to my previous blog on linear regression

Linear Regression in detail

Let’s create a user-defined gradient_descent() function which takes h, x, m and y as arguments, calculates the gradients and returns them.

def gradient_descent(h,x,m,y):
    dw=(1/m)*np.sum((h-y)*x)
    db=(1/m)*np.sum(h-y)
    return dw,db
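Gradient formulas are easy to get subtly wrong (for example, np.sum(h-y)*x multiplies by x after summing, which yields an array rather than a scalar), so a useful habit is to check the analytic gradients against finite differences of the cost. A minimal sketch, reusing the functions above:

```python
import numpy as np

def hypothesis(x, w, b):
    return w * x + b

def cost_func(m, h, y):
    return (1 / (2 * m)) * np.sum((h - y) ** 2)

def gradient_descent(h, x, m, y):
    dw = (1 / m) * np.sum((h - y) * x)
    db = (1 / m) * np.sum(h - y)
    return dw, db

x = np.arange(5, dtype=float)
y = 2 * x + 3
w, b, m, eps = 0.5, -0.5, len(x), 1e-6

dw, db = gradient_descent(hypothesis(x, w, b), x, m, y)
# central finite differences of the cost with respect to w and b
dw_num = (cost_func(m, hypothesis(x, w + eps, b), y)
          - cost_func(m, hypothesis(x, w - eps, b), y)) / (2 * eps)
db_num = (cost_func(m, hypothesis(x, w, b + eps), y)
          - cost_func(m, hypothesis(x, w, b - eps), y)) / (2 * eps)
print(abs(dw - dw_num), abs(db - db_num))  # both tiny, so the formulas agree
```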

Building the model — embedding all the functions

Photo by La-Rel Easter on Unsplash

Let’s create a user-defined model() function which uses all the functions we created and builds the machine learning model that learns from the data.

Let’s learn more about the model function

def model(x,y,n_iter,learning_rate):
    cost=[]
    w,b=initialize()
    m=len(x)
    for i in range(n_iter):
        h=hypothesis(x,w,b)
        dw,db=gradient_descent(h,x,m,y)
        w-=learning_rate*dw
        b-=learning_rate*db
        cost_value=cost_func(m,h,y)
        cost.append(cost_value)
    return w,b,cost

model is a user-defined function which takes x, y, n_iter and learning_rate as arguments.

n_iter = a hyperparameter which specifies the number of times the loop runs

learning_rate = says how fast the descent should be. If the learning rate (often denoted alpha) is large, then the step size of the descent will be large, and vice versa.

In the above code, we initialize an empty list named cost which stores the cost after each iteration once the for loop starts.

w and b values are initialized using initialize function which we created before.

m is a variable which stores the length of input array (x)

dw and db store the gradient values of the weight and bias, which are calculated by the gradient_descent() function we created before.

w and b values are updated using the following equations

w=w-learning_rate*dw
b=b-learning_rate*db

cost_value stores the cost of the algorithm in that particular iteration and it is appended to the cost array.

Finally, after the loop, the model returns the updated weight, bias and cost list.

Now, it’s time to run the model which we built

Let’s generate the input array using np.arange() function

np.arange(100) returns an array containing the values from 0 to 99.

x=np.arange(100)

Let’s generate the output array

[2*x+3 for x in range(100)] generates the list of values which maps the function 2*x+3

y=np.array([2*x+3 for x in range(100)])
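Since x is already a NumPy array, the same output can be produced without a list comprehension; broadcasting applies 2*x+3 to every element directly:

```python
import numpy as np

x = np.arange(100)
y = 2 * x + 3   # same values as np.array([2*x+3 for x in range(100)])
print(y[0], y[99])  # 3 201
```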

Let n_iter=1000 and learning_rate=0.006

Running the model

w,b,cost=model(x,y,1000,0.006)

For a better analysis of how our model performed, let’s plot a graph of the data in the cost list using the matplotlib library.

Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python.

import matplotlib.pyplot as plt
plt.plot(cost)
plt.show()

The output is as follows:

Loss Graph

We can see that the loss of the model is decreasing. Now, can we say our model has good accuracy?

Is it enough to just create the model? We have to make predictions for new data, right? So, let’s create a function named predict which takes the new input as an argument and returns the prediction.

def predict(x):
    h=w*x+b
    return h

print(predict(10))
# 23.179091533986764

The actual output is 2*10+3=23, and our model’s prediction is very near the original value, so we can say our model also performs well on new data.
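As promised in the introduction, the same model can also be fitted with far less code using Sklearn’s LinearRegression, which solves for the coefficients directly instead of running gradient descent. A minimal sketch (note that scikit-learn expects the features as a 2-D array):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.arange(100).reshape(-1, 1)   # shape (100, 1): one feature per row
y = 2 * np.arange(100) + 3

reg = LinearRegression().fit(x, y)
print(reg.coef_[0], reg.intercept_)  # approximately 2.0 and 3.0
print(reg.predict([[10]]))           # approximately [23.]
```

On this noise-free data the fitted coefficients recover w=2 and b=3 essentially exactly, since ordinary least squares has a closed-form solution.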

Here comes the end of the blog. If you liked it, support it with your claps.

Comment your opinions and thoughts…..

Connect me on LinkedIn — https://www.linkedin.com/in/techyvishnu
