In this tutorial, I will explain how to implement a simple machine learning model in Python from scratch. We will implement the linear regression algorithm to predict the number of bike-sharing users from temperature. It is a simple linear regression with a single input feature, temperature; the output is the number of bike-sharing users.


In this tutorial, I assume you are familiar with the linear regression model. If not, you can learn the fundamentals of linear regression from Andrew Ng’s machine learning lectures (lectures 2.1 – 2.7).

Data Set

For the sake of simplicity, we will use a randomly generated data set. There are 100 samples in this data set, with one input feature (the independent variable, temperature) and one output (the dependent variable, the number of bike-sharing users). The following figure plots the number of bike-sharing users as a function of temperature. The data set is divided into a train set and a test set with a ratio of 80:20.

First of all, import the following libraries: numpy and matplotlib:
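The original listing is not shown here; the imports would look like this:

```python
import numpy as np
import matplotlib.pyplot as plt
```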

Then, the following code defines the train set (x_train and y_train) and test set (x_test and y_test):

Finally, you can plot the train set and test set using matplotlib:
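The original listings for these two steps are not shown here, so the following is a sketch covering both: generating the data set, splitting it 80:20, and plotting it. The generation scheme, seed, and temperature range are assumptions; the underlying line \(y = 25 + 1.25x\) is chosen so that training recovers the parameter values quoted later in this tutorial.

```python
import numpy as np
import matplotlib.pyplot as plt

# Randomly generated data set: 100 samples of (temperature, users).
# The generation scheme below is an assumption; the underlying line
# y = 25 + 1.25*x is chosen so training recovers theta0 ~ 25, theta1 ~ 1.25.
rng = np.random.default_rng(42)
x = rng.uniform(0, 40, 100)                  # temperature
y = 25 + 1.25 * x + rng.normal(0, 3, 100)    # number of bike-sharing users

# 80:20 split into train set and test set
x_train, y_train = x[:80], y[:80]
x_test, y_test = x[80:], y[80:]

# Plot both sets
plt.scatter(x_train, y_train, label='train set')
plt.scatter(x_test, y_test, label='test set')
plt.xlabel('Temperature')
plt.ylabel('Number of bike-sharing users')
plt.legend()
plt.show()
```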

Hypothesis Function

The hypothesis function is a function that approximates the data set; we will use it to make predictions. The hypothesis function for this data set is defined as:

h_{\theta}(x)=\theta_{0}+\theta_{1}x

where \(\theta_{0}\) and \(\theta_{1}\) are the parameters that we will obtain through training, using either gradient descent or the normal equation. \(x\) is the input (x_train or x_test), and \(h_{\theta}\) is the predicted output.

This linear regression model and its mathematical notation are based on Andrew Ng’s machine learning course. If you are not familiar with this mathematical model, then you can learn the details from this lecture (lecture 2.1 – 2.7).

The following code defines \(\theta_{0}\) and \(\theta_{1}\) as global variables. Then we define the hypothesis function.
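A sketch of this step (the function name is an assumption):

```python
# Parameters of the hypothesis, defined globally and initialised to zero
theta0 = 0.0
theta1 = 0.0

# Hypothesis function: h_theta(x) = theta0 + theta1 * x
def hypothesis(x):
    return theta0 + theta1 * x
```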

Gradient Descent

In order to get \(\theta_{0}\) and \(\theta_{1}\), we can use either gradient descent or the normal equation. In this section, we are going to use gradient descent; the normal equation is implemented in a later section.

Gradient descent is an iterative method for solving for \(\theta_{0}\) and \(\theta_{1}\), while the normal equation solves for them analytically. For a very large data set, the gradient descent method is preferred.

The gradient descent algorithm is defined as:

repeat \; until \; convergence \; \{ \\
\qquad temp0:=\theta_{0}-\alpha\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{(i)})-y^{(i)}) \\
\qquad temp1:=\theta_{1}-\alpha\frac{1}{m}\sum_{i=1}^{m}(h_{\theta}(x^{(i)})-y^{(i)})x^{(i)} \\
\qquad \theta_{0}:=temp0 \\
\qquad \theta_{1}:=temp1 \\
\}

where \(\alpha\) is the learning rate, and \(m\) is the number of training samples, which is \(80\).

The following code defines a function that performs gradient descent:

Then, run gradient descent for 10000 iterations with \(\alpha=0.003\). Finally, you can print the result.
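Since the original listing is not shown, here is a sketch of these two steps, following the update rule above. The function name is an assumption, and a stand-in train set is generated so the snippet runs on its own; in the tutorial, the x_train and y_train arrays from the Data Set section would be used.

```python
import numpy as np

# Stand-in train set so this snippet is self-contained (an assumption;
# use the x_train, y_train defined in the Data Set section instead).
rng = np.random.default_rng(42)
x_train = rng.uniform(0, 40, 80)
y_train = 25 + 1.25 * x_train + rng.normal(0, 3, 80)

def gradient_descent(x, y, alpha, num_iters):
    """Batch gradient descent for h(x) = theta0 + theta1 * x."""
    m = len(x)
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iters):
        error = theta0 + theta1 * x - y          # h(x^(i)) - y^(i), vectorised
        temp0 = theta0 - alpha * error.sum() / m
        temp1 = theta1 - alpha * (error * x).sum() / m
        theta0, theta1 = temp0, temp1            # simultaneous update
    return theta0, theta1

theta0, theta1 = gradient_descent(x_train, y_train, alpha=0.003, num_iters=10000)
print(theta0, theta1)   # both values should be close to 25 and 1.25
```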

You will get \(\theta_{0}\approx 25\) and \(\theta_{1}\approx 1.25\).

Plot the Hypothesis Function

The following code plots the hypothesis function \(h_{\theta}(x)=\theta_{0}+\theta_{1}x\) with the obtained value of \(\theta_{0}\) and \(\theta_{1}\):
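A sketch of the plotting code; the parameter values and the temperature range are assumptions based on the results quoted above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Approximate parameters obtained from gradient descent
theta0, theta1 = 25.0, 1.25

# Hypothesis line over the temperature range (the range is an assumption)
xs = np.linspace(0, 40, 100)
plt.plot(xs, theta0 + theta1 * xs, color='red', label=r'$h_\theta(x)$')
plt.xlabel('Temperature')
plt.ylabel('Number of bike-sharing users')
plt.legend()
plt.show()
```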

As you can see in the following figure, the hypothesis function is a straight line.

Normal Equation

The normal equation is another method that can solve for \(\theta_{0}\) and \(\theta_{1}\) analytically. It is also called the closed-form solution. With the normal equation method, we don’t need to choose a number of iterations or a learning rate.

The normal equation is defined as

\theta=(X^{T}X)^{-1}X^{T}y

where \(X\) is the \(m \times 2\) design matrix whose first column is all ones and whose second column contains the temperatures, \(y\) is the vector of outputs, and \(\theta=[\theta_{0},\theta_{1}]^{T}\).

The following code implements the normal equation method:
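A sketch of the normal-equation method, again with a stand-in train set (an assumption) so the snippet runs on its own:

```python
import numpy as np

# Stand-in train set (an assumption; use the x_train, y_train from the
# Data Set section in the actual tutorial code).
rng = np.random.default_rng(42)
x_train = rng.uniform(0, 40, 80)
y_train = 25 + 1.25 * x_train + rng.normal(0, 3, 80)

# Design matrix: a column of ones (for theta0) and the temperatures (for theta1)
X = np.column_stack([np.ones_like(x_train), x_train])

# theta = (X^T X)^(-1) X^T y
theta = np.linalg.inv(X.T @ X) @ (X.T @ y_train)
theta0, theta1 = theta
print(theta0, theta1)   # close to the gradient descent result
```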

You should find that the \(\theta_{0}\) and \(\theta_{1}\) given by gradient descent and by the normal equation are approximately equal.


Conclusion

In this tutorial, you have learned how to implement linear regression from scratch in Python. There are two methods for solving for the parameters of the hypothesis function, namely gradient descent and the normal equation.

Next: Linear Regression in TensorFlow.js and Bootstrap UI
