Machine Learning with Python

Bikash Santra

Research Scholar, Indian Statistical Institute, Kolkata

Python Packages

a) Image Input / Output: NumPy
b) Feature Extraction (Deep Convolutional Neural Network): PyTorch
c) Classification (Deep Convolutional Neural Network): PyTorch
d) Classification (Random Forest): scikit-learn

Convolutional Neural Network (CNN)

AlexNet [2012], trained on the ImageNet dataset


In [1]:
# Pytorch libraries
import torch
import torchvision
import torchvision.transforms as transforms

# For displaying images and numpy operations
import matplotlib.pyplot as plt
import numpy as np

# For building the CNN
from torch.autograd import Variable   # no-op wrapper on PyTorch >= 0.4, kept for compatibility
import torch.nn as nn
import torch.nn.functional as F

# Loss function and optimizer
import torch.optim as optim
Initializing Data Loader

The outputs of the torchvision datasets are PILImage images in the range [0, 1]. We transform them to tensors normalized to the range [-1, 1].

In [2]:
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
Files already downloaded and verified
Files already downloaded and verified
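As a quick sanity check (an illustrative addition, not part of the original notebook), we can confirm that a normalized batch indeed lies in [-1, 1]: Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) maps a pixel value x in [0, 1] to (x - 0.5) / 0.5.

# Sanity check (illustrative): normalized pixel values should span roughly [-1, 1]
check_images, _ = next(iter(trainloader))
print(check_images.min(), check_images.max())   # expected: close to -1.0 and 1.0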
Function to show an image
In [3]:
def imshow(img):
    img = img / 2 + 0.5     # unnormalize: invert (x - 0.5) / 0.5
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))   # CHW -> HWC for matplotlib
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)   # dataiter.next() is Python 2 only
# print(labels)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
  cat   cat   dog  deer
Define a Convolutional Neural Network
In [4]:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)       # 3x32x32 -> 6x28x28
        self.relu1 = nn.ReLU()
        self.pool1 = nn.MaxPool2d(2, 2)       # 6x28x28 -> 6x14x14
        self.conv2 = nn.Conv2d(6, 16, 5)      # 6x14x14 -> 16x10x10
        self.relu2 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(2, 2)       # 16x10x10 -> 16x5x5
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.relu3 = nn.ReLU()
        self.fc2 = nn.Linear(120, 84)
        self.relu4 = nn.ReLU()
        self.fc3 = nn.Linear(84, 10)          # 10 CIFAR-10 classes

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.relu2(x)
        x = self.pool2(x)
        x = x.view(-1, 16 * 5 * 5)            # flatten to a 400-dim vector
        x = self.fc1(x)
        x = self.relu3(x)
        x = self.fc2(x)
        x2 = self.relu4(x)                    # 84-dim fc2 features, reused later by the random forest
        x = self.fc3(x2)
        return (x, x2)                        # (class scores, penultimate features)

net = Net()
In [5]:
print(net)
Net(
  (conv1): Conv2d (3, 6, kernel_size=(5, 5), stride=(1, 1))
  (relu1): ReLU()
  (pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1))
  (conv2): Conv2d (6, 16, kernel_size=(5, 5), stride=(1, 1))
  (relu2): ReLU()
  (pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1))
  (fc1): Linear(in_features=400, out_features=120)
  (relu3): ReLU()
  (fc2): Linear(in_features=120, out_features=84)
  (relu4): ReLU()
  (fc3): Linear(in_features=84, out_features=10)
)
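The in_features=400 of fc1 comes from the spatial arithmetic of the layers above: a 32x32 input shrinks to 28x28 after conv1 (5x5 kernel, no padding), to 14x14 after pool1, to 10x10 after conv2, and to 5x5 after pool2, giving 16 * 5 * 5 = 400 values. A dummy forward pass makes this easy to verify (an illustrative check, not part of the original notebook):

# Illustrative shape check: 32 -> 28 (conv1) -> 14 (pool1) -> 10 (conv2) -> 5 (pool2)
dummy = Variable(torch.randn(1, 3, 32, 32))
scores, feats = net(dummy)
print(scores.size(), feats.size())   # expected: (1, 10) and (1, 84)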
In [6]:
# Using GPU, if available
use_gpu = torch.cuda.is_available()
print(use_gpu)

if use_gpu:
    net.cuda()
True
Define Loss function and optimizer
In [7]:
# Let’s use a Classification Cross-Entropy loss and SGD with momentum
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
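Note that CrossEntropyLoss combines LogSoftmax and NLLLoss internally, so the network should output raw class scores (logits) rather than probabilities. A minimal illustration (not part of the original notebook):

# Illustrative: CrossEntropyLoss takes raw scores of shape (N, C) and integer targets of shape (N,)
example_scores = Variable(torch.FloatTensor([[2.0, 0.5, 0.1]]))   # one sample, three classes
example_target = Variable(torch.LongTensor([0]))                  # the true class is index 0
print(criterion(example_scores, example_target))                  # small loss, since class 0 scores highest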
Train the network
In [8]:
# This is when things start to get interesting.
# We simply have to loop over our data iterator, feed the inputs to the network, and optimize.

for epoch in range(20):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        
        # wrap them in Variable
        if use_gpu:
            inputs, labels = Variable(inputs.cuda()), \
                Variable(labels.cuda())
        else:
            inputs, labels = Variable(inputs), Variable(labels)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs, features = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()   # on PyTorch < 0.4 this was loss.data[0]
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
    
print('Finished Training')
[1,  2000] loss: 2.161
[1,  4000] loss: 1.834
[1,  6000] loss: 1.698
[1,  8000] loss: 1.588
[1, 10000] loss: 1.526
[1, 12000] loss: 1.459
[2,  2000] loss: 1.396
[2,  4000] loss: 1.382
[2,  6000] loss: 1.351
[2,  8000] loss: 1.326
[2, 10000] loss: 1.309
[2, 12000] loss: 1.309
[3,  2000] loss: 1.224
[3,  4000] loss: 1.188
[3,  6000] loss: 1.234
[3,  8000] loss: 1.209
[3, 10000] loss: 1.213
[3, 12000] loss: 1.182
[4,  2000] loss: 1.115
[4,  4000] loss: 1.123
[4,  6000] loss: 1.120
[4,  8000] loss: 1.139
[4, 10000] loss: 1.107
[4, 12000] loss: 1.107
[5,  2000] loss: 1.042
[5,  4000] loss: 1.040
[5,  6000] loss: 1.054
[5,  8000] loss: 1.045
[5, 10000] loss: 1.045
[5, 12000] loss: 1.041
[6,  2000] loss: 0.950
[6,  4000] loss: 0.995
[6,  6000] loss: 0.995
[6,  8000] loss: 1.007
[6, 10000] loss: 1.001
[6, 12000] loss: 1.006
[7,  2000] loss: 0.911
[7,  4000] loss: 0.940
[7,  6000] loss: 0.954
[7,  8000] loss: 0.973
[7, 10000] loss: 0.931
[7, 12000] loss: 0.982
[8,  2000] loss: 0.878
[8,  4000] loss: 0.902
[8,  6000] loss: 0.893
[8,  8000] loss: 0.921
[8, 10000] loss: 0.917
[8, 12000] loss: 0.932
[9,  2000] loss: 0.814
[9,  4000] loss: 0.859
[9,  6000] loss: 0.882
[9,  8000] loss: 0.894
[9, 10000] loss: 0.886
[9, 12000] loss: 0.913
[10,  2000] loss: 0.808
[10,  4000] loss: 0.841
[10,  6000] loss: 0.830
[10,  8000] loss: 0.842
[10, 10000] loss: 0.863
[10, 12000] loss: 0.882
[11,  2000] loss: 0.782
[11,  4000] loss: 0.786
[11,  6000] loss: 0.831
[11,  8000] loss: 0.798
[11, 10000] loss: 0.867
[11, 12000] loss: 0.849
[12,  2000] loss: 0.724
[12,  4000] loss: 0.780
[12,  6000] loss: 0.800
[12,  8000] loss: 0.809
[12, 10000] loss: 0.830
[12, 12000] loss: 0.834
[13,  2000] loss: 0.733
[13,  4000] loss: 0.746
[13,  6000] loss: 0.775
[13,  8000] loss: 0.786
[13, 10000] loss: 0.782
[13, 12000] loss: 0.803
[14,  2000] loss: 0.689
[14,  4000] loss: 0.731
[14,  6000] loss: 0.753
[14,  8000] loss: 0.768
[14, 10000] loss: 0.795
[14, 12000] loss: 0.794
[15,  2000] loss: 0.688
[15,  4000] loss: 0.697
[15,  6000] loss: 0.733
[15,  8000] loss: 0.766
[15, 10000] loss: 0.774
[15, 12000] loss: 0.783
[16,  2000] loss: 0.655
[16,  4000] loss: 0.696
[16,  6000] loss: 0.705
[16,  8000] loss: 0.742
[16, 10000] loss: 0.767
[16, 12000] loss: 0.751
[17,  2000] loss: 0.635
[17,  4000] loss: 0.680
[17,  6000] loss: 0.718
[17,  8000] loss: 0.724
[17, 10000] loss: 0.754
[17, 12000] loss: 0.773
[18,  2000] loss: 0.635
[18,  4000] loss: 0.664
[18,  6000] loss: 0.693
[18,  8000] loss: 0.730
[18, 10000] loss: 0.718
[18, 12000] loss: 0.745
[19,  2000] loss: 0.622
[19,  4000] loss: 0.653
[19,  6000] loss: 0.675
[19,  8000] loss: 0.730
[19, 10000] loss: 0.714
[19, 12000] loss: 0.724
[20,  2000] loss: 0.596
[20,  4000] loss: 0.674
[20,  6000] loss: 0.668
[20,  8000] loss: 0.692
[20, 10000] loss: 0.722
[20, 12000] loss: 0.734
Finished Training
In [10]:
# Saving the model
torch.save(net.state_dict(), 'cifar10Model.pth')
In [11]:
# Loading the model
net.load_state_dict(torch.load('cifar10Model.pth'))
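If the model was trained on a GPU but later has to be loaded on a CPU-only machine, torch.load accepts a map_location argument. A hedged aside, not part of the original run: passing the string 'cpu' requires PyTorch >= 0.4, while older releases expect a function such as lambda storage, loc: storage.

# Illustrative: remap GPU-saved weights onto the CPU
# net.load_state_dict(torch.load('cifar10Model.pth', map_location='cpu'))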
In [9]:
# Set model to evaluation mode (affects layers such as dropout and batchnorm;
# this network has none, but calling eval() before inference is good practice)
net.eval()
Out[9]:
Net(
  (conv1): Conv2d (3, 6, kernel_size=(5, 5), stride=(1, 1))
  (relu1): ReLU()
  (pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1))
  (conv2): Conv2d (6, 16, kernel_size=(5, 5), stride=(1, 1))
  (relu2): ReLU()
  (pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), dilation=(1, 1))
  (fc1): Linear(in_features=400, out_features=120)
  (relu3): ReLU()
  (fc2): Linear(in_features=120, out_features=84)
  (relu4): ReLU()
  (fc3): Linear(in_features=84, out_features=10)
)

Feature Extraction and Classification using CNN

We have trained the network for 20 passes (epochs) over the training dataset. But we need to check whether the network has learnt anything at all.

We will check this by comparing the class label that the neural network predicts against the ground truth. If the prediction is correct, we add the sample to the list of correct predictions.

In [10]:
dataiter = iter(testloader)
images, labels = next(dataiter)

Okay, first step. Let us display a few images from the test set to get familiar.

In [11]:
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
GroundTruth:    cat  ship  ship plane

Okay, now let us see what the neural network thinks these examples above are:

In [12]:
if use_gpu:
    outputs,_ = net(Variable(images.cuda()))
else:
    outputs,_ = net(Variable(images))
# outputs = F.softmax(outputs)
outputs = outputs.cpu()

The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image belongs to that class. So, let’s get the index of the highest energy:

In [13]:
_, predicted = torch.max(outputs.data, 1)
predicted = predicted.numpy()
className = list(classes)
print('Predicted: ', ' '.join('%5s' % className[predicted[j]]
                              for j in range(4)))
Predicted:    cat  ship truck plane
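If probabilities are preferred over raw energies, the commented-out softmax line above can be enabled. A minimal sketch (note that the dim argument of F.softmax is only accepted by newer PyTorch versions):

# Illustrative: convert the raw scores to class probabilities that sum to 1 per image
probs = F.softmax(outputs, dim=1)   # on very old PyTorch, omit dim
print(probs.data.max(1))            # highest probability and its class index for each image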

The results seem pretty good.

Let us look at how the network performs on the whole dataset.

In [14]:
correct = 0
total = 0
for data in testloader:
    images, labels = data
    if use_gpu:
        outputs,_ = net(Variable(images.cuda()))
        outputs = outputs.cpu()
    else:
        outputs,_ = net(Variable(images))

    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum().item()   # .item() needs PyTorch >= 0.4; plain .sum() on older versions

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
Accuracy of the network on the 10000 test images: 60 %

Hmmm, what are the classes that performed well, and the classes that did not perform well:

In [15]:
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
    images, labels = data
    if use_gpu:
        outputs,_ = net(Variable(images.cuda()))
        outputs = outputs.cpu()
    else:
        outputs,_ = net(Variable(images))
        
    _, predicted = torch.max(outputs.data, 1)
    c = (predicted == labels).squeeze()
    for i in range(4):
        label = labels[i]
        class_correct[label] += c[i]
        class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
Accuracy of plane : 62 %
Accuracy of   car : 73 %
Accuracy of  bird : 45 %
Accuracy of   cat : 44 %
Accuracy of  deer : 62 %
Accuracy of   dog : 34 %
Accuracy of  frog : 70 %
Accuracy of horse : 67 %
Accuracy of  ship : 65 %
Accuracy of truck : 76 %

Feature Extraction using CNN

Train Set of CIFAR-10 Dataset

In [16]:
for i, data in enumerate(trainloader, 0):
    # get the inputs
    inputs, labels = data

    # wrap them in Variable
    if use_gpu:
        inputs, labels = Variable(inputs.cuda()), \
            Variable(labels.cuda())
    else:
        inputs, labels = Variable(inputs), Variable(labels)

    # extracting features
    _, features = net(inputs)
    
    if use_gpu:
        features = features.cpu()
        labels = labels.cpu()
    feature = features.data.numpy()
    label = labels.data.numpy()
    label = np.reshape(label,(labels.size(0),1))

    if i==0:
        featureMatrix = np.copy(feature)
        labelVector = np.copy(label)
    else:
        featureMatrix = np.vstack([featureMatrix,feature])
        labelVector = np.vstack([labelVector,label])
    
print('Finished Feature Extraction for Train Set')
Finished Feature Extraction for Train Set

Test Set of CIFAR-10 Dataset

In [17]:
for i, data in enumerate(testloader, 0):
    # get the inputs
    inputs, labels = data

    # wrap them in Variable
    if use_gpu:
        inputs, labels = Variable(inputs.cuda()), \
            Variable(labels.cuda())
    else:
        inputs, labels = Variable(inputs), Variable(labels)

    # extracting features
    _, features = net(inputs)
    
    if use_gpu:
        features = features.cpu()
        labels = labels.cpu()
    feature = features.data.numpy()
    label = labels.data.numpy()
    label = np.reshape(label,(labels.size(0),1))

    if i==0:
        featureMatrixTest = np.copy(feature)
        labelVectorTest = np.copy(label)
    else:
        featureMatrixTest = np.vstack([featureMatrixTest,feature])
        labelVectorTest = np.vstack([labelVectorTest,label])
    
print('Finished Feature Extraction for Test Set')
Finished Feature Extraction for Test Set
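Before handing the features to scikit-learn, it is worth confirming their shapes (an illustrative check, assuming both extraction loops above have run): one 84-dimensional fc2 feature vector per image.

# Illustrative: 50000 training and 10000 test images, 84 features each
print(featureMatrix.shape, labelVector.shape)           # expected: (50000, 84) (50000, 1)
print(featureMatrixTest.shape, labelVectorTest.shape)   # expected: (10000, 84) (10000, 1)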

Classification Using Random Forest

In [18]:
# Import packages for Random Forest
from sklearn.ensemble import RandomForestClassifier
import joblib   # on older scikit-learn: from sklearn.externals import joblib
In [19]:
# Defining the Random Forest classifier
clf = RandomForestClassifier(n_estimators=100)
print(clf.get_params())
{'warm_start': False, 'oob_score': False, 'n_jobs': 1, 'verbose': 0, 'max_leaf_nodes': None, 'bootstrap': True, 'min_samples_leaf': 1, 'n_estimators': 100, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'criterion': 'gini', 'random_state': None, 'min_impurity_split': 1e-07, 'max_features': 'auto', 'max_depth': None, 'class_weight': None}
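Fitting 100 trees on 50,000 84-dimensional vectors can be slow on a single core. As an optional tweak (not used in the original run), n_jobs=-1 builds the trees on all available CPU cores:

# Optional (illustrative): parallelize tree construction across all cores
# clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)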
In [20]:
# Train the Random Forest using Train Set of CIFAR-10 Dataset
clf.fit(featureMatrix, np.ravel(labelVector))
Out[20]:
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_split=1e-07, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=100, n_jobs=1, oob_score=False, random_state=None,
            verbose=0, warm_start=False)
In [21]:
# Test with Random Forest for Test Set of CIFAR-10 Dataset
labelVectorPredicted = clf.predict(featureMatrixTest)
Glimpse of Classification Results
In [23]:
labelVectorTest = np.ravel(labelVectorTest)
className = list(classes)
print('GroundTruth', 'Predicted')
print('--------', '--------')
for i in range(10):
    print(className[labelVectorTest[i]], className[labelVectorPredicted[i]])
GroundTruth Predicted
-------- --------
cat cat
ship ship
ship ship
plane plane
frog deer
frog frog
car car
frog deer
cat cat
car car
Classification Performance over Whole Test Dataset
In [24]:
correct = (labelVectorPredicted == labelVectorTest).sum()
print('Accuracy of the random forest on the 10000 test images: %d %%' % (
    100 * correct / labelVectorTest.shape[0]))
Accuracy of the random forest on the 10000 test images: 61 %
In [25]:
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
c = (labelVectorPredicted == labelVectorTest).squeeze()
for i in range(labelVectorTest.shape[0]):
    label = labelVectorTest[i]
    class_correct[label] += c[i]
    class_total[label] += 1
    
for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
Accuracy of plane : 67 %
Accuracy of   car : 76 %
Accuracy of  bird : 47 %
Accuracy of   cat : 42 %
Accuracy of  deer : 55 %
Accuracy of   dog : 47 %
Accuracy of  frog : 70 %
Accuracy of horse : 65 %
Accuracy of  ship : 71 %
Accuracy of truck : 72 %

Determining Feature Importance using Random Forest

In [26]:
print(clf.feature_importances_)
[ 0.03769717  0.00411907  0.01273275  0.02429402  0.01217808  0.01625152
  0.01021522  0.00586192  0.01477374  0.0116498   0.00859497  0.00311773
  0.01481239  0.01126818  0.00971793  0.0110501   0.01023917  0.00853402
  0.00436566  0.01453332  0.01789686  0.00968902  0.01726169  0.00865376
  0.01160932  0.00419128  0.00540201  0.00819441  0.00530911  0.00370293
  0.01436403  0.01705612  0.00559488  0.03495427  0.0336677   0.01501326
  0.01592497  0.00787054  0.01127615  0.00892795  0.02133391  0.00415869
  0.02130999  0.01605735  0.0205778   0.01030982  0.00605934  0.00362365
  0.00661472  0.00428696  0.01031341  0.01831779  0.00665974  0.01552463
  0.01033614  0.01291516  0.00708685  0.00592911  0.00561768  0.00631969
  0.00637401  0.02571075  0.00552464  0.00832367  0.00843367  0.0067034
  0.01800309  0.00530255  0.00994869  0.01335903  0.00444832  0.02132011
  0.00646178  0.00555633  0.02113542  0.00296896  0.00386136  0.01553458
  0.00774702  0.01703795  0.00776225  0.00415027  0.01633417  0.04207852]
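The raw importance vector is hard to read; ranking the 84 fc2 dimensions makes it clearer which features the forest relies on (an illustrative addition, not part of the original notebook):

# Illustrative: the five fc2 dimensions with the highest importance
top5 = np.argsort(clf.feature_importances_)[::-1][:5]
print(top5)                              # indices of the most important features
print(clf.feature_importances_[top5])    # their importance scores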
In [27]:
# Saving and loading the trained forest
joblib.dump(clf, 'fc1_forest1.pkl')
clf_loaded = joblib.load('fc1_forest1.pkl')

References

a) https://pytorch.org/
b) https://scikit-learn.org/stable/
c) Ho, Tin Kam. "Random decision forests." In Proceedings of the Third International Conference on Document Analysis and Recognition, vol. 1, pp. 278-282. IEEE, 1995.
d) Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." In Advances in Neural Information Processing Systems, pp. 1097-1105. 2012.
