
MOD2 LAB


i) Introduction

by Tuyen Vu and Tarik Salay. The lab's repository can be found here and the brief video can be found here.

ii) Objectives

Tasks:

  1. Build a Sequential model using Keras to implement Linear Regression with any dataset of your choice, except the datasets discussed in class or used before.

     a. Show the graph on TensorBoard.

     b. Plot the loss, then change each of the parameters below and report how the result changes in each case:

        a. learning rate

        b. batch size

        c. optimizer

        d. activation function

  2. Implement *Logistic Regression on the following dataset: https://www.kaggle.com/ronitf/heart-disease-uci

     a. Normalize the data before feeding it to the model.

     b. Show the loss on TensorBoard.

     c. Change three hyperparameters and report how the accuracy changes.

*Logistic regression: for understanding the difference between Linear Regression and Logistic Regression refer to this link: https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression

  3. Implement image classification with a CNN model on any one of the following datasets:

     https://www.kaggle.com/slothkong/10-monkey-species

     https://www.kaggle.com/prasunroy/natural-images

  4. Implement text classification with a CNN model on the following movie reviews dataset: https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews/data

  5. Implement text classification with an LSTM model on the same movie reviews dataset: https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews/data

  6. Compare the results of the CNN and LSTM models for text classification and describe which model is best for text classification based on your results. Tune the hyperparameters to attain good accuracy and show the results.

  7. Apply autoencoders on the MNIST dataset and show the encoding and decoding of a particular image. Make sure you document each and every line of the code.

iii) Approaches/Methods

iv) Workflow

Task1
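A minimal sketch of the kind of model used for this task: a single-unit Sequential model trained as linear regression, with a TensorBoard callback to log the graph and loss. The sketches in this section use the standalone Keras 2.x API, matching the Keras 2.2.5 / TensorFlow 1.10 setup mentioned in the conclusion; the file `data.csv` and its `x`/`y` columns are placeholders for whichever dataset is chosen, not the exact code used in the lab.

```python
# Minimal sketch: linear regression as a Keras Sequential model.
# "data.csv" with columns "x" and "y" is a placeholder dataset.
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

df = pd.read_csv("data.csv")
x = df[["x"]].values
y = df["y"].values

# A single linear unit is exactly linear regression
model = Sequential()
model.add(Dense(1, input_dim=1))
model.compile(optimizer="sgd", loss="mse")   # swap optimizer / learning rate here

# Log the graph and loss for TensorBoard (view with: tensorboard --logdir logs)
tb = TensorBoard(log_dir="logs/linear_regression")
model.fit(x, y, epochs=100, batch_size=32, callbacks=[tb], verbose=0)
```

Changing the learning rate, batch size, optimizer and activation function in this setup is what the loss comparison in Task 1 is based on.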

Task2
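A minimal sketch for the logistic-regression task, assuming the Kaggle file is saved as `heart.csv` with the label in the `target` column (as in the Kaggle version of the dataset). A single sigmoid unit trained with binary cross-entropy is logistic regression, and the features are standardized before training as the task requires.

```python
# Minimal sketch: logistic regression on the heart-disease data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

df = pd.read_csv("heart.csv")
X = df.drop("target", axis=1).values
y = df["target"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Normalize the features before feeding them to the model
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# One sigmoid unit + binary cross-entropy = logistic regression
model = Sequential()
model.add(Dense(1, input_dim=X_train.shape[1], activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Loss curves show up on TensorBoard; change optimizer, batch size,
# epochs, etc. here to report the accuracy differences
tb = TensorBoard(log_dir="logs/logistic_regression")
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=50, batch_size=16, callbacks=[tb])
```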

Task3
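A minimal sketch of a small CNN for the image-classification task, assuming the 10-monkey-species archive is extracted so that `training/` and `validation/` each contain one sub-folder per class; the folder names and image size are assumptions, not the lab's exact configuration.

```python
# Minimal sketch: small CNN classifier trained from image folders.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    "training", target_size=(128, 128), batch_size=32, class_mode="categorical")
val_gen = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    "validation", target_size=(128, 128), batch_size=32, class_mode="categorical")

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(train_gen.num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit_generator(train_gen, epochs=10, validation_data=val_gen)
```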

Task4
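A minimal sketch of a 1-D CNN for the movie-review sentiment task, assuming the Kaggle `train.tsv` file with its `Phrase` and `Sentiment` (5-class) columns; vocabulary size and sequence length are illustrative choices.

```python
# Minimal sketch: 1-D CNN text classifier for the movie-review phrases.
import pandas as pd
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

df = pd.read_csv("train.tsv", sep="\t")
tokenizer = Tokenizer(num_words=20000)
tokenizer.fit_on_texts(df["Phrase"])
X = pad_sequences(tokenizer.texts_to_sequences(df["Phrase"]), maxlen=60)
y = pd.get_dummies(df["Sentiment"]).values   # 5 sentiment classes, one-hot encoded

model = Sequential([
    Embedding(20000, 64, input_length=60),   # learn word embeddings from scratch
    Conv1D(128, 5, activation="relu"),       # convolve over 5-word windows
    GlobalMaxPooling1D(),
    Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=128)
```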

Task5

The code ran, but we forgot to save the model; when we tried to run it again it took very long and the computer froze.
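A minimal sketch of the LSTM version of the same classifier (same assumed `train.tsv` file and preprocessing as in the CNN sketch above); saving the trained model at the end is what would have avoided the retraining problem described above.

```python
# Minimal sketch: LSTM text classifier for the movie-review phrases.
import pandas as pd
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

df = pd.read_csv("train.tsv", sep="\t")
tokenizer = Tokenizer(num_words=20000)
tokenizer.fit_on_texts(df["Phrase"])
X = pad_sequences(tokenizer.texts_to_sequences(df["Phrase"]), maxlen=60)
y = pd.get_dummies(df["Sentiment"]).values

model = Sequential([
    Embedding(20000, 64, input_length=60),
    LSTM(64),                                # recurrent layer instead of Conv1D
    Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=128)

# Save the trained model so a long run does not have to be repeated
model.save("lstm_reviews.h5")
```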

Task6

LSTM performed better than CNN, but since the machine froze on the second run we cannot provide the exact numbers.

Task7
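A minimal sketch of a dense autoencoder on MNIST with a 32-dimensional bottleneck, showing how one particular test image is encoded and then decoded; the layer sizes and epoch count are illustrative, not the lab's exact settings.

```python
# Minimal sketch: dense autoencoder on MNIST.
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Dense

# Load MNIST and flatten the 28x28 images to 784-dimensional vectors in [0, 1]
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = Input(shape=(784,))
encoded = Dense(32, activation="relu")(inputs)        # 784 -> 32 bottleneck (encoder)
decoded = Dense(784, activation="sigmoid")(encoded)   # 32 -> 784 reconstruction (decoder)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))

# Encode and decode one particular test image
encoder = Model(inputs, encoded)
code = encoder.predict(x_test[:1])                       # 32-dimensional encoding
reconstruction = autoencoder.predict(x_test[:1]).reshape(28, 28)  # decoded image
```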

v) Datasets

Kaggle has been really helpful to us throughout this project. The datasets can be found at:

Dataset1

Dataset2

Dataset3

vi) Parameters

None

vii) Evaluation & Conclusion

While in some examples adding more epochs helped increase the accuracy, in others it did not, and the same was true of adding more layers. We had some issues with ML libraries and their compatibility with each other; the final versions used in this project are Keras 2.2.5 and TensorFlow 1.10, though for some tasks we had to switch versions again. This lab is a good exercise for anyone who wants a better understanding of the Keras library in particular.
