Implement the EM algorithm for fitting a Gaussian mixture model for the MNIST dataset.

We reduce the dataset to two classes, digits "2" and "6" only; thus, you will fit a GMM with C = 2. Use the data file data.mat. True labels of the data are also provided in label.mat. The matrix images is of size 784-by-1990, i.e., there are 1990 images in total, and each column of the matrix corresponds to one image of size 28-by-28 pixels (the image is vectorized; the original image can be recovered by mapping the vector back into a matrix).
(a) (5 points) Select from the data one raw image each of "2" and "6" and visualize them, respectively.
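A minimal sketch for part (a). The key names 'data' and 'trueLabel' inside data.mat and label.mat are assumptions here; check what scipy.io.loadmat actually returns for your files and adjust. Since MATLAB vectorizes images column-major, the reshape uses order='F'.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat

def vec_to_image(v):
    """Reshape a length-784 vector back into a 28x28 image.

    MATLAB vectorizes column-major, hence order='F'."""
    return np.asarray(v).reshape(28, 28, order='F')

def show_samples(data_path='data.mat', label_path='label.mat',
                 data_key='data', label_key='trueLabel'):
    """Display the first "2" and the first "6" found in the data.

    data_key/label_key are assumed names -- verify against your .mat files."""
    X = loadmat(data_path)[data_key]             # assumed shape (784, 1990)
    y = loadmat(label_path)[label_key].ravel()   # true digit labels
    fig, axes = plt.subplots(1, 2)
    for ax, digit in zip(axes, (2, 6)):
        idx = np.where(y == digit)[0][0]         # first image of this digit
        ax.imshow(vec_to_image(X[:, idx]), cmap='gray')
        ax.set_title(f'digit {digit}')
        ax.axis('off')
    plt.show()
```

Call `show_samples()` once the key names are confirmed.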
(b) (15 points) Use random Gaussian vectors with zero mean as the random initial means, and two identity matrices "I" as the initial covariance matrices for the clusters. Plot the log-likelihood function versus the number of iterations to show your algorithm is converging.
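A sketch of the EM loop for part (b), with the initialization the assignment specifies (zero-mean Gaussian means, identity covariances). The covariance regularizer `reg` is an addition of mine, not part of the assignment: with 784 dimensions and 1990 points the sample covariances can be singular, so a small ridge keeps the log-pdf well defined.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, C=2, max_iter=100, tol=1e-6, reg=1e-2, seed=0):
    """Fit a C-component GMM to X of shape (n, d) with EM.

    Returns (weights pi, means mu, covariances Sigma, log-likelihood history).
    reg*I is added to each covariance when evaluating densities (assumption
    for numerical stability, not part of the assignment)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(C, 1.0 / C)                          # equal initial weights
    mu = rng.standard_normal((C, d))                  # zero-mean Gaussian init
    Sigma = np.stack([np.eye(d) for _ in range(C)])   # identity covariances
    ll_hist = []
    for _ in range(max_iter):
        # E-step: responsibilities tau[i, c] = P(z_i = c | x_i), in log space
        log_p = np.column_stack([
            np.log(pi[c]) + multivariate_normal.logpdf(
                X, mu[c], Sigma[c] + reg * np.eye(d))
            for c in range(C)])
        m = log_p.max(axis=1, keepdims=True)          # log-sum-exp trick
        log_norm = m + np.log(np.exp(log_p - m).sum(axis=1, keepdims=True))
        tau = np.exp(log_p - log_norm)
        ll_hist.append(float(log_norm.sum()))
        if len(ll_hist) > 1 and \
           abs(ll_hist[-1] - ll_hist[-2]) < tol * abs(ll_hist[-2]):
            break
        # M-step: re-estimate weights, means, covariances
        Nc = tau.sum(axis=0)
        pi = Nc / n
        mu = (tau.T @ X) / Nc[:, None]
        for c in range(C):
            Xc = X - mu[c]
            Sigma[c] = (tau[:, c, None] * Xc).T @ Xc / Nc[c]
    return pi, mu, Sigma, ll_hist
```

Plotting `ll_hist` against the iteration index (e.g. `plt.plot(ll_hist)`) gives the convergence curve the question asks for.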
(c) (15 points) Report the fitted GMM model when EM has terminated in your algorithm, including the weights for each component and the mean vectors (please reformat the vectors into 28-by-28 images and show these images in your submission). Ideally, you should be able to see that these means correspond to "average" images. No need to report the covariance matrices.
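A sketch for part (c): print the component weights and render each 784-dimensional mean as a 28-by-28 "average" image. It assumes `pi` and `mu` come from a fitted GMM with one mean per row of `mu`, and that the images were vectorized column-major as in MATLAB.

```python
import numpy as np
import matplotlib.pyplot as plt

def means_as_images(mu, side=28):
    """Reshape each row of mu (C, side*side) into a (C, side, side) stack,
    undoing MATLAB's column-major vectorization via the final transpose."""
    return np.asarray(mu).reshape(-1, side, side).transpose(0, 2, 1)

def report_gmm(pi, mu):
    """Print component weights and plot each mean as an 'average' image."""
    imgs = means_as_images(mu)
    fig, axes = plt.subplots(1, len(pi))
    for c, ax in enumerate(np.atleast_1d(axes)):
        print(f'component {c}: weight = {pi[c]:.4f}')
        ax.imshow(imgs[c], cmap='gray')
        ax.set_title(f'mean {c} (w={pi[c]:.2f})')
        ax.axis('off')
    fig.tight_layout()
    return fig
```

With a good fit, the two mean images should look like blurry "2" and "6" digits, and the weights should roughly match the class proportions in the data.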
(d) (15 points) Use the posterior probabilities π_ic (the responsibilities from the E-step) to infer the labels of the images, and compare with the true labels. Report the misclassification rate for digits "2" and "6" respectively. Perform K-means clustering with K = 2 (you may call a package). Find the misclassification rate for digits "2" and "6" respectively, and compare with GMM. Which one achieves the better performance?
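A sketch for part (d). GMM labels come from the responsibilities (`tau.argmax(axis=1)` over the E-step posteriors); K-means comes from scikit-learn. Since cluster ids carry no inherent digit, the helpers below (names are mine, not from the assignment) align clusters to digits by majority agreement before computing per-digit error rates.

```python
import numpy as np
from sklearn.cluster import KMeans

def align_clusters(pred, true, digits=(2, 6)):
    """Map cluster ids {0, 1} onto the two digit labels, choosing the
    assignment that maximizes overall agreement with the true labels."""
    a = np.where(pred == 0, digits[0], digits[1])
    b = np.where(pred == 0, digits[1], digits[0])
    return a if (a == true).sum() >= (b == true).sum() else b

def misclass_rates(pred, true, digits=(2, 6)):
    """Per-digit misclassification rate after aligning clusters to digits."""
    aligned = align_clusters(np.asarray(pred), np.asarray(true), digits)
    return {d: float((aligned[true == d] != d).mean()) for d in digits}

# Usage sketch, assuming X is the (n, d) data matrix, true_labels the
# labels from label.mat, and tau the (n, 2) GMM responsibilities:
#   pred_gmm = tau.argmax(axis=1)
#   pred_km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
#   print('GMM:    ', misclass_rates(pred_gmm, true_labels))
#   print('K-means:', misclass_rates(pred_km, true_labels))
```

Comparing the two dictionaries answers the question of which method performs better; GMM often has the edge here because it models per-cluster covariance rather than assuming spherical clusters.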
I need the Python code for each question asked.
