

INSTRUCTIONS TO CANDIDATES
ANSWER ALL QUESTIONS

Assignment 1

Data: A dataset, “Raison_Dataset.csv”, is provided to you. Data description can be found in “Raison_Dataset.txt”. Please read through the document and then use the data to do the following tasks.

Note: You may consider each column a variable. The input attributes include: “Area”, “MajorAxisLength”, “MinorAxisLength”, “Electricity”, “ConvexArea”, “Extent”, “Perimeter”. The samples are independent and identically distributed (iid).

 

 

Part 1 (70 pts)

1. What is the number of classes in this dataset? (2 pts)

2. Calculate the log odds for the data. Write the discriminant function in terms of the log odds. (6 pts)
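If a quick numerical check of the log odds would help, here is a minimal Python sketch. It assumes a two-class problem and that the class label column is named “Class” (confirm against Raison_Dataset.txt):

# A quick numerical check of the prior log odds (assumes a two-class problem
# and that the label column is named "Class"; adjust to Raison_Dataset.txt).
import numpy as np
import pandas as pd

df = pd.read_csv("Raison_Dataset.csv")
p = df["Class"].value_counts(normalize=True)
print(p)
if len(p) == 2:
    # Discriminant in terms of log odds: g(x) = log[P(C1|x) / P(C2|x)],
    # choose C1 when g(x) > 0; the prior log odds are the x-free part.
    print("prior log odds:", np.log(p.iloc[0] / p.iloc[1]))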

3. Assume that the input attributes are multivariate normal. Further assume that the input attributes in each class follow a different multivariate normal distribution. Calculate the mean vector and covariance matrix of the input attributes in each class. (Hint: consider your answer in 1; you should obtain that many pairs of mean vector and covariance matrix.) (8 pts)
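A minimal pandas sketch for the per-class estimates, assuming the label column is named “Class”:

# One mean vector and one covariance matrix per class, using sample estimates.
import pandas as pd

features = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
            "ConvexArea", "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")

params = {label: (grp[features].mean().values, grp[features].cov().values)
          for label, grp in df.groupby("Class")}
for label, (mu, sigma) in params.items():
    print(label, mu.shape, sigma.shape)   # expect (7,) and (7, 7)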

4. Given your answer in 3, generate 10 samples from each of the multivariate distributions. (Hint: the total number of samples generated should be 10 times the number of classes.) (10 pts)
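A short sketch for the sampling step, again assuming the label column is “Class”; the random seed is arbitrary:

# Draw 10 synthetic samples per class from the fitted normals
# (10 x number of classes points in total).
import numpy as np
import pandas as pd

features = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
            "ConvexArea", "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")
rng = np.random.default_rng(0)

for label, grp in df.groupby("Class"):
    mu, sigma = grp[features].mean().values, grp[features].cov().values
    draws = rng.multivariate_normal(mu, sigma, size=10)
    print(label, draws.shape)   # (10, 7)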

5. Given the assumption that input attributes are multivariate normal, visualize the joint distribution of “MajorAxisLength” and “MinorAxisLength” for each class. Based on the “multivariate normal” assumption, do you think that “MajorAxisLength” and “MinorAxisLength” are both univariate normal, and why? (Hint: use your results from 3 and visualize the parametric form of the distribution. Create a [0, 800] × [0, 800] grid for the 3D plots.) (10 pts)
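One way to draw the 3D surfaces, using scipy and matplotlib and assuming the label column is “Class”:

# Plot the fitted bivariate normal of (MajorAxisLength, MinorAxisLength)
# for each class on a [0, 800] x [0, 800] grid.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

cols = ["MajorAxisLength", "MinorAxisLength"]
df = pd.read_csv("Raison_Dataset.csv")

x = np.linspace(0, 800, 200)
X, Y = np.meshgrid(x, x)
grid = np.dstack([X, Y])

for label, grp in df.groupby("Class"):
    mu, sigma = grp[cols].mean().values, grp[cols].cov().values
    Z = multivariate_normal(mu, sigma).pdf(grid)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot_surface(X, Y, Z, cmap="viridis")
    ax.set_xlabel(cols[0]); ax.set_ylabel(cols[1]); ax.set_zlabel("density")
    ax.set_title(f"Class {label}")
plt.show()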

6. Given your answers in 3, write the functional form of the likelihood ratio. You may define notations for the mean and covariance of each class. (6 pts)

7. Given your answers in 3 and 6, write the discriminant function for each class. Then, calculate the discriminant function for each sample point and label each point with the class name. (Hint: see Eq. (4.20) in the textbook. The “label” here is based on your calculated discriminant. You may store the labels in an Excel or .csv file.) (10 pts)
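A hedged sketch of one common form of this discriminant under the multivariate normal assumption (verify the exact expression against Eq. (4.20) in your textbook); the “Class” column name and the output file name are assumptions:

import numpy as np
import pandas as pd

features = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
            "ConvexArea", "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")
priors = df["Class"].value_counts(normalize=True)
params = {label: (grp[features].mean().values, grp[features].cov().values)
          for label, grp in df.groupby("Class")}

def discriminant(x, mu, sigma, prior):
    # g_i(x) = -1/2 log|Sigma_i| - 1/2 (x - mu_i)^T Sigma_i^{-1} (x - mu_i) + log P(C_i)
    diff = x - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * logdet - 0.5 * diff @ np.linalg.solve(sigma, diff) + np.log(prior)

X = df[features].values
df["pred_quadratic"] = [
    max(params, key=lambda c: discriminant(x, *params[c], priors[c])) for x in X]
df.to_csv("labels_quadratic.csv", index=False)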

8. Given your answers in 3 and 6, now pooling the covariance of all classes, write the discriminant function for each class. Then, calculate the discriminant function for each sample point and label each point with the class name. (Hint: see Eq. (5.21) and (5.22) in the textbook. The “label” here is based on your calculated discriminant. You may store the labels in an Excel or .csv file.) (10 pts)
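A similar sketch with the covariance pooled across classes (verify against Eq. (5.21)-(5.22) in your textbook); the “Class” column name and the output file name are again assumptions:

import numpy as np
import pandas as pd

features = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
            "ConvexArea", "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")
priors = df["Class"].value_counts(normalize=True)
means = {label: grp[features].mean().values for label, grp in df.groupby("Class")}
# Pooled covariance: prior-weighted average of the per-class covariances.
pooled = sum(priors[label] * grp[features].cov().values
             for label, grp in df.groupby("Class"))

def linear_discriminant(x, mu, prior):
    diff = x - mu
    return -0.5 * diff @ np.linalg.solve(pooled, diff) + np.log(prior)

X = df[features].values
df["pred_pooled"] = [
    max(means, key=lambda c: linear_discriminant(x, means[c], priors[c])) for x in X]
df.to_csv("labels_pooled.csv", index=False)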

 

9. Use confusion matrices to show the classification results for the discriminant functions in 7 and 8, respectively. Calculate the classification accuracy for both and compare the results. Briefly describe your findings. (Hint: you will obtain 2 confusion matrices, one for the result in 7 and the other for 8.) (8 pts)
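A sketch for building both confusion matrices and accuracies, assuming the label files saved in 7 and 8 follow the column names used in the sketches above:

import pandas as pd

for path, col in [("labels_quadratic.csv", "pred_quadratic"),
                  ("labels_pooled.csv", "pred_pooled")]:
    labelled = pd.read_csv(path)
    # Confusion matrix as a cross-tabulation of true vs. predicted labels.
    cm = pd.crosstab(labelled["Class"], labelled[col],
                     rownames=["true"], colnames=["predicted"])
    accuracy = (labelled["Class"] == labelled[col]).mean()
    print(path)
    print(cm)
    print("accuracy:", accuracy, "\n")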

 

Part 2 (60 pts)

Do 4-fold cross validation for the dataset and perform classification analysis: (1) randomly shuffle the samples, (2) partition the data into 4 folds, (3) use 3 of the 4 folds as training data and the remaining fold as testing data (repeat this 4 times, holding out a different fold each time).
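One possible setup for the shuffled 4-fold split (the seed and variable names are illustrative):

import numpy as np
import pandas as pd

df = pd.read_csv("Raison_Dataset.csv")
rng = np.random.default_rng(0)
idx = rng.permutation(len(df))        # (1) shuffle the sample indices
folds = np.array_split(idx, 4)        # (2) partition into 4 folds

for k in range(4):                    # (3) rotate the held-out fold
    test_idx = folds[k]
    train_idx = np.concatenate([folds[m] for m in range(4) if m != k])
    train, test = df.iloc[train_idx], df.iloc[test_idx]
    # ... fit on `train` and evaluate on `test` (tasks 1-4 below) ...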

For each of the 4 replicates, do the following:

1. Assume that the input attributes are multivariate normal. Calculate the mean vector and covariance matrix for the input attributes in each class using the training data. (10 pts)

2. Given your answers in 1, calculate the discriminant function for the testing data. Then label each testing sample with the class name. Finally, create a confusion matrix to show the classification result for testing data. (Hint: You may store the labels in an Excel or .csv file.) (15 pts)

3. Given your answers in 1, if pooling the covariance of all classes, calculate the discriminant function for the testing data. Then label each testing sample with the class name. Finally, create a confusion matrix to show the classification result for testing data. (Hint: You may store the labels in an Excel or .csv file.) (15 pts)

4. For the discriminant functions in 2 and 3, respectively, calculate the average false positive rate, false negative rate, true positive rate, and true negative rate for the classification results throughout the 4 replicates that you have completed. (Hint: you will get four rates for each classification method.) (15 pts)
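A sketch for the averaged rates, assuming a two-class problem; the confusion-matrix values below are placeholders only, to be replaced with your matrices from 2 and 3:

import numpy as np

def binary_rates(cm):
    # cm is a 2x2 array laid out as [[TN, FP], [FN, TP]]
    # (pick one class as "positive" and keep that choice fixed across folds).
    tn, fp, fn, tp = np.asarray(cm).ravel()
    return {"TPR": tp / (tp + fn), "TNR": tn / (tn + fp),
            "FPR": fp / (fp + tn), "FNR": fn / (fn + tp)}

# Placeholder example values only; use the four matrices from your replicates.
fold_confusion_matrices = [np.array([[40, 5], [7, 48]])] * 4
fold_rates = [binary_rates(cm) for cm in fold_confusion_matrices]
average = {k: np.mean([r[k] for r in fold_rates]) for k in fold_rates[0]}
print(average)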

5. Briefly describe the performance of each discrimination method and identify the best one for this dataset based on the average performance across 4-fold cross validation. (5 pts)

 

 

Part 3 (70 pts)

For this part, take “Area”, “MajorAxisLength”, “MinorAxisLength”, “Electricity”, “Extent”, “Perimeter” as the independent variables, and “ConvexArea” as the dependent variable.

1. Visualize “ConvexArea” against each independent variable and describe the trend and patterns in your plots. (Hint: you will get 6 plots, each with “ConvexArea” on the vertical axis and one independent variable on the horizontal axis.) (8 pts)
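A minimal matplotlib sketch for the six scatter plots:

import matplotlib.pyplot as plt
import pandas as pd

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")

fig, axes = plt.subplots(2, 3, figsize=(12, 7))
for ax, col in zip(axes.ravel(), independent):
    ax.scatter(df[col], df["ConvexArea"], s=5)
    ax.set_xlabel(col)
    ax.set_ylabel("ConvexArea")
fig.tight_layout()
plt.show()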

2. Use the first 600 samples in the dataset as the training data and the rest as the testing data. Calculate the correlation matrix of all dependent and independent variables for the training data. Based on the correlation matrix, identify which independent variables have a major impact on the dependent variable. Does this impact imply a causal relationship, and why? (Hint: save the correlations in an Excel or .csv file.) (8 pts)
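A pandas sketch for the split and the correlation matrix; the output file name is an assumption:

import pandas as pd

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")
train, test = df.iloc[:600], df.iloc[600:]

corr = train[independent + ["ConvexArea"]].corr()
corr.to_csv("correlations.csv")
# Correlations with the dependent variable, strongest first (by absolute value).
print(corr["ConvexArea"].drop("ConvexArea").sort_values(key=abs, ascending=False))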

3. Use Python to fit a linear regression model using the training data. Summarize the model coefficients. Based on the coefficients, which independent variables have the most impact on the dependent variable? (10 pts)
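A scikit-learn sketch (statsmodels OLS would work equally well and prints a fuller summary); note that raw coefficients are scale-dependent, so consider standardizing the inputs before comparing magnitudes:

import pandas as pd
from sklearn.linear_model import LinearRegression

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")
train = df.iloc[:600]

model = LinearRegression().fit(train[independent], train["ConvexArea"])
print(dict(zip(independent, model.coef_)))
print("intercept:", model.intercept_)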

4. Use the model fitted in 3 to make predictions for the testing data. Calculate the mean squared error of the predictions on the testing samples. Do you think the model has good prediction performance? (Hint: feed the testing samples of the independent variables into your fitted model and then evaluate the predictions against the true values of the dependent variable.) (8 pts)
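A sketch for the prediction and test MSE; it refits the same model as in 3 so it can run on its own:

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")
train, test = df.iloc[:600], df.iloc[600:]

model = LinearRegression().fit(train[independent], train["ConvexArea"])
pred = model.predict(test[independent])
print("test MSE:", mean_squared_error(test["ConvexArea"], pred))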

 

5. Based on the result in 3, do you think that the independent variables are mutually linearly independent? What is the effect on the linear regression model when linear dependence appears among the independent variables? (5 pts)
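If a concrete collinearity check would support your answer, variance inflation factors are one optional diagnostic (not required by the question):

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
train = pd.read_csv("Raison_Dataset.csv").iloc[:600]

X = sm.add_constant(train[independent].astype(float))
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)   # values much larger than ~10 suggest strong collinearity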

6. Perform Principal Component Analysis on the training data matrix of independent variables. Show the variance explained by each principal component. (6 pts)
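A scikit-learn sketch; standardizing the inputs first is a modelling choice, and running PCA on raw or merely centered data will change the variance breakdown:

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
train = pd.read_csv("Raison_Dataset.csv").iloc[:600]

scaler = StandardScaler().fit(train[independent])
pca = PCA().fit(scaler.transform(train[independent]))
print("variance explained:", pca.explained_variance_)
print("proportion of total:", pca.explained_variance_ratio_)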

7. Visualize a Pareto chart for the variance explained by each principal component. (10 pts)
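A matplotlib sketch of a Pareto chart (bars for each component plus a cumulative line):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
train = pd.read_csv("Raison_Dataset.csv").iloc[:600]

pca = PCA().fit(StandardScaler().fit_transform(train[independent]))
ratio = pca.explained_variance_ratio_
cumulative = np.cumsum(ratio)

fig, ax = plt.subplots()
ax.bar(range(1, len(ratio) + 1), ratio, label="per component")
ax.plot(range(1, len(ratio) + 1), cumulative, marker="o", color="black",
        label="cumulative")
ax.set_xlabel("Principal component")
ax.set_ylabel("Proportion of variance explained")
ax.set_title("Pareto chart of explained variance")
ax.legend()
plt.show()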

8. Take the first 3 principal components from 7 and fit a multivariate regression model. Make predictions with the model for the testing samples and calculate the mean squared error. (Hint: you need to transform the testing data to principal components as well.) (10 pts)
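A sketch for the principal-component regression; the testing data are transformed with the scaler and PCA fitted on the training data:

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

independent = ["Area", "MajorAxisLength", "MinorAxisLength", "Electricity",
               "Extent", "Perimeter"]
df = pd.read_csv("Raison_Dataset.csv")
train, test = df.iloc[:600], df.iloc[600:]

scaler = StandardScaler().fit(train[independent])
pca = PCA(n_components=3).fit(scaler.transform(train[independent]))

pcs_train = pca.transform(scaler.transform(train[independent]))
pcs_test = pca.transform(scaler.transform(test[independent]))

pc_model = LinearRegression().fit(pcs_train, train["ConvexArea"])
print("test MSE (3 PCs):",
      mean_squared_error(test["ConvexArea"], pc_model.predict(pcs_test)))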

9. Give a practical scenario when you will use Principal Component Analysis to reduce the data dimensionality before fitting a regression model; give a practical scenario when you will NOT use Principal Component Analysis to reduce the data dimensionality before fitting a regression model. (Hint: you can name any data source and/or situations, which is not necessarily related to the Raison dataset.) (5 pts)

 

 
