Wine quality data analysis

Question

I have a dataset that describes wine quality in terms of factors such as acidity, density, and pH. I have attached a link to the wine quality dataset. Based on the dataset, we need to use a multi-class classification algorithm with training and test data to analyse it. Please correct me if I'm wrong.

Wine_Quality.csv dataset

https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/

I also applied Principal Component Analysis (PCA) to this dataset. Below is the code I used:

# -*- coding: utf-8 -*-
"""
Created on Sun Aug 26 14:14:44 2018

@author: 1022316
"""

# Wine Quality testing
#Multiclass classification - PCA

#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

#importing the Dataset
dataset = pd.read_csv(r'C:\Machine learning\winequality-red_1.csv')  # raw string avoids backslash escape issues
X = dataset.iloc[:, 0:11].values
y = dataset.iloc[:, 11].values

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

#Applying the PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 2 )
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)  # transform only: reuse the components fitted on the training set
explained_variance = pca.explained_variance_ratio_

# Fitting Logistic Regression to the Training set
#from sklearn.tree import DecisionTreeClassifier
#classifier = DecisionTreeClassifier(max_depth = 2).fit(X_train, y_train)
#y_pred = classifier.predict(X_test)

#classifier = LogisticRegression(random_state = 0)
#classifier.fit(X_train, y_train)

#Fitting the Logistic Regression model to the training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)


#Predicting the Test set results
y_pred = classifier.predict(X_test)


# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

Please let me know whether I am using the right algorithms for this dataset. Also, as far as I can see, there are 9 classes into which this dataset is split. Please also let me know how to visualise and plot the data by class.

python-3.x machine-learning classification pca multiclass-classification
1 Answer

Based on the dataset, we need to use a multi-class classification algorithm with training and test data to analyse it. Please correct me if I'm wrong.

Correct.

Please let me know whether I am using the right algorithms for this dataset.

Yes. But a more systematic way to apply them is: first use PCA to visually explore the separability of the classes and the relative information content of the components (you are using the first two). Then apply logistic regression to both the original high-dimensional feature space and the reduced PCA space.

#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

#importing the Dataset
dataset = pd.read_csv('winequality-red.csv', sep=';') # https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv
X = dataset.iloc[:, 0:11].values  # features, as in your code
y = dataset.iloc[:, 11].values    # quality labels
sns.countplot(x=dataset['quality'])

Observation: 6 classes and high class imbalance (6 is probably because we are using a different dataset from the page you shared).
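If you prefer a numeric view of the imbalance rather than a bar plot, a quick check with pandas (a small addition, assuming the same dataset variable as above) could look like this:

# Count how many samples fall into each quality level
print(dataset['quality'].value_counts().sort_index())

# Relative frequencies make the imbalance easier to read
print(dataset['quality'].value_counts(normalize=True).sort_index().round(3))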

Also, as far as I can see, there are 9 classes into which this dataset is split. Please also let me know how to visualise and plot the data by class.

# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit_transform(X)

#Applying the PCA
from sklearn.decomposition import PCA
fig = plt.figure(figsize=(12,6))
pca = PCA()
pca_all = pca.fit_transform(X)
pca1 = pca_all[:, 0]
pca2 = pca_all[:, 1]

fig.add_subplot(1,2,1)
plt.bar(np.arange(pca.n_components_), 100*pca.explained_variance_ratio_)
plt.title('Relative information content of PCA components')
plt.xlabel("PCA component number")
plt.ylabel("PCA component variance % ")

fig.add_subplot(1,2,2)
plt.scatter(pca1, pca2, c=y, marker='x', cmap='jet')
plt.title('Class distributions')
plt.xlabel("PCA Component 1")
plt.ylabel("PCA Component 2")

[Figure: left, relative information content (explained variance %) of the PCA components; right, class distributions in the PCA 2D space]
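A related check, not in the original answer, is the cumulative explained variance, which hints at how many components are needed to keep most of the information. A minimal sketch, assuming the pca object fitted above:

# Cumulative explained variance of the fitted PCA
cumvar = np.cumsum(pca.explained_variance_ratio_)
for i, v in enumerate(cumvar, start=1):
    print(f"{i} components: {100*v:.1f}% of variance")

# Number of components needed to retain, say, 95% of the variance
n_95 = int(np.argmax(cumvar >= 0.95)) + 1
print("components for 95% variance:", n_95)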

There are many metrics for quantifying multi-class classification performance. Using accuracy:

# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split

#Fitting the Logistic Regression model to the training set
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

classifier = LogisticRegression(random_state = 0)

# PCA 2D space
X_train, X_test, y_train, y_test = train_test_split(pd.DataFrame(data=pca_all).iloc[:,0:2], y, test_size = 0.25, random_state = 0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
accuracy_pca_2d = accuracy_score(y_test, y_pred)

# PCA 3D space
X_train, X_test, y_train, y_test = train_test_split(pd.DataFrame(data=pca_all).iloc[:,0:3], y, test_size = 0.25, random_state = 0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
accuracy_pca_3d = accuracy_score(y_test, y_pred)

# Original feature space
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
accuracy_original = accuracy_score(y_test, y_pred)

plt.figure()
sns.barplot(x=['pca 2D space', 'pca 3D space', 'original space'], y=[accuracy_pca_2d, accuracy_pca_3d, accuracy_original])
plt.ylabel('accuracy')

[Figure: bar plot comparing accuracy in the PCA 2D space, PCA 3D space, and the original feature space]

This shows that classifying in the reduced PCA 2D space hurts performance, at least by this metric and with this setup.
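Given the class imbalance noted above, accuracy can be misleading. As an alternative not used in the original answer, macro-averaged F1 or a full classification report may be more informative; a minimal sketch, reusing y_test and y_pred from the original-space fit above:

from sklearn.metrics import classification_report, f1_score

# Per-class precision/recall/F1 plus macro and weighted averages
print(classification_report(y_test, y_pred, zero_division=0))

# A single imbalance-aware summary number
print("macro F1:", f1_score(y_test, y_pred, average='macro', zero_division=0))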

To visualise the confusion matrix you can use this. Applying it to the original-space case:

[Figure: confusion matrix for logistic regression in the original feature space]
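The original link is not preserved here; one common way to get a similar plot (my own suggestion, not necessarily the linked method) is a seaborn heatmap of the confusion matrix:

from sklearn.metrics import confusion_matrix

# Confusion matrix for the original-space predictions
labels = np.unique(y_test)
cm = confusion_matrix(y_test, y_pred, labels=labels)

plt.figure(figsize=(6, 5))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=labels, yticklabels=labels)
plt.xlabel('Predicted quality')
plt.ylabel('True quality')
plt.title('Confusion matrix (original feature space)')
plt.show()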
