1 Optional Lab: Linear Regression using Scikit-Learn

There is an open-source, commercially usable machine learning toolkit called scikit-learn. This toolkit contains implementations of many of the algorithms that you will work with in this course.

1.1 Goals

In this lab you will:

  • Utilize scikit-learn to implement linear regression using gradient descent

1.2 Tools

You will utilize functions from scikit-learn as well as matplotlib and NumPy.

2 Gradient Descent

Scikit-learn has a gradient descent regression model, sklearn.linear_model.SGDRegressor. Like the previous implementations of gradient descent, this model performs best with normalized inputs. sklearn.preprocessing.StandardScaler will perform z-score normalization as in a previous lab; here it is referred to as "standard score".
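
For reference, z-score normalization rescales each feature column by its mean and standard deviation. Below is a minimal sketch of the computation StandardScaler performs; the helper function is illustrative only, not a scikit-learn API:

import numpy as np

# illustrative helper, not part of scikit-learn
def zscore_normalize(X):
    mu    = np.mean(X, axis=0)   # column-wise mean
    sigma = np.std(X, axis=0)    # column-wise standard deviation
    return (X - mu) / sigma      # shift to zero mean, scale to unit variance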

2.1 Load the data set

import numpy as np
np.set_printoptions(precision=2)
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.preprocessing import StandardScaler
from lab_utils_multi import load_house_data
import matplotlib.pyplot as plt
dlblue = '#0096ff'; dlorange = '#FF9300'; dldarkred='#C00000'; dlmagenta='#FF40FF'; dlpurple='#7030A0'; 
plt.style.use('./deeplearning.mplstyle')

X_train, y_train = load_house_data()
X_features = ['size(sqft)','bedrooms','floors','age']
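
Note that lab_utils_multi is a course-provided helper. If it is not available, a hypothetical stand-in with the same interface (m examples × 4 features, plus a target vector) could look like the sketch below; the numbers are invented purely so the remaining cells run:

# hypothetical stand-in for the course's load_house_data(); values are invented
def load_house_data():
    X = np.array([[1200., 3., 1., 40.],
                  [1500., 3., 2., 30.],
                  [ 900., 2., 1., 35.]])   # size(sqft), bedrooms, floors, age
    y = np.array([300., 400., 250.])       # price
    return X, y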

2.2 Scale/Normalize the training data

scaler = StandardScaler()
X_norm = scaler.fit_transform(X_train)
print(f"Peak to Peak range by column in Raw        X:{np.ptp(X_train,axis=0)}")   
print(f"Peak to Peak range by column in Normalized X:{np.ptp(X_norm,axis=0)}")

Output:

Peak to Peak range by column in Raw        X:[2.41e+03 4.00e+00 1.00e+00 9.50e+01]
Peak to Peak range by column in Normalized X:[5.85 6.14 2.06 3.69]
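
As a quick sanity check, the fitted scaler stores the per-column statistics it learned in mean_ and scale_, and the transformed columns should have roughly zero mean and unit standard deviation:

print(f"per-column means learned by the scaler:  {scaler.mean_}")
print(f"per-column scales learned by the scaler: {scaler.scale_}")
# after z-score normalization, each column is ~0 mean, ~1 std
print(f"means of X_norm: {np.mean(X_norm, axis=0)}")
print(f"stds  of X_norm: {np.std(X_norm, axis=0)}")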

2.3 Create and fit the regression model

sgdr = SGDRegressor(max_iter=1000)
sgdr.fit(X_norm, y_train)
print(sgdr)
print(f"number of iterations completed: {sgdr.n_iter_}, number of weight updates: {sgdr.t_}")

Output:

SGDRegressor(alpha=0.0001, average=False, epsilon=0.1, eta0=0.01,
       fit_intercept=True, l1_ratio=0.15, learning_rate='invscaling',
       loss='squared_loss', max_iter=1000, n_iter=None, penalty='l2',
       power_t=0.25, random_state=None, shuffle=True, tol=None, verbose=0,
       warm_start=False)
number of iterations completed: 1000, number of weight updates: 99001.0
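
Note that t_ counts individual weight updates: SGD updates the parameters once per training example on each pass through the data, so 99,001 updates over 1,000 completed iterations suggests a training set of 99 examples (1000 × 99, with the counter starting at 1).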

2.4 View parameters

Note, the parameters are associated with the normalized input data. The fit parameters are very close to those found in the previous lab with this data.

b_norm = sgdr.intercept_
w_norm = sgdr.coef_
print(f"model parameters:                   w: {w_norm}, b:{b_norm}")
print(f"model parameters from previous lab: w: [110.56 -21.27 -32.71 -37.97], b: 363.16")

Output:

model parameters:                   w: [110.56 -21.26 -32.69 -37.98], b:[363.17]
model parameters from previous lab: w: [110.56 -21.27 -32.71 -37.97], b: 363.16
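
Because these parameters live in the normalized feature space, any input must be normalized before prediction. As an aside, here is a sketch of recovering equivalent parameters in the original feature units from the scaler's stored statistics (this conversion is not part of the lab):

# y = w_norm · (x - mu)/sigma + b_norm  =  (w_norm/sigma) · x + (b_norm - w_norm · mu/sigma)
w_orig = w_norm / scaler.scale_
b_orig = b_norm - np.sum(w_norm * scaler.mean_ / scaler.scale_)
print(f"parameters in original units: w: {w_orig}, b: {b_orig}")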

2.5 Make predictions

Predict the targets of the training data. Use both the predict routine and a direct computation using w and b.

# make a prediction using sgdr.predict()
y_pred_sgd = sgdr.predict(X_norm)
# make a prediction using w,b. 
y_pred = np.dot(X_norm, w_norm) + b_norm  
print(f"prediction using np.dot() and sgdr.predict match: {(y_pred == y_pred_sgd).all()}")

print(f"Prediction on training set:\n{y_pred[:4]}" )
print(f"Target values \n{y_train[:4]}")

Output:

prediction using np.dot() and sgdr.predict match: True
Prediction on training set:
[295.17 486.03 389.56 492.2 ]
Target values 
[300.  509.8 394.  540. ]
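
The exact equality check above happens to pass because both paths perform the same dot product; in general, a tolerance-based comparison such as np.allclose is safer for floating-point results:

# robust floating-point comparison instead of exact equality
print(f"predictions match within tolerance: {np.allclose(y_pred, y_pred_sgd)}")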

2.6 Plot results

# plot predictions and targets vs original features    
fig,ax=plt.subplots(1,4,figsize=(12,3),sharey=True)
for i in range(len(ax)):
    ax[i].scatter(X_train[:,i],y_train, label = 'target')
    ax[i].set_xlabel(X_features[i])
    ax[i].scatter(X_train[:,i],y_pred,color=dlorange, label = 'predict')
ax[0].set_ylabel("Price"); ax[0].legend();
fig.suptitle("target versus prediction using z-score normalized model")
plt.show()
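
To predict on new data, apply the same fitted scaler before calling predict. Here is a brief sketch with a hypothetical house (feature values invented for illustration):

# a new example must be normalized with the scaler fit on the training data
x_house = np.array([[1200, 3, 1, 40]])   # hypothetical: size(sqft), bedrooms, floors, age
x_house_norm = scaler.transform(x_house)
x_house_pred = sgdr.predict(x_house_norm)[0]
print(f"predicted price (same units as y_train): {x_house_pred:0.2f}")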

3 Summary

In this lab you:

  • utilized an open-source machine learning toolkit, scikit-learn
  • implemented linear regression using gradient descent and feature normalization from that toolkit
