Second programming assignment. Sure enough, the moment the break started I slacked off, and now I'm way behind schedule. Ugh...
First, let's look at what each of the provided files does:
Files included in this exercise
ex2.m - Octave/MATLAB script that steps you through the exercise
ex2_reg.m - Octave/MATLAB script for the later parts of the exercise
ex2data1.txt - Training set for the first half of the exercise
ex2data2.txt - Training set for the second half of the exercise
submit.m - Submission script that sends your solutions to our servers
mapFeature.m - Function to generate polynomial features
plotDecisionBoundary.m - Function to plot classifier's decision boundary
[?] plotData.m - Function to plot 2D classification data
[?] sigmoid.m - Sigmoid Function
[?] costFunction.m - Logistic Regression Cost Function
[?] predict.m - Logistic Regression Prediction Function
[?] costFunctionReg.m - Regularized Logistic Regression Cost
[?] indicates files you will need to complete
Throughout the exercise, you will be using the scripts ex2.m and ex2_reg.m. These scripts set up the dataset for the problems and make calls to functions that you will write. You do not need to modify either of them. You are only required to modify functions in other files, by following the instructions in this assignment.
1 Logistic Regression
"You will build a logistic regression model to predict whether a student gets admitted into a university."
The gist: suppose you are the administrator of a university department, and you have to decide whether each applicant gets admitted based on their scores on two exams. The whole simulation flow is in ex2.m.
1.1 Visualizing the data
Before starting to implement any learning algorithm, it is always good to visualize the data if possible. In the first part of ex2.m, the code will load the data and display it on a 2-dimensional plot by calling the function plotData.
Before implementing any learning algorithm, it is a good habit to visualize the dataset first whenever possible!
You will now complete the code in plotData so that it displays a figure like Figure 1, where the axes are the two exam scores, and the positive and negative examples are shown with different markers.
Task 1: Complete the plotData function so that it produces a figure like Figure 1.
This task exercises Octave's plotting syntax.
data = load('ex2data1.txt');
X = data(:, [1, 2]); y = data(:, 3);
plotData(X, y);
% Put some labels
hold on;
% Labels and Legend
xlabel('Exam 1 score')
ylabel('Exam 2 score')
% Specified in plot order
legend('Admitted', 'Not admitted')
hold off;
The completed plotData.m looks like this:
function plotData(X, y)
% Create New Figure
figure; hold on;
% Indices of the positive (y = 1) and negative (y = 0) examples
pos = find(y == 1);
neg = find(y == 0);
% Positive examples as black crosses, negative as yellow-filled circles
plot(X(pos,1), X(pos,2), 'k+', 'LineWidth', 2, 'MarkerSize', 7);
plot(X(neg,1), X(neg,2), 'ko', 'MarkerFaceColor', 'y', 'MarkerSize', 7);
hold off;
end
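As a quick aside, the indexing trick above relies on find, which returns the indices of the elements where the condition holds. A tiny illustration with toy data (the variable labels is made up for this example):

labels = [1; 0; 1; 0];
pos = find(labels == 1)   % pos = [1; 3]
neg = find(labels == 0)   % neg = [2; 4]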
Task 2: Implement the sigmoid function
Once it's implemented, the sigmoid's properties should hold: for large positive inputs the value approaches 1, for large negative inputs it approaches 0, and at z = 0 it equals exactly 0.5.
My answer:
g = 1/(1+e.^(-z));
Error message:
operator /: nonconformant arguments (op1 is 1x1, op2 is 20x3)
Correct answer:
g = 1./(1+exp(-z));
I wasn't thorough enough here. The instructions stress repeatedly that the variable z may be a matrix, a vector, or a scalar, yet I only handled the scalar case. Using exp() instead of e.^() is also the more precise choice.
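To see why the division operator itself blew up, compare Octave's / (matrix right division) with ./ (elementwise division). A minimal illustration:

% With a 1x1 left operand and a larger matrix on the right, 1/A attempts
% matrix right division (solving x*A = 1) and fails for nonconformant
% shapes, exactly as in the error message above; 1./A divides elementwise:
A = [1 2; 3 4];
1 ./ A     % elementwise reciprocals, works for any size
% 1 / A    % error: operator /: nonconformant arguments (op1 is 1x1, op2 is 2x2)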
So the final implementation in sigmoid.m is:
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
%   g = SIGMOID(z) computes the sigmoid of z.
% You need to return the following variables correctly
g = zeros(size(z));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
% vector or scalar).
g = 1./(1+exp(-z));
% =============================================================
end
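A quick sanity check of the finished function against the properties above (output values approximate):

sigmoid(0)           % ans = 0.50000
sigmoid(10)          % ans ≈ 0.9999546
sigmoid(-10)         % ans ≈ 4.5398e-05
sigmoid([-10 0 10])  % works elementwise on vectors and matrices too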
Task 3: Implement the cost function and gradient for logistic regression
Fill in costFunction.m; the completed code is shown below.
function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
% J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
% parameter for logistic regression and the gradient of the cost
% w.r.t. to the parameters.
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
% You should set J to the cost.
% Compute the partial derivatives and set grad to the partial
% derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta
%
J = -(y'*log(sigmoid(X*theta))+(1-y)'*log(1-sigmoid(X*theta)))/m;
grad = X'*(sigmoid(X*theta)-y)/m;
end
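For reference, these two lines vectorize the cost and gradient given in the exercise handout, where $h_\theta(x) = \mathrm{sigmoid}(\theta^T x)$:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big]$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)\,x_j^{(i)}$$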
At this point we can already summarize some recurring code patterns:
- An uppercase X almost always stands for the entire training set.
- A loop over all m training examples is usually written as a single whole-matrix expression such as X'*(X*theta-y) (equivalently, the transpose can be moved onto the trailing factor instead); see the sketch below.
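Here is a minimal sketch of that equivalence for the logistic regression gradient, assuming X, y, theta, and m are in scope as they are inside costFunction:

% Loop form: accumulate each training example's contribution one by one
grad_loop = zeros(size(theta));
for i = 1:m
  grad_loop = grad_loop + (sigmoid(X(i,:)*theta) - y(i)) * X(i,:)';
end
grad_loop = grad_loop / m;
% Vectorized form: one whole-matrix expression, identical result
grad_vec = X' * (sigmoid(X*theta) - y) / m;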
Task 4: Use fminunc to minimize J(θ)
Octave/MATLAB's fminunc function is an optimization solver that finds the minimum of an unconstrained function.
This part of the code is already written for us in ex2.m:
%% ============= Part 3: Optimizing using fminunc =============
% In this exercise, you will use a built-in function (fminunc) to find the
% optimal parameters theta.
% Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);
% Run fminunc to obtain the optimal theta
% This function will return theta and the cost
[theta, cost] = ...
fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
% Print theta to screen
fprintf('Cost at theta found by fminunc: %f\n', cost);
fprintf('theta: \n');
fprintf(' %f \n', theta);
% Plot Boundary
plotDecisionBoundary(theta, X, y);
% Put some labels
hold on;
% Labels and Legend
xlabel('Exam 1 score')
ylabel('Exam 2 score')
% Specified in plot order
legend('Admitted', 'Not admitted')
hold off;
fprintf('\nProgram paused. Press enter to continue.\n');
pause;
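One piece of syntax worth noting: @(t)(costFunction(t, X, y)) defines an anonymous function of t that captures X and y from the workspace, so fminunc searches only over theta. A toy example of the same mechanism (a made-up one-variable function, not part of the exercise):

f = @(t) (t - 3)^2;              % anonymous function with minimum at t = 3
[t_opt, f_min] = fminunc(f, 0)   % t_opt ≈ 3, f_min ≈ 0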
Task 5: Evaluating logistic regression
Complete predict.m; the prediction function should return only 0 or 1. This one is fairly easy.
function p = predict(theta, X)
%PREDICT Predict whether the label is 0 or 1 using learned logistic
%regression parameters theta
% p = PREDICT(theta, X) computes the predictions for X using a
% threshold at 0.5 (i.e., if sigmoid(theta'*x) >= 0.5, predict 1)
m = size(X, 1); % Number of training examples
% You need to return the following variables correctly
p = zeros(m, 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
% your learned logistic regression parameters.
% You should set p to a vector of 0's and 1's
%
p = sigmoid(X*theta) >= 0.5;
end
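ex2.m then uses predict to report the training accuracy, i.e. the percentage of examples whose prediction matches the label (the same two lines appear again at the end of ex2_reg.m below):

p = predict(theta, X);
fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);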
Task 6: Regularized logistic regression
From this task on, the work is unrelated to the parts above; the exercise introduces a fresh example to illustrate a situation that calls for regularization.
The task: predict whether microchips from a fabrication plant will pass quality assurance.
This time the whole flow lives in ex2_reg.m; all we need to implement is costFunctionReg.m.
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
% J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
% theta as the parameter for regularized logistic regression and the
% gradient of the cost w.r.t. to the parameters.
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
% You should set J to the cost.
% Compute the partial derivatives and set grad to the partial
% derivatives of the cost w.r.t. each parameter in theta
end
Correct answer:
hx = sigmoid(X*theta);                       % hypothesis
J = -(y'*log(hx)+(1-y)'*log(1-hx))/m ...
    + lambda/(2*m)*(theta(2:end)'*theta(2:end));   % theta(1) is not regularized
theta(1) = 0;                                % so the gradient also skips the intercept
grad = (X'*(hx-y)+lambda*theta)/m;
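For reference, the regularized cost and gradient implemented above; note that $\theta_0$ (theta(1) in Octave's 1-based indexing) is not regularized, which is exactly why theta(1) is zeroed before computing grad:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)} \qquad (j = 0)$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad (j \ge 1)$$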
Here my own code ran into a problem:
hx = sigmoid(X*theta);
thetaReg = theta(2:size(theta),:);
J = -(y'*log(hx)+(1-y)'*log(1-hx))/m + lambda/(2*m)*thetaReg*thetaReg';
grad = (X'*(hx-y)+lambda*theta)/m;
In the spirit of software engineering I wanted to keep the J expression from getting so long, but I outsmarted myself: after factoring out thetaReg, the submission no longer passed.
Cause analysis:
First, let's look at the overall flow of ex2_reg.m:
%% =========== Part 1: Regularized Logistic Regression ============
% In this part, you are given a dataset with data points that are not
% linearly separable. However, you would still like to use logistic
% regression to classify the data points.
%
% To do so, you introduce more features to use -- in particular, you add
% polynomial features to our data matrix (similar to polynomial
% regression).
%
% Add Polynomial Features
% Note that mapFeature also adds a column of ones for us, so the intercept
% term is handled
X = mapFeature(X(:,1), X(:,2));
% Initialize fitting parameters
initial_theta = zeros(size(X, 2), 1);
% Set regularization parameter lambda to 1
lambda = 1;
% Compute and display initial cost and gradient for regularized logistic
% regression
[cost, grad] = costFunctionReg(initial_theta, X, y, lambda);
fprintf('Cost at initial theta (zeros): %f\n', cost);
fprintf('\nProgram paused. Press enter to continue.\n');
pause;
%% ============= Part 2: Regularization and Accuracies =============
% Optional Exercise:
% In this part, you will get to try different values of lambda and
% see how regularization affects the decision boundary
%
% Try the following values of lambda (0, 1, 10, 100).
%
% How does the decision boundary change when you vary lambda? How does
% the training set accuracy vary?
%
% Initialize fitting parameters
initial_theta = zeros(size(X, 2), 1);
% Set regularization parameter lambda to 1 (you should vary this)
lambda = 1;
% Set Options
options = optimset('GradObj', 'on', 'MaxIter', 400);
% Optimize
[theta, J, exit_flag] = ...
fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
% Plot Boundary
plotDecisionBoundary(theta, X, y);
hold on;
title(sprintf('lambda = %g', lambda))
% Labels and Legend
xlabel('Microchip Test 1')
ylabel('Microchip Test 2')
legend('y = 1', 'y = 0', 'Decision boundary')
hold off;
% Compute accuracy on our training set
p = predict(theta, X);
fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);
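As an aside, the mapFeature call at the top of this script maps the two input features into all polynomial terms of x1 and x2 up through the sixth power, turning each example into a 28-dimensional feature vector whose leading column of ones covers the intercept. A sketch of such a mapping (the provided mapFeature.m should be equivalent in effect; this is my reconstruction, not the verbatim course file):

function out = mapFeature(X1, X2)
% Map two input features to all polynomial terms of X1 and X2 up to
% degree 6; the first column is all ones for the intercept term.
degree = 6;
out = ones(size(X1(:,1)));
for i = 1:degree
  for j = 0:i
    out(:, end+1) = (X1.^(i-j)) .* (X2.^j);
  end
end
end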
Cause of the error: thetaReg is a column vector, so thetaReg*thetaReg' is an outer product that yields an (n-1)×(n-1) matrix, whereas the cost needs the scalar inner product thetaReg'*thetaReg, the sum of squared parameters. On top of that, my grad never zeroes out theta(1), so it incorrectly regularizes the intercept term as well.
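A minimal demonstration of the difference with a toy vector:

t = [1; 2; 3];
t' * t    % = 14: the scalar sum of squares the cost needs
t * t'    % a 3x3 outer-product matrix, which silently turns J into a matrix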