gradient descent
Views: 4,149
Published: 2019-05-25


clear all; close all; clc

% Load the training data: x = age, y = height
x = load('ex2x.dat'); y = load('ex2y.dat');
plot(x, y, 'o')
ylabel('Height in meters')
xlabel('Age')

m = length(y);              % number of training examples
x = [ones(m, 1), x];        % prepend a column of ones for the intercept term
alpha = 0.07;               % learning rate
n = 1500;                   % number of iterations
theta = [0; 0];             % [theta0; theta1]
theta_array = [];           % record each iterate for plotting later

for k = 1:n
    % Vectorized gradient: x'*(x*theta - y) replaces the per-sample loop
    %   sum = sum + (theta'*x(i,:)' - y(i)) .* x(i,:)'
    % ('grad' is used instead of 'sum' to avoid shadowing the built-in)
    grad = x' * (x*theta - y);
    theta_array = [theta_array, theta];
    theta = theta - alpha * (1/m) * grad;
end

% Plot the fitted line over the scatter plot
hold on
plot(x(:,2), theta(2)*x(:,2) + theta(1)*x(:,1))

% Compute the cost function over a grid of (theta0, theta1) values
J_vals = zeros(100, 100);   % initialize J_vals to a 100x100 matrix of zeros
theta0_vals = linspace(-3, 3, 100);
theta1_vals = linspace(-1, 1, 100);
for i = 1:length(theta0_vals)
    for j = 1:length(theta1_vals)
        t = [theta0_vals(i); theta1_vals(j)];
        J_vals(i,j) = 1/(2*m) * (x*t - y)' * (x*t - y);
    end
end

% Because of the way meshgrids work in the surf command, we need to
% transpose J_vals before calling surf, or else the axes will be flipped
J_vals = J_vals';
figure;
surf(theta0_vals, theta1_vals, J_vals)
xlabel('\theta_0'); ylabel('\theta_1')

% Contour plot: 15 contours spaced logarithmically between 0.01 and 100
figure;
contour(theta0_vals, theta1_vals, J_vals, logspace(-2, 2, 15))
hold on
plot(theta_array(1,:), theta_array(2,:), 'r')   % overlay the theta iterates on the contours
xlabel('\theta_0'); ylabel('\theta_1')          % \theta_0 renders as a subscript; only single digits 0-9 work this way

I debugged this code for a very long time, but I finally got it working; the next time I need gradient descent for a regression problem, I won't have to worry. My next step is to implement gradient descent in C++.

Summary: in machine learning we usually distinguish supervised from unsupervised learning. Supervised learning further splits into classification and regression: a classifier's output is discrete, while a regressor's output is continuous. The cost function that gradient descent minimizes here is the squared error J(theta) = 1/(2*m) * (x*theta - y)' * (x*theta - y), exactly the quantity the grid loop above evaluates.
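For reference, the vectorized line grad = x'*(x*theta - y) is the standard batch gradient of that squared-error cost. With X the m-by-2 design matrix built above, each iteration performs

\nabla_\theta J(\theta) = \frac{1}{m} X^\top (X\theta - y), \qquad \theta \leftarrow \theta - \frac{\alpha}{m} X^\top (X\theta - y)

which is why the per-sample accumulation left in the comments collapses into a single matrix product.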
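Since the stated next step is a C++ port, here is a minimal sketch of what that might look like. This is an illustrative version of batch gradient descent under the same settings as the script above (alpha = 0.07, 1500 iterations, one feature plus an intercept), not the author's implementation; only the data file names ex2x.dat / ex2y.dat are taken from the original.

// gd.cpp -- batch gradient descent for y = theta0 + theta1 * x
#include <cstddef>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Load the training data (same files as the Octave script).
    std::vector<double> x, y;
    std::ifstream fx("ex2x.dat"), fy("ex2y.dat");
    for (double v; fx >> v;) x.push_back(v);
    for (double v; fy >> v;) y.push_back(v);
    if (x.empty() || x.size() != y.size()) {
        std::cerr << "bad or missing data files\n";
        return 1;
    }

    const std::size_t m = x.size();   // number of training examples
    const double alpha = 0.07;        // learning rate
    const int n = 1500;               // number of iterations
    double theta0 = 0.0, theta1 = 0.0;

    for (int k = 0; k < n; ++k) {
        // Accumulate the gradient of J over all samples:
        //   grad0 = sum(h(x_i) - y_i),  grad1 = sum((h(x_i) - y_i) * x_i)
        double grad0 = 0.0, grad1 = 0.0;
        for (std::size_t i = 0; i < m; ++i) {
            const double err = theta0 + theta1 * x[i] - y[i];
            grad0 += err;
            grad1 += err * x[i];
        }
        // Simultaneous update, matching theta = theta - alpha*(1/m)*grad
        theta0 -= alpha * grad0 / m;
        theta1 -= alpha * grad1 / m;
    }

    std::cout << "theta0 = " << theta0 << ", theta1 = " << theta1 << '\n';
    return 0;
}

Compiling with g++ -O2 gd.cpp -o gd and running it in the same directory as the data files should print theta values matching the Octave run, since both perform the same 1500 batch updates.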

Reposted from: http://tfpti.baihongyu.com/
