Week of 4-22 to 4-28

I am now on the home stretch of this textbook: just four sections left in Chapter 14 and I've gotten through the whole thing.

Section 14.3 is about finding zeros of functions. The author gives a CliffsNotes version of the theory behind zero-hunting for an arbitrary function: guess, test the derivative to see which direction moves you closer to 0, and guess again until the function changes sign as you cross the axis, then repeat the process with ever smaller changes in x until you reach the desired accuracy. MATLAB has the 'roots' function for polynomials, but there is no exact method for finding the zeros of an arbitrary function, so MATLAB uses a more sophisticated approximation process in the command 'fzero'. To call 'fzero', one needs the function stored in a MATLAB function and a second parameter that is either an initial guess or a 1x2 matrix giving a range over which the function changes sign. Besides finding zeros, 'fzero' can also find the intersection points of two functions by entering their difference, one minus the other.

Section 14.4 is about minimization. The MATLAB function 'fminbnd' finds a local minimum of a function stored in a MATLAB function over a given range of x values. One danger of this function is that it cannot discern whether there is more than one local minimum in the range - it just spits back the first one it finds. An additional command, 'fminsearch', finds the minimum nearest a supplied guess. It works in as many dimensions as one desires, given that the guess is an array with one value per input variable. The author then provides an example of a double Gaussian curve and a program that recovers it from a group of points scattered slightly around it, which is frankly beyond my understanding and not worth my time to attempt to comprehend.
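The 'fzero' workflow described above can be sketched in a few lines (the cos(x) example is my own, not the book's):

```matlab
% Find where cos(x) crosses zero near x = 1.
f = @(x) cos(x);
x0 = fzero(f, 1);       % single initial guess; returns pi/2, about 1.5708
% Or supply a 1x2 bracket over which the function changes sign:
x1 = fzero(f, [1 2]);   % cos is positive at 1, negative at 2

% Intersection of two curves: find the zero of their difference.
g = @(x) cos(x) - x;    % where does cos(x) meet y = x?
xc = fzero(g, 0.5);     % about 0.7391
```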
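Likewise, a minimal sketch of 'fminbnd' and 'fminsearch' (the example functions are mine, chosen so the answers are obvious):

```matlab
% Local minimum of a 1-D function over a range:
h = @(x) x.^2 + 2*x;            % minimum at x = -1
xmin = fminbnd(h, -5, 5);       % about -1

% Multi-dimensional minimization from an initial guess;
% the guess array has one value per input variable.
r = @(v) (v(1) - 3)^2 + (v(2) + 1)^2;   % minimum at (3, -1)
vmin = fminsearch(r, [0 0]);            % about [3 -1]
```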
The point it proves is that 'fminsearch' can find the curve by minimizing the root-mean-square error: it plugs its guesses into the fit parameters and searches for the optimal combination of six values that minimizes the RMS. Yet another capability of MATLAB's minimization is the ability to solve a system of equations. The book goes through a process in which each equation is rearranged to equal zero and then a new function is made. This new function is the sum of the squares, a^2 + b^2, where a is the first equation set to 0 and b is the second equation set to 0. The idea is that if both equal zero, this new expression equals zero, and that is the only time it possibly can equal zero. Therefore one can use MATLAB to minimize this new function and thus find the pair of x and y which solves the system of equations.

Section 14.5 is about solving differential equations with MATLAB. I will be honest, the author totally lost me through most of this section. Once again, I do not believe it useful to wring out all my brainpower on this, because it is not relevant to what I will be doing in the rest of this class. If by some crazy unfortunate circumstance it is, I will surely spend a big chunk of time attempting to digest it. For now, I will stop at a very basic summary. Apparently MATLAB has loads of commands for solving different kinds of differential equations, all beginning with 'ode' followed by various numbers and sometimes letters. The core function the book suggests is 'ode45'. The author then explains that, as far as I can tell, the way to solve an ordinary differential equation is to cleverly set it up as a two-equation first-order system and run the chosen 'ode' command.

Finally, section 14.6 addresses eigenvalues and eigenvectors. I had never heard of either of them before, but they seem like straightforward enough concepts (although the math behind them can clearly get crazy).
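The sum-of-squares trick for solving a system can be sketched like this (the example system is my own invention, not the book's):

```matlab
% Solve x + y = 3 and x - y = 1 by minimizing a^2 + b^2,
% where a and b are the two equations rearranged to equal zero.
a = @(v) v(1) + v(2) - 3;
b = @(v) v(1) - v(2) - 1;
F = @(v) a(v)^2 + b(v)^2;     % zero only when both a and b are zero
sol = fminsearch(F, [0 0]);   % about [2 1], i.e. x = 2, y = 1
```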
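Backing up to section 14.5 for a moment, here is a minimal sketch of the 'ode45' setup as I understand it (the harmonic-oscillator example is mine):

```matlab
% Solve the second-order ODE y'' = -y (simple harmonic motion)
% by rewriting it as a two-equation first-order system:
%   y1' = y2,  y2' = -y1
odefun = @(t, y) [y(2); -y(1)];
[t, y] = ode45(odefun, [0 2*pi], [1; 0]);   % y(0) = 1, y'(0) = 0
% y(:,1) traces cos(t); y(:,2) traces its derivative, -sin(t).
```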
An eigenvector of a matrix is a vector that gives back a scalar multiple of itself when multiplied by that matrix, and the eigenvalue paired with a given eigenvector is the scalar it gets stretched by. Essentially, the matrix * the eigenvector = the eigenvalue * the eigenvector. In MATLAB, 'eig' returns two matrices that together contain all the eigenvectors and eigenvalues of the matrix in question. In the first, each column is an eigenvector; the second is a matrix in which all elements are 0 except the main diagonal, which holds the eigenvalues. The 'diag' function can be used to extract the eigenvalues from that matrix. I doubt I'll ever use this; it was at least interesting to learn about, though not worth exploring further. That concludes the textbook. My next post will begin my next project, likely the minor improvements on the CMS GUI I planned a while back.
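Before I go, for reference, the 'eig' call described above looks like this (the matrix is my own toy example):

```matlab
A = [2 0; 0 3];
[V, D] = eig(A);     % columns of V are eigenvectors
lambda = diag(D);    % extract the eigenvalues: [2; 3]
% Check the defining property: A * v = lambda * v for each pair.
err = A*V(:,1) - lambda(1)*V(:,1);   % = [0; 0]
```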