The second derivative and second partial derivatives are important in optimization problems because they tell you whether a critical point (a point where the first derivatives equal zero) is a local maximum, a local minimum, or neither. For example, for the function x^3,
the critical point at x = 0 is neither a local min nor a local max. The second derivative is 6x, which is 0 at x = 0, so the test is inconclusive. If the second derivative were negative, the critical point would be a local max, and if it were positive, a local min. The reason is that the second derivative tells you how the first derivative (the slope) is changing. If the second derivative is negative, the slope is decreasing, so you have a hill with a maximum. If the second derivative is positive, the slope is increasing, so you have a valley with a minimum. For the function x^3, the slope is 3x^2, which decreases to 0 and then increases again around the critical point without ever becoming negative, so you have neither a hill nor a valley.
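The one-variable test above can be sketched in a few lines of Python. This is a minimal illustration, not a general-purpose routine; the function name `classify_critical_point` is our own, and we plug in the second derivatives by hand (6x for x^3, 2 for x^2, -2 for -x^2).

```python
def classify_critical_point(second_deriv_at_point):
    """Classify a critical point from the sign of the second derivative."""
    if second_deriv_at_point > 0:
        return "local min"    # slope is increasing: a valley
    if second_deriv_at_point < 0:
        return "local max"    # slope is decreasing: a hill
    return "inconclusive"     # test fails, e.g. f(x) = x**3 at x = 0

# f(x) = x**3: f''(x) = 6x, which is 0 at the critical point x = 0
print(classify_critical_point(6 * 0))   # inconclusive
# f(x) = x**2: f''(x) = 2 > 0, so x = 0 is a local min
print(classify_critical_point(2))       # local min
# f(x) = -x**2: f''(x) = -2 < 0, so x = 0 is a local max
print(classify_critical_point(-2))      # local max
```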
In two dimensions, the second partial derivatives play the same role. For example, here
is another case where there is a critical point that is neither a min nor a max. But in two dimensions, you have to look at the 2x2 matrix of second partial derivatives, called the Hessian matrix.
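The two-dimensional test can be sketched by examining the eigenvalues of the Hessian: all positive means a local min, all negative a local max, and mixed signs a saddle point (neither min nor max). The example function f(x, y) = x^2 - y^2 and the helper name `classify_hessian` are our own choices for illustration.

```python
import numpy as np

def classify_hessian(H):
    """Classify a critical point from the eigenvalues of the Hessian."""
    eigs = np.linalg.eigvalsh(H)   # Hessian is symmetric, so eigvalsh applies
    if np.all(eigs > 0):
        return "local min"
    if np.all(eigs < 0):
        return "local max"
    if np.any(eigs > 0) and np.any(eigs < 0):
        return "saddle point"      # neither a min nor a max
    return "inconclusive"          # some eigenvalue is zero

# For f(x, y) = x**2 - y**2: f_xx = 2, f_yy = -2, f_xy = f_yx = 0,
# and (0, 0) is a critical point.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(classify_hessian(H))   # saddle point
```

The mixed-sign eigenvalues capture the same intuition as before: the surface curves up along one axis (a valley) and down along the other (a hill) at the same point.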