We can do this in a straightforward manner, but let's borrow Soroban's trick — I think it makes things a touch clearer:
Let $a = \lim_{x \to \infty} \cos(x)$. Then $\lim_{x \to \infty} \dfrac{\cos(x)}{x} = \lim_{x \to \infty} \dfrac{a}{x}$.
Now, even though $\lim_{x \to \infty} \cos(x)$ doesn't exist, we know that the cosine function only varies between -1 and 1. So we can pretend that the limit does exist and is some finite number $a$ between -1 and 1. BUT $\lim_{x \to \infty} \dfrac{a}{x} = 0$ for any finite $a$. Thus:

$\lim_{x \to \infty} \dfrac{\cos(x)}{x} = 0$
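The argument is easy to sanity-check numerically. Here's a small Python sketch, assuming the limit in question is $\lim_{x \to \infty} \cos(x)/x$ (my reading of the problem):

```python
import math

# Numeric sanity check, assuming the limit is lim_{x -> oo} cos(x)/x.
# Since -1 <= cos(x) <= 1, the quotient is squeezed: -1/x <= cos(x)/x <= 1/x,
# and both bounds go to 0, so the quotient must as well.
for x in (1e2, 1e4, 1e6, 1e8):
    value = math.cos(x) / x
    assert abs(value) <= 1 / x  # squeezed between -1/x and 1/x
    print(f"x = {x:.0e}  cos(x)/x = {value: .3e}")
```

The printed values shrink toward 0 no matter how cos(x) itself oscillates.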
1. Translation. Given the graph of y = f(x), we know that the graph of y' = f(x - h) is a translation of the original graph by h units to the right. Similarly y' = f(x + h) is a translation of the original graph by h units to the left.
2. Dilation (aka "stretching" and "shrinking"). Given the graph of y = f(x), we know that the graph of y' = f(ax) for a > 1 "shrinks" the x-axis by a factor of a. (The graph becomes thinner.) For y' = f(ax) with 0 < a < 1, the x-axis is "stretched" by a factor of 1/a. (The graph becomes wider.)
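Both rules are easy to check numerically. A quick Python sketch (the helper names `translate` and `dilate` are my own, not standard):

```python
import math

def translate(f, h):
    # y = f(x - h): the graph of f shifted h units to the right (for h > 0)
    return lambda x: f(x - h)

def dilate(f, a):
    # y = f(a*x): the x-axis "shrunk" by a factor of a when a > 1
    return lambda x: f(a * x)

f = math.sin
x0, h, a = 0.5, 2.0, 3.0

# The feature f has at x0 shows up in the translated graph at x0 + h ...
assert math.isclose(translate(f, h)(x0 + h), f(x0))
# ... and in the dilated graph at x0 / a (everything is pulled toward the y-axis).
assert math.isclose(dilate(f, a)(x0 / a), f(x0))
print("translation and dilation checks pass")
```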
For y = cos(3x), take the graph of y = cos(x) and shrink the axis by a factor of 3. That is, instead of the function having a period of $2\pi$ it will now have a period of $\frac{2\pi}{3}$, so the interval $[0, 2\pi]$ now represents 3 oscillations.
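A quick way to convince yourself of the period claim, using nothing beyond the standard library:

```python
import math

# cos(3x) repeats every 2*pi/3, so [0, 2*pi] contains exactly 3 oscillations.
period = 2 * math.pi / 3
for x in (0.0, 0.7, 1.9, 5.2):
    # shifting the input by one period leaves the value unchanged
    assert math.isclose(math.cos(3 * (x + period)), math.cos(3 * x), abs_tol=1e-9)
print(f"period of cos(3x) = {period:.6f}, oscillations on [0, 2*pi] = {2 * math.pi / period:.0f}")
```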
For the other problem — say the target is y = sin(2x + c), writing c for the phase shift — you could simply take the graph of y = sin(x), shrink the x-axis by a factor of 2, then translate the graph c/2 units to the left, since $\sin(2x + c) = \sin\big(2(x + \tfrac{c}{2})\big)$. However, I personally prefer to do my dilations last, so I'd do the following:
So we first want to take the graph of y = sin(x) and translate it c units to the left, giving sin(x + c), then shrink the x-axis by a factor of 2, giving sin(2x + c).
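Both orderings produce the same graph, which is easy to confirm numerically. In the sketch below, c is a hypothetical phase shift standing in for whatever value the original problem used:

```python
import math

c = math.pi / 2  # hypothetical phase; stands in for the shift from the original problem
target = lambda x: math.sin(2 * x + c)

# Route 1: shrink the x-axis by 2 first, then translate c/2 units left.
shrunk = lambda x: math.sin(2 * x)
route1 = lambda x: shrunk(x + c / 2)      # sin(2(x + c/2)) = sin(2x + c)

# Route 2 (dilations last): translate c units left, then shrink by 2.
translated = lambda x: math.sin(x + c)
route2 = lambda x: translated(2 * x)      # sin(2x + c)

for x in (-2.0, -0.3, 0.0, 1.1, 4.0):
    assert math.isclose(route1(x), target(x), abs_tol=1e-12)
    assert math.isclose(route2(x), target(x), abs_tol=1e-12)
print("both orderings produce sin(2x + c)")
```

Note the asymmetry: when you dilate first, the required shift is halved, which is exactly why I prefer doing dilations last.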