Could someone please explain how Big O is derived from code — that is, how an algorithm's running time gets expressed as functions like f(n) and g(n), and analyzed mathematically, including where logarithms come in?

It's a new topic for me, so I'm quite lost and would appreciate some help!

Additionally, I'm looking at Bubble Sort and Quick Sort, but I only have a code example for each — I don't know how to explain them from a mathematical perspective. I'd like to start by analyzing those examples and then (using diagrams) discuss how they differ. But I'm entirely lost!
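For context, the examples I have are roughly along these lines (a minimal Python sketch — my actual code may differ slightly):

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs.
    The nested loops do on the order of n*n comparisons."""
    a = list(a)                      # work on a copy
    n = len(a)
    for i in range(n - 1):           # after pass i, the last i+1 items are sorted
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    """Partition around a pivot, then sort each side recursively.
    On average the recursion is about log(n) levels deep."""
    if len(a) <= 1:                  # base case: already sorted
        return list(a)
    pivot = a[len(a) // 2]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

print(bubble_sort([5, 3, 8, 1]))   # [1, 3, 5, 8]
print(quick_sort([5, 3, 8, 1]))    # [1, 3, 5, 8]
```

So what I'm after is how to go from code like this to statements like "Bubble Sort is O(n²)" and "Quick Sort is O(n log n) on average".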

Please help! (Step-by-step explanations are really helpful, since I'm a relatively slow learner and don't 'see' things immediately.)