Basically, if you have an unconstrained PDF, you can select a subset of its support and use that as your new probability space (you'll usually have to renormalize it so the total probability is 1 again).
If the PDF is defined everywhere, what usually happens is you restrict the domain. For example, you might have a normal distribution with the constraint X >= 0 (inside the allowed region it still behaves, up to scaling, like the unconstrained density). In that case you mask out the negative values, renormalize, and that becomes your distribution.
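A minimal sketch of the mask-and-renormalize step, assuming a standard normal truncated at X >= 0 (scipy's `truncnorm` does the same thing directly):

```python
import numpy as np
from scipy.stats import norm, truncnorm

# Mask out the disallowed region and renormalize by the surviving mass.
# For a standard normal cut at X >= 0, that mass is P(X >= 0) = 0.5.
mass = 1 - norm.cdf(0)               # normalizing constant
x = np.linspace(0, 4, 200)
truncated_pdf = norm.pdf(x) / mass   # renormalized density on [0, inf)

# scipy has this built in; a and b are bounds in standard-normal units.
rv = truncnorm(a=0, b=np.inf)
```

Dividing by the surviving mass is exactly the renormalization mentioned above; `rv.pdf(x)` should agree with `truncated_pdf`.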
You also have what I call "conditional slicing". All this means is that in a joint distribution you fix one variable and get the distribution of the other given that fixed value. Mathematically it looks like P(Y | X = x), or P(Y | X < x), or something similar. You don't have to fix a single value either: you can condition on a range, like P(Y | a < X < b).
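Here's a small sketch of conditional slicing on a discrete joint PMF (the table values are made up for illustration):

```python
import numpy as np

# joint[i, j] = P(X = i, Y = j); the whole table sums to 1.
joint = np.array([[0.10, 0.20, 0.10],
                  [0.05, 0.25, 0.30]])

# P(Y | X = 0): slice out the X = 0 row, then renormalize by P(X = 0).
p_y_given_x0 = joint[0] / joint[0].sum()

# Conditioning on a set, like P(Y | X in {0, 1}), is the same idea:
# sum the selected rows first, then renormalize by their total mass.
selected = joint[[0, 1]].sum(axis=0)
p_y_given_xset = selected / selected.sum()
```

The slicing picks the subset of the probability space, and the division is the renormalization, same recipe as the truncation case.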
So in short: you start with some unconstrained density function (possibly over many variables), select a subset of that probability space, and, if you want to treat the result as a PDF, normalize it. It then becomes its own distribution, and you can do all the things you'd do with any other distribution: expectation, variance, cumulative probability, whatever.
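To illustrate that last point, a quick sketch using scipy's `truncnorm` (standard normal cut at X >= 0): once renormalized, the usual quantities just work.

```python
import numpy as np
from scipy.stats import truncnorm

# The half-normal: a standard normal restricted to X >= 0 and renormalized.
rv = truncnorm(a=0, b=np.inf)

mean = rv.mean()   # sqrt(2/pi) for the half-normal
var = rv.var()     # 1 - 2/pi
p = rv.cdf(1.0)    # P(X <= 1 | X >= 0)
```

Note how the mean shifts from 0 to sqrt(2/pi) and the variance shrinks below 1: truncation changes the moments, which is why you compute them from the new distribution rather than the original one.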