The obvious thing to do is to treat each month separately. That is, divide January's counts by 31, February's by 28 (29 in leap years), March's by 31, and so on, so each day's count is expressed relative to the number of days that month actually has.
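One way to sketch this idea: instead of dividing by a fixed month length, divide each day-of-month's total count by the number of month-instances in your data that actually contain that day (so day 31 is only averaged over months that have a 31st). This is a minimal sketch assuming your records expose a `datetime.date` per payment; the function name and input shape are my own, not from the original post.

```python
import calendar
from collections import Counter

def normalized_day_counts(dates):
    """Average payments per day-of-month, normalized by how many
    month-instances in the data actually contain each day.

    `dates` is an iterable of datetime.date objects (an assumption;
    adapt this to however your records store their payment dates).
    """
    dates = list(dates)
    raw = Counter(d.day for d in dates)

    # Length of each (year, month) present in the data, so that
    # day 31 is divided only by months that actually have a 31st.
    month_lengths = {
        (d.year, d.month): calendar.monthrange(d.year, d.month)[1]
        for d in dates
    }

    return {
        day: raw[day] / sum(1 for n in month_lengths.values() if n >= day)
        for day in range(1, 32)
        if any(n >= day for n in month_lengths.values())
    }
```

With this, a vendor who cuts a check every single day yields a flat line of 1.0 across all 31 points, instead of a dip at days 29-31.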
I'm not sure of the best way to describe this, unfortunately. Basically, I have a large data set and I want to graph a count of the number of records for each day of the month. The line graph will have 31 points, one for each day of the month. However, since February has only 28 days, and only seven of the twelve months have a 31st at all, the 31st point on the graph will look artificially low relative to a linear trend. For context, these are payments from vendors. Some vendors cut checks every day, some on Mondays only, some on the 1st and 15th of the month only, and some on the 15th and the last day of the month (which could be the 28th, 30th, or 31st, which is why this matters).
Can anyone suggest an easy way to "normalize" these counts so that days like the 30th and 31st don't show up as false highs or lows?