Hi All

I'm not sure if this qualifies as an "advanced" question. It's challenging for me, and I studied maths at university (although admittedly it wasn't my strongest subject). Heh...

Here's the problem:

I'm in the process of regularly scraping a range of online deal pages.
Each time the scraper picks up a new/unique deal, an item representing it is created in the database.

Every time the scraper runs, it also records the revenue generated so far (i.e. buy_count * cost) for each new or existing item, along with the time of the observation, and stores these snapshots on the deal in the following format:

[ {buy_count1 * cost, time1}, {buy_count2 * cost, time2}, {buy_count3 * cost, time3} ]
The question is: how would I rank all deals by popularity, given each deal's series of buy_count * cost / time data points?
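To make the data concrete, here's a minimal sketch (all names and numbers hypothetical) of the kind of snapshot series I have per deal, plus the naive "average revenue per hour over the deal's lifetime" metric I could compute from it:

```python
from datetime import datetime

# Hypothetical snapshots for one deal: (revenue, timestamp) pairs,
# one per scraper run. Revenue here is buy_count * cost at that moment.
snapshots = [
    (120.0, datetime(2023, 5, 1, 9, 0)),
    (180.0, datetime(2023, 5, 1, 13, 0)),
    (260.0, datetime(2023, 5, 1, 21, 0)),
]

def naive_rate(snaps):
    """Average revenue gained per hour between the first and last snapshot."""
    (first_rev, first_t), (last_rev, last_t) = snaps[0], snaps[-1]
    hours = (last_t - first_t).total_seconds() / 3600
    return (last_rev - first_rev) / hours

print(round(naive_rate(snapshots), 2))  # (260 - 120) over 12 hours -> 11.67
```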

My understanding is that simple averages would favour new deals over old ones, because older deals have lived through periods of low traffic (e.g. night time) that drag their average rate down, while a new deal launched during peak hours looks artificially hot.

Any help is much appreciated.