I put a counter on a packaging machine at work that measures the feet of film traveling through the machine. The machine creates pouches of liquid. The film itself is 18" wide, and each "stroke" creates six 3"-wide packages. The packages are all supposed to be 3.5" in length. So my thought was: # of packages (z) * 3.5" (y) / 12 (inches per foot) = total film used in feet (x).
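To make that formula concrete, here it is as a quick Python sketch (the function name and the example pack count are mine, just for illustration):

```python
def expected_feet(z, pack_length_in=3.5):
    """Expected film footage if every pack uses exactly pack_length_in of film."""
    return z * pack_length_in / 12  # 12 inches per foot

print(expected_feet(318600))  # pack count quoted later in the post -> 92925.0 feet
```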
It seemed pretty simple, but after plugging in data from the counter, my x was significantly larger than I had expected. At that point I decided I wanted to know how much film, on average, each package used in excess of 3.5". This is where I got stuck.
m - x = n (measured feet minus x from above = excess feet)
n / p = o (excess feet divided by measured packs produced = excess feet per pack)
o * 12 = i (excess inches per pack)
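Here are those three steps as a small Python sketch (the function and the example numbers are mine, just for illustration; x is the expected footage from the first formula):

```python
def excess_inches_per_pack(m, x, p):
    """m: measured feet, x: expected feet from the formula above, p: packs produced."""
    n = m - x      # excess feet
    o = n / p      # excess feet per pack
    return o * 12  # excess inches per pack
```

For example, excess_inches_per_pack(100, 90, 10) spreads 10 excess feet over 10 packs, giving 12 excess inches per pack.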
So to me, the original 3.5" plus (i) was the "average" amount of film used on each package; however, my boss disagrees.
He feels it should be:
m / p = o
o * 12 = i
At this point, with the same data, we arrive at different values of i. From there he multiplies (i) by the number of packs produced in each stroke (6) and then divides by 2, since two sides of film are used. This arrives at a number representing the length, in inches, of each package.
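His version as a Python sketch (the function name and parameter names are mine):

```python
def his_pack_length_in(m, p, packs_per_stroke=6, sides=2):
    """m: measured feet, p: packs produced; returns his inches-per-package figure."""
    o = m / p                            # feet per pack
    i = o * 12                           # inches per pack
    return i * packs_per_stroke / sides  # his *6 and /2 adjustments
```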
I don't think there is any reason to worry about the width in our analysis, and I don't see the point in worrying about the two sides of film, since the measured feet of film already accounts for that. But how can I prove which equation is right?
From a logical standpoint both of our answers are plausible. Using the data (32509 measured feet and 318600 packs produced), my analysis gives i = 0.0578 (3.5578 inches per pack) and his gives i = 1.224 (3.67 inches per pack after his *6 and /2 adjustments).
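For concreteness, here is both pieces of arithmetic in Python. One caveat: the plain x = z * 3.5 / 12 formula from the top gives an x larger than m with these numbers, so to reproduce my 0.0578 figure I've assumed x was computed with the 6-packs-per-stroke and 2-sides factors folded in; treat that x line as my assumption, not something stated above.

```python
m, p = 32509, 318600  # measured feet, packs produced

# My analysis (x assumed to include the /6 strokes and *2 sides factors):
x = p * 3.5 * 2 / (6 * 12)       # expected feet (= 30975.0 here)
i_mine = (m - x) / p * 12
print(round(i_mine, 4))          # 0.0578 -> 3.5578 inches per pack

# His analysis:
i_his = m / p * 12
print(round(i_his, 3))           # 1.224
print(round(i_his * 6 / 2, 2))   # 3.67 inches per pack
```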
Thanks for any thoughts, even if it's "this needs to be moved to less than pre-algebra."