Hello. I would really appreciate help answering the following questions and checking whether my thought process even makes sense.

Context:

- I have a population of 4219 items.

- Some unknown number (> 0) of the items were calculated incorrectly

- 30 items were randomly sampled, and all 30 were found to be correct

- There is no reason to assume the errors are clustered

My questions:

- Based on my sample of 30 error-free items, what is my current confidence that less than 1% of the population contains an error?

- What sample size would I need to be 95% confident that the error rate is less than 1%? Less than 5%?

- What sample size would generally be considered large enough to accurately represent the overall population error rate?
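For what it's worth, here is how I've been trying to sanity-check the first two questions myself, treating each sampled item as an independent draw with some fixed error rate p (a binomial approximation; since I'm sampling without replacement from 4219 items, the exact model would be hypergeometric, but for small samples the difference should be minor). The function name `n_needed` is just my own label:

```python
import math

def n_needed(max_error_rate, confidence):
    """Smallest all-clean sample size n such that observing n correct
    items would be unlikely (prob <= 1 - confidence) if the true error
    rate were actually max_error_rate or higher."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_error_rate))

# Confidence from my current 30 clean items that the error rate is < 1%:
# if p were exactly 1%, P(all 30 correct) = 0.99**30, so the "confidence"
# that p < 1% is only the complement of that.
conf_now = 1 - 0.99**30
print(f"current confidence p < 1%: {conf_now:.0%}")   # roughly 26%

print(n_needed(0.01, 0.95))  # clean sample needed for 95% conf p < 1%
print(n_needed(0.05, 0.95))  # clean sample needed for 95% conf p < 5%
```

If I've set this up right, 30 clean items only gets me about 26% confidence that the error rate is under 1%, and I'd need roughly 300 consecutive clean items for 95% confidence at the 1% threshold (about 60 at the 5% threshold). Please correct me if this framing is wrong.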

I am really looking to minimize manual checking of the data as much as possible, since it takes exceedingly long to do (5 minutes or more per item).

Appreciate any help.