Can Big(O) be affirmed by measurement?


Let's say I have designed an algorithm that I think runs in O(n). If I measure the time it takes to run on 1,000 inputs, then increase the input size 10x and measure again, can I infer that O(n) is correct if the run time is roughly 10 times the first try?

How naive would that be? Obviously repeating the test would give better accuracy, but I want to know whether this makes sense at all.
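For what it's worth, here is a minimal sketch of that measurement in Python, assuming a hypothetical my_algorithm(n) as a stand-in for the algorithm under test (the name and its body are illustrative, not part of the question):

import timeit

def my_algorithm(n):
    # Hypothetical stand-in for the algorithm under test; summing n integers is O(n).
    return sum(range(n))

def measure(n, number=100, repeats=5):
    # Best of several repeats of `number` calls each, to reduce noise from other processes.
    return min(timeit.repeat(lambda: my_algorithm(n), number=number, repeat=repeats)) / number

t1 = measure(1_000)
t2 = measure(10_000)
print(f"10x the input -> time grew by a factor of {t2 / t1:.1f}")
# A factor near 10 is consistent with O(n); near 13 hints at O(n.log n); near 100 at O(n^2).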

Often, the answer is 'yes'. If you increase the problem size by a factor of 10 and the time also goes up by a factor of 10, you are probably correct to assume O(n). However, the number is unlikely to be quite that beautiful.

If you go from 1,000 to 10,000, an O(n.log n) algorithm goes up by a factor of about 13 (see the bc calculation below). That's not far off 10, and you might mistakenly think that an increase of 12 indicates O(n.log n) rather than O(n). However, if you increase the size by 10 and the time goes up by a factor of 100, you're probably dealing with a non-linear algorithm, most likely O(n^2). So two data points are not enough, but they are indicative. Multiple runs and more data points both help.
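To make "more data points" concrete, here is a sketch (again using hypothetical stand-in names, make_input and my_algorithm) that times several sizes and estimates the growth exponent from a least-squares fit on a log-log scale; a slope near 1 suggests linear, near 2 quadratic, with O(n.log n) typically landing a little above 1:

import math
import random
import timeit

def make_input(n):
    # Hypothetical helper: build a shuffled list of size n outside the timed region.
    data = list(range(n))
    random.shuffle(data)
    return data

def my_algorithm(data):
    # Hypothetical stand-in for the algorithm under test; sorting is roughly O(n.log n).
    return sorted(data)

def measure(n, number=20, repeats=5):
    data = make_input(n)
    return min(timeit.repeat(lambda: my_algorithm(data), number=number, repeat=repeats)) / number

sizes = [1_000, 3_000, 10_000, 30_000, 100_000]
times = [measure(n) for n in sizes]

# Least-squares slope of log(time) against log(n) estimates the exponent.
xs = [math.log(n) for n in sizes]
ys = [math.log(t) for t in times]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"estimated exponent ~ {slope:.2f}")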

Sometimes, though, something else kicks in. For example, the program might start using so much memory that it gets paged instead of simply running. It will slow down dramatically, even though the algorithm is still linear given enough resources.

Also, beware of caching effects and optimization effects. Caching can make things seem faster than they are, and if the optimizer concludes that the result of a calculation is ignored, it might eliminate the entire calculation. You have to be cautious.
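A sketch of the usual precautions, assuming you warm the code up before timing and keep the results so the work cannot be skipped (the Python interpreter rarely eliminates "unused" work, but the same habits matter in compiled languages, where the optimizer will):

import timeit

def my_algorithm(n):
    # Hypothetical stand-in for the algorithm under test.
    return sum(i * i for i in range(n))

# Warm-up run: lets caches and interpreter state settle before timing starts.
sink = my_algorithm(10_000)

# Keep every result so the calculation cannot be treated as dead code.
results = []
t = min(timeit.repeat(lambda: results.append(my_algorithm(10_000)),
                      number=10, repeat=5)) / 10

# Print something derived from the results; in a compiled language this (or a
# volatile sink) is what stops the optimizer from removing the calculation.
print(t, sink + sum(results))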

But, with a bit of luck, you can scale the problem across a few orders of magnitude (or, at least, a few different sizes) and come up with a reasonable guess as to whether it is linear or something else.

O(n.log n): 1,000 vs 10,000

$ bc -l
n=1000
n*l(n)
6907.75527898213705205000
a=n*l(n)
m=n*10
m*l(m)
92103.40371976182736070000
b=m*l(m)
b/a
13.33333333333333333333
quit
$
