Post Sunday, 10th January 2016, 12:26

Benchmarking CLUA

In order to benchmark a CLUA script of mine, I've tried taking timings from the crawl.millis() function on CBRO. Unfortunately, the results are, to put it bluntly, unusable* - either because of server load balancing or because of some quirk of the millis function itself. So, what would be the recommended way to benchmark code line by line? If these limitations are specific to webtiles, I'd assume the local version doesn't have them - can I slow the CLUA down somehow? (Locally, the results are in the 0-3 ms range, 7 ms total, which makes them useless too unless I can get finer resolution somehow; on CBRO, the same code takes ~3.5 s.)
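One standard workaround for a coarse or quantized timer is to run the code under test many times in a loop and divide the total elapsed time by the repetition count, so that even a millisecond-granularity clock gives a usable per-call figure. A minimal sketch of that idea in CLUA style, assuming crawl.millis() is callable in your context (it is DLUA, so this may require a local build or wizard mode) and using crawl.mpr() for output:

```lua
-- Hedged sketch: amortise timer granularity by repeating the snippet
-- under test. bench() and its arguments are hypothetical names.
local function bench(label, reps, fn)
  local start = crawl.millis()
  for i = 1, reps do
    fn()
  end
  local elapsed = crawl.millis() - start
  crawl.mpr(string.format("%s: %d ms total over %d reps, %.4f ms/call",
                          label, elapsed, reps, elapsed / reps))
end

-- Usage: wrap the code you want to measure in a closure.
-- bench("my_function", 1000, function() my_function() end)
```

The repetition count has to be large enough that the total comfortably exceeds the timer's tick size, otherwise you're just measuring quantization noise again.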

For reference, the code is here (probably still buggy; I haven't been able to test it while playing): http://pastebin.com/C7MVHfEp

(*: - Earlier, resource-heavy functions increase the reported time of multiple later ones from 0 to 1024 ms
- Everything seems to take either 0 ms or a substantial number, repeatable but "randomly" switching when the code changes only slightly
- Functions may add up in some weird way - improve the performance of one bit and the time it takes jumps to the next one
- etc.)

Furthermore, is there any reason why the millis function is DLUA-only? It's tied to the "real world", unlike CLUA functions, yes, but apart from that I don't really see a reason to prevent people from using it.

Edit: Also, is there a better place for questions like this? I find it weird that there's practically zero activity in this subforum. ##crawl-dev is very helpful when somebody is around, but it just doesn't work for some issues, and I don't want to spam it constantly. The people who developed the various bots must have had some kind of resource besides the code itself, right?

Edit 2: I'm a moron, it's not just the 1024 that's near 2^n - all the other values are, too; they just vary more. So why does millis only return those numbers?
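To check this pattern more systematically rather than by eyeballing, one could log-transform each measured value and see how close the exponent lands to an integer. A small diagnostic sketch (the timing values here are placeholders for whatever millis actually returned):

```lua
-- Hedged diagnostic: for each observed timing, compute its base-2
-- logarithm; values clustering near integers support the 2^n theory.
local timings = { 1024, 512, 130, 64 }  -- replace with your measurements
for _, t in ipairs(timings) do
  if t > 0 then
    local exp = math.log(t) / math.log(2)      -- log2(t)
    local nearest = math.floor(exp + 0.5)      -- closest integer exponent
    print(string.format("%d ms ~ 2^%.2f (nearest power: %d)",
                        t, exp, 2 ^ nearest))
  end
end
```

If the exponents consistently sit within a few hundredths of an integer, that would suggest the measured deltas are quantized rather than the code genuinely taking power-of-two times.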