Why is that not in the current one? Is it hard to make that happen, asking from a coding point of view?
Depends on whether you want the BARS stuff to contain useful information.
If you just want a pretty picture, that would be relatively easy to do - but it could change each time you looked at the wagon wheel, and the system would probably break down when half of an AI batsman's innings is randomly generated and half represents what's actually been happening while you're down at the non-striker's end.
If you want to use it to, say, get a lead on the line and length the bowler has been concentrating on (or conversely, where the batsman has been scoring most of his runs), you'd need the system to generate full results for each delivery.
How easy that would be comes down to how Big Ant have done the AI vs AI stuff - and how tied in with the animations it is. One effect (assuming they _can_ decouple it from the animations) would be to make simulation take longer - potentially a lot longer, because you'd also need to simulate where the field is for every delivery. It would also increase the size of the save files, and possibly the memory requirements.
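To make that concrete, here's a rough sketch of the kind of per-delivery record the simulation would have to generate and store for the BARS data to carry any real information. This is purely illustrative - the names and fields are my assumptions, not anything from Big Ant's actual engine or save format:

```python
# Illustrative sketch only - not Big Ant's code. Shows the sort of data
# a "full results" simulation would need to produce for every delivery.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DeliveryResult:
    over: int
    ball: int
    line: str               # e.g. "off", "middle", "leg"
    length: str             # e.g. "full", "good", "short"
    shot_angle_deg: float   # direction off the bat, 0 = straight down the ground
    carry_m: float          # how far the shot travelled
    runs: int
    # (x, y) position of each fielder at the moment of delivery
    field_positions: List[Tuple[float, float]] = field(default_factory=list)

def wagon_wheel(deliveries: List[DeliveryResult]) -> List[Tuple[float, float, int]]:
    """Turn stored per-delivery results into wagon-wheel segments: (angle, distance, runs)."""
    return [(d.shot_angle_deg, d.carry_m, d.runs) for d in deliveries if d.runs > 0]

# Example: two simulated deliveries from an AI vs AI innings
innings = [
    DeliveryResult(1, 1, "off", "good", 35.0, 52.0, 4),
    DeliveryResult(1, 2, "middle", "full", 0.0, 0.0, 0),
]
print(wagon_wheel(innings))  # [(35.0, 52.0, 4)]
```

Multiply a record like that by every ball of a simulated innings, with a full set of fielder positions each time, and you can see where the extra simulation time, save-file size and memory would come from.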
The thing is, we're not talking about human vs human interaction, so there's a limited amount of useful information in an AI vs AI wagon wheel in the first place - the batsman _isn't_ deciding to focus on playing straight because the ball's moving around, and the bowler _isn't_ deciding to bang them in hard because the AI batsman has a tendency to hook. So the kinds of inferences you can draw from a wagon wheel generated in the meat can't really be drawn from one you might get by simulating here.
If it's just a pretty picture, it might as well be the same every time ...