The enduring issue is what makes a player 'feel' good or bad (realistic or unrealistic) when the user is controlling them, and similarly what makes them 'good' or 'bad' when the AI is in control versus the user. Certain skills seem to have too much impact on gameplay (150 metre sixes), others don't, and for some it's difficult to tell without extensive testing. You'd need to track player performance under AI control and then different users' performances with those same players to try to narrow down what constitutes a realistic performance within some bounds of acceptability; e.g. we want Steve Smith averaging within -5/+5 of his real averages and within -10/+10 of his strike rates.
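That acceptability check could be sketched out very simply. This is just an illustrative snippet, not anything from the game: the player name, the real-life figures, and the helper function are all made up, and the tolerances are the -5/+5 average and -10/+10 strike rate bands from the example above.

```python
# Hypothetical sketch of an "acceptability band" check for in-game stats.
# Real-life figures here are illustrative placeholders, not actual data.

REAL_STATS = {
    # name: (batting average, strike rate) -- placeholder numbers
    "Steve Smith": (56.0, 54.0),
}

def within_bounds(name, game_average, game_strike_rate,
                  avg_tol=5.0, sr_tol=10.0):
    """True if the in-game numbers sit inside the acceptability band
    around the player's real-life average and strike rate."""
    real_avg, real_sr = REAL_STATS[name]
    avg_ok = abs(game_average - real_avg) <= avg_tol
    sr_ok = abs(game_strike_rate - real_sr) <= sr_tol
    return avg_ok and sr_ok

# An in-game average of 58 at a strike rate of 60 would pass;
# an average of 70 would fall outside the +/-5 band and fail.
```

You'd then run this over a big sample of AI-controlled and user-controlled innings and see which skills push players outside their bands.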
I remember from my lockdown tournament I did a deep dive on the data that it pumped out, and there were glaring differences between the game and real life. Openers had too much dominance in team totals, tail enders often averaged higher than in real life, and teams wouldn't lose as many wickets. But that was just an AI v AI tournament, so there was no comparison for how those players perform when controlled by a user, or as opposition to one. Even then you have to factor in that some people will 'role-play' as though it's real cricket while others will try to 'beat the game' and find every possible advantage, which could nullify the AI anyway. Then you have difficulties and sliders, which I've extensively rattled on about, complicating this sort of thing.
It's why, when it comes to skills, I simply go for balance in the hope that the world's best are at least comparatively skilled and there would hopefully be a noticeable difference between them and the worst players.
I don't think this is exclusively a Big Ant issue. When I used to play PES I'd always want players with high pace, because it was such an advantage that many other skills could be overlooked.