There have been upscaling AIs for a few years now. They can take a blurry picture and, e.g., guess that some pixels are probably hair, then swap those out for a custom-rendered version of hair.
Sometimes that works well, but you often still get Uncanny Valley effects. I also certainly don't feel they're better than humans at actually interpreting low-res images, not in their current state.
And it should also be noted that if you prime such an AI with an image of the suspect, it will absolutely find a way to make a blurry mess of pixels look like that person. So it certainly shouldn't serve as the only evidence.
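The underlying problem is that downsampling is many-to-one: wildly different high-res images can collapse to the exact same low-res pixels, so any upscaler has to guess, and whatever you prime it with steers the guess. A minimal NumPy sketch of that ambiguity (illustrative only, not any specific upscaler's algorithm):

```python
import numpy as np

def downsample(img, factor=4):
    """Average-pool a square image by `factor` in each dimension."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
low_res = rng.random((2, 2))  # the "blurry evidence"

# Two very different 8x8 "reconstructions" with the same low-res footprint:
# flat blocks vs. a noisy texture whose block means are shifted to match.
flat = np.kron(low_res, np.ones((4, 4)))

noise = rng.random((8, 8))
correction = np.kron(low_res - downsample(noise), np.ones((4, 4)))
textured = noise + correction

print(np.allclose(downsample(flat), low_res))      # True
print(np.allclose(downsample(textured), low_res))  # True
print(np.allclose(flat, textured))                 # False
```

Both candidates are perfectly consistent with the blurry input; nothing in the data itself says which one is "right".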
I quite like the star button on Mastodon for this. It just pings the comment author that you appreciated their comment. So it's not an indication to some algorithm that this comment is incredibly relevant for everyone, because, well, some comments just aren't.