Interesting.
Yes, the more specific your target is, the better assumptions you can make.
Another thought I had: since what you measure is a relative change (rather than an absolute value), the choice of browser matters less. Your tool isn’t a browser compatibility testing tool.
I assume (but might be wrong) that most visual regressions are caused by the code (the website) rather than the browser.
With that in mind, you might want to consider it from a developer’s point of view rather than a visitor’s. Meaning, it might be more important to reflect the site as the developer is likely to see it, rather than as the visitor sees it.
Another thought: whatever you end up choosing, it might be a good idea to be transparent about it and to display the details of the test (for example, the User-Agent value).
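As a rough illustration of that transparency idea, each regression report could bundle the environment it was measured under alongside the diff result. This is only a sketch; the function name, field names, and the User-Agent string here are all hypothetical, not part of any existing tool:

```python
# Hypothetical sketch: attach the test environment to each regression
# report so users can see exactly what conditions produced the diff.
def build_report(diff_percent, user_agent, viewport):
    """Bundle the visual diff result with the conditions it was measured under."""
    return {
        "diff_percent": diff_percent,
        "environment": {
            "user_agent": user_agent,   # e.g. the headless browser's UA string
            "viewport": viewport,       # rendering size also affects layout
        },
    }

report = build_report(
    2.4,
    "Mozilla/5.0 (X11; Linux x86_64) HeadlessChrome/120.0",
    {"width": 1280, "height": 800},
)
print(report["environment"]["user_agent"])
```

Even just surfacing these two fields in the report UI would let a developer rule out "it only looks broken in the tool's browser" as an explanation.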