This probably doesn't really answer your question, but reading it got me thinking about what is optical and what is perspective — not a lens error, but a point-of-view issue.
And this article clears that up rather well, along with spherical aberration and chromatic aberration.
So the optical module in DxO does three things in one automatic go: pincushion/barrel distortion, aberration, and vignetting. It also handles things like lens corner light loss — shading, which is the mild form of vignetting.
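To picture what the distortion part of such a module does, here is a minimal sketch of the classic polynomial radial model (my own illustration with made-up coefficients, not DxO's actual algorithm): every pixel is moved along the line through the image center by an amount that grows with its distance from the center.

```python
import numpy as np

def undistort_radial(xy, k1, k2, center=(0.0, 0.0)):
    """Remap a point using the polynomial radial model
    r' = r * (1 + k1*r^2 + k2*r^4), in normalized coordinates.
    k1 < 0 roughly counteracts barrel distortion, k1 > 0 pincushion."""
    xy = np.asarray(xy, dtype=float) - center
    r2 = np.sum(xy**2, axis=-1, keepdims=True)  # squared radius
    return center + xy * (1.0 + k1 * r2 + k2 * r2**2)

# A point near the center barely moves; a corner point moves a lot:
p1 = undistort_radial([0.1, 0.1], k1=-0.2, k2=0.0)
p2 = undistort_radial([0.8, 0.8], k1=-0.2, k2=0.0)
```

This is also why distortion correction costs image area: corner pixels get pulled inward, and the software has to crop (or keep a wider angle) to hide the curved edges.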
Compared to my other program, its vignetting correction looks better at first glance, but I found out that DxO keeps a wider image angle — it uses less crop (or more of the sensor?) to cut out the vignetting. Cropped to the same angle, the results are the same.
Same with chromatic aberration: at first it looked harder to fine-tune, but because the corrected image isn't shown in real time, the aberration correction only kicks in when zoomed in.
Their lens-sharpening optics module gives an instant kick in the right direction. So as far as optical correction without any manual help goes, it is doing a fine job, I think.
As for viewpoint correction, the auto mode surprised me: it overshoots on occasion at 100% and gets unnatural, but tuning it down to 75% helps.
That article showed me why. The brain corrects by sampling while looking up, focusing on small points and rebuilding them into straight lines, as if viewing from a greater distance. If you over-correct the image by straightening all lines, your apparent eye height is lifted to the point where you seem to be standing 4 m up instead of on the ground, and thus it looks unnatural. The same theory goes for horizontal lines, so some residual slant in the horizontals is needed.
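That 75% trick can be thought of as blending the full correction with "do nothing". A toy sketch (the linear blending of homographies and the `H_full` matrix here are my own simplified illustration, not anything DxO actually computes):

```python
import numpy as np

def partial_keystone(H_full, strength=0.75):
    """Blend a full perspective-correction homography with the identity.
    strength=1.0 straightens all verticals (the 'standing 4 m up' look);
    around 0.75 keeps a hint of convergence and reads more naturally."""
    return (1.0 - strength) * np.eye(3) + strength * H_full

# Hypothetical full correction removing vertical convergence:
H_full = np.array([[1.0,  0.0,    0.0],
                   [0.0,  1.0,    0.0],
                   [0.0, -0.0005, 1.0]])  # keystone term in bottom row
H_75 = partial_keystone(H_full, 0.75)
```

The blended matrix keeps 25% of the original convergence, which is why the corrected building still "leans" just enough to look like it was shot from the ground.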
All this has nothing to do with your question, but more with understanding what you're doing while correcting, and that applies in the applications too. At least in my case, I learn every time how little I know of the theoretical details, as an untrained photographer who just enjoys taking pictures and enhancing them in post (read: correcting my flaws 😋).
As platypus explained, correction means changing the location of pixels, and I believe it's best to do that at the highest resolution (the most pixels) and in one application, from raw. And if one app does things better than another, that's mostly down to knowing how to handle the tools, or its automated algorithm helping you more.
So I think it's best to do all corrections — geometric/viewpoint as well as optical distortions like barrel distortion, flaring and aberrations — in one raw developer, to get fewer artifacts and mismatches from wrongly placed pixels.
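The "do all remapping in one pass" argument can be made concrete with a toy experiment (my own illustration, nothing DxO-specific): shifting a test signal once by 0.7 samples versus twice (0.3 then 0.4) with linear interpolation. The two-pass version drifts further from the exact result, because every interpolation pass smooths a little — which is why chaining geometric corrections across apps costs detail.

```python
import numpy as np

def shift_linear(signal, dx):
    """Shift a 1-D signal by dx samples with linear interpolation,
    a stand-in for any geometric remap (distortion, keystone, ...)."""
    x = np.arange(len(signal), dtype=float)
    return np.interp(x - dx, x, signal)

x = np.arange(200, dtype=float)
s = np.sin(0.3 * x)                    # band-limited test "image row"
exact = np.sin(0.3 * (x - 0.7))        # mathematically exact 0.7 shift

once  = shift_linear(s, 0.7)                      # one combined remap
twice = shift_linear(shift_linear(s, 0.3), 0.4)   # two chained remaps

# Ignore the edges, where np.interp just clamps to the border value:
err_once  = np.max(np.abs(once[5:-5]  - exact[5:-5]))
err_twice = np.max(np.abs(twice[5:-5] - exact[5:-5]))
# err_twice comes out larger than err_once.
```

Real raw developers use better interpolators than this, but the principle stands: each resampling pass loses a bit, so fewer passes means fewer artifacts.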
And if another app does something better, then had I used my own application longer, it would more likely come down to my lack of craft. (That said, I heard LR is rather good at auto-correcting.)
So, in short(ish): I practice and learn as much as I can in one application, and try not to export too much through multi-step stages. I did that before, between two raw developers, and it started to frustrate me that there wasn't a perfect transition point that worked for all kinds of images — color errors crept in. You end up doing double corrections in the DNG of WB, color, exposure, contrast and such, and DNG is linear raw, so not real raw. So colors didn't look the same after the transition, but they weren't untouched either: double work on color taste and WB.
So it's the DxO lab guys' job to close the gaps and holes in the PL suite, so we don't need LR or Capture One or Silkypix or …