Why would we use a Wide Gamut Color Space when we have sRGB monitors and export mostly in JPEG for screen view?

I have no real calibrated workflow and, as a hobby photographer, no real “demand” for one. So color spaces, rendering, calibration and everything a professional should know in order to reproduce photos with the same look on every surface and in every digital environment: I don’t need all that, but I’d like to grasp enough of it to detect flaws in my own workflow.
Lots of people were asking for a wider working colorspace, so it could compete with ProPhoto and meet their printing demands. Which implied that soft proofing had to come first.
Well we have all this.
So far so good.
PhotoLab has used Protect Saturated Colors for years to render camera-colorspace colors inside its AdobeRGB-based working colorspace and adapt them to your editing changes in contrast and saturation. (Set the Protect Saturated Colors tool to auto mode and you’ll see it react.)

We have discussed how this would work.
Someone made a workflow diagram about it; I can’t find it quickly, so we’ll link it later. (There was a whole thread about that a few months ago.)

My starting point is this:
What are the practical advantages of working in Wide Gamut?
1 Yes, we work in a larger space than AdobeRGB, so fewer of the colors extracted from the raw file’s camera colorspace get compressed.
2 More saturated colors are possible, and you see them back in the preview.
3 The gain is mostly in dark reds and bright saturated greens. A deep blue sky improves much less in WGCS.

So my struggle is: why would I go through the trouble of editing in WGCS, having to use masking techniques like the monitor out-of-gamut warning and soft proofing to see whether there are colors my monitor can’t show correctly?
For printing?
If my goal is mostly viewing on a 4K smart TV?
Then what?
More original detail shown in the shadow parts of the raw file, so I can pull and drag it around with an intended purpose?

The automated rendering intents DxO provides, in order to give you back on screen as much as possible of the relative reality of the scene you captured, worked fine most of the time, right?

What do we miss out on if we just skip the second compression step and go straight to the third step, a near-export colorspace such as AdobeRGB?
Yes, I know more and more devices support Display P3, so every image you export now, edited for P3, would not need a new export once your viewing screen supports P3 in the future.
Do we miss out on control and detection tools needed to master the wider gamut and the image inside it?
One I have in mind:
the blue and red masking of the monitor out-of-gamut warning and soft proofing is a yes/no filter.
It doesn’t tell you how far out of the colorspace/gamut a color is, or in which channel.
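As an illustration of what such a “how far out, and in which channel” readout could look like, here is a small sketch. This is my own illustration, not a DxO feature; the matrix is the standard linear Display P3 to linear sRGB conversion with rounded coefficients, and gamma encoding is ignored for simplicity.

```python
# Sketch: a per-channel out-of-gamut metric instead of a yes/no mask.
# Linear Display P3 -> linear sRGB, coefficients rounded to 4 decimals.

def p3_to_srgb_linear(rgb):
    """Convert a linear Display P3 triplet to linear sRGB."""
    r, g, b = rgb
    return (
        1.2249 * r - 0.2247 * g + 0.0000 * b,
        -0.0420 * r + 1.0419 * g + 0.0000 * b,
        -0.0197 * r - 0.0786 * g + 1.0979 * b,
    )

def out_of_gamut_report(p3_rgb):
    """Report how far each sRGB channel falls outside the [0, 1] range."""
    srgb = p3_to_srgb_linear(p3_rgb)
    return {ch: round(max(0.0, -v) + max(0.0, v - 1.0), 4)
            for ch, v in zip("RGB", srgb)}

# A saturated P3 green: its red channel goes negative in sRGB,
# so this colour is out of sRGB gamut, mostly in the red channel.
print(out_of_gamut_report((0.1, 0.9, 0.1)))
```

A soft-proofing overlay built on such a per-channel number could show a graded heat map instead of a binary blue/red mask.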

(I have a party today, so I’ll be back late today or tomorrow on this thread.)

Please start without me and I’ll catch up. :relaxed:

I’d like to have an open discussion about this.


I already answered this question in another old thread, before the new Wide Gamut was here.

IMHO, there’s no need to change anything in this answer (besides the fact that DxO Wide Gamut has some advantages over ProPhoto). A large working space and 16-bit depth provide more accurate computations and avoid artefacts that would appear when using a narrower working space from the beginning. This reasoning applies to any engineering computation: be as accurate as possible all the way down to the final result and only then round it if necessary.
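A toy numeric sketch of that principle (my own illustration; the 0.1 factor is an arbitrary strong darkening that is later undone): rounding to 8 bits after every intermediate step loses tonal information that a full-precision pipeline keeps.

```python
# Compare: per-step 8-bit rounding vs. one final rounding.

def quantize8(x):
    """Round a 0..1 value to the nearest 8-bit level."""
    return round(x * 255) / 255

value = 0.337   # a mid-tone, normalized to 0..1
darken = 0.1    # strong exposure cut, later undone

# 8-bit intermediate: darken, store at 8 bits, brighten back.
stepwise = quantize8(quantize8(value * darken) / darken)

# Full-precision intermediate, rounded once at the end.
precise = quantize8(value * darken / darken)

# The early rounding has permanently shifted the tone.
print(stepwise, precise)
```

The same logic is why a 16-bit wide-gamut working space is preferable even when the final export is 8-bit sRGB.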

and export mostly in JPEG for screen view

A general statement that certainly does not apply to every photographer :slightly_smiling_face:.


"Why would we use Wide Gamut Color Space when we have sRGB monitors and export mostly in jpeg for screen view? "

The DxO team has written a lot about this on their website, but the short answer is that sRGB is outdated; sadly, it is still the standard, universally supported color space, meant to provide a standardized way of showing color on the monitors of its day. Eventually digital displays, from smartphones to desktop monitors to tablets, could display wider-gamut colors, and so can many printers. Adobe RGB 1998 was an attempt to provide a wider-gamut color space to accommodate this discrepancy, and to be used as a working color space as well. ProPhoto RGB was intended to be an archival color space: since most cameras can capture a very wide gamut of colors that you might want to preserve, ProPhoto RGB was the archival space, but some raw processors like Adobe adopted a flavor of it for raw processing.

DxO used Adobe RGB, which was sometimes too small; alternatively, they could have gone with ProPhoto RGB to accommodate all colors in the working color space. But a crazy color space like ProPhoto RGB is not actually ideal as a working color space, since in it you can easily push colors to values that your monitor and your eye cannot see and no printer can display. Some of its colors are even beyond what humans can perceive, because it’s all just represented by math.

DxO Wide Gamut Color Space is an updated Adobe RGB with cleverer choices. It’s wider than Adobe RGB, solving the problem of colors that don’t fit within AdobeRGB’s limits, yet not as wide as ProPhoto RGB, so you can’t as easily screw up colors that you can change but cannot possibly see. DxO’s solution is an intelligent compromise: a color space just wide enough to fit all the important colors one can see and display, but small enough to minimize the problematic color-correction operations.

DxO Wide Gamut Color Space is similar to what Blackmagic tried to do with DaVinci Wide Gamut, their working-color-space solution to the same problem.

It’s important to remember that DxO’s Wide Gamut Color Space is a working color space, not a delivery one. Technically you could embed it in an image, but it’s not meant for that. It’s meant to be used in DxO PhotoLab as the working color space, after which you export in the desired color space for other needs. This is where an embedded color space like sRGB becomes important. But since sRGB has a smaller gamut than DxO Wide Gamut, another clever but often overlooked addition to the workflow is the soft proofing panel with its preserve color details slider and export checkbox.

When you go from a wide working gamut to a narrow output space like sRGB, you may lose some colors. It’s unavoidable. The question is how you make the compromise between desaturated colors and colors that keep more tonal quality, the details, the textures.

Relative colorimetric, perceptual, etc. are old-school algorithms that either cut off the out-of-gamut colors, leaving that part of the image less saturated but also without detail, or proportionally squeeze the whole wider gamut into the smaller space, giving you the details at lower saturation, but in a crude way that also shifts colors that were already inside the desired output gamut. Again DxO innovated and provided us with another best practice: their preserve color details feature desaturates just the colors that are out of gamut, not the others, and keeps the detail in the highly saturated areas instead of simply cutting them off. You could try to simply desaturate, or do something complex with masking, but all of that is done accurately, easily and automatically by DxO.
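The contrast between those three strategies can be shown on a toy one-dimensional “saturation” signal with a gamut limit of 1.0. This is my own simplified illustration, not DxO’s actual algorithm; the 0.8 knee is an arbitrary choice.

```python
# Three ways to bring over-limit saturation values back under 1.0.

def clip(sats):
    """Relative colorimetric style: hard clip at the gamut limit."""
    return [min(s, 1.0) for s in sats]

def scale_all(sats):
    """Perceptual style: scale everything, in-gamut values included."""
    m = max(sats)
    return [s / m for s in sats] if m > 1.0 else list(sats)

def soft_knee(sats, knee=0.8):
    """Selective style: values below the knee stay untouched,
    values above it are compressed smoothly into (knee, 1.0]."""
    m = max(sats)
    if m <= 1.0:
        return list(sats)
    def f(s):
        if s <= knee:
            return s
        return knee + (s - knee) * (1.0 - knee) / (m - knee)
    return [f(s) for s in sats]

sats = [0.5, 0.9, 1.1, 1.3]
print(clip(sats))       # 1.1 and 1.3 both collapse to 1.0: detail gone
print(scale_all(sats))  # even the in-gamut 0.5 gets shifted
print(soft_knee(sats))  # 0.5 untouched, 1.1 vs 1.3 stay distinguishable
```

The hard clip keeps in-gamut colors intact but flattens saturated areas into one value; the global scale keeps the distinctions but moves everything; the knee keeps both, at the cost of some compression near the limit.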

DxO’s innovation and implementation are the best on the market right now, for something that was a problem for decades and got no love from other companies.

Many years ago I made some tutorials explaining these problems.

Color Management “Lost Tapes” Part 5 – Introduction to color spaces

Color Management “Lost Tapes” Part 6 – RGB Working Spaces in ColorThink Pro

Color Management “Lost Tapes” Part 7 – RGB Working Spaces and monitor in ColorThink Pro

Color Management “Lost Tapes” Part 8 – RGB Working Spaces and Output Spaces in ColorThink Pro


Please check here … → about the video from Rob Trek


Seconded! I watched this earlier today and it was most enlightening. I already use the Wide Gamut WCS, and now that I understand a lot more about the Protect saturated colours slider, I will be paying attention to that, too.

On the subject of “only sRGB”, I have a moderately expensive monitor that has Display P3 and I know a lot of people who own modern Mac laptops and iPads (and iPhones) that also have Display P3 screens.

My photos are mostly intended for my own enjoyment, so I now always export final images as Display P3 JPEGs, and that’s also what I upload to Flickr. Yes, they won’t look exactly like I want them to on some people’s monitors, but then no matter what you do, some people’s monitors will be terribly adjusted, or just plain terrible, anyway.

I sometimes show photos to people in my office on their company-provided screens and the results are… less than satisfactory. Heck, until recently I used two 24" monitors in the office and the only demand I made of them was that both be capable of displaying the same colours! That was not an easy ask, either.

I have for a long time said that the two best camera upgrades I have ever made were Apple ‘Retina’ displays and PhotoLab.

I am getting old …
We had a BBQ for 50-ish people and lots of drinks of all sorts.
Good food, nice drinks, good and nice people: what more do you need?
The most dangerous of the drinks we had was homemade grog :grimacing:, something like Caribbean sugar rum. (Do NOT drink Fanta with it at the same moment, you will regret that… :grin:)
That one is like the devil: sweet in the mouth, smooth to drink, but hell does it sting in the morning. :persevere:

I think I need a bit more recovery than just 8 hours of sleep…

Thanks for posting.

@MSmithy , even with my clouded brain I find your post easy to read, and it explains the reason for WGCS very well. I will watch your videos a bit later, when the fog is gone from my head :sweat_smile:
I didn’t open this thread for “do we need WGCS, yes or no?”, because that answer was given a long time ago: a hard yes.
I was more interested in the workflow change from a relatively compact working colorspace, with its big step down from the raw color data to AdobeRGB, to this larger space that my hardware cannot show. The practical use and control of this much wider playground, while the end results are still limited to the old classic sRGB (my own choice, for now).
I think there are more people who use DxO PhotoLab without being geared up to professional specs (yet). And I ask myself: should I change my JPEG export setup to Display P3 already, to avoid having to re-export in a larger colorspace later, once the smart TVs/monitors bought in the future deliver this larger colorspace and everything is set to it as standard? That could mean thousands of images needing to be re-edited or re-exported.
So why add even more images to the archive between now and the moment I can view in Display P3? (I remember we talked about this earlier: would you export in a larger space than needed in order to have better results in the future?)

Absolutely right. I am aware of this fact. I think we can safely say lots of DxO users print a lot, for their own use or for others. Some of us don’t. (Probably because mine aren’t good enough that I would want them printed 🤣)

The downstream accuracy is indeed something that is a bit compromised with an AdobeRGB working colorspace from the point of demosaicing onward.

May I say that soft proofing’s single-color, in-or-out marking does not give much information about what and which color is outside the colorspace of choice. It just marks the spots in the image that are out, not how far out. A 3D model in which your image data floats inside a colorspace of choice would be a good extra tool to see how much is out of the colorspace, and what exactly. That would be much easier to grasp for people who aren’t very experienced in colorspace “hopping”.

Thanks. A lot happened while I was offline :sweat_smile:
I have read a lot to catch up.

So you archive all images in Display P3 now?
Did you go back in time and re-edit the 5-star photos in Display P3?
This is most likely my idea of the practical use of WGCS for my purposes.


Your screen is limited to sRGB. Why would you then export as JPEG with a different colour space,
just to get in trouble when your viewer does not handle*) colours appropriately?

*) viewers work/react differently

  • some can be set to correctly interpret the (embedded?) colour profile
  • some recognize your monitor profile without requiring manual input



Well, there are a few things I would say are worth considering.

a) As a general rule, it’s best to keep the original RAW files for archival purposes, or, if one really wants to store them for a long time, to keep them in DNG format, which was originally intended to be an archival format: open source, so that people could use open-source rather than proprietary software to read the raw data at some point in the future, when the proprietary software may not be around anymore. This is an archive of everything, tone and color data.

b) When it comes to color, one thing I forgot to mention is that the whole issue of gamut and out-of-gamut colors is proportional to the saturation of the colors in the image and, to some extent, the hue values. If you have a monochromatic, low-saturation image, like a foggy morning in the UK, sRGB can be enough to fit all the colors in that image. And if you shoot in B&W or grayscale, the whole story about color becomes a moot point. What matters then are resolution, pixel dimensions and, of course, bit depth: how many shades you can reproduce.

c) The same thing as I’ve mentioned before: with B&W, the image gamut problems become virtually nonexistent; it’s the bit depth that is more important. But if you don’t have smooth gradients and you don’t edit a lot, even the bit depth can be fairly low, like 8 bits per channel, and still represent all you need.

The point I’m trying to make is that the gamut or bit depth one uses depends on the content of the image itself, what you do with it, and what the final output will be. Working in 32-bit ProPhoto RGB on an image that is a fine-art shot of a textured wall in B&W would obviously be overkill, even for archiving. So it’s a context-sensitive decision.

Also worth noting is that whether it was originally sRGB, AdobeRGB or whatever, DxO will convert the JPEG or TIFF you import to its DxO Wide Gamut working space. This, of course, does not increase the saturation of the colors in the image, any more than loading 8-bpc images into a 16-bpc working mode does. What it does is help preserve what was already there during editing when it comes to bit depth; and when it comes to colors, as you boost the saturation, assuming that is what you want, DxO Wide Gamut Color Space has got your back. So to speak.

When it comes to long-term archiving, assuming your images are very saturated, saving them out in the ProPhoto RGB color space, which, like DNG, is meant to be archival, ensures that none of the colors, no matter how saturated, are clipped. So for purely long-term archival reasons, with images that have a lot of saturated colors in them, I would use either the original RAW as DNG, or, as the second-best thing, a TIFF and/or DNG container with ProPhoto RGB. But if your images don’t have a lot of saturated colors, or are B&W or monochrome, then even sRGB will do just fine.

It’s also possible there will be some kind of new technology and AI-powered methods that make most of this less important than it once was, so that we will be able to “invent” colors that extend beyond the original, make B&W color, etc. But that is a more artificial approach, so it’s a matter of what you are after: an accurate representation of what the camera captured, or eye-pleasing final results achieved in whichever way. It’s a personal choice, I guess.


Separate from the discussion of gamut, I also believe that the original file (or maybe a DNG conversion, though I’ve had trouble with EXIF data being saved) is the best archive format.

RAW editors continue to evolve and improve. You can often go back to an older RAW file and end up with a better final product. Going back to a JPG or TIF won’t allow that.


I can’t comment on EXIF data not being saved; I don’t think I had that issue, or maybe I have different needs. But yes, as the technology improves it’s good to keep the originals, because, whether through personal skills or new tools or both, one can extract better results at some point in the future.

I was doing some AI upscaling on both photo and video, and some AI denoising, and the technology improved, so I revisited the old images and videos; because I had access to the originals, I could re-process them with better tools for superior results. Yes.


a) Yes, in the far, far future it could be the case that my Panasonic RW2 files can’t be read anymore, but that chance is very slim. It’s rather going to be the case that the conversion gets better in the future, and the denoising too. (See DxO PL from v1 to now v6, which has made major leaps, while v1’s PRIME was already a major leap for my m43 files. SilkyPix to TIFF plus Nik’s denoise couldn’t compete against PRIME v1.)

b) Yes, understood. Often many images are well within AdobeRGB or even sRGB, so with the monitor OOG warning or soft proofing you don’t see any masking.
(I reckon we still lose some color gradation in the raw-to-colorspace conversion because of the mapping/color rendering; something to do with bit depth?)

c) Understood.

I have almost all my unprocessed images, raw, JPEG and scanned TIFF, archived in an original folder tree, which has a twin-like tree of processed JPEGs.
The only things I throw away are redundant out-of-camera JPEGs: the bad, the ugly and the misfits.


Open source doesn’t save you at all here, even if you and many others may believe so. Even if people developed in open source, there would be nothing but proprietary solutions available to store, for example, the “edit metadata”.

When I made my first migration from RawShooter Premium to Lightroom, after Adobe’s hostile takeover of the Danish company Pixmantec, all of us lost all our previous work in RawShooter, since its sidecar data could not be migrated by Lightroom. We had to start from zero again. When I left Lightroom for Capture One ten years ago, nothing had happened with the migration possibilities.

There are RAW file formats that are no longer supported, and that is of course a problem, but so far it is a limited one. DNG is often no reliable solution either, because very often it is really a dead end too, and PhotoLab is no exception: it either doesn’t accept DNG files it hasn’t saved itself, or it just refuses to open DNG files created from RAW for which it lacks its own proprietary DxO camera profiles, not to mention the problems with linear DNG.


I think you miss the point of the DNG and DxO relationship. I have written about this in great detail before. DNG is a container that can hold all kinds of image data, and anyone can write an app to read it if they want to. DxO is not a generic DNG-reading app: it reads the DNG container, but requires additional data inside it to do its thing. This is critical to what DxO is as an app. People who claim DxO should read all DNGs don’t understand what that means. DxO reads the DNG wrapper, but it relies on what is in the wrapper to perform its function. It can also output DNG, so-called linear DNG, with the explicit purpose of being read by other apps that carry on where DxO left off. This is a deliberate decision: treat RAW files this way and save them as DNG for a very particular workflow.

The linear DNG that DxO exports can also be an archival format if one needs it to be. The point of DNG as open source is not that what’s inside is the same as the raw; it can be, but it’s not limited to that. In fact, one can embed the original RAW file in the DNG, getting the best of both worlds if one so chooses. The point being that the DNG wrapper is like a .zip archive or an .mp4 wrapper.

For whatever strange reason, people keep misunderstanding DNG, what it is or is meant to be, and its relationship with DxO. I’ve spent enough ink in previous posts going into detail about all that, so here is some more info that I haven’t yet posted.

Article: Archive File Formats | dpBestflow

"Archive files, like Working files, are of two types: originals and derivatives. Originals may be raw files, camera-derived DNG files, JPEG files or possibly TIFF files. Derivatives may be DNG files made from proprietary raw, or any other second-generation rendered file types made from camera originals.

In general, archiving camera originals is recommended, with the possible exception of replacing proprietary raw originals with DNG files. dpBestflow® also recommends archiving master files and preserving their layers when present. You may also want to archive derivative files such as those prepared for printing or delivery.

Proprietary raw

Proprietary raw files have all the advantages of a digital negative: more control over the rendering process, greater bit-depth, and widest color gamut. They may be processed many times over and in a wide variety of software.

Another advantage of raw files is that they are typically only one third the size of uncompressed rendered files such as TIFF. This compact size is due to the fact that the raw data hasn’t been converted to three-channel color and the raw data has lossless compression (or optionally a visually lossless compression scheme).

Small size is maintained even for files that have had parametric image edits, since PIE edits are just tiny text files — usually saved in XMP format. The question is, are they a good archive format?

The main disadvantage to proprietary raw as an archive format is the proprietary and undocumented aspect of these unique and multiplying formats. As we have seen with a wide variety of other proprietary technologies, it becomes inevitable that some, if not most, of these formats will become unsupported as the cameras that made them retreat into the past.

While you may not continue to own these older cameras because you replaced them with new and better models, you will likely want to access the image files these cameras generated long into the future.

You may well hope that your children and grandchildren will be able to have access to these images as many may have recorded significant family events.


The DNG format preserves the original raw sensor data just the same as the proprietary raw files. Nothing is left out. DNG is a safer archival container for several important reasons. The first is that it is a documented format. Its specification is openly published and how DNG files are constructed is openly shared with other software vendors.

The second reason is that, unlike any other raw format, DNG contains a file verification tool known as a “hash” that can tell if the raw image data remains unchanged and uncorrupted. This hash only references the raw image data, so a DNG file can be processed an infinite number of times and the XMP instruction set(s) and embedded JPEG preview(s) can be redone an infinite number of times, but the underlying raw data does not change, so it can continue to be verified forever.

One disadvantage of DNG has nothing to do with the format itself but has to do with the number of software vendors that choose to support DNG. Since not all do, DNG files cannot be processed in every possible raw file processor out there, especially the camera manufacturer’s software.

DNG can, however, contain even the proprietary raw file within the DNG container, so if this is a concern, you can choose to save your DNG files with the proprietary raw files embedded. The file verification hash will then also protect the proprietary raw data as well as the DNG raw image data.

This, in fact, is currently the only way to verify proprietary raw files. DNG files can sometimes be smaller than proprietary raw since DNG uses a very efficient lossless compression scheme on the raw image data. DNG files can be the same size or slightly larger than proprietary raw if they contain full size JPEG previews. DNG files can be twice the size of proprietary raw if the proprietary raw file is optionally embedded.


TIFF files are considered by some to be the best archival format since it is a standard documented format likely to be supported long into the future, and is of the highest quality formats when saved with either no compression or lossless compression. The drawbacks to TIFF format are due to it being a fully rendered file. Consequently, TIFF files are much larger than raw files, especially when saved as 16-bit and/or layered files.

Additional drawbacks to rendered file formats are that any pixel edits result in loss of image data, or come at the price of saving an additional layer. Even fairly small adjustment layers take up considerably more space than the very tiny XMP files that are saved with raw files.

Format obsolescence

A major challenge with regard to the preservation of digital image files is the long-term readability of file formats. This is especially true if they are proprietary, which describes most camera makers’ raw formats.

Camera makers have already orphaned some proprietary camera raw formats, and we are only a few years into the process. The sheer number of raw formats, many if not most rewritten with every new camera launch, and the fact that they are undocumented, makes it unlikely that all of these formats will be readable decades from now.

In addition to the proprietary raw problem, other image formats are unreadable on newer operating systems. One of the more shocking format failures is the Kodak Photo CD format. Although Kodak addressed the issue of media permanence, it was all for naught, as there is no longer any application support for their proprietary format. Many museums and other archiving institutions ended up having to convert thousands of these discs to other formats and storage media before all the contents became unavailable. There are many more examples of data locked in obsolete formats that are unreadable.

While converting raw image data to JPEG or TIFF is one strategy for avoiding image format obsolescence, the lossy nature of JPEG and the large size and fixed nature of TIFF are problematic. Converting to a standard raw format is a better choice for image archives.

Currently, Adobe DNG format is the only candidate. Keep in mind, even DNG files may need to be migrated to a subsequent DNG version or a replacement format as yet unknown.

An important feature of the current DNG specification is that all data is preserved. Even data that is not understood or used by Adobe or third party software is preserved.

Although it is too early to tell how successful it will be, the Phase One EIP format may offer another path. It uses the open ZIP format to wrap up the raw image data with processing instructions and any applicable lens cast correction data. Unfortunately, these processing instructions are only read by Phase One software with no guarantee that work done in a current version will be honored in a subsequent version of the software.

The Adobe DNG format, on the other hand, has forward and backwards software compatibility built in."
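The verification idea described in the quoted article, a hash computed over only the raw image payload so that XMP edits and new previews never invalidate it, can be sketched like this. This is a simplified stand-in that uses a plain dict as the “container”; the real DNG format stores an MD5 digest of the raw image data in a dedicated tag.

```python
import hashlib

def raw_digest(raw_bytes):
    """Digest over the raw payload only (MD5, as DNG uses)."""
    return hashlib.md5(raw_bytes).hexdigest()

# A toy "container": raw payload plus replaceable metadata and preview.
container = {
    "raw": b"\x01\x02\x03\x04" * 1024,  # stand-in for sensor data
    "xmp": "<edits v='1'/>",
    "preview": b"jpeg-bytes",
}
container["raw_hash"] = raw_digest(container["raw"])

# Re-editing touches only the XMP instructions and the preview...
container["xmp"] = "<edits v='2'/>"
container["preview"] = b"new-jpeg-bytes"

# ...so the raw data still verifies, no matter how often it is reprocessed.
assert raw_digest(container["raw"]) == container["raw_hash"]

# Any corruption of the raw payload is detected.
corrupted = b"\x00" + container["raw"][1:]
assert raw_digest(corrupted) != container["raw_hash"]
print("raw payload verified; corruption detected")
```

Because the digest covers only the raw payload, the edit history and previews can be rewritten indefinitely without ever breaking the verification.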

I also think this thread shows very quickly and clearly where we end up if we abandon the road of taking mutual, joint responsibility: saving our images in a format people have a chance to read, and for monitors that are still set to sRGB.

I can see P3 gaining momentum on displays, and it seems like a better option than a mixture of Adobe RGB and sRGB, since I feel it lies “closer” to sRGB than the more blueish Adobe RGB. When I look at P3 images in XnView on an sRGB monitor, I think they disturb me less than Adobe RGB images do.

For sure we can turn off all influence of the profiles in the images, like PhotoLab already does when using the monitor gamut throughout, but that doesn’t save us every time, because I at least develop an sRGB file differently than an Adobe RGB or a P3 one, not to mention those blueish or violet ProPhoto files.

Here we have me and Oxidant, who have long since standardized on saving in sRGB 4K for TV. He doesn’t calibrate at all, like most people, including some I know of who use high-end cameras and publish on the net.

Some care about how their images are going to look on image consumers’ screens and some don’t, already with sRGB, but that is nothing compared to how the looks will differ when people talk about saving files in ProPhoto, of all gamuts. That gamut is just not suitable at all for display on the sRGB devices that are still the standard out there on the net.

That said, I do sympathize with the thought of standardizing on P3 in the future, both for monitor viewing and for printing. I now finally have an Adobe RGB and P3 compatible monitor, and I do print in Adobe RGB, but it really has fucked up the clean efficiency of my old sRGB workflow, both for viewing and printing. Still, I’m not ready at all to migrate everything to a P3 workflow, even though I like that gamut better than Adobe RGB.

The most important thing for me is still to have my monitor and my prints in sync, whether I print in Adobe RGB or P3, and the coexistence of JPEGs in sRGB, Adobe RGB and soon P3 will call for a far better way in PhotoLab to quickly check files’ ICC profiles than is possible today.

No, I don’t misunderstand this at all. I have solid knowledge of using DNG as a file format in a cultural heritage environment: I designed and developed most of the DNG workflows in the DAM of the City Museum of Stockholm. PhotoLab could never have been a part of that workflow, for the reasons I have written.

Today all the people I know working in that sector use Adobe products, since they are the only ones not causing strange problems in these workflows. I would never recommend anyone to use PhotoLab in flows like that; Lightroom has a lot of unique and smart DNG-related features that PhotoLab just doesn’t have, and DNG itself provides some features that are almost tailor-made for smart DAM flows.

Great posts to read. Thanks, all.
DxO has made their DNG options wider, but even the “optical corrections and denoise only” DNG has lost one ability: because chromatic aberration correction has already been done in it, extreme WB changes can produce a form of color shifting, and halos can be formed.
So the DNG wrapper wraps a demosaiced image with a floating WB (can I call it that, when the WB isn’t set?), in which the optical corrections and the denoising of choice are done.
Which means
the workflow needs a denoising choice and a set strength. Then, even when you don’t need it for the raw DNG, you set WB and brightness in order to find the right CA correction and the needed denoising strength. You make a cleaned “negative” with the EXIF and IPTC properties written into it. And that could have the same problem as the original raw file:
can we read this in 50 years?
So when I need to save raw files in the future, I hope I won’t have to “edit” every raw file before saving it into a new container/wrapper, one by one… :sweat_smile:

About calibration: the more I read about ICM, ICC, profiles, etc., the more I think, geez, how much time do I have to spend on optimizing my viewing gear for the last 10% of improvement? When I buy a Spyder X Pro, can I calibrate the smart TV with it too?
I know how to calibrate an office printer, but we are not selling Fierys anymore, because we don’t sell to print shops in the graphics market, so I just set it up to the base definition, default, and let the user set her or his preferences in the PCL6 or PostScript driver.
At home I run the menu-driven smart TV calibration when I buy the set and go through all the settings with the manual in hand, but after that, I’m afraid, I only return to it when I see a visual problem. I read that calibration is only useful if you repeat it every few weeks on all devices…

On the other hand, keeping DxO’s working colorspace at Wide Gamut, even when you export uncalibrated JPEGs in sRGB for home use, seems to be the better choice.

I hope this thread can continue adding interesting info about this difficult theme.
It will help a lot of people to understand which choices would be the right ones. :slightly_smiling_face:

Interesting that you bring up TVs, because it’s the video/motion picture industry that has almost complete control over the displays we will be using. In that world, the DCI-P3 variants and the Rec. 2020 and Rec. 2100 standards prevail and guide future development. It’s a wide gamut/HDR world out there, Peter.

Photography is a relatively tiny market, and of little consequence in this regard. Moreover, photography and video are rapidly converging, our smartphone cameras being a case in point. Technically speaking, they are video cameras that can produce still images.

While we’re talking about the future, I’ve also been noticing that the new JPEG XL format appears to be gaining at least some traction as the eventual JPEG replacement. Lots of advantages for photography and web use, wide gamut and all. Maybe this time?


I know; the HDR modes aren’t that nice to watch, too “plastic” looking.
I started with an LG OLED (panel 2), but the colors weren’t stable: content switches meant the colors kept adjusting.
I bought this one

Its XR processing chip is much better than LG’s variant, which is set up as a gaming panel: too much saturation and bad skin tones.
(They have the same panel inside but totally different color profiles.)
Sony has a special color profile for Netflix series and for directors’ color profiling.
I did the calibration and initial setup and that’s enough. The XR chip adapts well to different content. (Which could be a problem with stills, by the way.)

It is 97-98% Display P3 capable, so from that point of view I could export for P3.

Does JPEG XL have a bigger bit depth?
Smoother color transitions?

Super TV – I’m jealous!

Some readable background on JPEG XL and a recent update indicating that Apple is on board.


I only export to high quality JPEG those photos I want to put on Flickr (and I also add them directly to iCloud Photo Library for easy personal viewing). As of PL6, all such exports are Display P3 JPEGs. They represent around 11.5% of my photos this year, for example.

I do also have a “complete archive” process I do periodically, in which I export every photo to a moderately-sized JPEG as a kind of “fall back” for anyone who might be looking at my computer after I die. Those are much lower quality and I haven’t bothered setting those to something other than the default sRGB.

As for revisiting “5 star” photos, I use a different means of labelling the good stuff, but yes I do revisit them. Some photos have been revisited multiple times. In early 2019 I made a trip to Singapore and took many photos. They were processed in Luminar 3. I bought my first PhotoLab version (v3) later that year and, impressed with PRIME noise reduction, I went through all of them and reprocessed. When DeepPRIME appeared, I went back over many of them again. With PL6 and the wide gamut WCS I went through those same photos again. This time ensuring WG and P3 output. I also took the opportunity to update my watermarking and tinkered with some shots where my extra experience told me they could be improved (chiefly in highlight/shadow treatment).

I have just finished revisiting every photo taken (and published) with my current camera (some 1470+) to this latest standard. That took me back to August 2017.

I have also revisited many much older photos, going back as far as 2008. I stopped at that time only because my prior camera is not supported by DxO, otherwise I would! I have been astonished what PL6 can make out of a 2007 model camera with only 10 Megapixels. They would originally have been processed with Lightroom, Aperture, or Luminar.

That has to be the most succinct description I have seen yet.