Linux support to keep our data safe

Hello all,

I’m new on the forum, but I’ve been a DxO user for many years. I realize the question of Linux support has been raised before, but I would like to raise it again. DxO is one of the few European companies that produce high-quality photo software like this.

Without getting too political, I for one would like to keep my data in Europe, under European laws, something that can’t be guaranteed with either Windows or macOS as things stand. This is especially important given the way the US political environment looks at the moment.

Please, this is important. It is no longer just a wish for the software to work on Linux as a “nice to have” feature; it is now a “must have”.

And no, I have no wish for this to turn into a flame war.

Out of curiosity, what is the brand of your smartphone?

I know what you are asking/getting at; sorry to say, it is, at the moment, a Google Pixel. This too is something I will move away from and go Euro-based, at least if things keep heading the way they are. But that has nothing to do with DxO and Linux support. Still, as you kindly asked, I kindly replied.

Like politics, such support is far from simple. It’s a “must have” for a (probably small) subset of customers. Far more customers, I would think, consider fewer bugs and better performance the real must-haves. No one is getting what they want.

You are, of course, right that it’s not simple, though Linux support should be entirely possible. And you are also right about what most people want in terms of fewer bugs and better performance.

That was exactly the point of my question. Good luck in finding a “Eurobased” smartphone. And IMHO the Pixel is a GREAT phone.

But my provocation is not actually only a provocation.

You are trusting all your important data (bank details, identity, etc.) to a smartphone that is US- or China-based (or both). And no, you don’t have “Eurobased” alternatives.

And after trusting your important data to the US or China on your phone, you are making a fuss about trusting your PHOTOS (which are far less important) to PL, which runs under Windows? It makes no sense.

Suggestion: keep a dual boot PC. Use Linux for everything but the photos. Keep Windows for Photolab only.

Result: no risk for security (or paranoia), maximum safety, you still can use the best RAW development program in the world, satisfaction on both fronts.

I think you will find that Apple’s privacy policy is much tighter than Windows’s, as they have chosen to follow EU directives. Certain OS features are not available in Europe.

On the desktop, macOS has a 16% market share while Linux has only a 4% share. Compare those with Windows, which comes in at 71%.

Since it usually takes the same size team for each platform, and bearing in mind that PhotoLab is a very small part of those markets for photo editing tools, you are asking for a massively disproportionate effort and cost for a very small user base.

As has been said countless times already, it simply is not going to happen. Maybe when Adobe makes Photoshop for Linux :crazy_face:

Have you tried one of the existing Linux options? I recently decided to move to darktable because of dissatisfaction with DxO and PL (not because it doesn’t run on Linux), and am finding it to be an excellent alternative.

One can always flash their Pixel with GrapheneOS, which completely decouples the phone from Google’s services and apparently ensures more privacy than almost anything else currently out there (if you feel you can trust what these privacy non-profits say).

I actually tried it myself, and it was pretty good. My family is just too heavily “in” Apple’s ecosystem, and it was too much of a pain to make the move because of that. Otherwise I probably would have stuck with it.

For me, the only reason to stay on Windows is the lack of Linux support from DxO. If it existed, I would not hesitate one second to shut down my Windows installation forever. A Mac was never an option for me.

There is a useful rule of thumb in software development: if developing application software for one system costs 100% effort, and for another system also 100%, then developing application software that runs on both systems and is interoperable costs 400%. Currently, not even .dop files are interchangeable between the Mac and Windows worlds, for the reasons mentioned above. Developing a Linux variant here would certainly be too great a risk. A Linux variant would only be a good starting point if interoperability between platforms were addressed first; then, on that basis, an infrastructure that already has to be developed to be portable could be tested in the Linux environment, of which there are already quite a few variants. In addition, it would be necessary to establish a friendly relationship with Linus Torvalds so that the important additions could be integrated into the kernel :joy:

So much for the “it’s only a few days’ work” brigade, who have never worked on sizeable multi-platform projects. :clap::clap::clap::clap::grinning_face::grinning_face::grinning_face::grinning_face:

Same here. Gotta wonder how many of us are out here. FWIW, I’ve tried to get PLv8 running on Linux using WINE. It didn’t go all that well.

Now I’m looking at Docker containers with Windows base images (I just found them as I was about to tell someone they weren’t a thing; a thorough search turned them up). Still not sure what the deal might be with Windows licensing, but that would absolutely be a way to keep any Windows bits from reporting back to the MS “mothership”, and it might perform better than WINE.

My experience as well: DxO in WINE works very poorly. I assume Windows under Linux would be the better option. Nevertheless, I suspect that even if it worked, performance would be a concern.

I’m starting to think those Windows containers might be a dead end: potential licensing issues running them on a machine other than Windows. Still trying to figure it out.

Jolla and Fairphone are a couple of decent options.

Fairphone is Android, so it has exactly the same “privacy issues” (for those who are concerned) as a Pixel (which is a far superior phone).

Jolla runs Sailfish OS, so good luck running some critical apps (banking, etc.) that do OS checks. They have made progress, but some banking apps can still be very picky.

As usual, Linux fans can’t think of anything other than operating systems, while totally ignoring the local applications or the impact of the Internet. This is really fascinating, and when it comes to pictures, a lot of them contain GPS info and XMP and/or IPTC metadata that can make a lot of difference to whether or not you get exposed when using your computer.

Using Linux is not at all a vaccine against file-related problems or the dependencies that come with using all sorts of resources on the Internet. Or are you Linux folks living in a local sandbox, not using the Internet or any modern cloud services at all? What about your use of social media?

The big divide in the computer world will not be between Linux and Windows at all, but between running things locally and using cloud services on the Internet. And it is a fact that the higher the demands locally run AI models place on local computer resources, the more people will be forced to use cloud services for purely economic reasons.

BUT buying the hardware that makes it possible to isolate yourself from AI cloud-service dependencies and run more powerful AI models locally is quickly becoming a very exclusive option that fewer users can afford.

I have just run some tests comparing whether my new custom-built computer with an Nvidia RTX 5070 Ti card with 16 GB of VRAM can meet my demands for running local AI models better than my older 3060 Ti system with just 8 GB.

My old system could not handle Google Gemma 3 12b, which demands at least 16 GB of VRAM on the GPU to meet my requirements: using the iMatch AutoTagger to identify animal and plant species in my pictures. The smaller Gemma 3 4b, the biggest Gemma model I could run earlier on 8 GB, just could not do the job. So earlier I was forced to use either Google Gemini Flash 2.5 or OpenAI GPT 4.1 to meet those requirements.

Well, yesterday I installed the bigger AI model Gemma 3 12b on my new computer, which is more than twice as powerful as the old one. That model really put those 16 GB of VRAM to work, BUT it also managed to solve these problems while running my 160 test pictures locally instead of via the cloud. I must say I didn’t expect that, since the big French cloud AI service Mistral 3.1 to a large extent failed to solve the same problems.

I had chosen 100 example pictures with animals and plants to see if Gemma 3 12b had what it takes to identify and classify them and write both relevant Descriptions and Keywords. I also processed 60 global architecture pictures, where the task was to try to identify landmarks and write Descriptions and Keywords using the concepts of an architect.

I have published these pictures in two portfolios on the biggest photo site in Sweden, Fotosidan (which translates to “The Photo Page”), so people can see for themselves how it looks and what to expect from these results.

It is also important to stress that this is not an absolute measure of what to expect, because what is produced is a result of my prompting in iMatch, which has four prompts that have to be developed and fine-tuned to your liking. There are separate prompts for Descriptions, Keywords and Landmarks, plus an ad-hoc prompt where specific data for special selections of pictures can be added.

Animals and Plants in East Africa:

Sten-Åke Sändh - Portfolio

Global Architecture:

Sten-Åke Sändh - Portfolio

What is not reflected in these picture texts in the portfolios is that the data is also formatted and structured to increase readability. That formatting has been stripped by Fotosidan but is visible, for example, in PhotoLab 9 or Capture One, and of course in iMatch DAM, where I have processed these pictures’ metadata. That is how the structure looks in applications other than Fotosidan.

You also need a local AI platform like Ollama or LM Studio to run these local models, and even though I use Google’s Gemma in this case, Google can never get access to this data because the process is entirely local. This way you are free to use lots of free local AI models without paying a cent to any American AI cloud platform owner.
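To make “entirely local” concrete, here is a minimal sketch, assuming an Ollama server running on its default port (11434) and a vision-capable Gemma 3 model already pulled with `ollama pull gemma3:12b`. The prompt and file name are placeholders, and this is not how iMatch itself talks to the model; it just shows that the picture never leaves the machine:

```python
# Minimal sketch: ask a locally running Ollama server (default port 11434) to
# describe one photo with a multimodal Gemma 3 model. Nothing is sent off the
# machine; the model tag, prompt and file path are illustrative only.
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "gemma3:12b"  # assumes this model has been pulled locally

def describe_image(path: str, prompt: str) -> str:
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        "model": MODEL,
        "prompt": prompt,
        "images": [image_b64],  # Ollama accepts base64-encoded images for vision models
        "stream": False,        # return one complete JSON answer instead of a stream
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(describe_image(
        "example.jpg",
        "Identify the animal or plant species and suggest a short description and keywords.",
    ))
```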

Since Gemma 3 12b meets all my demands, I can now drop my OpenAI API subscription if I want, without any problems when it comes to my picture metadata.

So this is ONE way to go if you don’t want to depend on the big American cloud services and have access to software like iMatch DAM, which is open to a great variety of American, Chinese or European cloud services, or local models of your liking. And it has nothing at all to do with Linux. Even DxO, Adobe and Capture One could offer the same kind of AI interfaces, but so far theirs are strictly proprietary.

Off-topic.

There was a concept 30+ years ago that very soon we would all use cheap “net computers” (I don’t remember the exact name), a kind of terminal using networked resources. The concept was DOA (dead on arrival) since the Internet was too slow, storage was too small, and there were privacy concerns.

In most cases AI = ML, and machine learning training is very different from actual ML usage (inference) in terms of the computational power required; this looks like a common misunderstanding. It seems you have DAM image recognition in mind, in which case offline processing by some supercomputer would make sense, as it would for some video processing, and perhaps advanced still-image processing involving generative AI or things like that. But this has nothing to do with everyday photo editing, which is an interactive task requiring low latency. As a side remark, some demosaicking algorithms require 1000 times or more the computational time of others while delivering only 0.1% better “image quality” (whatever that means); it’s a true story. Which one would you choose?

Certainly true, but it depends a lot on the IQ of the model creators (I mean Intelligence Quotient here, not Image Quality), the model’s particular purpose, and the network bandwidth and latency requirements. Think of live 100 Mpx image corrections, for example. We don’t have terabit-per-second connections at our homes yet, and I’m not sure we ever will. Latency brings limits too, and we still can’t get around the speed-of-light limit :slight_smile:

Not sure what Gemma is or why it appeared here.

Gemma 3 is one of the most popular locally run AI models used with iMatch for picture analysis. It appeared here as a means of processing pictures locally instead of sending them to American cloud services, and as an alternative to believing that using a Linux computer would save you from the problems with cloud services. In my case, a local AI model prevents me (and could prevent others) from having to use, and expose myself to, American cloud services, which some apparently feel is problematic. Personally I don’t much care, except that I am fed up with supporting United Crook.

I can tell you an interesting thing: the big cloud models actually have lower latency and are slightly faster than Gemma 3 12b for these jobs. The new OpenAI GPT 5.2 is definitely faster than earlier OpenAI models, and both Google Gemini Flash 2.5 and Gemini 3 are fast and efficient with low latency. I’m pretty sure they would be at least as fast as the locally run PhotoLab 9 premade AI models, which are not fast at all. You would be surprised how fast the cloud models are.

Software like iMatch just sends small thumbnails to the cloud, and that is really fast if you send a batch; processing takes maybe two seconds per picture. Editing pictures with the same flow in a future version of PhotoLab wouldn’t necessarily need to do anything more than send the same thumbnails to the cloud and then apply locally the masking instructions the cloud has identified, just as it would have done with a local AI model. That way even machines with less local power could use the premade AI presets they simply can’t use today.
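For illustration only, a minimal sketch of that thumbnail idea, assuming Pillow is installed; the size and quality values are arbitrary choices, and this is not iMatch’s or DxO’s actual code:

```python
# Downscale a photo and base64-encode the JPEG bytes so an analysis service
# (local or in the cloud) never sees the full-resolution file.
import base64
import io

from PIL import Image  # pip install pillow

def thumbnail_b64(path: str, max_side: int = 640, quality: int = 80) -> str:
    with Image.open(path) as img:
        img = img.convert("RGB")             # drop alpha/CMYK so it can be saved as JPEG
        img.thumbnail((max_side, max_side))  # in-place downscale, keeps aspect ratio
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
    return base64.b64encode(buf.getvalue()).decode("ascii")

if __name__ == "__main__":
    b64 = thumbnail_b64("example.jpg")
    print(f"thumbnail payload: ~{len(b64) / 1024:.0f} KB of base64 text")
```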

Deciding what to mask with a cloud service doesn’t necessarily demand sending 100 MP of pixels to the cloud. Even though I have never used it myself, Topaz uses cloud services for some editing, so it would not be the first time anyone has tried even that.

Already today, local editing with the AI models DxO is using does not work for many people because their machines are not powerful enough. Isn’t that alone a fact that proves some sort of cloud-based alternative has to be offered as well for people who refuse to upgrade today because of the hardware costs? The only realistic alternative for that group will be migration if an option like the one discussed above does not become available to them in the near future.

What we are talking about here has very little to do with machine learning, so Wolfgang is a bit off track there. Here it is just about image analysis performed locally or in the cloud, and applying the result to the pictures locally as metadata or as instructions for PhotoLab to apply. I don’t know why you brought up ML here, Wolfgang. It is not relevant at all in this case.

This is about AI autonomy versus dependence on American cloud services, and that’s why some people like myself are preparing for a decoupling from American AI services. I don’t see any point at all in dropping Windows 11 or Mac here, since we have already paid for them, and switching to Linux will not make any difference at all when it comes to protecting us from the downsides of depending on US clouds, whether they are AI-related or not. Those services are totally agnostic to what OS we are using.

I think the only way to really be sure you are not exposed to the downsides of US AI cloud services is to run your (mostly completely free) AI models locally. There is a vast selection of free AI models from all over the world, and they come in all sorts of sizes, from ones that run on just 6-8 GB of VRAM to others demanding 64, 128 or 256 GB. Even a small model like Gemma 3 4b, which can run on most machines, is surprisingly competent at generating Descriptions and Keywords with iMatch DAM.