Hello, I'm very sorry for the late reply :(
I had a lot of work and didn't have time to answer properly.
Thanks a lot for your answer, which was very informative and much faster than mine.
Regarding my Script-Fu command not working: the problem stemmed from my misunderstanding of GIMP Script-Fu (no wonder; for me it's the most terrible language I've ever seen :) ).
The predefined batch commands, e.g. (batch-gimp-lqr-full-use-id), expect a prepared XCF image with layers, while I wanted to use JPG files directly from disk. I wrote my own script with just the options I needed; here it is:
As said, the only difference is that I build the layers from files and then call the plugin.
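For anyone reading along, a custom Script-Fu batch call like this is typically driven from a headless GIMP invocation (`gimp -i -b`). Below is a minimal Python sketch of assembling that command line; the wrapper name `my-lqr-batch` and its parameters are made-up placeholders, not the actual script above.

```python
import shlex


def gimp_batch_command(script_call):
    """Build a headless GIMP invocation that runs one Script-Fu expression.

    `script_call` is the Script-Fu call as a string, e.g. a custom wrapper
    that loads a JPG, builds the layers, and invokes the LQR plugin.
    """
    return [
        "gimp", "-i",           # -i: run without a GUI (batch mode)
        "-b", script_call,      # -b: evaluate this Script-Fu expression
        "-b", "(gimp-quit 0)",  # exit GIMP when the script is done
    ]


# Hypothetical wrapper name and arguments, for illustration only:
cmd = gimp_batch_command('(my-lqr-batch "in.jpg" "out.jpg" 800 600)')
print(shlex.join(cmd))
```

Running the printed command in a shell would process one file; looping over a directory of JPGs then gives the batch behaviour without preparing XCF files first.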
Regarding possible optimisations:
Reducing the discard layer to 1 channel helped.
I haven't found time to rebuild the plugin yet, but I had an idea for how the plugin might run with less memory; correct me if this is wrong/stupid:
If I understand it correctly, the plugin builds a directed, weighted graph along one axis, with colour differences as weights. It then finds shortest paths subject to some options, and these paths become the discarded or copied seams.(?)
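To make my understanding concrete: the seam step described above can be sketched as a shortest-path dynamic program over a per-pixel "energy" map (the colour differences). This is a generic seam-carving sketch, not the plugin's actual code:

```python
def min_vertical_seam(energy):
    """Return the column index, per row, of the cheapest connected
    top-to-bottom seam in `energy` (a list of rows of non-negative costs).
    Each step may move at most one column left or right."""
    h, w = len(energy), len(energy[0])
    # Forward pass: cost[y][x] = cheapest seam from the top row to (x, y).
    cost = [energy[0][:]]
    for y in range(1, h):
        row = []
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            row.append(energy[y][x] + min(cost[y - 1][lo:hi]))
        cost.append(row)
    # Backtrack from the cheapest cell in the bottom row.
    seam = [cost[-1].index(min(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        window = cost[y][lo:hi]
        seam.append(lo + window.index(min(window)))
    seam.reverse()
    return seam
```

For example, with a diagonal of cheap pixels, `min_vertical_seam([[1, 9, 9], [9, 1, 9], [9, 9, 1]])` returns `[0, 1, 2]`: the seam follows the low-cost diagonal. The full DP table is what I suspect dominates memory for large images.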
My idea is:
I presume the graph itself, with the operations done on it, takes the most memory, not the cached seams.
Instead of using tiles, the image could be split into strips. E.g. when scaling happens along x, the strips would run along the y axis, giving strips (xmin,ymin,x1,ymax), (x1,ymin,x2,ymax), ... (xn,ymin… These could be processed separately, keeping only a certain number of the best path results from each. Then another graph could be built over these strip results, with whole seams as nodes, and evaluated. I'm not sure whether the strips would have to overlap by 1 pixel for this.
Well, just ideas. I hope to buy a stronger computer this month and test a brute-force approach with 12 GB of RAM ;)
Btw, my project is large collages auto-generated from text by a piece of software; here's a sample: