April 28, 2016
On Thursday, 28 April 2016 at 17:29:05 UTC, Dmitry Olshansky wrote:
>
> What's the benefit? I mean, after CTFE decompression they are going to add as much weight to the binary as the decompressed files.
>
> Compression on the other hand might be helpful to avoid precompressing everything beforehand.

The compiler can load files faster that are used by CTFE only and would be stripped out by the linker later.
And keep in mind that it also works at runtime.

Memory is scarce at compile time, and this can help reduce the memory requirements once a bit of structure is added on top.

April 28, 2016
On Thursday, 28 April 2016 at 17:58:50 UTC, Stefan Koch wrote:
> On Thursday, 28 April 2016 at 17:29:05 UTC, Dmitry Olshansky wrote:
>>
>> What's the benefit? I mean, after CTFE decompression they are going to add as much weight to the binary as the decompressed files.
>>
>> Compression on the other hand might be helpful to avoid precompressing everything beforehand.
>
> The compiler can load files faster that are used by CTFE only and would be stripped out by the linker later.
> And keep in mind that it also works at runtime.
>
> Memory is scarce at compile time, and this can help reduce the memory requirements once a bit of structure is added on top.

Considering the speed and memory consumption of CTFE, I'd bet on the exact reverse.

Also, the damn thing is allocating in a loop.

April 28, 2016
On 28-Apr-2016 21:31, deadalnix wrote:
> On Thursday, 28 April 2016 at 17:58:50 UTC, Stefan Koch wrote:
>> On Thursday, 28 April 2016 at 17:29:05 UTC, Dmitry Olshansky wrote:
>>>
>>> What's the benefit? I mean, after CTFE decompression they are going
>>> to add as much weight to the binary as the decompressed files.
>>>
>>> Compression on the other hand might be helpful to avoid
>>> precompressing everything beforehand.
>>
>> The compiler can load files faster that are used by CTFE only and
>> would be stripped out by the linker later.
>> And keep in mind that it also works at runtime.
>>
>> Memory is scarce at compile time, and this can help reduce the memory
>> requirements once a bit of structure is added on top.
>
> Considering the speed and memory consumption of CTFE, I'd bet on the
> exact reverse.

Yeah, the whole idea of using CTFE to save compile-time memory sounds like a bad joke to me ;)
>
> Also, the damn thing is allocating in a loop.
>


-- 
Dmitry Olshansky
April 28, 2016
On Thursday, 28 April 2016 at 18:31:25 UTC, deadalnix wrote:
>
> Also, the damn thing is allocating in a loop.

I would like to have an allocation primitive for CTFE use.
But that would not help too much, as I don't know the size I need in advance.
Storing it in the header is optional, and unfortunately lz4c does not store it by default.

Decompressing the LZ family never takes more space than the uncompressed size of the data.
The working set is often bounded: in the case of LZ4 it's restricted to 4k in the frame format,
and to 64k by design.
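Since lz4c does not emit the decompressed size by default, one workaround is to prepend it to the payload yourself so the decompressor can allocate the exact buffer up front. A minimal sketch (in Python for illustration; the thread's context is D, and `wrap_with_size`/`read_size` are hypothetical names, not part of any LZ4 tool):

```python
import struct

def wrap_with_size(compressed: bytes, original_size: int) -> bytes:
    """Prepend the uncompressed size as a little-endian uint32."""
    return struct.pack("<I", original_size) + compressed

def read_size(framed: bytes) -> tuple[int, bytes]:
    """Split the 4-byte size prefix from the compressed payload."""
    (size,) = struct.unpack("<I", framed[:4])
    return size, framed[4:]
```

The 4-byte prefix is a fixed cost per file, but it removes the need to guess the output buffer size or grow it during decompression.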


April 28, 2016
On Thursday, 28 April 2016 at 17:29:05 UTC, Dmitry Olshansky wrote:
>
> Compression on the other hand might be helpful to avoid precompressing everything beforehand.

I fear that is going to be pretty slow and will eat at least 1.5x the memory of the file you are trying to store, if you want a good compression ratio.

Then again... it might be fast enough to still be useful.

April 28, 2016
On Wednesday, 27 April 2016 at 06:55:46 UTC, Walter Bright wrote:
>
> Sounds nice. I'm curious how it would compare to:
>
> https://www.digitalmars.com/sargon/lz77.html
>
> https://github.com/DigitalMars/sargon/blob/master/src/sargon/lz77.d

lz77 took 176 hnsecs uncompressing
lz4 took 92 hnsecs uncompressing

And another test in reversed order using the same data.

lz4 took 162 hnsecs uncompressing
lz77 took 245 hnsecs uncompressing


April 28, 2016
On Thursday, 28 April 2016 at 20:12:58 UTC, Stefan Koch wrote:
> On Wednesday, 27 April 2016 at 06:55:46 UTC, Walter Bright wrote:
>>
>> Sounds nice. I'm curious how it would compare to:
>>
>> https://www.digitalmars.com/sargon/lz77.html
>>
>> https://github.com/DigitalMars/sargon/blob/master/src/sargon/lz77.d
>
> lz77 took 176 hnsecs uncompressing
> lz4 took 92 hnsecs uncompressing
>
> And another test in reversed order using the same data.
>
> lz4 took 162 hnsecs uncompressing
> lz77 took 245 hnsecs uncompressing

The compression ratio is worse, though.
But that is partially fixable.
April 30, 2016
On Thursday, 28 April 2016 at 20:58:25 UTC, Stefan Koch wrote:
> The compression ratio is worse, though.
> But that is partially fixable.

I have to go back on that: due to restrictions in the LZ4 spec, many _very_ small files will have significant overhead.

Work on improving the compression ratio is ongoing, but there is no more than 0.5-1.5% improvement to expect.
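To see why very small files suffer, here is a rough back-of-the-envelope calculation (a sketch; the exact byte counts assume a minimal LZ4 frame with no content checksum, and real frames can carry more):

```python
# Assumed fixed framing costs for a minimal LZ4 frame:
# 4-byte magic number + 3-byte frame descriptor (FLG/BD/HC)
# + 4-byte block size field + 4-byte EndMark.
FRAME_OVERHEAD = 4 + 3 + 4 + 4  # 15 bytes total

def overhead_percent(payload_size: int) -> float:
    """Framing overhead relative to the stored payload size."""
    return 100.0 * FRAME_OVERHEAD / payload_size

# A 100-byte file pays 15% in framing alone, before any compression
# gain; at 10 KiB the same 15 bytes are well under 0.2%.
```

The fixed framing cost dominates tiny files no matter how good the entropy coding gets, which is why only fractional-percent ratio improvements remain.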