January 19, 2016
On Tuesday, 19 January 2016 at 11:56:38 UTC, tcak wrote:
> On Tuesday, 19 January 2016 at 10:09:01 UTC, Ola Fosheim Grøstad wrote:
>> On Monday, 18 January 2016 at 05:59:15 UTC, tcak wrote:
>>> Is there anything like this in Phobos, or shall I start my own implementation?
>>
>> It isn't really clear to me what you are trying to do. IIRC the C++ deque is usually implemented as an array of pointers to fixed-size memory blocks.
>>
>> Is that what you are trying to achieve?
>
> First of all, I have started the implementation, and part of it is completed.
>
> To answer your question, I can explain it as follows:
>
> For example, say I have 3 memory blocks.
>
> Memory block 1: Ptr = 0x1000, Len = 100
> Memory block 2: Ptr = 0x2000, Len = 5
> Memory block 3: Ptr = 0x3000, Len = 150
>
> You append those three memory blocks to the FragmentedMemory class. It then provides you with an interface as if those three memory blocks were consecutive.
>
> Let's say: fragmem[ 125 ] = 5;
>
> Because the index 125 is in memory block 3, the value is written to memoryBlock3[ 20 ];
> (125 = 100 + 5 + 20)
>
> Reading works the same way. I think you can come up with use cases on your own.
>
> Currently, the set, get, and append operations are completed.
>
> I will implement bulk memory copy and assign operations as well.

If the number of blocks is known at compile time, use chain [1]; otherwise use joiner [2].
At the moment joiner does not provide opIndex, opSlice, length and such, but this can be fixed.

[1] http://dlang.org/phobos/std_range.html#.chain
[2] http://dlang.org/phobos/std_algorithm_iteration.html#joiner
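For instance, a minimal sketch of the chain approach over the block sizes from the example above (block1..block3 and view are just placeholder names):

import std.range : chain;

void main()
{
    // Block sizes borrowed from the example above: 100, 5 and 150 bytes.
    auto block1 = new ubyte[](100);
    auto block2 = new ubyte[](5);
    auto block3 = new ubyte[](150);

    // chain presents the three arrays as one consecutive random-access range.
    auto view = chain(block1, block2, block3);
    assert(view.length == 255);

    // Index 125 lands in the third block at offset 20 (125 - 100 - 5 = 20).
    block3[20] = 5;
    assert(view[125] == 5);
}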
January 19, 2016
On Tue, 19 Jan 2016 10:09:01 +0000, Ola Fosheim Grøstad wrote:

> On Monday, 18 January 2016 at 05:59:15 UTC, tcak wrote:
>> Is there anything like this in Phobos, or shall I start my own implementation?
> 
> It isn't really clear to me what you are trying to do. IIRC the C++ deque is usually implemented as an array of pointers to fixed-size memory blocks.
> 
> Is that what you are trying to achieve?

It's a rope-style data structure, but for void[] rather than strings.

https://en.wikipedia.org/wiki/Rope_(data_structure)

An interesting property of this is that, while you can issue pointers to memory within this rope, it takes some care to ensure that a pointer stays within the rope as it advances. Instead of just incrementing the pointer, you would need to call a function on the rope so that it can promote the pointer to the next memory segment as appropriate.

This also means you can't store a multibyte object in the rope and use it in place. (Consider: the first block is one byte, the second is seven bytes; you store a ulong in the rope.) You must always copy it out, use it, and optionally store it back.

A friendly API would have convenience methods for mutating structured data within the rope.

This is why just using joiner isn't that friendly, even if joiner did have opSlice, opIndex, etc.
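To make the copy-out / copy-back pattern concrete, here is a rough sketch assuming a hypothetical fragmented buffer type that only exposes byte-level opIndex / opIndexAssign (fragmem, loadUlong and storeUlong are made-up names, not an existing API):

import std.bitmanip : littleEndianToNative, nativeToLittleEndian;

// Reads a ulong that may span several blocks by copying its bytes out first.
ulong loadUlong(Buf)(Buf fragmem, size_t offset)
{
    ubyte[ulong.sizeof] tmp;
    foreach (i; 0 .. ulong.sizeof)
        tmp[i] = fragmem[offset + i];        // copy out, byte by byte
    return littleEndianToNative!ulong(tmp);
}

// Writes the value back the same way, one byte at a time.
void storeUlong(Buf)(Buf fragmem, size_t offset, ulong value)
{
    ubyte[ulong.sizeof] tmp = nativeToLittleEndian(value);
    foreach (i; 0 .. ulong.sizeof)
        fragmem[offset + i] = tmp[i];        // copy back, byte by byte
}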
January 20, 2016
On Mon, 18 Jan 2016 05:59:15 +0000,
tcak <1ltkrs+3wyh1ow7kzn1k@sharklasers.com> wrote:

> Due to a need, I will start implementing a distributed memory system.
> 
> The idea is this:
> 
> Let's say you have allocated 1 GiB of memory. This memory is divided into 4 KiB blocks.
> 
> After some reserve and free operations, now only blocks 0, 12, and 13 are free to be allocated.
> 
> The problem is that those memory blocks are not consecutive.

Well ... there is a way that is a bit arcane. If you don't
need a lot of such remappings and you use page-sized blocks
anyway, you could remap the scattered virtual memory to a new
consecutive range and be done.
Basically you reserve a new uncommitted (i.e. not backed by
physical memory) virtual memory range, remap your scattered
blocks into that range in the desired order and then release
the original blocks. I used this technique in a circular
buffer implementation, mirroring the start of the buffer to the
end, thus avoiding the wrapping problems that normally occur
when accessing data close to the end in a traditional
circular buffer.
Caveats: You'd need to allocate/free virtual memory directly
in multiples of the allocation granularity (which is the same
as the page size on POSIX) and you put some stress on the
virtual memory system. A process may have at most 2^16
memory mappings at a time on my Linux system.
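A rough Linux-only sketch of that remapping step in D (mremap and
its flags are declared by hand here because they are Linux-specific
and may not be in core.sys.posix; error handling is mostly elided):

import core.sys.posix.sys.mman;

// Linux-specific; values taken from <sys/mman.h>.
extern (C) void* mremap(void* oldAddr, size_t oldSize, size_t newSize, int flags, ...);
enum MREMAP_MAYMOVE = 1;
enum MREMAP_FIXED   = 2;

// Moves page-aligned, page-sized blocks so that they form one consecutive range.
// Returns the base of the new range, or null on failure.
void* gatherBlocks(void*[] blocks, size_t blockSize)
{
    // Reserve an uncommitted (PROT_NONE) address range big enough for all blocks.
    // MAP_ANON may be spelled MAP_ANONYMOUS depending on the platform headers.
    void* base = mmap(null, blocks.length * blockSize, PROT_NONE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
    if (base is MAP_FAILED)
        return null;

    foreach (i, blk; blocks)
    {
        void* target = cast(ubyte*) base + i * blockSize;
        // Move the existing mapping into its slot in the new range.
        if (mremap(blk, blockSize, blockSize,
                   MREMAP_MAYMOVE | MREMAP_FIXED, target) is MAP_FAILED)
            return null;   // a real version would unmap the reserved range here
    }
    return base;
}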

-- 
Marco
January 20, 2016
On Wednesday, 20 January 2016 at 07:58:20 UTC, Marco Leise wrote:
> On Mon, 18 Jan 2016 05:59:15 +0000,
> tcak <1ltkrs+3wyh1ow7kzn1k@sharklasers.com> wrote:
>
>> Due to a need, I will start implementing a distributed memory system.
>> 
>> The idea is this:
>> 
>> Let's say you have allocated 1 GiB of memory. This memory is divided into 4 KiB blocks.
>> 
>> After some reserve and free operations, now only blocks 0, 12, and 13 are free to be allocated.
>> 
>> The problem is that those memory blocks are not consecutive.
>
> Well ... there is a way that is a bit arcane. If you don't
> need a lot of such remappings and you use page-sized blocks
> anyway, you could remap the scattered virtual memory to a new
> consecutive range and be done.
> Basically you reserve a new uncommitted (i.e. not backed by
> physical memory) virtual memory range, remap your scattered
> blocks into that range in the desired order and then release
> the original blocks. I used this technique in a circular
> buffer implementation, mirroring the start of the buffer to the
> end, thus avoiding the wrapping problems that normally occur
> when accessing data close to the end in a traditional
> circular buffer.
> Caveats: You'd need to allocate/free virtual memory directly
> in multiples of the allocation granularity (which is the same
> as the page size on POSIX) and you put some stress on the
> virtual memory system. A process may have at most 2^16
> memory mappings at a time on my Linux system.

What you are saying is true, and I had it in mind before starting the
implementation, but it doesn't fit the shared memory structure. If a very big
part of memory is already in use, there won't be any space left to allocate a
consecutive region to hold the contents of the non-consecutive memory blocks.

Anyway, I am done with the implementation, at least implemented as a "shared" class.
I am aware that, because of the index calculations, it won't be as fast as linear
memory access, but it solves the problem at least. I am writing unit tests right now.
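To give an idea of those calculations: they essentially map a global index to a
(block, local offset) pair, roughly like this simplified sketch (not the real
"shared" implementation; FragmentedView is just a placeholder name):

import std.range : assumeSorted;
import std.typecons : tuple;

// Hypothetical fragmented view; a binary search over cumulative block ends
// translates a global index into (block number, local offset).
struct FragmentedView
{
    private ubyte[][] blocks;  // the appended memory blocks
    private size_t[] ends;     // cumulative end index of each block

    void append(ubyte[] block)
    {
        blocks ~= block;
        ends ~= (ends.length ? ends[$ - 1] : 0) + block.length;
    }

    // This extra lookup is why access is slower than indexing one linear array.
    private auto locate(size_t index)
    {
        auto blockNo = ends.assumeSorted.lowerBound(index + 1).length;
        auto localOffset = index - (blockNo ? ends[blockNo - 1] : 0);
        return tuple(blockNo, localOffset);
    }

    ubyte opIndex(size_t index)
    {
        auto pos = locate(index);
        return blocks[pos[0]][pos[1]];
    }

    void opIndexAssign(ubyte value, size_t index)
    {
        auto pos = locate(index);
        blocks[pos[0]][pos[1]] = value;
    }
}

unittest
{
    FragmentedView fragmem;
    fragmem.append(new ubyte[](100));
    fragmem.append(new ubyte[](5));
    fragmem.append(new ubyte[](150));
    fragmem[125] = 5;              // lands in the third block at offset 20
    assert(fragmem[125] == 5);
}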