July 21, 2013
On 2013-07-21 08:45, Namespace wrote:
> But D isn't like Ada. It's more like C++, and there heap allocation is
> often used too.
> It would be really cool if we had allocators already.
> Something like:
> ----
> with (AllocatorX) { /// will use malloc and free instead of calling the GC
>      float[] arr;
>      arr ~= 42;
> }
> ----
>
> And I still don't know what a 'TLS scratch pad buffer' is.

Perhaps:

float[4000] scratchPadBuffer;

void foo ()
{
    // use scratchPadBuffer here
}

I guess he just refers to some temporary data you need during the execution of a function and don't care about at the end of it.

-- 
/Jacob Carlborg
July 21, 2013
On Sunday, 21 July 2013 at 09:16:47 UTC, Jacob Carlborg wrote:
> On 2013-07-21 08:45, Namespace wrote:
>> But D isn't like Ada. It's more like C++, and there heap allocation is
>> often used too.
>> It would be really cool if we had allocators already.
>> Something like:
>> ----
>> with (AllocatorX) { /// will use malloc and free instead of calling the GC
>>     float[] arr;
>>     arr ~= 42;
>> }
>> ----
>>
>> And I still don't know what a 'TLS scratch pad buffer' is.
>
> Perhaps:
>
> float[4000] scratchPadBuffer;
>
> void foo ()
> {
>     // use scratchPadBuffer here
> }
>
> I guess he just refers to some temporary data you need during the execution of a function and don't care about at the end of it.

But then I mostly have far too much storage, and occasionally a bit too little. It's not flexible. But maybe with a smaller granularity.
What about this:

----
import core.stdc.stdlib : malloc, realloc, free;
import core.stdc.string : memcpy;

struct Chunk(T, ushort maxSize = 1024) {
public:
	// Per-thread scratch pad, shared by all instances of this instantiation.
	static T[maxSize] _chunk;

	T* ptr;
	size_t length;
	size_t capacity;

	this(size_t capacity) {
		this.capacity = capacity;

		if (capacity <= maxSize)
			this.ptr = &_chunk[0];
		else
			this.ptr = cast(T*) malloc(this.capacity * T.sizeof);
	}

	@disable
	this(this);

	~this() {
		// Only free what was actually allocated on the heap.
		if (this.ptr && this.capacity > maxSize)
			free(this.ptr);
	}

	void ensureAddable(size_t capacity) {
		if (capacity > this.capacity) {
			immutable bool wasOnHeap = this.capacity > maxSize;
			this.capacity = capacity;

			if (this.capacity > maxSize) {
				if (wasOnHeap) {
					this.ptr = cast(T*) realloc(this.ptr, this.capacity * T.sizeof);
				} else {
					// Growing out of the static chunk: move to the heap and copy what we have.
					this.ptr = cast(T*) malloc(this.capacity * T.sizeof);
					memcpy(this.ptr, &_chunk[0], this.length * T.sizeof);
				}
			}
		}
	}

	void opOpAssign(string op : "~", U)(auto ref U item) {
		this.ptr[this.length++] = cast(T) item;
	}

	void opOpAssign(string op : "~", U, size_t n)(auto ref U[n] items) {
		foreach (ref U item; items) {
			this.ptr[this.length++] = cast(T) item;
		}
	}
}
----
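
For reference, a quick sketch of how such a Chunk could be used in the periodically called method from the original question (the function name and the loop are just placeholders):

----
void process(size_t n) {
	auto tmp = Chunk!float(n);   // uses the static chunk for n <= 1024, malloc beyond that
	foreach (i; 0 .. n)
		tmp ~= i;                // appends within the reserved capacity
	// tmp's destructor releases the heap allocation, if any, at the end of this scope
}
----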
July 21, 2013
Namespace:

> But D isn't like Ada.

But D should be a bit more like Ada, for such things :-)

Bye,
bearophile
July 21, 2013
On Sunday, 21 July 2013 at 10:30:16 UTC, bearophile wrote:
> Namespace:
>
>> But D isn't like Ada.
>
> But D should be a bit more like Ada, for such things :-)
>
> Bye,
> bearophile

I agree, but how? ;)
You know yourself that Walter & Andrei are very hard to convince of such things.
July 21, 2013
Namespace:

> I agree, but how? ;)
> You know yourself that Walter & Andrei are very hard to convince of such things.

I think there is only a small number of Ada ideas worth copying, like:
- Integral overflow checks;
- Ranged integers;
- A library of collections allocated in-place in the standard library (the Bounded Collections of Ada 2012);
- More deterministic multiprocessing;
- Nicer and more handy enumerated sets;
- No undefined behaviours;
- Stack-allocated fixed-length arrays;
- etc.

Some of those things are work in progress (no undefined behaviours), some of them can go in the D standard library (bounded collections, variable-length arrays with a bit of help from the compiler), and some of them can be implemented with help from the compiler (a partially library-defined integral type with overflow tests). Some of them can maybe be implemented once some bugs are sorted out and alias this gets better (ranged integers). Some of them are just currently not present in D (more deterministic multiprocessing), but D is supposed to have something to replace them ("scope", a better "shared", Walter is introducing a bit of object ownership behind the scenes, etc.).

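As an example for the ranged integers point, this is a minimal sketch of the kind of library type I have in mind, using alias this and only a runtime check in the constructor (the name Ranged is just illustrative, nothing like it exists in Phobos):

----
struct Ranged(int min, int max) {
    int value;

    this(int v) {
        assert(v >= min && v <= max, "value out of range");
        value = v;
    }

    // Lets the wrapper be used like a plain int in expressions.
    alias value this;
}

void main() {
    auto x = Ranged!(1, 10)(5);
    int y = x + 2;                  // 7, through alias this
    // auto z = Ranged!(1, 10)(42); // would fail the runtime check
}
----
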
And then it becomes a matter of using some of those idioms.

Bye,
bearophile
July 21, 2013
21-Jul-2013 00:46, Namespace wrote:
> On Saturday, 20 July 2013 at 20:22:56 UTC, Dmitry Olshansky wrote:
>> 21-Jul-2013 00:19, Namespace wrote:
>>> Let us assume we have a method of a class which is used often, the
>>> method is called periodically, and it must allocate an array of
>>> between 100 and 4000 elements every time. What would you do?
>>>
>>> 1. Simple: float[] array;
>>> 2. Reserve: float[] array; array.reserve(N); /// N is a parameter value
>>> 3. Use manual memory allocation (e.g. with malloc) and free the memory
>>> immediately at the end of the scope.
>>> 4. Use stack allocated memory (But maybe 4000 is too big?)
>>>
>>> Currently I would prefer #3, the manual memory allocation because I can
>>> then control when _exactly_ the memory is released. But I want to hear
>>> other opinions. :)
>>
>> 5. Keep a TLS scratch pad buffer (static class member) for said
>> 100-4000 floats and re-use it.
>
> TLS scratch pad buffer?

My analogy goes as follows: a chunk of memory for temporary needs => scratch pad (as in a sheet of paper for quick notes/sketches).

Something along the lines of:

class A{
	static float[] buffer;
	static this(){
		buffer = new float[as_big_as_it_gets];
	}

	void foo(){
		float[] tempAlloc = buffer[0..need_this_much];
		tempAlloc[] = 0.0;
		...
	}	
}

As long as foo is not called recursively, this should just work. Another thing that may wreck it is if foo is called in a Fiber context and uses yield internally.

One may as well fall back to option 3 in the rare cases where the scratch pad is too small to fit the bill.
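
Roughly like this, if the fallback is malloc/free as in option 3 (sizes and names are only illustrative):

import core.stdc.stdlib : malloc, free;

void foo(size_t n) {
	static float[1024] scratch;              // per-thread scratch pad
	bool onHeap = n > scratch.length;
	float* p = onHeap ? cast(float*) malloc(n * float.sizeof) : scratch.ptr;
	scope(exit) if (onHeap) free(p);         // free the oversized request at end of scope

	float[] tmp = p[0 .. n];
	tmp[] = 0.0f;
	// ... use tmp here; it must not escape foo
}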

-- 
Dmitry Olshansky
July 21, 2013
21-Jul-2013 00:42, Ali Çehreli wrote:
> On 07/20/2013 01:22 PM, Dmitry Olshansky wrote:
>
>  > 21-Jul-2013 00:19, Namespace wrote:
>  >> Let us assume we have a method of a class which is used often, the
>  >> method is called periodically, and it must allocate an array of
>  >> between 100 and 4000 elements every time. What would you do?
>
>  > 5. Keep a TLS scratch pad buffer (static class member) for said
>  > 100-4000 floats and re-use it.
>
> As long as the function finishes with that buffer, there shouldn't be
> reentrancy issues in general. But if the function calls the same
> function, say on another object, perhaps indirectly, then the two objects
> would be sharing the same buffer:
>

Yes, this case is problematic. It cannot be called recursively even with another instance.

> class C
> {
>      void foo() {
>          /* modify the static buffer here */
>          auto c = new C();
>          c.foo();
>          /* the static buffer has changed */
>      }
> }
>
> Ali
>


-- 
Dmitry Olshansky
July 21, 2013
> My analogy goes as follows: a chunk of memory for temporary needs => scratch pad (as in a sheet of paper for quick notes/sketches).
>
> Something along the lines of:
>
> class A{
> 	static float[] buffer;
> 	static this(){
> 		buffer = new float[as_big_as_it_gets];
> 	}
>
> 	void foo(){
> 		float[] tempAlloc = buffer[0..need_this_much];
> 		tempAlloc[] = 0.0;
> 		...
> 	}	
> }
>
> As long as foo is not called recursively, this should just work. Another thing that may wreck it is if foo is called in a Fiber context and uses yield internally.
>
> One may as well fall back to option 3 in the rare cases where the scratch pad is too small to fit the bill.

I really like the idea.
I've changed it a bit:
I have a float[1024] buffer which is used as long as the requested size is less than 1024. If it's greater, I temporarily allocate the whole array with new float[Size];
Any improvements? Or is 1024 too small / too big?
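
In code it's roughly this (foo and n are just placeholders):

----
void foo(size_t n) {
	static float[1024] buffer;      // reused across calls, one copy per thread
	float[] tmp = (n < buffer.length)
		? buffer[0 .. n]            // small requests reuse the static buffer
		: new float[n];             // bigger requests get a temporary GC array
	tmp[] = 0.0f;
	// ... use tmp; in the buffer case it must not escape foo
}
----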
July 21, 2013
Namespace:

>> I have a float[1024] buffer which is used as long as the requested size is less than 1024. If it's greater, I temporarily allocate the whole array with new float[Size];

That's one of the ways I solve similar problems.


>> Any improvements? Or is 1024 too small / too big?

An idea is to add some debug{} code that computes statistics for the array sizes...
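
Something as simple as this is enough to see what sizes actually occur (purely illustrative):

----
debug {
	size_t nCalls, maxRequested, totalRequested;
}

void foo(size_t n) {
	debug {
		nCalls++;
		totalRequested += n;
		if (n > maxRequested) maxRequested = n;
		// dump nCalls, totalRequested / nCalls and maxRequested at program end
	}
	// ... normal code path
}
----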

Bye,
bearophile
July 21, 2013
On Sunday, 21 July 2013 at 21:31:08 UTC, bearophile wrote:
> Namespace:
>
>> I have a float[1024] buffer which is used as long as the requested size is less than 1024. If it's greater, I temporarily allocate the whole array with new float[Size];
>
> That's one of the ways I solve similar problems.
>
>
>> Any improvements? Or is 1024 too small / too big?
>
> An idea is to add some debug{} code that computes statistics for the array sizes...
>
> Bye,
> bearophile

Too much effort. But if some of you already have such experience, I would gladly profit from it. ;)