May 18, 2015
On 05/18/2015 05:26 AM, John Colvin wrote:
> On Monday, 18 May 2015 at 11:40:13 UTC, thedeemon wrote:
>> On Monday, 18 May 2015 at 10:24:25 UTC, Dennis Ritchie wrote:
>>
>>> No, afraid not. Function capacity is not an analogue of fill-pointers!
>>
>> It's exactly the same.
>
> But in D capacity is affected by other things.
>
> auto a = new int[20];
> auto b = a[0..10];
> //can't now append to b without re-allocating or using assumeSafeAppend.

Perfect opportunity to inject my newly-discovered issue with capacity:

void main()
{
    auto a = new int[20];
    foo(a);
    //can't now append to a
}

void foo(const(int)[] p)
{
    p ~= 42;
}

Ali

May 18, 2015
On 5/18/15 1:43 PM, Ali Çehreli wrote:
> On 05/18/2015 05:26 AM, John Colvin wrote:
>> On Monday, 18 May 2015 at 11:40:13 UTC, thedeemon wrote:
>>> On Monday, 18 May 2015 at 10:24:25 UTC, Dennis Ritchie wrote:
>>>
>>>> No, afraid not. Function capacity is not an analogue of fill-pointers!
>>>
>>> It's exactly the same.
>>
>> But in D capacity is affected by other things.
>>
>> auto a = new int[20];
>> auto b = a[0..10];
>> //can't now append to b without re-allocating or using assumeSafeAppend.
>
> Perfect opportunity to inject my newly-discovered issue with capacity:
>
> void main()
> {
>      auto a = new int[20];
>      foo(a);
>      //can't now append to a

Well, sure you can :)

a ~= 5; // works fine

But I understand you mean that an append to 'a' will reallocate.

> }
>
> void foo(const(int)[] p)
> {
>      p ~= 42;
> }
>

Not an issue, intended behavior.

For instance if foo did this:

p ~= 42;
someGlobal = p;

Now, if a later append to 'a' weren't prevented from stomping on that memory, someGlobal's data would get stomped.
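Spelled out as a full program, it would be something like this (untested; someGlobal is just for illustration):

const(int)[] someGlobal;

void foo(const(int)[] p)
{
    p ~= 42;         // extends into a's block if there is spare capacity
    someGlobal = p;  // someGlobal now points into a's block, including the 42
}

void main()
{
    auto a = new int[20];
    foo(a);
    a ~= 5;  // if this wrote in place it would overwrite someGlobal[20];
             // the runtime reallocates 'a' instead, so someGlobal is safe
}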

BTW, the way to prevent this is to do something like:

a) dup p on append
b) const x = p; scope(exit) x.assumeSafeAppend();
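
In foo, option b) would look something like this (untested):

void foo(const(int)[] p)
{
    const x = p;                       // remember the slice exactly as we got it
    scope(exit) x.assumeSafeAppend();  // hand the capacity back to the caller
    p ~= 42;                           // use the block as scratch space in between
}

That of course only works when foo doesn't hand the appended slice to anyone else, unlike the someGlobal example above.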

Hm... an interesting wrapper type would be an 'always reallocating' slice type. It would have an extra boolean to indicate it must realloc upon append (which would then clear after the first append).
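
Something along these lines, maybe (very rough sketch, untested; the name is made up):

struct ReallocOnAppend(T)
{
    T[] data;
    bool mustRealloc = true;  // starts out set when wrapping someone else's slice

    void opOpAssign(string op : "~")(T value)
    {
        if (mustRealloc)
        {
            data = data.dup;      // first append moves to a fresh block...
            mustRealloc = false;  // ...after that, append normally
        }
        data ~= value;
    }
}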

-Steve
May 18, 2015
On Monday, 18 May 2015 at 17:43:50 UTC, Ali Çehreli wrote:
> On 05/18/2015 05:26 AM, John Colvin wrote:
>> On Monday, 18 May 2015 at 11:40:13 UTC, thedeemon wrote:
>>> On Monday, 18 May 2015 at 10:24:25 UTC, Dennis Ritchie wrote:
>>>
>>>> No, afraid not. Function capacity is not an analogue of fill-pointers!
>>>
>>> It's exactly the same.
>>
>> But in D capacity is affected by other things.
>>
>> auto a = new int[20];
>> auto b = a[0..10];
>> //can't now append to b without re-allocating or using assumeSafeAppend.
>
> Perfect opportunity to inject my newly-discovered issue with capacity:
>
> void main()
> {
>     auto a = new int[20];
>     foo(a);
>     //can't now append to a
> }
>
> void foo(const(int)[] p)
> {
>     p ~= 42;
> }
>
> Ali

I don't understand what's counterintuitive here. Slices are just pointer and length, everything else is global state.
May 18, 2015
On 05/18/2015 11:19 AM, John Colvin wrote:
> On Monday, 18 May 2015 at 17:43:50 UTC, Ali Çehreli wrote:
>> On 05/18/2015 05:26 AM, John Colvin wrote:
>>> On Monday, 18 May 2015 at 11:40:13 UTC, thedeemon wrote:
>>>> On Monday, 18 May 2015 at 10:24:25 UTC, Dennis Ritchie wrote:
>>>>
>>>>> No, afraid not. Function capacity is not an analogue of fill-pointers!
>>>>
>>>> It's exactly the same.
>>>
>>> But in D capacity is affected by other things.
>>>
>>> auto a = new int[20];
>>> auto b = a[0..10];
>>> //can't now append to b without re-allocating or using assumeSafeAppend.
>>
>> Perfect opportunity to inject my newly-discovered issue with capacity:
>>
>> void main()
>> {
>>     auto a = new int[20];
>>     foo(a);
>>     //can't now append to a
>> }
>>
>> void foo(const(int)[] p)
>> {
>>     p ~= 42;
>> }
>>
>> Ali
>
> I don't understand what's counterintuitive here. Slices are just pointer
> and length, everything else is global state.

Nothing counterintuitive. I have discovered recently that when there are multiple slices with equal length, capacity is shared until one of them appends, in which case that first appender wins the capacity.

I have always known about non-stomping and why this has to be so. What was new to me is the initial suspense when we don't know who will win the capacity. Just news to me.

In all other cases where there is one longest slice (like your example), then there is only one owner of capacity.
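
For example, something along these lines (untested):

void main()
{
    auto a = new int[10];
    a.reserve(20);        // make sure the block has spare room
    auto b = a[];         // same block, same length

    assert(a.capacity == b.capacity);  // for now the capacity is shared
    assert(a.capacity >= 20);

    b ~= 1;               // b appends first and "wins" the capacity
    assert(b.capacity > 0);
    assert(a.capacity == 0);  // an append to a would now reallocate
}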

Ali

May 18, 2015
On 05/18/2015 10:52 AM, Steven Schveighoffer wrote:

> On 5/18/15 1:43 PM, Ali Çehreli wrote:

>> void main()
>> {
>>      auto a = new int[20];
>>      foo(a);
>>      //can't now append to a
>
> Well, sure you can :)
>
> a ~= 5; // works fine
>
> But I understand you mean that an append to 'a' will reallocate
>
>> }
>>
>> void foo(const(int)[] p)
>> {
>>      p ~= 42;
>> }

> BTW, the way to prevent this is to do something like:
>
> a) dup p on append
> b) const x = p; scope(exit) x.assumeSafeAppend();

Exactly! That recent discovery of mine made me come up with this guideline: "Never append to a parameter slice."

No, I may not follow that guideline myself, but it makes sense to me:


http://forum.dlang.org/thread/mi21dq$l6l$1@digitalmars.com#post-mi739e:241v83:241:40digitalmars.com

Ali

May 18, 2015
On Monday, 18 May 2015 at 17:14:46 UTC, Steven Schveighoffer wrote:
> capacity is analogous to the number of elements in the vector (as returned by array-dimension according to https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node162.html).
>
> arr.length is analogous to the fill pointer.
>
> example:
>
> int[] arr = new int[](5);
> assert(arr.capacity > 5);
> assert(arr.length == 5);
>
> arr.reserve(100); // expand arr memory block to be able to hold *at least* 100 ints
>
> assert(arr.capacity >= 100);
> assert(arr.length == 5);
>
> auto ptr = arr.ptr; // for later assert
>
> arr ~= 1; // increment length by 1, 'fill in' tail of array with '1'
>
> // this should demonstrate how it works
> assert(arr.length == 6); // new fill pointer
> assert(arr.capacity >= 100); // capacity unchanged
> assert(arr.ptr is ptr); // array still lives in same memory block
>
> Apologies for not translating to lisp, I don't know it.
>
> -Steve

Thank you. This is just what I needed!
May 18, 2015
On 5/18/15 2:40 PM, Ali Çehreli wrote:
> On 05/18/2015 11:19 AM, John Colvin wrote:
>  > On Monday, 18 May 2015 at 17:43:50 UTC, Ali Çehreli wrote:
>  >> On 05/18/2015 05:26 AM, John Colvin wrote:
>  >>> On Monday, 18 May 2015 at 11:40:13 UTC, thedeemon wrote:
>  >>>> On Monday, 18 May 2015 at 10:24:25 UTC, Dennis Ritchie wrote:
>  >>>>
>  >>>>> No, afraid not. Function capacity is not an analogue of fill-pointers!
>  >>>>
>  >>>> It's exactly the same.
>  >>>
>  >>> But in D capacity is affected by other things.
>  >>>
>  >>> auto a = new int[20];
>  >>> auto b = a[0..10];
>  >>> //can't now append to b without re-allocating or using assumeSafeAppend.
>  >>
>  >> Perfect opportunity to inject my newly-discovered issue with capacity:
>  >>
>  >> void main()
>  >> {
>  >>     auto a = new int[20];
>  >>     foo(a);
>  >>     //can't now append to a
>  >> }
>  >>
>  >> void foo(const(int)[] p)
>  >> {
>  >>     p ~= 42;
>  >> }
>  >>
>  >> Ali
>  >
>  > I don't understand what's counterintuitive here. Slices are just pointer
>  > and length, everything else is global state.
>
> Nothing counterintuitive. I have discovered recently that when there are
> multiple slices with equal length, capacity is shared until one of them
> appends, in which case that first appender wins the capacity.
>
> I have always known about non-stomping and why this has to be so. What
> was new to me is the initial suspense when we don't know who will win
> the capacity. Just news to me.
>
> In all other cases where there is one longest slice (like your example),
> then there is only one owner of capacity.

It's no different. No slice "owns" the capacity, the runtime does. There can be multiple slices that access the capacity.

Also note that the longest slice doesn't necessarily have access to appending. All that is required is that the slice end lands on the array end:

int[] arr = new int[5];
auto arr2 = arr[4..5];     // short slice, but it ends where the used data ends
arr = arr[0..3];           // longer slice, but it no longer reaches the end
assert(arr.length > arr2.length);
assert(arr2.capacity);     // arr2 can still append in place
assert(arr.capacity == 0); // arr cannot

It is a difficult concept to wrap your head around. I should write a pseudo array type to demonstrate how the array runtime code works as an object instead of as a collection of obtuse runtime functions.
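
Something along those lines, as a toy model rather than the real implementation (untested):

struct FakeBlock
{
    int[] storage;  // the allocation
    size_t used;    // the in-use length the runtime tracks per block
}

struct FakeSlice
{
    FakeBlock* block;
    size_t start, end;  // the window this slice sees

    size_t length() { return end - start; }

    // only a slice whose end lands on block.used has any capacity
    size_t capacity() { return end == block.used ? block.storage.length - start : 0; }

    void append(int value)
    {
        if (capacity() > length())  // we own the tail and there is room
        {
            block.storage[block.used++] = value;
            ++end;                  // every other slice's capacity drops to 0
        }
        else                        // otherwise relocate to a fresh block
        {
            auto data = block.storage[start .. end].dup ~ value;
            block = new FakeBlock(data, data.length);
            start = 0;
            end = data.length;
        }
    }
}

Two FakeSlices over the same block with the same 'end' both report a capacity until one of them appends and bumps 'used' -- exactly the "who wins" situation from earlier in the thread.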

-Steve
May 18, 2015
On 5/18/15 2:45 PM, Ali Çehreli wrote:

> Exactly! That recent discovery of mine made me come up with this
> guideline: "Never append to a parameter slice."

I think this may not be an appropriate guideline. It's perfectly fine to append to a parameter slice. You just need to leave it the way you found it. Unless the point of the function is to append to it (maybe you return the result for example).
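
For example, something like this is fine (sketch; the name is made up):

// Appending is the point of this function, and the caller works with the result.
int[] appended(int[] arr, int value)
{
    arr ~= value;   // may extend in place or relocate; either way we return the slice
    return arr;
}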

-Steve
May 18, 2015
On 05/18/2015 11:52 AM, Steven Schveighoffer wrote:

> Also note that the longest slice doesn't necessarily have access to
> appending. All that is required is that the slice end lands on the array
> end:

That explains a lot. Thanks.

Ali

May 19, 2015
On Monday, 18 May 2015 at 16:40:30 UTC, Dennis Ritchie wrote:
> On Monday, 18 May 2015 at 12:49:56 UTC, Kagamin wrote:
>> Filling a buffer is usually done this way: http://dlang.org/phobos/std_stdio.html#.File.rawRead
>
> Here is an example task: there is a stream associated with, say, a socket, and it is written a few bytes at a time. To avoid hitting the socket on every write, we set up a buffer as an array with a fill-pointer. Bytes are added to the array with vector-push. When the buffer is full, we dump it to the stream and move the fill-pointer back to zero. How can this be done with rawRead or rawWrite?

If you want to implement a buffered output stream, it's done manually like this: https://github.com/D-Programming-Language/phobos/blob/master/std/stream.d#L1711 or this: https://github.com/schveiguy/phobos/blob/new-io3/std/io/text.d#L665
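
The core of it, with the slice length playing the role of the fill pointer, could look roughly like this (untested sketch; BufferedWriter, put and flush are made-up names, while File.rawWrite, reserve and assumeSafeAppend are the real library pieces):

import std.stdio : File;

struct BufferedWriter
{
    File sink;
    ubyte[] buf;   // buf.length plays the role of the fill pointer
    size_t cap;

    this(File sink, size_t capacity = 4096)
    {
        this.sink = sink;
        cap = capacity;
        buf.reserve(cap);        // like make-array with :fill-pointer 0
    }

    void put(ubyte b)            // vector-push equivalent
    {
        buf ~= b;
        if (buf.length >= cap)
            flush();
    }

    void flush()                 // dump the buffer, move the "fill pointer" to zero
    {
        if (buf.length)
        {
            sink.rawWrite(buf);
            buf.length = 0;
            buf.assumeSafeAppend();  // keep reusing the same block
        }
    }
}

You would call flush() one last time when you are done with the stream, of course.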