August 12, 2016
On 12/08/16 22:07, Walter Bright wrote:
> On 8/12/2016 7:41 AM, Shachar Shemesh wrote:
>> That table was not expensive to compute, and its constantness wasn't
>> crucial
>> enough even for me to put a wrapper pointer and only access it through
>> it. Had
>> that not been the case, and had that table been more expensive to
>> compute, I'd
>> probably compute at compile time with an external tool.
>
> What I do (and is done in building DMD) is write a program (optabgen.c)
> to generate the tables and write a C++ source file, then compile the
> generated file into DMD.

Yes, I'm sorry. I meant to say "build time" instead of "compile time". That is precisely what I meant.

Shachar
August 12, 2016
On 12/08/16 17:50, mùsdl wrote:
> On Friday, 12 August 2016 at 14:41:14 UTC, Shachar Shemesh wrote:
>> I cannot say the same thing about the things in C++ I miss when I
>> write D.
>
> Be constructive and enumerate them.
>
>

I'll give some highlights, but these are mostly things I've already listed in this forum and in my lightning talk.

- No RAII support, despite the fact everybody here seems to think that D supports RAII.
- Recursive const makes many cases where I can use const in C++ (and enjoy the protection it provides) simply mutable in D.
- This one I have not complained about yet: operator overloads stepping on each other's toes. In my case, I have a container (with an opIndex that accepts a custom type, plus opOpAssign!"~"), and I place in it a struct with some operator overloads as well (I have not reduced the cause yet, hence no previous complaint about this one). So, when I write

Container[IndexType] ~= Type;

And the compiler assumes that means:
Container.opIndexOpAssign!"~"(Type, IndexType);

but since nothing like that is defined, the code doesn't compile. I ended up writing (actual code from the Weka code base):

blockIds[diskIdx].opOpAssign!"~"(makeBlockId(stripeIdx+i, placement.to!SlotIdx(diskIdx)));

It took me almost ten minutes, and consulting someone else, to find this solution. (A sketch of the overload the compiler was looking for follows this list.)

- GC. GC. GC. Some more GC.
- Integral type promotion in operations, and the constant need for casts.
- No warning for signed/unsigned comparisons. An unfailing source of bugs.
- No ref type.

These are just off the top of my head. There are more. Like I said, my frustrations with D are daily.
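
To make the operator-overload point concrete, here is roughly the overload the compiler is looking for when it sees `container[key] ~= value`. The Container type and names below are made up for illustration; this is not the Weka code:

----
// The compiler rewrites   c[key] ~= value   as
//   c.opIndexOpAssign!"~"(value, key)
struct Container(K, V)
{
    V[][K] data;

    void opIndexOpAssign(string op : "~")(V value, K key)
    {
        if (auto p = key in data)
            *p ~= value;        // append to an existing entry
        else
            data[key] = [value]; // first element for this key
    }
}

unittest
{
    Container!(int, string) c;
    c[3] ~= "hello";            // compiles once opIndexOpAssign!"~" exists
    assert(c.data[3] == ["hello"]);
}
----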

Shachar
August 12, 2016
On 08/12/2016 08:04 PM, Andrei Alexandrescu wrote:
> On 08/12/2016 01:21 PM, Steven Schveighoffer wrote:
[...]
>> shared int x;
>> ++x; // error, must use atomicOp.
>> x = x + 1; // OK(!)
>
> How is this broken and how should it behave? -- Andrei

I may be responsible for some confusion here, including my own.

Disclaimer: I'm far from being an expert on thread-safety. I may have misunderstandings about fundamentals.

Recently, I made a thread titled "implicit conversions to/from shared":
http://forum.dlang.org/post/nlth0p$1l7g$1@digitalmars.com

Before starting that thread, and before messing with atomicOp, I had assumed that D enforces atomic reads/writes on shared types. Then I noticed that I can read/write shared types that are too large for atomic ops.

Example:

----
alias T = ubyte[1000];

enum T a = 1;
enum T b = 2;

shared T x = a;

import core.thread: Thread;

void write()
{
    bool flip = false;
    foreach (i; 0 .. 1_000_000)
    {
        if (flip) x = a; else x = b; // non-atomic write
        flip = !flip;
    }
}

void main()
{
    auto t = new Thread(&write);
    t.start();
    foreach (i; 0 .. 1_000_000)
    {
        T r = x; // non-atomic read
        assert(r == a || r == b); // fails
    }
    t.join();
}
----

I tested a bunch of stuff, and I remember having a test case that failed with a primitive type like int, but I can't find or recreate it now, and a little googling suggests that reading/writing should be atomic for primitive types (on X86 with various restrictions). So I probably just had an error in that test.

But the above test case stands, and it also fails with -m32 and ulong:

----
alias T = ulong;

enum T a = 0x01_01_01_01__01_01_01_01;
enum T b = 0x02_02_02_02__02_02_02_02;

/* rest as above */
----

So, `shared int x; x = x + 1;` is ok, as far as I see now. But with other types, unsafe reads/writes are generated.
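
For reference, the explicit forms from core.atomic look like this; just a minimal sketch of the standard druntime calls, not a fix for the test case above:

----
import core.atomic : atomicLoad, atomicOp, atomicStore;

shared int counter;

void bump()
{
    atomicOp!"+="(counter, 1);          // one atomic read-modify-write

    // The "split" version: each step is atomic on its own, but the pair
    // is not, so another thread's update can be lost in between.
    int snapshot = atomicLoad(counter);
    atomicStore(counter, snapshot + 1);
}
----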
August 12, 2016
On 8/12/2016 12:14 PM, Shachar Shemesh wrote:
> On 12/08/16 22:07, Walter Bright wrote:
>> On 8/12/2016 7:41 AM, Shachar Shemesh wrote:
>>> That table was not expensive to compute, and its constantness wasn't
>>> crucial
>>> enough even for me to put a wrapper pointer and only access it through
>>> it. Had
>>> that not been the case, and had that table been more expensive to
>> compute, I'd
>>> probably compute at compile time with an external tool.
>>
>> What I do (and is done in building DMD) is write a program (optabgen.c)
>> to generate the tables and write a C++ source file, then compile the
>> generated file into DMD.
>
> Yes, I'm sorry. I meant to say "build time" instead of "compile time". That is
> precisely what I meant.

I'm surprised that I've never seen anyone else use such a technique. It's so unusual that some 'make' replacements can't even represent it.

I suppose it's like unittest and ddoc. Sure, you can do it with some contortions and/or some external tooling, but having it conveniently built in to the language changes everything.
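
For concreteness, a minimal sketch of the generator-program approach. The file names tablegen.d and tables.d are made up for the example; this is not DMD's actual optabgen:

----
// tablegen.d, a hypothetical generator run as a separate build step, e.g.:
//   dmd -run tablegen.d > tables.d
// The emitted tables.d is then compiled into the main program, so the table
// costs nothing at the program's own compile time or run time.
import std.stdio;

void main()
{
    writeln("module tables;");
    writeln("immutable int[256] squares = [");
    foreach (i; 0 .. 256)
        writefln("    %s,", i * i);
    writeln("];");
}
----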

August 12, 2016
On 8/12/2016 12:29 PM, Shachar Shemesh wrote:
> - No RAII support, despite the fact everybody here seems to think that D
> supports RAII.

Please explain.

> - Recursive const makes many cases where I can use const in C++ (and enjoy the
> protection it provides) simply mutable in D.

Right, in C++ you can have a const pointer to mutable. On the other hand, in C++ you cannot have 'const T' apply to what is accessible through T.
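
A minimal sketch of the transitive const being described (illustrative names only):

----
struct Node
{
    int* payload;
}

void read(const Node n)
{
    int v = *n.payload;   // reading through const is fine
    // *n.payload = 42;   // error: payload is typed const(int)* here, so the
                          // data reachable through n is protected as well
}
----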

> - This one I have not complained about yet. Operator overloads stepping on each
> other's toes. In my case, I have a container (with opIndex that accepts a custom
> type and opOpAssign!"~") and I place in it a struct with some operator overloads
> as well (have not reduced the cause yet, hence no previous complaint about this
> one). So, when I write
>
> Container[IndexType] ~= Type;
>
> And the compiler assumes that means:
> Container.opIndexOpAssign!"~"(Type, IndexType);
>
> but since nothing like that is defined, the code doesn't compile. I ended up
> writing (actual code from the Weka code base):
>
> blockIds[diskIdx].opOpAssign!"~"(makeBlockId(stripeIdx+i,
> placement.to!SlotIdx(diskIdx)));
>
> Took me almost ten minutes and consulting someone else to find this solution.
>
> - GC. GC. GC. Some more GC.
> - Integral type operations promotion and the constant need for casts.

I don't understand this. Integral type promotions are the same as in C++. The casts are needed for integral type demotions (i.e. narrowing conversions). Having implicit narrowing conversions in C++ is a significant source of bugs, as the most significant bits are silently discarded.
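
For example (illustrative only; only the narrowing direction needs the cast):

----
void demo(byte a, byte b)
{
    int wide = a + b;                // fine: operands are promoted to int
    // byte narrow = a + b;          // error: int does not implicitly convert to byte
    byte narrow = cast(byte)(a + b); // explicit narrowing conversion
}
----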

> - No warning for signed/unsigned comparisons. An unfailing source for bugs.

Understood, but to be pedantic, such warnings are extensions to C++ compilers, not part of the language.
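
For reference, the comparison in question, which dmd accepts without complaint (illustrative sketch):

----
void compare()
{
    int  s = -1;
    uint u = 0;
    // The signed operand is converted to uint, so -1 becomes uint.max.
    assert(!(s < u));   // surprising: "-1 < 0" evaluates to false
}
----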

> - No ref type.

D has the ref storage class. What difference are you seeing?
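
For context, a minimal sketch of what `ref` currently covers, parameters and returns, versus what it does not, assuming that is the difference meant (illustrative names):

----
void addOne(ref int x) { x += 1; }       // ref parameter: fine

ref int pick(ref int a, ref int b, bool first)
{
    return first ? a : b;                // ref return: fine
}

struct Holder
{
    // ref int view;                     // error: no ref locals or fields,
                                         // unlike C++'s `int&` member/local
}
----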

August 12, 2016
On 12/08/16 22:55, Walter Bright wrote:
> 
> I'm surprised that I've never seen anyone else use such a technique. It's so unusual that some 'make' replacements can't even represent it.
> 
> I suppose it's like unittest and ddoc. Sure, you can do it with some contortions and/or some external tooling, but having it conveniently built in to the language changes everything.
> 

Actually, even with it being easily accessible in the compiler, there are sometimes still reasons to do it with an external tool.

My next task at work requires precomputing a table. Computing it might prove quite intensive, and the result will change only rarely. Under those circumstances, I will not want to compute it each time the system is compiled, but rather compute it when it changes and use a cached version.

This means I still need a build rule for it, which means that if the build system does not support it, sucks to be me.

Shachar
August 12, 2016
On Friday, 12 August 2016 at 19:29:42 UTC, Shachar Shemesh wrote:
> - Recursive const makes many cases where I can use const in C++ (and enjoy the protection it provides) simply mutable in D.

FWIW HeadConst can be easily done in a library and will hopefully soon be in Phobos.

https://github.com/dlang/phobos/pull/3862

Note that the reverse is not true: an easy-to-use recursive (transitive) const cannot be done in library code in C++.
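
A rough sketch of the idea (illustrative only, not the actual implementation in the PR above): the "head" cannot be re-bound after construction, but data reachable through it stays mutable.

----
struct HeadConst(T)
{
    private T value;

    this(T v) { value = v; }

    // No setter and a disabled opAssign: the head is frozen after construction.
    @disable void opAssign(HeadConst);

    @property T get() { return value; }
}

unittest
{
    int x = 1;
    auto p = HeadConst!(int*)(&x);
    *p.get = 2;                       // mutating *through* the head is allowed
    assert(x == 2);
    // p = HeadConst!(int*)(&x);      // error: assignment is disabled
}
----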
August 12, 2016
On 8/12/2016 11:34 AM, Jonathan M Davis via Digitalmars-d wrote:
> It does not surprise me in the least if there are bugs related to shared in
> the compiler, and we definitely don't deal with it properly in druntime with
> regards to stuff like Mutex and Condition. But I don't agree with the idea
> that shared is fundamentally broken.

I'd put it more as being largely unimplemented. However, it still works for separating, at the type level, data that is shared from data that is thread-local, and that is still immensely valuable.

Also, one thing at a time. Scope has been largely unimplemented forever, and this DIP intends to fix that.

August 12, 2016
On 08/12/2016 02:29 PM, H. S. Teoh via Digitalmars-d wrote:
> On Fri, Aug 12, 2016 at 02:21:04PM -0400, Andrei Alexandrescu via Digitalmars-d wrote:
>> On 08/12/2016 02:01 PM, H. S. Teoh via Digitalmars-d wrote:
>>> On Fri, Aug 12, 2016 at 02:04:53PM -0400, Andrei Alexandrescu via Digitalmars-d wrote:
>>>> On 08/12/2016 01:21 PM, Steven Schveighoffer wrote:
>>>>> On 8/12/16 1:04 PM, Jonathan M Davis via Digitalmars-d wrote:
>>>>>>
>>>>>> Honestly, I don't think that shared is broken.
>>>>>
>>>>> Yes. It is broken.
>>>>>
>>>>> shared int x;
>>>>> ++x; // error, must use atomicOp.
>>>>> x = x + 1; // OK(!)
>>>>
>>>> How is this broken and how should it behave? -- Andrei
>>>
>>> ?!
>>>
>>> Isn't it obvious that assigning to a shared int must require
>>> atomicOp or a cast? `x = x + 1;` clearly has a race condition
>>> otherwise.
>>
>> It is not to me, and it does not seem like a low-level race condition
>> to me (whereas ++x is).
>
> The problem is that the line between "low-level" and "high-level" is
> unclear and arbitrary.

I'd need to think a bit before agreeing or disagreeing, but it's a plausible take. In this case, fortunately, the matters can be distinguished.

> Doesn't ++x lower to x = x + 1 on some CPUs
> anyway (or vice versa, if the optimizer decides to translate it the
> other way)?

This doesn't matter. The question is what is explicit and what is implicit in the computation.

++expr

is an RMW ("read-modify-write") expression, equivalent to:

((ref x) => x = cast(typeof(x)) (x + 1))(expr)

In contrast,

expr1 = expr2 + 1

is an expression consisting of a distinct read and a distinct write, both under the control of the code written by the programmer. As I explained, determining in the general case whether expr1 and expr2 refer to the same memory location is not doable statically. So it stands to reason that the compiler generates the code for one read and one write, because that is literally what it has been asked to do.

It may actually be the case that one wants to do x = x + 1 and exercise a benign high-level race.

> Why should the user have to worry about such details?

What details are you referring to, and how would compiler technology help with those?

> Wouldn't that make shared kinda useless to begin with?

It doesn't seem that way.

>> Currently the compiler must ensure that an atomic read and an atomic
>> write are generated for x. Other than that, it is the responsibility
>> of the user.  The use of "shared" does not automatically relieve the
>> user from certain responsibilities.
>>
>> I agree that it would be nice to have stronger protection against
>> higher-level bugs, but those are outside the charter of "shared".
>> Consider:
>>
>> x = *p + 1;
>>
>> How would the compiler reject the right uses but not the case when p
>> == &x?
> [...]
>
> The compiler should reject it (without the appropriate casts) if p has a
> shared type, and the aliasing situation with x is unknown / unclear.

I think you are not right here.



Andrei

August 12, 2016
On 08/12/2016 03:55 PM, Walter Bright wrote:
> On 8/12/2016 12:14 PM, Shachar Shemesh wrote:
>> On 12/08/16 22:07, Walter Bright wrote:
>>> On 8/12/2016 7:41 AM, Shachar Shemesh wrote:
>>>> That table was not expensive to compute, and its constantness wasn't
>>>> crucial
>>>> enough even for me to put a wrapper pointer and only access it through
>>>> it. Had
>>>> that not been the case, and had that table been more expensive to
>>>> compute, I'd
>>>> probably compute at compile time with an external tool.
>>>
>>> What I do (and is done in building DMD) is write a program (optabgen.c)
>>> to generate the tables and write a C++ source file, then compile the
>>> generated file into DMD.
>>
>> Yes, I'm sorry. I meant to say "build time" instead of "compile time".
>> That is
>> precisely what I meant.
>
> I'm surprised that I've never seen anyone else use such a technique.

It's a matter of frequenting the appropriate circles. The technique is in wide use.

Andrei