December 06, 2017
On 12/6/17 12:17 PM, Steven Schveighoffer wrote:

> So why wouldn't the compiler fail? Because it has no idea yet what you mean by Nullable. It doesn't even know if Nullable will be available or not. You could even import Nullable, but Nullable!T may be an error.

To give an example of why the compiler waits until instantiation:

class C(T) : T
{
   void foo() { doesthisexist(); }
}

class D { void doesthisexist() {} }

auto x = new C!D; // OK
auto y = new C!Object; // fail

-Steve
December 06, 2017
On Wednesday, 6 December 2017 at 18:09:45 UTC, Steven Schveighoffer wrote:
> On 12/6/17 12:17 PM, Steven Schveighoffer wrote:
>
>> So why wouldn't the compiler fail? Because it has no idea yet what you mean by Nullable. It doesn't even know if Nullable will be available or not. You could even import Nullable, but Nullable!T may be an error.
>
> To give an example of why the compiler waits until instantiation:
>
> class C(T) : T
> {
>    void foo() { doesthisexist(); }
> }
>
> class D { void doesthisexist() {} }
>
> auto x = new C!D; // OK
> auto y = new C!Object; // fail
>
> -Steve

It also doesn't parse or do semantic checks on unit tests unless you add the -unittest flag... apparently.

This compiles...

unittest
{
    WHY DOESNT THE COMPILER FIND A PROBLEM HERE!?
}

It seems D's fast compile times are achieved by skipping semantic checking and even parsing when it doesn't feel it's needed. I strongly disagree with this decision. This could leave complex dormant time bombs that break builds unexpectedly and even accidentally. It's understandable in certain situations where there isn't enough information, but the first step to testing code is making sure it compiles... I don't want the compiler making decisions about what is worth compiling. If I pass a D source file into it, I want to know whether it's valid. This is unfortunate. This might be a deal breaker for me.
December 06, 2017
On Wednesday, 6 December 2017 at 19:19:09 UTC, A Guy With a Question wrote:
> It seems D's fast compile times are achieved by skipping semantic checking and even parsing when it doesn't feel it's needed. I strongly disagree with this decision. This could leave complex dormant time bombs that break builds unexpectedly and even accidentally. It's understandable in certain situations where there isn't enough information, but the first step to testing code is making sure it compiles... I don't want the compiler making decisions about what is worth compiling. If I pass a D source file into it, I want to know whether it's valid. This is unfortunate. This might be a deal breaker for me.

I'm very concerned about working with a language that, at a minimum, doesn't even let me know whether a file I pass in contains valid code.
December 06, 2017
On Wednesday, 6 December 2017 at 19:40:49 UTC, A Guy With a Question wrote:
> On Wednesday, 6 December 2017 at 19:19:09 UTC, A Guy With a Question wrote:
>> It seems D's fast compile times are achieved by skipping semantic checking and even parsing when it doesn't feel it's needed. I strongly disagree with this decision. This could leave complex dormant time bombs that break builds unexpectedly and even accidentally. It's understandable in certain situations where there isn't enough information, but the first step to testing code is making sure it compiles... I don't want the compiler making decisions about what is worth compiling. If I pass a D source file into it, I want to know whether it's valid. This is unfortunate. This might be a deal breaker for me.
>
> I'm very concerned about working with a language that, at a minimum, doesn't even let me know whether a file I pass in contains valid code.

It does let you know if it contains valid code - if you're actually building it.

If you write unit tests but never compile them in, whether or not they make any sense is IMHO irrelevant. If you write a template and never instantiate it, does it make a sound?*

Imagine this:

version(Windows) int i = 0;
else foobarbaz;

Should it fail to compile on Linux? How is this any different from:

#ifdef _WIN32
    int i = 0;
#else
    ohnoes
#endif

As noted by others, C++ templates work similarly. And for good reason!

Atila

* https://en.wikipedia.org/wiki/If_a_tree_falls_in_a_forest
December 06, 2017
On 12/6/17 2:19 PM, A Guy With a Question wrote:
> On Wednesday, 6 December 2017 at 18:09:45 UTC, Steven Schveighoffer wrote:
>> On 12/6/17 12:17 PM, Steven Schveighoffer wrote:
>>
>>> So why wouldn't the compiler fail? Because it has no idea yet what you mean by Nullable. It doesn't even know if Nullable will be available or not. You could even import Nullable, but Nullable!T may be an error.
>>
>> To give an example of why the compiler waits until instantiation:
>>
>> class C(T) : T
>> {
>>    void foo() { doesthisexist(); }
>> }
>>
>> class D { void doesthisexist() {} }
>>
>> auto x = new C!D; // OK
>> auto y = new C!Object; // fail
>>
>> -Steve
> 
> It also doesn't parse or do semantic checks on unit tests unless you add the -unittest flag... apparently.
> 
> This compiles...
> 
> unittest
> {
>      WHY DOESNT THE COMPILER FIND A PROBLEM HERE!?
> }
> 
> It seems D's fast compile times are achieved by skipping semantic checking and even parsing when it doesn't feel it's needed.

This is a red herring. The compile times are fast even with unittests enabled. They are just faster (and use less memory) when unittests aren't compiled in.

If a unit test is instantiating tons of templates that I'm not using in my main code, I don't want that being compiled and then thrown away when I don't care about it!

> I strongly disagree with this decision. This could leave complex dormant time bombs that break builds unexpectedly and even accidentally. 

This decision was made by reality. How do you compile incomplete code? That is essentially what a template is.
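For instance, a minimal sketch of what "incomplete" means here:

// Nothing in the body can be fully checked until T is known.
auto twice(T)(T x)
{
    return x + x;   // fine for int, an error for string (strings have no '+')
}

static assert( __traits(compiles, twice(2)));     // instantiating with int works
static assert(!__traits(compiles, twice("ab")));  // instantiating with string fails,
                                                  // and only instantiation reveals it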

In terms of unit tests that don't compile, I can't imagine someone writing a unittest, never compiling it with -unittest, and then being upset that their code "passed".

Note, this was done pretty recently, and simply to make the compiler consume less memory (and run a bit faster).

> It's understandable in certain situations where there isn't enough information, but the first step to testing code is making sure it compiles... I don't want the compiler making decisions about what is worth compiling.

The first step to testing code is writing tests. And the idea that the compiler is making these decisions arbitrarily is incorrect. You are telling the compiler how to compile.

> If I pass a D source file into it, I want to know whether it's valid. This is unfortunate. This might be a deal breaker for me.

There is literally only one exception to the rule: unittests. Everything else must be syntactically valid.

Even version statements must have valid syntax inside them.

Hardly a deal-breaker, since you aren't going to write unittests and then not run them.
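The usual workflow is just to run the tests as part of the build, something like this (a minimal sketch; the file name is hypothetical):

// mymath.d -- build and run its tests with:  dmd -unittest -main -run mymath.d
int square(int x) { return x * x; }

unittest
{
    assert(square(3) == 9);  // analyzed and executed only when -unittest
                             // is passed, per the behavior discussed above
}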

-Steve
December 07, 2017
On Wednesday, 6 December 2017 at 16:07:41 UTC, A Guy With a Question wrote:
> Does dmd not compile all source code?

It doesn't. I like to build with a few different options to explicitly test (e.g. build for Windows and Linux and with -m32 and -m64 to ensure those are all exercised), and for templates I write a unittest that actually uses them - e.g. instantiate Array!int - to make sure that works, and compile with -unittest every so often too. I might also do a `__traits(compiles, ...)` check or a static assert on it.

Others have explained why this is the case, but it's a simple way to make sure the code compiles in at least some configurations and to catch typos like this.
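For instance, a minimal sketch of that kind of smoke test, using Phobos' std.container.array as the example:

import std.container.array : Array;

// Instantiating the template forces semantic analysis of its body,
// so typos inside it can't lie dormant.
unittest
{
    Array!int a;
    a.insertBack(42);
    assert(a.length == 1);
}

// Or assert at compile time that it at least instantiates:
static assert(__traits(compiles, Array!int));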

Another worry, btw: the opDispatch feature will just say `no such property` if its innards fail to compile. A test might also want to call it explicitly to force the real error messages.
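Something along these lines (a sketch; the struct is hypothetical):

struct S
{
    // If anything in this body failed to compile, `s.bar` below would only
    // report "no property 'bar' for type 'S'", hiding the real error.
    auto opDispatch(string name)()
    {
        return name.length;
    }
}

unittest
{
    S s;
    assert(s.bar == 3);                // goes through opDispatch!"bar"
    assert(s.opDispatch!"bar"() == 3); // explicit call: errors in the body
                                       // would be reported in full here
}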
December 07, 2017
On Wednesday, 6 December 2017 at 19:19:09 UTC, A Guy With a Question wrote:
> On Wednesday, 6 December 2017 at 18:09:45 UTC, Steven Schveighoffer wrote:
>> On 12/6/17 12:17 PM, Steven Schveighoffer wrote:
>>
>>> So why wouldn't the compiler fail? Because it has no idea yet what you mean by Nullable. It doesn't even know if Nullable will be available or not. You could even import Nullable, but Nullable!T may be an error.
>>
>> To give an example of why the compiler waits until instantiation:
>>
>> class C(T) : T
>> {
>>    void foo() { doesthisexist(); }
>> }
>>
>> class D { void doesthisexist() {} }
>>
>> auto x = new C!D; // OK
>> auto y = new C!Object; // fail
>>
>> -Steve
>
> It also doesn't parse or do semantic checks on unit tests unless you add the -unittest flag... apparently.
>
> This compiles...
>
> unittest
> {
>     WHY DOESNT THE COMPILER FIND A PROBLEM HERE!?
> }
>
> It seems D's fast compile times are achieved by skipping semantic checking and even parsing when it doesn't feel it's needed. I strongly disagree with this decision. This could leave complex dormant time bombs that break builds unexpectedly and even accidentally.

That's why measuring the level of coverage obtained by the unittests is important.
It's not just about templates: ordinary if conditions that are never exercised can be time bombs too, and so can ordinary functions that are never tested. Even more pernicious, in a way.
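For what it's worth, dmd has built-in coverage instrumentation (-cov); a minimal sketch (file name hypothetical):

// cov_example.d -- build and run the tests with coverage enabled:
//   dmd -unittest -main -cov -run cov_example.d
// This writes cov_example.lst, where executable lines never hit by the
// tests get a zero count, so untested branches are easy to spot.
int clamp0(int x)
{
    if (x < 0)
        return 0;            // never hit by the test below: shows as uncovered
    return x;
}

unittest
{
    assert(clamp0(5) == 5);  // only exercises the non-negative branch
}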