July 09, 2008
Koroskin Denis wrote:
> Moreover, I would be happy to have an `unused` modifier in addition to in, out and inout (doh!) to denote that a variable is not going to be used. In this case the compiler will show an error if the variable is used by chance. It could help the programmer to catch potential bugs at an early stage once he eventually starts using it. Besides, it really fits well into D, IMO:
> 
> void bar( unused int foo ) // no warning is generated
> {
> }

Just do:

void bar(int) {}

I.e. don't name the variable. And you will get an error if you try to use it regardless, as you might expect. <g>

-- 
E-mail address: matti.niemenmaa+news, domain is iki (DOT) fi
July 09, 2008
Matti Niemenmaa wrote:

> you will get an error if you try to use it

This is true only if the name he tries to use isn't declared in any visible scope.

-manfred
-- 
Maybe some knowledge of some types of disagreeing and their relation can turn out to be useful: http://blog.createdebate.com/2008/04/07/writing-strong-arguments/
July 09, 2008
"Walter Bright" <newshound1@digitalmars.com> wrote in message news:g51uc7$1let$1@digitalmars.com...
> Nick Sabalausky wrote:
>> It sounds like (previously unknown to me) there's a rift between the reality of warnings and the perceptions that many programmers (excluding us) have about warnings. As I understand it, you consider it more important to design around common perceptions of warnings, even if they're mistaken perceptions (such as warnings, by definition, not actually being errors). My disagreement is that I consider it better to design around the realities, and use a more education-based approach (I don't necessarily mean school) to address misperceptions. Is this a fair assessment of your stance, or am I still misunderstanding?
>
> It's a fair assessment. I give more weight to designing a language around the way programmers are and the way they tend to work, rather than trying to force them adapt to the language.
>

The way I program, I tend to run into situations such as the two Koroskin Denis pointed out. Unless this hypothetical D lint tool actually ends up materializing, I'm forced to adapt to a compiler that refuses to let me know about a condition that I *want* to know about.

Someone else in this thread just mentioned that DMD's warnings are always treated as errors, instead of only being treated as errors with a "warnings as errors" switch. I wasn't aware of this. That approach *certainly* confuses the issue of "warning" vs. "error" and creates what are effectively multiple languages (and, as other people pointed out, makes such "'warnings'-but-not-really-true-warnings" useless when using outside source libraries).

(If you're wondering how I could have not known DMD treats warnings as errors, since I'm obviously so pro-warning that I would certainly be using the -w switch: it's because at the moment, I seem to be having trouble getting DMD 1.029 to emit any warnings, even when deliberately trying to trigger the ones it's supposed to support. *But* for all I know this may be a rebuild or IDE issue; I haven't had a chance to look into it yet.)

> As for the needs of programming managers, I think D is the only language that has attempted to address those needs. At least I've never ever heard of any other language even acknowledge the existence of such needs.

If there's a legitimate need that programming managers have that can be met by a compiler without creating any problems for the actual programmers, then I'm all for it.

But when there's a "programming manager" that's steadfast about "all warnings must always be treated as errors", *BUT* refuses to be practical about it and entertain any notion that there may actually be some warnings that are NOT really problems (in other words, "delusional" by the very definition of the word), then said "programming manager" is clearly incompetent and by no means should be indulged. That's like creating a programming language where 2 + 2 equals 7, just because you find out that there are "programmers" who are incompetent enough to insist that 2 + 2 really does equal 7.


July 09, 2008
"Manfred_Nowak" <svv1999@hotmail.com> wrote in message news:g52faq$2s3g$1@digitalmars.com...
> Koroskin Denis wrote:
>
>> You asked an example, I provided one. There is another one:
> [...]
>
> Bearophile asked for a _practical_ example. But your example seems to illustrate consequences rooted in a coding style and not rooted in the absence of an `unused' keyword and its semantics.
>
>
>> I was going to modify local variable, but not a member.
>
> This is a well known phenomenon. But again no need for an `unused' keyword shows up. To the contrary: within the function you want to use both variables, although one of them only for reading.
>
> Pollution of the namespace within the function causes the problem. But would you really want to write import statements for variables from surrounding scopes?
>

Can you prove that namespace pollution is the root cause of "unintentionally-unused variable" errors in the general case?

>> Compiler could warn me that I don't use it
>
> This claim comes up once in a while, but seems to be unprovable in
> general. It might be provable in your special case though. But without
> the general proof one may have both:
> - many false warnings
> - many true bugs without warnings
>
> Do you have a proof for the general case?
>

Do you have a general-case proof that an "unused variable" warning would cause too many false warnings/etc.? Would the proof still hold with the proposed "unused" keyword (or some functionally-equivalent alternative)?
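For concreteness, both failure modes Manfred describes are easy to illustrate. Here is a quick sketch -- written in Python rather than D purely for brevity, with function names that are illustrative only:

```python
# False positive: 'context' goes unused, but the callback's signature
# is dictated by the caller, so an "unused variable" warning is noise.
def on_event(event, context):
    return event.upper()

# Missed true bug: 'total' is both assigned and read on every
# iteration, so no unused-variable warning fires -- yet '=' should
# almost certainly have been '+='.
def total_length(words):
    total = 0
    for w in words:
        total = len(w)
    return total
```

A plain unused-variable check complains about the first function and stays silent on the second, which is exactly the false-warning/missed-bug trade-off in question.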


July 09, 2008
"Manfred_Nowak" <svv1999@hotmail.com> wrote in message news:g51tmn$1kb3$1@digitalmars.com...
> Nick Sabalausky wrote:
>
> > Like I've said, compiler warnings are essentially a built-in lint tool.
>
> Finally the contradiction seems to show up: if lint is a tool in its own right, then one must have strong arguments to incorporate it into any other tool.

1. There is no sufficient D lint tool either currently in existence or on the foreseeable horizon (at least as far as I'm aware).

2. The compiler is already in a position to provide such diagnostics (and in fact, already does for certain other conditions).

> In the parallel posting Walter remarks, but does not emphasize, that compilers have more goals than enabling the evaluation of sources, on which your OP concentrates. Building large software systems and migrating some application source to another architecture are among the duties of compilers.
>
> For at least large parts of these latter tasks, a reevaluation of some static aspects of the application's semantics is useless but time consuming.

Hence, optional.

> In addition and by definition lint tools are not capable of doing more than this.

Which is part of what makes a compiler inherently more general-purpose, and a lint tool a mere symptom of a compiler's shortcomings.

> This is the central point: lint tools are only capable of informing about possible bugs. If a warning emitted by a lint tool would be a sure hint for a bug in the program, then the compiler should have emitted an error.

Warnings are never sure hints about a bug in the program either. That's what makes them warnings.

> Thus there should be no need to incorporate any lint tool into any compiler. I am ready to read your counter arguments.

I could turn that around and say that with a sufficient lint tool incorporated into the compiler (activated optionally, of course), there would be no need for an external lint tool. Plus, an external lint tool necessarily duplicates a lot of the compiler's functionality (roughly the whole front end), although I suppose that could be moved into a shared library to avoid duplicated maintenance effort.

But since you mentioned that doing lint work in the compiler would be uselessly time consuming (again, only uselessly time consuming if there's no switch to turn such checking on and off; also, I assume you're referring to compilation speed), I should point out that an external lint tool is likely to involve plenty of duplicated processing: lexing and parsing once for the external lint, and again for the real compiler.
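To make the duplicated-front-end point concrete, here is a toy unused-local check -- sketched in Python via its `ast` module rather than in D, and deliberately naive -- showing that everything such a check needs (a parse tree with name-binding contexts) is exactly what the compiler front end already builds:

```python
import ast

def unused_locals(source):
    """Report names that are assigned but never read, per function.

    A toy sketch of the check under discussion: a standalone lint
    tool would have to redo this parsing work the compiler already did.
    """
    warnings = []
    tree = ast.parse(source)
    for fn in ast.walk(tree):
        if not isinstance(fn, ast.FunctionDef):
            continue
        stored, loaded = set(), set()
        for node in ast.walk(fn):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    stored.add(node.id)
                elif isinstance(node.ctx, ast.Load):
                    loaded.add(node.id)
        for name in sorted(stored - loaded):
            warnings.append(f"{fn.name}: '{name}' assigned but never read")
    return warnings
```

For example, `unused_locals("def f():\n    x = 1\n    y = 2\n    return y")` flags `x` but not `y`. The entire check is a tree walk over data the front end produces anyway; only the walk itself is lint-specific.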


July 09, 2008
"Manfred_Nowak" <svv1999@hotmail.com> wrote in message news:g51vnt$1o9n$1@digitalmars.com...
> Nick Sabalausky wrote:
>
>> turned out to be variables I had intended to use but forgot to
>
> I am trying to tackle such time-wasting with "protocols" in drokue. If one were able to formally attach one's intentions to variables, then such bugs could possibly be prevented.
>
> In your case the intention might have been to write and read the variable several times, of course starting with a write followed by some read. This intention can be expressed by a regular expression like:
>
>   write+ read ( write | read )*
>
> For evaluating this at runtime (!) one may attach a DFA to the variable---a DFA that interprets the operations on the variable as input. Of course the DFA has to be initiated somewhere before the first operation on the variable. At program termination the DFA can then be checked for whether it is in a final state.
>
> If at program termination the DFA is not in a final state then an "intention violation"-error can be reported. This way your time wouldn't have been wasted.
>

I've been seeing a lot in these last few years about such...I'm going to call it "intent oriented programming".

There's a lot of good stuff that works that way (unit testing and D's function contracts, for instance; I've seen some other new things from the Java world as well), and your idea is certainly interesting.

But I worry that eventually we'll get to some point where all code either is or can be generated straight from "intents" syntax. Now that certainly sounds great, but at that point all we really would have done is reinvent "programming language" and we'd be left with the same problem we have today: how can we be sure that the "code"/"intents" that we wrote are really what we intended to write? The solution would have just recreated the problem.

Regarding your specific idea, my concern with that is that the whole "write and read the variable several times, starting with a write followed by some read" is tied too closely to the actual implementation. Change your algorithm/approach, and you gotta go update your intents. I'd feel like I'd gone right back to header files.
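That said, the runtime-DFA idea is easy enough to prototype. Here is a rough sketch -- in Python, since D has no such feature, and with class and method names that are mine, not from drokue -- of a wrapper enforcing the write+ read (write|read)* protocol:

```python
class ProtocolVar:
    """Wraps a value with a DFA enforcing the access protocol
    write+ read (write|read)* from Manfred's example."""

    # transition table: state -> {operation: next state}
    _DFA = {
        0: {"write": 1},             # must start with a write
        1: {"write": 1, "read": 2},  # more writes, then a first read
        2: {"write": 2, "read": 2},  # anything goes afterwards
    }
    _FINAL = {2}

    def __init__(self):
        self._state = 0
        self._value = None

    def _step(self, op):
        try:
            self._state = self._DFA[self._state][op]
        except KeyError:
            raise RuntimeError(f"intention violation: {op} in state {self._state}")

    def write(self, value):
        self._step("write")
        self._value = value

    def read(self):
        self._step("read")
        return self._value

    def check_final(self):
        """Call at 'program termination': was the intention fulfilled?"""
        if self._state not in self._FINAL:
            raise RuntimeError("intention violation: DFA not in a final state")
```

A variable that is written then read passes `check_final()`; one that is written but never read fails it, and a read-before-write fails immediately. It also shows the coupling concern above: change the access pattern and the protocol regex must change with it.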

> Please note, that such cannot be done by a lint tool.

True, since lint tools only do front-end work. But the compiler would be able to do it by injecting appropriate code into its output. An external lint tool *could* be made to do it by using ultra-fancy CTFE, but then the ultra-fancy-CTFE engine would effectively be a VM (or, heck, even real native code), which would mean adding a backend to the lint tool which would basically turn it into a compiler. Thus, in a manner of speaking, there would be correctness-analysis that a compiler could do that a lint tool (by a definition of "lint tool" that would have admittedly become rather debatable by that point) couldn't. ;)


July 09, 2008
The reason for treating warnings as errors when warnings are enabled is so that, for a long build, they don't scroll up and off your screen and go unnoticed.
July 09, 2008
Nick Sabalausky wrote:
> Warnings are never sure hints about a bug in the program either. That's what makes them warnings.

Not always. Sometimes they are the result of an inability to change the language specification (because it's a standard). Common C++ compiler warnings are indications of bugs in the standard that can't be fixed.
July 09, 2008
Reply to Nick,

> I've been seeing a lot in these last few years about such...I'm going
> to call it "intent oriented programming".
> 

Look up intentional programming.

> But I worry that eventually we'll get to some point where all code
> either is or can be generated straight from "intents" syntax. Now that
> certainly sounds great, but at that point all we really would have
> done is reinvent "programming language" and we'd be left with the same
> problem we have today: how can we be sure that the "code"/"intents"
> that we wrote are really what we intended to write? The solution would
> have just recreated the problem.

But the hope is that this stuff will be easier for you to read/evaluate, smaller, written in terms that are closer to how you think of the problem and further from how it's implemented.

At some point the issue arises of "is this what the end user wants?" (lint can't help you there :)


July 09, 2008
"Walter Bright" wrote
> The reason for treating warnings as errors when warnings are enabled is so that, for a long build, they don't scroll up and off your screen and go unnoticed.

I've been following this thread, and I'm not really sure which side of the issue I'm on, but this, sir, is one of the worst explanations for a feature. Ever heard of 'less'?  Or 'more' on Windows?  Maybe piping to a file?  Maybe using an IDE that stores all the warnings/errors for you?

Please stop saving poor Mr. Ignorant Programmer from himself.  Education is the key to solving this problem, not catering to ignorance.

Sorry for the harshness, but seriously!

-Steve