November 06, 2009
Andrei Alexandrescu, on November 5 at 19:10, you wrote:
> Walter Bright wrote:
> >Andrei Alexandrescu wrote:
> >>Walter Bright wrote:
> >>>Jason House wrote:
> >>>>I posted in the other thread how casting to immutable/shared can be just as bad. A leaked reference prior to casting to immutable/shared is in effect the same as casting away shared. No matter how you mix thread-local and shared, or mutable and immutable, you still have the same undefined behavior.
> >>>
> >>>Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a trusted function.
> >>
> >>Are we in agreement that @safe functions have bounds checking on regardless of -release?
> >
> >You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties.
> >
> >If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.
> 
> This is a showstopper.
> 
> What kind of reputation do you think D would achieve if "safe" code has buffer overrun attacks?

If you compiled it with the -unsafe (or -disable-bound-check) flag,
I think there should be no impact on the reputation. It's the
*user's*/*maintainer's* (whoever compiles the code) choice whether to
assume the risks.

> A function that wants to rely on hand-made verification in lieu of bounds checks may go with @trusted. There is absolutely no way a @safe function could allow buffer overruns in D, ever.

Again, the problem is with code you don't control. I want to be able to turn bounds checking off (and any other runtime safety, but not compile-time safety) without modifying other people's code.
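
For instance (a sketch, using the @safe attribute from this proposal), a library function like this pays for a runtime bounds check on every iteration; the flag proposed above would strip it without touching the source:

    @safe int sum(const(int)[] a)
    {
        int s = 0;
        foreach (i; 0 .. a.length)
            s += a[i];  // bounds-checked at runtime under @safe; the proposed
                        // -disable-bound-check flag would remove this check
        return s;
    }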

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
22% of the time a pizza will arrive faster than an ambulance in Great-Britain
November 06, 2009
Steven Schveighoffer wrote:
> However, I'll let it go; I don't know the ramifications, since allocating immutable objects is a rare occurrence, and I'm not sure how it will be done. I am also not sure how solid a use case this is (allocating an object, then manipulating it via methods before changing it to immutable).

I don't have any solid history with this; it's just my opinion that the right place for it is at the function level. Experience may show your idea to be better, but it's easier to move in that direction later than to try to turn off support for unsafeness at the statement level.
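
For what it's worth, the use case Steven describes might look like this with function-level trust (a sketch; Config and load are made-up names, and it assumes the proposed @trusted attribute):

    class Config
    {
        string source;
        void load(string path) { source = path; }  // stand-in for real setup work
    }

    @trusted immutable(Config) makeConfig()
    {
        auto c = new Config;        // mutable while being built up
        c.load("settings.ini");
        return cast(immutable) c;   // hand-verified: no mutable reference escapes
    }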
November 06, 2009
On 2009-11-05 22:22:39 -0500, Leandro Lucarella <llucax@gmail.com> said:

> Michel Fortin, on November 5 at 19:43, you wrote:
>> But if you remove bound checking, it isn't safe anymore, is it?
> 
> 100% safe doesn't exist. If you think you have it because of
> bounds checking, you are wrong.

True. What I meant was that some things that were supposed to be safe in SafeD (arrays) are no longer safe, which pretty much destroys the concept of SafeD being memory safe.
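
For instance (a sketch, assuming the proposed @safe attribute):

    @safe int first(int[] a)
    {
        return a[0];  // with bounds checking: throws a RangeError on empty input;
                      // with checks stripped: reads out of bounds, and memory
                      // safety is silently lost
    }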

>> Sometime safety is more important than performance. [...]
> 
> What if I'm using an external library that I don't control? *That's* the
> problem for me, I want to be able to compile things I *trust* as if they
> were *trusted* :)
> 
> I vote for an -unsafe (and/or -disable-bound-check). Safe should be the
> default.

You're right. Having "-unsafe" to disable runtime checks is better than "-safe" to enable them because then the default behavior is safe. And it allows you to recompile any library you want with "-unsafe" to remove runtime checks from @safe functions when you don't care about safety.

-- 
Michel Fortin
michel.fortin@michelf.com
http://michelf.com/

November 06, 2009
== Quote from Walter Bright (newshound1@digitalmars.com)'s article
> Steven Schveighoffer wrote:
> > Sounds
> > good to me.  Should you also be able to mark a whole struct/class as
> > @safe/@trusted, since it's generally a container for member functions?
> Yes.
> > Care to define some rules for "undefined behavior?"
> I suppose I need to come up with a formal definition for it, but it's
> essentially meaning your program is going to do something arbitrary
> that's outside of the specification of the language. Basically, you're
> stepping outside of the domain of the language.
> For example, assigning a random value to a pointer and then trying to
> read it is undefined behavior. Casting const away and then modifying the
> value is undefined behavior.

What about threading?  I can't see how you could statically prove that a multithreaded program did not have any undefined behavior, especially before shared is fully implemented.  To truly ensure no undefined behavior, you'd need the following in the c'tor for core.thread.Thread:

version(safe) {
    // Refuse to construct any Thread in a "safe" build: the compiler
    // cannot prove a multithreaded program free of undefined behavior.
    assert(0);
}
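
For reference, the undefined-behavior examples Walter quotes above might look like this (a sketch, assuming D's cast semantics):

    void main()
    {
        int* p = cast(int*) 0xDEADBEEF;  // assigning a random value to a pointer
        int x = *p;                      // undefined behavior: reading through it

        const int c = 1;
        int* q = cast(int*) &c;          // casting away const
        *q = 2;                          // undefined behavior: modifying the value
    }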
November 06, 2009
dsimcha wrote:
> What about threading?  I can't see how you could statically prove that a
> multithreaded program did not have any undefined behavior, especially before
> shared is fully implemented.

We definitely have more work to do on the threading model, but I don't think it's an insurmountable problem. I also don't think things like race conditions are undefined behavior.
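
A sketch of the kind of race in question (uses core.thread; the counts are arbitrary):

    import core.thread;

    __gshared int counter;

    void bump()
    {
        foreach (i; 0 .. 1_000_000)
            ++counter;  // racy read-modify-write: the final total is
                        // unpredictable, but every step is a well-defined
                        // operation on a valid int -- a race, not UB
    }

    void main()
    {
        auto t1 = new Thread(&bump);
        auto t2 = new Thread(&bump);
        t1.start(); t2.start();
        t1.join();  t2.join();
    }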
November 06, 2009
Ellery Newcomer <ellery-newcomer@utulsa.edu> wrote:

> Walter Bright wrote:
>>> Well, if that's a problem you could fix it by making immutable not shared unless you also put the shared attribute:
>>>
>>>     immutable Object o;        // thread-local
>>>     shared immutable Object o; // visible from all threads
>> 
>> 
>> Aaggghhhh !!! <g>
>> 
> 
> ditto
> 
Agreed, but why can't there just be a space for immutable data to be
allocated? That way you could, perhaps, prove that the data is immutable.

-Rory

November 06, 2009
On 05/11/2009 23:24, Andrei Alexandrescu wrote:
> Nick Sabalausky wrote:
>> "Walter Bright" <newshound1@digitalmars.com> wrote in message
>> news:hcv5p9$2jh1$1@digitalmars.com...
>>> Based on Andrei's and Cardelli's ideas, I propose that Safe D be
>>> defined as the subset of D that guarantees no undefined behavior.
>>> Implementation defined behavior (such as varying pointer sizes) is
>>> still allowed.
>>>
>>> Safety seems more and more to be a characteristic of a function,
>>> rather than a module or command line switch. To that end, I propose
>>> two new attributes:
>>>
>>> @safe
>>> @trusted
>>>
>>
>> Sounds great! The lower-grained safeness makes a lot of sense, and I'm
>> thrilled at the idea of safe D finally encompassing more than just
>> memory safety - I'd been hoping to see that happen ever since I first
>> heard that "safeD" only meant memory-safe.
>
> I can think of division by zero as an example. What others are out there?
>
> Andrei

Safe arithmetic, as in C#'s checked contexts, that guards against overflow (throws on overflow).
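
A minimal sketch of what checked addition could look like in D (illustration only; checkedAdd is a made-up helper):

    int checkedAdd(int a, int b)
    {
        long r = cast(long) a + b;  // widen so 32-bit overflow becomes visible
        if (r < int.min || r > int.max)
            throw new Exception("integer overflow");
        return cast(int) r;
    }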

November 06, 2009
On 05/11/2009 23:45, grauzone wrote:
> Ary Borenszweig wrote:
>> grauzone wrote:
>>> Frank Benoit wrote:
>>>> safe should be the default. The unsafe part should take the extra
>>>> typing, not the other way. Make the user prefer the safe way.
>>>
>>> No. D is not C#.
>>
>> D is an unsafe language.
>> C# is a safe language.
>>
>> Like that? :)
>
> If you mean memory safety, then yes, and it will probably forever be so
> for all practical purposes (unless D gets implemented on a Java- or .NET-like VM).

C# does allow memory-unsafe code inside unsafe blocks. There are alloca/malloca-style functions for allocating on the stack.

A VM is just an abstract (virtual) instruction set. You can design a safe native one or an unsafe virtual one. It's all a matter of design choices; there's nothing magical about a VM that makes it inherently safe.

IMO D should be safe by default and allow unsafe code when it is appropriately marked as such, regardless of a VM.

BTW, so-called native code on Intel processors runs in a VM as well:
Intel's CISC instruction set is translated into RISC-like micro-ops, and those micro-ops are what actually execute. The only difference is that this is done in hardware by the processor.

November 06, 2009
Yigal Chripun wrote:

> BTW, so-called native code on Intel processors runs in a VM as well:
> Intel's CISC instruction set is translated into RISC-like micro-ops, and those micro-ops are what actually execute. The only difference is that this is done in hardware by the processor.

It's a bit meaningless to call that a VM. You can just as easily say that _every_ CPU ever made is a VM, since it's implemented with transistors. (Traditionally, CISC processors were implemented with microcode, BTW -- so there's nothing new). So the "virtual" becomes meaningless -- "virtual machine" just means "machine".

The term "virtual machine" is useful to distinguish from the "physical machine" (the hardware). If there's a lower software level, you're on a virtual machine.
November 06, 2009
Walter Bright wrote:
> Following the safe D discussions, I've had a bit of a change of mind. Time for a new strawman.
> 
> Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined as the subset of D that guarantees no undefined behavior. Implementation defined behavior (such as varying pointer sizes) is still allowed.
> 
> Memory safety is a subset of this. Undefined behavior nicely covers things like casting away const and shared.
> 
> Safety has a lot in common with function purity, which is set by an attribute and verified by the compiler. Purity is a subset of safety.
> 
> Safety seems more and more to be a characteristic of a function, rather than a module or command line switch. To that end, I propose two new attributes:
> 
> @safe
> @trusted
> 
> A function marked as @safe cannot use any construct that could result in undefined behavior. An @safe function can only call other @safe functions or @trusted functions.
> 
> A function marked as @trusted is assumed to be safe by the compiler, but is not checked. It can call any function.
> 
> Functions not marked as @safe or @trusted can call any function.
> 
> To mark an entire module as safe, add the line:
> 
>    @safe:
> 
> after the module statement. Ditto for marking the whole module as @trusted. An entire application can be checked for safety by making main() safe:
> 
>     @safe int main() { ... }
> 
> This proposal eliminates the need for command line switches, and versioning based on safety.

I think it's important to also have @unsafe. These are functions that are unsafe and not trusted, like free(), for example. They're usually easy to identify, and should be small in number.
They should only be callable from @trusted functions.
That's different from unmarked functions, which generally just haven't been checked for safety.
I want to be able to find the cases where I'm calling those guys, without having to mark every function in the program with an @safe attribute.
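
A sketch of how that could fit together (the @unsafe attribute is hypothetical, as proposed here; release is a made-up wrapper):

    @unsafe void free(void* p);     // explicitly unsafe, not merely unchecked

    @trusted void release(ref void* p)
    {
        free(p);                    // fine: @trusted code may call @unsafe
        p = null;
    }

    @safe void user(void* p)
    {
        // free(p);                 // rejected under this proposal: only
                                    // @trusted functions may call @unsafe ones
    }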