
digitalmars.D - Safety, undefined behavior, @safe, @trusted

reply Walter Bright <newshound1 digitalmars.com> writes:
Following the safe D discussions, I've had a bit of a change of mind. 
Time for a new strawman.

Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined 
as the subset of D that guarantees no undefined behavior. Implementation 
defined behavior (such as varying pointer sizes) is still allowed.

Memory safety is a subset of this. Undefined behavior nicely covers 
things like casting away const and shared.

Safety has a lot in common with function purity, which is set by an 
attribute and verified by the compiler. Purity is a subset of safety.

Safety seems more and more to be a characteristic of a function, rather 
than a module or command line switch. To that end, I propose two new 
attributes:

 @safe
 @trusted

A function marked as @safe cannot use any construct that could result in 
undefined behavior. An @safe function can only call other @safe 
functions or @trusted functions.

A function marked as @trusted is assumed to be safe by the compiler, but 
is not checked. It can call any function.

Functions not marked as @safe or @trusted can call any function.

To mark an entire module as safe, add the line:

     @safe:

after the module statement. Ditto for marking the whole module as 
@trusted. An entire application can be checked for safety by making 
main() @safe:

      @safe int main() { ... }

This proposal eliminates the need for command line switches, and 
versioning based on safety.
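Put together, a module under this proposal might look like the following sketch. Nothing here is implemented yet; the attribute names and rules are taken from the proposal above, and the function names are made up for illustration:

```d
// A sketch of the proposed attributes, using the syntax from this post.
@safe:   // marks everything below as checked for safety

int sum(const int[] data)
{
    int total = 0;
    foreach (x; data)
        total += x;
    return total;
}

// Hand-verified code: the compiler does not check it, but @safe
// callers may use it because the author vouches for it.
@trusted void* rawAlloc(size_t n)
{
    import core.stdc.stdlib : malloc;
    return malloc(n);   // raw C allocation: unverifiable, hence @trusted
}

int main()   // picks up @safe from the module-level attribute above
{
    return sum([1, 2, 3]) - 6;   // exit code 0
}
```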
Nov 05 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 [Walter's proposal quoted in full - snip]

Vote++. The thing I like about it is that, if you've got a well-debugged function that does some well-encapsulated unsafe things (performance hacks, etc.) and needs to be called by safe functions, safety doesn't become viral and force you to reexamine the implementation of your well-encapsulated unsafe function.

On the other hand, if you've got a function that does non-encapsulated unsafe things that the caller has to understand in order to use it properly (e.g. something like GC.setAttr, which can make regions of memory unscanned by the GC), this should *not* be callable from safe code no matter what.

As long as I know that safety isn't going to be viral and force me to modify code that's got tons of performance hacks internally but has a safe interface (and as long as getopt gets fixed), I'm actually starting to like SafeD.
Nov 05 2009
prev sibling next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 [Walter's proposal quoted in full - snip]

Oh yeah, and now that it looks like D is getting annotations, can/should we make pure and nothrow annotations, i.e. @pure and @nothrow, for consistency?
Nov 05 2009
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 13:33:09 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 [Walter's proposal quoted in full - snip]

I like how the attribute can be applied at different levels. Sounds good to me. Should you also be able to mark a whole struct/class as @safe/@trusted, since it's generally a container for member functions?

Care to define some rules for "undefined behavior?"

-Steve
Nov 05 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 Care to define some rules for "undefined behavior?"

My list may be of help. Andrei
Nov 05 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 Sounds 
 good to me.  Should you also be able to mark a whole struct/class as 
 @safe/@trusted, since it's generally a container for member functions?

Yes.
 Care to define some rules for "undefined behavior?"

I suppose I need to come up with a formal definition for it, but it's essentially meaning your program is going to do something arbitrary that's outside of the specification of the language. Basically, you're stepping outside of the domain of the language. For example, assigning a random value to a pointer and then trying to read it is undefined behavior. Casting const away and then modifying the value is undefined behavior.
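The two constructs Walter names can be spelled out in a few lines. Both of these compile today, but executing the commented lines has no defined meaning, which is exactly the kind of thing @safe would reject:

```d
void undefinedExamples()
{
    // 1. Assigning an arbitrary value to a pointer, then reading it:
    int* p = cast(int*) 0xDEADBEEF;
    // int x = *p;    // undefined behavior if executed

    // 2. Casting away const and modifying the value:
    const int c = 42;
    int* q = cast(int*) &c;
    // *q = 0;        // undefined behavior if executed
}
```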
Nov 05 2009
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:hcvbu7$9i$1 digitalmars.com...
 For example, assigning a random value to a pointer and then trying to read 
 it is undefined behavior.

How would the compiler be able to detect that? Or do you mean assigning an arbitrary value?
Nov 05 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Nick Sabalausky (a a.a)'s article
 "Walter Bright" <newshound1 digitalmars.com> wrote in message
 news:hcvbu7$9i$1 digitalmars.com...
 For example, assigning a random value to a pointer and then trying to read
 it is undefined behavior.

 How would the compiler be able to detect that? Or do you mean assigning an 
 arbitrary value?

How do you generate a random pointer without casting it from an int (not allowed in SafeD) or doing pointer arithmetic (not allowed in SafeD)?
Nov 05 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:hcvbu7$9i$1 digitalmars.com...
 For example, assigning a random value to a pointer and then trying to read 
 it is undefined behavior.

How would the compiler be able to detect that? Or do you mean assigning an arbitrary value?

By disallowing casting an int to a pointer.
Nov 05 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Steven Schveighoffer wrote:
 Sounds
 good to me.  Should you also be able to mark a whole struct/class as
  safe/ trusted, since it's generally a container for member functions?

 Care to define some rules for "undefined behavior?"

I suppose I need to come up with a formal definition for it, but it's essentially meaning your program is going to do something arbitrary that's outside of the specification of the language. Basically, you're stepping outside of the domain of the language. For example, assigning a random value to a pointer and then trying to read it is undefined behavior. Casting const away and then modifying the value is undefined behavior.

What about threading? I can't see how you could statically prove that a multithreaded program did not have any undefined behavior, especially before shared is fully implemented. To truly ensure no undefined behavior, you'd need the following in the c'tor for core.thread.Thread:

version(safe) {
    assert(0);
}
Nov 05 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 What about threading?  I can't see how you could statically prove that a
 multithreaded program did not have any undefined behavior, especially before
 shared is fully implemented.

We definitely have more work to do on the threading model, but I don't think it's an insurmountable problem. I also don't think things like race conditions are undefined behavior.
Nov 05 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 Following the safe D discussions,

I was busy, I am sorry, I have not followed it, but I have read part of the PDF of Cardelli shown by Andrei.
 Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined 
 as the subset of D that guarantees no undefined behavior.

I have been asking for this for a long time, because SafeD has meant MemorySafeD so far. So D is again converging toward something similar to what C# chose a long time ago. Maybe it's time to take a better look at C#.
 Safety seems more and more to be a characteristic of a function, rather 
 than a module or command line switch.

In C# you use something like:

unsafe {
    // lot of code
}

I think that's a good solution. Things are meant as safe unless marked as unsafe. You can mark a whole block of code as unsafe by putting it into those brackets. D may do the same. Also single functions may have the unsafe attribute.
 To mark an entire module as safe, add the line:
 
      safe:
 
 after the module statement. Ditto for marking the whole module as 
  trusted.

Why is this not good?

module foo(unsafe);

(But the unsafe {...} may be enough for a whole module too.)
 An entire application can be checked for safety by making 
 main() safe:
 
       safe int main() { ... }
 
 This proposal eliminates the need for command line switches, and 
 versioning based on safety.

I may have missed some important parts of the discussion, but this seems fit for a command line switch. (C# uses one for this purpose: if your program has one or more unsafe parts, it needs an unsafe command line switch. I agree it's a bit of a blunt tool.)

Bye,
bearophile
Nov 05 2009
parent div0 <div0 users.sourceforge.net> writes:

bearophile wrote:
 
 In C# you use something like:
 
 unsafe {
   // lot of code
 }
 
 I think that's a good solution. Things are meant as safe unless marked as
 Unsafe. You can mark a whole block of code as unsafe putting it into
 those brackets. D may do the same. Also single functions may have the  unsafe


That's not the whole story though. Use an unsafe block and you have to throw the appropriate compiler switch '/unsafe', and your entire assembly can only be used if it's 'fully trusted'. Though I forget exactly what that means, 'trust' is determined at runtime, so e.g. code run from a web browser can't call your assembly.

-- 
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk
Nov 05 2009
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-11-05 13:33:09 -0500, Walter Bright <newshound1 digitalmars.com> said:

 Safety seems more and more to be a characteristic of a function, rather 
 than a module or command line switch. To that end, I propose two new 
 attributes:
 
 @safe
 @trusted

Looks like a good proposal. That said, since most functions are probably going to be safe, wouldn't it be better to remove @safe and replace it by its counterpart: an @unsafe attribute? This would make things safe by default, which is undoubtedly safer, and avoid the unnecessary clutter of @safe annotations everywhere.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Nov 05 2009
next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 14:57:48 -0500, Michel Fortin  
<michel.fortin michelf.com> wrote:

 On 2009-11-05 13:33:09 -0500, Walter Bright <newshound1 digitalmars.com>  
 said:

 Safety seems more and more to be a characteristic of a function, rather  
 than a module or command line switch. To that end, I propose two new  
 attributes:
   @safe
  @trusted

Looks like a good proposal. That said, since most functions are probably going to be safe, wouldn't it be better to remove @safe and replace it by its counterpart: an @unsafe attribute? This would make things safe by default, which is undoubtedly safer, and avoid the unnecessary clutter of @safe annotations everywhere.

If unsafe means you cannot pass pointers to local variables, then half of tango (and other performance-oriented libs which use stack allocation as much as possible) will fail to compile.

My vote is for unsafe as the default. It's the least intrusive option, to ensure that current projects still compile. Then let the project authors ensure their projects are safe one module/function at a time.

Also keep in mind that @safe annotations for a mostly safe project will appear once at the top of each module. They won't be "everywhere".

-Steve
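The stack-allocation pattern at issue here is easy to sketch, along with why a strict checker has to worry about it. The names `process` and `lastSeen` below are made up for illustration:

```d
ubyte[] lastSeen;   // module-level: an easy way for a reference to escape

void process(ubyte[] buf)
{
    lastSeen = buf;     // the callee quietly keeps the reference
}

void caller()
{
    ubyte[1024] buffer;   // cheap stack allocation, no GC involvement
    process(buffer[]);    // implicitly takes the address of a local
}   // 'buffer' is gone here, but 'lastSeen' still points at it
```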
Nov 05 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 If unsafe means you cannot pass pointers to local variables, then half 
 of tango (and other performance oriented libs which use stack allocation 
 as much as possible) will fail to compile.

 My vote is for unsafe as the default.  It's the least intrusive option, 
 to ensure that current projects still compile.  Then let the project 
 authors ensure their projects are safe one module/function at a time.

I agree. Also, dealing with safeness is something that comes later on as a project scales to a larger size. As such, it's more of a nuisance on a small program than a help.
 Also keep in mind that @safe annotations for a mostly safe project will 
 be once at the top of each module.  They won't be "everywhere".

Right. Adding:

@safe:

at the top will do it.
Nov 05 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I agree. Also, dealing with safeness is something that comes later on as 
 a project scales to a larger size. As such, it's more of a nuisance on a 
 small program than a help.

I don't know. In modern languages safety is a starting point. Things are safe unless defined otherwise. In C# default is for safety.
 Right. Adding:
 
      @safe:
 
 at the top will do it.

See my first answer to this thread. And the same can be said about a modifier 'unsafe' that applies to the whole code of a module. So half of Tango will compile again. Bye, bearophile
Nov 05 2009
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright wrote:
 Steven Schveighoffer wrote:
 If unsafe means you cannot pass pointers to local variables, then half 
 of tango (and other performance oriented libs which use stack 
 allocation as much as possible) will fail to compile.

 My vote is for unsafe as the default.  It's the least intrusive 
 option, to ensure that current projects still compile.  Then let the 
 project authors ensure their projects are safe one module/function at 
 a time.

I agree. Also, dealing with safeness is something that comes later on as a project scales to a larger size. As such, it's more of a nuisance on a small program than a help.
 Also keep in mind that @safe annotations for a mostly safe project 
 will be once at the top of each module.  They won't be "everywhere".

Right. Adding:

@safe:

at the top will do it.

But that forces a library writer to *always* think about safety. I can imagine you implementing this and then 100 bugzilla tickets saying "I can't call phobos' function foo in my safe function because it is not marked as safe". Then they have to wait for the next release. And the same will happen with library writers. I don't want to think about safety all the time, just let me code! If something is unsafe I'll mark it for you, compiler, no problem, but do you think I'm just some crazy unsafe maniac? I program safely.
Nov 05 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Adam D. Ruppe wrote:
 With safe by default, you'd probably make existing code compile just by
 slapping @trusted: at the top and being done with it. That's not actually
 safe - you're just telling the compiler to shut up about it.

That's right, and it's exactly what happened when Java required exception specifications for all thrown exceptions. It's viral, and people would just write wrappers to catch/ignore all exceptions, intending to "fix it" later. But the fixing later never came, and the app would silently ignore all errors.
Nov 05 2009
prev sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-11-05 16:40:11 -0500, "Adam D. Ruppe" <destructionator gmail.com> said:

 Which is going to work best for existing code? With Walter's idea, you
 compile it, then fix functions piece by piece to make them safe. Since your
 other unsafe functions can still call them, the change is localized and you
 get safer with each revision.
 
 With safe by default, you'd probably make existing code compile just by
 slapping @trusted: at the top and being done with it. That's not actually
 safe - you're just telling the compiler to shut up about it.

That's a great point. Thank you Adam. I changed my mind, let's keep unsafe as the default. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
Nov 05 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Thu, 05 Nov 2009 14:57:48 -0500, Michel Fortin 
 <michel.fortin michelf.com> wrote:
 
 On 2009-11-05 13:33:09 -0500, Walter Bright 
 <newshound1 digitalmars.com> said:

 Safety seems more and more to be a characteristic of a function, 
 rather than a module or command line switch. To that end, I propose 
 two new attributes:
   @safe
  @trusted

Looks like a good proposal. That said, since most functions are probably going to be safe, wouldn't it be better to remove @safe and replace it by its counterpart: an @unsafe attribute? This would make things safe by default, which is undoubtedly safer, and avoid the unnecessary clutter of @safe annotations everywhere.

If unsafe means you cannot pass pointers to local variables, then half of tango (and other performance oriented libs which use stack allocation as much as possible) will fail to compile.

While I agree with your point, quick question: could you use ref parameters instead? Ref will be usable in SafeD. Andrei
Nov 05 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Thu, 05 Nov 2009 15:20:34 -0500, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 [snip]

While I agree with your point, quick question: could you use ref parameters instead? Ref will be usable in SafeD.

 Most of the usages are like this:

     ubyte[1024] buffer;
     functionThatNeedsBufferSpace(buffer);

 where functionThatNeedsBufferSpace takes a ubyte[], thereby taking an address of the local data. So it's not explicit address taking, but it's the same thing under the hood. There always exists the potential for the stack reference to escape.

I see, thank you. I am confident that a @trusted reap could be implemented in the standard library. (google reap)
 Similar case is scope classes (which are sometimes used to allocate a 
 temporary class for performance in tango).  I can't see scope classes 
 being allowed if you can't take addresses of local variables.

Yah, in fact I think scope classes should just go. But don't hold that against me. :o)
 I'm not saying that SafeD needs to allow these kinds of things somehow 
 (thereby employing escape analysis), but it might be too restrictive for 
 performance-oriented libs/apps.  I think it's acceptable that tango 
 eventually gets marked with @trusted, but by making @safe the default, 
 you will immediately make many D1 projects not compile without 
 significant effort, which might drive away developers from D2, or else 
 make people automatically mark all files as @trusted without thinking 
 about it.  By defaulting to untrusted and unsafe, you allow people to 
 incrementally add @safe and @trusted tags where they are appropriate and 
 correct.

I agree. Andrei
Nov 05 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Thu, 05 Nov 2009 16:30:42 -0500, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 [snip]

 I see, thank you. I am confident that a @trusted reap could be implemented in the standard library. (google reap)

I did. Couldn't find anything.

Damn acronyms, sorry. Better results: "reap memory allocation"

ftp://ftp.cs.utexas.edu/pub/emery/papers/reconsidering-custom.pdf

Andrei
Nov 05 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 [snip]
ftp://ftp.cs.utexas.edu/pub/emery/papers/reconsidering-custom.pdf Andrei

Ok, I understand the basic principle of a reap, but if it's going to convert to a heap when you try to delete something, why not just improve the standard GC heap, i.e. by making per-thread heaps? If you're not going to delete stuff, why not just use a regular old region or stack (not necessarily the call stack, but a stack of some kind)?
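For reference, the "regular old region" is only a few lines. A minimal sketch (bump-pointer allocation out of one block, no per-object delete, everything released at once; a reap is essentially this plus a real heap to fall back on for deletions):

```d
struct Region
{
    private ubyte[] block;
    private size_t used;

    this(size_t capacity) { block = new ubyte[capacity]; }

    void[] alloc(size_t n)
    {
        immutable aligned = (n + 15) & ~cast(size_t) 15;  // 16-byte align
        assert(used + aligned <= block.length, "region exhausted");
        auto p = block[used .. used + n];
        used += aligned;
        return p;
    }

    void releaseAll() { used = 0; }   // frees everything in O(1)
}
```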
Nov 05 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 [snip]
Ok, I understand the basic principle of a reap, but if it's going to convert to a heap when you try to delete something, why not just improve the standard GC heap, i.e. by making per-thread heaps? If you're not going to delete stuff, why not just use a regular old region or stack (not necessarily the call stack, but a stack of some kind)?

Perhaps a region could also be defined as a @trusted facility! So much good stuff to do, so little time...

Andrei
Nov 05 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 [snip]

 Perhaps a region could also be defined as a @trusted facility! So much 
 good stuff to do, so little time... Andrei

I have my doubts only b/c I've worked on similar things before and the thing that keeps biting me in the $$ is how to handle the GC. If you allocate some huge block of memory to parcel out and have it all be scanned by the GC, you're asking for slow scan times and lots of false pointers, thus largely defeating the purpose of the region scheme. (My precise heap scanning patch doesn't address this, as it assumes that untyped memory blocks will be allocated using the conservative bit mask and scanned conservatively, or not scanned at all.) If you don't have it scanned by the GC, then you can't store the only reference to a GC-allocated object in the region.

I chose the latter for TempAlloc, and it's still ridiculously useful in the small niche of dstats, where I need to allocate tons of temporary arrays of numbers or copies of arrays that are already in memory but need to be sorted, etc. However, it's a shoe with a gun built in. If it wasn't, I'd recommend it for Phobos. If I made it scanned by the GC, performance and space overhead would likely be so bad that it would be useless.
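The "can't store the only reference" hazard above can be shown in a few lines. This sketch uses malloc'd memory to stand in for any unscanned region; `Payload` and `hazard` are made up for illustration:

```d
import core.memory : GC;
import core.stdc.stdlib : malloc;

class Payload { int[1000] data; }

void hazard()
{
    // malloc'd memory is invisible to the GC unless registered.
    auto slot = cast(Payload*) malloc((void*).sizeof);
    *slot = new Payload();   // only reference now lives in unscanned memory
    GC.collect();            // may reclaim the Payload: *slot dangles
    // The fixes each cost something: GC.addRange(slot, (void*).sizeof)
    // makes the region scanned (slow scans, false pointers), or keep a
    // second reference somewhere the GC does scan.
}
```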
Nov 05 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 [snip]
I have my doubts only b/c I've worked on similar things before and the thing that keeps biting me in the $$ is how to handle the GC. If you allocate some huge block of memory to parcel out and have it all be scanned by the GC, you're asking for slow scan times and lots of false pointers, thus largely defeating the purpose of the region scheme.

Well I'm thinking that often when you use a region, the memory consumption is not really large. If it's really large, then you may be better off just using the GC because it means you do a lot of stuff. But I'm sure I'm ignoring a few important applications. Andrei
Nov 05 2009
parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Steven Schveighoffer wrote:
 On Thu, 05 Nov 2009 16:30:42 -0500, Andrei Alexandrescu
 <SeeWebsiteForEmail erdani.org> wrote:

  Most of the usages are like this:
  ubyte[1024] buffer;
 functionThatNeedsBufferSpace(buffer);
  where functionThatNeedsBufferSpace takes a ubyte[], thereby taking
 an address of the local data.
  So it's not explicit address taking, but it's the same thing under
 the hood.  There always exists the potential for the stack reference
 to escape.

I see, thank you. I am confident that a @trusted reap could be implemented in the standard library. (google reap)


ftp://ftp.cs.utexas.edu/pub/emery/papers/reconsidering-custom.pdf Andrei





 Ok, I understand the basic principle of a reap, but if it's going to convert
 to a heap when you try to delete something, why not just improve the standard
 GC heap, i.e. by making per-thread heaps?  If you're not going to delete
 stuff, why not just use a regular old region or stack (not necessarily the
 call stack, but a stack of some kind)?

good stuff to do, so little time... Andrei

I have my doubts only b/c I've worked on similar things before and the thing that keeps biting me in the $$ is how to handle the GC. If you allocate some huge block of memory to parcel out and have it all be scanned by the GC, you're asking for slow scan times and lots of false pointers, thus largely defeating the purpose of the region scheme.

Well I'm thinking that often when you use a region, the memory consumption is not really large. If it's really large, then you may be better off just using the GC because it means you do a lot of stuff. But I'm sure I'm ignoring a few important applications. Andrei

By not really large, how big are we talking? Less than a few 10s of KB? If so, I think just having the whole thing scanned would be feasible.
Nov 05 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Well I'm thinking that often when you use a region, the memory
 consumption is not really large. If it's really large, then you may be
 better off just using the GC because it means you do a lot of stuff. But
 I'm sure I'm ignoring a few important applications.
 Andrei

By not really large, how big are we talking? Less than a few 10s of KB? If so, I think just having the whole thing scanned would be feasible.

Honest, I don't know. Only real-life usage might tell. At any rate, a few 10s of KBs would definitely work for many of my own applications. Andrei
Nov 05 2009
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 dsimcha wrote:
 == Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Well I'm thinking that often when you use a region, the memory
 consumption is not really large. If it's really large, then you may be
 better off just using the GC because it means you do a lot of stuff. But
 I'm sure I'm ignoring a few important applications.
 Andrei

By not really large, how big are we talking? Less than a few 10s of KB? If so, I think just having the whole thing scanned would be feasible.

Honest, I don't know. Only real-life usage might tell. At any rate, a few 10s of KBs would definitely work for many of my own applications. Andrei

Ok, now we're getting somewhere.  I guess if enough ppl find a few 10s of k
useful, we could just make GC scanning and block size configurable.  Maybe
through some static if's we could make it only @trusted if it's scanned by
the GC.  I'm hoping to make a region template that can go into Phobos, and,
in some instantiation, can satisfy just about everyone, and then make
TempAlloc simply an instantiation of this template for my own personal use.

Here are some questions about how this should work:

1.  What should happen if you try to allocate more space than you have in
the region?  Should it silently fall back to heap allocation?  Should it
throw an exception?  Should it return null?  Should it silently allocate
another region block?

2.  Should the region also allow freeing memory in last in, first out
order, behaving somewhat as a stack?  The advantage to doing so would be to
increase flexibility.  The downside is that you would have to do a little
bookkeeping internally, and the overhead of this might be too high for the
use case of lots of tiny allocations.  Maybe this should also be a policy.
Also, if it's a region + a stack, can we call it a "rack"?

3.  Should it be designed as a normal object (more flexible but less
convenient) or a thread-local singleton that lazily initializes itself and
is just there, kind of like malloc (more convenient but less flexible)?

4.  Any other generic comments on how this should be designed?
Nov 05 2009
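For what it's worth, question 1 above maps naturally onto a template policy. A minimal sketch of what that could look like, with the out-of-space behavior chosen at compile time - every name here is hypothetical, nothing like this exists in Phobos:

```d
// Hypothetical sketch only: a region that parcels out one big block,
// with the out-of-space behavior selected by a policy parameter.
enum OnOverflow { useGC, throwException, returnNull }

struct Region(OnOverflow policy = OnOverflow.useGC)
{
    private ubyte[] block; // the memory being parceled out
    private size_t used;   // bytes handed out so far

    this(size_t capacity) { block = new ubyte[capacity]; }

    void[] allocate(size_t nbytes)
    {
        if (used + nbytes > block.length)
        {
            static if (policy == OnOverflow.useGC)
                return new ubyte[nbytes];            // silent heap fallback
            else static if (policy == OnOverflow.throwException)
                throw new Exception("region exhausted");
            else
                return null;
        }
        auto result = cast(void[]) block[used .. used + nbytes];
        used += nbytes;
        return result;
    }
}
```

Whether `block` is allocated GC-scanned could likewise be a policy, and that is the knob that would decide whether a given instantiation can plausibly be marked @trusted.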
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 Ok, I understand the basic principle of a reap, but if it's going to convert
to a
 heap when you try to delete something, why not just improve the standard GC
heap,
 i.e. by making per-thread heaps?

The problem with per-thread heaps is immutable data can be passed between threads.
Nov 05 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 dsimcha wrote:
 Ok, I understand the basic principle of a reap, but if it's going to convert
to a
 heap when you try to delete something, why not just improve the standard GC
heap,
 i.e. by making per-thread heaps?

The problem with per-thread heaps is immutable data can be passed between threads.

What does this have to do with anything? If allocation is done w/o locking by using TLS, (manual) freeing could simply check that the thread ID of the block matches the thread ID of the thread doing the freeing and if so, free the memory to the thread-local heap w/o locking. Of course, GCing would still have to stop the world.
Nov 05 2009
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-11-05 17:11:38 -0500, Walter Bright <newshound1 digitalmars.com> said:

 dsimcha wrote:
 Ok, I understand the basic principle of a reap, but if it's going to 
 convert to a
 heap when you try to delete something, why not just improve the 
 standard GC heap,
 i.e. by making per-thread heaps?

The problem with per-thread heaps is immutable data can be passed between threads.

Well, if that's a problem you could fix it by making immutable not shared
unless you also put the shared attribute:

    immutable Object o;        // thread-local
    shared immutable Object o; // visible from all threads

I think having per-thread heaps is a worthy goal.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Nov 05 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Michel Fortin wrote:
 On 2009-11-05 17:11:38 -0500, Walter Bright <newshound1 digitalmars.com> 
 said:
 
 dsimcha wrote:
 Ok, I understand the basic principle of a reap, but if it's going to 
 convert to a
 heap when you try to delete something, why not just improve the 
 standard GC heap,
 i.e. by making per-thread heaps?

The problem with per-thread heaps is immutable data can be passed between threads.

Well, if that's a problem you could fix it by making immutable not shared
unless you also put the shared attribute:

    immutable Object o;        // thread-local
    shared immutable Object o; // visible from all threads

Aaggghhhh !!! <g>
 
 I think having per-thread heaps is a worthy goal.
 

Nov 05 2009
parent Ellery Newcomer <ellery-newcomer utulsa.edu> writes:
Walter Bright wrote:
 Well, if that's a problem you could fix it by making immutable not
 shared unless you also put the shared attribute:

     immutable Object o;        // thread-local
     shared immutable Object o; // visible from all threads

Aaggghhhh !!! <g>

ditto
Nov 05 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 15:20:34 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Steven Schveighoffer wrote:
 On Thu, 05 Nov 2009 14:57:48 -0500, Michel Fortin  
 <michel.fortin michelf.com> wrote:

 On 2009-11-05 13:33:09 -0500, Walter Bright  
 <newshound1 digitalmars.com> said:

 Safety seems more and more to be a characteristic of a function,  
 rather than a module or command line switch. To that end, I propose  
 two new attributes:
  @safe
  @trusted

Looks like a good proposal. That said, since most functions are probably going to be safe, wouldn't it be better to remove @safe and replace it by its counterpart: an @unsafe attribute? This would make things safe by default, which is undoubtedly safer, and avoid the unnecessary clutter of @safe annotations everywhere.

of tango (and other performance oriented libs which use stack allocation as much as possible) will fail to compile.

While I agree with your point, quick question: could you use ref parameters instead? Ref will be usable in SafeD.

Most of the usages are like this:

ubyte[1024] buffer;
functionThatNeedsBufferSpace(buffer);

where functionThatNeedsBufferSpace takes a ubyte[], thereby taking an
address of the local data.  So it's not explicit address taking, but it's
the same thing under the hood.  There always exists the potential for the
stack reference to escape.

Similar case is scope classes (which are sometimes used to allocate a
temporary class for performance in tango).  I can't see scope classes being
allowed if you can't take addresses of local variables.

I'm not saying that SafeD needs to allow these kinds of things somehow
(thereby employing escape analysis), but it might be too restrictive for
performance-oriented libs/apps.  I think it's acceptable that tango
eventually gets marked with @trusted, but by making @safe the default, you
will immediately make many D1 projects not compile without significant
effort, which might drive away developers from D2, or else make people
automatically mark all files as @trusted without thinking about it.  By
defaulting to untrusted and unsafe, you allow people to incrementally add
@safe and @trusted tags where they are appropriate and correct.

-Steve
Nov 05 2009
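The escape hazard Steve describes can be sketched in a few lines (the global and the function names are made up for illustration):

```d
// Sketch: nothing in the signature stops a slice of stack memory
// from outliving the frame it points into.
ubyte[] leaked; // hypothetical global, just to make the escape visible

void functionThatNeedsBufferSpace(ubyte[] buf)
{
    leaked = buf; // the callee quietly keeps the slice
}

void caller()
{
    ubyte[1024] buffer;                   // stack storage
    functionThatNeedsBufferSpace(buffer); // implicitly slices the stack array
} // once caller() returns, `leaked` dangles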
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Thu, Nov 05, 2009 at 10:19:27PM +0100, Ary Borenszweig wrote:
 I don't want to think about 
 safety all the time, just let me code! If something is unsafe I'll mark 
 it for you, compiler, no problem, but do you think I'm just some crazy 
 unsafe maniac? I program safely.

This might be a problem.  If safe functions can only call other safe
functions and all functions are safe by default, unsafe becomes viral.  Let
me give an example:

void main() {
     doSomething();
     doSomethingElse();
}

void doSomething() { /* does safe things */ }

void doSomethingElse() {
     oneMoreFunction();
}

void oneMoreFunction() {
     byte* a = cast(byte*) 0xb80000000L; // unsafe!
}

Now, to actually call oneMoreFunction, you have to mark it as unsafe.  Then,
since it is called in doSomethingElse, you have to mark it as unsafe.  Then,
since it is called from main, it too must be marked unsafe.  This would just
get annoying.

This is bypassed by marking oneMoreFunction() as @trusted.  Having an
@unsafe attribute is unworkable in safe by default.  It is just default
(safe) and marked (@trusted).

Which is going to work best for existing code?  With Walter's idea, you
compile it, then fix functions piece by piece to make them safe.  Since your
other unsafe functions can still call them, the change is localized and you
get safer with each revision.

With safe by default, you'd probably make existing code compile just by
slapping @trusted: at the top and being done with it.  That's not actually
safe - you're just telling the compiler to shut up about it.

-- 
Adam D. Ruppe
http://arsdnet.net
Nov 05 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 16:30:42 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

  Most of the usages are like this:
  ubyte[1024] buffer;
 functionThatNeedsBufferSpace(buffer);
  where functionThatNeedsBufferSpace takes a ubyte[], thereby taking an  
 address of the local data.
  So it's not explicit address taking, but it's the same thing under the  
 hood.  There always exists the potential for the stack reference to  
 escape.

I see, thank you. I am confident that a trusted reap could be implemented in the standard library. (google reap)

I did. Couldn't find anything.
 Similar case is scope classes (which are sometimes used to allocate a  
 temporary class for performance in tango).  I can't see scope classes  
 being allowed if you can't take addresses of local variables.

Yah, in fact I think scope classes should just go. But don't hold that against me. :o)

They are great when you need temporary objects to do things like filters. It works well in Tango's i/o subsystem where everything uses interfaces. I think they have their place, as long as structs don't implement interfaces and the interface concept isn't removed from D. -Steve
Nov 05 2009
prev sibling next sibling parent "Phil Deets" <pjdeets2 gmail.com> writes:
On Thu, 05 Nov 2009 16:40:11 -0500, Adam D. Ruppe  
<destructionator gmail.com> wrote:

 On Thu, Nov 05, 2009 at 10:19:27PM +0100, Ary Borenszweig wrote:
 I don't want to think about
 safety all the time, just let me code! If something is unsafe I'll mark
 it for you, compiler, no problem, but do you think I'm just some crazy
 unsafe maniac? I program safely.

This might be a problem.  If safe functions can only call other safe
functions and all functions are safe by default, unsafe becomes viral.  Let
me give an example:

void main() {
     doSomething();
     doSomethingElse();
}

void doSomething() { /* does safe things */ }

void doSomethingElse() {
     oneMoreFunction();
}

void oneMoreFunction() {
     byte* a = cast(byte*) 0xb80000000L; // unsafe!
}

Now, to actually call oneMoreFunction, you have to mark it as unsafe.  Then,
since it is called in doSomethingElse, you have to mark it as unsafe.  Then,
since it is called from main, it too must be marked unsafe.  This would just
get annoying.

This is bypassed by marking oneMoreFunction() as @trusted.  Having an
@unsafe attribute is unworkable in safe by default.  It is just default
(safe) and marked (@trusted).

Which is going to work best for existing code?  With Walter's idea, you
compile it, then fix functions piece by piece to make them safe.  Since your
other unsafe functions can still call them, the change is localized and you
get safer with each revision.

With safe by default, you'd probably make existing code compile just by
slapping @trusted: at the top and being done with it.  That's not actually
safe - you're just telling the compiler to shut up about it.

Right. Pure propagates toward callees. C++'s const member functions propagate towards callees. I think we should use @safe since it too propagates toward callees. Having @safe be default would cause an @unsafe attribute to propagate back toward callers, which seems backwards.
Nov 05 2009
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Thu, 05 Nov 2009 17:11:38 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 dsimcha wrote:
 Ok, I understand the basic principle of a reap, but if it's going to  
 convert to a
 heap when you try to delete something, why not just improve the  
 standard GC heap,
 i.e. by making per-thread heaps?

The problem with per-thread heaps is immutable data can be passed between threads.

I always assumed that with thread local heaps, both immutable and shared data would be allocated from a shared heap. Also, the shared heap should use some form of thread local allocation.
Nov 05 2009
prev sibling parent rmcguire <rjmcguire gmail.com> writes:
Ellery Newcomer <ellery-newcomer utulsa.edu> wrote:
 
 Walter Bright wrote:
 Well, if that's a problem you could fix it by making immutable not
 shared unless you also put the shared attribute:

     immutable Object o;        // thread-local
     shared immutable Object o; // visible from all threads

Aaggghhhh !!! <g>

ditto

allocated? That way you could prove? that the data is immutable. -Rory
Nov 05 2009
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:hcv5p9$2jh1$1 digitalmars.com...
 Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined 
 as the subset of D that guarantees no undefined behavior. Implementation 
 defined behavior (such as varying pointer sizes) is still allowed.

 Safety seems more and more to be a characteristic of a function, rather 
 than a module or command line switch. To that end, I propose two new 
 attributes:

  @safe
  @trusted

Sounds great! The lower-grained safeness makes a lot of sense, and I'm thrilled at the idea of safe D finally encompassing more than just memory safety - I'd been hoping to see that happen ever since I first heard that "safeD" only meant memory-safe.
Nov 05 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:hcv5p9$2jh1$1 digitalmars.com...
 Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined 
 as the subset of D that guarantees no undefined behavior. Implementation 
 defined behavior (such as varying pointer sizes) is still allowed.

 Safety seems more and more to be a characteristic of a function, rather 
 than a module or command line switch. To that end, I propose two new 
 attributes:

  @safe
  @trusted

Sounds great! The lower-grained safeness makes a lot of sense, and I'm thrilled at the idea of safe D finally encompassing more than just memory safety - I'd been hoping to see that happen ever since I first heard that "safeD" only meant memory-safe.

I can think of division by zero as an example. What others are out there? Andrei
Nov 05 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:
 Sounds great! The lower-grained safeness makes a lot of sense, and I'm 
 thrilled at the idea of safe D finally encompassing more than just 
 memory safety - I'd been hoping to see that happen ever since I first 
 heard that "safeD" only meant memory-safe. 

I can think of division by zero as an example. What others are out there?

Casting away const/immutable/shared.
Nov 05 2009
next sibling parent reply Jason House <jason.james.house gmail.com> writes:
Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:
 Sounds great! The lower-grained safeness makes a lot of sense, and I'm 
 thrilled at the idea of safe D finally encompassing more than just 
 memory safety - I'd been hoping to see that happen ever since I first 
 heard that "safeD" only ment memory-safe. 

I can think of division by zero as an example. What others are out there?

Casting away const/immutable/shared.

I posted in the other thread how casting to immutable/shared can be just as bad. A leaked reference prior to casting to immutable/shared is in effect the same as casting away shared. No matter how you mix thread local and shared, or mutable and immutable, you still have the same undefined behavior
Nov 05 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a trusted function.
Nov 05 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a trusted function.

Are we in agreement that @safe functions have bounds checking on regardless of -release? Andrei
Nov 05 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 Walter Bright wrote:
 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a trusted function.

Are we in agreement that @safe functions have bounds checking on regardless of -release? Andrei

I'd vote for this. I've wanted, for a while, a way to have finer-grained control over bounds checking anyhow. In non-performance-critical pieces of code it seems like a no-brainer to leave it on all the time, just to be safe. In performance-critical code, it's a no-brainer that it has to be turned off after debugging. Right now I almost never use bounds checking except when I already know I have a bug and am trying to find it because it's just too slow. I'd love to have it as a safety net in the 90+% of my code that isn't performance-critical.
Nov 05 2009
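Concretely, the check being debated is the one guarding every indexing operation; a tiny illustration (the exact error type thrown has varied across D releases, so treat that detail loosely):

```d
void main()
{
    auto a = new int[4];
    int i = 4;     // one past the end
    int x = a[i];  // checked build: halts with an out-of-bounds error here
                   // unchecked (-release) build: silently reads past the array
}
```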
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a trusted function.

Are we in agreement that safe functions have bounds checking on regardless of -release?

You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties. If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.
Nov 05 2009
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-11-05 19:14:47 -0500, Walter Bright <newshound1 digitalmars.com> said:

 Andrei Alexandrescu wrote:
 Are we in agreement that  safe functions have bounds checking on 
 regardless of -release?

You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties. If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.

But if you remove bound checking, it isn't safe anymore, is it?

Sometimes safety is more important than performance.  If I needed
performance in a safe program, I'd profile and find the bottlenecks, review
carefully those parts of the code slowing down the program, then when I
trust them perfectly I'd add the @trusted attribute.  @trusted should remove
bound checks (in release mode).  @safe should keep them to keep other less
trustworthy pieces of the program truly safe.

That said, I'd be in favor of a compiler switch to enable/disable runtime
checks in release mode... perhaps "-safe" could return as a way to generate
truly safe binaries even in release mode.  This would also make it pretty
easy to evaluate how much impact those runtime checks have on the final
executable (by turning the compiler switch on and off).

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Nov 05 2009
next sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-11-05 22:22:39 -0500, Leandro Lucarella <llucax gmail.com> said:

 Michel Fortin, el  5 de noviembre a las 19:43 me escribiste:
 But if you remove bound checking, it isn't safe anymore, is it?

100% safe doesn't exist. If you think you have it because of bound-checking, you are wrong.

True. What I meant was some things that were supposed to be safe in SafeD (arrays) are no longer safe, pretty much destroying the concept of SafeD being memory safe.
 Sometime safety is more important than performance. [...]

What if I'm using an external library that I don't control? *That's* the problem for me, I want to be able to compile things I *trust* as if they were *trusted* :) I vote for an -unsafe (and/or -disable-bound-check). Safe should be the default.

You're right. Having "-unsafe" to disable runtime checks is better than "-safe" to enable them because then the default behavior is safe. And it allows you to recompile any library you want with "-unsafe" to remove runtime checks from safe functions when you don't care about safety. -- Michel Fortin michel.fortin michelf.com http://michelf.com/
Nov 05 2009
prev sibling parent =?UTF-8?B?UGVsbGUgTcOlbnNzb24=?= <pelle.mansson gmail.com> writes:
Leandro Lucarella wrote:
 Michel Fortin, el  5 de noviembre a las 19:43 me escribiste:
 On 2009-11-05 19:14:47 -0500, Walter Bright <newshound1 digitalmars.com> said:

 Andrei Alexandrescu wrote:
 Are we in agreement that  safe functions have bounds checking on
 regardless of -release?

You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties. If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.


100% safe doesn't exist. If you think you have it because of bound-checking, you are wrong.
 Sometime safety is more important than performance. If I needed
 performance in a safe program, I'd profile and find the bottlenecks,
 review carefully those parts of the code slowing down the program,
 then when I trust them perfectly I'd add the  trusted attribute.
  trusted should remove bound checks (in release mode).  safe should
 keep them to keep other less trustworthy pieces of of the program
 truly safe.

What if I'm using an external library that I don't control? *That's* the problem for me, I want to be able to compile things I *trust* as if they were *trusted* :)
 That said, I'd be in favor of a compiler switch to enable/disable
 runtime checks in release mode... perhaps "-safe" could return as
 way to generate truly safe binaries even in release mode. This would
 also make it pretty easy to evaluate how much impact those runtime
 checks have on final executable (by turning on and off the compiler
 switch).

I vote for an -unsafe (and/or -disable-bound-check). Safe should be the default.

Doesn't @safe disable features? Like taking the address of things on the stack? If I am not mistaken and this is the case, I vote against default safe. It's not a big problem for bigger projects that actually need safety, and it doesn't mess up any quick-and-dirty code.
Nov 06 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a trusted function.

Are we in agreement that safe functions have bounds checking on regardless of -release?

You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties. If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.

This is a showstopper. What kind of reputation do you think D would achieve if "safe" code has buffer overrun attacks? A function that wants to rely on hand-made verification in lieu of bounds checks may go with @trusted. There is absolutely no way a @safe function could allow buffer overruns in D, ever. Andrei
Nov 05 2009
parent reply Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a trusted function.

Are we in agreement that safe functions have bounds checking on regardless of -release?

You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties. If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.

This is a showstopper. What kind of reputation do you think D would achieve if "safe" code has buffer overrun attacks? A function that wants to rely on hand-made verification in lieu of bounds checks may go with trusted. There is absolutely no way a safe function could allow buffer overruns in D, ever.

I agree if the flag is called "-release". But if the "disable bounds checking" flag is renamed to -unsafe or similar, I can't see any impact on reputation.
Nov 06 2009
parent Michal Minich <michal.minich gmail.com> writes:
 I agree if the flag is called "-release". But if the "disable bounds
 checking"  flag is renamed to -unsafe or similar, I can't see any
 impact on reputation.

Ditto.
Nov 06 2009
prev sibling parent Michal Minich <michal.minich gmail.com> writes:
Hello Andrei,

 Walter Bright wrote:
 
 Jason House wrote:
 
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have
 the same undefined behavior
 

behavior. Hence, such code would go into a trusted function.

regardless of -release? Andrei

I think there are two cases:

User would want max performance from some library, but he does not care
about safety.  User should have the right to override intentions of the
library writer, making his code less safe, to achieve speed.  Compiler flag
-unsafe (or -unsafe-no-bounds-checking) should do this.  The user will know
that he is *overriding* the original safety to a lower level.

Also, when a library writer wants his code to appeal and be usable to most
users, he should mark it @safe or @trusted.  But in a situation when he
knows that @safe code would always be bounds-checked (there is no -unsafe
compiler switch), he may rather mark his code @trusted, even if it would be
@safe, in order to appeal to users that don't want a slow (bounds-checked)
library.  If the library writer knows that users can override safety
(bounds-checking), he would not hesitate to use @safe where appropriate.

The -unsafe-no-bounds-checking flag is essential for the proper working of
safeness in D.
Nov 06 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 On Thu, 05 Nov 2009 17:49:33 -0500, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a @trusted function.

But how does such a trusted function guarantee that the invariant/shared reference has no other aliases?

It doesn't. Trusted code is verified by the programmer, not the compiler.
 The point is, there is no way to write 
 such a function in good faith because you can't guarantee it's actually 
 safe, it's still up to the user of the function.  My understanding is 
 that a @trusted function should be provably safe even if the compiler 
 can't prove it.
 
 -Steve

Nov 05 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 That is, I have a mutable reference x, I want to make it immutable.  How 
 do you write a function to do that?
 
 i.e.:
 
 @safe void foo()
 {
    x = new X();
    x.modifyState(5);
    immutable(X) ix = ???; // how to write this part
 }

If you, the writer of foo(), know that there are no other mutable references to x, you can cast it to immutable - but you'll have to mark the function as @trusted.
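Concretely, foo() from the quoted example might end up looking like this. This is only a sketch of the idea under discussion, assuming the @trusted syntax proposed at the top of the thread and Steven's hypothetical X class:

```d
class X
{
    int state;
    void modifyState(int s) { state = s; }
}

// Not checked by the compiler: the programmer vouches that no other
// mutable reference to the object exists at the point of the cast.
@trusted immutable(X) makeX()
{
    auto x = new X();            // the only reference to this object
    x.modifyState(5);
    return cast(immutable(X)) x; // valid only because x never escaped
}
```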
Nov 05 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 But what if I don't want the whole function to be @trusted, just that 
 creation section?  I have to create a new function just to create the data?

Separate out the trusted code to a separate function.
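A sketch of that separation (hypothetical assumeUnique helper name; X is the class from Steven's example). The unchecked cast is confined to one tiny @trusted function, so foo() itself stays @safe:

```d
// Programmer-verified: the caller must pass in the sole reference to x.
@trusted immutable(X) assumeUnique(X x)
{
    return cast(immutable(X)) x;
}

@safe void foo()
{
    auto x = new X();
    x.modifyState(5);
    immutable(X) ix = assumeUnique(x); // x must not be used afterwards
}
```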
 Maybe function-level granularity isn't good enough...

Nov 05 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Steven Schveighoffer wrote:
 However, I'll let it go, I don't know the ramifications since allocating 
 immutable objects is a rare occurrence, and I'm not sure how it will be 
 done.  I am also not sure how solid a use case this is (allocating an 
 object, then manipulating it via methods before changing it to immutable).

I don't have any solid history with this, it's just my opinion that the right place for it is at the function level. Experience may show your idea to be better, but it's easier to move in that direction later than to try and turn off support for unsafeness at the statement level.
Nov 05 2009
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:
 Sounds great! The lower-grained safeness makes a lot of sense, and 
 I'm thrilled at the idea of safe D finally encompassing more than 
 just memory safety - I'd been hoping to see that happen ever since I 
 first heard that "safeD" only meant memory-safe. 

I can think of division by zero as an example. What others are out there?

Casting away const/immutable/shared.

I think those lead to memory errors :o). Andrei
Nov 05 2009
prev sibling parent Yigal Chripun <yigal100 gmail.com> writes:
On 05/11/2009 23:24, Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message
 news:hcv5p9$2jh1$1 digitalmars.com...
 Based on Andrei's and Cardelli's ideas, I propose that Safe D be
 defined as the subset of D that guarantees no undefined behavior.
 Implementation defined behavior (such as varying pointer sizes) is
 still allowed.

 Safety seems more and more to be a characteristic of a function,
 rather than a module or command line switch. To that end, I propose
 two new attributes:

 @safe
 @trusted

Sounds great! The lower-grained safeness makes a lot of sense, and I'm thrilled at the idea of safe D finally encompassing more than just memory safety - I'd been hoping to see that happen ever since I first heard that "safeD" only meant memory-safe.

I can think of division by zero as an example. What others are out there? Andrei

Safe arithmetic like in C# that guards against overflows (throws on overflow).
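For illustration, guarded arithmetic of the kind Yigal mentions could be sketched in D as a library helper (hypothetical function, not an existing language feature; it mimics C#'s checked(a + b)):

```d
// Overflow-checked 32-bit addition: widen to 64 bits, then verify the
// result still fits in an int before narrowing back.
int checkedAdd(int a, int b)
{
    long r = cast(long) a + b;
    if (r < int.min || r > int.max)
        throw new Exception("integer overflow");
    return cast(int) r;
}
```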
Nov 05 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 17:49:33 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a @trusted function.

But how does such a trusted function guarantee that the invariant/shared reference has no other aliases? The point is, there is no way to write such a function in good faith, because you can't guarantee it's actually safe; it's still up to the user of the function. My understanding is that a @trusted function should be provably safe even if the compiler can't prove it.

-Steve
Nov 05 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 19:11:34 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 On Thu, 05 Nov 2009 17:49:33 -0500, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 Jason House wrote:
 I posted in the other thread how casting to immutable/shared can be
 just as bad. A leaked reference prior to casting to immutable/shared
 is in effect the same as casting away shared. No matter how you mix
 thread local and shared, or mutable and immutable, you still have the
 same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a @trusted function.

invariant/shared reference has no other aliases?

It doesn't. Trusted code is verified by the programmer, not the compiler.

OK, you totally ignored my point though. How do you write such a function? That is, I have a mutable reference x, I want to make it immutable. How do you write a function to do that? i.e.:

@safe void foo()
{
    x = new X();
    x.modifyState(5);
    immutable(X) ix = ???; // how to write this part
}

-Steve
Nov 05 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 19:44:55 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 That is, I have a mutable reference x, I want to make it immutable.   
 How do you write a function to do that?
  i.e.:
  @safe void foo()
 {
    x = new X();
    x.modifyState(5);
    immutable(X) ix = ???; // how to write this part
 }

If you, the writer of foo(), know that there are no other mutable references to x you can cast it to immutable - but you'll have to mark the function as trusted.

But what if I don't want the whole function to be @trusted, just that creation section? I have to create a new function just to create the data? Maybe function-level granularity isn't good enough...

-Steve
Nov 05 2009
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 21:17:08 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Steven Schveighoffer wrote:
 But what if I don't want the whole function to be @trusted, just that  
 creation section?  I have to create a new function just to create the  
 data?

Separate out the trusted code to a separate function.

Although that's a solution, it seems artificial to have to build a separate function just to allocate some immutable object. If unsafe code is truly going to be a rare occurrence, you might want to allow as fine-grained control as possible over where the compiler ignores safety. Having to create artificial boundaries that cause performance problems/code bloat doesn't seem right to me.

However, I'll let it go; I don't know the ramifications, since allocating immutable objects is a rare occurrence, and I'm not sure how it will be done. I am also not sure how solid a use case this is (allocating an object, then manipulating it via methods before changing it to immutable).

-Steve
Nov 05 2009
prev sibling next sibling parent reply Frank Benoit <keinfarbton googlemail.com> writes:
safe should be the default. The unsafe part should take the extra
typing, not the other way. Make the user prefer the safe way.
Nov 05 2009
next sibling parent reply grauzone <none example.net> writes:
Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.
Nov 05 2009
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
grauzone wrote:
 Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.

D is an unsafe language. C# is a safe language. Like that? :)
Nov 05 2009
parent reply grauzone <none example.net> writes:
Ary Borenszweig wrote:
 grauzone wrote:
 Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.

D is an unsafe language. C# is a safe language. Like that? :)

If you mean memory safety, then yes, and it will probably remain so for all practical uses (unless D gets implemented on a Java or .NET like VM).
Nov 05 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
grauzone wrote:
 Ary Borenszweig wrote:
 grauzone wrote:
 Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.

D is an unsafe language. C# is a safe language. Like that? :)

If you mean memory safety, then yes and will probably forever be for all practical uses (unless D gets implemented on a Java or .net like VM).

Oh how cool. So it turns out that SafeD can be 100% implemented on a safe VM. It's great to give a well-defined target to potential implementers. Andrei
Nov 05 2009
next sibling parent grauzone <none example.net> writes:
Andrei Alexandrescu wrote:
 grauzone wrote:
 Ary Borenszweig wrote:
 grauzone wrote:
 Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.

D is an unsafe language. C# is a safe language. Like that? :)

If you mean memory safety, then yes and will probably forever be for all practical uses (unless D gets implemented on a Java or .net like VM).

Oh how cool. So it turns out that SafeD can be 100% implemented on a safe VM. It's great to give a well-defined target to potential implementers.

I'm not sure. On Java you'd probably have trouble emulating ref params/returns, and on .NET you can use pointers anyway. By the way, I remember someone saying T[new] would be good for implementing arrays/slices on .NET, but it got dumped (I find this funny and absurd in this context).
 Andrei

Nov 05 2009
prev sibling parent Rainer Deyke <rainerd eldwood.com> writes:
Andrei Alexandrescu wrote:
 Oh how cool. So it turns out that SafeD can be 100% implemented on a
 safe VM.

This is also true of regular unsafe C. -- Rainer Deyke - rainerd eldwood.com
Nov 05 2009
prev sibling next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 05/11/2009 23:45, grauzone wrote:
 Ary Borenszweig wrote:
 grauzone wrote:
 Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.

D is an unsafe language. C# is a safe language. Like that? :)

If you mean memory safety, then yes and will probably forever be for all practical uses (unless D gets implemented on a Java or .net like VM).

C# does allow memory-unsafe code inside unsafe blocks, and there are alloca and malloca functions for allocating on the stack.

A VM is just an abstract (virtual) instruction set. You can design a safe native one or an unsafe virtual one; it's all a matter of design choices. There's nothing magical about a VM that makes it inherently safe.

IMO D should be safe by default and allow unsafe code when it is appropriately marked as such, regardless of a VM.

BTW, so-called native code on Intel processors runs in a VM as well: Intel's CISC instruction set is translated to RISC-like micro-ops, and those micro-ops are executed. The only difference is that this is done in hardware by the processor.
Nov 05 2009
next sibling parent Don <nospam nospam.com> writes:
Yigal Chripun wrote:

 BTW, so called native code on Intel processors runs in a VM as well.
 Intel's cisc instruction set is translated to a risc like micro-ops and 
 those micro-ops are executed. the only difference is that this is done 
 in hardware by the processor.

It's a bit meaningless to call that a VM. You can just as easily say that _every_ CPU ever made is a VM, since it's implemented with transistors. (Traditionally, CISC processors were implemented with microcode, BTW -- so there's nothing new). So the "virtual" becomes meaningless -- "virtual machine" just means "machine". The term "virtual machine" is useful to distinguish from the "physical machine" (the hardware). If there's a lower software level, you're on a virtual machine.
Nov 06 2009
prev sibling parent grauzone <none example.net> writes:
Yigal Chripun wrote:
 On 05/11/2009 23:45, grauzone wrote:
 Ary Borenszweig wrote:
 grauzone wrote:
 Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.

D is an unsafe language. C# is a safe language. Like that? :)

If you mean memory safety, then yes and will probably forever be for all practical uses (unless D gets implemented on a Java or .net like VM).

C# does allow memory unsafe code inside unsafe blocks. There's an alloca and malloca functions for allocating on the stack. VM is just an abstract (virtual) instruction set. You can design a safe native one or an unsafe virtual one. it's all a matter of design choices. there's nothing magical about a VM that makes it inherently safe.

Yes, but most VMs are designed to be memory safe for some reason, and I trust an instruction set designed to be memory safe more than having an additional "safety" feature tucked onto a complex language like D, and its half-assed implementation in dmd.
 IMO D should be safe by default and allow unsafe code when it is 
 appropriately marked as such, regardless of a VM.

I think most D code will have to be somewhat "unsafe" to be efficient, or to do stuff like binding to C libs. I can already see how code will be scattered with "@safe", "@trusted", etc., making that whole "safety" promise both a joke (theoretically) and a major PITA for the programmer (practically). But it's pointless to discuss this, because SafeD is not here yet.
 BTW, so called native code on Intel processors runs in a VM as well.
 Intel's cisc instruction set is translated to a risc like micro-ops and 
 those micro-ops are executed. the only difference is that this is done 
 in hardware by the processor.
 

If you water down the word "VM" that much, it doesn't mean anything anymore.
Nov 06 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 If you mean memory safety, then yes and will probably forever be for all 
 practical uses (unless D gets implemented on a Java or .net like VM).

A VM is neither necessary nor sufficient to make a language memory safe. It's all in the semantics of the language.
Nov 07 2009
next sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 grauzone wrote:
 If you mean memory safety, then yes and will probably forever be for 
 all practical uses (unless D gets implemented on a Java or .net like VM).

A VM is neither necessary nor sufficient to make a language memory safe. It's all in the semantics of the language.

In practice, the big disadvantage which D has is that it can make calls to C libraries which are not necessarily memory safe -- and this is an important feature of the language. Dealing with the external, uncheckable libraries is always going to be a weak point. Both Java and .net have mitigated this by rewriting a fair chunk of an OS in their libraries. That's probably never going to happen for D.
Nov 07 2009
next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 07/11/2009 11:53, Don wrote:
 Walter Bright wrote:
 grauzone wrote:
 If you mean memory safety, then yes and will probably forever be for
 all practical uses (unless D gets implemented on a Java or .net like
 VM).

A VM is neither necessary nor sufficient to make a language memory safe. It's all in the semantics of the language.

In practice, the big disadvantage which D has is that it can make calls to C libraries which are not necessarily memory safe -- and this is an important feature of the language. Dealing with the external, uncheckable libraries is always going to be a weak point. Both Java and .net have mitigated this by rewriting a fair chunk of an OS in their libraries. That's probably never going to happen for D.

Sun pretty much implemented a full OS inside the JVM. At least their RT offering contains a scheduler in order to provide guarantees regarding collection time.

In .NET land, MS uses .NET to implement parts of their OS, so no surprise there that those OS APIs are available to .NET code. I wouldn't say that it's part of their libraries but rather part of the OS itself.

What parts of the OS are still missing in D's standard library? Don't Tango/Phobos already provide all the common parts like I/O and networking, and don't a few other major libs provide bindings/implementations for UI, 3D & multimedia, DB bindings, etc.?

I think that the big disadvantage you claim D has isn't that big, and it is well on the way to going away compared to .NET/Java. Both Java and .NET also provide ways to use unsafe C code (e.g. JNI, COM); it's just a matter of what's the default, what's easier to do, and what can be done without choosing the unsafe option. I think that D isn't that far behind and could and should catch up.
Nov 07 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
Yigal Chripun wrote:
 In .Net land, MS uses .net to implement parts of their OS so no surprise 
 there that those OS APIs are available to .net code.

Really? What parts? There are a bajillion APIs that you can use from .NET that aren't written in .NET. Microsoft just made it easier to use native code from .NET than Java does.
Nov 07 2009
parent Yigal Chripun <yigal100 gmail.com> writes:
Christopher Wright wrote:
 Yigal Chripun wrote:
 In .Net land, MS uses .net to implement parts of their OS so no 
 surprise there that those OS APIs are available to .net code.

Really? What parts? There are a bajillion APIs that you can use from .NET that aren't written in .NET. Microsoft just made it easier to use native code from .NET than Java does.

WPF, for one. Yes, it uses an unmanaged low-level engine called MIL to improve performance and interoperability, but the windowing APIs themselves are .NET-only, and it's not just wrappers: it contains over 3000 classes according to MSDN.

Of course there are a bajillion non-.NET APIs that are accessible from .NET. That's because MS has backward-compatibility support going back to the DOS era. New technology, however, is done in .NET.
Nov 07 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Don wrote:
 In practice, the big disadvantage which D has is that it can make calls 
 to C libraries which are not necessarily memory safe -- and this is an 
 important feature of the language. Dealing with the external, 
 uncheckable libraries is always going to be a weak point. Both Java and 
 .net have mitigated this by rewriting a fair chunk of an OS in their 
 libraries. That's probably never going to happen for D.

Java has the JNI interface, through which one can execute arbitrary C code. Obviously, that isn't memory safe, either.

Some of the standard C library functions are safe, some of them aren't. We'll mark them appropriately in the std.c.* headers.

I expect there will be a lot of pressure for 3rd-party D libraries to be marked as safe, so I think this problem will sort itself out over time.
Nov 07 2009
prev sibling parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 grauzone wrote:
 If you mean memory safety, then yes and will probably forever be for 
 all practical uses (unless D gets implemented on a Java or .net like VM).

A VM is neither necessary nor sufficient to make a language memory safe. It's all in the semantics of the language.

Yes, but VM bytecode is a much smaller language than D, which makes it far easier to verify for safety. In practice, SafeD will gradually become actually safe as people use it, see it break, and you fix the bugs. That's why I said for "all practical uses".
Nov 07 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 grauzone wrote:
 If you mean memory safety, then yes and will probably forever be for 
 all practical uses (unless D gets implemented on a Java or .net like 
 VM).

A VM is neither necessary nor sufficient to make a language memory safe. It's all in the semantics of the language.

Yes, but VM bytecode is a much smaller language than D, which makes it far easier to verify for safety. In practice, SafeD will gradually become actually safe as people use it, see it break, and you fix the bugs. That's why I said for "all practical uses".

The Java VM didn't start out as completely safe, either; as people found holes, they were fixed.
Nov 07 2009
parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 grauzone wrote:
 Walter Bright wrote:
 grauzone wrote:
 If you mean memory safety, then yes and will probably forever be for 
 all practical uses (unless D gets implemented on a Java or .net like 
 VM).

A VM is neither necessary nor sufficient to make a language memory safe. It's all in the semantics of the language.

Yes, but VM bytecode is a much smaller language than D, which makes it far easier to verify for safety. In practice, SafeD will gradually become actually safe as people use it, see it break, and you fix the bugs. That's why I said for "all practical uses".

The Java VM didn't start out as completely safe, either, as people found the holes they were fixed.

Because the bytecode language is much smaller than a high-level language like D, it's easier for Java. Also, Java was planned to be safe right from the beginning, while SafeD is a rather unnatural feature added on top of a complex existing language. To make it safe, you need to forbid a set of features, which inconveniences the programmer and will possibly reduce code efficiency. I'm not even opposed to the idea of SafeD; I'm just worried that forcing all D code to adhere to SafeD by default will cause more trouble than gain.
Nov 07 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Because the bytecode language is much smaller than a high level language 
 like D, it's easier for Java.

I don't agree that has anything to do with it. The VM is compiled down to the same old CPU instructions that D is compiled to. What matters is the semantics.
 Also, Java was planned to be safe right 
 from the beginning, while SafeD is a rather unnatural feature added on 
 the top of a complex existing language. To make it safe, you need to 
 forbid a set of features, which inconveniences the programmer and will 
 possibly reduce code efficiency. I'm not even opposed to the idea of 
 SafeD, I'm just worrying that forcing all D code to adhere to SafeD by 
 default will cause more trouble than gain.

Only time will tell, of course, but D has a lot of inherently safe constructs (such as length-delimited arrays) that obviate most of the need for manipulating pointers. C++ users have also discovered that if they stick to writing in certain ways and using the STL, their programs are memory safe. The problem with C++ is, once again, this is by convention and is not checkable by the compiler.
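As an illustration of the point about length-delimited arrays (a sketch added here, not from the original post): the same loop that would need a raw pointer plus a separate length in C stays checkable in D, because the slice carries its own length:

```d
// D slices know their length, so indexing is bounds-checked and no
// pointer arithmetic is needed; an out-of-range index throws instead
// of silently corrupting memory.
int sum(int[] a)
{
    int total = 0;
    foreach (i; 0 .. a.length)
        total += a[i];
    return total;
}
```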
Nov 07 2009
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
grauzone wrote:
 Walter Bright wrote:
 grauzone wrote:
 Walter Bright wrote:
 grauzone wrote:
 If you mean memory safety, then yes and will probably forever be 
 for all practical uses (unless D gets implemented on a Java or .net 
 like VM).

A VM is neither necessary nor sufficient to make a language memory safe. It's all in the semantics of the language.

Yes, but VM bytecode is a much smaller language than D, which makes it far easier to verify for safety. In practice, SafeD will gradually become actually safe as people use it, see it break, and you fix the bugs. That's why I said for "all practical uses".

The Java VM didn't start out as completely safe, either, as people found the holes they were fixed.

Because the bytecode language is much smaller than a high level language like D, it's easier for Java. Also, Java was planned to be safe right from the beginning, while SafeD is a rather unnatural feature added on the top of a complex existing language. To make it safe, you need to forbid a set of features, which inconveniences the programmer and will possibly reduce code efficiency. I'm not even opposed to the idea of SafeD, I'm just worrying that forcing all D code to adhere to SafeD by default will cause more trouble than gain.

On the other hand, Java has had a much larger ambition, i.e. executing untrusted code in a sandbox, so that balances things a bit. I may as well be wrong, but my intuition is that there are no insurmountable holes that would make D unusable for safe programs.

I can clearly see the exact reasons why C++ cannot have a reasonably well-defined safe subset: you can't do anything of significance in C++ without using pointers and pointer arithmetic. (That could be mitigated by a library.)

Anyhow, here are a few elements that I think contribute to D's ability to approach memory safety:

* garbage collection
* built-in arrays
* reference semantics for classes
* pass-by-reference (ref)
* safe approach to variadic functions

Without some or all of these, a safe subset of D would be more difficult to define and less expressive.

Andrei
Nov 07 2009
prev sibling parent Frank Benoit <keinfarbton googlemail.com> writes:
grauzone schrieb:
 Frank Benoit wrote:
 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

No. D is not C#.

As I understand the philosophy of D, it should be easy to write correct/good code, but it shall also be possible to do dirty things. This is exactly implying that "safe" shall be the default.
Nov 05 2009
prev sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 16:13:01 -0500, Frank Benoit  
<keinfarbton googlemail.com> wrote:

 safe should be the default. The unsafe part should take the extra
 typing, not the other way. Make the user prefer the safe way.

Let's find out how painful/painless "safe" is before making that assertion :) I suspect that the rules for SafeD will be too strict for lots of provably safe code.

-Steve
Nov 05 2009
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 05 Nov 2009 15:19:46 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Steven Schveighoffer wrote:
 Care to define some rules for "undefined behavior?"

My list may be of help.

Thanks, I found it.

I note that you specify "No escape of a pointer or reference to a local variable outside its scope". There are definitely different degrees of detecting this. From reading some of your other posts, I take it you mean:

* you cannot pass such a pointer to another function except by reference
* you cannot return such a reference or pointer
* you cannot take the address of a 'ref' parameter, because that parameter could be allocated on the stack

Without doing full escape analysis, there are some problems with this. For example, let's take the function std.string.split. It takes a reference to data and returns a reference to that same data. However, the compiler is unaware of where the data being returned comes from. For example:

char[] getFirstWordOfFile()
{
    char[1024] buf;
    auto x = buf[0 .. readFile("foo.d", buf)];
    return split(x)[0]; // memory escape
}

I'm not a phobos guy, so I don't know exactly how to do readFile, but I think we all know what it means. readFile will obviously be marked as @trusted, since it does not escape any memory, but calls a (potentially) unsafe C function (read).

But what about split? Should it be illegal to pass in the reference to the stack memory? Should it be illegal to mark the split function as @safe? How does SafeD prevent this mistake?

My point is: without full analysis, the compiler cannot connect the outputs of a function with its inputs, so only functions which heap-allocate defensively, or don't use references, can be marked as @safe. Because safety is sometimes contextual, it will be impossible to use the all-or-nothing @safe marker on many functions (such as split).

I'm still not sure how to solve this, or whether it will have a large impact on how SafeD works.

-Steve
Nov 05 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  5 de noviembre a las 12:12 me escribiste:
 Steven Schveighoffer wrote:
If unsafe means you cannot pass pointers to local variables, then
half of tango (and other performance oriented libs which use stack
allocation as much as possible) will fail to compile.

My vote is for unsafe as the default.  It's the least intrusive
option, to ensure that current projects still compile.  Then let
the project authors ensure their projects are safe one
module/function at a time.

I agree. Also, dealing with safeness is something that comes later on as a project scales to a larger size. As such, it's more of a nuisance on a small program than a help.
Also keep in mind that  safe annotations for a mostly safe project
will be once at the top of each module.  They won't be
"everywhere".

Right. Adding "@safe:" at the top will do it.

Being so easy to mark a whole file unsafe, I think safe as default is a saner choice. It adds an interesting property from Cardelli's definition: no untrapped errors. People by default will be warned about any unsafe behaviour; if you really want unsafe, just say so.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
<Damian_Des> Me anDa MaL eL CaPSLoCK
Nov 05 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Adam D. Ruppe, el  5 de noviembre a las 16:40 me escribiste:
 With safe by default, you'd probably make existing code compile just by
slapping @trusted: at the top and being done with it. That's not actually
 safe - you're just telling the compiler to shut up about it.

I don't see this problem going away just by making unsafe the default. By the same argument, one could think that people will not use @safe at all, and that's it. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Y2K - what a disappointment... i had at least expected one nuclear plant to blow
Nov 05 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Michel Fortin, el  5 de noviembre a las 19:43 me escribiste:
 On 2009-11-05 19:14:47 -0500, Walter Bright <newshound1 digitalmars.com> said:
 
Andrei Alexandrescu wrote:
Are we in agreement that @safe functions have bounds checking on
regardless of -release?

You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties. If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.

But if you remove bound checking, it isn't safe anymore, is it?

100% safe doesn't exist. If you think you have it because of bound-checking, you are wrong.
 Sometime safety is more important than performance. If I needed
 performance in a safe program, I'd profile and find the bottlenecks,
 review carefully those parts of the code slowing down the program,
 then when I trust them perfectly I'd add the  trusted attribute.
  trusted should remove bound checks (in release mode).  safe should
 keep them to keep other less trustworthy pieces of of the program
 truly safe.

What if I'm using an external library that I don't control? *That's* the problem for me, I want to be able to compile things I *trust* as if they were *trusted* :)
 That said, I'd be in favor of a compiler switch to enable/disable
 runtime checks in release mode... perhaps "-safe" could return as
 way to generate truly safe binaries even in release mode. This would
 also make it pretty easy to evaluate how much impact those runtime
 checks have on final executable (by turning on and off the compiler
 switch).

I vote for an -unsafe (and/or -disable-bound-check). Safe should be the default. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- Lo último que hay que pensar es que se desalinea la memoria Hay que priorizar como causa la idiotez propia Ya lo tengo asumido -- Pablete, filósofo contemporáneo desconocido
Nov 05 2009
prev sibling next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  5 de noviembre a las 19:10 me escribiste:
 Walter Bright wrote:
Andrei Alexandrescu wrote:
Walter Bright wrote:
Jason House wrote:
I posted in the other thread how casting to immutable/shared can be
just as bad. A leaked reference prior to casting to immutable/shared
is in effect the same as casting away shared. No matter how you mix
thread local and shared, or mutable and immutable, you still have the
same undefined behavior

Not undefined, it's just that the compiler can't prove it's defined behavior. Hence, such code would go into a @trusted function.

Are we in agreement that @safe functions have bounds checking on regardless of -release?

You're right from a theoretical perspective, but not from a practical one. People ought to be able to flip on 'safe' without large performance penalties. If it came with inescapable large performance penalties, then it'll get a bad rap and people will be reluctant to use it, defeating its purpose.

This is a showstopper. What kind of reputation do you think D would achieve if "safe" code has buffer overrun attacks?

If you compiled it with the -unsafe (or -disable-bound-check) flag, I think there should be no impact on the reputation. It's the *user's*/*maintainer's* (whoever compiles the code) choice whether to assume the risks.
 A function that wants to rely on hand-made verification in lieu of
bounds checks may go with @trusted. There is absolutely no way a
 @safe function could allow buffer overruns in D, ever.

Again, the problem is with code you don't control. I want to be able to turn bounds checking off (and any other runtime safety, but not compile-time safety) without modifying other people's code. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ ---------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------- 22% of the time a pizza will arrive faster than an ambulance in Great-Britain
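The trade-off being argued over here might look like the following sketch (hypothetical semantics: the function names are made up, and whether @trusted actually drops bounds checks in release mode is one of the open questions in this subthread, not settled behavior):

```d
// Sketch only: a reviewed hot spot opts out of bounds checks via @trusted,
// while @safe code keeps them unconditionally.
@trusted int sumUnchecked(int[] a)
{
    int s = 0;
    foreach (i; 0 .. a.length)
        s += a.ptr[i];   // unchecked access; justified by the loop bound i < a.length
    return s;
}

@safe int sumChecked(int[] a)
{
    int s = 0;
    foreach (i; 0 .. a.length)
        s += a[i];       // bounds-checked in @safe code, even with -release
    return s;
}
```

The remaining question is who gets to make that call for library code you compile but did not write.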
Nov 05 2009
prev sibling next sibling parent reply Don <nospam nospam.com> writes:
Walter Bright wrote:
 Following the safe D discussions, I've had a bit of a change of mind. 
 Time for a new strawman.
 
 Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined 
 as the subset of D that guarantees no undefined behavior. Implementation 
 defined behavior (such as varying pointer sizes) is still allowed.
 
 Memory safety is a subset of this. Undefined behavior nicely covers 
 things like casting away const and shared.
 
 Safety has a lot in common with function purity, which is set by an 
 attribute and verified by the compiler. Purity is a subset of safety.
 
 Safety seems more and more to be a characteristic of a function, rather 
 than a module or command line switch. To that end, I propose two new 
 attributes:
 
  @safe
  @trusted
 
 A function marked as @safe cannot use any construct that could result in 
 undefined behavior. An @safe function can only call other @safe 
 functions or @trusted functions.
 
 A function marked as @trusted is assumed to be safe by the compiler, but 
 is not checked. It can call any function.

 
 Functions not marked as @safe or @trusted can call any function.
 
 To mark an entire module as safe, add the line:
 
     @safe:
 
 after the module statement. Ditto for marking the whole module as 
 @trusted. An entire application can be checked for safety by making 
 main() @safe:
 
      @safe int main() { ... }
 
 This proposal eliminates the need for command line switches, and 
 versioning based on safety.

I think it's important to also have @unsafe. These are functions which are unsafe, and not @trusted. Like free(), for example. They're usually easy to identify, and should be small in number. They should only be callable from @trusted functions. That's different from unmarked functions, which generally just haven't been checked for safety. I want to be able to find the cases where I'm calling those guys, without having to mark every function in the program with an @safe attribute.
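This fourth state would make the call rules explicit. A sketch of the resulting matrix (the @unsafe attribute is hypothetical; only @safe and @trusted are in Walter's proposal, and the function names are made up):

```d
// Sketch only: @unsafe is the proposed addition, not part of the strawman.
@unsafe void rawFree(void* p);    // known-dangerous primitive, e.g. wraps C's free()

@trusted void releaseBuffer(void* p)
{
    rawFree(p);        // allowed: @trusted code may call @unsafe code
}

@safe void user(void* p)
{
    releaseBuffer(p);  // allowed: @safe may call @trusted
    // rawFree(p);     // error under this scheme: @safe cannot call @unsafe
}
```

Unmarked functions would stay callable from unmarked code, so existing programs keep compiling; the new attribute only fences off the small set of known-dangerous primitives.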
Nov 06 2009
next sibling parent Bill Baxter <wbaxter gmail.com> writes:
On Fri, Nov 6, 2009 at 12:48 AM, Don <nospam nospam.com> wrote:
 Walter Bright wrote:
 Following the safe D discussions, I've had a bit of a change of mind. Time
 for a new strawman.

 Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined
 as the subset of D that guarantees no undefined behavior. Implementation
 defined behavior (such as varying pointer sizes) is still allowed.

 Memory safety is a subset of this. Undefined behavior nicely covers things
 like casting away const and shared.

 Safety has a lot in common with function purity, which is set by an
 attribute and verified by the compiler. Purity is a subset of safety.

 Safety seems more and more to be a characteristic of a function, rather
 than a module or command line switch. To that end, I propose two new
 attributes:

  @safe
  @trusted

 A function marked as @safe cannot use any construct that could result in
 undefined behavior. An @safe function can only call other @safe functions
 or @trusted functions.

 A function marked as @trusted is assumed to be safe by the compiler, but
 is not checked. It can call any function.

 Functions not marked as @safe or @trusted can call any function.

 To mark an entire module as safe, add the line:

    @safe:

 after the module statement. Ditto for marking the whole module as
 @trusted. An entire application can be checked for safety by making main()
 @safe:

     @safe int main() { ... }

 This proposal eliminates the need for command line switches, and
 versioning based on safety.

 I think it's important to also have @unsafe. These are functions which are
 unsafe, and not @trusted. Like free(), for example. They're usually easy to
 identify, and should be small in number.
 They should only be callable from @trusted functions.
 That's different from unmarked functions, which generally just haven't been
 checked for safety.
 I want to be able to find the cases where I'm calling those guys, without
 having to mark every function in the program with an @safe attribute.

Agreed. Having @safe but no @unsafe is like having "private:" with no "public:". --bb
Nov 06 2009
prev sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Don (nospam nospam.com)'s article
 Walter Bright wrote:
 Following the safe D discussions, I've had a bit of a change of mind.
 Time for a new strawman.

 Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined
 as the subset of D that guarantees no undefined behavior. Implementation
 defined behavior (such as varying pointer sizes) is still allowed.

 Memory safety is a subset of this. Undefined behavior nicely covers
 things like casting away const and shared.

 Safety has a lot in common with function purity, which is set by an
 attribute and verified by the compiler. Purity is a subset of safety.

 Safety seems more and more to be a characteristic of a function, rather
 than a module or command line switch. To that end, I propose two new
 attributes:

  @safe
  @trusted
 
 A function marked as @safe cannot use any construct that could result in
 undefined behavior. An @safe function can only call other @safe
 functions or @trusted functions.
 
 A function marked as @trusted is assumed to be safe by the compiler, but
 is not checked. It can call any function.
 
 Functions not marked as @safe or @trusted can call any function.
 
 To mark an entire module as safe, add the line:
 
     @safe:
 
 after the module statement. Ditto for marking the whole module as
 @trusted. An entire application can be checked for safety by making
 main() @safe:
 
      @safe int main() { ... }

 This proposal eliminates the need for command line switches, and
 versioning based on safety.

I think it's important to also have @unsafe. These are functions which are unsafe, and not @trusted. Like free(), for example. They're usually easy to identify, and should be small in number. They should only be callable from @trusted functions. That's different from unmarked functions, which generally just haven't been checked for safety. I want to be able to find the cases where I'm calling those guys, without having to mark every function in the program with an @safe attribute.

Can't you just mark every *module* in the program with the @safe attribute?
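Marking a whole module is a one-line change under the proposal; a sketch (the module name and function are made up for illustration):

```d
// Sketch only: module-wide @safe as described in Walter's strawman.
module mylib.util;

@safe:   // every declaration below is now checked as safe

int clamp(int x, int lo, int hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}
```

This answers the objection only partially: it finds calls to unchecked code, but forces the annotation onto every module first.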
Nov 06 2009
prev sibling parent reply Knud Soerensen <4tuu4k002 sneakemail.com> writes:
Instead of just defining @safe and @trusted,
it should be possible to define these kinds of code annotations and 
constraints in D.

See Red Code/Green Code - Generalizing Const by Scott Meyers
http://video.google.com/videoplay?docid=-4728145737208991310#

Then we can define @safe, @pure, @thread_safe, @exception_safe, @gpl, 
@lgpl, @beautiful and @ugly code, or any constraints we like.

It would also be nice if we could annotate code with @debug
and then it would augment the code with debugging code.


Walter Bright wrote:
 Following the safe D discussions, I've had a bit of a change of mind. 
 Time for a new strawman.
 
 Based on Andrei's and Cardelli's ideas, I propose that Safe D be defined 
 as the subset of D that guarantees no undefined behavior. Implementation 
 defined behavior (such as varying pointer sizes) is still allowed.
 
 Memory safety is a subset of this. Undefined behavior nicely covers 
 things like casting away const and shared.
 
 Safety has a lot in common with function purity, which is set by an 
 attribute and verified by the compiler. Purity is a subset of safety.
 
 Safety seems more and more to be a characteristic of a function, rather 
 than a module or command line switch. To that end, I propose two new 
 attributes:
 
  @safe
  @trusted
 
 A function marked as @safe cannot use any construct that could result in 
 undefined behavior. An @safe function can only call other @safe 
 functions or @trusted functions.
 
 A function marked as @trusted is assumed to be safe by the compiler, but 
 is not checked. It can call any function.
 
 Functions not marked as @safe or @trusted can call any function.
 
 To mark an entire module as safe, add the line:
 
     @safe:
 
 after the module statement. Ditto for marking the whole module as 
 @trusted. An entire application can be checked for safety by making 
 main() @safe:
 
      @safe int main() { ... }
 
 This proposal eliminates the need for command line switches, and 
 versioning based on safety.

-- Join me on CrowdNews http://crowdnews.eu/users/addGuide/42/ Facebook http://www.facebook.com/profile.php?id=1198821880 Linkedin http://www.linkedin.com/pub/0/117/a54 Mandala http://www.mandala.dk/view-profile.php4?profileID=7660
Nov 06 2009
parent Sclytrack <Sclytrack idiot.com> writes:
== Quote from Knud Soerensen (4tuu4k002 sneakemail.com)'s article
 Instead of just defining @safe and @trusted,
 it should be possible to define these kinds of code annotations and
 constraints in D.
 See Red Code/Green Code - Generalizing Const by Scott Meyers
 http://video.google.com/videoplay?docid=-4728145737208991310#
 Then we can define @safe, @pure, @thread_safe, @exception_safe, @gpl,
 @lgpl, @beautiful and @ugly code, or any constraints we like.
 It would also be nice if we could annotate code with @debug
 and then it would augment the code with debugging code.
 Walter Bright wrote:
 Following the safe D discussions, I've had a bit of a change of mind.
 Time for a new strawman.


I'll watch that video tomorrow (or not, it is a bit long). :-)

attrib(nogc) void handleSituation1()
{
    int* m = casting malloc(20);
}

attrib(nogc) void handleSituation1() requires(nogc)
{
    handleSituation2();
}

void helloWorld()
{
    requires(nogc)
    {
        handleSituation2();
    }
}

attrib(validatedBy("Tom hank")) void doStuff3() requires(validatedBy)
{
    callThis();
    callThat();
}

attrib(trusted) void handleSituation() requires(nogc) permit(unsafe)
{
}

void handleSituation() permit(unsafe)
{
}

void handleSituation()
{
    ...
    permit(unsafe)
    {
    }
}

---- mutable isolation = mutiso

requires(pure) class BoeClass
{
private:
    int number;
public:
    prop int Number
    { return number; }
    { number = value; }

    int dupsy()
    {
        return number + 1;
    }
}

requires(pure) int doStuff(int a)
{
    BoeClass jim;
}

--------

void doStuff()                            //attrib(safe) requires(safe)
attrib(safe) void doStuff() requires(safe) //default
void doStuff() permit(!safe)              //loses the safe attribute
requires(safe) void doStuff()             //enforces and attributes it
requires(nogc) void doStuff()             //enforces and attributes it
void doStuff() requires(nogc)             //enforces but does not attribute it
attrib(validated) doStuff() permit(!safe) //validated by the programmer using unsafe code
attrib(default) void doStuff() requires( default - [safe] )

Okay I'm going nuts again.

-----------------

Okay, for, let's say, "properties" that are meant to be serialized (by which I mean "actual data"), could we start them with a capital letter? This would tell other programmers which ones to pick. Bad idea?

struct Area
{
    int Width()    //Big letters
    {
        return width;
    }
    int Height()   //Big letters
    {
        return height;
    }
    int area()     //small letters
    {
        return width * height;
    }
}
Nov 06 2009