
digitalmars.D - From Ada 2012

"bearophile" <bearophileHUGS lycos.com> writes:
Ada shares many purposes with D: correctness from the language 
design too, mostly imperative, native compilation, efficiency of 
the binary, closeness to the metal (even more, because not 
requiring a GC, it's probably usable in more situations), generic 
programming, OOP, strong static typing, and both languages share 
many small features (like array slicing syntax, and so on).

Coding in Ada is a bit boring, because you have to specify every 
small detail and to write a lot, but for certain programming 
tasks, like code that can't have too many bugs, it's maybe the 
best language. As Ada vendors say, if your life depends on a 
program, you often prefer that code to be written in good Ada 
instead of good C. Even if writing Ada is slower than writing C 
or C++, you save some time later debugging less. Today for 
certain tasks Haskell seems to produce reliable code, but it uses 
a GC and it's lazy, so it's quite less strict compared to Ada.

I've found another pack of slides about the Ada 2012 language:
http://www.slideshare.net/AdaCore/ada-2012

Some quotations and comments from/about the slides:

[page 3] >In High-Reliable Software, choice is usually being made 
between C C++ Java Ada<

Maybe someday D too will be among those.


[p.6] >Applying these skills to any programming language should 
be easy for any developer<

Good luck with programming in Haskell :-)


[p.7] >Is the language properly supported by tools?<

Right, some bugs are avoided thanks to the supporting tools too.


[p.8] >Can the language [cover the] full development cycle 
(specification/code/verification)?<

I don't know, regarding D.


[p.15] >Put as much (formal) information as possible in the code<

This is quite important for a language that wants to enforce more 
correctness.


[p.17] >Values are checked at run-time (can be deactivated)<

This often saves the programmer's ass.


[p.19] >Arrays can be indexed by any discrete types (integers, 
enumeration)<

This is quite handy for enums (and sometimes chars), and 
reliable. Currently in D if you define an array with enum index 
you get an associative array, that is wasteful in both memory and 
performance for most enums that have contiguous values (but I 
think maybe D implementations will be free to use a more 
efficient array here, because the interface of AAs is opaque).
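As a quick illustration of the behavior described above (a hedged sketch, not from the thread; `Color` is an invented name):

```d
enum Color { red, green, blue }

void main() {
    int[Color] counts;            // enum index today: an associative array
    counts[Color.red] = 1;

    // a dense alternative that exploits the contiguous enum values:
    int[Color.max + 1] dense;
    dense[Color.blue] = 2;

    assert(counts[Color.red] == 1);
    assert(dense[Color.blue] == 2);
    assert(dense.sizeof == int.sizeof * 3);  // plain static array, no hashing
}
```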


[p.21]
 Three parameter modes : in (input), out (output) and in-out 
 (input/output)

 procedure Do_Something
   (P1 : in     Integer; --  P1 can’t be changed
    P2 : out    Integer; --  No initial value on P2

In D P2 is initialized...

[p.21]
 The compiler decides if it has to be passed by reference or copy
 
 procedure Do_Something
   (P1 : in Huge_Structure) --  Passed by reference if too big

D offers more low-level knowledge/control here: it doesn't decide to 
pass by value or by reference, leaving the decision to the programmer. 
I prefer D here. But in D code like this, where a large value is 
passed, I'd like the D compiler to give a warning (despite once in a 
while that's exactly what you want?):

alias int[1_000] TA;
void foo(TA a) {}

[p.22]
 Generalized contracts are available through pre and 
 post-conditions
 
 procedure P (V : in out Integer)
    with Pre  => V >= 10,
         Post => V'Old /= V;
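For comparison, a rough sketch of the same contract with D's built-in in/out blocks, in current D syntax; D has no `'Old`, so the prestate must be captured by hand:

```d
void p(ref int v)
in { assert(v >= 10); }            // like Ada's Pre => V >= 10
do {
    immutable oldV = v;            // manual prestate, since D lacks 'Old
    scope(exit) assert(v != oldV); // emulates Post => V'Old /= V
    v += 1;
}

void main() {
    int x = 10;
    p(x);
    assert(x == 11);
}
```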

So there's the Old (prestate) too in the Ada 2012 built-in contract 
programming.

[p.30]
 * Pointers are typed, associated with accessibility checks
 * Objects that can be pointed are explicitly identifed
 * Pointers constraints can be specified
 – Is null value expected?
 – Is the pointer constant?
 – Is the object pointed by the pointer constant?

Ada has several kinds of pointers, according to how much freedom they 
have.

[p.36]
 Ada 2012 detects "obvious" aliasing problems
 
 function Change (X, Y : in out Integer) return Integer is
    begin
       X := X * 2;
       Y := Y * 4;
 
       return X + Y;
    end;
 
    One, Two : Integer := 1;
 
 begin
 
    Two := Change (One, One);
    -- warning: writable actual for "X" overlaps with actual for 
 "Y"
 
    Two := Change (One, Two) + Change (One, Two);
    --  warning: result may differ if evaluated after other 
 actual in expression

Are such warnings/tests useful in D too? D compiles this with no 
warnings/errors:

int change(ref int x, ref int y) {
    x *= 2;
    y *= 4;
    return x + y;
}

void main() {
    int one = 1, two;
    two = change(one, one);
    two = change(one, two) + change(one, two);
}

[p.42]
 if C in 'a' | 'e' | 'i'
       | 'o' | 'u' | 'y' then
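A hedged D counterpart of that membership test, using Phobos's canFind (the helper name is invented):

```d
import std.algorithm : canFind;

bool isVowelish(dchar c) {
    // true when c is one of the characters in the set
    return "aeiouy"d.canFind(c);
}

void main() {
    assert(isVowelish('e'));
    assert(!isVowelish('z'));
}
```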

Sometimes Ada 2012 is succinct too.

[p.43]
 Function implementation can be directly given at specification 
 time if it represents only an "expression"
 
 function Even (V : Integer) return Boolean
    is (V mod 2 = 0);
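In D the closest current equivalents are a one-line function body or the lambda shorthand, sketched here:

```d
// ordinary function: the body must still be written out in full
bool even(int v) { return v % 2 == 0; }

void main() {
    // lambda shorthand, the nearest D gets to Ada's expression functions
    auto evenL = (int v) => v % 2 == 0;
    assert(even(4) && evenL(4));
    assert(!even(3) && !evenL(3));
}
```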

It's related to:
http://d.puremagic.com/issues/show_bug.cgi?id=7176

Bye,
bearophile
May 03 2012
Artur Skawina <art.08.09 gmail.com> writes:
On 05/03/12 16:04, bearophile wrote:
 [p.19] >Arrays can be indexed by any discrete types (integers, 
 enumeration)<
 
 This is quite handy for enums (and sometimes chars), and reliable. 
 Currently in D if you define an array with enum index you get an 
 associative array, that is wasteful in both memory and performance 
 for most enums that have contiguous values (but I think maybe D 
 implementations will be free to use a more efficient array here, 
 because the interface of AAs is opaque).

It's a reasonable default; D will let you choose how it's done.

struct EA(T, I) {
    enum size_t FIRST = I.min, N = I.max - FIRST + 1;
    T[N] a;
    auto opIndex(I i) { return a[i - FIRST]; }
    auto opIndexAssign(T v, I i) { return a[i - FIRST] = v; }
    // etc for any other useful operator
}

enum E { one = 10, two, three }

EA!(int, E) a;
a[E.three] = 42;
assert(a[E.three] == 42);
assert(a.sizeof == int.sizeof * 3);

It doesn't get much more efficient than this; the compiler will take 
care of the rest, both the type checking and optimizing it all away. 
Maybe something like this should be in the std lib, but I'm not sure 
it's very useful in its raw form; usually you'll want a custom 
version, so it would be more suited for a mixin-template library.

artur
May 03 2012
"Kagamin" <spam here.lot> writes:
On Thursday, 3 May 2012 at 14:04:41 UTC, bearophile wrote:
 [p.21]
 The compiler decides if it has to be passed by reference or copy
 
 procedure Do_Something
  (P1 : in Huge_Structure) --  Passed by reference if too big

D offers more low-level knowledge/control here: it doesn't decide to 
pass by value or by reference, leaving the decision to the programmer. 
I prefer D here. But in D code like this, where a large value is 
passed, I'd like the D compiler to give a warning (despite once in a 
while that's exactly what you want?):

alias int[1_000] TA;
void foo(TA a) {}

I was surprised a little when compiler rejected `ref in`.
May 03 2012
Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 03-05-2012 19:36, Steven Schveighoffer wrote:
 On Thu, 03 May 2012 13:03:34 -0400, Kagamin <spam here.lot> wrote:

 On Thursday, 3 May 2012 at 14:04:41 UTC, bearophile wrote:
 [p.21]
 The compiler decides if it has to be passed by reference or copy
 procedure Do_Something
 (P1 : in Huge_Structure) --  Passed by reference if too big

D offers more low-level knowledge/control here: it doesn't decide to 
pass by value or by reference, leaving the decision to the programmer. 
I prefer D here. But in D code like this, where a large value is 
passed, I'd like the D compiler to give a warning (despite once in a 
while that's exactly what you want?):

alias int[1_000] TA;
void foo(TA a) {}

I was surprised a little when compiler rejected `ref in`.

in is synonymous with "const scope". Doing "const scope ref" yields 
"scope cannot be ref or out", which makes sense. Just use const 
instead. -Steve

Doesn't make sense to me. It seems perfectly normal to do something 
like this:

void foo(ref in int i)
{
    i = 42; // we're setting i indirectly, and not leaking it
}

int i;
foo(i);

-- 
- Alex
May 03 2012
Alex Rønne Petersen <xtzgzorex gmail.com> writes:
On 03-05-2012 20:29, Alex Rønne Petersen wrote:
 On 03-05-2012 19:36, Steven Schveighoffer wrote:
 On Thu, 03 May 2012 13:03:34 -0400, Kagamin <spam here.lot> wrote:

 On Thursday, 3 May 2012 at 14:04:41 UTC, bearophile wrote:
 [p.21]
 The compiler decides if it has to be passed by reference or copy
 procedure Do_Something
 (P1 : in Huge_Structure) --  Passed by reference if too big

D offers more low-level knowledge/control here: it doesn't decide to 
pass by value or by reference, leaving the decision to the programmer. 
I prefer D here. But in D code like this, where a large value is 
passed, I'd like the D compiler to give a warning (despite once in a 
while that's exactly what you want?):

alias int[1_000] TA;
void foo(TA a) {}

I was surprised a little when compiler rejected `ref in`.

in is synonymous with "const scope". Doing "const scope ref" yields 
"scope cannot be ref or out", which makes sense. Just use const 
instead. -Steve

Doesn't make sense to me. It seems perfectly normal to do something 
like this:

void foo(ref in int i)
{
    i = 42; // we're setting i indirectly, and not leaking it
}

int i;
foo(i);

On second thought, the 'const' in the 'in' would probably make this 
nonsensical. Still, passing structs by ref is a case where 'ref in' 
makes sense (e.g. matrices).

-- 
- Alex
May 03 2012
"bearophile" <bearophileHUGS lycos.com> writes:
 Currently in D if you define an array with enum index you get 
 an associative array, that is wasteful in both memory and 
 performance for most enums that have contiguous values (but I 
 think maybe D implementations will be free to use a more 
 efficient array here, because the interface of AAs is opaque).

And there is this small problem too:
http://d.puremagic.com/issues/show_bug.cgi?id=6974

------------------------

Kagamin:
 I was surprised a little when compiler rejected `ref in`.

But this compiles:

alias int[1_000] TA;
void foo(const ref TA a) {}
void main() {}

Bye,
bearophile
May 03 2012
"Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 03 May 2012 13:03:34 -0400, Kagamin <spam here.lot> wrote:

 On Thursday, 3 May 2012 at 14:04:41 UTC, bearophile wrote:
 [p.21]
 The compiler decides if it has to be passed by reference or copy

  procedure Do_Something
   (P1 : in Huge_Structure) --  Passed by reference if too big

 D offers more low-level knowledge/control here: it doesn't decide to
 pass by value or by reference, leaving the decision to the
 programmer. I prefer D here. But in D code like this, where a large
 value is passed, I'd like the D compiler to give a warning (despite
 once in a while that's exactly what you want?):

 alias int[1_000] TA;
 void foo(TA a) {}

I was surprised a little when compiler rejected `ref in`.

in is synonymous with "const scope". Doing "const scope ref" yields 
"scope cannot be ref or out", which makes sense. Just use const 
instead.

-Steve
May 03 2012
Paulo Pinto <pjmlp progtools.org> writes:
On 03.05.2012 16:04, bearophile wrote:
 Ada shares many purposes with D: correctness from the language design
 too, mostly imperative, native compilation, efficiency of the binary,
 closeness to the metal (even more, because not requiring a GC, it's
 probably usable in more situations), generic programming, OOP, strong
 static typing, and both languages share many small features (like array
 slicing syntax, and so on).

 Coding in Ada is a bit boring, because you have to specify every small
 detail and to write a lot, but for certain programming tasks, like code
 that can't have too many bugs, it's maybe the best language. As Ada
 vendors say, if your life depends on a program, you often prefer that
 code to be written in good Ada instead of good C. Even if writing Ada is
 slower than writing C or C++, you save some time later debugging less.
 Today for certain tasks Haskell seems to produce reliable code, but it
 uses a GC and it's lazy, so it's quite less strict compared to Ada.

I am quite fond of Ada, even if it means writing a bit more than C or 
C++; IDEs can help here. When coding in Java or .NET, the IDE already 
writes most of the stuff for me.

The company developing the open source Ada compiler GNAT had the main 
talk at this year's FOSDEM.

Ada still suffers from the expensive compilers it had in the early 
years, but thanks to the increase in security concern in software, 
actually it seems to be picking up users in Europe, for projects where 
human lives are at risk.

But D would, of course, be an easier upgrade path for C or C++ 
developers.

--
Paulo
May 03 2012
"bearophile" <bearophileHUGS lycos.com> writes:
Paulo Pinto:

>but thanks to the increase in security concern in software, 
actually it seems to be picking up users in Europe, for projects 
where human lives are at risk.<

Recently I have seen a little growth of interest in Ada. Maybe people 
have accepted that despite its flaws, Ada is the best tool for certain 
purposes. But more probably, several other more important factors are 
at play, probably political ones too.
>But D would, of course, be an easier upgrade path for C or C++ 
developers.<

In my opinion for a decent C or C++ programmer it's not too hard to 
learn the Zen of Ada; they share similar roots (Ada is closer to 
Pascal, C++ closer to C, but the paradigms used are similar. Example: 
Ada templates require explicit instantiation, but learning this 
doesn't require a C++ programmer to change his/her/hir brain a lot).

D seems fit to write videogame engines, but even though D is safer 
than C and C++, for high integrity software I think D will need an 
external tool that enforces very strict safe coding standards, because 
safe alone can't be enough. Example: by default in Ada all integral 
values don't overflow silently. Another example: there are strict and 
safe built-in ways to use multi-cores. Another example: kinds of 
pointers.

I have not used Ada a lot, but I like how you usually define (strong) 
types for most classes of variables, like for integral values, each 
with their range, whether they are sub-ranges (subtypes) of other 
strong ranges, and so on.

A small example. If you have two matrices, where one contains (r,c) 
row-column pairs of indexes into the other matrix, it's easy to 
enforce the tuple items to be inside the number of rows or columns of 
the other matrix. If the second matrix has to contain only positive 
values, plus let's say the -3, -2 and -1 values to signal special 
cases, it's easy to define such an integral type, and so on. And the 
compiler will verify things at compile-time where possible (example: 
if you write a literal of a string of enumerated chars, or the second 
matrix, it will verify at compile-time that the values of the literal 
are in the specified ranges), and insert out-of-range tests for 
run-time.

Such range kinds and tests don't require advanced type system features 
to be implemented by Ada compilers, but they are able to catch early a 
lot of bugs that in C/C++/D bite you often. In most C++ code I've seen 
there is not even a bit of such strong static typing of the integral 
values.
This makes the code harder to modify, and just "int" used for ten 
different purposes makes it easy to use an integral variable where a 
totally different one was needed; this turns the C++/D code into a 
"soup" that's buggy and harder to debug. I don't like the carefree 
attitude of C-style languages regarding strong typing of integral 
values. I have seen that computer language features 30+ years old are 
able to avoid most of such troubles.

In functional languages such as Haskell and F# such work on indexes 
and ranged values is much less needed, but in high-performance 
Ada/C/C++/Delphi/D coding they are used quite often.

Bye,
bearophile
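The strong-typing idea described above can be sketched in D; this is a hedged editorial example, with `Row`, `Col` and `at` as invented names, showing how distinct wrapper types keep indexes from being swapped:

```d
// distinct index types: a Row can never be used where a Col is expected
struct Row { int i; }
struct Col { int i; }

int at(const int[][] m, Row r, Col c) {
    return m[r.i][c.i];
}

void main() {
    auto m = [[1, 2], [3, 4]];
    assert(at(m, Row(1), Col(0)) == 3);
    // at(m, Col(0), Row(1));  // would not compile: index types swapped
}
```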
May 03 2012
"H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, May 04, 2012 at 01:49:32AM +0200, bearophile wrote:
[...]
 I have not used Ada a lot, but I like how you usually define (strong)
 types for most classes of variables, like for integral values, each
 with their range, if they are sub-ranges (subtypes) of other strong
 ranges, and so on.

This makes me wonder if such a thing is implementable in D without 
sacrificing efficiency. Perhaps have a series of templated structs 
that wrap around an int, that keep track of the range?

TDPL gives an example of an int wrapper that checks for overflow. I 
think it might be possible to write an int wrapper, say 
RangedInt!(min,max), that enforces a certain value range. Member 
functions can handle cross-range computations (e.g., RangedInt!(1,10) 
is assignable to RangedInt!(0,15) without any checks, but the converse 
assignment will have a check, either runtime or compile-time if 
possible, to ensure the assigned value is within range).

You can also version the thing to omit range checks in -release mode, 
or have the user specify some version=... to omit range checks, if he 
wishes to have maximum efficiency after extensive testing to ensure 
range violations don't happen.

T

-- 
Frank disagreement binds closer than feigned agreement.
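One possible shape of that ranged-int idea, as a hedged sketch (the struct name and details are invented, not from TDPL or the thread):

```d
// hedged sketch of a range-checked integer wrapper
struct Ranged(int lo, int hi) {
    private int v = lo;

    this(int x) { opAssign(x); }

    void opAssign(int x) {
        assert(x >= lo && x <= hi, "value out of range");  // run-time check
        v = x;
    }

    int get() const { return v; }
    alias get this;   // reads convert implicitly to int
}

void main() {
    Ranged!(1, 10) a = 5;
    Ranged!(0, 15) b = a;   // widening: the run-time check trivially passes
    assert(b == 5);
    // a = 11;              // would trip the run-time range assert
}
```

A fuller version could skip the check statically when the source range is a subset of the destination range, as the post suggests.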
May 03 2012
"bearophile" <bearophileHUGS lycos.com> writes:
H. S. Teoh:

 This makes me wonder if such a thing is implementable in D
 without sacrificing efficiency.

Part of it is surely implementable. But if it's not in Phobos no one will use it, and even if you put those things in Phobos, I suspect you will not see a lot of D code in the wild using them.
 but the converse assignment will have a check,
 either runtime or compile-time if possible, to ensure the 
 assigned value is within range).

Performing those tests at compile-time is currently not generally 
possible in D because, although D has strong compile-time skills, code 
is run at compile time only if it's evaluated in a static context. At 
best you have to wrap your literals in some template call that calls 
CT code, but that's not natural coding and I am not sure it's enough 
in all cases. There are ways to solve this problem by introducing some 
extra capabilities in D; in the last years I have suggested several 
alternative ideas to do similar things, but they were ignored every 
time.

Bye,
bearophile
May 03 2012