
digitalmars.D - Re: Notes IV

reply Jason House <jason.james.house gmail.com> writes:
bearophile Wrote:

 3) In my D code I keep writing "length" all the time (currently I can find 458
"length" words inside my D libs). It's a long word; $ inside the scope of []
helps reduce the typing, but I often enough write "lenght", so I still think a
default attribute named "len" (as in Python) may be better than "length". The
attribute "dup" too is an abbreviation, probably of "duplicate", so
abbreviations seem acceptable in this context.

I agree that len is better than length (I like shorter code), but like the other poster I think size is even better.
 6a) Often most of the time spent writing programs goes into debugging the
code. So I encourage D to try to adopt syntax and other constructs that help
avoid bugs in the first place. Many bugs can be avoided by adding certain
runtime checks that the compiler can remove when the code is compiled in
release mode.

I totally agree with this. My biggest pet peeve is crashes from accessing null references. I'd love to see this give meaningful error output without needing a debugger to catch it in the act.
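
For what it's worth, here's a minimal sketch of the kind of removable runtime check bearophile is talking about, using an in-contract plus assert (which -release strips out); the function and parameter names are made up for illustration:

    // Hypothetical example: fail loudly with a message instead of a bare
    // segfault when a null reference sneaks in. Compiled with -release,
    // the contract disappears.
    void process(Object obj)
    in
    {
        assert(obj !is null, "process(): obj must not be null");
    }
    body
    {
        // ... safe to dereference obj here ...
    }
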
 7) The D syntax of is() is powerful, but I think some of its variants are not
very readable, so there may be a better syntax (even if it requires is() to be
split into more than one construct).

I totally agree. I always look this up when I use it (I just don't use it enough to remember the proper syntax).
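
As a reminder, here are the kinds of is() forms in question (this is standard D, shown only because the syntax is easy to forget):

    // The pattern-matching form of is() binds a new symbol (U here) when
    // the type matches the given pattern.
    template ElementType(T)
    {
        static if (is(T U : U[]))      // T is some array type U[]
            alias U ElementType;
        else
            alias void ElementType;    // not an array
    }

    static assert(is(ElementType!(int[]) == int));
    static assert(is(ElementType!(char) == void));
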
 10a) The new syntax for properties in C# seems nice; instead of this code:
 private int myval;
 public int Myval { get { return myval; } private set { myval = value; } }
 You just need:
 public int property Myval { get; private set; }

Seems good. The difference between properties and true variables is rather subtle. I worry that it's an error-prone construct.
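
For comparison, here's roughly how the same property reads in today's D, relying on the implicit property-call syntax (a no-arg method can be read without parentheses, and obj.value = x calls value(x)); the class and member names are made up for illustration:

    class Counter
    {
        private int myval;

        // getter: usable as c.value
        int value() { return myval; }

        // setter: usable as c.value = 42 (private here, like the C# example)
        private void value(int v) { myval = v; }
    }
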
 14) D follows the good choice of fixing the length of all types except real. I
can accept that some compilers and CPUs can support 80-bit floating point
values while others can't, but I don't like using "real"s and leaving the
compiler the choice of 64 or 80 bits to implement them. So "real" could be
renamed "real80" and have a fixed length. If the compiler/CPU doesn't allow
80-bit floating point numbers, then fine, you don't find real80 defined, and if
you use it you get a compile error (or you use a static if + an alias to rename
float as real, so you can fake it yourself; I don't like the compiler faking it
silently for me).

I've never been all that happy with the fixed size thing... I think sizes should be compiler-dependent unless the user is explicit about what they want. That gives the compiler room for optimization on whatever the hardware happens to be. I'd actually like to see "int" be variable length and have stuff like int8, int16, int32, int64 have set sizes. In my mind, it has the added benefit of making code that uses them more readable as requiring a fixed size.
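
As a hypothetical sketch of the static if + alias idea from point 14 above (real80 is not an existing D type, just an illustrative name):

    // Only define real80 when the target actually has x87 80-bit reals
    // (64 mantissa bits); otherwise the name doesn't exist and any use is
    // a compile error, instead of the compiler silently faking it.
    static if (real.mant_dig == 64)
        alias real real80;
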
 15) I'd like to import modules only inside unittests, or only when I use the
-unittest flag. With the help of a static if, something simple like this may
suffice:
 static if (unittest) {
   import std.stdio;
   unittest foo1 { ... }
   unittest bar1 { ... }
 }

I like this too.
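
Something close to this already works today with a user-defined version identifier (the name UnitTest and the extra -version flag below are just one possible convention, not anything built in):

    // compile with: dmd -unittest -version=UnitTest foo.d
    version (UnitTest)
    {
        import std.stdio;

        unittest
        {
            writefln("foo1 tests");
            // ...
        }
    }
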
 17) After using D for some time, I still think that "and" and "or" (and maybe
"not"), as in Python, are more readable than "&&" and "||" (and maybe "!", but
this is less important).
 The only good side of keeping "&&" and "||" in D is to make the compiler
digest C-like code better.
 GCC has the -foperator-names option, which allows you to use "and", "or", etc.
in C++ code.

Also sounds good. I've been coding C++ long enough not to care about the weird symbols, but that's probably not helpful for people just starting out with D.
Jan 23 2008
next sibling parent reply Robert Fraser <fraserofthenight gmail.com> writes:
Jason House wrote:
 I've never been all that happy with the fixed size thing...  I think sizes
should be compiler-dependent unless the user is explicit about what they want. 
That gives the compiler room for optimization on whatever the hardware happens
to be.  I'd actually like to see "int" be variable length and have stuff like
int8, int16, int32, int64 have set sizes.  In my mind, it has the added benefit
of making code that uses them more readable as requiring a fixed size.

Ewww no! That's why there are #defines at the top of every big C program to ensure the sizes match up. Will there be an overflow here on one processor but not on another? How should this value be serialized? These are the sorts of questions I don't want to have to deal with. Furthermore, that would make pointer arithmetic and inline assembly, among other low-level things, harder.
Jan 23 2008
parent Jason House <jason.james.house gmail.com> writes:
Robert Fraser wrote:

 Jason House wrote:
 I've never been all that happy with the fixed size thing...  I think
 sizes should be compiler-dependent unless the user is explicit about what
 they want.  That gives the compiler room for optimization on whatever the
 hardware happens to be.  I'd actually like to see "int" be variable
 length and have stuff like int8, int16, int32, int64 have set sizes.  In
 my mind, it has the added benefit of making code that uses them more
 readable as requiring a fixed size.

Ewww no! That's why there are #defines at the top of every big C program to ensure the sizes match up.

 Will there be an overflow here on one 
 processor but not on another? 

If exact overflow behavior is needed, then the developer should require an exact size. If overflow is an exceptional corner case well beyond the range the code is designed for (i.e. some minimum int size is assumed), it should not matter. What about size_t and ptrdiff_t?
 How should this value be serialized? 

That's certainly a fair issue!
 These  
 are the sort of questions I don't want to have to deal with.
 Furthermore, that would make pointer arithmetic and inline assembly,
 among other low-level things, harder.

Isn't inline assembly platform-specific anyway? I guess it just seems strange to me to force potentially inefficient code generation on non-32-bit machines in order to save a headache. My gut says that what works well under the current dominance of 32-bit machines likely won't hold up as computer architectures change.
Jan 27 2008
prev sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Jason House" <jason.james.house gmail.com> wrote in message 
news:fn86eb$1h76$1 digitalmars.com...

 I've never been all that happy with the fixed size thing...  I think sizes 
 should be compiler-dependent unless the user is explicit about what they 
 want.  That gives the compiler room for optimization on whatever the 
 hardware happens to be.  I'd actually like to see "int" be variable length 
 and have stuff like int8, int16, int32, int64 have set sizes.  In my mind, 
 it has the added benefit of making code that uses them more readable as 
 requiring a fixed size.

I actually agree with you. The problem with the fixed size stuff is, well, for example:

    for(uint i = 0; i < array.length; i++)
        ...

So this is now broken on 64-bit machines, since arrays can be longer than what can be held in a 32-bit integer. So we have foreach loops, but they can't necessarily be used in all circumstances. If we have a case where we _have_ to use a C-style loop, we're stuck using the ugly, hard-to-type, badly-named "size_t" instead. Keep in mind too that size_t is unsigned, and if you need a signed version -- it's ptrdiff_t! Of course!

Another aesthetic issue is that qualitative names like 'short' and 'long' don't make a lot of sense on architectures where the word length is > 32. On a 64-bit machine, 'long' is not long, it's "normal"; 'int' is now short, and 'short' even shorter.

The idea Chris Miller and I share is to have, as you've said, [u]int<n> types where <n> is an integer indicating the size, so int8, uint8, int32, uint128 (logical progression instead of "cent", yaay!), int36 (for PDP-10s), etc. Then the qualitative names become aliases for "sensible" int sizes based on the architecture. On a 32-bit machine, [u]short is 16 bits, [u]int is 32, and [u]long is 64; on a 64-bit machine, [u]short is 32 bits, [u]int is 64, and [u]long is 128; on a PDP-9, [u]int would be 18 bits and [u]long would be 36 ;) This also obviates size_t and ptrdiff_t, as [u]int takes their place as the "native word size".

This is all theoretical ranting, of course.
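
As a rough D sketch of that scheme, expressed with today's types (the names int8/word and the mapping below are purely illustrative, not an actual proposal or existing feature):

    // Fixed-width names, spelled out explicitly:
    alias byte   int8;      alias ubyte  uint8;
    alias short  int16;     alias ushort uint16;
    alias int    int32;     alias uint   uint32;
    alias long   int64;     alias ulong  uint64;

    // A "sensible" word-sized name chosen per architecture, standing in
    // for what size_t/ptrdiff_t do today:
    version (X86_64)
        alias long word;    // 64-bit target
    else
        alias int  word;    // assume a 32-bit target otherwise
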
Jan 24 2008
parent reply "Janice Caron" <caron800 googlemail.com> writes:
On Jan 24, 2008 1:52 PM, Jarrett Billingsley <kb3ctd2 yahoo.com> wrote:
 The problem with the fixed size stuff is, well,
 for example:

 for(uint i = 0; i < array.length; i++)

Not a problem for D! foreach(element;array)
Jan 24 2008
next sibling parent Matti Niemenmaa <see_signature for.real.address> writes:
Janice Caron wrote:
 On Jan 24, 2008 1:52 PM, Jarrett Billingsley <kb3ctd2 yahoo.com> wrote:
 The problem with the fixed size stuff is, well,
 for example:

 for(uint i = 0; i < array.length; i++)

Not a problem for D! foreach(element;array)

Until you want to do more complex iteration:

    for (size_t i = array.length; i-- > 0;)
    {
        do_stuff();
        if (some_special_case)
            i += some_special_value;
    }

-- 
E-mail address: matti.niemenmaa+news, domain is iki (DOT) fi
Jan 24 2008
prev sibling parent "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Janice Caron" <caron800 googlemail.com> wrote in message 
news:mailman.18.1201188331.5260.digitalmars-d puremagic.com...
 On Jan 24, 2008 1:52 PM, Jarrett Billingsley <kb3ctd2 yahoo.com> wrote:
 The problem with the fixed size stuff is, well,
 for example:

 for(uint i = 0; i < array.length; i++)

Not a problem for D! foreach(element;array)

You apparently didn't read the rest of my post. "So we have foreach loops, but they can't necessarily be used in all circumstances. If we have a case where we _have_ to use a C-style loop, we're stuck using the ugly, hard-to-type, badly-named "size_t" instead."
Jan 24 2008