
digitalmars.D - Re: Wish: Variable Not Used Warning

reply Robert Fraser <fraserofthenight gmail.com> writes:
Markus Koskimies Wrote:

 On Wed, 09 Jul 2008 17:53:52 -0400, Nick Sabalausky wrote:
 
 In a "properly defined language", how would you solve the problem of
 unintentionally-unused variables?

My suggestion: just give an error. No need for an "unused" keyword; just comment out code that has no effect. For function arguments that are unused but mandatory for keeping an interface, leave the argument unnamed. Furthermore, also give errors for unused private/static things. If they are not used, why are they in the code? Just comment them out. In a similar manner, warn about conditional expressions that have a constant value (like "uint a; if(a > 0) { ... }"), code that has no effect, and all those things :)

And yes, warnings could be considered "optional errors" for those of us who think it's best to tackle all sorts of quirks & potential bugs at compile time rather than trying to find them with runtime debugging. As long as the warning makes some sense and can be circumvented in some reasonable way, just throw it on my screen :)

In a final release, unused things are signs of errors. When writing code, unused variables (perhaps they were used in a commented-out section?) are a dime a dozen. If unused vars were errors in any language, it would be hell for development. With our linkers, unused imports are potentially more dangerous than an unused local variable that the code generator throws out.
Jul 10 2008
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Fraser wrote:
 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.

Yes, that's why I find the warning to be a nuisance, not a help.
Jul 10 2008
next sibling parent reply Markus Koskimies <markus reaaliaika.net> writes:
On Thu, 10 Jul 2008 12:51:47 -0700, Walter Bright wrote:

 Robert Fraser wrote:
 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.

Yes, that's why I find the warning to be a nuisance, not a help.

I've been coding for a while, and I have a hard time remembering the last time I had unused vars or other pieces of code that were there intentionally... Sure, heavy use of "gcc -Wall" and "lint" may have given me some sort of brain damage that affects my coding style, so that I unconsciously avoid warning-generating sketches...

About those unused imports (mentioned by Robert) - I think the compiler could stop on those too. I was just coming to that subject :D
Jul 10 2008
next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Markus,

 On Thu, 10 Jul 2008 12:51:47 -0700, Walter Bright wrote:
 
 Robert Fraser wrote:
 
 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.
 


I've been coding for a while, and I have a hard time remembering the last time I had unused vars or other pieces of code that were there intentionally... Sure, heavy use of "gcc -Wall" and "lint" may have given me some sort of brain damage that affects my coding style, so that I unconsciously avoid warning-generating sketches... About those unused imports (mentioned by Robert) - I think the compiler could stop on those too. I was just coming to that subject :D

One case where extra vars might be added is as padding for cache effects.
Jul 10 2008
next sibling parent reply Brad Roberts <braddr puremagic.com> writes:
Markus Koskimies wrote:
 On Thu, 10 Jul 2008 21:09:26 +0000, BCS wrote:
 
 one cases where the extra vars might be added is as padding for cache
 effects.

?!? You mean you use extra vars to make the cache load things onto the right cache line? Sounds extremely silly to me!

For a good talk on just how important this can be, read the slides and/or watch the video here: http://www.nwcpp.org/Meetings/2007/09.html

The memory latency and cache line behavior is covered towards the end. The entire talk is really, really good. Parts of it talk about another issue that's come up on these newsgroups more than once: instruction and memory ordering within concurrent applications.

Later,
Brad
Jul 10 2008
parent Brad Roberts <braddr puremagic.com> writes:
Markus Koskimies wrote:
 On Fri, 11 Jul 2008 04:17:13 +0000, Markus Koskimies wrote:
 
 I'll read that later.

I read it. It's all about the well-known barrier between processors, memories (RAM) and disks, and the necessity of (1) having multi-level caches and (2) striving for locality of execution. Nothing to do with the D compiler, extra unused vars and performance.

If you really want to do cache optimization for a modern PC, and not trust the compiler & runtime environment, you would need to (1) determine the cache hierarchy, sizes and the number of ways it has (as well as indexing), and (2) write your code in assembler, and locate it at runtime so that it fills the cache lines optimally. Certainly nothing to do with HLLs like D. Absolutely nothing.

Why is it that so many people here seem to have some sort of weird blinders that turn the world into black and white with no shades of grey? The world just doesn't work like that. Sorry to burst your bubble.

I'm glad it's well known to you, but it's completely foreign to others. It's very relevant information, and that's why I posted the URL. Additionally, your last sentence makes me think you're either being willfully blind or just stubborn. Cache latency and multi-processor interlocking on cache lines can be a serious performance killer that is easily resolved with padding, without the need to dip into linker tricks and assembly. Unfortunately, tools don't really exist to make it easy to discover these sorts of problems, so just knowing that they can exist might help someone out there realize a new avenue of thought at some point in their programming career.

Every modern x86 shares a cache line size these days: 64 bytes. That one optimization alone can double the performance of a system that's hitting cache line contention. An awful lot of people aren't even aware this sort of thing can occur. Are you suggesting that it's not something programmers should be aware of?

Your 'absolutely nothing' comment is wrong. Every one of the examples in that presentation is in C, and they demonstrate its effects quite clearly. Can you do even better by going lower level? Sure, but that doesn't make it worthless or nothing.

Later,
Brad
Jul 10 2008
prev sibling parent JAnderson <ask me.com> writes:
BCS wrote:
 Reply to Markus,
 
 On Thu, 10 Jul 2008 12:51:47 -0700, Walter Bright wrote:

 Robert Fraser wrote:

 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.


I've been coding for a while, and I have a hard time remembering the last time I had unused vars or other pieces of code that were there intentionally... Sure, heavy use of "gcc -Wall" and "lint" may have given me some sort of brain damage that affects my coding style, so that I unconsciously avoid warning-generating sketches... About those unused imports (mentioned by Robert) - I think the compiler could stop on those too. I was just coming to that subject :D

One case where extra vars might be added is as padding for cache effects.

In C++ this is not really a problem. There are two ways to deal with it: (1) write a do-nothing expression statement like "var;", and (2) don't give the variable a name at all. -Joel
Jul 29 2008
prev sibling next sibling parent Markus Koskimies <markus reaaliaika.net> writes:
On Thu, 10 Jul 2008 21:09:26 +0000, BCS wrote:

 one cases where the extra vars might be added is as padding for cache
 effects.

?!? You mean you use extra vars to make the cache load things onto the right cache line? Sounds extremely silly to me!
Jul 10 2008
prev sibling next sibling parent Markus Koskimies <markus reaaliaika.net> writes:
On Thu, 10 Jul 2008 20:23:03 -0700, Brad Roberts wrote:

 Markus Koskimies wrote:
 On Thu, 10 Jul 2008 21:09:26 +0000, BCS wrote:
 
 one cases where the extra vars might be added is as padding for cache
 effects.

?!? You mean you use extra vars to make the cache load things onto the right cache line? Sounds extremely silly to me!

For a good talk on just how important this can be, read the slides and/or watch the video here: http://www.nwcpp.org/Meetings/2007/09.html

I'll read that later.
 The memory latency and cache line behavior is covered towards the end.
 The entire talk is really really good.  Parts of it talk about another
 issue that's come up on these newsgroups more than once, instruction and
 memory ordering within concurrent applications.

Sorry to say, but:

1) The situation is different in embedded systems. In those systems you know the size of the cache, the number of ways it has and the number of lines it stores. And in those systems, you don't use the compiler to optimize the cache; you use the linker to put things in the correct place.

2) For the PC, IMO cache optimization is totally ridiculous. You really don't have any kind of clue about which kind of computer your code is being executed on. If you optimize cache usage for one special type of cache-CPU configuration, you have no idea how it performs in another configuration. I'll bet my €2 that optimizations made for someone's Windows Pentium-4 have no (good) effects on my 64-bit Linux box.

If you're going to make cache optimizations, you'll need the linker, and in the PC world you need a system that does it automatically for you.
Jul 10 2008
prev sibling next sibling parent reply Markus Koskimies <markus reaaliaika.net> writes:
On Fri, 11 Jul 2008 04:17:13 +0000, Markus Koskimies wrote:

 I'll read that later.

I read it. It's all about the well-known barrier between processors, memories (RAM) and disks, and the necessity of (1) having multi-level caches and (2) striving for locality of execution. Nothing to do with the D compiler, extra unused vars and performance.

If you really want to do cache optimization for a modern PC, and not trust the compiler & runtime environment, you would need to (1) determine the cache hierarchy, sizes and the number of ways it has (as well as indexing), and (2) write your code in assembler, and locate it at runtime so that it fills the cache lines optimally. Certainly nothing to do with HLLs like D. Absolutely nothing.
Jul 10 2008
next sibling parent BCS <ao pathlink.com> writes:
Reply to Markus,

 On Fri, 11 Jul 2008 04:17:13 +0000, Markus Koskimies wrote:
 
 I'll read that later.
 

 memories (RAM) and disks, and the necessity of

The specific effect I was talking about is not in the slides. If you haven't seen the video, you didn't see the part I was referring to.

int[1000] data;

thread 1:
   for(int i = 1_000_000; i; i--) data[0]++;

thread 2a:
   for(int i = 1_000_000; i; i--) data[1]++;

thread 2b:
   for(int i = 1_000_000; i; i--) data[999]++;

On a multi core system, run threads 1 and 2a, and then run 1 and 2b. You will see a difference.
Jul 11 2008
prev sibling parent Markus Koskimies <markus reaaliaika.net> writes:
On Fri, 11 Jul 2008 17:10:50 +0000, BCS wrote:

 The specific effect I was talking about is not in the slides. If you
 haven't seen the video, you didn't see the part I was referring to.
 
 
 int[1000] data;
 
 thread 1:
    for(int i = 1_000_000; i; i--) data[0]++;
 
 thread 2a:
    for(int i = 1_000_000; i; i--) data[1]++;
 
 thread 2b:
    for(int i = 1_000_000; i; i--) data[999]++;
 
 
 On a multi core system run thread 1 and 2a and then run 1 and 2b. You
 will see a difference.

Sure I will. In the first example, the caches of the processor cores will constantly be negotiating the cache contents. If you are writing a program with threads intensively accessing the same data structures, you need to know what you are doing. There is a big difference between doing:

1)
   int thread_status[1000];
   thread_code() { ... thread_status[my_id] = X ... }

2)
   Thread* threads[1000];
   Thread { int status; run() { ... status = X ... } }

In the first example, you use a global data structure for the threads, and that can always cause problems. The entire cache system is based on locality; without locality in software it will not work. In that example, you would need to know the details of the cache system to align the data correctly.

In the second example, the thread table is global, yes; but the data structures for the threads get allocated from the heap, and they are local. Whether they are allocated from the same cache line depends on the operating system as well as the runtime library (the implementation of the heap; does it align the blocks to cache lines or not).

Writing threaded code, I would always suggest trying to minimize accesses to global data structures, and trying to always use local data. Most probably every forthcoming processor architecture will try to improve the effectiveness of such threads. I would also try to use the standard thread libraries, since they try to tackle the machine-dependent bottlenecks.
Jul 11 2008
prev sibling next sibling parent reply Markus Koskimies <markus reaaliaika.net> writes:
On Thu, 10 Jul 2008 21:53:49 -0700, Brad Roberts wrote:

 Certainly nothing to do with HLLs like D. Absolutely nothing.

Why is it that so many people here seem to have some sort of weird blinders that turn the world into black and white with no shades of grey? The world just doesn't work like that. Sorry to burst your bubble.

Be my guest.
 Additionally, your last sentence makes me think you're either being
 willfully blind or just stubborn.

Probably both.
 Every modern x86 shares a cache line size these days.. 64 bytes.  That
 one optimization alone can double the performance of a system that's
 hitting cache line contention.

There is a thing called align. If you don't care about cache indexes, but just want things to land on separate cache lines, use align. But that really does not have any effect on _regular_ multi-way caches.
 An awful lot of people aren't even aware
 this sort of thing can occur.

For decades, PC processor manufacturers have optimized their processors for software, not the other way around. That is why the processors execute functions so quickly; it is the sole reason for having caches (the regular locality of software, e.g. the IBM study from the 60's).
 Are you suggesting that it's not
 something programmers should be aware of?

Yes, I am.
 You're 'absolutely nothing' comment is wrong.  Every one of the examples
 in that presentation are in C, and demonstrate quite clearly its
 effects.  Can you do even better by going lower level, sure, but doesn't
 make it worthless or nothing.

Certainly, if you make low-level optimizations, it pays back somehow. But only on the architectures you are doing it for. Not a HLL thing, IMO. And whenever I'm optimizing cache usage for a specified architecture, I use alignments (to cache line sizes) and the linker (not to put two regularly referenced things at the same index).
Jul 10 2008
next sibling parent BCS <ao pathlink.com> writes:
Reply to Markus,

 For decades, PC processor manufacturers are optimized their processors
 for software, not in the other way. That is why the processors execute
 functions so quickly, that is the sole reasons for having caches (the
 regular locality of software, e.g. the IBM study from 60's).
 

I hope I'm reading you wrong, but if I'm not: the whole point of the talk is that CPUs can't get better performance by being optimized more. If the code isn't written well (the code isn't optimized for the CPU), performance will not improve... ever.
 Are you suggesting that it's not
 something programmers should be aware of?


How can you say that? Expecting the tool chain to deal with cache effects would be like expecting it to convert a bubble sort into qsort.
Jul 11 2008
prev sibling parent reply Markus Koskimies <markus reaaliaika.net> writes:
On Fri, 11 Jul 2008 17:16:54 +0000, BCS wrote:

 Reply to Markus,
 
 For decades, PC processor manufacturers are optimized their processors
 for software, not in the other way. That is why the processors execute
 functions so quickly, that is the sole reasons for having caches (the
 regular locality of software, e.g. the IBM study from 60's).
 
 

The whole point of the talk is that CPUs can't get better performance by being optimized more. If the code isn't written well (the code isn't optimized for the CPU), performance will not improve... ever.

They will get better, and that is going to affect your software. IMO you should not write your software for the CPU; instead you need to follow certain paradigms. I'll explain this at length.

The current processors are fundamentally based on RASP models, which are an example of the so-called von Neumann architecture. This architecture offers, when physically realized, a very flexible but dense computing platform, since it is constructed from two specialized parts - memory and CPU. The drawback of this architecture is the so-called von Neumann bottleneck, which has been irritating both processor and software designers for decades.

---

Processor fabrication technology sets limits on how fast a processor can execute instructions. The early processors always fetched instructions from main memory (causing of course lots of external bus activity), and they processed one instruction at a time. Since processor fabrication technology improves quite slowly, there has always been interest in searching for "alternative" solutions that could give performance benefits on current technology. These improvements have been, for example:

- Pipelining
- Super-scalar architectures
- OoO execution
- Threaded processors
- Multi-core processors
- etc.

The more switches you can put on silicon, the more you can try to find performance benefits from concurrency. Pipelining & OoO have had a major impact on compiler technology; in the early days, code generation was relatively easy, but these days, to get the best possible performance you really need to know the internals of the processors. When writing code in C or D, you have very minimal possibilities for making your software utilize pipelines and OoO - if the compiler does not do that, your program will not do that. But at the same time, processors have been made compiler-friendly; since high level languages use lots of certain instructions and patterns, the processors try to be good with them.
If you take a look at the evolution of processors and compare it to the evolution of software design, you will see the impact of changing from BASIC/assembler programming to compiled HLLs, changing from procedural languages to OO languages, and changing to threaded architectures.

In the BASIC/assembler era, the processor machine language was intended for humans; that was the era of CISC-style processors. Compilers do not need human-readable machine code, and when the compiled languages were taken into use, RISC processors rose. The procedural languages used lots of calls - the processors were optimized for calling functions quickly. OO introduced intensive use of referring to data via pointers (compared to the data segments of procedural languages); the processors were optimized for accessing memory efficiently via pointers.

How does caching relate to this? A complex memory hierarchy (and in fact the pipelines and OoO, too) is not a desirable and intentional thing; it is a symptom arising from the RASP model. It has been introduced only because it can give performance benefits to software, and the key word here is locality.

Locality - and its natural consequence, distribution - is, in fact, one of the keywords of forthcoming processor models. The next major step in processor architectures is very likely reconfigurable platforms, and they will introduce a whole new set of challenges for compilers and software to be fully utilized. Refer to the PlayStation Cell compiler to get the idea. At code level, you really can't design your software to be "reconfigurable-friendly". The best thing is to just keep the code clear, and hope that compilers get the idea and produce good results. At your software architecture level, if you are using threads, try to keep everything local. The importance of that is only getting higher.
 Are you suggesting that it's not
 something programmers should be aware of?


How can you say that? Expecting the tool chain to deal with cache effects would be like expecting it to convert a bubble sort into qsort.

Does the description above answer this question? In case it does not, I'll explain: in general software, don't mess with the cache. Instead, strive for locality and distribution. Use the threading libraries, and when possible, try to do the interactions between threads in some standard way.

If you're writing lower level code, like a threading library or a hardware driver, you will probably need to know about caching. That is a totally different story, since writing a hardware driver especially introduces many more things to take into account along with caches.
Jul 11 2008
parent BCS <ao pathlink.com> writes:
Reply to Markus,

[lots of stuff]

I think we are seeing the same effects from slightly different perspectives 
and arriving at /only slightly/ different results.

As long as you have code that has a wide fan-out of potential memory access 
("in 42 instructions I might be accessing any of 2GB of data" vs. "I might 
be accessing any of only 32 bytes in the next 200 instructions"), a deep memory 
hierarchy will be needed, because you can't fit 2GB of RAM on the CPU, and 
accessing /anything/ off chip (and even things on chip, to some extent) is 
slow. PGAs and PPAs (programmable /processor/ arrays) might be able to 
tackle some of these issues, particularly in highly pipeline-centric programming 
(do X to Y1 to Y1e6), but this still requires that the programmer be aware 
of the CPU/cache/memory stuff.

Also, I have access to two P4's, three P-III's and a Sparc. So if I want to improve 
performance, my only option is to write better programs, and that's something 
that my tool chain can only do just so much for. Then I need to know about 
the system. I'm all for better chips. And when they come, I'll (where it's 
needed) optimize my code for them. But till then, and even then, programmers 
need to know what they are working with.
Jul 11 2008
prev sibling next sibling parent Markus Koskimies <markus reaaliaika.net> writes:
On Fri, 11 Jul 2008 05:14:57 +0000, Markus Koskimies wrote:

 And all the time I'm optimizing cache usage for specified architecture,
 I use alignments (to cache line sizes) and linker (not to put two
 regularly referenced things to same index).

Never ever have I tried to make cache optimizations with unused variables.
Jul 10 2008
prev sibling parent "Bruce Adams" <tortoise_74 yeah.who.co.uk> writes:
On Thu, 10 Jul 2008 21:32:23 +0100, Markus Koskimies  
<markus reaaliaika.net> wrote:

 About those unused imports (mentioned by Robert) - I think that the
 compiler could stop also with those. I was just coming to that subject :D

Unused symbols must not be a warning in the case of shared libraries. They must be errors when called from functions in a real program, as this is most likely a bug and is what we have come to expect from C programs.

For unreachable code, I suppose it doesn't matter so much. I would still be inclined to err on the side of caution and require a stub rather than trying to be clever on the sly (which might go wrong).

It is of course reasonable to have symbols that are resolved at runtime by a dynamic linker. These should probably be annotated specially in the code. But please, for gawd's sake, not __declspec(dllimport), because that's just sick and twisted.

Regards,

Bruce.
Jul 14 2008
prev sibling next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright a écrit :
 Robert Fraser wrote:
 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.

Yes, that's why I find the warning to be a nuisance, not a help.

Related to this, but not specifically to your post: I found some unused variables in DMD's front-end code. I don't know if they are there for a reason. Should I post them as bugs or something like that?
Jul 10 2008
prev sibling next sibling parent reply Markus Koskimies <markus reaaliaika.net> writes:
On Thu, 10 Jul 2008 12:51:47 -0700, Walter Bright wrote:

 Robert Fraser wrote:
 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.

Yes, that's why I find the warning to be a nuisance, not a help.

I have thought about this issue more closely for a few hours now. I'll try to write a more comprehensive answer on my home pages, but here are some quick thoughts.

* What is the purpose of source code?

There are probably lots of answers, but I will present just one: source code is aimed at human readers & writers. Certainly this does not mean that the source would be readable by John Doe - you need to learn the practices used by a programming language and get familiar with it before you can say anything about the practicality of that language.

An ultra-short history: I have been programming for about 30 years, and have done it for a living for about 20 years. My "brain damage" is that I have mostly programmed embedded systems - DSPs, MCUs and microcontrollers - so people more familiar with Win/Linux programming can tell me where I'm wrong. I have never been a programming language purist. In fact, most of my colleagues think I'm a misuser of OOP, since I see nothing wrong with using gotos, large switch-cases or God-objects, if they just work.

I think that source code is not for the compiler. Compilers can deal with languages that don't have any kind of syntactic salt; they do not require comments, and they generally don't give a shit about indentation. No, source code is not meant to be compiler-friendly (although it is very good that it is, for completely different reasons); instead it is meant to be read by a human - it is the bridge between informal humans and formal compilers.

How could you improve your source code specification? I think there is just one answer, which D at the moment follows: the more complex the programs you can understand by reading the source (being familiar with that specific language), the better it is.
I know that sounds somewhat humanist, but really - if source is not meant to be understood by the limited capability of humans, why the hell are we using (1) modular languages, (2) high level languages, (3) _indentation_, (4) __COMMENTS?!?__?!? Tell me that source code is something other than fundamentally aimed at being understood by a human familiar with the language, and I'll reconsider the following.

---

Since source code is aimed at humans, to let them understand ever more complex structures, what is the purpose of warnings?

Many of us D users have long experience with C, C++ and Java. Many of us are well aware of the problems in those languages. There is a very solid reason that I - and we - nowadays use D for my/our freetime activities, and why I'm advocating that C++ people give D a try. (I'm very sad that, having been a few years out of the community, there is currently a big fight between DMD-GDC-Tango-Phobos; that is something we all need to solve to really make D the future programming language - which it can be, IMO!)

But those warnings? I know that some readers think warnings indicate inadequacies in the design of the programming language. But - referring to the previous parts - what is the sole reason for a source language? All of you who think that D does not need any more warnings, please answer this question. There are two source codes; let's assume that both of them are valid D:

1)
   class A {
      int thing;
      int getThing() { return thing; }
   };

2)
   class A {
      private void addThatThing() { ... }
      int a = 0;
      void *getThatProperty() { ... }
      template A(T) { ... }
      int getThing() {
         template A(int) { possiblyReturn(1); }
         if(itWasNot!(int()) return somethingElse();
         return module.x.thatThing();
      }
   }

Which one is easier to understand? Yes, I know the examples are very extreme, but think about this: if you allow the intermediate code to exist, does it really make the language better?
Consider that both of the examples were made by a very experienced D programmer. From the compiler's point of view, what is the big difference? Nothing. As long as the complex thing follows the correct syntax, the compiler is happy; but does that mean the source is really understandable by other readers? Does it really follow good programming practices - and more importantly, do we need to push people to follow good programming practices? Tell me that complex source code is not more error-prone, and I will stay silent on this issue for the rest of my life!

---

For me, the answer is very clear: yes, we need to guide people to follow good practices. From a human point of view (and I have lots of experience reading other people's sources), there is a big difference between writing an infinite loop in the following ways:

1)
   for(;;) { ... }

2)
   int a = -1;
   int b = (a - 2);
   for(uint i = 0; (a*2) + (b*3) < i; i++) { ... }

The first one you recognize as an infinite loop in less than a microsecond. The second one requires careful examination - let's say a minute, which is 60,000,000 times longer. From the point of view of the compiler, they make no difference. The compiler easily detects the infinite loop and drops all unnecessary parts.

Could you resolve the situation by adding more syntactic salt? Like I said earlier, I think there is a certain limit to that. It is no longer useful to tell the compiler everything with reserved keywords; instead the compiler could steer you towards common good programming practices. Those are called warnings; they do not necessarily prevent the compiler from generating code, but they flag code that does not follow the guidelines of human understandability.

In this end, I know there is great resistance against C/C++ in the D community (I understand it, but I'm not signing up to it).
What I am asking for is to follow the principles stated in the "D Overview", and common sense: the language is not meant for language purists; instead it is meant for those of us trying to write something useful, as easily as it can be achieved. That is the real power of D; and it really DOES NOT mean that D compilers should not have warnings!
Jul 10 2008
parent reply BCS <ao pathlink.com> writes:
Reply to Markus,

 On Thu, 10 Jul 2008 12:51:47 -0700, Walter Bright wrote:
 
 Robert Fraser wrote:
 
 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.
 


I'll try to write a more comprehensive answer on my home pages, but here are some quick thoughts. * What is the purpose of source code? There are probably lots of answers, but I will present just one: source code is aimed at human readers & writers.

 I know that
 that sounds something humanists, but really - if the source is not
 meant to be understood by limited capability of humans, why the hell
 we are using (1) modular languages, (2) high level languages, (3)
 _indentation_, (4) __COMMENTS?!?__?!?
 

counter point: perl/regex

counter counter point: smalltalk (I've never seen it, but I've been told...) <g>

All (most?) programming languages are designed to be written (lisp?); some are designed to be read. The purpose of a programming language is to tell the compiler what to do in a way that humans /can deal with/, saying nothing about how well.
Jul 11 2008
parent reply "Nick Sabalausky" <a a.a> writes:
"BCS" <ao pathlink.com> wrote in message 
news:55391cb32f16f8cab1577d9fd978 news.digitalmars.com...
 Reply to Markus,

 On Thu, 10 Jul 2008 12:51:47 -0700, Walter Bright wrote:

 Robert Fraser wrote:

 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.


I'll try to write a more comprehensive answer on my home pages, but here are some quick thoughts. * What is the purpose of source code? There are probably lots of answers, but I will present just one: source code is aimed at human readers & writers.

 I know that
 that sounds something humanists, but really - if the source is not
 meant to be understood by limited capability of humans, why the hell
 we are using (1) modular languages, (2) high level languages, (3)
 _indentation_, (4) __COMMENTS?!?__?!?

counter point: perl/regex

I'm not sure that's much of a counterpoint, since those are widely considered by everyone except total die-hards to be unsuitable for most non-trivial tasks.
Jul 11 2008
parent BCS <ao pathlink.com> writes:
Reply to Nick,

 "BCS" <ao pathlink.com> wrote in message
 news:55391cb32f16f8cab1577d9fd978 news.digitalmars.com...
 
 counter point: perl/regex
 

I'm not sure that's much of a counterpoint, since those are widely considered by everyone except total die-hards to be unsuitable for most non-trivial tasks.

but it is still a programming language (and I was sort of making a joke)
Jul 11 2008
prev sibling next sibling parent Markus Koskimies <markus reaaliaika.net> writes:
On Mon, 14 Jul 2008 21:51:16 +0100, Bruce Adams wrote:

 On Thu, 10 Jul 2008 21:32:23 +0100, Markus Koskimies
 <markus reaaliaika.net> wrote:
 
 About those unused imports (mentioned by Robert) - I think that the
 compiler could stop also with those. I was just coming to that subject
 :D

Unused symbols must not be a warning in the case of shared libraries.

You mean unused local vars or private members in classes? (Although I learned a short time ago that private members in D are not invisible to other code.) Certainly public members in classes/modules are never "unused", since you may link the module with different code and they become used.
Jul 14 2008
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Walter Bright wrote:
 Robert Fraser wrote:
 In a final release, unused things are signs of errors. When writing
 code, unused variables (perhaps they were used in a commented-out
 section?) are a dime a dozen.

Yes, that's why I find the warning to be a nuisance, not a help.

That's because you think warnings should be errors, which they shouldn't be (see my other post about "cautions"). Unused variables in code should only generate "caution" messages, not errors.

-- 
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Jul 27 2008