
digitalmars.D.learn - Best practices of using const

reply envoid <envoid cool-mail.com> writes:
In C++ we have const correctness, in some form. A compiler can make optimizations whenever it doesn't find a const_cast, and the mutable specifier marks members that aren't part of the object's state. Of course it's not perfect, but one can document their intentions, and it's possible to use synchronization primitives without an issue. D, on the other hand, has a stricter (transitive) const, and it's almost useless in many cases. Is there an article that explains best practices of using const in D? The statement "In C++ const isn't transitive so we fixed that." alone isn't convincing. The only way I see right now is omitting the const keyword completely, which is ridiculous.
Feb 13 2019
next sibling parent Kagamin <spam here.lot> writes:
D has immutable data; const allows code to consume both mutable and immutable data.
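
A minimal sketch of that point (the function and values are made up for illustration): a single const(int)[] parameter can be fed either a mutable or an immutable array, because both convert implicitly to const.

    // Reads through a const view; works for mutable and immutable callers alike.
    size_t countPositive(const(int)[] xs)
    {
        size_t n;
        foreach (x; xs)
            if (x > 0) ++n;
        return n;
    }

    void main()
    {
        int[] m = [1, -2, 3];
        immutable int[] i = [4, 5, -6];
        assert(countPositive(m) == 2);   // mutable -> const: OK
        assert(countPositive(i) == 2);   // immutable -> const: OK
    }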
Feb 13 2019
prev sibling next sibling parent XavierAP <n3minis-git yahoo.es> writes:
On Wednesday, 13 February 2019 at 11:32:46 UTC, envoid wrote:
 Is there an article that explains best practices of using const 
 in D?
Chapter 8 of Andrei Alexandrescu's book The D Programming Language.
Feb 13 2019
prev sibling next sibling parent Alex <sascha.orlov gmail.com> writes:
On Wednesday, 13 February 2019 at 11:32:46 UTC, envoid wrote:
 Is there an article that explains best practices of using const 
 in D?
http://jmdavisprog.com/articles/why-const-sucks.html
Feb 13 2019
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Feb 13, 2019 at 11:32:46AM +0000, envoid via Digitalmars-d-learn wrote:
 In C++ we have const correctness, in some form. A compiler can make optimizations whenever it doesn't find a const_cast, and the mutable specifier marks members that aren't part of the object's state. Of course it's not perfect, but one can document their intentions, and it's possible to use synchronization primitives without an issue. D, on the other hand, has a stricter (transitive) const, and it's almost useless in many cases. Is there an article that explains best practices of using const in D? The statement "In C++ const isn't transitive so we fixed that." alone isn't convincing. The only way I see right now is omitting the const keyword completely, which is ridiculous.
Const in D is very restrictive because it's supposed to provide real compiler guarantees, i.e., it's statically verifiable that the data cannot be changed. Unfortunately, that guarantee also excludes a lot of otherwise useful idioms, like objects that cache data -- a const object cannot cache data because that means it's being mutated, or lazily-initialized objects -- because once the ctor has run, the object can no longer be mutated. Most notably, D's powerful range idiom is pretty much unusable with const because iteration over a range requires mutating the range (though having const *elements* in a range is fine). This doesn't seem as bad at first glance, but it wreaks havoc on generic code, another thing that D is purportedly good at. It's very hard (and often impossible) to write generic code that works with both const and mutable objects.

In practice, I've found that using const is really only sustainable at the lowest levels of code, to guarantee low-level non-mutability of PODs and other low-level objects. It's also useful for representing a reference to data that could be either mutable or immutable, in this "type inheritance" diagram that's very helpful for D learners to understand how D's const system works:

          const
         /     \
    mutable   immutable

I.e., mutable and immutable are implicitly convertible to const, but const is not implicitly convertible to either.

Immutable in D is a hard guarantee that the data cannot ever be changed by anyone in any thread. Const means the holder of the const reference cannot mutate it, but a 3rd party could possibly hold a mutable reference to it and mutate it that way. So it's a somewhat weaker guarantee. But that's beside the point. The point is that when your code doesn't touch the data but you want to be able to pass both mutable and immutable arguments to it, const is the ticket.

But given the restrictiveness of const, it's a rare occasion when you actually have to do this. The most notable exception being D strings, which are defined to be immutable(char)[], i.e., a (mutable) array of immutable chars (meaning the array itself can be changed, e.g., by slicing, changing length, etc., but the underlying char data is immutable). Some of my own projects use const(char)[] quite often, in order for the code to be able to accept both mutable char[] and string (i.e., immutable(char)[]).

Outside of this, I only use const rarely, maybe in the occasional query method in a low-level type where I'm sure mutation will never be necessary. Even in such cases, I rarely use const, because it's infectious and a seemingly small change of adding const to a getter method sometimes percolates throughout the entire codebase and requires const correctness everywhere else, usually ending in a stalemate when it reaches something like a range that needs to be mutable and cannot be made const without onerous refactoring.

It's *possible* in theory to make everything const-correct, but it's quite onerous and honestly only of limited benefit relative to the sheer amount of effort required to pull it off. So most of the time I just don't bother except in the lowest levels of code where the scope of const's infectiousness is limited.

So ironically, the iron-clad semantics of D's const system turns out to be also its own downfall.

T

-- 
Obviously, some things aren't very obvious.
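
A minimal sketch of the const(char)[] idiom described in the post above (the function and values are invented for illustration):

    // One signature serves both mutable char[] buffers and string
    // (immutable(char)[]), since both convert to const(char)[].
    size_t countSpaces(const(char)[] s)
    {
        size_t n;
        foreach (c; s)
            if (c == ' ') ++n;
        return n;
    }

    void main()
    {
        char[] buf = "a b c".dup;   // mutable copy
        string str = "a b c";       // immutable(char)[]
        assert(countSpaces(buf) == 2);
        assert(countSpaces(str) == 2);
    }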
Feb 13 2019
next sibling parent envoid <envoid cool-mail.com> writes:
Thank you for such a comprehensive answer.
Feb 14 2019
prev sibling next sibling parent reply Marco de Wild <mdwild sogyo.nl> writes:
On Wednesday, 13 February 2019 at 16:40:18 UTC, H. S. Teoh wrote:
 On Wed, Feb 13, 2019 at 11:32:46AM +0000, envoid via 
 Digitalmars-d-learn wrote:
 Unfortunately, that guarantee also excludes a lot of otherwise 
 useful idioms, like objects that cache data -- a const object 
 cannot cache data because that means it's being mutated, or 
 lazily-initialized objects -- because once the ctor has run, 
 the object can no longer be mutated. Most notably, D's powerful 
 range idiom is pretty much unusable with const because 
 iteration over a range requires mutating the range (though 
 having const *elements* in a range is fine).  This doesn't seem 
 as bad at first glance, but it wreaks havoc on generic code, 
 another thing that D is purportedly good at. It's very hard 
 (and often impossible) to write generic code that works with 
 both const and mutable objects.

 So ironically, the iron-clad semantics of D's const system 
 turns out to be also its own downfall.


 T
I agree that const by nature unfortunately kills lazy initialization. However, I don't really understand why const is a problem with ranges. Const elements are not a problem. Iterating over a range consumes it (if I understand correctly). It does not make sense to be able to consume a const object, so from my point of view it's perfectly logical to disallow iterating const ranges. If I'm missing something, please correct me.

I use const quite thoroughly in my project (a mahjong board game), and in fact I am writing a blog post explaining how it helped me understand what was happening in my code base. It enforces encapsulated mutations. In classic OOP languages, mutable objects propagate through the entire system unless you actively create an immutable copy (which is a lot of work for little gain). If someone modifies your object in a place you don't expect (e.g. creating and persisting data when rendering a read-only view), it becomes hard to impossible to reason about the problem and debug it.

Refactoring in const was a lot of work, but I think it made my code better in the end. I didn't run into any problems when using it, except when I tried to modify an object where I should not have (e.g. sorting a hand when rendering the view). I was able to untangle the spaghetti because the compiler poked me about it. As I didn't run into any problems and it helped clean up my code base, I would recommend trying it.
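
A minimal sketch of the kind of mistake const catches here; the Hand type and its members are invented for illustration, not taken from the game's actual code:

    import std.algorithm : sort;

    struct Hand
    {
        int[] tiles;

        void render() const
        {
            // sort(tiles);        // would not compile: tiles is const here
            foreach (t; tiles) {}  // reading is fine
        }

        void sortTiles()           // mutation stays in a non-const method
        {
            sort(tiles);
        }
    }

    void main()
    {
        auto h = Hand([3, 1, 2]);
        h.render();
        h.sortTiles();
        assert(h.tiles == [1, 2, 3]);
    }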
Feb 17 2019
parent reply Yatheendra <3df4 gmail.ru> writes:
Am I mistaken in saying that we are conflating:

    "anything that is logically const should be declared const"
    // makes perfect sense
    // e.g. the lowest 2, and some branches of the 3rd and 4th, levels
    //      of members (and a subset of the overall methods) in a
    //      5-deep type hierarchy are const

with:

    "most code/data should be declared const"
    // no! isn't efficient code all about mutation?
    // no grounds for, e.g.: "ideally, no more than 40% of code
    //      should be doing mutation"

 On Wednesday, 13 February 2019 at 16:40:18 UTC, H. S. Teoh 
 wrote:
 On Wed, Feb 13, 2019 at 11:32:46AM +0000, envoid via 
 Digitalmars-d-learn wrote:
 Unfortunately, that guarantee also excludes a lot of otherwise 
 useful idioms, like objects that cache data -- a const object 
 cannot cache data because that means it's being mutated, or 
 lazily-initialized objects -- because once the ctor has run, 
 the object can no longer be mutated. Most notably, D's 
 powerful range idiom is pretty much unusable with const 
 because iteration over a range requires mutating the range 
 (though having const *elements* in a range is fine).  This 
 doesn't seem as bad at first glance, but it wreaks havoc on 
 generic code, another thing that D is purportedly good at. 
 It's very hard (and often impossible) to write generic code 
 that works with both const and mutable objects.

 So ironically, the iron-clad semantics of D's const system 
 turns out to be also its own downfall.


 T
The point about generic code (reiterated by many) is intriguing on its own; until now, I hadn't explicitly thought about const even for my C++ template library code (whatever little I have of those). Any pointers to other posts or articles elaborating this a little bit?

I believe the other points probably matter when interacting with every other feature (I would have to write some of my "real" code in D to see if I hit it on my own), but there doesn't seem to be anything unusable about them on their own.

The inability to have a const caching object seems correct. The way around would be to have a wrapper that caches (meh). If that is not possible, then maybe caching objects just aren't meant to be const by their nature? Isn't memoize a standard library feature? I should look at it, but I wouldn't expect it to be const.

On Monday, 18 February 2019 at 06:50:32 UTC, Marco de Wild wrote:
 I agree that const by nature unfortunately kills lazy 
 initialization.
Lazy initialization - is this the same as post-blit? At the cost of copying (justifiable? maybe), doesn't D have a way to copy-construct a const/immutable struct object from a mutable one? If there is a way (or will be - there is a recent posting and a Dconf talk about copy constructors), does the copying negate the benefits of lazy initialization?
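
For what it's worth, D does offer ways to obtain an immutable value from mutable data by copying; a rough sketch (function names invented for illustration):

    // Explicit copy into immutable storage.
    immutable(int)[] makeImmutableCopy(const(int)[] xs)
    {
        return xs.idup;
    }

    // A strongly pure function's freshly allocated result can be bound
    // to immutable without copying again.
    int[] build(int n) pure
    {
        auto a = new int[](n);
        foreach (i, ref x; a)
            x = cast(int) i;
        return a;
    }

    void main()
    {
        int[] m = [1, 2, 3];
        immutable(int)[] i1 = makeImmutableCopy(m);
        immutable int[] i2 = build(3);
        assert(i1 == [1, 2, 3] && i2 == [0, 1, 2]);
    }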
 However, I don't really understand why const is a problem with 
 ranges. Const elements are not a problem. Iterating over a 
 range consumes it (if I understand correctly). It does not make 
 sense to be able to consume a const object, so from my point of 
 view it's perfectly logical to disallow iterating const ranges. 
 If I'm missing something, please correct me.
 ...
+1. Or I haven't understood why ranges would ever, ever need to be const. After all, in C++, what use is:

    std::vector<T>::const_iterator const iter = sequence.begin();

About the only kind of use would be:

    std::vector<T>::const_iterator iter = sequence.begin();
    std::vector<T>::const_iterator const iterEnd = sequence.end();

What are ranges if not an encapsulation of the above functionality?
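
A rough D rendering of the same distinction (Counter is an invented toy range): const elements iterate fine, but a const range cannot advance.

    struct Counter
    {
        int front;
        int limit;
        bool empty() const { return front >= limit; }
        void popFront() { ++front; }   // must mutate the range's own state
    }

    void main()
    {
        const(int)[] xs = [1, 2, 3];
        foreach (x; xs) {}     // fine: the slice is mutable, its elements are const

        auto r = Counter(0, 3);
        foreach (x; r) {}      // fine: r is mutable, so popFront can advance it

        const c = Counter(0, 3);
        // c.popFront();       // would not compile: popFront is not const
    }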
Jun 20 2019
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jun 21, 2019 at 06:07:59AM +0000, Yatheendra via Digitalmars-d-learn
wrote:
 Am I mistaken in saying that we are conflating:
    "anything that is logically const should be declared const"
No, not in D. D does not have logical const; it has "physical" const, which is a strict subset of logical const, and therefore there are some uses of logical const for which D's const is unsuitable.
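
A small sketch of what "physical" (transitive) const means in practice; the Node type is invented for illustration:

    struct Node
    {
        int value;
        Node* next;
    }

    void main()
    {
        auto tail = Node(2, null);
        auto head = Node(1, &tail);

        const(Node)* p = &head;
        // p.value = 10;        // would not compile: p points to const data
        // p.next.value = 20;   // would not compile: const reaches through next

        head.next.value = 20;   // still possible through the mutable reference
        assert(tail.value == 20);
    }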
    // makes perfect sense
    // e.g. the lowest 2, and some branches of the 3rd and 4th, levels
    //      of members (and a subset of the overall methods) in a
    //      5-deep type hierarchy are const
 with:
    "most code/data should be declared const"
    // no! isn't efficient code all about mutation?
    // no grounds for, e.g.: "ideally, no more than 40% of code
    //      should be doing mutation"
Actually, optimizers work best when there is minimal mutation *in the original source*. The emitted code, of course, is free to use mutation however it wants. But the trouble with mutation at the source level is that it makes many code analyses very complex, which hinders the optimizer from doing what it might have been able to do in the absence of mutation (or a reduced usage of mutation).

Aliasing is one example that hampers optimizers from emitting optimal code. Aliasing plus mutation makes the analysis so complex that the optimizer has a hard time deciding whether a particular construct can be optimized away or not. Having minimal mutation in the original source code allows the optimizer to make more assumptions, which in turn leads to better optimizations. It also makes the source code easier to understand.

Paradoxically, having less mutation in the source code means it's easier for the compiler to optimize it into mutation-heavy optimal code -- because it doesn't have to worry about arbitrary mutations in the source code, and therefore can be free(r) to, e.g., eliminate redundant copies, redundant movement of data, etc., which ultimately results in in-place modification of values, i.e., mutation-heavy emitted code.

Conversely, if the source code is heavy on mutations, then the compiler cannot confidently predict the overall effect of the mutations, and therefore is forced to err on the safe side of assuming the worst, i.e., don't apply aggressive optimizations in case the programmer's mutations invalidate said optimizations. The result is less optimal code.

[...]
 The inability to have a const caching object seems correct. The way
 around would be to have a wrapper that caches (meh). If that is not
 possible, then maybe caching objects just aren't meant to be const by
 their nature? Isn't memoize a standard library feature? I should look
 at it, but I wouldn't expect it to be const.
It's not as simple as it might seem. Here's the crux of the problem: you have an object that logically never changes (assuming no bugs, of course). Meaning every time you read it, you get the same value, and so multiple reads can be elided, etc. I.e., you want to tell the compiler that it's OK to assume this object is const (or immutable). However, it is expensive to initialize, and you'd like it to be initialized only when it's actually needed, and once initialized you'd like it to be cached so that you don't have to incur the initialization cost again.

However, declaring a const object in D requires initialization, and after initialization it cannot be mutated anymore. This means you cannot declare it const in the first place if you want caching.

It gets worse, though. Wrappers only work up to a certain point. But when you're dealing with generic code, it becomes problematic. Assume, for instance, that you have a type Costly that's logically const, but lazily initialized (and cached). Since you can't actually declare it const -- otherwise lazy initialization doesn't work -- you have to declare it mutable. Or, in this case, declare a wrapper that holds a const reference to it, say something like this:

    struct Payload {
        // lazily-initialized data
    }

    struct Wrapper {
        const(Payload)* impl;
        ...
    }

However, what if you're applying some generic algorithms to it? Generic code generally assumes that given a type T, if you want to declare a const instance of it, you simply write const(T). But what do you pass to the generic function? If you pass Wrapper, const(Wrapper) means `impl` cannot be rebound, so lazy initialization fails. OK, then let's pass const(Payload) directly. But that means you no longer have a wrapper, so you can't have lazy initialization (Payload must be constructed before you can pass it to the function, thus it must be eagerly initialized at this point).

It's an impasse. Cached / lazily-initialized objects and D's const simply don't mix. Well, you can try to mix them, but it's like trying to mix water and oil. They just don't work well together.

T

-- 
Notwithstanding the eloquent discontent that you have just respectfully expressed at length against my verbal capabilities, I am afraid that I must unfortunately bring it to your attention that I am, in fact, NOT verbose.
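
A compilable reduction of that impasse (field names and values are invented; only the shape matters): lazy initialization works through a mutable Wrapper, and stops being expressible the moment the Wrapper itself is const.

    struct Payload
    {
        int data;
    }

    struct Wrapper
    {
        const(Payload)* impl;

        const(Payload)* get()        // lazy init needs a mutable Wrapper
        {
            if (impl is null)
            {
                auto p = new Payload;
                p.data = 42;
                impl = p;            // rebinding impl: fine on a mutable Wrapper
            }
            return impl;
        }
    }

    void main()
    {
        Wrapper w;
        assert(w.get().data == 42);

        const Wrapper cw;
        // cw.get();                 // would not compile: get() is not const,
        //                           // and a const get() could not rebind impl
    }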
Jun 21 2019
parent reply Yatheendra <3df4 gmail.ru> writes:
That is a comprehensive reply. No pointers to other material 
required :-)

On Friday, 21 June 2019 at 16:35:50 UTC, H. S. Teoh wrote:
 On Fri, Jun 21, 2019 at 06:07:59AM +0000, Yatheendra via 
 Digitalmars-d-learn wrote:
 Actually, optimizers work best when there is minimal mutation 
 *in the original source*.  The emitted code, of course, is free 
 to use mutation however it wants.  But the trouble with 
 mutation at the source level is that it makes many code 
 analyses very complex, which hinders the optimizer from doing 
 what it might have been able to do in the absence of mutation 
 (or a reduced usage of mutation).
 [...]
(aside: I hope we don't end up advocating the Haskell/Erlang way or the Clojure way!)

Yes, the hindrances of non-const code are documented (are most programmers listening!). I was only pointing out that mutation being part of the design limits what can be logically const.

Is the trade-off clear, between (mythical) guaranteed C++-like-const at all the points we remember to put it, versus guaranteed D-const at the fewer points we manage to put it? Does D-const distort the design (but you get all the optimizations possible in that scenario)?
 The inability to have a const caching object seems correct. 
 The way around would be to have a wrapper that caches (meh). 
 If that is not possible, then maybe caching objects just 
 aren't meant to be const by their nature? Isn't memoize a 
 standard library feature? I should look at it, but I wouldn't 
 expect it to be const.
 It's not as simple as it might seem. Here's the crux of the problem: you have an object that logically never changes (assuming no bugs, of course). Meaning every time you read it, you get the same value, and so multiple reads can be elided, etc. I.e., you want to tell the compiler that it's OK to assume this object is const (or immutable). However, it is expensive to initialize, and you'd like it to be initialized only when it's actually needed, and once initialized you'd like it to be cached so that you don't have to incur the initialization cost again.

 However, declaring a const object in D requires initialization, and after initialization it cannot be mutated anymore. This means you cannot declare it const in the first place if you want caching.

 It gets worse, though. Wrappers only work up to a certain point. But when you're dealing with generic code, it becomes problematic. Assume, for instance, that you have a type Costly that's logically const, but lazily initialized (and cached). Since you can't actually declare it const -- otherwise lazy initialization doesn't work -- you have to declare it mutable. Or, in this case, declare a wrapper that holds a const reference to it, say something like this:

     struct Payload {
         // lazily-initialized data
     }

     struct Wrapper {
         const(Payload)* impl;
         ...
     }

 However, what if you're applying some generic algorithms to it? Generic code generally assumes that given a type T, if you want to declare a const instance of it, you simply write const(T). But what do you pass to the generic function? If you pass Wrapper, const(Wrapper) means `impl` cannot be rebound, so lazy initialization fails. OK, then let's pass const(Payload) directly. But that means you no longer have a wrapper, so you can't have lazy initialization (Payload must be constructed before you can pass it to the function, thus it must be eagerly initialized at this point).
I should check on std memoize & maybe code something up for understanding before writing more than this - would you mind pointing to an example range algorithm that we would have trouble passing a caching wrapper to?

I hadn't considered pointers as an option. Why wouldn't the following work, if expressible in D?

    struct CostlyComputeResult {
       ... // data fields
       // constructor takes compute results, no postblit
    }

    struct Wrapper {
       const (CostlyComputeResult) *cachee = 0;
       ... // data fields storing compute inputs
       // constructor takes compute inputs
       // pointer to function(compute inputs)
       const ref get() {
          if (!cachee) {
             cachee = new(function(inputs));
          }
          return cachee;
       }
    }

Hopefully jumping through these hoops is worth the while. Instead, maybe just wait until the compiler grows a 'cache pure' function qualifier (move constructor required?).
Jun 21 2019
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jun 21, 2019 at 06:32:33PM +0000, Yatheendra via Digitalmars-d-learn
wrote:
[...]
    struct CostlyComputeResult {
       ... // data fields
       // constructor takes compute results, no postblit
    }
 
    struct Wrapper {
       const (CostlyComputeResult) *cachee = 0;
       ... // data fields storing compute inputs
       // constructor takes compute inputs
       // pointer to function(compute inputs)
       const ref get() {
          if (!cachee) {
             cachee = new(function(inputs));
          }
          return cachee;
       }
    }
 
 Hopefully jumping through these hoops is worth the while. Instead,
 maybe just wait until the compiler grows a 'cache pure' function
 qualifier (move constructor required?).
The problem with this is that you cannot use const(Wrapper).

In particular, if you have a function that wants to document that it does not mutate its argument, you cannot write:

	auto func(in Wrapper data) { ... }

because const(Wrapper) does not allow lazy initialization. Basically you have to resort to convention (e.g., name it ReadOnly or something similar) rather than actually mark it const.

This generally isn't a big problem if you're using func in isolation, but as soon as you need to compose func with other const code, you quickly find yourself in a gordian knot of const incompatibilities that percolate throughout the entire call chain, because D's const is transitive. I.e., you want func to interact with other functions that trade in const data, but you cannot because of the constant(!) need to keep Wrapper mutable. The non-constness of Wrapper will then percolate up the call chain, "tainting" all functions that call it so that they cannot be marked const, even though *logically* they are const.

This mix is already bad enough (try it on a non-trivial codebase sometime and see for yourself), but once you add generic functions to the mix, the whole thing simply becomes unusable -- because generic functions expect to write const types as const(T), but that will break if T is Wrapper. OK, so you can try to make it mutable as a workaround. But then that breaks const-ness attribute inference so the generic function becomes non-const, which in turn recursively causes its callers to be non-const, etc. Somewhere at the top of the call chain you'll have a const method that wants to call a const function, passing some higher-level data structure that eventually contains Wrapper somewhere deep down -- and it simply doesn't work without making the *entire* structure mutable, due to the infectiousness of const. So as soon as you use Wrapper in any data structure of arbitrary complexity, the entire thing must be mutable -- otherwise const percolates all the way down to Wrapper and the caching doesn't work anymore.

tl;dr: using a wrapper works fine for relatively simple cases. But as soon as you add any meaningful complexity to it, the scheme quickly becomes either impractically convoluted, or outright impossible to use without a hard cast to cast away const (thereby invoking the lovely UB).

T

-- 
INTEL = Only half of "intelligence".
Jun 21 2019
parent reply Yatheendra <3df4 gmail.ru> writes:
It feels disingenuous to want to call a caching object even 
"logically" const. There has to be a scaffolding-based but 
hopefully generic compromise. I haven't yet tested this belief, 
but I believe "physical" const is of good use wherever it can be 
applied.

On Friday, 21 June 2019 at 23:39:20 UTC, H. S. Teoh wrote:
 The problem with this is that you cannot use const(Wrapper).

 In particular, if you have a function that wants to document 
 that it does not mutate its argument, you cannot write:

 	auto func(in Wrapper data) { ... }

 because const(Wrapper) does not allow lazy initialization.
 ...
IMHO, in parameters are a more important scenario than const in ranges (of course, same constraints). Just for the heck of it, I'll try to get a snippet "working" but I see out parameters snaking all through the call chain!
Jun 21 2019
parent Yatheendra <3df4 gmail.ru> writes:
On Saturday, 22 June 2019 at 05:10:14 UTC, Yatheendra wrote:
 It feels disingenuous to want to call a caching object even 
 "logically" const. There has to be a scaffolding-based but 
 hopefully generic compromise. I haven't yet tested this belief, 
 but I believe "physical" const is of good use wherever it can 
 be applied.

 On Friday, 21 June 2019 at 23:39:20 UTC, H. S. Teoh wrote:
 The problem with this is that you cannot use const(Wrapper).

 In particular, if you have a function that wants to document 
 that it does not mutate its argument, you cannot write:

 	auto func(in Wrapper data) { ... }

 because const(Wrapper) does not allow lazy initialization.
 ...
...
"physical" const has to be applicable & good in many/most other use-cases than caching (citation needed). Somehow, wanting to call mutating code on logically const values sounds to be the wrong want. Lazy initialization sounds like it will be a good DIP :-) Generate (once) on (first) read, just like copy (once) on (first) write. But there are other ways. Heavy computation called at most once: bite the bullet, eagerly construct an immutable value ahead of time. "physical" const might have just enough optimization opportunity to offset biting the bullet. Called more than once: same thing.
Jun 22 2019
prev sibling next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Wednesday, 13 February 2019 at 16:40:18 UTC, H. S. Teoh wrote:
 On Wed, Feb 13, 2019 at 11:32:46AM +0000, envoid via 
 Digitalmars-d-learn wrote:
 [...]
Const in D is very restrictive because it's supposed to provide real compiler guarantees, i.e., it's statically verifiable that the data cannot be changed. [...]
I keep hearing how const is nigh unusable in D, and except for ranges I litter my code with const everywhere, pretty much just as often as I used in C++. I normally only use `auto` for return types and input ranges, and nearly all of my function parameters are `in`.

It's true that a lot of people don't use `const` -- I keep finding and filing bugs in dub libraries as soon as I try using them with const -- but other than that: const is fine.
Feb 19 2019
parent reply Kagamin <spam here.lot> writes:
On Tuesday, 19 February 2019 at 15:30:22 UTC, Atila Neves wrote:
 I keep hearing how const is nigh unusable in D, and except for 
 ranges I litter my code with const everywhere, pretty much just 
 as often as I used in C++.
I once spent a good amount of effort to annotate my code with pure and inout only to find a compiler bug, then I realized that annotations aren't really needed, because the collection is inherently mutable anyway (appender).
Feb 19 2019
parent reply drug <drug2004 bk.ru> writes:
On 19.02.2019 19:19, Kagamin wrote:
 On Tuesday, 19 February 2019 at 15:30:22 UTC, Atila Neves wrote:
 I keep hearing how const is nigh unusable in D, and except for ranges 
 I litter my code with const everywhere, pretty much just as often as I 
 used in C++.
I once spent a good amount of effort to annotate my code with pure and inout only to find a compiler bug, then I realized that annotations aren't really needed, because the collection is inherently mutable anyway (appender).
I use const all over the place too, and I've made PRs to other libraries to add the const qualifier. Yes, it sometimes forces me to make a copy of data in order to mutate it - but I'm pretty sure that is the purpose of the qualifier. It helps me catch/prevent bugs. So I don't agree with people who do not use const at all. The const qualifier in D is definitely usable and useful.

The same I can say about properties - for example I use them in meta programming to detect what to serialize/process - I skip methods but serialize properties and for me this is a nice language feature.
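
A small sketch of that copy-then-mutate pattern (names invented for illustration): take a .dup of the const data rather than casting const away.

    import std.algorithm : sort;

    int[] sortedCopy(const(int)[] xs)
    {
        auto copy = xs.dup;   // .dup of const(int)[] yields a mutable int[]
        sort(copy);
        return copy;
    }

    void main()
    {
        const(int)[] data = [3, 1, 2];
        assert(sortedCopy(data) == [1, 2, 3]);
        assert(data == [3, 1, 2]);   // the original is untouched
    }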
Feb 19 2019
parent reply Kagamin <spam here.lot> writes:
On Tuesday, 19 February 2019 at 16:38:17 UTC, drug wrote:
 The same I can say about properties - for example I use them in 
 meta programming to detect what to serialize/process - I skip 
 methods but serialize properties and for me this is a nice 
 language feature.
Serialization of arbitrary stuff is a bad practice anyway, it was the cause of vulnerabilities in serialization libraries. DTO is the way to go.
Feb 20 2019
parent drug <drug2004 bk.ru> writes:
On 20.02.2019 11:05, Kagamin wrote:
 On Tuesday, 19 February 2019 at 16:38:17 UTC, drug wrote:
 The same I can say about properties - for example I use them in meta 
 programming to detect what to serialize/process - I skip methods but 
 serialize properties and for me this is a nice language feature.
Serialization of arbitrary stuff is a bad practice anyway, it was the cause of vulnerabilities in serialization libraries. DTO is the way to go.
Serialization is just an example here. But using properties lets me avoid DTOs except in really complex cases, and it decreases maintenance cost. In my case (I develop a prototype and change its data structures very often) they work really well.
Feb 20 2019
prev sibling parent Bart <Bart gmail.com> writes:
On Wednesday, 13 February 2019 at 16:40:18 UTC, H. S. Teoh wrote:
 So ironically, the iron-clad semantics of D's const system 
 turns out to be also its own downfall.
Such things are not ironic. There is always a trade-off; you get nothing for free in this universe. Physics tells us this: conservation laws apply to energy, and everything is energy. Hence your computer cannot violate these laws, nor can the D specification (whatever it ends up meaning), nor can the D const system, so to speak... That is, any time something is inversely related to something else, there will be a conservation relationship. If you restrict something too much, then something else in direct opposition becomes too unrestricted.

It's not that D's const system is bad; it's that it creates too much restriction without any other option. The usual way to solve such problems is granularity. That way you can choose the right tool for the job. Maybe D needs different levels of const: constN, where constN can always be cast to constn for n <= N. One would need to properly define the levels to maximize utility and minimize the granularity. D probably only needs 3-5 levels to be effective.
Jun 22 2019
prev sibling parent psycha0s <box mail.com> writes:
On Wednesday, 13 February 2019 at 11:32:46 UTC, envoid wrote:
 Is there an article that explains best practices of using const 
 in D?
You can find some information here: https://dlang.org/articles/const-faq.html
Feb 13 2019