
digitalmars.D.learn - I like dlang but I don't like dub

reply Alain De Vos <devosalain ymail.com> writes:
Dlang includes some good ideas.
But dub pulls in so much stuff. Too much for me.
I like things which are clean, lean, little, small.
But when I use dub it links with so many libraries.
Are they really needed?
And how does it compare to Python's pip?
Feel free to elaborate.
Mar 17 2022
next sibling parent Cym13 <cpicard purrfect.fr> writes:
On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:
 Dlang includes some good ideas.
 But dub pulls in so much stuff. Too much for me.
 I like things which are clean,lean,little,small.
 But when i use dub it links with so many libraries.
 Are they really needed ?
 And how do you compare to pythons pip.
 Feel free to elaborate.
Long story short, dub isn't needed. If you prefer pulling dependencies and compiling them by hand, nothing is stopping you.

As for the comparison to pip, I'd say that dub actually compares favourably. Yes, it does do more than pip, and that used to annoy me. But if you look at it from the stance of a user it makes sense: when you pull a package and its dependencies using pip, you expect to be able to run them immediately. Python isn't a compiled language, but D is, so to let you run a package and its dependencies immediately dub needs to do more than pip: download the dependencies, manage their versions and compile them. That last part is the reason for most of dub's added complexity, IMHO.
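For concreteness, a minimal sketch of the extra work being described. The package name and version below are illustrative assumptions, not anything from this thread:

    dub.sdl (the project recipe):

        name "hello"
        dependency "arsd-official:dom" version="~>10.0"

    source/app.d:

        import arsd.dom;      // made available by the dependency above
        import std.stdio;

        void main()
        {
            auto doc = new Document("<html><body><p>hi</p></body></html>");
            writeln(doc.querySelector("p").innerText);
        }

`dub build` (or `dub run`) then resolves the version constraint, downloads the package into the local cache, compiles it and links it into the program, which is exactly the extra work pip never has to do for an interpreted language.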
Mar 18 2022
prev sibling next sibling parent Tobias Pankrath <tobias pankrath.net> writes:
On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:
 Dlang includes some good ideas.
 But dub pulls in so much stuff. Too much for me.
 I like things which are clean,lean,little,small.
 But when i use dub it links with so many libraries.
 Are they really needed ?
 And how do you compare to pythons pip.
 Feel free to elaborate.
Dub is fantastic in some places, e.g. if you just need to execute something from code.dlang.org via `dub run`, and single-file packages (https://dub.pm/advanced_usage) are great for writing small command-line utilities with dependencies. I don't like it as a build system, and it is notoriously hard to integrate into existing build systems. You can look at meson (which had some D-related bug fixes recently) or reggae for that. Or just do `dmd -i` as long as compile times are low enough.
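As an illustration of the single-file packages mentioned above, a minimal sketch; the dependency name and version are assumptions, not taken from this thread:

    /+ dub.sdl:
        name "showtext"
        dependency "arsd-official:dom" version="~>10.0"
    +/
    import arsd.dom;
    import std.stdio;

    void main()
    {
        // dub reads the embedded recipe above, fetches and builds the
        // dependency, then compiles and runs this one file:
        //   dub run --single showtext.d
        auto doc = new Document("<html><body><b>hello</b></body></html>");
        writeln(doc.querySelector("b").innerText);
    }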
Mar 18 2022
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 18, 2022 at 04:13:36AM +0000, Alain De Vos via Digitalmars-d-learn
wrote:
 Dlang includes some good ideas.
 But dub pulls in so much stuff. Too much for me.
 I like things which are clean,lean,little,small.
 But when i use dub it links with so many libraries.
 Are they really needed ?
[...] As a package manager, dub is OK, it does its job.

As a build system, I find dub simply isn't good enough for my use cases. Its build model is far too simplistic, and does not integrate well with external build systems (especially in projects involving multiple programming languages). I rather prefer SCons for my general build needs.

As far as dub pulling in too many libraries: IMNSHO this is a malaise of modern software in general. (No) thanks to the code reuse mantra, nobody seems satisfied until they refactor every common function into its own package, and everything depends on everything else, so doing something trivial like displaying a static webpage with vibe.d pulls in 25 packages just so it can be built.

I much rather prefer Adam's arsd libs[1], where you can literally just copy the module into your own workspace (they are almost all standalone single-file modules, with a small number of exceptions) and just build away. No hairy recursive dependencies to worry about; everything you need is encapsulated in a single file.

That's the kind of dependency philosophy I subscribe to. The dependency graph of a project should not be more than 2 levels deep (preferably just 1). You shouldn't have to download half the world just to build a hello world program. And you definitely shouldn't need to solve an NP-complete problem[2] just so you can build your code.

[1] https://github.com/adamdruppe/arsd/
[2] https://research.swtch.com/version-sat - Dependency hell is NP-complete.

T

-- 
It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton
Mar 18 2022
parent reply =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
tl;dr: I am talking on a soap box with a big question mark hovering over my head: Why can't I accept pulling in dependencies automatically?

On 3/18/22 07:48, H. S. Teoh wrote:

 As a package manager, dub is OK, it does its job.
As a long-time part of the D community, I am ashamed to admit that I don't use dub. I am ashamed because there is no particular reason, or my reasons may not be rational.
 As a build system
I have seen and used a number of build systems that were started in response to make's shortcomings, and they ended up with their own shortcomings. Some of them were actually program code that teams would write to build their system. As in steps: "compile these, then do these". What? My mind must have been so tainted by the beauty of make that writing build steps in a build tool strikes me as unbelievable... But it happened. I don't remember its name but it was in Python. You would modify Python code to build your programs. (?)

I am well aware of make's many shortcomings but love its declarative style where things happen automatically. That's one smart program there. A colleague loves Bazel and is playing with it. Fingers crossed...
 I much rather prefer Adam's arsd libs[1], where you can literally just
 copy the module into your own workspace (they are almost all standalone
 single-file modules
That sounds great, but don't those modules have common needs that would normally be shared through common modules? It is ironic that making packages as small as possible reduces the chance that any given module has dependencies, while at the same time increasing the total number of dependencies.
 The dependency graph of a project
 should not be more than 2 levels deep (preferably just 1).
I am fortunate that my programs are command line tools and libraries that have so far depended only on system libraries. The only outside dependency is cmake-d to plug into our build system. (I don't understand or agree with all of cmake-d but things are in an acceptable balance at the moment.) The only system tool I lately started using is ssh. (It's a topic for another time, but my program copies itself to the remote host over ssh to work as a pair of client and server.)
 You shouldn't have to download half the world
The first time I learned about pulling in dependencies, it terrified me. (This is the part where I realize I am very different from most other programmers.) I am still terrified that my dependency system will pull in a tree of code that I have no idea what it is doing. Has it been modified to be malicious overnight? I thought it was possible. The following story is an example of exactly what I was terrified about:

https://medium.com/hackernoon/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5

Despite such risks many projects just pull in code. (?) What am I missing?

I heard about a team at a very high-profile company actually reviewing such dependencies before accepting them into the code base. But reviewing them only at acceptance time! Once the dependency is accepted, the projects would automatically pull in all unreviewed changes and run potentially malicious code on your computer. I am still trying to understand where I went wrong. I simply cannot understand this. (I want to believe they changed their policy and they don't pull in changes automatically anymore.)

When I (had to) use Go for a year about 4 years ago, it was the same: the project failed to build one morning because there was an API change in one of the dependencies. O... K... They fixed it in a couple of hours, but still... Yes, the project should probably have depended on a particular version, but then weren't we interested in bug fixes or added functionality? Why should we have decided to hold on to version 1.2.3 instead of 1.3.4? Should teams follow their many dependencies before updating? Maybe that's the part I am missing...

Thanks for listening... Boo hoo... Why am I like this? :)

Ali
Mar 18 2022
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 18, 2022 at 11:16:51AM -0700, Ali Çehreli via Digitalmars-d-learn
wrote:
 tldr; I am talking on a soap box with a big question mind hovering
 over on my head: Why can't I accept pulling in dependencies
 automatically?
Because it's a bad idea for your code to depend on some external resource owned by some anonymous personality somewhere out there on the 'Net that isn't under your control.
 On 3/18/22 07:48, H. S. Teoh wrote:
 
 As a package manager, dub is OK, it does its job.
As a long-time part of the D community, I am ashamed to admit that I don't use dub. I am ashamed because there is no particular reason, or my reasons may not be rational.
I have only used dub once -- for an experimental vibe.d project -- and that only using a dummy empty project the sole purpose of which was to pull in vibe.d (the real code is compiled by a different build system). And I'm not even ashamed to admit it. :-P
 As a build system
I have seen and used a number of build systems that were started after make's shortcomings and they ended up with their own shortcomings. Some of them were actually program code that teams would write to build their system. As in steps "compile these, then do these". What? My mind must have been tainted by the beauty of make that writing build steps in a build tool strikes me as unbelievable... But it happened. I don't remember its name but it was in Python. You would modify Python code to build your programs. (?)
Maybe you're referring to SCons? I love SCons... not because it's Python, but because it's mostly declarative (the Python calls don't actually build anything immediately -- they register build actions with the build engine and are executed later by an opaque scheduler). The procedural part is really for things like creating lists of files and such (though for the most common tasks there are already basically-declarative functions available for use), or for those occasions where the system simply doesn't have the means to express what you want to do, and you need to invent your own build recipe and plug it in.
 I am well aware of make's many shortcomings but love it's declarative
 style where things happen automatically. That's one smart program
 there. A colleague loves Bazel and is playing with it. Fingers
 crossed...
Make in its most basic incarnation was on the right path. What came after, however, was a gigantic mess. The macro system, for example, which leads to spaghetti code of the C #ifdef-hell kind. Just look at dmd/druntime/phobos' makefiles sometime, and see if you can figure out what exactly they're trying to do, and how.

There are also implementation issues, the worst of which is non-reproducibility: running `make` after making some changes has ZERO guarantees about the consistency of what happens afterwards. It *may* just work, or it may silently link in stale binaries from previous builds that silently replace some symbols with obsolete versions, leading to heisenbugs that exist in your executable but do not exist in your code. (I'm not making this up; I have seen this with my own eyes in my day job on multiple occasions.)

The usual bludgeon-solution to this is `make clean; make`, which defeats the whole purpose of having a build system in the first place (just write a shell script to recompile everything from scratch, every time). Not to mention that `clean` isn't a built-in rule, and I've encountered far too many projects where `make clean` doesn't *really* clean everything thoroughly. Lately I've been resorting to `git clean -dfx` as a nuke-an-ant solution to this persistent problem. (Warning: do NOT run the above git command unless you know what you're doing. :-P)
 I much rather prefer Adam's arsd libs[1], where you can literally
 just copy the module into your own workspace (they are almost all
 standalone single-file modules
That sounds great but aren't there common needs of those modules to share code from common modules?
Yes and no. The dependencies aren't zero, to be sure. But Adam also doesn't take code reuse to the extreme, in that if some utility function can be written in 2-3 lines, there's really no harm repeating it across modules. Introducing a new module just to reuse 2-3 lines of code is the kind of emperor's-clothes philosophy that leads to Dependency Hell. Unfortunately, since the late 70's/early 80's code reuse has become the sacred cow of computer science curriculums, and just about everybody has been so indoctrinated that they would not dare copy-n-paste a 2-3 line function for fear that the Reuse Cops would come knocking on their door at night.
 It is ironic that packages being as small as possible reduces the
 chance of dependencies of those modules and at the same time it
 increases the total number of dependencies.
IMNSHO, when the global dependency graph becomes non-trivial (e.g., NP-complete Dependency Hell), that's a sign that you've partitioned your code wrong. Dependencies should be simple, i.e., more-or-less like a tree, without diamond dependencies or conflicting dependencies of the kind that makes resolving dependencies NP-complete.

The one-module-per-dependency thing about Adam's arsd is an ideal that isn't always attainable. But the point is that one ought to strive in the direction of fewer recursive dependencies rather than more. When importing a single Go or Python module triggers the recursive installation of 50+ modules, 45 of which I've no idea why they're needed, that's a sign that something has gone horribly, horribly wrong with the whole thing; we're losing sight of the forest for the trees. That way be NP-complete dragons.
 The dependency graph of a project should not be more than 2 levels
 deep (preferably just 1).
I am fortunate that my programs are commond line tools and libraries that so far depended only on system libraries. The only outside dependency is cmake-d to plug into our build system. (I don't understand or agree with all of cmake-d but things are in an acceptable balance at the moment.) The only system tool I lately started using is ssh. (It's a topic for another time but my program copies itself to the remote host over ssh to work as a pair of client and server.)
I live and breathe ssh. :-D I cannot imagine getting anything done at all without ssh. Incidentally, this is why I prefer a vim-compatible programming environment over some heavy-weight IDE any day. Running an IDE over ssh is out of the question.
 You shouldn't have to download half the world
The first time I learned about pulling in dependencies terrified me.
This is far from the first time I encountered this concept, and it *still* terrifies me. :-D
 (This is the part I realize I am very different from most other
 programmers.)
I love being different! ;-)
 I am still terrified that my dependency system will pull in a tree of
 code that I have no idea doing. Has it been modified to be malicious
 overnight? I thought it was possible. The following story is an
 example of what I was exactly terrified about:
 
 https://medium.com/hackernoon/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5
EXACTLY!!! This is the sort of thing that gives nightmares to people working in network security. Cf. also the Ken Thompson compiler hack.
 Despite such risks many projects just pull in code. (?) What am I
 missing?
IMNSHO, it's because of the indoctrination of code reuse. "Why write code when you can reuse something somebody else has already written?" Sounds good, but there are a lot of unintended consequences:

1) You become dependent on code of unknown provenance written by authors of unknown motivation; how do you know you aren't pulling in malicious code? (Review the code you say? Ha! If you were that diligent, you'd have written the code yourself in the first place. Not likely.) This problem gets compounded with every recursive dependency (it's perhaps imaginable if you carefully reviewed library L before using it -- but L depends on 5 other libraries, each of which in turn depends on 8 others, ad nauseam. Are you seriously going to review ALL of them?)

2) You become dependent on an external resource, the availability of which may not be under your control. E.g., what happens if you're on the road without an internet connection, your local cache has expired, and you really *really* need to recompile something? Or what if one day, the server on which this dependency was hosted suddenly upped and vanished itself into the ether? Don't tell me "but it's hosted on XYZ network run by Reputable Company ABC, they'll make sure their servers never go down!" -- try saying that 10 years later when you suddenly really badly need to recompile your old code. Oops, it doesn't compile anymore, because a critical dependency doesn't exist anymore and nobody has a copy of the last ancient version the code compiled with.

3) The external resource is liable to change any time, without notice (the authors don't even know you exist, let alone who you are and why changing some API will seriously break your code). Wake up the day of your important release, and suddenly your project doesn't compile anymore 'cos upstream committed an incompatible change. Try explaining that one to your irate customers. :-P
 I heard about a team at a very high-profile company actually reviewing
 such dependencies before accepting them to the code base. But
 reviewing them only at acceptance time! Once the dependency is
 accepted, the projects would automatically pull in all unreviewed
 changes and run potentially malicious code on your computer.
Worse yet, at review time library L depended on external packages X, Y, Z. Let's grant that X, Y, Z were reviewed as well (giving the benefit of the doubt here). But are the reviewers seriously going to continue reviewing X, Y, Z on an ongoing basis? Perhaps X, Y, Z depended upon P, Q, R as well; is *anyone* who uses L going to even notice when R's maintainer turned rogue and committed some nasty backdoor into his code?
 I am still trying to understand where I went wrong. I simply cannot
 understand this. (I want to believe they changed their policy and they
 don't pull in automatically anymore.)
If said company is anything like the bureaucratic nightmare I have to deal with every day, I'd bet that nobody cares about this 'cos it's not their department. Such menial tasks are owned by the department of ItDoesntGetDone, and nobody ever knows what goes on there -- we're just glad they haven't bothered us about show-stopping security flaws yet. ;-)
 When I (had to) used Go for a year about 4 years ago, it was the same:
 The project failed to build one morning because tere was an API change
 on one of the dependencies. O... K... They fixed it in a couple of
 hours but still...  Yes, the project should probably have depended on
 a particular version but then weren't we interested in bug fixes or
 added functionality? Why should we have decided to hold on to version
 1.2.3 instead of 1.3.4? Should teams follow their many dependencies
 before updating? Maybe that's the part I am missing...
See, this is the fundamental problem I have with today's philosophy of "put it all in `the cloud', that's the hip thing to do". I do *not* trust that code from some external server somewhere out there isn't going to just vanish into the ether suddenly, or keel over and die the day after, or just plain get hacked (very common these days) and have trojan code inserted into the resource I depend on. Or the server just plain becomes unreachable because I'm on the road, or my ISP is acting up (again), or the network it's on got sanctioned overnight and now I'm breaking the law just by downloading it.

I also do *not* trust that upstream isn't going to commit some incompatible change that will fundamentally break my code in ways that are very costly to fix. I mean, they have every right to do so; why should they stop just because some anonymous user out here depended on their code?

I want to *manually* verify every upgrade to make sure that it hasn't broken anything, before I commit to the next version of the dependency. AND I want to keep a copy of the last working version on my *local harddrive* until I'm 100% sure I don't need it anymore. I do NOT trust some automated package manager to do this for me correctly (I mean, software can't possibly ever fail, right?).

And you know what? I've worked with some projects that have lasted for over a decade or two, and on that time scale, the oft-touted advantages of code reuse have taken on a whole new perspective that people these days don't often think about. I've seen several times how, as time goes on, external dependencies become more of a liability than an asset. In the short term, yeah, it lets you get off the ground faster, saves you the effort of reinventing the wheel, blah blah blah. In the long term, however, these advantages don't seem so advantageous anymore:

- You don't *really* understand the code you depend on, which means if upstream moves in an incompatible direction, or just plain abandons the project (the older the project the more likely this happens), you would not have the know-how to replicate the original functionality required by your own code.

- Sometimes the upstream breakage is a subtle one -- it works most of the time, but in this one setting with this one particular customer the behaviour changed. Now your customer is angry and you don't have the know-how to fix it (and upstream isn't going to do it 'cos the old behaviour was a bug).

- You may end up with an irreplaceable dependency on abandoned old code, but since it isn't your code you don't have the know-how to maintain it (e.g., fix bugs, security holes, etc.). This can mean you're stuck with a security flaw that will be very expensive to fix.

- Upstream may not have broken anything, but the performance characteristics may have changed (for the worse). I'm not making this up -- I've seen an actual project where compiling with the newer library causes a 2x reduction in runtime performance. Many months after, it was somewhat improved, but still inferior to the original *unoptimized* library. And complaining upstream didn't help -- they insisted their code wasn't intended to be used this way, so the performance issues are the user's fault, not theirs.

- Upstream licensing terms may change, leaving you stuck up the creek without a paddle.
Writing the code yourself may have required more up-front investment (and provoked the ire of the Code Reuse police), but you have the advantage that you own the code, have a copy of it always available, won't have licensing troubles, and understand the code well enough to maintain it over the long term. You become independent of network availability, immune to outages and unwanted breaking changes. The code reuse emperor has no clothes, but his cronies brand me as heretic scum worthy only to be spat out like a gnat. Such is life. ;-)
 Thanks for listening... Boo hoo... Why am I like this? :)
[...] 'cos you're the smart one. ;-) Most people don't even think about these issues, and then years later it comes back and bites them in the behind.

T

-- 
It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton
Mar 18 2022
next sibling parent mee6 <mee6 lookat.me> writes:
On Friday, 18 March 2022 at 21:04:03 UTC, H. S. Teoh wrote:
 Review the code you say? Ha! If you were that diligent, you'd 
 have written the code yourself in the first place.  Not likely.
That logic doesn't make sense. Reading code takes way less time than writing good code, especially for larger projects.
Mar 19 2022
prev sibling parent reply Alexandru Ermicioi <alexandru.ermicioi gmail.com> writes:
On Friday, 18 March 2022 at 21:04:03 UTC, H. S. Teoh wrote:
 On Fri, Mar 18, 2022 at 11:16:51AM -0700, Ali Çehreli via 
 Digitalmars-d-learn wrote:
 tldr; I am talking on a soap box with a big question mind 
 hovering over on my head: Why can't I accept pulling in 
 dependencies automatically?
Because it's a bad idea for your code to depend on some external resource owned by some anonymous personality somewhere out there on the 'Net that isn't under your control.
True, and because of that you can try to have a local/company-wide dub registry (if that isn't supported, support for it should be added), in which packages are verified by you/your company, eliminating the problem of the net not being under your control.

Best regards,
Alexandru.
Mar 21 2022
parent Tobias Pankrath <tobias pankrath.net> writes:
On Monday, 21 March 2022 at 10:29:53 UTC, Alexandru Ermicioi 
wrote:
 On Friday, 18 March 2022 at 21:04:03 UTC, H. S. Teoh wrote:
 On Fri, Mar 18, 2022 at 11:16:51AM -0700, Ali Çehreli via 
 Digitalmars-d-learn wrote:
 tldr; I am talking on a soap box with a big question mind 
 hovering over on my head: Why can't I accept pulling in 
 dependencies automatically?
Because it's a bad idea for your code to depend on some external resource owned by some anonymous personality somewhere out there on the 'Net that isn't under your control.
True, and because of that you can try and have local/company wide dub registry (if not, should be added support for), in which packages are verified by you/your company, eliminating the problem of net not being under control. Best regards, Alexandru.
That's actually possible right now; in the easiest case you can have a directory of package zip files. Isn't that best practice in all language ecosystems?
Mar 21 2022
prev sibling next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 18 March 2022 at 18:16:51 UTC, Ali Çehreli wrote:
 As a long-time part of the D community, I am ashamed to admit 
 that I don't use dub. I am ashamed because there is no 
 particular reason, or my reasons may not be rational.
dub is legitimately awful. I only use it when forced to, and even making my libs available for others to use through it is quite an unnecessary hassle due to its bad design.
 That sounds great but aren't there common needs of those 
 modules to share code from common modules?
Some. My policy is:

1) Factor out shared things when I *must*, not just because I can.

So if it just coincidentally happens to be the same code, I'd actually rather copy/paste it than import it. Having a private copy can be a hassle - if a bug fix applies to both, I need to copy/paste it again - but it also has two major benefits: it keeps the build simple and it keeps the private implementation actually private. This means I'm not tempted to complicate the interface to support two slightly different use cases if the need arises; I have the freedom to edit it to customize for one use without worrying about breaking it for another.

When I must factor something out, it is usually because it is part of a shared public *interface* rather than an implementation detail. A shared interface happens when interoperability is required. The biggest example in my libs is the Color and MemoryImage objects, which are loaded from independent image format modules and can then be loaded into independent screen drawing or editing modules. Loading an image and then being unable to display it without a type conversion* would be a bit silly, hence the shared type.

* Of course, sometimes you convert anyway. With .tupleof or getAsBytes or something, you can do agnostic conversions, but it is sometimes nice to just have `class SpecialImage { this(GenericImage) { } }` to do the conversions, and that's where a shared third module comes in, so they can both `import genericimage;`.

2) Once I do decide to share something, there's a policy of tiers.

The first tier has zero imports (exceptions made for druntime and SOMETIMES phobos, but I've been strict about phobos lately too). These try to be the majority of them, providing interop components and some encapsulated basic functionality. They can import other things, but only if the user actually uses the feature. For example, dom.d has zero imports for basic functions. But if you ask it to load a non-utf8 file, or a file from the web, it will import arsd.characterencodings and/or arsd.http2 on-demand (a sketch of this trick follows after this list). Basic functionality must just work; it allows those opt-in extensions though.

The second tier has generally just one import, and it must be from the first tier or maybe a common C library. I make some exceptions to add an interop interface module too, but I really try to keep it to just one. These build on the interop components to provide some advanced functionality. This is where my `script.d` comes in, for example, extending `jsvar.d`'s basic functionality with a dynamic script capability. I also consider `minigui.d` to be here, since it extends simpledisplay's basic drawing with a higher-level representation of widgets and controls, though since simpledisplay itself imports color.d now (it didn't when I first wrote them... making that change was something I really didn't want to do, but was forced to by practical considerations), minigui does have two imports... but still, I'm leaving it there.

Then, finally, there's the third tier, which I call the application/framework tier, which is the rarest one in my released code (but most common in my proprietary code, where I just `dmd -i` it and use whatever is convenient). At this point, I'll actually pull in whatever I want (from the arsd package, that is), so there is no limit on the number of imports. I still tend to minimize them, but won't take extraordinary effort. This is quite rare for me to do in a library module since it locks the module out of use by any other library module!

Obviously, no tier one or two files can import a tier three, so if I want to actually reuse anything in there, it must be factored back out to independence first. C libraries, btw, are themselves also imports, so I minimize them too, but there's again some grey area: postgres.d uses database.d as its shared interface, but libpq as its implementation. I still consider it tier two, despite a C library being even harder for the user to set up than 50 arsd modules.

3) I try to minimize and batch breaking changes, including breaks to the build instructions.

When I changed simpledisplay to import color, it kinda bugged me, since for a few years at that point I had told people they can just download it off my website and go. I AM considering changing this policy slightly and moving more to tier two, so it is the majority instead of tier one. All my new instructions say "dmd -i" instead of "download the file", but I'm still really iffy on whether it is worth it. Merging the Uri structs and the event loops sounds nice, and having the containers and exception helpers factored out would bring some real benefits, but I've had this zero-or-one import policy for so long that making it one-or-two seems too far. But I'll decide next year when the next breaking change release is scheduled. (Btw, my last breaking change release was last summer and it actually broke almost nothing, since I was able to give it a migration path, to considerable joy. I'm guessing most of my users never even noticed it happened.)
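To illustrate the on-demand import trick from point 2: in D, a template body is only semantically analyzed when it is instantiated, so an import placed inside a function template is only required if somebody actually calls that function. A minimal sketch with made-up module and function names (this is not arsd's actual API):

    module mylib;                 // hypothetical tier-one module

    import std.stdio;             // basic functionality needs only druntime/phobos

    /// Always available, no extra dependencies.
    void basic()
    {
        writeln("works with zero outside imports");
    }

    /// The empty template parameter list () makes this a template, so the
    /// import below is only resolved when fancy() is actually instantiated.
    /// "heavy.codec" and transcode() are made-up names for illustration.
    void fancy()(string s)
    {
        import heavy.codec : transcode;
        writeln(transcode(s));
    }

Users who never call fancy() can compile mylib without having heavy.codec anywhere on their import path; only callers of the opt-in feature pay for the dependency.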
 Despite such risks many projects just pull in code. (?) What am 
 I missing?
It is amazing how many pretty silly things are accepted as gospel.
Mar 20 2022
parent reply =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 3/20/22 05:27, Adam D Ruppe wrote:

 So if it just coincidentally happens to be the same code, I'd actually
 rather copy/paste it than import it.
This is very interesting because it is so much against common guidelines. I first read about such copy/paste in a book (my guess is John Lakos's Large Scale C++ Software Design book, because my next example below is from that book). The author was saying exactly the same thing: Yes, copy/paste is bad, but dependencies are bad as well.

I was surprised by John Lakos's decision to use external include guards. In addition to the following common include guard idiom:

  // This is foo.h
  #ifndef INCLUDED_FOO_H_
  #define INCLUDED_FOO_H_
  // ...
  #endif  // INCLUDED_FOO_H_

he would do the same in the including modules as well:

  // This is bar.c
  #ifndef INCLUDED_FOO_H_
  #include "foo.h"
  #endif
  // ...

Such a crazy idea, and it is completely against the DRY principle! However, according to his measurements on the hardware and file systems of that time, he was saving a lot of build time. (The file system's reading the file many times just to determine that it had already been included was too expensive. Instead, he was determining that from the include guard macro himself.)

Those were the first examples that taught me it was possible to go against common guidelines. I admire people like you and John Lakos who don't follow guidelines blindly. I started to realize the power of engineering very late. Engineering almost by definition should break guidelines.

Ali
Mar 20 2022
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 20, 2022 at 07:51:38AM -0700, Ali Çehreli via Digitalmars-d-learn
wrote:
 On 3/20/22 05:27, Adam D Ruppe wrote:
 
 So if it just coincidentally happens to be the same code, I'd
 actually rather copy/paste it than import it.
This is very interesting because it is so much against common guidelines. I first read about such copy/paste in a book (my guess is John Lakos's Large Scale C++ Software Design book because my next example below is from that book.) The author was saying exactly the same thing: Yes, copy/paste is bad but dependencies are bad as well.
Yeah, IME it goes both ways. I've encountered situations of what I might call "mindless cut-n-paste", in which a utility function clearly shared across multiple modules was mindlessly copied across 8-12 different files, bugs and all. Several subsequent bugfixes repaired most, but not all, of each copy, each bugfix touching a different set of functions, ending up with multiple mutations of the same original function, with several different combinations of remaining bugs. The icing on top was that there was *already* a common utility module that contained commonly-used functions, but for some reason nobody bothered to move the functions there. (And the icing on top of the icing was that at some subsequent point somebody else came along and reinvented the same function under a different name with a slightly different API, thus making an already bad problem worse.)

On the other side of the spectrum, though, I've also seen code that needed some function F defined in some library L, that function being the *only* thing needed from L, yet because of the dogma of code reuse, L was linked in its entirety, recursive dependencies and all, causing needless code bloatage and build complexity. Perhaps a copy-n-paste would've been a better solution in this situation.

[...]
 Engineering almost by definition should break guidelines.
[...] This one's going into my quotes file. ;-)

T

-- 
Hey, anyone can ignore me and go ahead and do it that way. I wish you the best of luck -- sometimes us old coots are dead wrong -- but forgive me if I'm not going to be terribly sympathetic if you ignore my advice and things go badly! -- Walter Bright
Mar 21 2022
prev sibling parent reply IGotD- <nise nise.com> writes:
On Friday, 18 March 2022 at 18:16:51 UTC, Ali Çehreli wrote:
 The first time I learned about pulling in dependencies 
 terrified me. (This is the part I realize I am very different 
 from most other programmers.) I am still terrified that my 
 dependency system will pull in a tree of code that I have no 
 idea doing. Has it been modified to be malicious overnight? I 
 thought it was possible. The following story is an example of 
 what I was exactly terrified about:


 https://medium.com/hackernoon/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5

 Despite such risks many projects just pull in code. (?) What am 
 I missing?
This is an interesting observation and something of an oddity in modern SW engineering. I have been on several projects where they just download versions of libraries from some random server. For personal projects I guess this would be OK, but for commercial software this would be a big no-no for me. Still, the trend goes towards this.

Now, several build systems and package managers have the option of changing the server to a local one. Changing to a local one is unusual though, which is strange. First, as you mentioned, you increase your vulnerability through the possibility of someone injecting a modified version of a library with back doors. Then you also become dependent on outside servers, which is bad if they are down.

In all, for commercial software just avoid dub. If you want to use a build system, go for Meson as it has D support out of the box today. For commercial projects pull libraries manually, as you want to have full control over where you get them, the version and so on.
Mar 22 2022
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Mar 22, 2022 at 05:36:13PM +0000, IGotD- via Digitalmars-d-learn wrote:
 On Friday, 18 March 2022 at 18:16:51 UTC, Ali Çehreli wrote:
 
 The first time I learned about pulling in dependencies terrified me.
[...]
 https://medium.com/hackernoon/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5
 
 Despite such risks many projects just pull in code. (?) What am I
 missing?
 
This is an interesting observation and something of an oddity in modern SW engineering. I have been on several projects where they just download versions of libraries from some random server. For personal projects I guess this would be ok but for commercial software this would be a big no-no for me. Still the trend goes towards this. Now, several build systems and packet manager software have the possibility to change the server to a local one. Changing to local one is unusual though which is strange.
[...] To be fair, even though I'm clearly on the side of not depending on external resources, there are various reasons why one might prefer to go the route of dub / the modern trend of package managers that depend on external resources.

- It alleviates the tedium of having to manually maintain local archives of 3rd party packages. When the code is needed, it gets downloaded from the upstream servers. The package manager (dub in this case) manages the local cache for you.

- You get updates automatically. If there's a critical security fix, for example, you'll get it upon the next build; you don't even have to be aware of the existence of the security flaw and its fix to reap the benefits. When a new feature is made available upstream, you don't have to manually download the latest version to reap the benefits, you get it automatically upon the next retrieval of the package.

- It's very convenient: you don't have to know where the upstream servers are, how to download the code, or where to store it -- the package manager handles all of that for you. You just specify which packages you want, and it takes it from there.

Of course, as with programming projects in general, convenience often comes at a price. The security flaws that crop up, for example, which, in today's threat landscape, are much more frequent and important than a decade ago, and worthy of some very serious consideration. While automatic downloads do get you "automatic" security fixes, they also introduce potential security holes (trojan attacks, MITM attacks, etc.).

Also, there are the long-term consequences of convenience. A lot of the benefits of external dependencies are short-term benefits; over the long term, they get outweighed by long-term maintenance issues, like the ones I mentioned in my other post (long-term compatibility, breaking changes, availability, etc.).

T

-- 
Only boring people get bored. -- JM
Mar 22 2022
prev sibling next sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:
 Dlang includes some good ideas.
 But dub pulls in so much stuff. Too much for me.
 I like things which are clean,lean,little,small.
 But when i use dub it links with so many libraries.
 Are they really needed ?
 And how do you compare to pythons pip.
 Feel free to elaborate.
DUB changed my programming practice. To understand why DUB is needed, I think it's helpful to see the full picture, at the level of your total work, in particular recurring costs. My small software shop operation (sorry) is built on DUB, and if I analyze my own package usage, there are 4 broad categories:

- Set A. Proprietary code => **8 packages, 30.4** kloc
- Set B. Open source that I wrote, maintain, and evolve => **33 packages, 88.6** kloc
- Set C. Open source that I maintain minimally and only wrote in part => **5 packages, 59.1** kloc
- Set D. Foreign packages (I neither wrote nor maintain them; stuff like arsd) => **14 packages, 45.9** kloc

=> Total = **224 kloc**, only counting non-whitespace lines here. This is only the code that needs to be kept alive and maintained. Obviously code that is more R&D and/or temporary bears no recurring cost.

Visually:

   Set A: ooo         30.4 (proprietary)
   Set B: ooooooooo   88.6 (open-source)
   Set C: oooooo      59.1 (open-source)
   Set D: oooo        45.9 (open-source)
   --------------------------------------
   Total: oooooooooooooooooooooo

At a very minimum, all code in A + B + C + D needs to build with the D compiler, since the business uses it, and build at all times. Maintaining the "it builds" invariant takes a fixed cost m(A) + m(B) + m(C) + m(D). Here m(D) is borne by someone else. As B and C are open-source and maintained by me, the cost of building B and C for someone else is zero; that's why an ecosystem is so important for a language, as a removal of recurring expense. And indeed, the open-source ecosystem is probably the main driver of language adoption, as a pure capital gain.

Now consider the cost of evolving and bug fixing instead of just building. => This is about the same reasoning, with perhaps bug costs being less transferrable. Reuse delivers handsomely, and is cited by The Economics of Software Quality as one of the best drivers of increased quality [1]. Code you don't control, but trust, is a driver of increased quality (and, as the book demonstrates, of lowered costs/defects/litigation).

Without DUB, for maintaining the invariant "it builds with the latest compiler" you'd have to pay m(A) + m(B) + m(C), but then do another important task: => copy each newly updated source into the dependent projects. Unfortunately this isn't trivial at all; that code is now duplicated in several places. Realistically you will do this on an as-needed basis. And then other people can rely on none of your code (it doesn't build, statistically), and much less of an ecosystem becomes possible (because nothing builds and older versions of files are everywhere).

Without using DUB, you can't have a large set of code that maintains this or that invariant, and you have to fall back to an attentional model where only the last thing you worked on is up-to-date. DUB also makes it easy to put your code into the B and C categories, which provides value for everyone. Without DUB you won't have, say, VisualD projects, because the cost of maintaining the invariant "has a working VisualD project" would be too high; but with DUB, because it's declarative, it's almost free.

[1] "The Economics of Software Quality" - Jones, Bonsignour, Subramanyam
Mar 19 2022
prev sibling next sibling parent reply Dadoum <contact dadoum.ml> writes:
On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:
 Dlang includes some good ideas.
 But dub pulls in so much stuff. Too much for me.
 I like things which are clean,lean,little,small.
 But when i use dub it links with so many libraries.
 Are they really needed ?
 And how do you compare to pythons pip.
 Feel free to elaborate.
Personally I use CMake; it allows me to access C and C++ libraries while still being able to use small dub libraries. Also, everyone knows how to build a project with CMake nowadays.
Mar 21 2022
parent reply Tobias Pankrath <tobias pankrath.net> writes:
On Monday, 21 March 2022 at 09:25:56 UTC, Dadoum wrote:
 On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:
 Dlang includes some good ideas.
 But dub pulls in so much stuff. Too much for me.
 I like things which are clean,lean,little,small.
 But when i use dub it links with so many libraries.
 Are they really needed ?
 And how do you compare to pythons pip.
 Feel free to elaborate.
Personally I use CMake, it allows me to access to C and C++ libraries while still being able to use small Dub libraries. Also everyone knows how to build a project with CMake nowadays.
I do the same with meson, and I wish dub were easier to integrate into third-party build systems.

The first problem with dub is that it doesn't really let you decide where it puts stuff. There is `--cache`, but that doesn't accept a path either, and it does not guarantee that afterwards everything you need is there. For example, if you `dub fetch A --cache=local` and some dependencies of A are already under $HOME/.dub, you won't have them locally afterwards. There is a workaround for this though: `HOME=. dub fetch A`.

A second problem is that `dub describe` returns paths to the package directories, not to the actual build directories, thus you can only use one compiler and the last `dub build` wins. There are custom build directories per compiler, though, and half of the code to integrate dub packages in meson is there to find and use the correct build directory (instead of just calling `dub describe`).

This would be much easier if there were a `dub provide` (or whatever) that builds all deps for a project, installs them into a given prefix/path and makes them usable from `dub describe` afterwards, so that dub describe works more or less like pkg-config.
Mar 21 2022
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 21/03/2022 11:19 PM, Tobias Pankrath wrote:
 This would be much easier, if there were a `dub provide` (or whatever) 
 that builds all deps for a project, installs them into a given 
 prefix/path and makes them usable from `dub describe` afterwards, so 
 that dub describe works more or less like pkg-config afterwards.
Sounds like a great addition if somebody were to add it.
Mar 21 2022
prev sibling next sibling parent Marcone <marcone email.com> writes:
The DMD compiler could import the modules directly with the import statement, just like it does with the modules in the Phobos library.
Mar 21 2022
prev sibling next sibling parent AnimusPEXUS <animuspexus protonmail.com> writes:
On Friday, 18 March 2022 at 04:13:36 UTC, Alain De Vos wrote:
 Dlang includes some good ideas.
 But dub pulls in so much stuff. Too much for me.
 I like things which are clean,lean,little,small.
 But when i use dub it links with so many libraries.
 Are they really needed ?
 And how do you compare to pythons pip.
 Feel free to elaborate.
I came to Dlang from Golang. I didn't like Dub at the beginning either. But now I think Dub is OK. A matter of habit.
Mar 21 2022
prev sibling parent reply Marcone <marcone email.com> writes:
Why is dmd unable to import modules installed by dub using the import statement, like it does with the Phobos library? Can't it send these modules to the linker? They need to be passed to dmd via the command line. I think it could all be automatic.
Mar 22 2022
next sibling parent Mike Parker <aldacron gmail.com> writes:
On Tuesday, 22 March 2022 at 14:44:59 UTC, Marcone wrote:
 Why is dmd unable to import modules installed by dub using the 
 import command like it does with the Phobos library? He can't 
 send these modules to Linker? Needing to be passed to dmd via 
 command line. I think it could be all automatic.
`import` itself has nothing to do with the linker, and it is not the reason Phobos is automatically linked. That happens because the compiler *always* passes Phobos to the linker, whether you import anything from it or not (though not in BetterC).

When you import a module from a library, the compiler is only using the import to understand which symbols are available to the module it's currently compiling. It does not attempt to compile the imported modules and doesn't have enough information to pass anything to the linker. It has no idea if an imported module is already compiled, or will be compiled later, and whether it will be passed to the linker as an object file or part of a static library, or if it won't go to the linker at all because it's in a dynamic library.

The only time `import` has any impact on the linker is when you pass `-i` to the compiler, in which case it will attempt to compile the modules it imports (excluding Phobos) and, in turn, pass the compiled object to the linker. But that's going to compile everything every time.

Because the compiler supports separate compilation and linking, it can't make any assumptions about what you actually want to compile and link. That's what build tools like dub and reggae are for.
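A small sketch of the separate-compilation model being described, with made-up module names; the commands in the comments are plain dmd invocations:

    // mylib/util.d
    module mylib.util;

    import std.stdio;

    void greet() { writeln("hello from mylib"); }

    // app.d
    import mylib.util;   // only tells the compiler which symbols exist

    void main() { greet(); }

    // Separate compilation and linking: the import alone sends nothing
    // to the linker; you pass the object (or library) yourself:
    //   dmd -c mylib/util.d      (produces util.o / util.obj)
    //   dmd app.d util.o
    // Or ask the compiler to compile imported modules as well:
    //   dmd -i app.d

dub's job is essentially to automate that second half for every dependency in the graph.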
Mar 22 2022
prev sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Tuesday, 22 March 2022 at 14:44:59 UTC, Marcone wrote:
 Why is dmd unable to import modules installed by dub using the 
 import command like it does with the Phobos library? He can't 
 send these modules to Linker? Needing to be passed to dmd via 
 command line. I think it could be all automatic.
dmd CAN do that. dub CHOOSES not to use this capability. This is a big reason why I hate dub so much. It doesn't use D's strengths at all.
Mar 22 2022