
digitalmars.D - Answers needed from those using D for Web Development, Web APIs and Services

aberba <karabutaworld gmail.com> writes:
I'm going to do a writeup on the state of D in Web Development, 
APIs and Services for 2017. I need the perspective of the 
community too along with my personal experience. Please help out. 
More details the better.

0. Since when did you or company start using D in this area?

1. Do you use a framework? Which one?

2. Why that approach, and what would you have done otherwise?

3. Which task exactly do you use D to accomplish?

4. Which (dub) packages do you use and for what purpose?

5. How do you host your software code (cloud platforms, VPS, PaaS, Docker, OpenShift, Kubernetes, etc.)?

6. What are some constraints and problems in using D for such 
tasks?

7. What solutions do you recommend?
Dec 15 2017
crimaniak <crimaniak gmail.com> writes:
On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
 I'm going to do a writeup on the state of D in Web Development, 
 APIs and Services for 2017. I need the perspective of the 
 community too along with my personal experience. Please help 
 out. More details the better.
I think some questions are already answered in this survey: https://forum.dlang.org/thread/hrtakvaqrhvayeidqxbb forum.dlang.org I wonder, is it possible to filter the Google Forms results to see only responses with relevant items in the 'primary and secondary area of development' questions?
Dec 18 2017
WebFreak001 <d.forum webfreak.org> writes:
On Tuesday, 19 December 2017 at 06:58:06 UTC, crimaniak wrote:
 On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
 I'm going to do a writeup on the state of D in Web 
 Development, APIs and Services for 2017. I need the 
 perspective of the community too along with my personal 
 experience. Please help out. More details the better.
I think some questions are already answered in this survey: https://forum.dlang.org/thread/hrtakvaqrhvayeidqxbb forum.dlang.org I wonder, is it possible to filter the Google Forms results to see only responses with relevant items in the 'primary and secondary area of development' questions?
Yes, I can see each response and export them to CSV. I wanted to make a summary of the results, but I don't really know where to start.
Dec 21 2017
crimaniak <crimaniak gmail.com> writes:
On Thursday, 21 December 2017 at 19:58:44 UTC, WebFreak001 wrote:
 Yes, I can see each response and export them to CSV. I wanted
I think if you can make a CSV with only the answers related to web development, it will be data relevant to what aberba wants.
 to make a summary of the results, but I don't really know where to start
As for me, the main summary of the results is already on the /viewanalytics page: "D User Survey, 167 answers". D is even more marginal than I expected.

--
Narrow is their circle; terribly far are they from the people.
Dec 21 2017
Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
 0. Since when did you or company start using D in this area?
I used D for web work from 2009 until about 2013, then changed jobs and didn't get back into using D until this year.
 1. Do you use a framework? Which one?
my own web.d
 2. Why that approach and what would have done otherwise?
Libraries suck. I avoid them unless they are ubiquitous. Using traditional CGI meant my code could go behind a well-tested production server.
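For reference, the hello-world shape of a cgi.d program looks roughly like this (a minimal sketch; the handler name is made up, but Cgi and GenericMain are arsd cgi.d's actual entry points):

    import arsd.cgi;

    // One function handles the request; the same code can be deployed
    // as a plain CGI binary behind a production web server.
    void hello(Cgi cgi)
    {
        cgi.setResponseContentType("text/plain");
        cgi.write("Hello, world!");
    }

    mixin GenericMain!hello;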
 3. Which task exactly do you use D to accomplish?
everything server-side: JSON APIs as well as HTML pages, form handling, and websockets.
 4. Which (dub) packages do you use and for what purpose?
none
 5. How do you host your software code (cloud platforms,  vps,  
 PaaS, docker,  Openshift, kubernetes, etc)?
on a Linux server.
 6. What are some constraints and problems in using D for such 
 tasks?
Nothing serious. Compile time got a little slow (~14 seconds at one point) but some refactoring improved it.
 7. What solutions do you recommend?
keep it simple to get work done.
Dec 21 2017
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
 On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
 0. Since when did you or company start using D in this area?
Unfortunately I'm presently not able to use D at my job. [...]
 1. Do you use a framework? Which one?
I've used Adam Ruppe's cgi.d for a bit, but recently switched to vibe.d. [...]
 2. Why that approach and what would have done otherwise?
I actually rather like cgi.d for its simplicity. Vibe.d is pretty cool and pretty powerful, but it comes with a complex set of dependencies that are a pain to manage. Yes, I know dub does it "automatically", but the problem with dub is that it tries to do too much -- it wants to be a build system in addition to being a packaging system. The latter is OK, I guess, even though I really wish it were more configurable in terms of how it manages local repository caches. But as a build system, I'm sorry to say that dub sucks. Or at least, its docs suck, 'cos I can't figure out how to make it do what I want. After struggling with it for about a week or two, I threw in the towel and went back to SCons. Nowadays I only use dub for updating vibe.d via a dummy blank project.

The main reason I went to vibe.d was its HTTPS support, which cgi.d didn't have. That, and also it was supposed to be the flagship web platform for D, so I figured I should at least give it a try. It does have some nice perks like less boilerplate for handling HTTP requests, I suppose, and Diet templates are kinda cool, though also kinda clunky in certain details. The ubiquitous use of classes instead of structs rubbed me the wrong way somewhat, but I can live with it.

Built-in support for databases was nice, but I didn't end up using it: I needed a persistent database that's always consistent on-disk, so Redis was out, and I really don't like the idea of needing a separate database server just to run Mongo (managing a separate dependent service is way too much needless complexity for what I'm doing). So I went back to Adam Ruppe's sqlite.d instead.

Overall, vibe.d is not bad. The docs could use improvement -- I struggled to find what I wanted for quite a few things, which wasted a lot of time. A lot of the frustration came from the docs being unclear about whether something was possible or not. Beyond the most basic examples, many things were implicit, or just plain not stated, leading to me spending far too much time trying to figure out whether module X supported feature Y, or whether I should look elsewhere or write my own. I'm OK with writing a feature myself, but it's frustrating when I don't even know whether I need to. Other than that, though, vibe.d performance is pretty good and it does what I need, with a number of nice syntactic shortcuts to reduce boilerplate. So overall, I'm relatively happy with it. [...]
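For comparison, a minimal vibe.d server of that era looks something like this (a sketch, so treat the details as assumptions; listenHTTP, HTTPServerSettings, and runApplication are vibe.d's real API):

    import vibe.vibe;

    void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
    {
        // vibe.d hands you a parsed request and an asynchronous
        // response writer; no CGI process per request.
        res.writeBody("Hello from vibe.d!", "text/plain");
    }

    void main()
    {
        auto settings = new HTTPServerSettings;
        settings.port = 8080;
        listenHTTP(settings, &handleRequest);
        runApplication();
    }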
 3. Which task exactly do you use D to accomplish?
Just serving dynamic webpages server-side; basically as a PHP replacement. [...]
 4. Which (dub) packages do you use and for what purpose?
Just vibe.d and its dependencies. I struggled to make dub do what I want, so in general I avoided using it, sad to say. Currently the only thing I use dub for is to update vibe.d, via a dummy empty package that declares a dependency on vibe.d. Once vibe.d is built, I link my code to its static libraries manually from SCons.
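The dummy-package trick amounts to an empty project whose manifest pins the library; a sketch of such a dub.json (the package name and version constraint here are made-up placeholders):

    {
        "name": "deps-only",
        "description": "Empty project whose only job is to make dub fetch and build vibe.d",
        "dependencies": {
            "vibe-d": "~>0.8.1"
        }
    }

Running dub inside that directory fetches and compiles vibe.d; the resulting static libraries can then be linked manually from SCons.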
 5. How do you host your software code (cloud platforms,  vps,  PaaS,
 docker,  Openshift, kubernetes, etc)?
On a Linux server.
 6. What are some constraints and problems in using D for such tasks?
Constraints? Problems? D has none. :-D Well OK, let's just say D fits the way I work very well, and has powerful tools for reducing boilerplate, so I find working in D highly productive. If there's any real issue with using D overall, it's in struggling to make dub do what I want. I still haven't figured out whether that's due to an inherent limitation in dub, or the docs are just that bad. [...]
 7. What solutions do you recommend?
[...] If you're starting from scratch, vibe.d + dub isn't a bad approach. My main issue with dub is in trying to integrate it with an existing codebase and with my style of working. It's somewhat Windows-like in that respect: if you follow its workflow style, everything Just Works(tm). But if you want to customize stuff like I do, be prepared for a less pleasant time that may involve doing lots of stuff on your own. Thankfully, doing stuff on your own in D is less painful than in other languages I've programmed in, so even if you end up needing to do that, it's workable.

T

--
Frank disagreement binds closer than feigned agreement.
Dec 22 2017
Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 22 December 2017 at 17:42:57 UTC, H. S. Teoh wrote:
 The main reason I went to vibe.d was because of HTTPS support, 
 that cgi.d didn't have.
You shouldn't be running a homemade HTTP server in public. The way you should do it is to put the application behind a real web server (via CGI mode, or HTTP through a reverse proxy, both of which cgi.d fully supports), which is then responsible for the encryption.
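If I remember cgi.d's documentation correctly, the deployment mode is selected at compile time with version switches, along these lines (treat the exact identifiers as assumptions):

    dmd yourapp.d cgi.d                          # plain CGI binary (the default)
    dmd -version=fastcgi yourapp.d cgi.d         # FastCGI behind Apache/nginx
    dmd -version=scgi yourapp.d cgi.d            # SCGI behind a reverse proxy
    dmd -version=embedded_httpd yourapp.d cgi.d  # standalone HTTP, to be proxied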
Dec 22 2017
Russel Winder <russel winder.org.uk> writes:
On Fri, 2017-12-22 at 09:42 -0800, H. S. Teoh via Digitalmars-d wrote:
[…]

 that are a pain to manage. Yes, I know dub does it "automatically", but the problem with dub is that it tries to do too much -- it wants to be a build system in addition to being a packaging system. The latter is OK, I guess, even though I really wish it were more configurable in terms of how it manages local repository caches. But as a build system, I'm sorry to say that dub sucks. Or at least, its docs suck, 'cos I can't figure out how to make it do what I want. After struggling with it for about a week or two, I threw in the towel and went back to SCons. Nowadays I only use dub for updating vibe.d via a dummy blank project.

[…]

Just to reiterate, SCons D support now has a ProgramAllAtOnce builder for those that want to use Unit-Threaded in their D codebases using SCons.

Also I have the beginnings of a Dub SCons tool for using Dub as a package manager in a SCons build. Currently it does what I need, so until there are users requesting extensions and bug fixes it is "job done", though I may try to get enough tests to go for a pull request of the currently separate tool into the SCons distribution.

https://github.com/russel/SCons_D_Experiment

--
Russel.
=============================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Dec 22 2017
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Dec 22, 2017 at 06:21:04PM +0000, Russel Winder wrote:
 On Fri, 2017-12-22 at 09:42 -0800, H. S. Teoh via Digitalmars-d wrote:
 […]
 that are a pain to manage. Yes I know dub does it "automatically",
 but the problem with dub is that it tries to do too much -- it wants
 to be a build system in addition to being a packaging system. The
 former is OK, I guess, even though I really wish it was more
 configurable in terms of how it manages local repository caches. But
 as a build system, I'm sorry to say that dub sucks. Or at least, its
 docs suck, 'cos I can't figure out how to make it do what I want.
 After struggling with it for about a week or two, I threw in the
 towel and went back to SCons.  Nowadays I only use dub for updating
 vibe.d via a dummy blank project.
 
[…] Just to reiterate, SCons D support now has a ProgramAllAtOnce builder for those that want to use Unit-Threaded in their D codebases using SCons.
For D projects, I've been finding that Command() has been the best tool for me in terms of configuring exactly how I want things built. I used to use (early versions of) your SCons D build tools (and thanks for that!), but ultimately went back to Command() because I found it very frustrating to have my builds break because of an incompatible change in the D tooling whenever I upgrade SCons. So until the SCons D tooling API has stabilized, I'll probably hold off for the time being.

Also, for vibe.d projects, I've been finding the need to write my own scanner in order to pick up Diet template (*.dt) dependencies, so that builds would trigger correctly when Diet templates are changed. There is no standard way to do this, unfortunately; so far I've been scanning for `render!(.*)` lines, but this doesn't always work if `render` is instantiated with parameters generated from CTFE. Manual hardcoding has been necessary to get this part of my dependency tree to work.

In the long term, I think an approach similar to tup will have to be adopted. O(n) dependency scanning just doesn't cut it anymore for code the size of today's large software projects. And with dynamic dependencies (e.g. CTFE-dependent imports) that are bound to happen in D code with heavy metaprogramming, there's really no sane way to manage dependencies explicitly; you really need to just instrument the compiler and record all input files it reads the way tup does. I shouldn't need to write custom scanners just to accommodate CTFE-generated imports that may change again after a few more commits. It's SSOT (single source of truth) all over again: the compiler is the ultimate authority that determines which file depends on what, and having to repeat this information in your build script (or independently derive it via scanners) introduces fragility / incompleteness into your build system.
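For concreteness, this is the sort of call such a scanner has to find -- a minimal vibe.d handler sketch, where the handler and template names are made up but render is vibe.d's real compile-time Diet interface:

    import vibe.vibe;

    // The Diet template name is a compile-time argument, so home.dt is
    // a genuine build dependency even though no build tool can see it
    // without scanning the source for render!(...) lines.
    void getHome(HTTPServerRequest req, HTTPServerResponse res)
    {
        res.render!("home.dt", req);
    }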
 Also I have the beginnings of a Dub SCons tool for using Dub as a
 package manager in a SCons build.
[...] That's nice, though for now, I'm sticking with manually updating my dependencies when needed. One thing I found annoying with dub was the sheer amount of time it spent at startup to scan all dependencies and packages and possibly download a whole bunch of stuff. The network latency really kills the compile-test-debug cycle time. I know there's a switch to suppress this behaviour, but the initial dependency scanning is still pretty slow even in spite of that. When a 1-line change requires waiting 15-20 seconds just to recompile, that really breaks my workflow.

Plus, sometimes I *don't* want anything updated -- when debugging a program, the last thing I want is for dub or the build script or whatever to decide to link in a slightly different version of a library, and suddenly I'm no longer sure if the new crash is caused by the library or my own code, or the bug may now be masked by the slightly different behaviour of an upgraded library.

I know that for people who want things done for them automatically and handed over on a silver platter, dub is great. Unfortunately, it doesn't work for me. (But I also know that I don't represent typical usage, so take all this with a grain of salt.)

T

--
Little children, little troubles.
Dec 22 2017
Russel Winder <russel winder.org.uk> writes:
On Fri, 2017-12-22 at 10:39 -0800, H. S. Teoh via Digitalmars-d wrote:
[…]

 For D projects, I've been finding that Command() has been the best tool for me in terms of configuring exactly how I want things built. I used to use (early versions of) your SCons D build tools (and thanks for that!), but ultimately went back to Command() because I found it very frustrating to have my builds break because of an incompatible change in the D tooling whenever I upgrade SCons. So until the SCons D tooling API has stabilized, I'll probably hold off for the time being.

This is sad because without users the D support in SCons will not improve. There have been no changes in the D support from SCons 2.6 → 3.0 other than adding ProgramAllAtOnce, so I have no idea what you found that broke. We need a test case so as to fix it for 3.0.2 or 3.1.0.
 Also, for vibe.d projects, I've been finding the need to write my own scanner in order to pick up Diet template (*.dt) dependencies, so that builds would trigger correctly when Diet templates are changed. There is no standard way to do this, unfortunately; so far I've been scanning for `render!(.*)` lines, but this doesn't always work if `render` is instantiated with parameters generated from CTFE. Manual hardcoding has been necessary to get this part of my dependency tree to work.
Let's write a standard one, publish it via SCons_D_Experiment initially, and then put it in SCons Contrib or into the distribution. People doing their own thing and not sharing is a good way of not getting good things into the core.
 In the long term, I think an approach similar to tup will have to be adopted. O(n) dependency scanning just doesn't cut it anymore for code the size of today's large software projects. And with dynamic dependencies (e.g. CTFE-dependent imports) that are bound to happen in D code with heavy metaprogramming, there's really no sane way to manage dependencies explicitly; you really need to just instrument the compiler and record all input files it reads the way tup does. I shouldn't need to write custom scanners just to accommodate CTFE-generated imports that may change again after a few more commits. It's SSOT (single source of truth) all over again: the compiler is the ultimate authority that determines which file depends on what, and having to repeat this information in your build script (or independently derive it via scanners) introduces fragility / incompleteness into your build system.
Again, unless we do something, nothing will change. I am not sure you can get away from some element of O(n) behaviour if a build system is to detect what is to be rebuilt in a compile-then-link system. Obviously there are ways of minimising it, cf. Tup and Ninja vs. Make and, to some extent, SCons. Tup still has a form of scan; it is just very fast due to the file system tools it uses. So if SCons is to be abandoned for D builds, let's agree on that and get on with the tool that SCons and Dub are not.

[…]
 That's nice, though for now, I'm sticking with manually updating my dependencies when needed. One thing I found annoying with dub was the sheer amount of time it spent at startup to scan all dependencies and packages and possibly download a whole bunch of stuff. The network latency really kills the compile-test-debug cycle time. I know there's a switch to suppress this behaviour, but the initial dependency scanning is still pretty slow even in spite of that. When a 1-line change requires waiting 15-20 seconds just to recompile, that really breaks my workflow.
I have been dithering over replacing the use of Dub itself with a SCons tool that works directly with the repository. Dub's build structure really isn't useful for anything other than using Dub as a build system. Having two modes, update each time vs. update only when the developer requires it, is important. Unless a version glob is used, checking dependencies should never take long.
 Plus, sometimes I *don't* want anything updated -- when debugging a program, the last thing I want is for dub or the build script or whatever to decide to link in a slightly different version of a library, and suddenly I'm no longer sure if the new crash is caused by the library or my own code, or the bug may now be masked by the slightly different behaviour of an upgraded library.
Isn't this consequent on the Dub version specification? If a specific version is required, this behaviour should not happen.
 I know that for people who want things done for them automatically and handed over on a silver platter, dub is great. Unfortunately, it doesn't work for me. (But I also know that I don't represent typical usage, so take all this with a grain of salt.)
<panto-mode> Oh no it isn't. </panto-mode>

I am not a fan of Dub as a build system, but it appears to be the accepted standard, or in my view sub-standard. (Trying to develop GtkD

Should the community push to ditch Make, CMake, SCons, Dub and use Reggae (and hence Tup or Ninja)? Not a simple question. For example, CLion requires CMake, and CMake-D appears not to work, so we cannot do D in CLion. Work on D in IntelliJ IDEA is progressing but is relatively slow due to relying on volunteers. Compare Rust, which is now officially supported by JetBrains. This makes a huge difference. The development environment is almost as important as the programming language.

--
Russel.
=============================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Dec 24 2017
bachmeier <no spam.net> writes:
On Sunday, 24 December 2017 at 12:09:56 UTC, Russel Winder wrote:
 I am not a fan of Dub as a build system, but it appears to be the accepted standard, or in my view sub-standard. (Trying to

 Should the community push to ditch Make, CMake, SCons, Dub and use Reggae (and hence Tup or Ninja)?
I like SCons. Do any of the others have advantages over SCons? The Python dependency IME does complicate things because it's not trivial to get it working on Windows. It's been too long to remember specifics, but it was an adventure, and if I've got a working D installation, why am I messing around with Python?
Dec 24 2017
Russel Winder <russel winder.org.uk> writes:
On Sun, 2017-12-24 at 13:27 +0000, bachmeier via Digitalmars-d wrote:
[…]
 I like SCons. Do any of the others have advantages over SCons?
That is a moot point. For me SCons is the tool of choice when I am not using Meson.
 The Python dependency IME does complicate things because it's not trivial to get it working on Windows. It's been too long to remember specifics, but it was an adventure, and if I've got a working D installation, why am I messing around with Python?
Python being hard to install on Windows has not been true since Python 3.4. A lot of effort went into making Python installation really easy, and Python 3.6 should present no problems. I am not sure Chocolatey does as good a job as using the official installer, but as a Linux user I have no personal evidence.

--
Russel.
=============================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Dec 24 2017
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Dec 24, 2017 at 12:09:56PM +0000, Russel Winder wrote:
 On Fri, 2017-12-22 at 10:39 -0800, H. S. Teoh via Digitalmars-d wrote:
[...]
 In the long term, I think an approach similar to tup will have to be
 adopted. O(n) dependency scanning just doesn't cut it anymore for
 code the size of today's large software projects. And with dynamic
 dependencies (e.g. CTFE-dependent imports) that are bound to happen
 in D code with heavy metaprogramming, there's really no sane way to
 manage dependencies explicitly; you really need to just instrument
 the compiler and record all input files it reads the way tup does.
 I shouldn't be needing to write custom scanners just to accomodate
 CTFE-generated imports that may change again after a few more
 commits. It's SSOT (single source of truth) all over again: the
 compiler is the ultimate authority that determines which file
 depends on what, and having to repeat this information in your build
 script (or independently derive it via scanners) introduces
 fragility / incompleteness into your build system.
Again, unless we do something, nothing will change. I am not sure you can get away from some element of O(n) behaviour if a build system is to detect what is to be rebuilt in a compile-then-link system. Obviously there are ways of minimising it, cf. Tup and Ninja vs. Make and, to some extent, SCons. Tup still has a form of scan; it is just very fast due to the file system tools it uses. So if SCons is to be abandoned for D builds, let's agree on that and get on with the tool that SCons and Dub are not.
OK, I may have worded things poorly here. What I meant was that with "traditional" build systems like make or SCons, whenever you need to rebuild the source tree, the tool has to scan the *entire* source tree in order to discover what needs to be rebuilt. I.e., it's O(N) where N is the size of the source tree. Whereas tup uses the Linux kernel's inotify mechanism to learn which of the monitored files have changed since the last invocation, so that it can scan the changed files in O(n) time, where n is the number of changed files; in the usual case, n is much smaller than N. It's still linear in terms of the size of the change, but sublinear in terms of the size of the entire source tree.

I think it should be obvious that an approach whose complexity is proportional to the size of the changeset is preferable to an approach whose complexity is proportional to the size of the entire source tree, especially given the large sizes of today's typical software projects. If I modify 1 file in a project of 10,000 source files, rebuilding should not be orders of magnitude slower than if I modify 1 file in a project of 100 files.

In this sense, while SCons is far superior to make in terms of usability and reliability, its core algorithm is still inferior to tools like tup. Now, I've not actually used tup myself other than a cursory glance at how it works, so there may be other areas in which it's inferior to SCons. But the important thing is that it gets us away from the O(N) of traditional build systems that requires scanning the entire source tree, to the O(n) that's proportional to the size of the changeset. The former approach is clearly not scalable. We ought to be able to update the dependency graph in proportion to how many nodes have changed; it should not require rebuilding the entire graph every time you invoke the build.
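To make the mechanism concrete, here is a minimal D sketch of the inotify approach, using druntime's core.sys.linux.sys.inotify bindings (the watched directory is a made-up name, and a real tool would loop and parse multiple events per read):

    import core.sys.linux.sys.inotify;
    import core.sys.posix.unistd : read;
    import std.stdio : writeln;

    void main()
    {
        // One watch per directory: the kernel reports *which* entries
        // changed, so there is no need to rescan the whole tree.
        int fd = inotify_init();
        inotify_add_watch(fd, "src", IN_MODIFY | IN_CREATE | IN_DELETE);

        ubyte[4096] buffer;
        read(fd, buffer.ptr, buffer.length); // blocks until something changes
        auto event = cast(inotify_event*) buffer.ptr;
        writeln("change event, watch descriptor = ", event.wd,
                ", mask = ", event.mask);
    }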
 […]
 One thing I found annoying with dub was the sheer amount of time it
 spent at startup to scan all dependencies and packages and possibly
 downloading a whole bunch of stuff. The network latency really kills
 the compile-test-debug cycle time.  I know there's a switch to
 suppress this behaviour, but the initial dependency scanning is
 still pretty slow even in spite of that.  When a 1-line change
 requires waiting 15-20 seconds just to recompile, that really breaks
 my workflow.
I have been dithering over replacing the use of Dub itself with a SCons tool that works directly with the repository. Dub's build structure really isn't useful for anything other than using Dub as a build system. Having two modes, update each time vs. update only when the developer requires it, is important. Unless a version glob is used, checking dependencies should never take long.
Preferably, checking dependencies ought not to be done at all unless the developer calls for it. Network access is slow, and I find it intolerable when it's not even necessary in the first place. Why should it need to access the network just because I changed 1 line of code and need to rebuild?
 Plus, sometimes I *don't* want anything updated -- when debugging a
 program, the last thing I want is for dub or the build script or
 whatever to decide to link in a slightly different version of a
 library, and suddenly I'm no longer sure if the new crash is caused
 by the library or my own code, or the bug may now be masked by the
 slightly different behaviour of an upgraded library.
Isn't this consequent on the Dub version specification? If a specific version is required, this behaviour should not happen.
The documentation does not help in this respect. The only thing I could find was a scanty description of how to invoke dub in its most basic forms, with little or no information (or hard-to-find information) on how to configure it more precisely. Also, why should I need to hardcode a specific version of a dependent library just to suppress network access when rebuilding?! Sometimes I *do* want to have the latest libraries pulled in -- *when* I ask for it -- just not every single time I build. [...]
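As an aside, dub's command line does have switches aimed at exactly this; if I remember the flags right (treat them as an assumption), an offline rebuild looks like:

    dub build --nodeps --skip-registry=all

where --nodeps skips dependency resolution before building and --skip-registry=all disables registry lookups, so no network access happens.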
 I am not a fan of Dub as a build system, but it appears to be the
 accepted standard, or in my view sub-standard. (Trying to develop GtkD

AFAIK, the only standard that Dub is, is a packaging system for D. I find it quite weak as a build tool. That's the problem: it tries to do too much. It would have been nice if it stuck to just dealing with packaging, rather than trying to do builds too, and doing it IMO rather poorly.
 Should the community push to ditch Make, CMake, SCons, Dub and use
 Reggae (and hence Tup or Ninja)?
 
 Not a simple question. For example CLion requires CMake. CMake-D
 appears not to work so we can do D in CLion. Work on D in IntelliJ
 IDEA is progressing but is relatively slow due to relying on
 volunteers.  Compare Rust which is now officially supported by
 JetBrains. This makes a huge difference.
 
 The develoment environment is almost as important as the programming
 language.
Honestly, I don't care to have a "standard" build system for D. A library should be able to produce a .so or .a, and have an import path, and I couldn't care less how that happens; the library could be built by a hardcoded shell script for all I care. All I should need to do in my code is to link to that .so or .a and specify -I with the right import path(s). Why should upstream libraries dictate how my code is built?!

To this end, a standard way of exporting import paths in a D library (it can be as simple as a text file in the code repo, or some script or tool akin to llvm-config or sdl-config that spits out a list of paths / libraries / etc.) would go much further than trying to shoehorn everything into a single build system.

T

--
No! I'm not in denial!
Dec 28 2017
Russel Winder <russel winder.org.uk> writes:
On Thu, 2017-12-28 at 10:21 -0800, H. S. Teoh via Digitalmars-d wrote:
[…]
Apologies for taking so long to get to this.
 OK, I may have worded things poorly here. What I meant was that with "traditional" build systems like make or SCons, whenever you need to rebuild the source tree, the tool has to scan the *entire* source tree in order to discover what needs to be rebuilt. I.e., it's O(N) where N is the size of the source tree. Whereas tup uses the Linux kernel's inotify mechanism to learn which of the monitored files have changed since the last invocation, so that it can scan the changed files in O(n) time, where n is the number of changed files; in the usual case, n is much smaller than N. It's still linear in terms of the size of the change, but sublinear in terms of the size of the entire source tree.
This I can agree with. SCons definitely has to check hashes to determine which files have changed in a "not just space change" way on the leaves of the build ADG. I am not sure what Ninja does, but yes Tup uses inotify to filter the list of touched, but not necessarily changed, files. For my projects build time generally dominates check time so I don't see much difference. Except that Ninja is way faster than Make as a backend to CMake.
 I think it should be obvious that an approach whose complexity is proportional to the size of the changeset is preferable to an approach whose complexity is proportional to the size of the entire source tree, especially given the large sizes of today's typical software projects. If I modify 1 file in a project of 10,000 source files, rebuilding should not be orders of magnitude slower than if I modify 1 file in a project of 100 files.
It is obvious, but complexity is not everything; wall clock time is arguably more important. As is actual build time versus preparation time. SCons does indeed have a large up-front ADG check time for large projects. I believe there is the Parts overlay on SCons for dealing with big projects, and I believe the plan for later in the year is for the most useful parts of Parts to become part of the main SCons system.
 In this sense, while SCons is far superior to make in terms of usability and reliability, its core algorithm is still inferior to tools like tup.
However, Tup is not getting traction compared to CMake (with either a Make or, preferably, a Ninja backend – I wonder if there is a Tup backend).
 Now, I've not actually used tup myself other than a cursory glance at how it works, so there may be other areas in which it's inferior to SCons. But the important thing is that it gets us away from the O(N) of traditional build systems that requires scanning the entire source tree, to the O(n) that's proportional to the size of the changeset. The former approach is clearly not scalable. We ought to be able to update the dependency graph in proportion to how many nodes have changed; it should not require rebuilding the entire graph every time you invoke the build.
I am not using Tup much simply because I have not started using it; I just use SCons, Meson, and, when I have to, CMake/Ninja. In the end my projects are just not big enough for me to investigate the faster build times Tup reputedly brings.
[…]
 Preferably, checking dependencies ought not to be done at all unless the developer calls for it. Network access is slow, and I find it intolerable when it's not even necessary in the first place. Why should it need to access the network just because I changed 1 line of code and need to rebuild?
This was the reason for Waf: split the SCons system into a configuration set-up and a build, à la Autotools. CMake also does this. As does Meson. I have a preference for this way. And yet I still use SCons quite a lot!
[…]
 The documentation does not help in this respect. The only thing I could find was a scanty description of how to invoke dub in its most basic forms, with little or no information (or hard-to-find information) on how to configure it more precisely. Also, why should I need to hardcode a specific version of a dependent library just to suppress network access when rebuilding?! Sometimes I *do* want to have the latest libraries pulled in -- *when* I ask for it -- just not every single time I build.
If Dub really is to become the system for D as Cargo is for Rust, it clearly needs more people to work on it and evolve the code and the documentation. Whilst no-one does stuff, the result will be rhetorical ranting on the email lists.
[…]
 AFAIK, the only standard that Dub is, is a packaging system for D. I find it quite weak as a build tool. That's the problem: it tries to do too much. It would have been nice if it stuck to just dealing with packaging, rather than trying to do builds too, and doing it IMO rather poorly.
No argument from me there, except Cargo. Cargo does a surprisingly good job of being a package management and build system. Even the go command is quite good at it for Go. So I am re-assessing my old dislike of this way – I used to be a "separate package management and build, and leave build to build systems" person, and I guess I still am really. However Cargo is challenging my view, where Dub currently does not. Given the thought above, unless I and others actually get on and evolve Dub, nothing will change.
[…]
 Honestly, I don't care to have a "standard" build system for D. A library should be able to produce a .so or .a, and have an import path, and I couldn't care less how that happens; the library could be built by a hardcoded shell script for all I care. All I should need to do in my code is to link to that .so or .a and specify -I with the right import path(s). Why should upstream libraries dictate how my code is built?!
This last point is one of the biggest problems with the current Dub system, and a reason many people have no intention of using Dub for builds. Your earlier points in this paragraph should be turned into issues on the Dub source repository, and indeed the last one as well. And then we should create pull requests.

I actually think a standard way is a good thing, but there should be other ones as well. SCons, CMake, Meson, etc. all need ways of building D for those who do not want to use the standard way. Seems reasonable to me. However, SCons and Meson support for D is not yet as good as it could be, and last time I tried, CMake-D didn't work for me.
 To this end, a standard way of exporting import paths in a D library (it can be as simple as a text file in the code repo, or some script or tool akin to llvm-config or sdl-config that spits out a list of paths / libraries / etc.) would go much further than trying to shoehorn everything into a single build system.
So let's do it rather than just talk about it?

--
Russel.
=============================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Feb 01 2018
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 01, 2018 at 12:56:28PM +0000, Russel Winder wrote:
[...]
 Apologies for taking so long to get to this.
Not a problem, you and I are both busy, and it's perfectly understandable that we can't respond to things instantly.
 On Thu, 2017-12-28 at 10:21 -0800, H. S. Teoh via Digitalmars-d wrote:
[...]
 OK, I may have worded things poorly here.  What I meant was that
 with "traditional" build systems like make or SCons, whenever you
 needed to rebuild the source tree, the tool has to scan the *entire*
 source tree in order to discover what needs to be rebuilt. I.e.,
 it's O(N) where N is the size of the source tree.  Whereas with tup,
 it uses the Linux kernel's inotify mechanism to learn about which
 file(s) being monitored have been changed since the last invocation,
 so that it can scan the changed files in O(n) time where n is the
 number of changed files, and in the usual case, n is much smaller
 than N. It's still linear in terms of the size of the change, but
 sublinear in terms of the size of the entire source tree.
This I can agree with. SCons definitely has to check hashes to determine which files have changed in a "not just space change" way on the leaves of the build ADG. I am not sure what Ninja does, but yes Tup uses inotify to filter the list of touched, but not necessarily changed, files. For my projects build time generally dominates check time so I don't see much difference. Except that Ninja is way faster than Make as a backend to CMake.
In small projects like my personal ones, SCons still does a fast enough job that I don't really care about the difference between O(N) and O(n). So I still use SCons for them -- SCons does have a really nice interface and is generally pleasant to work with, so I don't feel an immediate need to improve the build system.

But in a large project, like the one I work with at my job, containing 500,000 source files (not counting data files and the like that also need to be processed by the build system), the difference can become very pronounced. In our case, we use make, which doesn't scan file contents, and recursive make at that, so the initial scanning pause is not noticeable. However, this perceived speed comes at the heavy cost of reliability. On more occasions than I'd wish anyone else to experience, I've had problems with faulty software builds caused not by actual bugs in the code, but merely by make not rebuilding something when it should, or not cleaning up stray stale files when it should, causing stale object files to be linked instead of the real objects. It has simply become an accepted fact of life to `make clean; make`. Well actually, it's even worse than that -- our `make clean` does *not* clean everything that might potentially be a problem, so I have for the last ≥5 years resorted to a script that manually deletes everything that isn't under version control. (What makes it even sadder is that the version control server is overloaded and it's faster to delete files locally than to check out a fresh copy of the workspace, which is essentially what my script amounts to.)

At one point I was on the verge of proposing SCons as a make replacement, but balked when initial research into the prospect showed that SCons consistently had performance issues with needing to scan the entire source tree before it begins building. That, coupled with general resistance to change in your average programmer workforce and their general unfamiliarity with make alternatives, made me back off from making the proposal. Had tup been around at that time, it would likely have turned the tables with its killer combo of sublinear (relative to workspace size) scanning and reliability. That's why I think that any modern build system that's going to last into the future must have these two features, at a minimum.
 I think it should be obvious that an approach whose complexity is
 proportional to the size of the changeset is preferable to an
 approach whose complexity is proportional to the size of the entire
 source tree, esp.  given the large sizes of today's typical software
 projects.  If I modify 1 file in a project of 10,000 source files,
 rebuilding should not be orders of magnitude slower than if I modify
 1 file in a project of 100 files.
 It is obvious, but complexity is not everything; wall clock time is arguably more important.
Using inotify() to update your dependency tree has basically zero wall clock time because it's done in the background. You can't beat that with anything that requires scanning upon invocation of the build tool.
 As is actual build time versus preparation time. SCons does indeed
 have a large up-front ADG check time for large projects. I believe
 there is the Parts overlay on SCons for dealing with big projects. I
 believe the plan for later in the year is for the most useful parts of
 Parts to become part of the main SCons system. 
But still, the fundamental design limitation remains: scanning time is proportional to workspace size, as opposed to being proportional to changeset size. Judging by current trends in software sizes, this issue is only going to become increasingly important.
 In this sense, while SCons is far superior to make in terms of
 usability and reliability, its core algorithm is still inferior to
 tools like tup.
However, Tup is not getting traction compared to CMake (with either a Make or, preferably, a Ninja backend – I wonder if there is a Tup backend).
I mentioned Tup as an example of a superior build algorithm to the decades-old make model. I'm not partial to Tup itself, and it doesn't concern me whether or not it's gaining traction. What I'm more concerned with is whether the underlying algorithm of (insert whatever build system you prefer here) is going to remain relevant going forward.
 Now, I've not actually used tup myself other than a cursory glance
 at how it works, so there may be other areas in which it's inferior
 to SCons.  But the important thing is that it gets us away from the
 O(N) of traditional build systems that requires scanning the entire
 source tree, to the O(n) that's proportional to the size of the
 changeset. The former approach is clearly not scalable. We ought to
 be able to update the dependency graph in proportion to how many
 nodes have changed; it should not require rebuilding the entire
 graph every time you invoke the build.
I am not using Tup much simply because I have not started using it; I just use SCons, Meson, and, when I have to, CMake/Ninja. In the end my projects are just not big enough for me to investigate the faster build times Tup reputedly brings.
Given its simplicity and lack of historical baggage, I'm expecting Tup will be pretty fast, if not on par, with existing make-based designs, when it comes to small to medium projects. But for large projects of today's scale, I'm expecting Tup is going to outstrip its competitors by orders of magnitude, maybe more, *while still maintaining build reliability*. (It *may* be possible to beat Tup in speed if you sacrifice reliability, but I'm not considering that option as viable.) Tup is getting pretty close to doing the absolute minimum work you need to do in order for a code change to be reflected in the build products. Any less than that, and you start risking unreliable builds (i.e. outdated build products are not rebuilt). [...]
 Preferably, checking dependencies ought not to be done at all unless
 the developer calls for it. Network access is slow, and I find it
 intolerable when it's not even necessary in the first place.  Why
 should it need to access the network just because I changed 1 line
 of code and need to rebuild?
This was the reason for Waf: split the SCons system into a configuration set-up and a build, à la Autotools. CMake also does this. As does Meson. I have a preference for this way. And yet I still use SCons quite a lot!
IMO, if a build system relies on network access as part of its dependency graph, then something has gone horribly wrong. (Aside from NFS and the like, of course.) Updating libraries is IMO not the build system's job; that's what a package manager is supposed to be doing. The build system should be concerned solely with producing build products, given the current state of the source tree. It has no business going about *updating* the source tree from the network willy-nilly just because it can. That's simply an unworkable model -- I could be in the middle of debugging something, and then I rebuild and suddenly the bug can no longer be reproduced because the build tool has "helpfully" replaced one of my libraries with a new version and now the location of the bug has shifted, putting my hours' worth of work in narrowing down the locus of the bug to waste. [...]
 The documentation does not help in this respect. The only thing I
 could find was a scanty description of how to invoke dub in its most
 basic forms, with little or no information (or hard-to-find
 information) on how to configure it more precisely.  Also, why
 should I need to hardcode a specific version of a dependent library
 just to suppress network access when rebuilding?! Sometimes I *do*
 want to have the latest libraries pulled in -- *when* I ask for it
 -- just not every single time I build.
If Dub really is to become the system for D as Cargo is for Rust, it clearly needs more people to work on it and evolve the code and the documentation. Whilst no-one does stuff, the result will be rhetorical ranting on the email lists.
The problem is that I have fundamental disagreements with dub's design, and therefore find it difficult to bring myself to work on its code, since my first inclination would be to rip its guts out and rewrite from scratch, which I don't think Sönke will take kindly to, much less merge into the official repo. I suppose if I were pressed I could bring myself to contribute to its documentation, but right now, I've switched back to SCons for my builds and basically confined dub to a dummy empty project that fetches and builds my dependent libraries and nothing else. This setup works well for me, so I don't really have much motivation to improve dub's docs or otherwise improve dub -- I won't be using it very much after all. [...]
 AFAIK, the only standard that Dub is, is a packaging system for D.
 I find it quite weak as a build tool.  That's the problem, it tries
 to do too much.  It would have been nice if it stuck to just dealing
 with packaging, rather than trying to do builds too, and doing it
 IMO rather poorly.
No argument from me there, except Cargo. Cargo does a surprisingly good job of being a package management and build system. Even the go command is quite good at it for Go. So I am re-assessing my old dislike of this way – I used to be a "separate package management and build, and leave build to build systems" person, I guess I still am really. However Cargo is challenging my view, where Dub currently does not.
[...] Then perhaps you should submit PRs to dub to make it more Cargo-like. ;-) [...]
 Honestly, I don't care to have a "standard" build system for D. A
 library should be able to produce a .so or .a, and have an import
 path, and I couldn't care less how that happens; the library could
 be built by a hardcoded shell script for all I care. All I should
 need to do in my code is to link to that .so or .a and specify -I
 with the right import path(s). Why should upstream libraries dictate
 how my code is built?!
This last point is one of the biggest problems with the current Dub system, and a reason many people have no intention of using Dub for build. Your earlier points in this paragraph should be turned into issues on the Dub source repository, and indeed the last one as well. And then we should create pull requests.
Good idea. Though I can't see this changing without rather intrusive changes to the way dub works, so I'm not sure if Sönke would be open to this sort of change. But submitting issues to that effect wouldn't hurt.
 I actually think a standard way is a good thing, but that there should
 be other ones as well. SCons, CMake, Meson, etc. all need ways of
 building D for those who do not want to use the standard way. Seems
 reasonable to me. However SCons and Meson support for D is not yet as
 good as it could be, and last time I tried, CMake-D didn't
 work for me.
A standard way to build would be fine if we were starting out from scratch, in a brand new ecosystem, like Rust. The problem is, D has always supported C/C++-style builds since day 1, and D codebases have been around for far longer than dub has been, and have become entrenched in the way they are built. So for dub (or any other packaging / build system, really) to come along and be gratuitously incompatible with how existing build systems work, is a big showstopper, and gives off the impression of being a walled garden -- either you embrace it fully to the exclusion of all else, or you're left out in the cold.
 To this end, a standard way of exporting import paths in a D library
 (it can be as simple as a text file in the code repo, or some script
 or tool akin to llvm-config or sdl-config that spits out a list of
 paths / libraries / etc) would go much further than trying to
 shoehorn everything into a single build system.
So let's do it rather than just talk about it?
Sure. Since this information is ostensibly already present in a dub project (encoded somewhere in dub.json or dub.sdl), it seems to make little sense to introduce yet another new thing that nobody implements. So a first step might be to enhance dub with a command-line command to output import paths / linker paths in a machine-readable format. Then existing dub projects can be immediately made accessible to external build systems by having said build systems invoke it.

Perhaps, to eliminate the need for existing build scripts to parse JSON or something like that, we could provide finer-grained subcommands, like:

    dub config import-paths
    dub config linker-paths
    dub config dynamic-library-paths

and it would output, respectively, something along the lines of:

    /path/to/somelibrary/src
    /path/to/someotherlib/src
    /path/to/yetanotherlib/submodule1/import
    /path/to/yetanotherlib/submodule2/import

    /path/to/somelibrary/generated/os/64/lib
    /path/to/someotherlib/generated/lib
    /path/to/yetanotherlib/generated/sub/module1/out
    /path/to/yetanotherlib/generated/sub/module2/out

    -lsomelibrary -lsomeotherlib -lyetanotherlib

Not 100% sure what to do with existing non-dub projects. Perhaps a text file in some standard location.

T

--
In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
Feb 02 2018
Jonathan Marler <johnnymarler gmail.com> writes:
On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
 I'm going to do a writeup on the state of D in Web Development, 
 APIs and Services for 2017. I need the perspective of the 
 community too along with my personal experience. Please help 
 out. More details the better.

 0. Since when did you or company start using D in this area?
I converted 2 of my websites from PHP to D earlier this year:

http://clarityfitidaho.com
http://bobbiblu.net
 1. Do you use a framework? Which one?
I use my own "cgi.d" framework to create cgi scripts in D that are served up by apache. (https://github.com/marler8997/mored/blob/master/more/cgi.d) I started a project to replace apache for my applications, however, I have not finished or deployed it yet.
 2. Why that approach and what would have done otherwise?
PHP is a nightmare. I love the benefits of compiled languages, and D is both compiled and powerful enough to compete with dynamic languages like Python/JavaScript/PHP. Go would also be a good candidate, but without all the power of D that I've come to know and love, I feel like someone's cut off my legs when I program in it.
 3. Which task exactly do you use D to accomplish?
In the web space, I use it to render web pages via cgi scripts, but I use D in many more areas outside of the web space.
 4. Which (dub) packages do you use and for what purpose?
I don't use any dub packages.
 5. How do you host your software code (cloud platforms,  vps,  
 PaaS, docker,  Openshift, kubernetes, etc)?
I rent a Debian server for $50 a month from ServerPronto. I host a handful of websites on it and use it for other things as well.
 6. What are some constraints and problems in using D for such 
 tasks?
No complaints from me. D works great, especially on Linux.
 7. What solutions do you recommend?
Dec 21 2017
bauss <jj_1337 live.dk> writes:
On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
 I'm going to do a writeup on the state of D in Web Development, 
 APIs and Services for 2017. I need the perspective of the 
 community too along with my personal experience. Please help 
 out. More details the better.

 0. Since when did you or company start using D in this area?

 1. Do you use a framework? Which one?

 2. Why that approach and what would have done otherwise?

 3. Which task exactly do you use D to accomplish?

 4. Which (dub) packages do you use and for what purpose?

 5. How do you host your software code (cloud platforms,  vps,  
 PaaS, docker,  Openshift, kubernetes, etc)?

 6. What are some constraints and problems in using D for such 
 tasks?

 7. What solutions do you recommend?
0. Since over a year ago. I mostly do freelancing with it, so just me really.

1. Diamond -- https://github.com/diamondmvc/diamond

2. Would have done nothing different.

3. Every project I do using D, I pretty much use D only, so all tasks necessary, big or small.

4. mysql-native, vibe.d and diamond.

5. I don't host any projects myself.

6. None so far, except for webservices like SOAP. (Although I'm planning on implementing that for Diamond -- don't know when, however.)

7. Depends on the task.
Dec 21 2017
Neia Neutuladh <neia ikeran.org> writes:
On Friday, 15 December 2017 at 08:13:25 UTC, aberba wrote:
 I'm going to do a writeup on the state of D in Web Development, 
 APIs and Services for 2017. I need the perspective of the 
 community too along with my personal experience. Please help 
 out. More details the better.

 0. Since when did you or company start using D in this area?
I have been fooling around with D for web-related stuff for a year or so, but nothing terribly concrete. I've been actively migrating my RSS system to D mostly in the past month and a bit.
 1. Do you use a framework? Which one?
I use vibe.d.
 2. Why that approach and what would have done otherwise?
It was there. Without it, I probably would have looked at Hunt briefly and then cobbled together something based on CGI.
 3. Which task exactly do you use D to accomplish?
https://github.com/dhasenan/pierce/

I use D for all backend code, so:

* reading feeds
* mucking about with the database
* authentication
 4. Which (dub) packages do you use and for what purpose?
* arsd-official:dom: an excellent HTML/XML parser
* datefmt: date parsing, primarily, with a side of formatting
* pbkdf2: password hashing
* urld: to have consistent URL handling with other applications
* vibe-d-postgresql: postgres
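As an illustration of the first of these, arsd's dom module gives you browser-style document access on the server (a small sketch; the HTML fragment is made up, but Document, querySelectorAll, and getAttribute are the module's real API):

    import arsd.dom;
    import std.stdio;

    void main()
    {
        // Parse a fragment and walk it with CSS selectors, much as
        // you would in a browser.
        auto document = new Document(
            `<html><body><a href="/feed.xml">RSS</a></body></html>`);
        foreach (link; document.querySelectorAll("a"))
            writeln(link.getAttribute("href"));
    }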
 5. How do you host your software code (cloud platforms,  vps,  
 PaaS, docker,  Openshift, kubernetes, etc)?
I have a couple VPS boxes with Linode that I deploy stuff to. PaaS providers do me a concern. Docker is
 6. What are some constraints and problems in using D for such 
 tasks?
vibe.d is all non-blocking IO and it's not always easy to find a library that does a thing with non-blocking IO. You can dispatch tasks to a worker thread, but that uses D's `shared` system, which imposes barriers I'm not terribly familiar with. That also doesn't give you Futures. I believe my code still has blocking database access, at least in production. With my user volume (one for the D alpha, three for to fork a process for handling background tasks, since they can be long.

dub doesn't know how to dynamically link dependencies. This means my binary is 38MB and takes half a minute to copy to the server. Since I worked out how to do dynamic linking manually, I'm going to add that in, and then I'll be able to rsync everything. That should reduce my application's binary size to a megabyte or less, which will be a lot nicer.

vibe doesn't have a logging appender for std.experimental.logger, and vibe's logging system isn't terribly awesome. I wrote a vibe-compatible rolling file appender for std.experimental.logger in the end. It's kind of weird, though, that std.experimental.logger doesn't separate layout from the output type. It makes sense for some potential appenders to ignore the layout system you would give them -- like if you have an appender for some sort of structured logging API that accepts protobuf-encoded events. But most logs are just text, and if I don't like the layout, I need to write a whole logger.
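The worker-thread dispatch mentioned above looks roughly like this (a sketch; fetchFeed and the URL are made up, but runWorkerTask is vibe.d's real API, and its arguments have to be safely transferable across threads, which is where the `shared` barriers bite):

    import vibe.core.core : runWorkerTask;

    // Runs on a worker thread, off the event loop. Arguments must be
    // isolated or immutable; string qualifies because its contents
    // are immutable.
    void fetchFeed(string url)
    {
        // long-running or blocking work goes here
    }

    void enqueueRefresh()
    {
        runWorkerTask(&fetchFeed, "https://example.org/feed.xml");
    }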
 7. What solutions do you recommend?
vibe.d isn't a bad option. It's got a lot of stuff in it. However, it might be simpler on the whole to use FastCGI.
Dec 24 2017