
digitalmars.D - D enters Tiobe top 20

reply RazvanN <razvan.nitu1305 gmail.com> writes:
Good news, everyone!

D has entered the Tiobe top 20 ranking of programming languages 
[1], landing in 18th position. I have been keeping an eye on this 
index, and this is the first time I have seen it happen.

Cheers,
RazvanN

[1] https://www.tiobe.com/tiobe-index/
Nov 04 2019
next sibling parent reply Daniel Kozak <kozzi11 gmail.com> writes:
On Mon, Nov 4, 2019 at 10:05 AM RazvanN via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages
 [1], landing on the 18th position. I have been keeping an eye on
 this index, and it is for the first time that I see this
 happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
AFAIK, it has been there before, but only for a very short time: https://forum.dlang.org/post/fdmafvxoawgmsfjmhksr forum.dlang.org
Nov 04 2019
next sibling parent RazvanN <razvan.nitu1305 gmail.com> writes:
On Monday, 4 November 2019 at 09:19:16 UTC, Daniel Kozak wrote:
 On Mon, Nov 4, 2019 at 10:05 AM RazvanN via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming 
 languages [1], landing on the 18th position. I have been 
 keeping an eye on this index, and it is for the first time 
 that I see this happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
AFAIK, it has been there before, but for very short time: https://forum.dlang.org/post/fdmafvxoawgmsfjmhksr forum.dlang.org
I started looking into D in October 2016, so I did not know about that (the post is from June 2016). Anyway, I think the community should take pride in this outcome and we should do our best to make sure D stays in the top 20. Best regards, RazvanN
Nov 04 2019
prev sibling parent reply IGotD- <nise nise.com> writes:
On Monday, 4 November 2019 at 09:19:16 UTC, Daniel Kozak wrote:
 On Mon, Nov 4, 2019 at 10:05 AM RazvanN via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming 
 languages [1], landing on the 18th position. I have been 
 keeping an eye on this index, and it is for the first time 
 that I see this happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
AFAIK, it has been there before, but for very short time: https://forum.dlang.org/post/fdmafvxoawgmsfjmhksr forum.dlang.org
I think D is going to stay in the top segment now, because of C++xx. Many people are seeking a new programming language, and D is one refuge. The fact that it got into the GNU Compiler Collection also helps. D can copy the success of Python by being expressive and simple. Hopefully the D community keeps D simple and does not add a lot of crud like in C++xx. Aesthetic design matters in programming languages, just as it does with cars. Speaking of Rust, I think Rust will remain a niche language because it is more complicated than its benefits justify. A programmer who has only used Python before is going to choose D over Rust, and that's where the masses are. Keep up the good work.
Nov 04 2019
next sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Monday, 4 November 2019 at 10:56:29 UTC, IGotD- wrote:
 On Monday, 4 November 2019 at 09:19:16 UTC, Daniel Kozak wrote:
 A programmer who only used Python before is going to choose D 
 over Rust and that's where the masses are.
I think you are really right, especially given a good framework, for example for machine learning. With a good deep learning solution in D, we would for sure use D for that, instead of doing the work in Python and then translating the successful result into C++ for performance and embedding, for example.
Nov 06 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 6 November 2019 at 09:18:58 UTC, Paolo Invernizzi 
wrote:
 Alas with a good deep learning D solution, we will for sure use 
 D for that, instead of doing the work in python, and then 
 translating the successful one into C++ for performance and 
 embedding, for example.
Out of curiosity, does that mean that you prefer to do the training on your own machines rather than "renting" existing infrastructure (cloud solutions)?

Having little experience with deep learning (although some with basic ML), I thought the advantage of using a ready-made framework like the Python one from Google is that you have a large set of prewritten libraries from both Google and other third-party contributors that you can compose?

Also, if you create your own training environment, you would still have to run it on a cluster with GPUs? Then distill it down into something that can run well on a single CPU/GPU/SoC? Maybe a framework can do that translation well... but I guess then the better option would be to have a dedicated high-level language that translates well to both the training environment and the final host environment. That seems more reasonable to me?

Or maybe you are talking about the initial preprocessing of data? In which case you can do it now by interfacing with Python from D?
Nov 06 2019
prev sibling parent reply bachmeier <no spam.net> writes:
On Monday, 4 November 2019 at 10:56:29 UTC, IGotD- wrote:

 Speaking of Rust, I think Rust will remain a niche language 
 because it is a bit too complicated than what benefits it 
 gives. A programmer who only used Python before is going to 
 choose D over Rust and that's where the masses are.
I'm not going to speculate about Rust's future, but that's consistent with my story. I used Rust before using D. Rust had all the hype and D was selling itself as a dialect of C++ in those days, so the decision to go with Rust was easy. It didn't take long to realize that Rust was not going to work. It has a learning curve like Vim. That meant that even ignoring the hideous syntax, which might be better now, nobody I was working with (the Python type of programmer) was ever going to write a line of Rust. It was hard to understand what it brought to the table for most of the programming world. After trying Go and not liking that either, I decided to give D a try during a free moment one evening, expecting to not spend more than a few minutes on it. What I found was a surprisingly well-designed language that had little in common with C++ other than the common ancestor of C. For anyone looking at compiled languages for speed, D was miles ahead of anything else in terms of the combination of ease of use and features.
Nov 06 2019
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/6/2019 2:31 AM, bachmeier wrote:
 That meant that even ignoring the hideous syntax
I seriously don't like that syntax, either. The Ownership/Borrowing scheme for D does not introduce ANY new syntax other than @live. It'll look just like the code you're used to!
Nov 06 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07.11.19 01:03, Walter Bright wrote:
 On 11/6/2019 2:31 AM, bachmeier wrote:
 That meant that even ignoring the hideous syntax
I seriously don't like that syntax, either. The Ownership/Borrowing scheme for D does not introduce ANY new syntax other than @live. It'll look just like the code you're used to!
That's not a plus. It will _mean_ something else. Therefore, you will have trouble at the interface between @live and non-@live code, because it is not the same language. Not to mention that @live without any further syntax changes will necessarily be even more restrictive than safe Rust.
Nov 07 2019
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/7/2019 1:45 AM, Timon Gehr wrote:
 That's not a plus. It will _mean_ something else. Therefore, you will have 
 trouble at the interface between @live and non-@live code, because it is not 
 the same language. Not to mention that @live without any further syntax 
 changes will necessarily be even more restrictive than safe Rust.
I guess we'll have to see. I'm making progress every day on it. I'm looking forward to you tearing it apart :-)
Nov 08 2019
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08.11.19 11:11, Walter Bright wrote:
 On 11/7/2019 1:45 AM, Timon Gehr wrote:
 That's not a plus. It will _mean_ something else. Therefore, you will 
 have trouble at the interface between @live and non-@live code, 
 because it is not the same language. Not to mention that @live without 
 any further syntax changes will necessarily be even more restrictive 
 than safe Rust.
I guess we'll have to see. I'm making progress every day on it. I'm looking forward to you tearing it apart :-)
So am I. I'm dispensing preliminary advice primarily to make that task more challenging. I'd be delighted to fail. ;)
Nov 08 2019
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.
I'm surprised that D is so high up in the list. There are many languages below D that I thought would be ahead, especially Go. -- /Jacob Carlborg
Nov 04 2019
prev sibling next sibling parent reply NaN <divide by.zero> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
You'd think that with all the press Rust seems to be getting lately, it'd be higher up than it is?
Nov 04 2019
next sibling parent reply RazvanN <razvan.nitu1305 gmail.com> writes:
On Monday, 4 November 2019 at 09:32:56 UTC, NaN wrote:
 On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming 
 languages [1], landing on the 18th position. I have been 
 keeping an eye on this index, and it is for the first time 
 that I see this happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
You'd think with all the press rust seems to be getting lately it'd be higher up than it is?
It kind of sucks that even in the announcement on the Tiobe site they say "Rust is at its peak", even though it is in 25th position, but they do not mention anything about D getting into the top 20. I think that all this Rust hype is artificial, and people with large codebases that started looking into it are not that pleased with it.
Nov 04 2019
parent reply JN <666total wp.pl> writes:
On Monday, 4 November 2019 at 09:36:33 UTC, RazvanN wrote:
 I think that all this Rust hype is artificial and people with 
 large codebases that started looking into it are not that 
 pleased with it.
It's not artificial at all. Many big companies now start their projects in Rust. Rust has become the go-to language for new projects that would normally use C/C++. It definitely has hype behind it, and the hype helps build its ecosystem, which will help maintain its popularity after the hype dies down.
Nov 04 2019
parent reply drug <drug2004 bk.ru> writes:
On 11/4/19 2:17 PM, JN wrote:
 On Monday, 4 November 2019 at 09:36:33 UTC, RazvanN wrote:
 I think that all this Rust hype is artificial and people with large 
 codebases that started looking into it are not that pleased with it.
It's not artificial at all. Many big companies start their projects in Rust now. Rust became the go-to language for new projects that would use C/C++ normally. They definitely have the hype behind it and the hype helps build its ecosystem, which will be helpful in maintaining the popularity after the hype dies over.
Yeah. Also, some companies try to drop their current language for Rust and then, after some experience with Rust, switch back. Considering the huge hype about Rust, and comparing it to Go, it's obvious that something is wrong with Rust adoption. I totally agree with Andrei about Rust: I think it is a good but niche language with abnormal hype.
Nov 04 2019
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/4/2019 11:52 PM, drug wrote:
 Yeah. Also, some companies try to drop their current language for Rust and 
 then, after some experience with Rust, switch back. Considering the huge hype 
 about Rust, and comparing it to Go, it's obvious that something is wrong with 
 Rust adoption. I totally agree with Andrei about Rust: I think it is a good 
 but niche language with abnormal hype.
I'm going to speculate that Rust is a bit like pure functional programming. It's a great paradigm, but it is inconvenient for that last mile of code one needs to write. I've struggled with getting that last mile done in other languages (Pascal), and it would consume an inordinate amount of time. This is why D has "pure" functions. You can slip in and out of functional programming as you like. I'm working on an ownership/borrowing system for D. But the idea is that it will only be active in functions marked as @live.
Nov 05 2019
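To make the contrast concrete, here is a minimal D sketch of the two ideas above: pure lets ordinary imperative code opt into functional guarantees one function at a time, and @live (still under design when this was written, so shown only as a commented-out line) would similarly opt a single function into ownership/borrowing checks. The function names are made up for illustration.

```d
import std.stdio;

// A pure function: it cannot read or write mutable global state,
// so it can be reasoned about like a mathematical function.
pure int square(int x)
{
    return x * x;
}

// An ordinary impure function can freely call the pure one,
// "slipping in and out" of functional style as needed.
void report(int x)
{
    writeln("square = ", square(x));
}

// Hypothetical sketch: @live would opt just this one function into
// ownership/borrowing checks, with no new syntax in the body.
// (@live was still being designed when this thread was written.)
// @live void consume(int* p) { ... }

void main()
{
    report(5);
}
```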
next sibling parent reply drug <drug2004 bk.ru> writes:
On 11/6/19 8:02 AM, Walter Bright wrote:
 I'm going to speculate that Rust is a bit like pure functional 
 programming. It's a great paradigm, but is inconvenient for that last 
 mile of the code one needs to write. I've struggled with getting that 
 last mile done with other languages (Pascal), and it would consume an 
 inordinate amount of time.
I like your simile. Some developers complain that working with hardware in Rust means a huge amount of unsafe code. The last mile in Rust can be really difficult.
 
 This is why D has "pure" functions. You can slip in and out of 
 functional programming as you like.
 
 I'm working on an ownership/borrowing system for D. But the idea is it 
 will only be active in functions marked as @live.
For example, in Rust it's very hard to implement a tree where nodes can point to both their parent and their children, because even in the trivial case the parent and child nodes borrow each other, which is impossible in Rust.
Nov 06 2019
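For contrast, a minimal sketch of the doubly-linked tree being discussed, in plain garbage-collected D, where parent and child references may freely point at each other; the Node class is invented for illustration.

```d
import std.stdio;

// A tree node that points both at its parent and at its children.
// With a tracing GC there is no ownership question to resolve:
// the mutual parent/child references are simply collected together.
class Node
{
    int value;
    Node parent;
    Node[] children;

    this(int value, Node parent = null)
    {
        this.value = value;
        this.parent = parent;
        if (parent !is null)
            parent.children ~= this;
    }
}

void main()
{
    auto root  = new Node(1);
    auto child = new Node(2, root);

    writeln(child.parent.value);      // walk up: prints 1
    writeln(root.children[0].value);  // walk down: prints 2
}
```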
next sibling parent reply IGotD- <nise nise.com> writes:
On Wednesday, 6 November 2019 at 09:40:29 UTC, drug wrote:
 On 11/6/19 8:02 AM, Walter Bright wrote:

 For example in Rust it's very hard to implement tree where 
 nodes can point to both its parent and children because in 
 trivial case both parent and child node are borrow each other 
 that is impossible in Rust.
Trees are computer science fundamentals, and they appear in a wide variety of programs. Having a link to the parent is mandatory for doing a fast sorted walk through the elements of a tree. You can do it in Rust, but that means unsafe code, and you also cannot use Rust's memory management. Maybe with a reference-counted object it would be easier. I'm trying to wrap my head around the fact that they created a language that cannot support simple, common and fundamental algorithms that have been used for more than half a century. The designers of Rust should have known better.
Nov 06 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 6 November 2019 at 11:43:30 UTC, IGotD- wrote:
 I'm trying to wrap my head around that they created a language 
 that cannot support simple, common and fundamental algorithms 
 used for more than a half of a century. The designers of Rust 
 should have known better.
Depends on what you think Rust's safe mode is. If you view it as a high-level language, then that is OK, as it can be done as an ADT. That means you have to create ADTs outside the safe language. If writing such ADTs is not possible in Rust, then that would be more of a meta-programming deficiency than a type system deficiency (or both).

Reference counting and garbage collection are not alternatives, in the general case... that is way too slow... There are no safe alternatives to Rust's model, in the general case, if you want to allow arbitrary optimizations. Although you can do it in Rust's model with node-ids and indexes, that is only efficient if you know the upper bound on the number of nodes.

So, while Rust's model isn't perfect, there are no known competitive alternative models that work for graphs on the heap either! It is way too early to judge Rust; they have not hit the end of the road yet. We'll have to wait and see what their meta-programming, ADT and type-system story evolves into.
Nov 06 2019
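A small sketch of the node-id/index workaround mentioned above, written in D to stay in one language: nodes live in a flat array and refer to each other by index rather than by reference, which is the usual way to express such structures under a borrowing discipline. The Tree and Node names are invented for the example.

```d
import std.stdio;

// Index-based tree: all nodes live in one flat array and refer to each
// other by index, so no node ever holds a direct reference to another.
struct Node
{
    int value;
    size_t parent;      // index of the parent (the root points at itself)
    size_t[] children;  // indices of the children
}

struct Tree
{
    Node[] nodes;

    size_t addRoot(int value)
    {
        nodes ~= Node(value, 0);
        return nodes.length - 1;
    }

    size_t addChild(size_t parent, int value)
    {
        nodes ~= Node(value, parent);
        const id = nodes.length - 1;
        nodes[parent].children ~= id;
        return id;
    }
}

void main()
{
    Tree t;
    const root  = t.addRoot(1);
    const child = t.addChild(root, 2);

    writeln(t.nodes[child].parent);                 // 0: the root's index
    writeln(t.nodes[t.nodes[child].parent].value);  // 1: the root's value
}
```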
prev sibling parent reply drug <drug2004 bk.ru> writes:
On 11/6/19 2:43 PM, IGotD- wrote:
 Trees are like computer science fundamentals and they are located in a 
 wide variety of programs. Having a link to the parent is mandatory for 
 doing a fast sorted walk through the elements in tree. You can do it 
 rust but that means unsafe code and also you cannot use the memory 
 management of Rust. Maybe with a reference counted object it will be 
 easier.
Yes, I wasn't clear on that. Of course I meant that it is hard in safe Rust, because otherwise there would be no reason to use Rust at all, I believe.
 
 I'm trying to wrap my head around that they created a language that 
 cannot support simple, common and fundamental algorithms used for more 
 than a half of a century. The designers of Rust should have known better.
 
Nov 06 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 6 November 2019 at 12:14:33 UTC, drug wrote:
 Yes, I wasn't clear on that. Of course I meant it is hard in 
 safe Rust.
 Just because in other case there is no reason to use Rust at 
 all I believe
Well, I think Rust developers have been overselling that aspect of Rust, but you could make the exact same argument for D. If you benchmark against standard C++ practices, then Rust borrowing only replaces some common cases of RAII in C++. If you cannot do it in a RAII fashion in C++, then you cannot do it in Rust's safe mode either. You can make the exact same argument for D, though. And C++!

So, all 3 languages have a safe/unsafe dichotomy that can only be resolved fully with ADTs. In C++ it is checked in code reviews and by tooling, while Rust and D codify that informal practice in the language to some extent.

In order to convince others that D is a better C++ alternative than Rust, you'd have to argue encapsulation, meta-programming, composability of ADTs, and available frameworks that provide ADTs. Basically, do a comparison that shows that you can more easily build ADTs in unsafe mode that compete with commonly used best-practice domain-specific C++ ADTs, and that using them in safe mode is more convenient than in the competing languages (with comparable memory/speed performance).
Nov 06 2019
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/6/2019 1:40 AM, drug wrote:
 On 11/6/19 8:02 AM, Walter Bright wrote:
 I'm going to speculate that Rust is a bit like pure functional programming. 
 It's a great paradigm, but is inconvenient for that last mile of the code one 
 needs to write. I've struggled with getting that last mile done with other 
 languages (Pascal), and it would consume an inordinate amount of time.
I like your simile. Some developers complain that working with hardware in Rust means huge amount of unsafe code. The last mile in Rust can be really difficult.
For comparison, the last mile in C is quite straightforward. That instantly seduced me.
 For example, in Rust it's very hard to implement a tree where nodes can point 
 to both their parent and their children, because even in the trivial case the 
 parent and child nodes borrow each other, which is impossible in Rust.
Speaking as someone whose only Rust experience is reading the manual, there seems to be a lot of dependence on library code that is implemented in unsafe Rust. Not that this is necessarily a bad thing, as I also promote the safe/unsafe code dichotomy. D needs to make @safe the default, though.
Nov 06 2019
next sibling parent reply drug <drug2004 bk.ru> writes:
On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote the 
 safe/unsafe code dichotomy.
 
And I totally agree with you. That's the good engineering approach. But the Rust community is overselling Rust's safety, and this confuses me, at least.
 D needs to make @safe the default, though.
No doubt.
Nov 06 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07.11.19 08:34, drug wrote:
 On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote the 
 safe/unsafe code dichotomy.
And I'm totally agree to you. That's the good engineering approach. But the Rust community is overselling Rust safety and this confuses me at least.
http://plv.mpi-sws.org/rustbelt/
Nov 07 2019
next sibling parent drug <drug2004 bk.ru> writes:
On 11/7/19 12:41 PM, Timon Gehr wrote:
 On 07.11.19 08:34, drug wrote:
 On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote the 
 safe/unsafe code dichotomy.
And I'm totally agree to you. That's the good engineering approach. But the Rust community is overselling Rust safety and this confuses me at least.
http://plv.mpi-sws.org/rustbelt/
I think that Rust is a good language. I'm against cargo cult.
Nov 07 2019
prev sibling next sibling parent drug <drug2004 bk.ru> writes:
On 11/7/19 12:41 PM, Timon Gehr wrote:
 On 07.11.19 08:34, drug wrote:
 On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote the 
 safe/unsafe code dichotomy.
And I'm totally agree to you. That's the good engineering approach. But the Rust community is overselling Rust safety and this confuses me at least.
http://plv.mpi-sws.org/rustbelt/
Thanks for the link. The project aims at the problem I mentioned above and could really have a big impact.
Nov 07 2019
prev sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 7 November 2019 at 09:41:05 UTC, Timon Gehr wrote:
 On 07.11.19 08:34, drug wrote:
 On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote 
 the safe/unsafe code dichotomy.
And I'm totally agree to you. That's the good engineering approach. But the Rust community is overselling Rust safety and this confuses me at least.
http://plv.mpi-sws.org/rustbelt/
"... Any realistic languages targeting this domain in the future will encounter the same problem ..." I underline _realistic_ ... Sorry Walter, I'm all with Timon on that.
Nov 07 2019
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/7/2019 7:34 AM, Paolo Invernizzi wrote:
 On Thursday, 7 November 2019 at 09:41:05 UTC, Timon Gehr wrote:
 On 07.11.19 08:34, drug wrote:
 On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote the safe/unsafe 
 code dichotomy.
And I'm totally agree to you. That's the good engineering approach. But the Rust community is overselling Rust safety and this confuses me at least.
http://plv.mpi-sws.org/rustbelt/
"... Any realistic languages targeting this domain in the future will encounter the same problem ..." I underline _realistic_  ... Sorry Walter, I'm all with Timon on that.
I don't see anything on that site that contradicts what I wrote. In particular: "Rust's core type system prohibits the aliasing of mutable state, but this is too restrictive for implementing some low-level data structures. Consequently, Rust's standard libraries make widespread internal use of "unsafe" blocks, which enable them to opt out of the type system when necessary. The hope is that such "unsafe" code is properly encapsulated, so that Rust's language-level safety guarantees are preserved. But due to Rust's reliance on a weak memory model of concurrency, along with its bleeding-edge type system, verifying that Rust and its libraries are actually safe will require fundamental advances to the state of the art." is saying the same thing.
Nov 07 2019
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.11.19 04:43, Walter Bright wrote:
 On 11/7/2019 7:34 AM, Paolo Invernizzi wrote:
 On Thursday, 7 November 2019 at 09:41:05 UTC, Timon Gehr wrote:
 On 07.11.19 08:34, drug wrote:
 On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote the 
 safe/unsafe code dichotomy.
And I'm totally agree to you. That's the good engineering approach. But the Rust community is overselling Rust safety and this confuses me at least.
http://plv.mpi-sws.org/rustbelt/
"... Any realistic languages targeting this domain in the future will encounter the same problem ..." I underline _realistic_  ... Sorry Walter, I'm all with Timon on that.
I don't see anything on that site that contradicts what I wrote. In particular: "Rust's core type system prohibits the aliasing of mutable state, but this is too restrictive for implementing some low-level data structures. Consequently, Rust's standard libraries make widespread internal use of "unsafe" blocks, which enable them to opt out of the type system when necessary. The hope is that such "unsafe" code is properly encapsulated, so that Rust's language-level safety guarantees are preserved. But due to Rust's reliance on a weak memory model of concurrency, along with its bleeding-edge type system, verifying that Rust and its libraries are actually safe will require fundamental advances to the state of the art." is saying the same thing.
Indeed, but more importantly, this group is working on verification techniques for unsafe Rust code, so it is likely that unsafe Rust libraries will be proved correct in the future. There is also this recent work on functional verification of safe Rust code: http://people.inf.ethz.ch/summersa/wiki/lib/exe/fetch.php?media=papers:prusti.pdf
Nov 08 2019
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/8/2019 2:09 AM, Timon Gehr wrote:
 On 08.11.19 04:43, Walter Bright wrote:
 I don't see anything on that site that contradicts what I wrote. In particular:

 "Rust's core type system prohibits the aliasing of mutable state, but this is 
 too restrictive for implementing some low-level data structures. Consequently, 
 Rust's standard libraries make widespread internal use of "unsafe" blocks, 
 which enable them to opt out of the type system when necessary. The hope is 
 that such "unsafe" code is properly encapsulated, so that Rust's 
 language-level safety guarantees are preserved. But due to Rust's reliance on 
 a weak memory model of concurrency, along with its bleeding-edge type system, 
 verifying that Rust and its libraries are actually safe will require 
 fundamental advances to the state of the art."

 is saying the same thing.
Indeed, but more importantly, this group is working on verification techniques for unsafe Rust code, so it is likely that unsafe Rust libraries will be proved correct in the future.
I read it as ensuring the interface to the unsafe code is correct, not the unsafe code itself. I.e. "is properly encapsulated". After all, if the unsafe code can be proved correct, then it is no longer unsafe, and Rust's ownership/borrowing system can be dispensed with entirely as unnecessary for safety. I'm happy to let the CS PhDs work on those problems, as I am way out of my depth on it. I've convinced myself that the Ownership/Borrowing notion is sound, but I have no idea how to formally prove it.
 There is also this recent work on functional verification of safe Rust code: 
 http://people.inf.ethz.ch/summersa/wiki/lib/exe/fetch.php?media=papers:prusti.pdf
I don't see any reason why the techniques used in the paper wouldn't work on D, given O/B. Note that the paper says: "We do not address unsafe code in this paper".
Nov 08 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 10:25:44 UTC, Walter Bright wrote:
 After all, if the unsafe code can be proved correct, then it is 
 no longer unsafe, and Rust's ownership/borrowing system can be 
 dispensed with entirely as unnecessary for safety.
They probably mean that it is verified with human-provided guidance. A human must annotate the library with carefully chosen asserts that the verifier will prove one by one until they can all be disposed of. So you have to understand the verification engine and carefully "nudge" it until it succeeds. So, this is mostly suitable for the standard library and simple third-party libraries, but not for bleeding-edge applications that keep changing...
Nov 08 2019
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 08.11.19 11:25, Walter Bright wrote:
 On 11/8/2019 2:09 AM, Timon Gehr wrote:
 On 08.11.19 04:43, Walter Bright wrote:
 I don't see anything on that site that contradicts what I wrote. In 
 particular:

 "Rust's core type system prohibits the aliasing of mutable state, but 
 this is too restrictive for implementing some low-level data 
 structures. Consequently, Rust's standard libraries make widespread 
 internal use of "unsafe" blocks, which enable them to opt out of the 
 type system when necessary. The hope is that such "unsafe" code is 
 properly encapsulated, so that Rust's language-level safety 
 guarantees are preserved. But due to Rust's reliance on a weak memory 
 model of concurrency, along with its bleeding-edge type system, 
 verifying that Rust and its libraries are actually safe will require 
 fundamental advances to the state of the art."

 is saying the same thing.
Indeed, but more importantly, this group is working on verification techniques for unsafe Rust code, so it is likely that unsafe Rust libraries will be proved correct in the future.
I read it as ensuring the interface to the unsafe code is correct, not the unsafe code itself. I.e. "is properly encapsulated". ...
Read the next sentence after that. In general, feel free to read it any way you like, but that's not going to change the goals of the project, which correspond to my summary. https://plv.mpi-sws.org/rustbelt/popl18/paper.pdf Last sentences in abstract: "Our proof is extensible in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe extension to the language. We have carried out this verification for some of the most important libraries that are used throughout the Rust ecosystem."
 After all, if the unsafe code can be proved correct, then it is no 
 longer unsafe, and Rust's ownership/borrowing system can be dispensed 
 with entirely as unnecessary for safety.
 ...
That stance makes little sense. The proofs are manual. As an analogy, it's a bit like saying "if it is realistic to audit @trusted functions, we can just mark the entire codebase @trusted, obtaining memory safe code without any of the @safe restrictions". The difference is that here, the proofs can be checked by the computer, but that doesn't make them much easier to provide.
 I'm happy to let the CS Phd's work on those problems, as I am way out of 
 my depth on it.
 ...
I understand, but I can't help but notice that you often choose to contradict what they are saying on those issues anyway. The bird's-eye perspective you would get from an understanding of program verification and dependent type theory would help a lot for the design of sound and expressive type system extensions, such as O/B for D.
 I've convinced myself that the Ownership/Borrowing notion is sound, but 
 have no idea how to formally prove it.
 ...
One way is to embed it into separation logic. It's in the paper below.
 There is also this recent work on functional verification of safe Rust 
 code: 
 http://people.inf.ethz.ch/summersa/wiki/lib/exe/fetch.php?media=papers:prusti.pdf 
I don't see any reason why the techniques used in the paper wouldn't work on D,
Indeed, separation logic "works" on any language: https://vst.cs.princeton.edu/download/VC.pdf https://vst.cs.princeton.edu/veric/ The reason why you want both @safe and @system code is that most programmers are unable or unwilling to manually carry out memory safety proofs for all the code they write.
 given O/B.
 ...
Sure. Given O/B of the same caliber, you would get a similar simplification of correctness proofs. But right now we don't have correctness proofs at all, and a lot of times when program verification came up on this newsgroup it was rejected with a vague reference to the halting problem.
 Note that the paper says: "We do not address unsafe code in this paper".
(I'm aware, having read the paper. Which is of course why I described it as "functional verification of _safe_ Rust code". They also refer to RustBelt for verification of unsafe code.)
Nov 08 2019
prev sibling parent reply Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 10:25:44 UTC, Walter Bright wrote:
 
 Indeed, but more importantly, this group is working on 
 verification techniques for unsafe Rust code, so it is likely 
 that unsafe Rust libraries will be proved correct in the 
 future.
I read it as ensuring the interface to the unsafe code is correct, not the unsafe code itself. I.e. "is properly encapsulated". After all, if the unsafe code can be proved correct, then it is no longer unsafe, and Rust's ownership/borrowing system can be dispensed with entirely as unnecessary for safety. I'm happy to let the CS Phd's work on those problems, as I am way out of my depth on it. I've convinced myself that the Ownership/Borrowing notion is sound, but have no idea how to formally prove it.
 There is also this recent work on functional verification of 
 safe Rust code: 
 http://people.inf.ethz.ch/summersa/wiki/lib/exe/fetch.php?media=papers:prusti.pdf
I don't see any reason why the techniques used in the paper wouldn't work on D, given O/B. Note that the paper says: "We do not address unsafe code in this paper".
The AdaCore/SPARK folks have done a lot of related work here that might be useful: https://www.adacore.com/uploads/techPapers/Safe-Dynamic-Memory-Management-in-Ada-and-SPARK.pdf They took the pointer borrowing idea from Rust and plugged it into the SPARK solver (Why3), so formal correctness is guaranteed.

I am very bullish on formal verification in languages going forward. The one big advantage that D has in this arena is DbC. I was disappointed in your NWCPP talk that you listed it as a "miss." :) The potential here is huge! I think it's entirely possible that we can do _compile-time_ proofs that D contracts can be satisfied. With sufficient pre- and post-conditions on "unsafe" functions (and maybe, as somebody pointed out, some asserts along the way to "nudge" the solver in the right direction), you can prove with an automated solver that nothing bad happens. A couple of other contract types, like loop and type invariants, might also be necessary (they are in SPARK, at least). It would be pretty cool if you had to mark a function as unsafe only if the compiler (with some help) couldn't prove otherwise.

If Rust is going the route of a team of pros doing manual proofs in Coq to demonstrate the absence of errors in "unsafe" code, they're in for a world of pain, frankly. My understanding is that the formal verification efforts in Rust were more geared towards proving that the Rust language/compiler themselves were sound, rather than giving Rust users a way of proving correctness of their own code, but I may be mistaken. The paper you linked shows the opposite, though.

I'm not an expert, and it's only a hunch, but I suspect that DbC + CTFE could make formal verification of D code a lot easier to implement than trying to graft it onto other languages. The PhD students can tell me if I'm wrong :)

My take is that formal verification is sort of where AI/ML was 20 years ago - academically interesting but of little practical use. Now it's a big deal, and I think in 10-20 years FV will be too.

-Doc
Nov 08 2019
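For readers unfamiliar with the D contract features referred to above, here is a minimal sketch of in/out contracts and an invariant; the functions and bounds are invented, and today these contracts are run-time checks rather than compile-time proofs.

```d
// Design by contract in D: `in` states a precondition, `out` a
// postcondition, and `invariant` a condition that must hold around
// public member-function calls. Today these are run-time checks; the
// discussion above is about discharging them statically instead.
int saturatingDouble(int x)
in (x >= 0, "precondition: non-negative input")
out (r; r >= x)
{
    return x > int.max / 2 ? int.max : 2 * x;
}

struct Counter
{
    private int count;

    invariant (count >= 0);  // type invariant

    void bump() { ++count; }
    void drop() in (count > 0) { --count; }
}

void main()
{
    assert(saturatingDouble(21) == 42);

    Counter c;
    c.bump();
    c.drop();
}
```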
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 16:10:32 UTC, Doc Andrew wrote:
 With sufficient pre & post-conditions on "unsafe" functions, 
 (and maybe, as somebody pointed out, some asserts along the way 
 to "nudge" the solver in the right direction), you can prove at 
 that nothing bad happens with an automated solver.
If you talk only about memory safety and require that programs are written in a certain way, then maybe yes. In the general case this will not be possible though. You certainly need to manually guide the verifier (e.g. "nudge" the solver with asserts). The search space is vast. Keep in mind that when humans do proofs in math they do a lot of pattern matching, use a high level conceptualization of the problem space and only do it on "toy-sized" problems.
 I'm not an expert, and it's only a hunch, but I suspect that 
 DbC + CTFE could make formal verification of D code a lot 
 easier to implement than trying to graft it on to other 
 languages.
Formal verification is never easy, but certainly less difficult when the programming language is designed with that in mind... D is too flexible. And that's only for single-threaded programs. Add multi-threading and you basically need the programming language to be designed for it to get anywhere realistic. Such languages do exist, but they are not going to give you C-performance.
 My take is that formal verification is sort of where AI/ML was 
 20 years ago - academically interesting but of little practical 
 use. Now it's a big deal, and I think in 10-20 years, FV will 
 be too.
I think FV is where AI was 70 years ago... But yeah, maybe neural networks can be used for matching up verification problems with strategies so that the verifier succeeds more frequently, but the fact that an automated verifier will frequently fail to reach any conclusion will remain for practical programming that aims for very high performance in the foreseeable future. A more likely future is that processing performance will be in abundance, so you don't have to write programs that maximize performance and can switch to languages and algorithms that are easier to prove correct...
Nov 08 2019
parent reply Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 16:33:48 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 8 November 2019 at 16:10:32 UTC, Doc Andrew wrote:
 With sufficient pre & post-conditions on "unsafe" functions, 
 (and maybe, as somebody pointed out, some asserts along the 
 way to "nudge" the solver in the right direction), you can 
 prove at that nothing bad happens with an automated solver.
If you talk only about memory safety and require that programs are written in a certain way, then maybe yes. In the general case this will not be possible though. You certainly need to manually guide the verifier (e.g. "nudge" the solver with asserts). The search space is vast. Keep in mind that when humans do proofs in math they do a lot of pattern matching, use a high level conceptualization of the problem space and only do it on "toy-sized" problems.
Sure, one reason SPARK is amenable to FV is because the language restrictions and richness of Ada types does a lot to restrict that search space, the same way dependently-typed languages do.
 I'm not an expert, and it's only a hunch, but I suspect that 
 DbC + CTFE could make formal verification of D code a lot 
 easier to implement than trying to graft it on to other 
 languages.
Formal verification is never easy, but certainly less difficult when the programming language is designed with that in mind... D is too flexible.
For sure - but there may be a _useful_ formally-verifiable subset of D buried in there somewhere, kind of like MISRA C.
 And that's only for single-threaded programs. Add 
 multi-threading and you basically need the programming language 
 to be designed for it to get anywhere realistic. Such languages 
 do exist, but they are not going to give you C-performance.
With the way we do multi-threading now, where anything goes, absolutely. SPARK can formally verify multi-threaded tasks, but using a fairly restrictive model (Ravenscar). I don't think performance goes absolutely in the tank with Ravenscar, but yeah, if you don't worry about synchronization, ignore race conditions, and just hope for the best, then sure, you're going to win in the benchmarks :)

I look at it a little differently - multi-threaded code is so tough to get right that some sort of verified correctness for it is becoming _essential_. (see: https://www.phoronix.com/scan.php?page=news_item&px=Google-KCSAN-Sanitizer)

It may be that a slightly different threading model, perhaps like that thread "nursery" idea which has been posted here before (https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/) will be a magic bullet... or not.
 My take is that formal verification is sort of where AI/ML was 
 20 years ago - academically interesting but of little 
 practical use. Now it's a big deal, and I think in 10-20 
 years, FV will be too.
I think FV is where AI was 70 years ago...
Way too pessimistic! :)
 But yeah, maybe neural networks can be used for matching up 
 verification problems with strategies so that the verifier 
 succeeds more frequently, but the fact that an automated 
 verifier frequently will fail to reach any conclusion will 
 remain for practical programming that aim for very high 
 performance in the forseeable future.
One thing that I've looked around for (without much luck) is just using GPUs to speed up the solvers themselves. Apparently parallelizing SAT solvers isn't the easiest thing in the world, but it would be a neat research topic.
 A more likely future is that processing performance will be in 
 abundance and that you therefore don't have to write programs 
 that maximize performance and can switch to languages and 
 algorithms that are easier to prove correct...
The cool thing is that, with the right proofs in place, you can actually get _faster_ code when formally verified, because all of the normal sanity-checks you do in code can get shifted to compile-time. You might even be able to reduce or eliminate thread blocking, etc. with the right model (and probably some HW assistance). Your point is well taken, but I do hope that we can get proven correctness of bare-metal code without having to write something like Liquid Haskell ;) -Doc
Nov 08 2019
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 17:07:59 UTC, Doc Andrew wrote:
 With the way we do multi-threading now, where anything goes, 
 absolutely. SPARK can formally verify multi-threaded tasks but 
 using a fairly restrictive model (Ravenscar). I don't think 
 performance goes absolutely in the tank with Ravenscar, but 
 yeah, if you don't worry about synchronization, ignore race 
 conditions, and just hope for the best, then sure, you're going 
 to win in the benchmarks :)
Ok, so Ravenscar requires tasks to be dispatched in a FIFO manner, but you should be able to do better than that if you only get to use high level synchronization. For instance there are people who do verification of distributed communication protocols, so if you use some kind of message-passing scheme (or equivalent) then those approaches would apply.
 It may be that a slightly-different threading model, perhaps 
 like that thread "nursery" idea which has been posted here 
 before
«Structured concurrency» reminds me of rendezvous, i.e. that threads wait for each other at certain points, but less flexible. I'm not really convinced that a "nursery" is easier to deal with than futures/promises, in the general case, though.
 One thing that I've looked around for (without much luck) is 
 just using GPUs to speed up the solvers themselves. Apparently 
 parallelizing SAT solvers isn't the easiest thing in the world, 
 but it would be a neat research topic.
There are papers on it, but solvers are sensitive to the strategies (heuristics) used... so might be difficult to get right on a GPU. You'd probably have to use some kind of hybrid. I also see that there are papers on FPGA SAT solvers.
 The cool thing is that, with the right proofs in place, you can 
 actually get _faster_ code when formally verified, because all 
 of the normal sanity-checks you do in code can get shifted to 
 compile-time.
Sure, like indexing of arrays (array bounds checks).
 Your point is well taken, but I do hope that we can get proven 
 correctness of bare-metal code without having to write 
 something like Liquid Haskell ;)
Liquid Haskell looks interesting, though. You might find this interesting: http://leino.science/dafny-power-user/ It gives some hints of where such languages might go.
Nov 08 2019
next sibling parent Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 17:41:06 UTC, Ola Fosheim Grøstad 
wrote:
 Your point is well taken, but I do hope that we can get proven 
 correctness of bare-metal code without having to write 
 something like Liquid Haskell ;)
Liquid Haskell looks interesting, though. You might find this interesting: http://leino.science/dafny-power-user/ It gives some hints of where such languages might go.
Thanks for the link! I've looked briefly at Dafny in the past and liked what I saw. Between Dafny, F* and the Lean prover, MS research is doing some pretty neat work in the field.
Nov 08 2019
prev sibling parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Friday, 8 November 2019 at 17:41:06 UTC, Ola Fosheim Grøstad 
wrote:
 «Structured concurrency» reminds me of rendezvous, i.e. that 
 threads wait for each other at certain points, but less 
 flexible.
What I found insightful is the idea that without structured concurrency, functions can just spawn tasks with no obligation to wait for them or clean them up. What you find in practice is that those responsibilities are often spread over several functions. The result is that even if you have clean code, the starts and ends of those tasks are woven like spaghetti through your code. Structured concurrency introduces blocks for concurrency, similar to those introduced by structured programming. You can always be sure that when a nursery is exited, all its tasks have properly ended. I, for one, have had many occasions where I had a rogue task or a leaking websocket. With structured concurrency, writing that code is easier, but more importantly, reviewing it is easier as well.
 I'm not really convinced that a "nursery" is easier to deal 
 with than futures/promises, in the general case, though.
You still have futures with a nursery. It is just that you can't leave the nursery block without all concurrent tasks completing, timing out or being canceled.
Nov 08 2019
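A minimal sketch of the nursery idea in D, using only core.thread; the Nursery type is invented for illustration and is not an existing library, but it shows the key property: the spawning scope cannot be left until every task it started has finished.

```d
import core.thread : Thread;
import std.stdio : writeln;

// A toy "nursery": every task spawned through it is joined when the
// nursery goes out of scope, so no task can outlive the enclosing block.
struct Nursery
{
    private Thread[] tasks;

    @disable this(this);  // no copies; the enclosing scope is the owner

    void start(void delegate() work)
    {
        auto t = new Thread(work);
        t.start();
        tasks ~= t;
    }

    ~this()
    {
        // Leaving the scope waits for every spawned task to finish.
        foreach (t; tasks)
            t.join();
    }
}

void main()
{
    {
        Nursery n;
        n.start(delegate { writeln("task 1"); });
        n.start(delegate { writeln("task 2"); });
    } // both tasks are guaranteed to have completed here

    writeln("after the nursery block");
}
```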
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 20:28:32 UTC, Sebastiaan Koppe 
wrote:
 I for one have had many occasions where I had a rogue task or a 
 leaking websocket.

 With structured concurrency writing that code is easier, but 
 more importantly, reviewing is easier as well.
Right, code with multiple futures can look a bit messy as the management of different futures and the main flow often ends up being interleaved in the source code.
 You still have futures with a nursery. It is just that you 
 can't leave the nursery block without all concurrent tasks 
 completing, timing out or being canceled.
So one can easily build something nursery-like on top of a futures library that provides a wait_for_all(…) function, or like OS X dispatch queues with dispatch_group_wait(…). I can see how that could be useful sometimes, for sure.
Nov 08 2019
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 17:07:59 UTC, Doc Andrew wrote:
 For sure - but there may be a _useful_ formally-verifiable 
 subset of D buried in there somewhere, kind of like MISRA C.
I guess it is worth mentioning, though, that all integers in D follow the semantics of modular math, so you would not be able to conduct proofs with the assumption that you are working with natural integers. So you cannot assume that "x+1 > x" for any integer type in D, which might make many proofs more difficult. Modular math can be handled with bit-blasting (turning each bit into a logic expression and using SAT), but maybe with less success.
Nov 08 2019
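A tiny illustration of that point: fixed-width integers in D wrap around by definition, so x + 1 > x does not hold in general.

```d
void main()
{
    int x  = int.max;
    uint y = uint.max;

    // Fixed-width integers in D wrap around (two's complement), so a
    // prover cannot assume that x + 1 > x.
    assert(x + 1 == int.min);
    assert(y + 1 == 0);
    assert(!(x + 1 > x));
}
```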
parent reply Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 17:55:41 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 8 November 2019 at 17:07:59 UTC, Doc Andrew wrote:
 For sure - but there may be a _useful_ formally-verifiable 
 subset of D buried in there somewhere, kind of like MISRA C.
I guess it is worth mentioning though that all integers in D follow the semantics of modular math, so you would not be able to conduct proofs with the assumption that you work with natural integers. So you cannot assume that "x+1 > x" for any type in D, that might make many proofs more difficult. Modular math can be handled with bit-blasting (turning each bit to a logic expression and use SAT), but maybe with less success.
It works if you explicitly mark overflow as an error. You can try to prove that x <= x.max for a given path, and if you can't, then you just flag a warning. That's what SPARK does, at least. If the modular arithmetic is desired you'd probably have to manually put the overflow check and reassignment to 0 (or whatever) in there. That's one kind of annoying thing I discovered about Rust when working on some crypto code - having to write Wrapping types all over the place. The bad thing there was that the overflow checks were only in Debug mode, so in Release mode the code worked as intended without Wrapping. That was a few years ago, so things may be a little different now.
Nov 08 2019
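For comparison, a small sketch of making overflow an explicit error in D using the core.checkedint helpers from druntime; this is a run-time check rather than a proof, and the checkedAdd wrapper is made up for illustration.

```d
import core.checkedint : adds, muls;
import std.stdio : writeln;

// `adds` performs the wrapping addition and raises the flag when the
// mathematical result did not fit the type; here overflow is treated
// as an error via assert. The checkedAdd wrapper itself is made up.
int checkedAdd(int a, int b)
{
    bool overflow = false;
    const r = adds(a, b, overflow);
    assert(!overflow, "signed addition overflowed");
    return r;
}

void main()
{
    writeln(checkedAdd(40, 2));  // 42
    // checkedAdd(int.max, 1);   // would trip the assert

    bool ov = false;
    const wrapped = muls(1 << 20, 1 << 20, ov);
    writeln(wrapped, " (overflowed: ", ov, ")");  // wrapped value, true
}
```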
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 18:11:39 UTC, Doc Andrew wrote:
 It works if you explicitly mark overflow as an error. You can 
 try to prove that x <= x.max for a given path, and if you 
 can't, then you just flag a warning.
So you manually assert it then? I guess you could use a templated type with asserts built in. But then the solver has to figure out when to designate the type as an integer and when to designate it as a bitvector. I guess there is some room for that... perhaps. Another approach is to first prove the code correct for natural numbers and then try to prove that there is some upper limit and select an integer size that can contain that upper limit (e.g. 64 bits vs 32 bits). I am not sure about Dafny, but I think Whiley backends do that.
Nov 08 2019
parent reply Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 18:25:02 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 8 November 2019 at 18:11:39 UTC, Doc Andrew wrote:
 It works if you explicitly mark overflow as an error. You can 
 try to prove that x <= x.max for a given path, and if you 
 can't, then you just flag a warning.
So you manually assert it then? I guess you could use a templated type with asserts builtin. But then the solver have to figure out when to designate the type as an integer and when to designate it as a bitvector. I guess there is some room for that... perhaps. Another approach is to first prove the code correct for natural numbers and then try to prove that there is some upper limit and select an integer size that can contain that upper limit (e.g. 64 bits vs 32 bits). I am not sure about Dafny, but I think Whiley backends do that.
With SPARK you don't have to manually assert the checks; it does the overflow checks as part of its proof automatically. The neat thing is that you define new modular types with their range of allowable values, so as you prove the code, you show that those bounds aren't exceeded. It's basically a type invariant contract as part of the type definition itself (as I understand it).
Nov 08 2019
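A loose D analogue of that idea, as a sketch: a range-constrained integer type whose bounds are part of the type and are enforced by contracts and an invariant at run time (where SPARK would discharge the corresponding checks statically). The Bounded type is invented for illustration.

```d
// A hypothetical range-constrained integer, loosely analogous to an Ada
// range type: the allowed bounds are part of the type, and contracts plus
// an invariant check them at run time (SPARK would discharge such checks
// statically as part of its proof).
struct Bounded(int lo, int hi)
{
    private int value = lo;

    invariant (value >= lo && value <= hi);

    this(int v)
    in (v >= lo && v <= hi)
    {
        value = v;
    }

    int get() const { return value; }

    void set(int v)
    in (v >= lo && v <= hi)
    {
        value = v;
    }
}

void main()
{
    alias Percent = Bounded!(0, 100);

    auto p = Percent(42);
    p.set(100);
    assert(p.get() == 100);
    // p.set(101);  // would fail the precondition at run time
}
```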
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 18:39:25 UTC, Doc Andrew wrote:
 With SPARK you don't have to manually assert the checks, it 
 does the overflow checks as part of it's proof automatically. 
 The neat thing is that you define new modular types with their 
 range of allowable values, so as you prove the code, you show 
 that those bounds aren't exceeded. It's basically a type 
 invariant contract as part of the type definition itself (as I 
 understand it).
Hm, seems to me that it has signed integers with bounds that are non-modular and unsigned integers that are modular: https://learn.adacore.com/courses/intro-to-ada/chapters/strongly_typed_language.html#integers So, in that case a prover could model bounded integers as integers with a constraint, and modular unsigned integers as bitvectors.
Nov 08 2019
parent reply Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 18:51:42 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 8 November 2019 at 18:39:25 UTC, Doc Andrew wrote:
 With SPARK you don't have to manually assert the checks, it 
 does the overflow checks as part of it's proof automatically. 
 The neat thing is that you define new modular types with their 
 range of allowable values, so as you prove the code, you show 
 that those bounds aren't exceeded. It's basically a type 
 invariant contract as part of the type definition itself (as I 
 understand it).
Hm, seems to me that it has signed integers with bounds that are non-modular and unsigned integers that are modular: https://learn.adacore.com/courses/intro-to-ada/chapters/strongly_typed_language.html#integers So, in that case a prover could model bounded integers as integers with a constraint, and modular unsigned integers as bitvectors.
I'm WAAAYYY above my pay grade here, but I _think_ the distinction may depend on the underlying logic that the solver is using. Something like Coq, where your naturals are built on successor types, would probably have a hard time with the modular arithmetic, but it might just not be a big deal for the Why3 solvers, which I _think_ are using Hoare logic, since that's what HOL/ML is built on. I'm not really an expert in Hoare logic though, and know just enough Coq to embarrass myself :) The Why3 solver will also let you use Coq to do a manual proof in the event that the automated prover can't figure it out, so I can't say for sure.
Nov 08 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 19:04:38 UTC, Doc Andrew wrote:
 I'm not really an expert in the Hoare logic though
Roughly: Hoare logic is a set of rules for going from the precondition to the postcondition over a single statement. Each rule specifies what the postcondition should be given the precondition (and vice versa).
 myself :) The Why3 solver will also let you use Coq to do a 
 manual proof in the event that the automated prover can't 
 figure it out, so I can't say for sure.
Hm, that's interesting. I've bookmarked Why3, now… 4 l8r… :-)
Nov 08 2019
parent Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 19:11:52 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 8 November 2019 at 19:04:38 UTC, Doc Andrew wrote:
 I'm not really an expert in the Hoare logic though
Roughly; Hoare logic is a set of rules for going from the precondition to the postcondition over a single statement. Each rule specify what the post condition should be given the precondition (and vice versa).
 myself :) The Why3 solver will also let you use Coq to do a 
 manual proof in the event that the automated prover can't 
 figure it out, so I can't say for sure.
Hm, that's interesting. I've bookmarked Why3, now… 4 l8r… :-)
I think we've probably given the next batch of GSoC candidates something to think about at least, haha.
Nov 08 2019
prev sibling parent reply Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 18:39:25 UTC, Doc Andrew wrote:
 On Friday, 8 November 2019 at 18:25:02 UTC, Ola Fosheim Grøstad 
 wrote:
 On Friday, 8 November 2019 at 18:11:39 UTC, Doc Andrew wrote:
 It works if you explicitly mark overflow as an error. You can 
 try to prove that x <= x.max for a given path, and if you 
 can't, then you just flag a warning.
So you manually assert it then? I guess you could use a templated type with asserts builtin. But then the solver have to figure out when to designate the type as an integer and when to designate it as a bitvector. I guess there is some room for that... perhaps. Another approach is to first prove the code correct for natural numbers and then try to prove that there is some upper limit and select an integer size that can contain that upper limit (e.g. 64 bits vs 32 bits). I am not sure about Dafny, but I think Whiley backends do that.
With SPARK you don't have to manually assert the checks; it does the overflow checks as part of its proof automatically. The neat thing is that you define new modular types with their range of allowable values, so as you prove the code, you show that those bounds aren't exceeded. It's basically a type invariant contract as part of the type definition itself (as I understand it).
It's probably worth noting too that SPARK just translates the type definitions it sees into Why3 ML code, and uses that in the Why3 meta-solver for the proof. It separates proof from compilation, so if the proof checks out, then you can compile into bare-metal machine-code. This is a little different than a lot of the FV languages which take an ML/FP-ish language (with GC, etc.) and then have a C backend for it (with a runtime, usually).
Nov 08 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 8 November 2019 at 18:53:49 UTC, Doc Andrew wrote:
 It's probably worth noting too that SPARK just translates the 
 type definitions it sees into Why3 ML code, and uses that in 
 the Why3 meta-solver for the proof.
Ah, got it :) I haven't looked at Why3 yet.
 This is a little different than a lot of the FV languages which 
 take an ML/FP-ish language (with GC, etc.) and then have a C 
 backend for it (with a runtime, usually).
Yeah, high level FV languages can just do like Python and have unbounded integers and such.
Nov 08 2019
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/8/2019 8:10 AM, Doc Andrew wrote:
 I am very bullish on formal verification in languages going forward. The one
big 
 advantage that D has in this arena is DbC. I was disappointed in your NWCPP
talk 
 that you listed it as a "miss." :) The potential here is huge! I think it's 
 entirely possible that we can do _compile-time_ proofs that D contracts can be 
 satisfied.
assert()s are the contracts that matter, not the in and out contracts, which can be replaced with assert()s. In no way did I mean to imply that assert()s were a mistake!

I've thought for 35 years that assert()s can be the input to a program prover, i.e. it's nothing new, but I do not have the expertise to figure out how such proofs might work, other than simplistic cases.
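To spell out the equivalence with a trivial sketch (function names are arbitrary):

// With in/out contracts:
int toPercent(int x)
in (x >= 0 && x <= 100)
out (r; r >= 0)
{
    return x;
}

// The same checks as plain asserts in the prolog and epilog:
int toPercent2(int x)
{
    assert(x >= 0 && x <= 100); // what the in contract checked
    int r = x;
    assert(r >= 0);             // what the out contract checked
    return r;
}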
Nov 08 2019
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 09/11/2019 11:41 AM, Walter Bright wrote:
 
 I've thought for 35 years that assert()s can be the input to a program 
 prover, i.e. it's nothing new, but I do not have the expertise to figure 
 out how such proofs might work, other than simplistic cases.
Sounds like some universities need to be contacted, perhaps a future PhD student would be interested in doing the theory.
Nov 08 2019
next sibling parent reply Doc Andrew <x x.com> writes:
On Saturday, 9 November 2019 at 02:18:41 UTC, rikki cattermole 
wrote:
 Sounds like some universities need to be contacted, perhaps a 
 future PhD student would be interested in doing the theory.
It could be done in baby steps, too - maybe start with a tool that just shows no bad behavior occurs in a simple expression. Then show the same for entire pure functions. From there someone could work up to proving that pre/post-conditions in pure functions are satisfied. Then maybe go on to do invariants, then work up to @safe @nogc code, and so on.

-Doc
Nov 08 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 9 November 2019 at 02:55:37 UTC, Doc Andrew wrote:
 It could be done in baby steps, too - maybe start with a tool 
 that just shows no bad behavior occurs in a simple expression.
Exactly right.

PREREQUISITE:

Add something like «assume» to the language and make the optimizer add those to its fact database. For LLVM this should be trivial.

Then add «assume» clauses to the standard library that are manually proven correct and also add an option to convert assume to assert for additional unit-testing.

This allows you to get started working on the optimizer and figure out how to make good use of the extra facts.

Also, optimizing the standard library affects all compiled programs... so even small gains might be worthwhile.

STAGE 2:

Add ensure-statements to the language. Those will be converted to assume-statements if you throw a switch, but display a warning that all ensure-statements have to be proven correct by an external verifier before compilation. Thus the entire burden has been moved to an external tool.

This would encourage third parties to work on tools to translate D code into verifiable code to be consumed by standard verification tooling.

Extension: create an online certification service that allows library code to carry a certificate that the library code has been verified. So that all their ensure statements can be turned into assume without throwing a switch. Certificates can be revoked (in case you find a bug in the verification process).

STAGE 3:

Then break all dynamic checks out as asserts by transforming the code. So that all bounds checks turn up as asserts. Break composite asserts into individual asserts.

Verifying individual functions:
step 1.1: use data-flow to convert as many asserts as possible into assumes
step 1.2: use an external verifier to prove others if possible

If those fail, try to do the verification at the call site:
step 2.1: create a unique copy of the function for the call site
step 2.2: use dataflow to bring additional assumed preconditions into the called function
step 2.3: go to step 1.1
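A strawman for the PREREQUISITE step, just to show the shape of it (the assume() helper below is made up, nothing like it exists in druntime today, and the optimizer plumbing is hand-waved):

// Hypothetical helper, not an existing druntime/Phobos symbol.
version (CheckAssumes)
{
    // While testing, an assumption is just another assert.
    void assume(bool cond, string msg = "assumption violated")
    {
        assert(cond, msg);
    }
}
else
{
    // In a real implementation this would hand the fact to the optimizer
    // (e.g. lower to llvm.assume in an LDC backend); here it is only a
    // placeholder that marks the failing branch as unreachable.
    pragma(inline, true)
    void assume(bool cond, string msg = "")
    {
        if (!cond) assert(0);
    }
}

// Example use: the length fact has been proven (or certified) elsewhere.
void scaleBy4(uint[] a)
{
    assume(a.length % 4 == 0);
    foreach (ref x; a)
        x *= 4;
}

Compiling with -version=CheckAssumes would then be the "convert assume to assert for additional unit-testing" switch described above.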
 Then show the same for entire  pure functions. From there 
 someone could work up to proving that pre/post-conditions in 
  pure functions are satisfied. Then maybe go on to do 
 invariants, then work up to @safe @nogc code, and so on.
Yup, start with the easy ones. Like pure functions provided as parameters to functions that work on arrays, which happens often. Could even limit it to the ones used in the standard library.
Nov 09 2019
parent reply Doc Andrew <x x.com> writes:
On Saturday, 9 November 2019 at 11:19:08 UTC, Ola Fosheim Grøstad 
wrote:
 PREREQUISITE:

 Add something like «assume» to the language and make the 
 optimizer add those to its fact database. For LLVM this should 
 be trivial.
As a stop-gap we might be able to use an @assume UDA, but I do think "assume" could be added as a reserved keyword in the near future in anticipation...
 Then add «assume» clauses to the standard library that are 
 manually proven correct and also add an option to convert 
 assume to assert for additional unit-testing.
I like the idea of having a compiler flag to treat assumes as asserts, maybe even in release mode?
 This allows you to get started working on the optimizer and 
 figure out how to make good use of the extra facts.
That's probably a much longer-term goal, but just being able to move normal error-checking into contracts and depend on them in release code is a big win IMO.
 Also, optimizing the standard library affects all compiled 
 programs... so even small gains might be worthwhile.


 STAGE 2:

 Add ensure-statements to the language. Those will be converted 
 to assume-statements if you throw a switch, but display a 
 warning that all ensure-statements have to be proven correct by 
 an external verifier before compilation. Thus the entire burden 
 has been moved to an external tool.
This might be more difficult to enforce without some tight compiler integration. An attribute like @ensured or @certified could be the "gold standard" of safe D code, which would require a proof of no run-time errors and all contracts. Without the proof, the compiler could throw an error for functions with this attribute. Or, just treat it as a UDA and let the proof be a separate thing and let the user sort it all out. SPARK does the latter, but I like the idea of better compiler integration and support. We'd probably need a DIP to hash this out.
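Just to make the UDA route concrete (the @certified attribute and the function here are made up; today the compiler would treat @certified as an ordinary user-defined attribute and ignore it):

enum certified; // hypothetical marker UDA for an external proof tool

@certified @safe pure
ubyte clampToByte(int x)
in (x >= 0)
out (r; r <= ubyte.max)
{
    return x > ubyte.max ? ubyte.max : cast(ubyte) x;
}

An external tool could then scan for @certified, try to discharge the contracts, and either emit a proof artifact or complain.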
 This would encourage third parties to work on tools to 
 translate D code into verifiable code to be consumed by 
 standard verification tooling.

 Extension: create an online certification service that allows 
 library code to carry a certificate that the library code has 
 been verified. So that all their ensure statements can be 
 turned into assume without throwing a switch. Certificates can 
 be revoked (in case you find a bug in the verification process).
This might tie in with some of the work going on with reproducible builds, but I love the idea of certs for known good object files. Could you use a Merkle tree built up by the compiler? So the source file hashes are baked into the object file (maybe just have a special symbol), then executables would contain a hash of all the object hashes. This is sort of an ancillary project to FV, but if we are talking high-integrity software, it might be an easy win and a cool feather in D's cap.
 STAGE 3:

 Then break all dynamic checks out as asserts by transforming 
 the code. So that all bounds checks turn up as asserts. Break 
 composite asserts into individual asserts.
Using something like the @certified (or @ensured, or @verified, or whatever) attribute, we could just turn off run-time checks in these functions if a proof exists, could be as simple as a source.d.proof file with the function name (and hash?) in it.
 Verifying individual functions:
 step 1.1: use data-flow to convert as many asserts as possible 
 into assumes
I'm not sure about this one. Usually assumes are big red flags, they're the FV version of unsafe - "trust me, I know what I'm doing", that sort of thing. I might have misunderstood your intent here though.
 step 1.2: use an external verifier to prove others if possible

 If those fail, try to do the verification at the call site:
 step 2.1: create a unique copy of the function for the call site
 step 2.2: use dataflow to bring additional assumed 
 preconditions into the called function
 step 2.3: go to step 1.1
This is how SPARK works, even if you can't prove a (normal Ada) function body, because of inline asm, or pointer arithmetic or something, you can still specify pre and post-contracts on the interface. I think there's a little bit of a @trusted feel to it, since the post-conditions on a non-SPARK function become sort of an assumption that the function does what it claims, given the preconditions hold. I think SPARK may leave these as run-time checks (normal Ada behavior, normal D _Debug_ behavior), which you'd have to do to have any sort of confidence in your proof.

This is where things get a little messy. Verified functions calling non-verified functions with post-condition checks would need these checks enabled even in release mode, I would think.

The cool thing is that D _today_ has all the syntax it needs to start working on tooling for this with no language changes. Short-term, it may be of some benefit to reserve a couple keywords like assume or ensure, maybe set aside some attributes like @verified or @certified or @ensured. Medium term, allowing invariant blocks to be associated with specific loops would probably be needed, maybe some extra compiler support for ensuring the presence of proofs.

Longer term, I really like the idea of a "trusted language base" with certificates and object hashes that a maintainer can use to ensure complete accounting of everything that went into a particular build. I'm not as concerned as some are with proven builds and linking, but there may be high-integrity requirements in the future where this sort of thing is required. I'm not sure if it's a thing yet, maybe NASA does it, but I'd be surprised if anybody else is yet. All it takes is a high-profile case of somebody's cloud CI pipeline being compromised to use a malicious library, and everybody is going to be jumping on this sort of thing.

It may be worth a DIP or a DConf session or something to hash out some of the long-term vision, but I think it's possible to start working on tools now. I'm not totally familiar with the landscape, but generating Why3ML from D seems like it would be one good way to start. You get access to a lot of solvers for free, it seems like it's well-documented and fairly easy to use.

-Doc
Nov 09 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 9 November 2019 at 14:49:32 UTC, Doc Andrew wrote:
 As a stop-gap we might be able to use an
 @assume UDA, but I do think "assume" could be added as a 
 reserved keyword in the near future in anticipation...
True, I meant to refer to the concept, not syntax. Maybe the syntax for assume should be ugly enough to really stand out so that it is caught in reviews: "@ASSUME" or "@__assume"... Dunno. That is a detail, though.
 I like the idea of having a compiler flag to treat assumes as 
 asserts, maybe even in release mode?
It could at least be useful in cases where you do exhaustive unit testing (test all combinations), then assume would be safe.
 This allows you to get started working on the optimizer and 
 figure out how to make good use of the extra facts.
That's probably a much longer-term goal, but just being able to move normal error-checking into contracts and depend on them in release code is a big win IMO.
Well, not necessarily, some C++ compilers support it. The problem is that you probably don't want to introduce too many auxiliary facts, as that would just slow down compilation with no gain (and perhaps even confuse a naive optimizer). Anyway, you should be able to add facts to LLVM by compiling a boolean expression and then assuming it to hold. But maybe the language syntax should allow you to select which propositions you want to propagate to the optimizer. Not sure if LLVM can express quantifiers though (forall/exists).
 This might be more difficult to enforce without some tight 
 compiler integration. An attribute like @ensured or @certified 
 could be the "gold standard" of safe D code, which would 
 require a proof of no run-time errors and all contracts. 
 Without the proof, the compiler could throw an error for 
 functions with this attribute.
Well, that might be an issue: if the ensure-statement is meant to help optimization, then it could simply be omitted. If it is meant to help correctness, then it could be translated into an assert if it isn't verified.

But I guess one will have to distinguish the two cases somehow. Which is basically why I think trying to improve basic libraries with assume-statements and manual proofs is the best starting point. Then you get an idea for how it can affect optimization.

You probably also want to bake trust-building meta-information into a certificate, such as which verification method was used, and who did it and where, so that developers can specify constraints like "must be verified by service X using verifier Y or Z". Or open up for hand-verified code by developer Q or a specific organization (could be a URL to a public key).
 We'd probably need a DIP to hash this out.
I will certainly read and comment on it, if you want to write one. You could start with specc'ing out a list of requirements?
 This might tie in with some of the work going on with 
 reproducible builds, but I love the idea of certs for known 
 good object files.
I actually meant source code only. You distribute the source code, but with certified "requires"/"assume"/"ensures" statements that are bound to the source code using cryptographic hashing and an online index (so that it can be checked or revoked).

I guess one issue is that it has to be a standalone library (e.g. only depend on a specific version of the standard library) for this to work. Otherwise you would have to tie it to a specific certified version of all the libraries you depend on.

But... I don't actually think that is a big deal, you typically would pre-verify basic libraries. You could use a simple concatenation for the hash with sorted filenames and some sentinel character sequence for separating files, e.g.: filename1, sep, file1, sep, filename2, sep, file2, sep, sep.
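Roughly like this, using Phobos' SHA-256 (the function name, file handling and sentinel bytes are arbitrary; this is only a sketch of the concatenation scheme above):

import std.algorithm.sorting : sort;
import std.digest.sha : SHA256;
import std.file : read;

// Hash sorted file names and contents with a sentinel between entries,
// roughly: name1, sep, file1, sep, name2, sep, file2, sep, sep.
ubyte[32] libraryFingerprint(string[] files)
{
    static immutable ubyte[] sep = [0, 0xFF, 0, 0xFF]; // arbitrary sentinel
    sort(files); // canonical order so the hash is reproducible
    SHA256 h;
    h.start();
    foreach (name; files)
    {
        h.put(cast(const(ubyte)[]) name);
        h.put(sep);
        h.put(cast(const(ubyte)[]) read(name));
        h.put(sep);
    }
    h.put(sep);
    return h.finish();
}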
 Using something like the @certified (or @ensured, or @verified, 
 or whatever) attribute, we could just turn off run-time checks 
 in these functions if a proof exists, could be as simple as a 
 source.d.proof file with the function name (and hash?) in it.
I am actually not sure how this could be done well. Because you could have array accesses within a conditional expression. Maybe you'd actually have to annotate the indexing with some new syntax to tell the compiler that the access does not need bounds checks as it has been proven already to hold... So I guess this is an area that needs to be fleshed out with many examples. Anyway, it seems reasonable that a tool would take source code, verify it and emit an annotated version of the source code. Then less capable compilers could use the verified results. Most users are not going to write verifiable code themselves.
 I'm not sure about this one. Usually assumes are big red flags, 
 they're the FV version of unsafe - "trust me, I know what I'm 
 doing", that sort of thing. I might have misunderstood your 
 intent here though.
I think so, if they are proven to hold by a tool/compiler then they hold. Dataflow analysis is sufficient, when it succeeds, but will fail in many cases as it is a weak approach. However, it is also much faster than a proper solver. Basically just an optimization for performance reasons.

But you also want to use assume manually sometimes when describing hardware registers (e.g. describing that bits 8-32 are always zero), or when something has been proven by hand and has a significant performance impact.
 This is how SPARK works, even if you can't prove a (normal Ada) 
 function body, because of inline asm, or pointer arithmetic or 
 something, you can still specify pre and post-contracts on the 
 interface. I think there's a little bit of a @trusted feel to 
 it, since the post-conditions on a non-SPARK function become 
 sort of an assumption that the function does what it claims, 
 given the preconditions hold. I think SPARK may leave these as 
 run-time checks (normal Ada behavior, normal D _Debug_ 
 behavior), which you'd have to do to have any sort of 
 confidence in your proof.
Yes, for typical SPARK use cases that seems reasonable.
 This is where things get a little messy. Verified functions 
 calling non-verified functions with post-condition checks would 
 need these checks enabled even in release mode, I would think.
Yes, this is where different goals may lead to different designs... so the challenge is to come up with something that can both support correctness and optimization without being extremely complicated in terms of syntax.
 The cool thing is that D _today_ has all the syntax it needs to 
 start working on tooling for this with no language changes.
Sure, you could also just define a dummy "ensures", "requires", whatever… function that takes a boolean as a parameter. With an empty body, or an assert.
 a particular build. I'm not as concerned as some are with 
 proven builds and linking, but there may be high-integrity 
 requirements in the future where this sort of thing is 
 required. I'm not sure if it's a thing yet, maybe NASA does it, 
 but I'd be surprised if anybody else is yet.
There are papers on proof-carrying-assembly code... Although the basic idea behind certification would be that you get to use the postconditions and assumes without having to run costly verification on your own machine. You might even see a commercial service with very strong provers and effective hardware.
 I'm not totally familiar with the landscape, but generating 
 Why3ML from D seems like it would be one good way to start. You 
 get access to a lot of solvers for free, it seems like it's 
 well-documented and fairly easy to use.
Perhaps Why3ML is a good choice, but I would've started with Boogie.
Nov 09 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 9 November 2019 at 17:16:40 UTC, Ola Fosheim Grøstad 
wrote:
 Well, not necessarily, some C++ compilers support it.  The 
 problem is that you probably don't want to introduce too many 
 auxiliary facts, as that would just slow down compilation with 
 no gain (and perhaps even confuse a naive optimizer). Anyway, 
 you should be able to add facts to LLVM by compiling a boolean 
 expression and then assuming it to hold.
Maybe by doing something along the lines of:

if ( !must_be_true ) { llvm.unreachable }
Nov 09 2019
prev sibling parent reply Doc Andrew <x x.com> writes:
On Saturday, 9 November 2019 at 17:16:40 UTC, Ola Fosheim Grøstad 
wrote:
 Well, not necessarily, some C++ compilers support it.  The 
 problem is that you probably don't want to introduce too many 
 auxiliary facts, as that would just slow down compilation with 
 no gain (and perhaps even confuse a naive optimizer). Anyway, 
 you should be able to add facts to LLVM by compiling a boolean 
 expression and then assuming it to hold.

 But maybe the language syntax should allow you to select which 
 propositions you want to propagate to the optimizer. Not sure 
 if LLVM can express quantifiers though (forall/exists).
I'm pretty ignorant of how all that would work - is there some sort of a standard way to give optimizers "hints" based on what the solver can show? I don't want to over-sell the potential performance gains. (Like Rust fans talking about how _theoretically_, Rust is much faster than C because of the lack of pointer aliasing, but the benchmarks just don't bear it out. (yet))
 But I guess one will have to distinguish the two cases somehow. 
 Which is basically why I think trying to improve basic 
 libraries with assume-statements and manual proofs is the best 
 starting point. Then you get an idea for how it can affect 
 optimization.
Oof. You're gonna have a hard time finding volunteers for that one. I think that's how projects like seL4 do it though, translating C into HOL and then adding verification conditions manually. I want to say the seL4 proofs are something like 10x as many lines as the code itself. Working manual proofs can be kind of fun, like a puzzle, but I've read about some negative case studies where manually verified code had some incorrect assumptions in the proofs, or proved things that ended up being orthogonal to the desired outcome. It's murky.
 I will certainly read and comment on it, if you want to write 
 one. You could start with specc'ing out a list of requirements?
I'll take a look at the DIP author guidelines and start putting something together. I think it's worth the community's consideration if we can do this in a way that minimizes impact. As a long time mostly-lurker and D user, I don't have any credibility with compiler changes, so I don't want to be one of those guys who jumps in with a big DIP that tries to make Walter and all the other contributors jump through hoops.

I was toying with the idea of 2 or more parts: start with a single DIP for "D Formal Verification Support" that maybe just describes which UDAs should be agreed upon for external proof tools to use, the behavior that should be expected when using contracts in those tools, and maybe a small set of changes to the language spec:

- Universal quantification: The low-impact way would be to set aside an attribute @forall or @foreach, then figure out what tuple elements you'd need to generate a VC from them and how you'd tie them to a function, module, variable, etc. (there's a rough sketch of what such a UDA could look like further below). (Longer-term, consider a change to the language to create a foreach expression or add a new expression "forall" (or something). I would anticipate a lot of weeping and gnashing of teeth.)

- Existential quantification: I think the low-impact way to do this is to set aside an attribute @exists or @forsome and sort out the UDA tuple spec that provers would use. Longer term, add an "exists" or "forsome" expression.

- Loop invariants: Allow invariant{} blocks inside loop statements in addition to classes and structs. This is the only part of the DIP that I can foresee requiring a language change; I don't know any way of attaching a UDA to a statement. I _think_ this would be the only blocker.

LATER ON: if there's interest and any promise, then we could look at making parts of Phobos verified, adding compiler support for provers, etc. in follow-on DIPs.

A very powerful addition would be user-defined scalar types with their own invariants, but this is a BIG language change. (Even better would be strong subtypes, refinement types, or first-order types, but you can simulate them well enough with contracts in an invariant clause). A smaller language change would be to give invariant a parameter with a hypothetical or ghost variable and allow it on aliases:

alias ShoeSize = uint;

invariant(ShoeSize T){
    assert(T > 0);
    assert(T < 24);
}

void buyBiggerShoes(ShoeSize s){
    Shoe t;
    t.size = s + 1; // Consider this line
    // Invariant asserts get run if in Debug mode, so hopefully during
    // unit-testing you tried it with s == 23.
    // During proof time, this would warn with something like:
    //   "Cannot prove t.size < 24 (example: when s == 23)
    //    Possible solution: add s to list of preconditions for buyBiggerShoes"
    // (this is almost exactly what SPARK gives)
    sendOrder(t);
}

Built-in types would have implied invariants, at least as far as proofs are concerned:

invariant(int i){
    assert(i <= int.max);
    assert(i >= int.min);
}

So it might be that this initial DIP could just focus on beefing up invariant{}? I'd welcome any more-informed thoughts on how to proceed and go through the DIP process, if there's interest!
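For the quantification UDAs in the list above, here's the rough sketch I promised of the kind of thing the DIP would have to pin down (the ensuresForall name and its tuple layout are invented purely for illustration; the compiler would just carry the UDA along for an external tool to read):

// Invented UDA: (bound variable, its range, the property) as strings for a prover.
struct ensuresForall(string boundVar, string range, string prop) {}

// "for all i in 0 .. a.length: a[i] <= result" - meaningful only to the tool.
@ensuresForall!("i", "0 .. a.length", "a[i] <= result")
int maxOf(const int[] a)
in (a.length > 0)
{
    int m = a[0];
    foreach (x; a[1 .. $])
        if (x > m) m = x;
    return m;
}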
 I actually meant source code only. You distribute the source 
 code, but with certified "requires"/"assume"/"ensures" 
 statements that are bound to the source code using cryptographic 
 hashing and an online index (so that it can be checked or 
 revoked).

 I guess one issue is that it has to be a standalone library 
 (e.g. only depend on a specific version of the standard 
 library) for this to work. Otherwise you would have to tie it 
 to a specific certified version of all the libraries you depend 
 on.

 But... I don't actually think that is a big deal, you typically 
 would pre-verify basic libraries. You could use a simple 
 concatenation for the hash with sorted filenames and some 
 sentinel character sequence for separating files, e.g.:  
 filename1, sep, file1, sep, filename2, sep, file2, sep, sep.


 I am actually not sure how this could be done well. Because you 
 could have array accesses within a conditional expression.
Flow analysis can sort that out, checking both branches of the conditional and making sure that contracts hold in every case.
 Maybe you'd actually have to annotate the indexing with some 
 new syntax to tell the compiler that the access does not need 
 bounds checks as it has been proven already to hold... So I 
 guess this is an area that needs to be fleshed out with many 
 examples.

 Anyway, it seems reasonable that a tool would take source code, 
 verify it and emit an annotated version of the source code. 
 Then less capable compilers could use the verified results.

 Most users are not going to write verifiable code themselves.
Ideally, it will be easy to run the proof tool on existing code as a sort of linter in addition to unit tests and other static analysis. The first time you see warnings like "Cannot prove abc is less than def (example: def = 255)" because you didn't consider edge cases, it opens your eyes to how proofs can really save you from a lot of testing and debugging. The goal is that a user doesn't have to do anything different than normal to write verifiable code. It's all verifiable, but it's up to the user how deep they want to go down the rabbit hole to prove functional correctness by adding a ton of contracts.
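Even a contract-free function like this would get flagged (the warning text is just what I would imagine such a tool printing, modeled on SPARK's messages):

int pick(const int[] table, ubyte i)
{
    // A prover would object with something like:
    //   "Cannot prove i < table.length (example: table.length = 0)"
    // unless the function grows an in-contract such as: in (i < table.length)
    return table[i];
}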
 I think so, if they are proven to hold by a tool/compiler then 
 they hold. Dataflow analysis is sufficient, when it succeeds, 
 but will fail in many cases as it is a weak approach. However, 
 it is also much faster than a proper solver. Basically just an 
 optimization for performance reasons.

 But you also want to use assume manually sometimes when 
 describing hardware registers (e.g. describing that bits 8-32
 are always zero), or when something has been proven by hand and 
 has a significant performance impact.
One kind of neat thing that SPARK has is the ability to break volatile down into sub-categories like "Effective_Reads", where reading a value takes something off a UART, or "Effective_Writes", where writing to a hardware register has some effect on the system, and "Async_Readers/Writers" where some hardware will be accessing memory behind the scenes... now how that affects the state of a proof, I'm not totally sure.
 Yes, this is where different goals may lead to different 
 designs... so the challenge is to come up with something that 
 can both support correctness and optimization without being 
 extremely complicated in terms of syntax.
I'm not super smart on the optimization side of the house, my focus would be on correctness first and then let the optimization wizards extract what useful information they can from that.
 Sure, you could also just define a dummy "ensures", "requires", 
 whatever… function that takes a boolean as a parameter. With an 
 empty body, or an assert.
I'll have to give some thought to this.
 There are papers on proof-carrying-assembly code...

 Although the basic idea behind certification would be that you 
 get to use the postconditions and assumes without having to run 
 costly verification on your own machine.

 You might even see a commercial service with very strong 
 provers and effective hardware.
I think you'll see SaaS offerings like this in 10 years, kind of like big ML farms now. Proofs on big code bases can take a long time, definitely something you would want as part of an overnight build in a CI pipeline. Single file proofs can go pretty quick though, depending on how deep you let them search.
 Perhaps Why3ML is a good choice, but I would've started with 
 Boogie.
I hadn't heard of Boogie before, but it looks promising - It doesn't feel quite as mature as Why3, but the syntax looks a lot friendlier, for sure. So, having done no DMD compiler hacking myself, is the state of the "DMD as a library" mature enough now to where somebody could import dmd.compiler, get an AST and start figuring out VCs? A DIP without some sort of prototype tooling, even if primitive, is going to be pretty toothless, IMO. -Doc
Nov 09 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 9 November 2019 at 21:11:14 UTC, Doc Andrew wrote:
 I'm pretty ignorant of how all that would all work, is there 
 some sort of a standard way to give optimizers "hints" based on 
 what the solver can show?
I don't think so, but if you establish something that is too weak for verification, it can still potentially be used for optimization.
 I don't want to over-sell the potential performance gains.
Of course not, you would have to provide actual examples. The "easy" gains would be from removing dynamic checks, but not so easy in reality. If it is easy then the optimizer can already do it using dataflow analysis. So figuring out "trivial" small examples where the optimizer fails by looking at generated assembly is a good starting point.
 - Universal quantification: The low-impact way would be to set 
 aside an attribute @forall or @foreach, then figure out what 
 tuple elements you'd need to generate a VC from them and how 
 you'd tie them to a function, module, variable, etc.
I was more thinking, what do you do if it fails? One cannot keep O(N) or O(N^2) asserts in the code as a dynamic check... :-/ I guess the easy solution is to require that "ensure" is fulfilled at compile time and not allow quantified expressions in asserts.
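Right - as a concrete example, a universally quantified postcondition like the one below is a full O(N) walk if it is kept as a dynamic check, which is fine for a debug build but not something the language can silently rely on:

import std.algorithm.searching : all;

int[] clampNegatives(const int[] a)
out (r; r.all!(x => x >= 0)) // "forall x in r: x >= 0" - O(N) if checked at run time
{
    int[] r;
    r.reserve(a.length);
    foreach (x; a)
        r ~= x < 0 ? 0 : x;
    return r;
}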
 (Longer-term, consider a change to the language to create a 
 foreach expression or add a new expression "forall" (or 
 something). I would anticipate a lot of weeping and gnashing of 
 teeth.)
I once added a unicode-token-layer to the dmd-lexer, and added alternative unicode math symbols to expressions. It only took a few hours. Modifying the lexer is a lot easier than expanding the parser. I think people who would write VC also would be familiar with math notation, so one could just use that to avoid clashes.

E.g. you could have a not-so-pretty ascii syntax and a unicode shorthand.

Like: "∀" vs "@vc_forall"

Unicode tokens are really easy to do this way, as you can just rewrite "∀" to "@vc_forall" and "∃" to "@vc_exists", so no major changes are needed to add unicode symbols.
 - Loop invariants: Allow invariant{} blocks inside loop 
 statements in addition to classes and structs. This is the only 
 part of the DIP that I can foresee requiring a language change; 
 I don't know any way of attaching a UDA to a statement. I 
 _think_ this would be the only blocker.
Proving loop progression and termination is very important, and not so easy. What some FV languages do is that they introduce ghost-variables for this (variables that are conceptually computed but not used for code generation). So basically you increment the ghost variable(s) every iteration and prove that there is a threshold for the value where the loop terminates. But it won't really change the language that is compiled. It only changes the language that is verified. So not an effective change. But verification will probably have to happen after template expansion... Not sure how to deal with that. Unless you want to verify templated code... which would be cool, but not sure of what roadblocks you would hit.
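In plain D the ghost variable can be approximated with a debug-only local, something like this (hand-written, no tool involved; a verifier would generate and discharge this kind of obligation itself):

size_t indexOf(const int[] a, int needle)
{
    size_t i = 0;
    debug size_t variant = a.length; // ghost: must strictly decrease each iteration
    while (i < a.length && a[i] != needle)
    {
        ++i;
        debug
        {
            assert(a.length - i < variant); // progress towards termination
            variant = a.length - i;
        }
    }
    return i; // == a.length when not found
}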
 (Even better would be strong subtypes, refinement types, or 
 first-order types, but you can simulate them well enough with 
 contracts in an invariant clause).
Lots of potential with more advanced typing... but prepare to be met with a lot of resistance for things that are actual language changes (things that can affect code gen and not only the verifier).
 A smaller language change would be to give invariant a 
 parameter with a hypothetical or ghost variable and allow it on 
 aliases:
Well, yes, but not necessarily a language change... As a start you could just define a local function "__vc_invariant" and treat it special by the verifier?
 I'd welcome any more-informed thoughts on how to proceed and go 
 through the DIP process, if there's interest!
I don't know enough about what makes a DIP succeed. What is common in other languages is to make a library/tool implementation first as a proof of concept and then argue that it would be better as a language extension. So that probably is the better approach... But I don't know.

But reserving the vocabulary makes a lot of sense. Then I would point to other languages, particularly FV languages, so that you reserve the most common terms "@requires", "@ensures", "@forall", "@exists" etc. But in a proof-of-concept you can just use "_vc_requires" or some other prefix.
 I am actually not sure how this could be done well. Because 
 you could have array accesses within a conditional expression.
Flow analysis can sort that out, checking both branches of the conditional and making sure that contracts hold in every case.
No, I meant how to express it syntactically in a way that allows you to distribute pre-verified code. (The point of having verification would be to cover those cases that standard flow analysis fails on.)
 analysis. The first time you see warnings like "Cannot prove 
 abc is less than def (example: def = 255)" because you didn't 
 consider edge cases, it opens your eyes to how proofs can 
 really save you from a lot of testing and debugging.
Yes, maybe that is a use case. If you write up something formal you ought to list such use-cases, and what they require.
 The goal is that a user doesn't have to do anything different 
 than normal to write verifiable code.
:-/ not possible though.
 One kind of neat thing that SPARK has is the ability to break 
 volatile down into sub-categories like "Effective_Reads", where 
 reading a value takes something off a UART, or 
 "Effective_Writes", where writing to a hardware register has 
 some effect on the system, and "Async_Readers/Writers" where 
 some hardware will be accessing memory behind the scenes... now 
 how that affects the state of a proof, I'm not totally sure.
IIRC, D does not provide volatile though... You have to use something intrinsics-like. So memory-mapped registers are not types in D; they are dealt with as memory-fenced access actions...
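Right - the intrinsics-like route is volatileLoad/volatileStore (in core.volatile in newer druntimes, previously in core.bitop), so a memory-mapped register ends up wrapped in access functions like this (the address and bit layout are made up for the example):

import core.volatile : volatileLoad; // older druntimes: core.bitop

// Made-up status register, purely for illustration.
bool deviceReady()
{
    auto statusReg = cast(uint*) 0x4000_0000;
    uint status = volatileLoad(statusReg);
    // If the hardware spec guarantees bits 8..31 read as zero, that is
    // exactly the kind of fact an assume() could hand to a prover/optimizer.
    return (status & 0x1) != 0;
}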
 So, having done no DMD compiler hacking myself, is the state of 
 the "DMD as a library" mature enough now to where somebody 
 could import dmd.compiler, get an AST and start figuring out 
 VCs?
No idea. I think the major hurdle for you would be the templating. Maybe only deal with non-templated code, because then you can use one of the D parsers and rewrite the code with verification-annotations instead of modifying the actual compiler. For a proof-of-concept.
 A DIP without some sort of prototype tooling, even if 
 primitive, is going to be pretty toothless, IMO.
Yes. You could write something "informational". Then evolve it later. But you'll get feedback on whether there is any interest at all in the community? If there is no interest then it would not be possible to maintain it in the compiler either. Starting a discussion on the vocabulary might tell you something about what kind of support you can expect for your ideas.
Nov 09 2019
parent reply Doc Andrew <x x.com> writes:
On Sunday, 10 November 2019 at 00:27:46 UTC, Ola Fosheim Grøstad 
wrote:
 - Universal quantification: The low-impact way would be to set 
 aside an attribute @forall or @foreach, then figure out what 
 tuple elements you'd need to generate a VC from them and how 
 you'd tie them to a function, module, variable, etc.
I was more thinking, what do you do if it fails? One cannot keep O(N) or O(N^2) asserts in the code as a dynamic check... :-/ I guess the easy solution is to require that "ensure" is fulfilled at compile time and not allow quantified expressions in asserts.
Yeah, you could make ensure work like assert but strictly for proofs and not at runtime, but I kind of like the idea that if you can't prove a check at compile-time, it still makes it into your Debug code for thorough testing. For the most part you're back to the status quo when a proof fails - you've got some unit tests to write, the checks will live in Debug code and they'll go away in Release code just like they do now. If the Debug performance hit is too onerous, then comment it out, or have a separate "proof-time assert". I'd have to think it through. The one thing I'm not sure about is the situation I hinted at where post-conditions are on an interface where the function body itself wasn't proven. In this case, maybe they just act like assumptions. I have to do some more homework here.
 I once added a unicode-token-layer to the dmd-lexer, and added 
 alternative unicode math symbols to expressions. It only took a 
 few hours. Modifying the lexer is a lot easier than expanding 
 the parser. I think people who would write VC also would be 
 familiar with math notation, so one could just use that to 
 avoid clashes.

 E.g. you could have a not-so-pretty ascii syntax and a unicode 
 shorthand.

 Like: "∀" vs "@vc_forall"

 Unicode tokens are really easy to do this way, as you can just 
 rewrite "∀" to "@vc_forall" and "∃" to "@vc_exists", so no 
 major changes are needed to add unicode symbols.
You're going to scare people off with that sorta stuff :)
 Proving loop progression and termination is very important, and 
 not so easy. What some FV languages do is that they introduce 
 ghost-variables  for this (variables that are conceptually 
 computed but not used for code generation).

 So basically you increment the ghost variable(s) every 
 iteration and prove that there is a threshold for the value 
 where the loop terminates.

 But it won't really change the language that is compiled. It 
 only changes the language that is verified. So not an effective 
 change.

 But verification will probably have to happen after template 
 expansion... Not sure how to deal with that.  Unless you want 
 to verify templated code... which would be cool, but not sure 
 of what roadblocks you would hit.
Yeeeah... I'm not sure how that is going to play out. I don't really know how contracts work with templated code currently. It could be that if the contracts appear in template code, then they just get copy-pasted into each new instantiation of the template, and life is good. The prover will do a ton of extra work re-proving the same thing for each new type, but who cares? And actually, you might discover that the perfectly generic function you wrote actually has a bunch of nasty corner cases lurking inside... I think you'd just have to wait until all the CTFE and template stuff is done before you start generating VCs. It might not be a deal-breaker, but that's speaking from ignorance, frankly.
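For what it's worth, the syntax at least composes - something like this compiles, with each instantiation getting its own copies of the in/out checks (as far as I can tell):

T halve(T)(T x)
in (x >= 0)       // instantiated (and checked, in non-release builds) per type
out (r; r <= x)
{
    return x / 2;
}

unittest
{
    assert(halve(10) == 5);     // halve!int, with its own contract copies
    assert(halve(2.5) == 1.25); // halve!double, likewise
}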
 Lots of potential with more advanced typing... but prepare to 
 be met with a lot of resistance for things that are actual 
 language changes (things that can affect code gen and not only 
 the verifier).
Oh yeah, I'm not holding my breath for anything like that. Baby steps.
 Well, yes, but not necessarily a language change... As a start 
 you could just define a local function "__vc_invariant" and 
 treat it special by the verifier?
True, we might be able to just use a nested function with some contracts baked into it.
 If you write up something formal you ought to list such 
 use-cases, and what they require.


 The goal is that a user doesn't have to do anything different 
 than normal to write verifiable code.
:-/ not possible though.
Oh I dunno, aside from adding some extra contracts and avoiding certain language constructs* (like goto), not much really has to change for the average user to get some benefit from running the code through a proof tool, even if they couldn't care less about stamping a function as "certified" or not.

* Granted, those "certain language constructs" may end up being an insurmountable barrier, where the formally verifiable subset of D is stripped of so many features that it isn't really worth it. Too early to tell.
 But you'll get feedback on whether there is any interest at all 
 in the community? If there is no interest then it would not be 
 possible to maintain it in the compiler either.

 Starting a discussion on the vocabulary might tell you 
 something about what kind of support you can expect for your 
 ideas.
Let me mull it over a bit. I might start a separate thread where people can weigh in. The field is still pretty new [AI/ML 70 years ago ;) ], so it might be a good way to raise awareness and at least get people thinking about some of the possibilities for writing _real_ safe code.
Nov 09 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Sunday, 10 November 2019 at 02:51:53 UTC, Doc Andrew wrote:
 I think you'd just have to wait until all the CTFE and template 
 stuff is done before you start generating VCs. It might not be 
 a deal-breaker, but that's speaking from ignorance, frankly.
Also keep in mind that you have to deal with exceptions, so you have to introduce a lot of branching code (unless the verifier does that rewriting for you).
 Oh I dunno, aside from adding some extra contracts and avoiding 
 certain language constructs* (like goto), not much really has
You can allow gotos (at least the most common usage). Deal with it the same way you model exceptions.
 * Granted, those "certain language constructs" may end up being 
 an insurmountable barrier, where the formally verifiable subset 
 of D is stripped of so many features that it isn't really worth 
 it. Too early to tell.
Well, I think that modular ints are an issue that may make some cases harder, but I think most constructs can be modelled in a VC language. Still a lot of work.
Nov 10 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Sunday, 10 November 2019 at 13:39:23 UTC, Ola Fosheim Grøstad 
wrote:
 You can allow gotos (at least the most common usage). Deal with 
 it the same way you model exceptions.
https://www.google.com/search?q=transformation+goto+eliminiation
Nov 10 2019
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 9 November 2019 at 21:11:14 UTC, Doc Andrew wrote:
 A DIP without some sort of prototype tooling, even if 
 primitive, is going to be pretty toothless, IMO.
A fun way to get started might be to create your own verified mini-language and compile it to D code and use that as a proof-of-concept.
Nov 09 2019
parent reply Doc Andrew <x x.com> writes:
On Sunday, 10 November 2019 at 00:31:18 UTC, Ola Fosheim Grøstad 
wrote:
 A fun way to get started might be to create your own verified 
 mini-language and compile it to D code and use that as a 
 proof-of-concept.
So that brings up the question of which approach to use.

The idea you mentioned there is how F* (or at least the "Low" subset), Coq, and probably some others I can't think of, do it. You write verified functional code, run it either through an automated solver or write proofs manually, and then "extract" a compile-able (basically C) or interpret-able (OCaml, Scheme, Haskell...) program in another language from it. For C, I think usually they target the CLight dialect of C, so they can use the CompCert verified compiler http://compcert.inria.fr/doc/index.html. The cool thing here is that you can basically guarantee that the assembly that's generated matches the specifications of the original code.

I consider this approach the "Obsessive-Compulsive" method. This is for the programmer who just _knows_ that there's a compiler bug causing his code to be wrong. :) This is more suited towards the goal of a totally verified toolchain. (Side note: There's a certain, say, "academic" beauty in thinking that you've got "turtles all the way down" with a totally verified language and toolchain. But there are always going to be hardware bugs, cosmic rays, out-of-memory conditions, etc.)

Personally, I'm less worried about what DMD, GCC or LDC might have gotten wrong than what _I_ goofed up in my own code, so this option isn't super appealing to me. Eventually, I'd love for my compiler, standard library, OS, etc. to be formally-verified. But... they're probably not going to be written in Coq or ML, if we're being honest. D or Rust? More probable - so we get to option 2:

Option 2, taken by Dafny and SPARK (and I think FRAMA-C too, but don't quote me on that), is take the language you want (or at least something close), then generate an intermediate language that can be easily broken down into verification conditions to run through the solver.

For formally-verified D, this is the route I'd want, because I like writing D! Writing a verified language that extracts to D is sort of a dead-end, in my opinion. It doesn't make people's D code any safer, and if I'm going to extract to a compiled language, then I might as well decide on C/CLight and take advantage of CompCert or some other toolchain that's been verified already, or just use GCC and get wider platform exposure.

Option 3 is to just take the language as it is and try to generate some kind of formal logic based on the language constructs as you parse it, then work out the solver mechanics or some kind of proof assistant for working through it. There might be _somebody_ out there that could take this on, but it ain't me! A lot of smart people have already worked out automated provers for ML-ish languages, and D is unique in that it already has nice contract syntax that could map nicely to the axioms and verification conditions that those languages use.

I suspect that ultimately, we could get what we need by taking some intermediate DMD front-end state (maybe AST + UDA __traits), doing flow analysis, and dumping out (a lot of) some intermediate verification language with the assertions baked in, and running an existing proof tool on it. You'd have to do some gonkulation of the solver output to figure out what your original line of code was for sensible error messages, but that's possible.

Microsoft's VCC multithreaded C verifier tool might be a good starting point, since it's got a friendly license (MIT) - https://github.com/microsoft/vcc. It uses Boogie, too. SPARK (the GNATProve tool, specifically) is all GPLed, so I'm not sure if it's a good resource or not. IANAL and all that.

That's an almost criminally-understated view of the task, but I think it might give D a path towards FV, if people think it's worth it. D has always been on the cutting edge of good SW engineering support in a language, so who knows?

-Doc
Nov 09 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Sunday, 10 November 2019 at 04:15:08 UTC, Doc Andrew wrote:
 Option 2, taken by Dafny and SPARK (and I think FRAMA-C too, 
 but don't quote me on that), is take the language you want (or 
 at least something close), then generate an intermediate 
 language that can be easily broken down into verification 
 conditions to run through the solver.

 For formally-verified D, this is the route I'd want, because I 
 like writing D!
Sounds like you have set a course that you can follow! I would personally just have ignored templates and extended one of the linty D parsers, as I think it will lead to a more transparent codebase that is easier to experiment with than modifying the D compiler. But maybe you'll find that doing it within dmd from the start works out for you. The plain dmd compiler should not be so difficult to set up, so just try it, I guess? You could build your own VC AST from the D AST, then manipulate it and emit VC code.
 That's an almost criminally-understated view of the task, but I 
 think it might give D a path towards FV, if people think it's 
 worth it.
Yes, I don't know; what is most important is that you think it is important enough to carry the burden on your own for at least a year. Over time, surely others will come along that share that viewpoint. Just keep the codebase simple so that people with no FV knowledge can contribute... In the beginning there will basically be no advantages to using the tool, so... it takes a lot of patience. Maybe a well-written DIP can bring people on board from the start, but I haven't seen many people interested in FV when that topic has been discussed in the forums. Again, just try, I guess?
Nov 10 2019
prev sibling next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 9 November 2019 at 02:18:41 UTC, rikki cattermole 
wrote:
 Sounds like some universities need to be contacted, perhaps a 
 future PhD student would be interested in doing the theory.
Not sure what theory that would be. The basic idea is well known and pretty straightforward:

1. Try to prove that the assert holds by feeding the function into an external prover.
2. If it fails, retain the assert in the code or omit it.
3. If it succeeds, replace the assert with an «assume(…)» statement.
4. Compile.

You can do this today if you want to. The challenge is to make sure that the optimizer will make good use of it.
Nov 09 2019
prev sibling parent reply RazvanN <razvan.nitu1305 gmail.com> writes:
On Saturday, 9 November 2019 at 02:18:41 UTC, rikki cattermole 
wrote:
 On 09/11/2019 11:41 AM, Walter Bright wrote:
 
 I've thought for 35 years that assert()s can be the input to a 
 program prover, i.e. it's nothing new, but I do not have the 
 expertise to figure out how such proofs might work, other than 
 simplistic cases.
Sounds like some universities need to be contacted, perhaps a future PhD student would be interested in doing the theory.
At UPB Bucharest we have discussed with a formal verification professor and published a research project for a masters student (2 years) where the goal is to implement a simple solver for contracts in D.
Nov 10 2019
parent reply Doc Andrew <x x.com> writes:
On Sunday, 10 November 2019 at 10:53:38 UTC, RazvanN wrote:
 At UPB Bucharest we have discussed with a formal verification 
 professor and published a research project for a masters 
 student (2 years) where the goal is to implement a simple 
 solver for contracts in D.
Very cool! Do you know if there is a link or any publications (even if it's just a proposal) for the research? -Doc
Nov 10 2019
parent RazvanN <razvan.nitu1305 gmail.com> writes:
On Sunday, 10 November 2019 at 20:02:33 UTC, Doc Andrew wrote:
 On Sunday, 10 November 2019 at 10:53:38 UTC, RazvanN wrote:
 At UPB Bucharest we have discussed with a formal verification 
 professor and published a research project for a masters 
 student (2 years) where the goal is to implement a simple 
 solver for contracts in D.
Very cool! Do you know if there is a link or any publications (even if it's just a proposal) for the research? -Doc
Unfortunately, I do not know the site where it has been published, but the proposal is in Romanian anyway. What I know is that the student will have to do a state-of-the-art review of verification techniques and come up with a proposal. The milestones are:

1. State-of-the-art review
2. Implement one of the existing techniques
3. Come up with an enhancement of some sort

We don't know yet the exact details, but we are confident that at least we can implement something that is already out there.
Nov 10 2019
prev sibling next sibling parent reply Doc Andrew <x x.com> writes:
On Friday, 8 November 2019 at 22:41:23 UTC, Walter Bright wrote:
 assert()s are the contracts that matter, not the in and out 
 contracts, which can be replaced with assert()s. In no way did 
 I mean to imply that assert()s were a mistake!

 I've thought for 35 years that assert()s can be the input to a 
 program prover, i.e. it's nothing new, but I do not have the 
 expertise to figure out how such proofs might work, other than 
 simplistic cases.
Now you've done it. I've seen this before, right before templates were in D. This is where you disappear for a couple of weeks and then come back to announce that D can formally verify user code, right? :)

The in/out syntax doesn't add a lot for typical function body code, since everything is in sequential order anyhow, but in interfaces they would be essential. They may be under-utilized, but I think they might just be one of those features that is a late bloomer. I wouldn't consider taking them out of the language just yet, even for regular functions.

Proving properties of D code outright is probably tough, but doing a mechanical conversion (of at least some subset of D, maybe betterC) to an intermediate language like ML and then feeding it into a solver is probably possible. Here's an interesting project I found that can generate Why code from LLVM-IR: https://github.com/AnnotationsForAll/WhyR. It might be interesting to throw some LDC IR at it and just see what happens!

Given that D already has contracts in place, using them to generate Why's "ensures" and "requires" clauses would be much easier than something like C, where you'd need specially-formatted comments or a language extension. Even without specific contracts, a list of things that you _know_ are run-time errors (divide-by-zero, null-pointer dereference, etc.) could be proven absent from code.

The nice thing about an approach like this is that you don't have to change the language or mess with the compiler; instead, the prover is just another tool, like a linter, that you can use if you need it. Not everybody is as optimistic as I am, but with the emphasis on safety these days, I suspect FV will become a requirement in the future.

-Doc
Nov 08 2019
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/8/2019 6:43 PM, Doc Andrew wrote:
 Even without specific contracts, a list of things that you _know_ are run-time 
 errors (divide-by-zero, null-pointer dereference, etc.) could be proven absent 
 from code.
Actually, I deliberately insert null-pointer dereferences to get stack traces when debugging :-) Anyhow, don't worry, in/out contracts are remaining because some people do use them. There are some benefits to them, but I just don't think they're worth the cost.
Nov 09 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 08.11.19 23:41, Walter Bright wrote:
 On 11/8/2019 8:10 AM, Doc Andrew wrote:
 I am very bullish on formal verification in languages going forward. 
 The one big advantage that D has in this arena is DbC. I was 
 disappointed in your NWCPP talk that you listed it as a "miss." :) The 
 potential here is huge! I think it's entirely possible that we can do 
 _compile-time_ proofs that D contracts can be satisfied.
assert()s are the contracts that matter, not the in and out contracts, which can be replaced with assert()s. In no way did I mean to imply that assert()s were a mistake! I've thought for 35 years that assert()s can be the input to a program prover, i.e. it's nothing new, but I do not have the expertise to figure out how such proofs might work, other than for simplistic cases.
You need in and out contracts for modular verification.
Nov 09 2019
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/9/2019 4:23 AM, Timon Gehr wrote:
 You need in and out contracts for modular verification.
You can pick up both from looking at the asserts in the prolog and epilog of the function. More importantly, nobody is working on a modular verification system for D, and nobody has for 20 years, so the contracts aren't particularly useful.
Nov 09 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10.11.19 00:23, Walter Bright wrote:
 On 11/9/2019 4:23 AM, Timon Gehr wrote:
 You need in and out contracts for modular verification.
You can pick up both from looking at the asserts in the prolog and epilog of the function. ...
- Modular verification means it's enough to have DI files for functions you call, without a function body.
- Some asserts in the prologue of your function might really be asserts and not assumes.
- Similarly, some asserts in the epilogue of your function might not be things you actually want to guarantee to the caller.
 More importantly, nobody is working on a modular verification system for 
 D, and nobody has for 20 years, so the contracts aren't particularly useful.
Programmers can read contracts too, not just verification systems. They are useful documentation and can be used for assigning blame: Is the assertion failure the fault of the caller or the callee?
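A small sketch of that blame-assignment point, with invented names and messages: a failing in-contract points at the caller, a failing assert in the body points at the implementation.

```
// Sketch only: names and messages are made up for illustration.
int halve(int n)
in { assert(n % 2 == 0, "caller's fault: n must be even"); }
do
{
    int r = n / 2;
    assert(r * 2 == n, "callee's fault: bug in halve's implementation");
    return r;
}

void main()
{
    halve(4);       // fine
    // halve(3);    // would trip the in-contract: the blame lands on the caller
}
```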
Nov 10 2019
next sibling parent Doc Andrew <x x.com> writes:
On Sunday, 10 November 2019 at 10:22:27 UTC, Timon Gehr wrote:
 - Modular verification means it's enough to have DI files for 
 functions you call, without a function body.
In a unit testing world, every assert gets checked every time the function is called anyhow, so it makes little difference whether they are pre- or post-conditions, or just sprinkled throughout a function.

In a verified world, it makes a _huge_ difference. A verified function with an "in" contract states, "this function satisfies all of its contracts (explicit and implicit) _if and only if_ the in contracts/preconditions are satisfied". With this, once that function is verified, a caller need only make sure that it satisfies those preconditions at each call site. In the absence of preconditions, the verifier has to re-prove _every_ assert() in the callee function (and every OTHER function that the callee uses too...) _every time_ it's called throughout the program. It's an exponential increase in the problem space.

You _might_ be able to do a flow analysis to suss out which asserts are considered preconditions without explicitly stating so, but it seems like a tough problem, and explicitly stating them seems like a better practice anyhow.

Modular verification is really the only tractable way to approach the problem. You build up the proof starting from leaf functions which, once proven, need only their pre-conditions checked at subsequent calls. Globals and functions with side effects complicate this somewhat, but for pure functions it works very cleanly. And like Timon said, in the absence of in/out contracts in a .di file, verified libraries provide no way to extend their guarantees to the user.

-Doc
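To make that concrete, a minimal D sketch (all names invented; no actual prover involved): once the callee below is verified against its own contracts, a verifier only has to discharge the in-contract at each call site rather than re-examining the body.

```
// Sketch of the modular idea with invented names; not an actual prover.
uint sqrtFloor(uint n)
in { assert(n <= 1_000_000); }      // precondition the caller must establish
out (r) { assert(r * r <= n); }     // guarantee every caller may rely on
do
{
    uint r = 0;
    while ((r + 1) * (r + 1) <= n)
        ++r;
    return r;
}

void caller()
{
    enum uint x = 49;
    // A verifier only needs to show 49 <= 1_000_000 here; it can then use
    // r * r <= 49 downstream without looking at sqrtFloor's body again.
    auto r = sqrtFloor(x);
    assert(r * r <= x);
}
```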
Nov 10 2019
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/10/2019 2:22 AM, Timon Gehr wrote:
 On 10.11.19 00:23, Walter Bright wrote:
 On 11/9/2019 4:23 AM, Timon Gehr wrote:
 You need in and out contracts for modular verification.
You can pick up both from looking at the asserts in the prolog and epilog of the function. ...
- Modular verification means it's enough to have DI files for functions you call, without a function body.
I know.
 - Some asserts in the prologue of your function might really be asserts and not assumes.
The distinction between assert and assume has come up before, and I've argued that in D they are the same.
 - Similarly, some asserts in the epilogue of your function might not be things 
 you actually want to guarantee to the caller.
You could look for this pattern:

T biff(P p)
{
    assert(something about p);
    T result;
    scope (success) assert(something about result);
    ... function body ...
}

and have the contracts.
 More importantly, nobody is working on a modular verification system for D, 
 and haven't for 20 years, so the contracts aren't particularly useful.
Programmers can read contracts too, not just verification systems. They are useful documentation and can be used for assigning blame: Is the assertion failure the fault of the caller or the callee?
I do understand the point of them. I've been enamored with contracts for 20-30 years. I've done presentations promoting them. I added them to my C++ compiler. I added them to D. I just haven't found them particularly useful or helpful when I've tried them.
Nov 11 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 11 November 2019 at 10:39:24 UTC, Walter Bright wrote:
 The distinction between assert and assume has come up before, 
 and I've argued that in D they are the same.
assert: if(failure) exit…
assume: if(failure) unreachable…

Can't be the same…
 I do understand the point of them. I've been enamored with 
 contracts for 20-30 years. I've done presentations promoting 
 them. I added them to my C++ compiler. I added them to D.

 I just haven't found them particularly useful or helpful when 
 I've tried them.
They are most useful when you try to learn complex third party libraries.
Nov 11 2019
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On Monday, 11 November 2019 at 10:39:24 UTC, Walter Bright wrote:

 I just haven't found them particularly useful or helpful when 
 I've tried them.
There are two problems with contracts in D.

1. In-contracts are executed by the callee instead of the caller (see the sketch below). Currently there's only a syntactic difference between an in-contract and a regular assertion inside the function body.

2. Phobos is not shipped with assertions enabled. There should be a debug version of Phobos shipped with DMD with assertions enabled. Perhaps that could be linked by default when the `-debug` flag is used.

Related issues:

https://issues.dlang.org/show_bug.cgi?id=6549
https://issues.dlang.org/show_bug.cgi?id=12344
https://issues.dlang.org/show_bug.cgi?id=19131

--
/Jacob Carlborg
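A sketch of what point 1 means in practice (the function is invented for illustration): the in-contract below is compiled into the callee itself, so whether it actually runs depends on how the module containing `divide` was built, not on the flags used to build the calling code.

```
// Sketch for point 1 above; the function is invented for illustration.
int divide(int a, int b)
in { assert(b != 0); }   // today this check lives inside the callee
do
{
    return a / b;
}
```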
Nov 11 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 11.11.19 11:39, Walter Bright wrote:
 
 
 - Some asserts in the prologue of your function might really be 
 asserts and not assumes.
The distinction between assert and assume has come up before, and I've argued that in D they are the same.
Which I have argued is complete insanity. I'm getting pretty angry here. Not sure what to do about it.
Nov 11 2019
parent reply Doc Andrew <x x.com> writes:
On Monday, 11 November 2019 at 22:30:18 UTC, Timon Gehr wrote:
 On 11.11.19 11:39, Walter Bright wrote:
 
 The distinction between assert and assume has come up before, 
 and I've argued that in D they are the same.
Which I have argued is complete insanity. I'm getting pretty angry here. Not sure what to do about it.
I'd expect a FV DIP to hash this out and get everybody on the same page. Different languages, different textbooks, different mathematicians all use slightly different vocabularies for this stuff, so it can get confusing.

Other languages are pretty clear on the distinction between assumptions and assertions. F* ("assume" - https://www.rise4fun.com/FStar/tutorialcontent/guide#h24) & SPARK ("pragma Assume" - https://docs.adacore.com/spark2014-docs/html/ug/en/source/assertion_pragmas.html) both make clear that assumptions are treated like axioms in their provers, and must be used with caution. The Boogie IVL ("assume" - http://chrisposkitt.com/files/teaching/SV-2015-AutoActiveVerification.pdf) works the same way. Coq has no "assume" keyword, but the "assumption" tactic takes a hypothesis/premise, and if it matches the goal (the thing you're trying to prove/assert), then it solves the goal.

Frama-C/ANSI C Specification Language ("assumes" clause - https://frama-c.com/download/acsl-1.14.pdf) just treats assumptions as hypotheses/premises that imply some conclusion, so pre-conditions, as far as I can tell. This might be where Walter is coming from in saying that Asserts and Assumes would be the same in D. This is the only example I could find that uses "assume" like this, but there may be others.

I think _most_ people consider Assert vs Assume the way SPARK and F* do - as seen here: https://blogs.msdn.microsoft.com/francesco/2014/09/03/faq-1-what-is-the-difference-between-assert-and-assume/

But hey, as Frama-C shows, it's not the only way to interpret those keywords. The English language gets in our way sometimes. Inflammable = flammable, and all that.

If "assume" is taken to mean an "assert" which is a pre-condition, then fine, we could always point to Frama-C/ACSL to show that D isn't some insane outlier and that we aren't a bunch of backwards idiots. It wouldn't be the way _I_ would do it, but that's just one man's opinion, and I'm biased by my use of SPARK, where pragma Assume has a clear meaning and works fine.

Given that we already have assert() in in{} contracts, to me the assume() keyword (in Walter's reckoning of the meaning) is redundant, and could be used for the axiomatic meaning. If in{} and out{} contracts ever disappeared from the language, I think an external solver would need something like assume()/require() to distinguish preconditions from normal assertions.

Having said all that, we probably need to use a little more mathematical rigor in how we define these, regardless of the words we choose to describe the concepts. It would also mean we'd need another keyword/UDA to use for an axiom, like, uh, axiom. But then we're getting into weird math terms that scare people off. So that's probably why "assume" is preferred by some of the others.

-Doc
Nov 11 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 12 November 2019 at 04:25:57 UTC, Doc Andrew wrote:
 so pre-conditions, as far as I can tell. This might be where 
 Walter is coming from in saying that Asserts and Assumes would 
 be the same in D. This is the only example I could find that 
 uses "assume" like this, but there may be others.
He meant that asserts are turned into assumes as an optimization. He has explained his position before in an older thread. It kinda might work in some settings if you can limit the assumed facts to a single function, but that is not how LLVM works. So it could work in dmd, but it's a bit dangerous. It would be a disaster in the two other D compilers. Microsoft C++ uses __assume for giving facts to the optimizer. D should stick with that.
Nov 11 2019
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/11/2019 8:25 PM, Doc Andrew wrote:
 I think _most_ people consider Assert vs Assume the way SPARK and F* do - as 
 seen here: 
 https://blogs.msdn.microsoft.com/francesco/2014/09/03/faq-1-what-is-the-difference-between-assert-and-assume/ 
Which says: "Intuitively, Assert is something that you expect the static checker be able to prove at compile time, whereas Assume is something that the static checker can rely upon, but it will not try to prove. At runtime Assert and Assume behave the same: If the condition is false, then the program fails – how it fails (e.g., throwing an exception, opening a dialog box, etc. can be completely customized with CodeContracts)." Therefore after the Assert, the prover can assume the Assert condition is true. If the prover can prove the Assert is always true, the runtime check can be safely removed.
Nov 12 2019
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12.11.19 10:12, Walter Bright wrote:
 On 11/11/2019 8:25 PM, Doc Andrew wrote:
 I think _most_ people consider Assert vs Assume the way SPARK and F* 
 do - as seen here: 
 https://blogs.msdn.microsoft.com/francesco/2014/09/03/faq-1-what-is-the-difference-between-assert-and-assume/ 
Which says: "Intuitively, Assert is something that you expect the static checker be able to prove at compile time, whereas Assume is something that the static checker can rely upon, but it will not try to prove.
Here, I was talking specifically using verification terminology, where the difference is obviously night and day. Then you dismissed my post by claiming in D they are the same. You might as well declare that from now on true is the same as false in D and hence you can reach any desired conclusion, but that won't change my understanding of logic, nor would I applaud your design choice.
 At runtime 
 Assert and Assume behave the same: If the condition is false, then the 
 program fails – how it fails (e.g., throwing an exception, opening a 
 dialog box, etc. can be completely customized with CodeContracts)."
 
 Therefore after the Assert, the prover can assume the Assert condition 
 is true. If the prover can prove the Assert is always true, the runtime 
 check can be safely removed.
(Technically, probably no, as last time I checked, the code contracts prover still implicitly assumed there cannot be integer overflow.) Maybe, but in D assertions get converted into optimizer assumptions in release mode _without_ any such proof being provided.
Nov 12 2019
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2019 2:26 AM, Timon Gehr wrote:
 At runtime Assert and Assume behave the same: If the condition is false, then 
 the program fails – how it fails (e.g., throwing an exception, opening a 
 dialog box, etc. can be completely customized with CodeContracts)."

 Therefore after the Assert, the prover can assume the Assert condition is 
 true. If the prover can prove the Assert is always true, the runtime check can 
 be safely removed.
(Technically, probably no, as last time I checked, the code contracts prover still implicitly assumed there cannot be integer overflow.)
D doesn't have a contracts prover, and if it did, I think it would be a mistake to ignore integer overflow. What would be of value is being able to prove integer overflow does not happen. Note that the Value Range Propagation logic in D is able to do some nice work that does take into account overflows. The global optimizer data flow analysis I wrote long ago would be broken if it didn't concern itself with overflows. In particular, loop induction variable optimization wouldn't work.
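For reference, a small example of the value range propagation mentioned above (function and variable names invented): the compiler tracks the possible range of an expression, so a masked int is accepted as a ubyte without a cast.

```
// Sketch of D's value range propagation; names are invented.
void vrpDemo(int x)
{
    ubyte low = x & 0xFF;   // accepted: the range of (x & 0xFF) is 0 .. 255
    // ubyte bad = x;       // rejected: an arbitrary int may not fit in a ubyte
}
```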
 Maybe, but in D assertions get converted into optimizer assumptions in release 
 mode _without_ any such proof being provided.
True, but D doesn't have a prover anyway :-) The user has complete control over whether asserts are checked at runtime or not.

The thing about -release mode is the result of being bludgeoned in the press for having "slow" code because a competitor didn't generate the checks and the ignorant journalist did not understand this and did not ask before going to press.

I last tried Spark many years ago, and discovered its contract prover was so limited I concluded it had little practical value. (If I recall, it couldn't deduce things like odd and even.) I'm sure it has improved since, but how much, I have no idea.
Nov 12 2019
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12.11.19 12:31, Walter Bright wrote:
 On 11/12/2019 2:26 AM, Timon Gehr wrote:
 At runtime Assert and Assume behave the same: If the condition is 
 false, then the program fails – how it fails (e.g., throwing an 
 exception, opening a dialog box, etc. can be completely customized 
 with CodeContracts)."

 Therefore after the Assert, the prover can assume the Assert 
 condition is true. If the prover can prove the Assert is always true, 
 the runtime check can be safely removed.
(Technically, probably no, as last time I checked, the code contracts prover still implicitly assumed there cannot be integer overflow.)
D doesn't have a contracts prover, and if it did, I think it would be a mistake to ignore integer overflow.
(Sure. Just saying that is what their prover does, so you actually can't eliminate the check based on that.)
 What would be of value is being able 
 to prove integer overflow does not happen. Note that the Value Range 
 Propagation logic in D is able to do some nice work that does take into 
 account overflows.
 ...
(I know. I contributed some code to that.)
 The global optimizer data flow analysis I wrote long ago would be broken 
 if it didn't concern itself with overflows. In particular, loop 
 induction variable optimization wouldn't work.
 ...
It's funny you would bring that up, because it indeed does not work: https://issues.dlang.org/show_bug.cgi?id=16268
 
 Maybe, but in D assertions get converted into optimizer assumptions in 
 release mode _without_ any such proof being provided.
True, but D doesn't have a prover anyway :-) ...
Yes, it does, just not for functional correctness. D has a safe fragment where the compiler automatically proves the absence of memory safety problems. Asserts are allowed in safe code. Assuming incorrect propositions in your optimizer can cause memory corruption. See the problem?
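A sketch of the concern, under the assumption stated earlier in the thread that -release lets the optimizer treat a failed assert as something that cannot happen (names invented; this illustrates the argument, not the behavior of any particular compiler):

```
// Sketch only: illustrates the hazard being described, with invented names.
@safe int pick(const int[] data, size_t i)
{
    assert(i < data.length);  // -release drops this check; if the optimizer
                              // additionally *assumes* it, the array bounds
                              // check below looks provably redundant
    return data[i];           // ...and removing it turns a bad index passed by
                              // the caller into an out-of-bounds read in @safe code
}
```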
 The user has complete control over whether asserts are checked at 
 runtime or not.
No. You either check, or you risk UB. Complete control looks different.
 The thing about -release mode is the result of being 
 bludgeoned in the press for having "slow" code because a competitor 
 didn't generate the checks and the ignorant journalist did not 
 understand this and did not ask before going to press.
 ...
Well, now you can be bludgeoned in the press for having "unsafe" code, because the ignorant journalist called into some library with an assertion in it. Even if they decide to ask, your response would make their article more juicy, not less.
 I last tried Spark many years ago, and discovered its contract prover 
 was so limited I concluded it had little practical value. (If I recall, 
 it couldn't deduce things like odd and even.) I'm sure it has improved 
 since, but how much, I have no idea.
 
I never tried Spark and it's entirely plausible that they were/are vastly behind the state of the art in verification research.
Nov 12 2019
prev sibling parent reply Doc Andrew <x x.com> writes:
On Tuesday, 12 November 2019 at 11:31:43 UTC, Walter Bright wrote:
 True, but D doesn't have a prover anyway :-)
...yet :) But in one sense, we already do. The run-time behavior of disabled assertions and checks basically just assumes you've worked through a "proof" of the program using your unit-testing.
 I last tried Spark many years ago, and discovered its contract 
 prover was so limited I concluded it had little practical 
 value. (If I recall, it couldn't deduce things like odd and 
 even.) I'm sure it has improved since, but how much, I have no 
 idea.
You might be asking too much from some of the solvers. It's worth slogging through some manual proofs with a proof assistant like Coq (highly-recommended resource: https://softwarefoundations.cis.upenn.edu/). After that, what an SMT solver is able to do on its own seems very nice :)

Having said that, SPARK still requires a frustrating amount of hand-holding at times. You're able to do things like this now:

```
-------------------------------------------
-- double.ads (specification)
-------------------------------------------
package double with SPARK_Mode => On is

   type SomeArrayType is array (Positive range <>) of Natural;

   function doubleArray(arr : in SomeArrayType) return SomeArrayType
     with Global => null,
          Depends => (doubleArray'Result => arr),
          -- set arbitrary range here in precondition to avoid overflow
          Pre => (for all idx in arr'Range => arr(idx) in 0..20000),
          -- prove that everything in the return array is even
          Post => (for all idx in arr'Range => isEven(doubleArray'Result(idx)));

   function isEven(num : in Natural) return Boolean
     with Contract_Cases => (num rem 2 = 0 => isEven'Result = True,
                             num rem 2 /= 0 => isEven'Result = False);

end double;

-------------------------------------------
-- double.adb (implementation)
-------------------------------------------
package body double is

   function doubleArray(arr : in SomeArrayType) return SomeArrayType
     with SPARK_Mode => On
   is
      doubled : SomeArrayType(arr'Range) := (others => 0);
   begin
      for i in arr'Range loop
         doubled(i) := 2 * arr(i);
         pragma Loop_Invariant(for all j in arr'First .. i => doubled(j) = 2 * arr(j));
      end loop;
      return doubled;
   end doubleArray;

   function isEven(num : in Natural) return Boolean
     with SPARK_Mode => On
   is
   begin
      if num rem 2 = 0 then
         return True;
      else
         return False;
      end if;
   end isEven;

end double;

-------------------------------------------
-- main.adb
-------------------------------------------
with Ada.Text_IO;
with double; use double;

procedure Main with SPARK_Mode => On is
   arr : SomeArrayType := (3, 7, 8, 0, 1);
   result : SomeArrayType := doubleArray(arr);
begin
   for i in result'Range loop
      Ada.Text_IO.Put(Integer'Image(result(i)) & ", ");
   end loop;
end Main;
```

The lame thing is the pragma Loop_Invariant. In order to be able to make any conclusions about the array as a whole, you've got to explain how the array looks at each point in the loop. Without that, the prover can't prove that your post-condition is true. My understanding is that SPARK needs a lot fewer of these than it used to, but it's still annoying.

Still, I think it's kind of neat that you can prove that kind of behavior at compile-time, no run-time checks or unit-tests required!

-Doc
Nov 12 2019
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2019 11:05 AM, Doc Andrew wrote:
 Still, I think it's kind of neat that you can prove that kind of behavior at 
 compile-time, no run-time checks or unit-tests required!
It is neat indeed.
Nov 12 2019
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12.11.19 05:25, Doc Andrew wrote:
 ...
 Other languages are pretty clear on the distinction between assumptions 
 and assertions. F* ("assume" - 
 https://www.rise4fun.com/FStar/tutorialcontent/guide#h24) & SPARK 
 ("pragma Assume" - 
 https://docs.adacore.com/spark2014-docs/html/ug/en/source/assertion_pragmas.html) 
 both make clear that assumptions are treated like axioms in their 
 provers, and must be used with caution. The Boogie IVL ("assume" - 
 http://chrisposkitt.com/files/teaching/SV-2015-AutoActiveVerification.pdf) 
 works the same way.
 ...
Yup.
 Coq has no "assume" keyword, but the "assumption" tactic takes a 
 hypothesis/premise and if it matches the goal (thing you're trying to 
 prove/assert), then it solves the goal).
 ...
Sure, it is shorthand for "there is an assumption that proves the goal". This is a completely standard usage.
 Frama-C/ANSI C Specification Language ("assumes" clause - 
 https://frama-c.com/download/acsl-1.14.pdf) just treats assumptions as 
 hypotheses/premises that imply some conclusion, so pre-conditions, as 
 far as I can tell. This might be where Walter is coming from in saying 
 that Asserts and Assumes would be the same in D.
It is not. Walter is arguing about run-time semantics and he uses some weird argument ("-release only removes the check, it doesn't remove the assumption") to justify the unsafe behavior of asserts with -release.
 This is the only 
 example I could find that uses "assume" like this, but there may be others.
 ...
The 'assumes' clause just says what assumptions the _function_ makes in order to satisfy the postcondition. It's saying "this function assumes ...", not "verifier, please assume ...". This is how preconditions operate. The function indeed assumes something, but it puts a burden on a caller who may not want to assume it.
 I think _most_ people consider Assert vs Assume the way SPARK and F* do 
 - as seen here: 
 https://blogs.msdn.microsoft.com/francesco/2014/09/03/faq-1-what-is-the-difference-between-assert-and-assume/ 
 
 
 But hey, as Frama-C shows, it's not the only way to interpret those 
 keywords.
Frama-C shows no such thing. I'm not confused at all by an 'assumes' clause in a function signature that signifies a precondition.
 The English language gets in our way sometimes. Inflammable = 
 flammable, and all that.
 
 If "assume" is taken to mean an "assert" which is a pre-condition,
Just to be clear: I have no problem at all with D's usage of 'assert' in preconditions. My point was that if you get rid of preconditions, 'assert' cannot express preconditions, so the prover won't know whether the callee or the caller is responsible for discharging a certain assertion.
 then 
 fine, we could always point to Frama-C/ACSL to show that D isn't some 
 insane outlier and that we aren't a bunch of backwards idiots.
The -release flag turns asserts into optimizer assumptions. In -release mode, a failing assertion in safe code is undefined behavior. (In the C sense.) Assume cannot be safe, while assert can, but Walter conflates them into the same construct.
 It 
 wouldn't be the way _I_ would do it, but that's just one man's opinion, 
 and I'm biased by my use of SPARK where pragma Assume has a clear 
 meaning and works fine.
 
 Given that we already have assert() in in{} contracts, to me the 
 assume() keyword (in Walter's reckoning of the meaning) is redundant, 
 and could be used for the axiomatic meaning.
 ...
We do have 'in' contracts, yes, but we were arguing specifically about a scenario where we don't have them.
 If in{} and out{} contracts ever disappeared from the language, I think 
 an external solver would need something like assume()/require() to 
 distinguish preconditions from normal assertions.
 ...
Yes, this was exactly my point.
Nov 12 2019
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2019 2:51 AM, Timon Gehr wrote:
 If in{} and out{} contracts ever disappeared from the language, I think an 
 external solver would need something like assume()/require() to distinguish 
 preconditions from normal assertions.
 ...
Yes, this was exactly my point.
The precondition would just be the sequence of assert()s before any other code, and the postcondition is the sequence that appears after any other code. It's not a challenge to pull that out of the AST.
Nov 12 2019
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12.11.19 12:35, Walter Bright wrote:
 On 11/12/2019 2:51 AM, Timon Gehr wrote:
 If in{} and out{} contracts ever disappeared from the language, I 
 think an external solver would need something like assume()/require() 
 to distinguish preconditions from normal assertions.
 ...
Yes, this was exactly my point.
The precondition would just be the sequence of assert()s before any other code, and the postcondition is the sequence that appears after any other code.
I understand, and this strategy has the weaknesses I pointed out.
 It's not a challenge to pull that out of the AST.
Of course it is not challenging to implement. The point is that it is not a great idea, not that it is hard to do.
Nov 12 2019
prev sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Friday, 8 November 2019 at 03:43:56 UTC, Walter Bright wrote:
 On 11/7/2019 7:34 AM, Paolo Invernizzi wrote:
 On Thursday, 7 November 2019 at 09:41:05 UTC, Timon Gehr wrote:
 On 07.11.19 08:34, drug wrote:
 On 11/7/19 3:00 AM, Walter Bright wrote:
 Not that this is necessarily a bad thing, as I also promote 
 the safe/unsafe code dichotomy.
And I'm totally agree to you. That's the good engineering approach. But the Rust community is overselling Rust safety and this confuses me at least.
http://plv.mpi-sws.org/rustbelt/
"... Any realistic languages targeting this domain in the future will encounter the same problem ..." I underline _realistic_  ... Sorry Walter, I'm all with Timon on that.
I don't see anything on that site that contradicts what I wrote. In particular: "Rust's core type system prohibits the aliasing of mutable state, but this is too restrictive for implementing some low-level data structures. Consequently, Rust's standard libraries make widespread internal use of "unsafe" blocks, which enable them to opt out of the type system when necessary. The hope is that such "unsafe" code is properly encapsulated, so that Rust's language-level safety guarantees are preserved. But due to Rust's reliance on a weak memory model of concurrency, along with its bleeding-edge type system, verifying that Rust and its libraries are actually safe will require fundamental advances to the state of the art." is saying the same thing.
Agreed, and I definitely appreciate your effort in pushing towards mechanical verification on that topic. What I mean is that there will also be a need for manual verification when libraries are forced to rely on unsafe code. To my understanding, the project is providing guidance in doing that (manual) verification of unsafe code.

My advice is to work tightly coupled with Timon, to have a clear understanding of what is theoretically feasible for mechanical checking and what is not, and to try to have clear borders around the two arenas. In that way, you can concentrate on implementing and polishing what is pragmatically doable in the compiler, in a mechanical way, avoiding efforts towards dead ends.

But hey, you have already surprised us all when you designed D ... I will be glad to see that your idea really works!

/Paolo
Nov 08 2019
prev sibling parent =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 11/07/2019 07:34 AM, Paolo Invernizzi wrote:

 http://plv.mpi-sws.org/rustbelt/
"... Any realistic languages
On a lighter note, that's an uncharacteristically nonacademic phrase.
  targeting this domain in the future
Is that an example of common ignorance towards D or do they accept D as being already safe, or perhaps the future is already here with D? :) On a more serious note, I very much appreciate Rust, their taking D seriously, and any work towards software safety. Ali
Nov 08 2019
prev sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 7 November 2019 at 00:00:28 UTC, Walter Bright wrote:
 

 D needs to make @safe the default, though.
In terms of a timeline, can I assume that there are no plans for this until a D 3.0?
Nov 07 2019
parent reply Mike Parker <aldacron gmail.com> writes:
On Thursday, 7 November 2019 at 11:33:35 UTC, jmh530 wrote:
 On Thursday, 7 November 2019 at 00:00:28 UTC, Walter Bright 
 wrote:
 

 D needs to make @safe the default, though.
In terms of a timeline, can I assume that there are no plans for this until a D 3.0?
https://github.com/dlang/DIPs/pull/166
Nov 07 2019
parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 7 November 2019 at 12:06:07 UTC, Mike Parker wrote:
 [snip]
 In terms of a timeline, can I assume that there are no plans 
 for this until a D 3.0?
https://github.com/dlang/DIPs/pull/166
Ah, I hadn't realized a DIP was in the works. Given that it is a Walter DIP, that means it has a high likelihood of being accepted...
Nov 07 2019
prev sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 6 November 2019 at 05:02:10 UTC, Walter Bright 
wrote:
 I'm going to speculate that Rust is a bit like pure functional 
 programming. It's a great paradigm, but is inconvenient for 
 that last mile of the code one needs to write.
Yes, but I guess one could say that for Go as well. On the other hand, the number of projects in Go seems to taper off, while the number of contributors is steadily increasing. So in the case of Go that might suggest that people are now able to find libraries that are suitable and start contributing to those instead of creating new ones. So, Go appears to have found a stable niche where it can become solid, despite being inconvenient in some ways and, quite frankly, despite missing features that are welcome when writing larger programs.

In the case of Rust, it is expanding, and it seems like people are in the process of figuring out how to best write libraries for it and where Rust is the best solution. I would expect Rust to do really well in all larger server scenarios in the long term, where you often deal with many objects and tree-like structures, require reliability, and also need memory efficiency. I would expect Rust to do less well in audio/video/embedded, unless some really good Rust frameworks appear that let you do audio/video/embedded in a high-level fashion. That could happen, e.g. if the Firefox team invests in reusable audio/video components for Rust. Swift isn't really a great language for audio/video either (for other reasons), but it works out because of the frameworks Apple provides as a foundation.

I don't see anything in the Rust semantics that prevents it from becoming a good semi-high-level language for writing applications and servers. It all hangs on whether it eventually provides a good multi-platform framework.
 This is why D has "pure" functions. You can slip in and out of 
 functional programming as you like.

 I'm working on an ownership/borrowing system for D. But the 
 idea is it will only be active in functions marked as @live.
The trend is going towards programmers expecting to spend less effort on speccing function signatures and have the IDE/compiler do more of that by static analysis, flow-typing and the like. That is an argument against Rust, but also against attributes...
Nov 06 2019
prev sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Monday, 4 November 2019 at 09:32:56 UTC, NaN wrote:
 You'd think with all the press rust seems to be getting lately 
 it'd be higher up than it is?
It is also surprising to me, but I was told TIOBE is based on job offers. Other indexes are often based on Github and StackOverflow presence, which ultimately depend on how online/hobbyist the community is. Popular languages are ranked differently depending on whether you measure "dark matter" developers that don't interact online, or users of more social websites like Github and StackOverflow that will be biased towards talkative people.
Nov 04 2019
prev sibling next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.
It favours older languages that are present on many websites. Tiobe measures the number of hits for the search "d programming" and "dlang programming". However, for D there are false positives, so they assume 90% confidence... Although they also remove "3-D programming" and "Dtrace". Anyway, this is not a good metric. It is strongly affected by search engine ranking strategies and online structure. (many small sites vs one big site, etc). Ola.
Nov 04 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 4 November 2019 at 21:09:47 UTC, Ola Fosheim Grøstad 
wrote:
 programming" and "dlang programming". However, for D there are 
 false positives, so they assume 90% confidence... Although they
FWIW, you'll see hits on things like: a) … b) … c) … d) programming is a… Or stuff like: …ph.d. programming… But they only look at the relevance for the first 100 hits when estimating the confidence metric, which isn't really compatible with how you can structure ranking in a search engine. You might get less confidence the further down the ranking list you get. E.g the first 100 could give 100% confidence, but then as the ranker is running out of "good matches" the number of false positives could increase significantly... Also... the search engine might do a cut-off or use a predetermined hit-list for popular searches... Too many potential sources for significant bias in Tiobe (and aggregating results from many search engines does not make that better, could even make it worse).
Nov 04 2019
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/4/2019 1:03 AM, RazvanN wrote:
 Good news, everyone!
 
 D has entered the Tiobe top 20 ranking of programming languages [1], landing
on 
 the 18th position. I have been keeping an eye on this index, and it is for the 
 first time that I see this happening.
 
 Cheers,
 RazvanN
 
 [1] https://www.tiobe.com/tiobe-index/
 
 
Great news! (It's been there before, until Tiobe changed their ranking method.)
Nov 04 2019
prev sibling next sibling parent reply Gregor =?UTF-8?B?TcO8Y2ts?= <gregormueckl gmx.de> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
I've been keeping an eye on the Tiobe index in the last couple of months and I started to notice that D had a slow but steady upwards trend. It was in 22nd and 21st place in the last two months. I'm a bit surprised that it jumped all the way to 18th place.

It's a good thing that this happened because the index has a lot of visibility. Journalists like to write a lot about it for some reason. However, this index is quite noisy and flawed in several ways. So this high rating for D might be a fluke and next month D might be back to 21st place... I'm really hoping that D is starting to show up in the top 20 more often than not in the coming months.

If you take other indices like the more complex Redmonk index, which takes its data from Stackoverflow and Github and is thus biased towards open source projects, you get totally different results. For example, the Redmonk index ranks Rust pretty high. This shows how arbitrary these rankings are in the end.
Nov 04 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 02:08:28 UTC, Gregor Mückl wrote:
 However, this index is quite noisy and flawed in several ways.
Right, the big dips in the graph are from Google adjusting their ranker... AFAIK.
Nov 05 2019
prev sibling next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 02:08:28 UTC, Gregor Mückl wrote:
 If you take other indices like the more complex Redmonk index 
 that takes its data from Stackoverflow and Github and is thus 
 biased towards open source projects, you get totally different 
 results. For example, the Redmonk index ranks rust pretty high. 
 This shows how arbitrary these rankings are in the end.
Redmonk is flawed as well, of course. A language that has worse documentation, is more difficult to use, or appeals more to beginners (like PHP) will have a stronger presence on Stackoverflow...

Github is probably the better metric, but counting the number of projects is silly. Some languages have a culture of miniature repositories. Others have lots of forking. But yeah, if you did static analysis on all the source code on Github then that could measure something reasonable.

Also, the counting on Github seems to be approximate. I get ±10% in repeated searches in their web interface, e.g. sometimes 7000 hits, sometimes 8000 hits…

Searches on github:

2k language:nim
8k language:d
10k language:julia
23k language:go
43k language:rust
759k language:c
890k language:c++
3400k language:java
Nov 05 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
Ok, so here is another github metric that is more interesting. 
Doing a "language:d" search and then look at the number of 
_users_:

Zig: 10
Pony: 25
Nim:  400
Crystal: 500
Ada:  600
Vala: 800
D:     2000
Pascal: 12000
Dart: 13000
Rust: 16000
Swift: 24000
Go:   128000
C++: 706000

Clearly, this metric favours older languages. It would have been 
better to look at new repos only, but I don't know if that is 
possible with the github interface.
Nov 05 2019
next sibling parent reply Daniel Kozak <kozzi11 gmail.com> writes:
On Tue, Nov 5, 2019 at 11:50 AM Ola Fosheim Grøstad via Digitalmars-d
<digitalmars-d puremagic.com> wrote:

I am using openhub interface:

https://www.openhub.net/languages/compare?utf8=✓&measure=commits&language_name%5B%5D=erlang&language_name%5B%5D=dmd&language_name%5B%5D=-1&language_name%5B%5D=-1&language_name%5B%5D=-1&commit=Update
Nov 05 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 11:45:59 UTC, Daniel Kozak wrote:
 I am using openhub interface:
Thanks, it seems to be buggy though:

https://www.openhub.net/languages/compare?utf8=%E2%9C%93&measure=contributors&language_name%5B%5D=dmd&language_name%5B%5D=golang&language_name%5B%5D=rust&language_name%5B%5D=cpp&language_name%5B%5D=-1&commit=Update

Sharp dip for Go, and sharp peak for C++?

("Nim" does not seem to be present as an option.)
Nov 05 2019
next sibling parent Daniel Kozak <kozzi11 gmail.com> writes:
On Tue, Nov 5, 2019 at 1:05 PM Ola Fosheim Grøstad via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Tuesday, 5 November 2019 at 11:45:59 UTC, Daniel Kozak wrote:
 I am using openhub interface:
Thanks, it seems to be buggy tough: https://www.openhub.net/languages/compare?utf8=%E2%9C%93&measure=contributors&language_name%5B%5D=dmd&language_name%5B%5D=golang&language_name%5B%5D=rust&language_name%5B%5D=cpp&language_name%5B%5D=-1&commit=Update Sharp dip for Go, and sharp peak for C++? ("Nim" does not seem to be present as an option.)
Yes, this is a common issue there, but if you look at it next month it will be OK. I am not sure why this is happening.
Nov 05 2019
prev sibling next sibling parent reply Daniel Kozak <kozzi11 gmail.com> writes:
On Tue, Nov 5, 2019 at 1:24 PM Daniel Kozak <kozzi11 gmail.com> wrote:
 On Tue, Nov 5, 2019 at 1:05 PM Ola Fosheim Grøstad via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Tuesday, 5 November 2019 at 11:45:59 UTC, Daniel Kozak wrote:
 I am using openhub interface:
Thanks, it seems to be buggy tough: https://www.openhub.net/languages/compare?utf8=%E2%9C%93&measure=contributors&language_name%5B%5D=dmd&language_name%5B%5D=golang&language_name%5B%5D=rust&language_name%5B%5D=cpp&language_name%5B%5D=-1&commit=Update Sharp dip for Go, and sharp peak for C++? ("Nim" does not seem to be present as an option.)
 Yes, this is a common issue there, but if you look at it next month it will be OK. I am not sure why this is happening.
OK, I guess the issue is that not all current data are processed yet. So they have statistics for D and for C++, but not for Go; because it is in percent and there are data for only a few languages, it seems like there are peaks and dips.
Nov 05 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 12:27:48 UTC, Daniel Kozak wrote:
 OK, I guess the issue is that not all current data are processed yet. So they have statistics for D and for C++, but not for Go; because it is in percent and there are data for only a few languages, it seems like there are peaks and dips.
Alright! It is an interesting graphing tool, for sure. I didn't find "Swift" in the menu either, btw.
Nov 05 2019
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 12:04:25 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 5 November 2019 at 11:45:59 UTC, Daniel Kozak wrote:
 I am using openhub interface:
Thanks, it seems to be buggy tough:
Ok, so OpenHub is flawed for 2019, but from 2016-2019 we can see that for Ada, Vala, HaXe and D there has been stagnation. For Pascal there has been a decline, and for OCaml growth:

https://www.openhub.net/languages/compare?utf8=%E2%9C%93&measure=contributors&language_name%5B%5D=ada&language_name%5B%5D=crystal&language_name%5B%5D=dmd&language_name%5B%5D=-1&language_name%5B%5D=ocaml&language_name%5B%5D=pascal&language_name%5B%5D=vala&language_name%5B%5D=-1&commit=Update

We can also see that Go and Rust have linear growth (relative):

https://www.openhub.net/languages/compare?utf8=%E2%9C%93&measure=contributors&language_name%5B%5D=csharp&language_name%5B%5D=dmd&language_name%5B%5D=golang&language_name%5B%5D=-1&language_name%5B%5D=rust&language_name%5B%5D=-1&commit=Update

Also, Kotlin's growth is cutting into Java, while Scala experiences stagnation, but Java is still massive:

https://www.openhub.net/languages/compare?utf8=%E2%9C%93&measure=contributors&language_name%5B%5D=java&language_name%5B%5D=kotlin&language_name%5B%5D=scala&language_name%5B%5D=-1&language_name%5B%5D=-1&commit=Update

Nothing too surprising, I guess.
Nov 05 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 12:35:11 UTC, Ola Fosheim Grøstad 
wrote:
 Nothing too surprising, I guess.
I guess C++/C is worth noticing as well:

https://www.openhub.net/languages/compare?utf8=%E2%9C%93&measure=contributors&language_name%5B%5D=cpp&language_name%5B%5D=c&language_name%5B%5D=-1&commit=Update

So C is experiencing stagnation, but C++ apparently has linear growth from C++14 onwards. 10% of all contributors on Github committing C++ code seems a bit too much. Probably some overcounting: people who commit non-C++ code to a mixed-language project will count as a C++ contributor on Open Hub. Maybe more multi-language projects have started to use C++ for low-level libraries?

Dunno, but regardless, continuous language improvements seem to pay off in terms of increased adoption.
Nov 05 2019
prev sibling next sibling parent reply user5678 <user5678 1234.de> writes:
On Tuesday, 5 November 2019 at 10:49:43 UTC, Ola Fosheim Grøstad 
wrote:
 Ok, so here is another github metric that is more interesting. 
 Doing a "language:d" search and then look at the number of
This GH feature doesn't work well IMO. I had an account with several D language projects, and once I read through all the results (something like 100 pages!) and wasn't listed.
Nov 05 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 19:12:31 UTC, user5678 wrote:
 This GH feature doesn't work well IMO. I had an account with 
 several D language projects and once I read all the results 
 (someting like 100 pages !) and wasn't listed.
:-/ Hopefully all languages are affected in the same way...
Nov 05 2019
prev sibling parent reply GoaLitiuM <goalitium dforums.mail.kapsi.fi> writes:
On Tuesday, 5 November 2019 at 10:49:43 UTC, Ola Fosheim Grøstad 
wrote:
 Ok, so here is another github metric that is more interesting. 
 Doing a "language:d" search and then look at the number of 
 _users_
How does this even work? I tried this and added myself to the search, and couldn't find myself listed there despite working in multiple D-related repos, both mine and others'. It does list all my PRs and commits, but not me as a user.
Nov 06 2019
next sibling parent user5678 <user5678 1234.de> writes:
On Wednesday, 6 November 2019 at 15:40:11 UTC, GoaLitiuM wrote:
 On Tuesday, 5 November 2019 at 10:49:43 UTC, Ola Fosheim 
 Grøstad wrote:
 Ok, so here is another github metric that is more interesting. 
 Doing a "language:d" search and then look at the number of 
 _users_
How does this even work? I tried this and added myself to the search and couldn't find me listed there despite working in multiple D related repos both mine and others. It does list all my PRs and commits but not me as an user.
I suspect that mentioning dlang in your public bio is a factor in inclusion.
Nov 06 2019
prev sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 6 November 2019 at 15:40:11 UTC, GoaLitiuM wrote:
 How does this even work? I tried this and added myself to the 
 search and couldn't find me listed there despite working in 
 multiple D related repos both mine and others. It does list all 
 my PRs and commits but not me as an user.
Good question. Github is a rather mysterious entity sometimes… I think the above-mentioned Open Hub is a much better resource for measuring the "social evolution" of programming languages.
Nov 06 2019
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/4/2019 6:08 PM, Gregor Mückl wrote:
 However, this index is quite noisy and flawed in several ways. So this high 
 rating for D might be a fluke and next month D might be back to 21st place... 
 I'm really hoping that D is starting to show up in the top 20 more often than 
 not in the coming months.
The great advantage D has is our dogged persistence.
Nov 05 2019
prev sibling next sibling parent reply Mark Rousell <mark.rousell signal100.com> writes:
On 04/11/2019 09:03, RazvanN via Digitalmars-d wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages [1],
 landing on the 18th position. I have been keeping an eye on this
 index, and it is for the first time that I see this happening.
Not that I am advocating gaming the Tiobe ranking protocol but I note that "ratings are based on the number of skilled engineers world-wide, courses and third party vendors. Popular search engines such as Google, Bing, Yahoo!, Wikipedia, Amazon, YouTube and Baidu are used to calculate the ratings."

The above being so, a quick win for a language that wished to raise its Tiobe ranking and thus its general profile would be for its community to post lots of "course" (i.e. tutorial) videos on YouTube. It is possible that a series of videos, constituting a course, might be better still. This would potentially increase the ranking directly based upon YouTube hits.

If language community blogs and forums could then link to these YouTube courses then this would, in turn, increase the language's ranking on Google, Bing, and so on. Wikipedia articles linking to the YouTube courses would also be beneficial. Just a thought.

It is ironic that an increased Tiobe ranking (amongst others) can, in and of itself, increase the genuine user base of a language. (As, perhaps, with the community hype around Rust).

--
Mark Rousell
Nov 04 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 5 November 2019 at 04:00:02 UTC, Mark Rousell wrote:
 The above being so, a quick win for a language that wished to 
 raise its Tiobe ranking and thus its general profile would be 
 for its community to post lots of "course" (i.e. tutorial) 
 videos on YouTube.
That depends on the ranker. Seems like Google is trying to reduce the number of hits from the same website. So, with that in mind you would be better off getting bloggers on various blogging sites to write about it in different languages. See:

https://www.tiobe.com/tiobe-index/programming-languages-definition/

If you look at the description of Tiobe it seems like they aggregate Google hits from various languages. So the best strategy seems to be to game Google's ranker and do it in all the languages Tiobe are tracking:

Google.com: 7.69%
Google.com.hk: 5.85%
Google.co.in: 5.23%
Google.co.jp: 4.00%
Google.com.br: 3.69%
Google.de: 3.38%
Google.ru: 2.77%
Google.fr: 2.15%
Google.it: 1.85%
Google.es: 1.54%
Google.cn: 1.23%
Google.com.tw: 0.92%
Google.com.mx: 0.31%
Nov 05 2019
prev sibling next sibling parent reply user5678 <user5678 1234.de> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
Tiobe... a great indication of how biased we are. When the language they like progresses, people forget that they've said about 1 million times that the Tiobe index is BS.
Nov 05 2019
parent drug <drug2004 bk.ru> writes:
On 11/5/19 10:00 PM, user5678 wrote:
 Tiobe... a great indication of how biased we are. When the language they 
 like progresses, people forget that they've said about 1 million times that 
 the Tiobe index is BS.
I wouldn't say that the D community is biased, but in general I agree with you.
Nov 06 2019
prev sibling next sibling parent reply RazvanN <razvan.nitu1305 gmail.com> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
It seems that D continues its ascent [1], while Rust has seen a fall in popularity (last month it was ranked 25th, while now it is the 31st).
Dec 09 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 9 December 2019 at 11:01:24 UTC, RazvanN wrote:
 It seems that D continues its ascent [1], while Rust has 
 seen a fall in popularity (last month it was ranked 25th, while 
 now it is the 31st).

 [1] https://www.tiobe.com/tiobe-index/
Expect more changes as Google is supposed to be introducing more capable bots at the end of 2019.
Dec 09 2019
prev sibling parent reply Gregor =?UTF-8?B?TcO8Y2ts?= <gregormueckl gmx.de> writes:
On Monday, 9 December 2019 at 11:01:24 UTC, RazvanN wrote:
 It seems that D continues its ascent [1], while Rust has 
 seen a fall in popularity (last month it was ranked 25th, while 
 now it is the 31st).

 [1] https://www.tiobe.com/tiobe-index/
The exact placement of Rust in the list doesn't say much. This group of languages has such a low signal to noise ratio that ordering them is arguably impossible from the available information. I am surprised that D didn't fall back down again, though.
Dec 09 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 9 December 2019 at 14:42:32 UTC, Gregor Mückl wrote:
 I am surprised that D didn't fall back down again, though.
TIOBE is silly. It matches "d) programming" as in "a) swimming b) running c) eating d) programming". It changes when Google's ranking engine changes. And Google is pruning down quite heavily to get more diverse search results... So, heavily biased. Worse than noise.
Dec 09 2019
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Monday, 9 December 2019 at 15:03:07 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 9 December 2019 at 14:42:32 UTC, Gregor Mückl wrote:
 I am surprised that D didn't fall back down again, though.
TIOBE is silly. It matches "d) programming" as in "a) swimming b) running c) eating d) programming". It changes when Google's ranking engine changes. And Google is pruning down quite heavily to get more diverse search results... So, heavily biased. Worse than noise.
Silly as it might be, that is one of the references many MBAs across the enterprise world use when deciding if adopting technology X is worthwhile doing. The other one being the technology radars produced by consulting companies that are just as silly.
Dec 09 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 9 December 2019 at 15:18:46 UTC, Paulo Pinto wrote:
 The other one being the technology radars produced by 
 consulting companies that are just as silly.
Well, that makes it easy to figure out who to avoid. Considering TypeScript is listed at position 40 and Assembly is listed at position 14.
Dec 09 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 9 December 2019 at 16:50:28 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 9 December 2019 at 15:18:46 UTC, Paulo Pinto wrote:
 The other one being the technology radars produced by 
 consulting companies that are just as silly.
Well, that makes it easy to figure out who to avoid. Considering TypeScript is listed at position 40 and Assembly is listed at position 14.
(TypeScript was at position 37, sry) I assume the reason is (in addition to Google bias) that people don't write "TypeScript programming language". Assembly probably ranks high because people write things like "Assembly programming was popular in the 1980s". Sigh...
Dec 09 2019
prev sibling next sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Monday, 9 December 2019 at 15:18:46 UTC, Paulo Pinto wrote:
 On Monday, 9 December 2019 at 15:03:07 UTC, Ola Fosheim Grøstad 
 wrote:
 On Monday, 9 December 2019 at 14:42:32 UTC, Gregor Mückl wrote:
 I am surprised that D didn't fall back down again, though.
TIOBE is silly. It matches "d) programming" as in "a) swimming b) running c) eating d) programming". It changes when Google's ranking engine changes. And Google is pruning down quite heavily to get more diverse search results... So, heavily biased. Worse than noise.
Silly as it might be, that is one of the references many MBAs across the enterprise world use when deciding whether adopting technology X is worthwhile. The other one being the technology radars produced by consulting companies that are just as silly.
Oh God ... really? :-O
Dec 09 2019
prev sibling parent reply Martin Tschierschke <mt smartdolphin.de> writes:
On Monday, 9 December 2019 at 15:18:46 UTC, Paulo Pinto wrote:
 On Monday, 9 December 2019 at 15:03:07 UTC, Ola Fosheim Grøstad 
 wrote:
 On Monday, 9 December 2019 at 14:42:32 UTC, Gregor Mückl wrote:
 I am surprised that D didn't fall back down again, though.
TIOBE is silly. It matches "d) programming" as in "a) swimming b) running c) eating d) programming". It changes when Google's ranking engine changes. And Google is pruning down quite heavily to get more diverse search results... So, heavily biased. Worse than noise.
Silly as it might be, that is one of the references many MBAs across the enterprise world use when deciding whether adopting technology X is worthwhile. The other one being the technology radars produced by consulting companies that are just as silly.
I think this is an important point. Here you can see the change over time of TIOBE's percentage value for D: https://www.tiobe.com/tiobe-index/d/ Any comment on this curve? Is it possible to match certain events to this shape?
Dec 10 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 10 December 2019 at 10:04:17 UTC, Martin Tschierschke 
wrote:
 https://www.tiobe.com/tiobe-index/d/

 Any comment on this curve?
 Is it possible to match certain events to this shape?
Yes, changes to Google's filtering mechanism, and possibly that they added new languages. TIOBE only counts the number of pages search engines return at any given time. It is not like people suddenly delete all their web pages... A quick drop reflects something mechanical; it is not measuring what people do.

Google changes frequently (many times a year). You can suddenly lose over 50% of your indexed pages because Google starts to classify them as covering the same content, or they change their static quality scoring of pages. And this propagates very slowly throughout their system, as it takes time for them to crawl the entire web. A language like Java that is frequently mentioned as the "Java programming language" by mainstream news agencies throughout the world will get more pages with high static quality scores spread across many sites, so it is more likely to be shown and thus generates more hits.

Google is not evaluated on the number of pages it returns for a query term, but on the relevance of the first few pages of query results. E.g.: https://en.wikipedia.org/wiki/Discounted_cumulative_gain
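
To make the linked metric concrete: discounted cumulative gain rewards relevant results near the top of the result list and discounts relevance the further down it appears, so the sheer number of matching pages plays no role. A minimal sketch in D, with made-up relevance scores just for illustration:

import std.math : log2;
import std.stdio : writeln;

// DCG over the first p results: sum of rel_i / log2(i + 1), with 1-based i,
// so a relevant hit in position 1 is worth more than one in position 10.
double dcg(const double[] relevances)
{
    double sum = 0.0;
    foreach (i, rel; relevances)
        sum += rel / log2(i + 2.0); // i is 0-based here, hence i + 2
    return sum;
}

void main()
{
    // hypothetical graded relevance of the first five results
    writeln(dcg([3.0, 2.0, 3.0, 0.0, 1.0]));
}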
Dec 10 2019
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 10 December 2019 at 10:04:17 UTC, Martin Tschierschke 
wrote:
 Any comment on this curve?
 Is it possible to match certain events to this shape?
More interesting curve to discuss: https://trends.google.com/trends/explore?date=all&q=dlang Basically no change over the past 5-6 years for active people searching for "dlang".
Dec 10 2019
parent reply Jeff <jeff mail33.org> writes:
On Tuesday, 10 December 2019 at 13:27:04 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 10 December 2019 at 10:04:17 UTC, Martin 
 Tschierschke wrote:
 Any comment on this curve?
 Is it possible to match certain events to this shape?
More interesting curve to discuss: https://trends.google.com/trends/explore?date=all&q=dlang Basically no change over the past 5-6 years for active people searching for "dlang".
I never understood why people here seem to think that D is getting more popular. If D were more popular, you would see it reflected in a key core component. All you need to do is look at the growth of code.dlang.org and you see that D has the same growth as before. You can even track it on websites like www.modulecounts.com, where its growth is the same slow one package per day it has been for the last three years. No spikes, no sudden change in growth...

You would expect that with an increase in downloads (per the D download statistics), blogs, and increased marketing, the increased interest would produce a growth spike, like it does for other languages. But it does not. What does that tell you? People try D and they discuss D, but many simply do not progress beyond trying it. D is not some magical language where none of its users contribute or write the missing modules. Just compare the package growth of Rust or Julia: that is what a surge in popularity looks like. Maybe compared to 10 years ago D is more popular, but when you are looking at an ultra-low bar, it is easy to feel that you are doing well.

TIOBE has been fairly unreliable for years because it relies on Google results. The less unique a programming language's name is, the more unpredictable the results. Last year Crystal was at 38 because "crystal" + "programming" often is not about the programming language. Yikes! Even the Swift / Objective-C results are nonsense because they are the same market, yet now both results together are some 75% bigger than the actual original Objective-C market share. Legacy results keep Objective-C at a higher position, giving a wrong impression.

A better metric has always been job postings, GitHub/GitLab activity, module/package gains, etc. Show me the D jobs? In Europe there are none. Module growth is the same as it was three years ago (turtle speed). In essence, while D has been on a charm offensive, the data shows that people may be trying D out (hence the increase in downloads), but this is not reflected in actual contributors to D. Even if new D users were very selfish and did not contribute, they cannot all be 100% selfish. Even a 10% gain, with an increase in users, would show up in projects, modules, contributions, etc. Yet it does not. So the essence is that D has an issue keeping those new users interested beyond trying it out. This is something that I feel is ignored in a lot of these conversations.
Dec 11 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 12 December 2019 at 02:26:57 UTC, Jeff wrote:
 growth as before. You can even track it on websites like 
 www.modulecounts.com, where its growth is the same slow one 
 package per day it has been for the last three years.
Interesting site. Only a few languages show sustained exponential growth; most grow exponentially at the beginning but then turn linear. Nim has the same growth rate as D, but D seems to show a slight decline in 2019.
Dec 12 2019
prev sibling parent Laeeth isharc <laeeth laeeth.com> writes:
On Thursday, 12 December 2019 at 02:26:57 UTC, Jeff wrote:
 A better metric has always been job postings, GitHub/GitLab 
 activity, module/package gains, etc. Show me the D jobs? In 
 Europe there are none.
I'm hiring at least 25 more people, and all of those jobs could involve working in D at least some of the time, the majority all of the time. I haven't advertised for most of the people I have hired to work in D because I haven't had time - it's a lower return on invested time - and so far I haven't needed to. Is Britain in Europe? I think so. And I do have people working remotely from mainland Europe. If one were advertising to hire people, then probably you would advertise for C++ developers. What would be the point in saying you need to know D in an advert targeting a broad audience? I haven't used headhunters much, but when I have, they say an investment company is looking for people who know C++.
Dec 12 2019
prev sibling next sibling parent Gregor =?UTF-8?B?TcO8Y2ts?= <gregormueckl gmx.de> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
I just rechecked for fun: D climbed to 17th place in that index this month. The more I see D climb rapidly in that index, the less I trust those results. :D Does a high listing in that index bring any measurable effects with it?
Jan 07 2020
prev sibling next sibling parent reply p.shkadzko <p.shkadzko gmail.com> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 Good news, everyone!

 D has entered the Tiobe top 20 ranking of programming languages 
 [1], landing on the 18th position. I have been keeping an eye 
 on this index, and it is for the first time that I see this 
 happening.

 Cheers,
 RazvanN

 [1] https://www.tiobe.com/tiobe-index/
In the top 20 there are also Delphi/Object Pascal, Assembly and Visual Basic .NET :) I believe their positions barely reflect the popularity of these languages. Funny that D's TIOBE popularity growth kind of coincided with me finishing reading "Programming in D" by Ali and starting to spend a lot of time online, heavily using search engines on a daily basis. Just saying :D It is still nice, index is index and any publicity is good for the language.
Jan 07 2020
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 1/7/20 10:50 AM, p.shkadzko wrote:> On Monday, 4 November 2019 at 
09:03:04 UTC, RazvanN wrote:

 "Programming in D" by Ali
 index is index and any publicity is good
Completely misunderstanding this thread, "Ali" "promotes" the following "index", which he finds to be very helpful: http://ddili.org/ders/d.en/ix.html Ali
Jan 07 2020
parent reply p.shkadzko <p.shkadzko gmail.com> writes:
On Tuesday, 7 January 2020 at 19:21:43 UTC, Ali Çehreli wrote:
 On 1/7/20 10:50 AM, p.shkadzko wrote:> On Monday, 4 November 
 2019 at 09:03:04 UTC, RazvanN wrote:

 "Programming in D" by Ali
 index is index and any publicity is good
Completely misunderstanding this thread, "Ali" "promotes" the following "index", which he finds to be very helpful: http://ddili.org/ders/d.en/ix.html Ali
Hmm, for some reason in Firefox clicking on any link in this index takes forever to load the indexed section. I prefer http://ddili.org/ders/d.en/index.html and manual lookup ;)
Jan 07 2020
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 1/7/20 11:55 AM, p.shkadzko wrote:
 On Tuesday, 7 January 2020 at 19:21:43 UTC, Ali Çehreli wrote:
   http://ddili.org/ders/d.en/ix.html
 Hmm, for some reason in Firefox clicking on any link in this 
 index takes forever to load the indexed section
Thanks for letting me know. However, they should all be simple links with anchors. I hope others can debug and perhaps I can do something on the server side. Ali
Jan 07 2020
parent Eugene Wissner <belka caraus.de> writes:
On Tuesday, 7 January 2020 at 20:28:10 UTC, Ali Çehreli wrote:
 On 1/7/20 11:55 AM, p.shkadzko wrote:
 On Tuesday, 7 January 2020 at 19:21:43 UTC, Ali Çehreli wrote:
   http://ddili.org/ders/d.en/ix.html
 Hmm, for some reason in Firefox clicking on any link in this 
 index takes forever to load the indexed section
Thanks for letting me know. However, they should all be simple links with anchors. I hope others can debug and perhaps I can do something on the server side. Ali
Works fine on Firefox Developer Edition 73.0b2.
Jan 08 2020
prev sibling parent reply IGotD- <nise nise.com> writes:
On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 ...

 [1] https://www.tiobe.com/tiobe-index/
As of March 2020, D has dropped out of the top 20 and is now in 23rd place. What does that tell us? Not much really, other than that sub-1% language positions are noisy. I'm surprised C is still in second place. What can we do to kill it, really?
Mar 06 2020
next sibling parent reply SrMordred <patric.dexheimer gmail.com> writes:
On Friday, 6 March 2020 at 15:27:27 UTC, IGotD- wrote:
 On Monday, 4 November 2019 at 09:03:04 UTC, RazvanN wrote:
 ...

 [1] https://www.tiobe.com/tiobe-index/
As of March 2020, D has dropped out of the top 20 and is now in 23rd place. What does that tell us? Not much really, other than that sub-1% language positions are noisy. I'm surprised C is still in second place. What can we do to kill it, really?
Easier C interop. I love D, but a lot of the time I start to think of returning to C. I don't like having to manage bindings; I need something faster, where I can just use my C headers directly and work like I would in C. I love Atila's dpp work, but it's not *there* yet. I would like something like how Zig handles this issue. I think that every C programmer who wants to switch languages will eventually go to Rust (if they want more safety) or to Zig (easier interop + more power). Today, D + betterC has only the "more power" part.
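
For readers who have not seen it, the "more power" part looks roughly like this; a minimal betterC sketch (the file name is just an example, and it assumes dmd or ldc with the -betterC switch):

// hello_betterc.d -- build with: dmd -betterC hello_betterc.d
import core.stdc.stdio : printf;

// With -betterC there is no D runtime and no GC, so the C entry
// point is used and only the C standard library is linked in.
extern (C) int main()
{
    printf("Hello from betterC\n");
    return 0;
}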
Mar 06 2020
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Friday, 6 March 2020 at 16:37:08 UTC, SrMordred wrote:
 [snip]

 Easier C interop. I love D, but a lot of the time I start to 
 think of returning to C. I don't like having to manage 
 bindings; I need something faster, where I can just use my C 
 headers directly and work like I would in C. I love Atila's dpp 
 work, but it's not *there* yet.

 [snip]
With respect to C interop specifically, what does dpp need to be *there*?
Mar 06 2020
parent reply SrMordred <patric.dexheimer gmail.com> writes:
On Friday, 6 March 2020 at 20:14:38 UTC, jmh530 wrote:
 On Friday, 6 March 2020 at 16:37:08 UTC, SrMordred wrote:
 [snip]

 Easier C interop. I love D, but a lot of the time I start to 
 think of returning to C. I don't like having to manage 
 bindings; I need something faster, where I can just use my C 
 headers directly and work like I would in C. I love Atila's dpp 
 work, but it's not *there* yet.

 [snip]
With respect to C interop specifically, what does dpp need to be *there*?
There are still some things that fail to parse (but that's a minor issue). The real "problem" I see is that it is a separate project. I have to clone it, build a working executable (which had problems on Windows the first time I tried, but that was solved), and then write a .dpp file (instead of just working with D files). I'm not sure how it works with header-only libraries. It's not that we can't do nice stuff with it, but these little frictions slowly build up, and you start to look for alternatives. And it feels like it lives outside the realm of D (where do I manage all this? Post scripts in dub files? Makefiles? Caching?). I asked a few days ago about an integration of dpp with dub that I heard about last year (which could potentially reduce this friction), but nobody answered me.
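
For reference, the workflow being described looks roughly like this (a sketch based on the dpp README; stdio.h stands in for whatever C header you actually need, and it assumes the d++ executable is already built and on the PATH):

// hello.dpp -- a .dpp file is ordinary D plus C-preprocessor #include lines
#include <stdio.h>

void main()
{
    // printf comes straight from the translated C header
    printf("Hello from D via dpp\n".ptr);
}

Building is then a matter of running d++ hello.dpp, which preprocesses the header, translates the declarations to D, and hands the result to the D compiler.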
Mar 06 2020
next sibling parent Andre Pany <andre s-e-a-p.de> writes:
On Saturday, 7 March 2020 at 02:49:43 UTC, SrMordred wrote:
 On Friday, 6 March 2020 at 20:14:38 UTC, jmh530 wrote:
 On Friday, 6 March 2020 at 16:37:08 UTC, SrMordred wrote:
 [snip]

 Easier C interop. I love D, but a lot of the time I start to 
 think of returning to C. I don't like having to manage 
 bindings; I need something faster, where I can just use my C 
 headers directly and work like I would in C. I love Atila's dpp 
 work, but it's not *there* yet.

 [snip]
With respect to C interop specifically, what does dpp need to be *there*?
There are still some things that fail to parse (but that's a minor issue). The real "problem" I see is that it is a separate project. I have to clone it, build a working executable (which had problems on Windows the first time I tried, but that was solved), and then write a .dpp file (instead of just working with D files). I'm not sure how it works with header-only libraries. It's not that we can't do nice stuff with it, but these little frictions slowly build up, and you start to look for alternatives. And it feels like it lives outside the realm of D (where do I manage all this? Post scripts in dub files? Makefiles? Caching?). I asked a few days ago about an integration of dpp with dub that I heard about last year (which could potentially reduce this friction), but nobody answered me.
New dub features are usually implemented by the developers most eager to have them. That makes total sense in my opinion, as these developers know what they want, how they want it, and what they don't want. That said, I don't think dub needs to be enhanced. Dub supports a lightweight plugin mechanism: dependencies that make use of pre/postGenerateCommands. These commands are always executed. This hasn't been investigated in detail so far, but someone could try to build a dpp plugin that way and check whether everything already works or whether some small dub enhancement is missing. Kind regards Andre
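
As a rough sketch of what such a plugin-style setup could look like with today's dub (preGenerateCommands and "dub run dpp" are standard dub features; the file name and the exact d++ flag are assumptions to be checked against the dpp documentation):

name "myapp"
description "Regenerate dpp bindings before every build"
// Assumption: the exact d++ flags should be verified against the dpp
// docs; the idea is only to hook binding generation into dub's
// standard preGenerateCommands step.
preGenerateCommands "dub run dpp -- --preprocess-only source/bindings.dpp"

Whether the generated .d file is then picked up automatically or needs to be listed explicitly is exactly the kind of small detail that would need checking.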
Mar 07 2020
prev sibling parent reply FeepingCreature <feepingcreature gmail.com> writes:
On Saturday, 7 March 2020 at 02:49:43 UTC, SrMordred wrote:
 On Friday, 6 March 2020 at 20:14:38 UTC, jmh530 wrote:
 With respect to C interop specifically, what does dpp need to 
 be *there*?
There are still some things that fail to parse (but that's a minor issue). The real "problem" I see is that it is a separate project. I have to clone it, build a working executable (which had problems on Windows the first time I tried, but that was solved), and then write a .dpp file (instead of just working with D files). I'm not sure how it works with header-only libraries. It's not that we can't do nice stuff with it, but these little frictions slowly build up, and you start to look for alternatives. And it feels like it lives outside the realm of D (where do I manage all this? Post scripts in dub files? Makefiles? Caching?). I asked a few days ago about an integration of dpp with dub that I heard about last year (which could potentially reduce this friction), but nobody answered me.
When I made my D-lite language Neat back in 2009ish, one of the things I was most proud of was the C interop. You could just write `import c.SDL` and it would go look for SDL.h, give it to gcc to preprocess, fish out defines, structs and function declarations and insert them all into a module. There was even magic glue that made the common case of "void foo_fun(FooState* state, int i)" callable as "state.fun(i)", with zero extra effort required. It was very slick when it worked :) To be fair, I could only do this because I was the only user, and I could tweak the C parser as required; it never really worked *reliably* for a new header. Still, the sheer ease of use is something I do miss. If D ever gets macros, or I get my own "D with macros" language going, whichever of those very unlikely events comes first, this is definitely something that should be a macro, but with a CTFE macro something like `import include.SDL` seems very doable.
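
Part of that convenience exists in plain D already: once a C-style function is declared extern(C), UFCS lets you call it with member syntax on its first argument (without Neat's prefix stripping, so the full function name stays). A self-contained sketch reusing the hypothetical names from the post above:

// ufcs_c_style.d -- member-call syntax over a C-style API via UFCS
extern (C)
{
    struct FooState { int counter; }

    // In real code this would be a binding to a C library; it is
    // written in D here only to keep the sketch self-contained.
    void foo_fun(FooState* state, int i) { state.counter += i; }
}

void main()
{
    auto state = new FooState;   // FooState*
    state.foo_fun(3);            // UFCS: rewritten to foo_fun(state, 3)
    assert(state.counter == 3);
}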
Mar 09 2020
parent reply SrMordred <patric.dexheimer gmail.com> writes:
On Monday, 9 March 2020 at 12:27:34 UTC, FeepingCreature wrote:
 When I made my D-lite language Neat back in 2009ish, one of the 
 things I was most proud of was the C interop. You could just 
 write `import c.SDL` and it would go look for SDL.h, give it to 
 gcc to preprocess, fish out defines, structs and function 
 declarations and insert them all into a module. There was even 
 magic glue that made the common case of "void foo_fun(FooState* 
 state, int i)" callable as "state.fun(i)", with zero extra 
 effort required. It was very slick when it worked :)
That's really nice! Yeah, this kind of "no-effort interop" is something that allows people to change languages but keep their old libraries and code alive, which in turn lets them keep working in the new language (increasing user retention). I feel that is an interesting thing for D to pursue. Out of curiosity, are you working with D, or are you working on/exploring other languages (or a custom language, maybe)? I feel that the closest thing to your D-lite language's C interop that I know of today is Zig.
Mar 09 2020
parent reply Atila Neves <atila.neves gmail.com> writes:
On Tuesday, 10 March 2020 at 04:14:33 UTC, SrMordred wrote:
 On Monday, 9 March 2020 at 12:27:34 UTC, FeepingCreature wrote:
[...]
That's really nice! Yeah, this kind of "no-effort interop" is something that allows people to change languages but keep their old libraries and code alive, which in turn lets them keep working in the new language (increasing user retention). I feel that is an interesting thing for D to pursue. Out of curiosity, are you working with D, or are you working on/exploring other languages (or a custom language, maybe)? I feel that the closest thing to your D-lite language's C interop that I know of today is Zig.
https://github.com/atilaneves/dpp
Mar 10 2020
parent reply SrMordred <patric.dexheimer gmail.com> writes:
On Tuesday, 10 March 2020 at 18:03:31 UTC, Atila Neves wrote:

 https://github.com/atilaneves/dpp
Hello Atila, I think you missed some part of the discussion where I cited dpp ;) My point was that, although I love the dpp project, it doesn't feel like part of the D language. It's an external project that newcomers to the language will simply not see, it's not yet completely reliable (but it works in most cases!), and it requires some external work: create a .dpp file, include the headers you want, and compile/generate the bindings. All this process feels external to the D language. And although you can make things work, these are little frictions that build up until people start to look for alternatives. (Zig is way more friendly in this area, for example. People switching from C will find that Zig's C interop is very pleasant in comparison with D's.) It's my opinion that if D wants to grab a bigger portion of the "low level" market, it should focus on dpp development + seamless integration with D and/or dub + better betterC support.
Mar 11 2020
next sibling parent reply bachmeier <no spam.net> writes:
On Wednesday, 11 March 2020 at 13:40:08 UTC, SrMordred wrote:
 My point was that, although I love the dpp project, it doesn't 
 feel like part of the D language.
#include "foo.h" is close enough for me.
 It's an external project that newcomers to the language will 
 simply not see, it's not yet completely reliable (but it works 
 in most cases!), and it requires some external work: create a 
 .dpp file, include the headers you want, and compile/generate 
 the bindings.
Honestly, I don't see this as much of an issue, at least in what I've done with dpp. It would be nice if it worked out of the box, but I really don't see that as a big issue these days, more of a one-time minor inconvenience to get it set up.
 All this process feels external to the D language.
IMO, that's a good thing - C interop *is* external to the D language.
 And although you can make things work, these are little 
 frictions that build up until people start to look for 
 alternatives. (Zig is way more friendly in this area, for 
 example. People switching from C will find that Zig's C interop 
 is very pleasant in comparison with D's.)
Zig includes a C compiler. That's convenient, but that's not the only relevant metric, and I'm not sure I see it as an advantage over outsourcing compilation to gcc or clang. I admit that I've only spent a little time with Zig, so maybe my impressions are not fully accurate.
Mar 11 2020
parent SrMordred <patric.dexheimer gmail.com> writes:
On Wednesday, 11 March 2020 at 15:30:41 UTC, bachmeier wrote:
 On Wednesday, 11 March 2020 at 13:40:08 UTC, SrMordred wrote:
 My point was that, although I love the dpp project, it doesn't 
 feel like part of the D language.
#include "foo.h" is close enough for me.
 It's an external project that newcomers to the language will 
 simply not see, it's not yet completely reliable (but it works 
 in most cases!), and it requires some external work: create a 
 .dpp file, include the headers you want, and compile/generate 
 the bindings.
Honestly, I don't see this as much of an issue, at least in what I've done with dpp. It would be nice if it worked out of the box, but I really don't see that as a big issue these days, more of a one-time minor inconvenience to get it set up.
 All this process feels external to the D language.
IMO, that's a good thing - C interop *is* external to the D language.
 And although you can make things work, these are little 
 frictions that build up until people start to look for 
 alternatives. (Zig is way more friendly in this area, for 
 example. People switching from C will find that Zig's C interop 
 is very pleasant in comparison with D's.)
Zig includes a C compiler. That's convenient, but that's not the only relevant metric, and I'm not sure I see it as an advantage over outsourcing compilation to gcc or clang. I admit that I've only spent a little time with Zig, so maybe my impressions are not fully accurate.
Yeah, it's not that it is an "issue". I also use dpp, and most of the time it just works fine. The point is that if you come from the C/C++ world and start to try other languages, small frictions can build up. Let's suppose they try D and Zig.

I think that if you come from C, D + betterC is amazing (it is the reason why I'm using D now), but there is a giant wall called the garbage collector. Most people will dismiss D before even knowing about betterC or ways to avoid the GC. Most people will not even know about dpp. It's not part of D, it's just a project (that's what I meant by it not being part of D). Then they discover dpp, so let's try it: git clone, build, create a .dpp file (again, another thing that's not really D), include the headers I want, run d++ ... to generate the bindings (let's ignore compilation), and include the result in my project. Nice, it works.

Now let's try Zig. Zig is already low-level / no GC, so it looks better before you even start looking.

const c = @cImport({ @cInclude("header.h"); });
zig run main.zig

Works. (Again ignoring compilation of the C parts, although Zig is better at that too.) By comparison, it's hard to convince people who come from low-level programming backgrounds to use D.

Again, I started this discussion responding to the question: "I'm surprised C is still in second place. What can we do to kill it, really?" For me, D + betterC is a C killer. But D's focus was never to be a C killer, although I think it has the potential to be.
Mar 11 2020
prev sibling parent Atila Neves <atila.neves gmail.com> writes:
On Wednesday, 11 March 2020 at 13:40:08 UTC, SrMordred wrote:
 On Tuesday, 10 March 2020 at 18:03:31 UTC, Atila Neves wrote:

 https://github.com/atilaneves/dpp
Hello Atila, I think you missed some part of the discussion where I cited dpp ;)
I most definitely did. Oops!
Mar 11 2020
prev sibling parent reply Claude <claudemr live.fr> writes:
On Friday, 6 March 2020 at 15:27:27 UTC, IGotD- wrote:
 I'm surprised C is still in second place. What can we do to 
 kill it, really?
That does not surprise me, and I don't see C going anywhere in the foreseeable future. It's the common denominator between native languages like C++, D, Rust etc., and it interfaces with assembly language. For system programming, it's the obvious language: easy to grasp and powerful. However, it's obviously not as expressive as, say, C++; that's true. For example, the Vulkan API is written in C, and it's a beautiful API. If it had been written in C++ (or any other native language), it would probably have been more expressive, but it would have lost a lot in terms of efficiency. Another advantage of C is that, as it's a smaller language, it's easier to enforce a coherent coding style for a community project.
Mar 09 2020
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Monday, 9 March 2020 at 10:21:16 UTC, Claude wrote:
 On Friday, 6 March 2020 at 15:27:27 UTC, IGotD- wrote:
 I'm surprised C is still in second place. What can we do to 
 kill it, really?
That does not surprise me, and I don't see C going anywhere in the foreseeable future. It's the common denominator between native languages like C++, D, Rust etc., and it interfaces with assembly language. For system programming, it's the obvious language: easy to grasp and powerful. However, it's obviously not as expressive as, say, C++; that's true. For example, the Vulkan API is written in C, and it's a beautiful API. If it had been written in C++ (or any other native language), it would probably have been more expressive, but it would have lost a lot in terms of efficiency. Another advantage of C is that, as it's a smaller language, it's easier to enforce a coherent coding style for a community project.
DirectX, Metal, NVN, libGNM and libGNMX are doing pretty well for APIs not written in C.
Mar 10 2020
prev sibling parent bauss <jj_1337 live.dk> writes:
On Monday, 9 March 2020 at 10:21:16 UTC, Claude wrote:
 On Friday, 6 March 2020 at 15:27:27 UTC, IGotD- wrote:
 I'm surprised C is still in second place. What can we do to 
 kill it, really?
That does not surprise me, and I don't see C going anywhere in the foreseeable future. It's the common denominator between native languages like C++, D, Rust etc., and it interfaces with assembly language. For system programming, it's the obvious language: easy to grasp and powerful. However, it's obviously not as expressive as, say, C++; that's true. For example, the Vulkan API is written in C, and it's a beautiful API. If it had been written in C++ (or any other native language), it would probably have been more expressive, but it would have lost a lot in terms of efficiency. Another advantage of C is that, as it's a smaller language, it's easier to enforce a coherent coding style for a community project.
A lot of languages also compile to C before they implement compilation with, e.g., LLVM, because it's easier to do, faster to implement, and supports almost all platforms out of the box, making it easy to test a language across platforms before diving into compiling to machine code (because that's not easy to port).
Mar 10 2020