
digitalmars.D - My Long Term Vision for the D programming language

reply Robert Schadek <rburners gmail.com> writes:


D -- The best programming language!

I imagine a DConf where you guys yell at me, not because we 
disagree,
but because I'm old and forgot my hearing aid.

This is what I think needs to be done to get us there.



GitHub has >10^7 accounts, D's bugzilla has what, 10^3?
No matter what feature github is missing, there is no reason not to
migrate to github.
The casual D user, when he finds a bug, will never report it if he has
to create a special account on our bugzilla.

Github has an okay API, I bet we can replicate 99% of the features
that are missing with very little code executed by some bots.

Additionally, we are terrible at longer term planning and management.
In pretty much all software projects, you can find milestones,
epics, roadmaps.
Github has those features, github is where our code lives, so why
doesn't our planning live there as well?

I fully understand that D is a community project and that we can 
not tell the
bulk of the contributors to work on issue X or milestone Y, but 
we could ask
them nicely.
And if we follow our own project planning, they might just follow
along as well.

Currently, I don't know where D is heading.
And if I don't know, how should the average JS developer know?
Not by reading a few hundred forum posts, but by looking at the
project planning tools he/she is used to.

D does need more people; removing the unnecessary barrier to entry
that is our bugzilla should be a no-brainer.

The role of the language/std leadership, as I see it, is keeping on
top of the issues, PR's, milestones, etc., setting priorities, and
motivating people with good libraries on code.dlang to get them into
phobos.
And of course, laying out new directions and goals for the
language and library.
Not short term but long term, e.g. ~5 years.
Only after that work is done comes the developing.
Having more development time left would be the measure of success
for the leadership side.






My desktop computer has 64GB of RAM, my laptop has 16GB, so why is it
that all D compilers work like it's 1975, when lexer, parser, ...
were different programs?
Having played a bit with web languages like svelte and elm, I'm 
disappointed
when going back to D.
An incremental compile, with reggae, for my work project takes 
about seven
seconds.
Elm basically had the unittests running by the time the keyUp 
event reached my
editor.
Svelte was equally fast, but instead of re-running the tests the 
updated
webpage was already loaded.
I know that D and those two languages aim for different platforms,
but I think the premise should be clear.
Why redo work, if I have enough memory to store it all many times
over?
For example, if I have a function

```D
T addTwo(T)(T a, T b) {
	return a + b;
}
```

and a test

```D
unittest {
	auto rslt = addTwo(1, 2);
	assert(rslt == 3);
}
```

and change `a + b` to `a * b`, only the unittest calling it should be
re-compiled and executed.
Additionally, in most modern languages most editors/ides can show me
what the type of `rslt` is, plus many more convenience features.
The compiler at some point knew what the type of `rslt` was, but it
forgets it as soon as the compilation is done.
No editor can benefit from this information that the compiler had.
The worst thing though: when I compile next time and no dependency
leading to `rslt` has changed, the compiler computes it all over
again.
What a waste of time.
Enough talking about how bad the current state is, let's see how 
much greener
the grass could be.

Imagine a compiler daemon that you start once per project/program
that keeps track of all files related to this project.
When a file changes, it lexes and parses the file, and stores the
information it can reuse.
As it has the old parse tree of the previous version of the file, it
should be able to figure out which declarations have changed.
At some point even dmd must know what the dependencies between
declarations in different modules are, or what types template
parameters have been evaluated to.
If that information is stored, building an lsp (language server
protocol) interface that each lsp client can talk to, to get this
information, is the easy part.
When all the dependencies are tracked, the above example of the
minimal unittest re-run should be possible.
With a well-defined dependency graph, effective multi-threading
should be possible as well.
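
A minimal sketch of the kind of per-declaration record such a daemon
could cache; every name here is hypothetical, nothing like this
exists in dmd today:

```D
// Hypothetical cache entry the daemon keeps per declaration.
struct DeclInfo {
	string mangledName;   // identity of the declaration
	ulong sourceHash;     // hash of the tokens making up the declaration
	string[] dependsOn;   // declarations this one uses
	string resolvedType;  // cached semantic result, e.g. the type of `rslt`
}
```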

Why do I have to choose which backend I want to use before I start
the compiler?
I would imagine, if the compiler daemon didn't find any errors with
my code, I should be able to tell it: use llvm to build me an x86
executable.
When I next ask for an executable built with the gcc backend, only
the parts that change because of version blocks should be rebuilt.
There is no reason to re-lex, re-parse, or re-anything any already
opened file.
Even better, when working on the unittests, why create any executable
at all?
Why not create whatever bytecode any reasonable VM requires and pass
it there.
Companies run on lua, why can't my unittests?
There are embedded devices that run a python VM as their execution
environment.
Compiling unittests to machine-code shouldn't be a thing.

WASM needs to be first class citizen.
I want to compile D to javascript, nice looking javascript.

Now for the really big guns.
When the compiler daemon is basically the glue that glues compiler
library functions together, we could create, basically, database
migrations for breaking changes.
As an example, let's say autodecoding should be removed.
We would write a program that would, as one part of it, find all
instances of a foreach over a string `s` and replace that with
`s.byDchar`.
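A sketch of what that one migration would do to user code, assuming
`std.utf.byDchar` is the chosen replacement:

```D
import std.utf : byDchar;

// before the migration: implicit autodecoding
void before(string s) {
	foreach (dchar c; s) { /* decoded iteration */ }
}

// after the migration tool has rewritten the loop: explicit decoding
void after(string s) {
	foreach (dchar c; s.byDchar) { /* decoded iteration */ }
}
```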
For all breaking changes between versions, we supply code migrations.
If we are really eager to please, we write a script that applies
those to all packages on code.dlang.org and creates PR's where
possible on github.
No more sed scripts, no more 2to3.py scripts, proper compiler library
support for code migrations.
Just imagine the productivity gains for your private code bases when
you have to do refactoring.
Refactor your D program by writing a D program against the D
compiler library.
To add one more level of meta, this could be leveraged to do
refactoring on the compiler library source itself.
The member `id` of the class TemplateInstance should be called
`identifier`? No problem, let's write a small migration script.
When phobos' canFind becomes isFindable, just write a small D program
and run it on the compiler codebase.

The documentation/spec of the language leaves things to be desired;
we can spend a huge amount of man power on it, but keeping the spec
correct and up to date is a tedious, thankless task.
And to be frank, we don't have the numbers, just take a look at the
photo of the C++ standard committee meeting, and of the last physical
dconf.
But why work hard when we can work smart?
Why can't we use `__traits(allMembers, ...)` to iterate the AST
classes and generate the grammar from that?
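A minimal sketch of the mechanism, using stand-in types instead of
the real dmd AST classes:

```D
import std.stdio : writeln;

// stand-ins for dmd's AST classes
class AddExp { AddExp lhs, rhs; }
class MulExp { MulExp lhs, rhs; }
struct AstNodes { AddExp add; MulExp mul; }

void main() {
	// __traits(allMembers, T) yields the member names as compile-time
	// strings, which a ddoc/grammar generator could walk.
	static foreach (name; __traits(allMembers, AstNodes))
		writeln("grammar entry for: ", name);
}
```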
You changed the grammar? Fair enough, just re-run the
AST-classes-to-ddoc tool, done.
I know the current AST classes do not correctly reflect the language
grammar, but maybe that is something worth fixing.
Also, there are hundreds of small D files that are used as test cases
for the compiler, why aren't they part of the spec?

Just to state the obvious, this would require the compiler library to
understand dub.(json|sdl) files, but some of that work is already
underway ;-)



We need really good error messages.
After playing around with elm, coming back to D is really hard.
In comparison, D might as well just use the system speaker to send a
beep every time it finds an error.




Batteries included, all of them, even the small flat strange ones.



That means that phobos needs to have support for json-schema, yaml,
ini, toml, sdl, jsonnet.
Given a json-schema file named `schema.json`, we need to be able to
write `mixin(generateJsonSchemaASTandParser(import("schema.json")))`
and get a parser and AST hierarchy based on the `schema.json`.
json-schema is also sometimes used for yaml, that should be supported
as well.
Some of the other formats support similar schema specifications as
well.
Given a hierarchy of classes/structs, phobos also needs a method to
build parsers for those file formats.
Yes, that means serialization should be a part of phobos.
Ideally, we find an abstract DSL, a set of UDAs, that can be reused
for all of the formats, but the more important step is to have them
in phobos.
Perfection being the enemy of the good and all.
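
A hedged sketch of what such a format-agnostic UDA set could look
like; none of these names exist in phobos today, they are
placeholders:

```D
// Hypothetical, format-agnostic serialization UDAs (placeholder names).
struct Name { string value; }  // rename a field in the serialized output
struct Optional { }            // field may be absent in the input

struct Server {
	@Name("host") string hostName;
	ushort port = 8080;
	@Optional string comment;
}

// The same annotated type would then drive parsers/emitters for json,
// yaml, toml, sdl, ... for example (hypothetical API):
//   Server s = fromToml!Server(import("server.toml"));
//   string j = toJson(s);
```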



phobos needs to have support for an event loop.
The compiler daemon library thing needs that, and that thing should
be a heavy user of phobos, dogfooding right?
io_uring seems to be the fast, modern mechanism on linux > 5.2;
obviously Windows and MacOSX need to be supported as well.
But again, if the windows event loop is 5x slower than linux, so be
it.
It is much more important that there is no friction to get started.
The average javascript dev looking for a statically typed language
will likely be blown away by the performance nonetheless.
I'm not saying merge vibe-core, but I'm saying take a really close
look at vibe-core, and grill Sönke for a couple of hours.
At least with io_uring this event loop should scale mostly linearly
in performance with the number of threads, given enough CPU cores.
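
A hypothetical sketch of the surface such a phobos event loop could
expose; none of these names exist today, and io_uring/IOCP/kqueue
would sit behind the interface:

```D
// Hypothetical phobos event loop API (placeholder names only).
interface EventLoop {
	void runTask(void delegate() task);                   // schedule a fiber
	void readAsync(int fd, ubyte[] buf,
			void delegate(size_t bytesRead) done);        // async I/O
	void run();                                           // drive the loop
}
```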



Yes, 1, 2, and 3.



I'm not sure if this is the right place to talk about this, but I 
didn't find
any better place, so here I go.
autowrap ^1 already allows trivial interaction with python and excel.
This kind of language interop should be part of phobos/D.
If a project demands getting some toml output out of a golang call,
then passing it to haskell because there is an algorithm you want to
reuse, D should be the obvious choice.



The error messages in phobos are sometimes not great.
That is not good.
When you come from another language that is not c++ and try to get
started with ranges, good error messages in phobos are important.

One obvious example is how we constrain template functions similar
to this:

```D
auto someFun(R)(R r) if(isInputRange!R) {
	...
}
```

you get stuff like

```
a.d(8): Error: template `a.someFun` cannot deduce function from 
argument types `!()(int)`
a.d(3):        Candidate is: `someFun(R)(R r)`
   with `R = int`
   must satisfy the following constraint:
`       isInputRange!R`
```

This looks helpful, but it is not as good as it could be.
If you don't know what an InputRange is, this does not help you.
You have to go to the documentation.
This could be made a lot easier by a small refactor.

```D
auto someFun(R)(R r) {
	static assert(isInputRange!R, inputRangeErrorFormatter!R);
	...
}
```

The function `inputRangeErrorFormatter` would create a string 
that shows
which of the required features of an InputRange are not fulfilled 
by `R`.
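A minimal sketch of such a formatter (not phobos code, just an
illustration of the idea):

```D
import std.range.primitives; // so arrays get empty/front/popFront via UFCS

// A sketch only: check each InputRange primitive and report the ones
// that `R` is missing.
string inputRangeErrorFormatter(R)() {
	string msg;
	static if (!is(typeof((R r) => r.empty ? 1 : 0)))
		msg ~= R.stringof ~ " has no `empty` usable in a boolean context. ";
	static if (!is(typeof((R r) => r.front)))
		msg ~= R.stringof ~ " has no `front` property. ";
	static if (!is(typeof((R r) { r.popFront(); })))
		msg ~= R.stringof ~ " has no callable `popFront()`. ";
	return msg;
}
```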
Especially when overload resolution is done via template constraints,
the error messages get difficult to understand fast.
Just look at:

```D
a.d(3):        Candidates are: `someFun(R)(R r)`
   with `R = int`
   must satisfy the following constraint:
`       isInputRange!R`
a.d(7):                        `someFun(R)(R r)`
   with `R = int`
   must satisfy the following constraint:
`       isRandomAccessRange!R`
```

This can be fixed quite easily as well:

```D
private auto someFunIR(R)(R r) { ... }

private auto someFunRAR(R)(R r) { ...  }

auto someFun(R)(R r) {
	// test the more specific range kind first
	static if(isRandomAccessRange!R) {
		return someFunRAR(r);
	} else static if(isInputRange!R) {
		return someFunIR(r);
	} else {
		static assert(false, "R should either be an "
				~ "InputRange, but " ~ inputRangeErrorFormatter!R
				~ "\n or R should be a RandomAccessRange, but "
				~ randomAccessRangeErrorFormatter!R
				~ "\n therefore you can not call " ~ __FUNCTION__);
	}
}
```



This section needs to be read together with the section about
*shared* in *The Language* part of this text.
When we have an event loop that also works with threads,
communication has to happen somehow.
Mutexes do not scale, because using them correctly is just too hard.
As an exercise, name the three necessary requirements for a deadlock.
Wrong, there are four.

* Mutual exclusion
* Hold and wait
* No preemption
* Circular wait

phobos must have message passing that works with threads and the
event-loop.
Two kinds of mail-boxes are to be supported, 1-to-1 and 1-to-N, where
N is a defined number of receivers, such that the next sender is
blocked until all N have read.
Both types support multiple senders, and predefined mailbox queue
sizes.
Making this @safe, and not just @trusted, will likely require some
copying.
That is fine; when copying is eating your multi-threading gains,
multi-threading was not the solution to your problem, IMO.
Message passing and the SumType are likely a nice way to emulate the
Ada rendezvous concept.
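
For reference, this is roughly what 1-to-1 message passing between
threads looks like with today's std.concurrency; the proposal above
adds event-loop integration, 1-to-N mailboxes, and bounded queues on
top of this:

```D
import std.concurrency : spawn, send, receiveOnly, thisTid, Tid;

void worker(Tid owner) {
	auto job = receiveOnly!int(); // block until a message arrives
	owner.send(job * 2);          // reply to the spawning thread
}

void main() {
	Tid t = spawn(&worker, thisTid);
	t.send(21);
	assert(receiveOnly!int() == 42);
}
```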



Get your tomatoes and eggs ready.



The GC is here to stay; you don't do manual memory management (MMM)
in a compiler daemon that tracks dependencies.
I don't care how smart you are, you are not that smart.
D is not going to run the ECU of the next Boeing airplane, rust will
succeed C there.
Rust will succeed C and C++ everywhere, but who cares, JS runs the
rest.
How many OS kernels have you written, and how many data
transformations have you written?
So either fight a war that is over and lost, in a niche field anyway,
or actually have some wins and run the world.

Mixing MMM, RC, and GC is also too complicated IMO.
The whole lifetime tracking requirements make my head spin.
That being said, I think there is a place to reuse the gained
knowledge.
In my day job I have a lot of code that results in a call to
std.array.array allocating an array of some T which by the end of the
function gets transformed into something else that is then returned.
The array never leaves the scope of the function.
Given lifetime analysis, the compiler could insert GC.free calls.
Think automatic `T t` to `scope T t` transformation.
At least for the code I have been writing for the last two years,
this should release quite a bit of memory back to the GC, without the
GC ever having to mark and sweep.
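The pattern written out by hand, as a sketch; the proposal is that
lifetime analysis would let the compiler insert the `GC.free` itself:

```D
import std.array : array;
import std.algorithm : map, sum;
import core.memory : GC;

int doubledSum(int[] input) {
	auto tmp = input.map!(x => x * 2).array; // GC allocation, never escapes
	scope (exit) GC.free(tmp.ptr);           // what the compiler could insert
	return tmp.sum;
}
```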

We want the JS developer; if we have to teach them to use MMM and/or
RC, we might as well not try.
I don't even want to think about memory, I want to get some work
done.
I don't want to get more work by having to think about memory.
I want to get my project running and iterate on that.

To summarize, GC and GC only.



As said in the phobos section about synchronisation, this is an
important building block.
As shared is basically broken, maybe painting a holistic picture of
where we want D's multi-threading/fiber programming to go is better
than taking a look at shared on its own.
For me, this would mean sharing data between threads and/or fibers
should be as easy and error free as letting the GC handle memory.
That means race conditions need to be very difficult to produce, the
same as deadlocks.
This, to me, implies message passing or Ada rendezvous, and not
taking locks to work on shared data.



betterC is, at best, a waste by-product. If we have to use betterC to
write something for WASM, or anything significant, we might as well
start learning rust right now.



Having been saved by it a couple of times, and using a non-US
keyboard every day, I still think it is not a terrible idea, but I
think this battle is lost and I'm already full of tomatoes by this
point.
Meaning, autodecoding will have to go.
At the same time we have to update std.uni and std.utf.
The majority of developers and users of software speak languages 
that do not
fit into ASCII.
When a project requires text processing, your first thought must 
be D, not
perl.
std.uni and std.utf have to be a superset of the std.uni and 
std.utf of the top
20 languages.



Let's keep it simple, and consistent.
You add parentheses to call a function.
You can not call a property function with parentheses.
You can not take the address of a property function.
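
A small sketch of what these rules would mean in code; this is the
proposed behaviour, not necessarily what dmd enforces today:

```D
struct S {
	int fun() { return 1; }
	@property int prop() { return 2; }
}

void main() {
	S s;
	auto a = s.fun();     // ordinary function: called with parentheses
	auto b = s.prop;      // property: no parentheses
	// auto c = s.prop(); // error under the proposed rules
	// auto p = &s.prop;  // error: cannot take the address of a property
}
```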



Consistency is king:

 @safe -> safe
 @trusted -> trusted
 @system -> system
 @nogc -> nogc


Long story short, language attributes do not start with a @, user
defined attributes (UDAs) do.



I had this in the phobos section at the start of writing this.
String interpolation is not what you want; I know it is what you want
right now, because you think it fixes your problem, but it does not.
String interpolation is like shoe laces: you want them, but you are
walking on lava, so open shoe laces are not actually your problem.
For work, I have D that generates about 10k lines of typescript, and
the places where string interpolation would have helped were trivial
to do in std.format.
IMO, the better solution would be something like vibe's diet,
mustache, or handlebars, but without requiring a build step like diet
does.
Whitespace control and Nullable are a big part of this too.
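
For the cases mentioned above, a plain std.format call covers the
need; a minimal sketch of the kind of typescript-emitting helper
meant here (the helper itself is made up for illustration):

```D
import std.format : format;

// Emit a typescript field declaration from a name and a type.
string tsField(string name, string type) {
	return format!"%s: %s;"(name, type);
}

unittest {
	assert(tsField("id", "number") == "id: number;");
}
```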



ImportC must have a preprocessor, or it is DOA.
Shelling out to gcc or clang to preprocess makes the build system
horrible, which in turn will make the compiler library daemon thing
difficult to build.
This is also important for the language interop, as I imagine that
most interop will go through a layer of C.
When ImportC can use openssl 1.0.2s or so, it is good enough.
Having used openssl recently, my eyes can not un-see the terribleness
that is the openssl usage of C.



This was already partially discussed in the long term goals, but
needs better documentation, or better yet a spec.
The cool thing is, the spec doesn't need to be an ISO spec, aka a
pdf.
It could very well be a long .d file with lots of comments and
unittests.
Frankly, I think that would be much more useful anyway.
Instead of giving a few select/unmaintained examples of a language
feature, show the tests the compiler runs.
Actually, having looked at some of the tests to figure out how stuff
should behave, I would imagine other people would benefit as well.
When the compiler fails to execute the spec, either the spec is wrong
or the compiler has a bug.
Two birds with one stone, right? right!
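
A tiny example of what such a spec-as-executable-documentation entry
could look like, for one small rule of the language:

```D
/// Slicing does not copy: a slice references the same memory as the
/// array it was taken from.
unittest {
	int[] a = [1, 2, 3, 4];
	int[] b = a[1 .. 3];
	b[0] = 42;
	assert(a[1] == 42);
}
```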



Obviously, D needs to run on those platforms.
Both platforms have api's; using them must be as easy as `dub add
android 12.0.1`.
The gtkd people basically wrote a small program to create a D 
interface to gtk
from the gtk documentation.
I bet a round of drinks at the next physical dconf that this is 
possible for
android and ios as well.
The dart language people shall come to fear our binding generation
capabilities.




D3 will never happen; it sounds too much like what we got when we
moved from D1 to D2.
The D2 version number 2.098.X does not make sense.
D 2.099 plus std v2 would also be terrible.
By the time I have explained to somebody new why D is at version
2.099, with phobos having parts in version v2 in addition to
std.experimental, which was pretty much DOA, the person has
installed, compiled, and run "hello world" in rust.
I talked to Andrei about this, as it seemed that we were firmly set
in our corners of the argument.
Andrei mentioned the C++ approach, which has been really successful.
Good ideas are there to steal, so let's do what C++ does.

Let's call the next D 23, the one after that maybe D 25.
Backwards compatibility is not a given.
But we ship the latest version of, let's say, three D versions with
the current release.
D X is implemented in D X-1.
This would mean that the three old D versions would still need to be
able to create working binaries ~10 years down the road.
I would say, the older versions should only get patches for issues
that stop them from doing so.
If they ship with a bug, and we have moved on to a new D version,
this bug will exist forever in that D version.




I'm writing this section as one of the last.
This is maybe one of the most important parts, but also the hardest
to validate.
When reading the forum, or the github PR's, I get the feeling that
people think that D is a consensus-driven meritocracy.
That is not the case, and that is okay.
The impression of it is very dangerous though, as it sets people up
to be continuously disappointed.
Just look for all the posts where people complain that Walter does
not change his mind.
To me these posts show this disconnect: people expect Walter to
change his mind because, at least in their mind, their idea is better
than what Walter thinks.
But he doesn't have to agree, because he is the *benevolent dictator
for life*.
Who is right or wrong is irrelevant, the impression of the level of
influence is not.
Being a bit dramatic: giving people false hope, which then gets
disappointed, will drive them away from D.
A simple solution, IMO, is to take a clear stance on issues.
Direct, simple language.
A leadership person saying yes xor no to thing X.
When new information comes up that warrants a reversal of such a
statement, leadership would lay out how the decision (yes|no) on X
was changed by the new information Y.

I see the DIP process as troublesome, as it gives the impression of
having a say in what D will become.
Maybe renaming *D Improvement Proposals* into
*D Improvement Suggestions* would be an option, while simultaneously
increasing the amount of work that should go into writing a *DIS*.
I find that especially the given *Rationales* of most existing DIPs
are way too short to weigh the pros and cons of an improvement.
Just have a look at the quality of the C++ proposals.
The DIS' should aim for that.
Or at least have a matrix of how the improvement interacts with each
of the D features, and an analysis of how it actually makes D better
in real world terms (code.dlang.org).
This would be another nice usage for the compiler library daemon
thing.
Always asking: just because we could, should we?

But I believe the formal steps of the DIP process can be avoided if
the direction the language should develop in is clearly marked by
leadership.
There is no need to discuss the shared atomics DIP if leadership
dictates that message passing is the selected mechanism for thread
communication and only that.
Sure, you can still argue for shared atomics, but you have no reason
to be disappointed when nobody takes you seriously, as you already
knew where the journey is going.




This year (2021), move from bugzilla to github.
A nice Christmas present to show that we mean business.

D 23:

* remove auto-decoding
* safe by default
* attribute consistency
* ImportC preprocessor
* remove std.experimental

D 25:

* All but the compiler daemon library thing

D 27:

* Compiler daemon thing.

The work on the compiler daemon thing will have to start before 2025.




I'm serious about the motto at the top.
When people start complaining that their language is better, it's
free marketing for D.




If D continues the way it does, it will soon be irrelevant.
And I don't want that, I want to be yelled at at DConf 2071.

D's powerful templates, ctfe, and ranges made heads turn, but the
other languages have caught up.
Let us really innovate, so that D not only becomes the Voldemort 
language for
C++, but for all other languages as well, because D is the best 
language.


^1 https://code.dlang.org/packages/autowrap
Nov 16 2021
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:
 D 27:

 * Compiler daemon thing.
I am already working on that. Preliminary results show possible 25x speedup for larger codebases. Of course if almost everything is grabbed from the cache it's an even higher speedup. My experimental version can take advantage of multi-core CPUs with around 70% resource utilization.
Nov 16 2021
prev sibling next sibling parent reply Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:


 D -- The best programming language!
Very well written! I agree with everything, except:

- I don't think Rust has won
- I think betterC might still have a place (or rather, some kind of runtime-less D)
- I think string interpolation could be useful

Other than that, gold star from me! 🌟
Nov 16 2021
next sibling parent reply Greg Strong <mageofmaple protonmail.com> writes:
On Tuesday, 16 November 2021 at 21:59:19 UTC, Imperatorn wrote:
 Other than that, gold star from me! 🌟
Wow, your reaction is pretty much the opposite as mine. There is some good stuff in there, but it is mostly a rant about stuff that is either (a) desire for library stuff that does not _need_ to be in phobos, and probably shouldn't be, (b) pie-in-the-sky stuff that isn't our primary problem, (c) random mishmash of disconnected thoughts. Random example:
WASM needs to be first class citizen.
I want to compile D to javascript, nice looking javascript.
WASM is important. YES. Next sentence ... compile to javascript? Why on Earth would you want to do that if you have WASM? Does the author think these things are connected? I do appreciate the effort that went into the post and would like to see the community focus better. And, yes, moving to github is prudent. But it's pretty much downhill from there IMHO.
Nov 16 2021
next sibling parent reply Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Tuesday, 16 November 2021 at 22:36:18 UTC, Greg Strong wrote:
 On Tuesday, 16 November 2021 at 21:59:19 UTC, Imperatorn wrote:
 [...]
Wow, your reaction is pretty much the opposite as mine. There is some good stuff in there, but it is mostly a rant about stuff that is either (a) desire for library stuff that does not _need_ to be in phobos, and probably shouldn't be, (b) pie-in-the-sky stuff that isn't our primary problem, (c) random mishmash of disconnected thoughts. Random example: [...]
The gold star was for effort 😎
Nov 16 2021
parent Greg Strong <mageofmaple protonmail.com> writes:
On Tuesday, 16 November 2021 at 23:28:23 UTC, Imperatorn wrote:
 The gold star was for effort 😎
Yes, on that we agree :)
Nov 16 2021
prev sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Tuesday, 16 November 2021 at 22:36:18 UTC, Greg Strong wrote:
 WASM is important.
y'all should watch my dconf thing from last year and the new one this saturday too. we already have some wasm. though there's a few points that are waiting on the wasm spec to mature and some things that might need compiler help if you aren't willing to work with some special purpose code. but like the magic i show is that special purpose code doesn't necessarily have to be much different.
Nov 16 2021
prev sibling next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Tuesday, 16 November 2021 at 21:59:19 UTC, Imperatorn wrote:
 On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
 wrote:


 D -- The best programming language!
Very well written! I agree with everything, except: - I don't think Rust has won - I think betterC might still have a place (or rather, some kind of runtime-less D) - I think string interpolation could be useful Other than that, gold star from me! 🌟
It has won, time to accept it:

https://source.android.com/setup/build/rust/building-rust-modules/overview
https://docs.microsoft.com/en-us/windows/dev-environment/rust/rust-for-windows
https://www.phoronix.com/scan.php?page=news_item&px=Rust-Linux-Kernel-Linaro-2021
Nov 16 2021
parent reply Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Wednesday, 17 November 2021 at 07:04:54 UTC, Paulo Pinto wrote:
 On Tuesday, 16 November 2021 at 21:59:19 UTC, Imperatorn wrote:
 On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
 wrote:
It has won, time to accept it,
Sorry, to clarify, I meant in the embedded space / functional safety. I have not seen any Rust anywhere in safety-critical applications yet. (Not D either of course.) There is no certified compiler for Rust (yet), or toolchain, or acknowledged coding standard. I guess something similar to (a proper) MISRA-C will come for Rust.

Reading through the coding standards ISO, only very recently (10 years ago) has even C++ been mentioned as something that *might* be ok to use. It's a very conservative space. I have no doubt that in about 10 years or so, Rust could be used (maybe?) in these applications, but it all depends on the system at hand and how you build it. Like for example what a safe state is, what level you have on certain parts etc etc.

For example, you could in theory even use QBASIC to control some critical part of a system if there are no requirements on, for example (I don't know the English term), SIL "monitored movements" and you only have requirements that the stop function has a certain level. It all depends on the system and requirements. For example, our company has a product from 1986 which is still in use today because it took us about 7-8 years to get all the documentation and testing in place (that one uses assembly though).

It's not only software requirements; there are RED, LVD, EMC, EMI etc etc, dual architecture, monitoring of outputs, watchdog requirements (ASIL D), latency requirements, active vs passive stop, data integrity requirements (think CRC), bit flip requirements etc (yes, during the validation and verification process we introduce random bit flips to simulate an external memory corruption event, such as cosmic background radiation).

It is a very conservative space. In some aspects it might seem dumb (like, why would a language with higher guarantees be worse?), but I guess it comes from a sense that you want to be sure all parts work as expected, and it's partly driven by fear/being cautious.

Gotta work now, but just a quick summary:

https://www.iar.com/products/requirements/functional-safety/iar-embedded-workbench-for-arm-functional-safety/
https://www.highintegritysystems.com/
https://www.ghs.com/products/industrial_safety.html
Nov 17 2021
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Wednesday, 17 November 2021 at 14:18:04 UTC, Imperatorn wrote:
 On Wednesday, 17 November 2021 at 07:04:54 UTC, Paulo Pinto 
 wrote:
 On Tuesday, 16 November 2021 at 21:59:19 UTC, Imperatorn wrote:
 On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
 wrote:
It has won, time to accept it,
Sorry, to clarify I meant in the embedded space / functional safety. I have not seen any Rust anywhere in safety-critical appliations yet. (Not D either of course)
Get in touch with https://ferrous-systems.com/ and you will see your applications, for example how they are collaborating with Green Hills Software, https://www.youtube.com/watch?v=G5A7rSPYpb8
Nov 17 2021
parent Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Wednesday, 17 November 2021 at 14:23:57 UTC, Paulo Pinto wrote:
 On Wednesday, 17 November 2021 at 14:18:04 UTC, Imperatorn 
 wrote:
 On Wednesday, 17 November 2021 at 07:04:54 UTC, Paulo Pinto 
 wrote:
 On Tuesday, 16 November 2021 at 21:59:19 UTC, Imperatorn 
 wrote:
 On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
 wrote:
It has won, time to accept it,
Sorry, to clarify I meant in the embedded space / functional safety. I have not seen any Rust anywhere in safety-critical appliations yet. (Not D either of course)
Get in touch with https://ferrous-systems.com/ and you will see your applications, for example how they are collaborating with Green Hills Software, https://www.youtube.com/watch?v=G5A7rSPYpb8
Yeah, things are slowly changing
Nov 17 2021
prev sibling parent reply Robert Schadek <rburners gmail.com> writes:
On Tuesday, 16 November 2021 at 21:59:19 UTC, Imperatorn wrote:
 - I don't think Rust has won
Please read the joke as well: You are in the first stage of grief: Denial
 - I think betterC might still have a place (or rather, some 
 kind of runtime-less D)
 - I think string interpolation could be useful
The way I see it is that if you have betterC aka. no runtime aka. no GC you can not work properly with arrays which makes programming really unpleasant in D. Also you can not have string interpolation without dynamic memory.
Nov 16 2021
next sibling parent reply Elronnd <elronnd elronnd.net> writes:
On Wednesday, 17 November 2021 at 07:25:46 UTC, Robert Schadek 
wrote:
 Also you can not have string interpolation without dynamic 
 memory.
Both of the string interpolation proposals were specifically designed to permit this.
Nov 16 2021
parent reply SealabJaster <sealabjaster gmail.com> writes:
On Wednesday, 17 November 2021 at 07:37:24 UTC, Elronnd wrote:
 On Wednesday, 17 November 2021 at 07:25:46 UTC, Robert Schadek 
 wrote:
 Also you can not have string interpolation without dynamic 
 memory.
Both of the string interpolation proposals were specifically designed to permit this.
And I believe this ties into his "Let's not aim for perfect" angle, which I agree with.

If anything, this thread simply shows yet again the extreme divide in D's userbase.

"Get rid of GC"
"Make GC optional"
"Fully embrace GC"

"Go after the C devs"
"Go after the higher level devs"

"I want to be able to write kernels in D"
"And **I** want to be able to do embedded stuff in D!"
"Well, **I** want to make games in D"
"Humph, **I** want to make Native applications in D"
"Jokes on you, I want to make a web server in D"
"pfft, I just want to make quick scripts and tools!"

[All the above are on differing levels of requirements regarding high-level and low-level features]

"Don't you dare touch Phobos with your GC trash"
"Pretty please actually put stuff into Phobos"

"Don't you dare add features to this language, just write a library"
"Pretty please add native sumtypes so it's actually possible to debug when things don't compile"

"Add string interpolation"
"but it has to also work with printf because Walter says so"
"also not like that since it needs to work in -betterC @nogc nothrow pure const shared"
"but it also needs to be easy to use because people from other languages expect things to just work"
"but that means we can't use @nogc which is a hard requirement for the GC-phobics"
"but but but"
"fuck it let's just scrap it so everyone loses out, just write `mixin(interp!"")` instead, nerds"

D - the language of endless bickering and lack of cohesive action.

Still absolutely love the language though, but we really need to get ourselves together at some point, because we're stuck in an endless loop of trying to be everything yet nothing.
Nov 16 2021
next sibling parent reply zjh <fqbqrr 163.com> writes:
On Wednesday, 17 November 2021 at 07:54:26 UTC, SealabJaster 
wrote:
 Still absolutely love the language though, but we really need 
 to get ourselves together at some point, because we're stuck in 
 an endless loop of trying to be everything yet nothing.
I've been saying that D should provide options. I'll give you everything. You can assemble it yourself. Like VIM, you map your keys yourself.
Nov 17 2021
parent zjh <fqbqrr 163.com> writes:
On Wednesday, 17 November 2021 at 08:16:10 UTC, zjh wrote:
 On Wednesday, 17 November 2021 at 07:54:26 UTC, SealabJaster
Therefore, it is necessary to provide capabilities similar to `betterC + STD` to compete with others. Otherwise, more and more people will run to `C/C++/Rust`.
Nov 17 2021
prev sibling next sibling parent reply Robert Schadek <rburners gmail.com> writes:
On Wednesday, 17 November 2021 at 07:54:26 UTC, SealabJaster 
wrote:
 D - the language of endless bickering and lack of cohesive 
 action.

 Still absolutely love the language though, but we really need 
 to get ourselves together at some point, because we're stuck in 
 an endless loop of trying to be everything yet nothing.
Sounds to me like my argument for leadership.
Nov 17 2021
parent SealabJaster <sealabjaster gmail.com> writes:
On Wednesday, 17 November 2021 at 08:21:09 UTC, Robert Schadek 
wrote:
 Sounds to me like my argument for leadership.
I just checked out the dconf page again, and it seems the overview for Atila's talk has been added now. https://dconf.org/2021/online/index.html Seems to be in the right direction, although I'm wary trying to avoid breaking changes may be to our detriment in the long run (ironically).
Nov 17 2021
prev sibling next sibling parent reply WebFreak001 <d.forum webfreak.org> writes:
On Wednesday, 17 November 2021 at 07:54:26 UTC, SealabJaster 
wrote:
 On Wednesday, 17 November 2021 at 07:37:24 UTC, Elronnd wrote:
 On Wednesday, 17 November 2021 at 07:25:46 UTC, Robert Schadek 
 wrote:
 Also you can not have string interpolation without dynamic 
 memory.
Both of the string interpolation proposals were specifically designed to permit this.
[...]
this sums up the community really well. I would love if we would go forward with the plan proposed by Robert here really, though keeping the options to disable runtime and stuff like betterC.

Phobos should really be for the average D programmer, and if that can be -betterC @nogc nothrow pure const shared, great! But that shouldn't be the design goal IMO. When I use phobos, I want easy to use, flexible, readable code that doesn't make me do the wrong things - I want to save time using it.

When I need -betterC @nogc code to make a kernel or program a micro-controller there are plenty of good utility libraries on DUB that nobody seems to be using - we should make that group of users especially use and publish more DUB libraries. I think -betterC @nogc code is for rare use-cases (compared to 90% of other code you will write) - they exist and certainly are important, especially for building a good base for some of your other D code, but they are not what I want to use for all code.

Unstructured rambling section:

When I started using D I was especially drawn in by the great easy-to-use, flexible, batteries-included stdlib + package manager bundled with the language installation for all my other needs. Not a lot has been added to the stdlib since then, actually useful stuff like std.xml has been/will be removed due to it being not up to phobos' standards. Having too many dependencies quickly introduces issues, especially when the dependencies have more dependencies, worse even if they have different versions.

D really shined in making web servers with vibe.d - it was easy to transition from express.js. Vibe.d is still my go-to library for all my web services, but not a lot is really moving forward in this space, would love to push more here, especially with stuff like HTTP 2/3 that was promised with vibe-http, that halted for some reason though and I don't know how to improve on that really other than making my own vibe-http from scratch.

I'm also a big fan of GtkD, would love to see proper GTK 4 support going in though - I'm feeling unsafe using the gtk4 branch right now with the d_adw library on DUB that doesn't even have a README! Here too I think gtk4 is pretty usable, I don't know what's holding it up. Sure there are gonna be issues, but you can always fix those with later updates.
Nov 17 2021
parent Tejas <notrealemail gmail.com> writes:
On Wednesday, 17 November 2021 at 09:54:31 UTC, WebFreak001 wrote:
 On Wednesday, 17 November 2021 at 07:54:26 UTC, SealabJaster 
 wrote:
 [...]
this sums up the community really well. I would love if we would go forward with the plan proposed by Robert here really, though keeping the options to disable runtime and stuff like betterC. [...]
I believe the GtkD author said that he disagrees philosophically over the direction Gtk 4 has taken so he won't be supporting it... dunno if he later changed his mind though.
Nov 17 2021
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Nov 17, 2021 at 07:54:26AM +0000, SealabJaster via Digitalmars-d wrote:
[...]
 And I believe this ties into his "Let's not aim for perfect" angle,
 which I agree with.
 
 If anything, this thread simply shows yet again the extreme divide in
 D's userbase.
[...]
 D - the language of endless bickering and lack of cohesive action.
 
 Still absolutely love the language though, but we really need to get
 ourselves together at some point, because we're stuck in an endless
 loop of trying to be everything yet nothing.
Absolutely. This is D suffering from its age-old problem of letting the perfect be the enemy of the good. We want perfection, but in the process we pushed away the good that could have helped move things along. This is neatly summed up in Andrei's classic post on Great Work vs. Good Work: https://forum.dlang.org/post/q7u6g1$94p$1 digitalmars.com

I don't necessarily disagree with his stance (in fact I largely agree with it in principle), but the result of this kind of attitude is that when Great Work is nowhere in sight (perhaps, just perhaps, because a problem is actually tough? -- and no one is smart enough to come up with a revolutionary solution?), then all progress grinds to a halt.

T -- You only live once.
Nov 17 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 17 November 2021 at 17:48:05 UTC, H. S. Teoh wrote:
 I don't necessarily disagree with his stance (in fact I largely 
 agree with it in principle), but the result of this kind of 
 attitude is that when Great Work is nowhere in sight (perhaps, 
 just perhaps, because a problem is actually tough? -- and no 
 one is smart enough to come up with a revolutionary solution?), 
 then all progress grinds to a halt.
I am getting Winnie the Pooh vibes from this. The key to finding a solution is understanding the problem and the context. If you don't, you won't find a solution, you will just create more problems. Has nothing to do with "Good Work". It is a sign of "Poor Work". Don't mix those two terms! If understanding the problem is difficult, reduce the problem, reduce the scope of what you try to achieve. Learn from others. So what can we learn from other system level programming languages? No GC! Ok, remove the GC. Now the scope has been reduced and we can more easily find an acceptable solution for a system level programming language. That is basically a consequence of your position, but obviously not what you meant…
Nov 17 2021
parent reply Dukc <ajieskola gmail.com> writes:
On Wednesday, 17 November 2021 at 18:57:23 UTC, Ola Fosheim 
Grøstad wrote:
 On Wednesday, 17 November 2021 at 17:48:05 UTC, H. S. Teoh 
 wrote:
 I don't necessarily disagree with his stance (in fact I 
 largely agree with it in principle), but the result of this 
 kind of attitude is that when Great Work is nowhere in sight 
 (perhaps, just perhaps, because a problem is actually tough? 
 -- and no one is smart enough to come up with a revolutionary 
 solution?), then all progress grinds to a halt.
I am getting Winnie the Pooh vibes from this. The key to finding a solution is understanding the problem and the context. If you don't, you won't find a solution, you will just create more problems. Has nothing to do with "Good Work". It is a sign of "Poor Work". Don't mix those two terms!
Andrei called it "good work" because he meant stuff that is bad enough to draw a lot of effort to review, but not so bad that it could be just dismissed without appearing rude. "Bad" or "poor" would be mean the "obviousy not worth it" work.
 If understanding the problem is difficult, reduce the problem, 
 reduce the scope of what you try to achieve. Learn from others. 
 So what can we learn from other system level programming 
 languages? No GC! Ok, remove the GC. Now the scope has been 
 reduced and we can more easily find an acceptable solution for 
 a system level programming language.

 That is basically a consequence of your position, but obviously 
 not what you meant…
Being both GC and NoGC is kind of our unique selling point. There would have to be a very strong case before it'd be wise to discard one or the other from the language.
Nov 17 2021
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 17 November 2021 at 19:39:04 UTC, Dukc wrote:
 Andrei called it "good work" because he meant stuff that is bad 
 enough to draw a lot of effort to review, but not so bad that 
 it could be just dismissed without appearing rude. "Bad" or 
 "poor" would be mean the "obviousy not worth it" work.
Yes, he wrote a long essay in order to diffuse the issue of being rude. Clearly a compiler, runtime and standard lib should only accept *excellent* code, as everybody else builds on top of it. It is better to reduce the scope of the language/library if that cannot be achieved. The bar for acceptance should not be high, it should be very high.
 Being both GC and NoGC is kind of our unique selling point. 
 There would have to be a very strong case before it'd be wise 
 to discard one or the other from the language.
Yes, but that means that we have to solve a very difficult problem. I think local GC + global RC can be an interesting solution. So it is good that they look at making RC easy to implement. We have to work to find an excellent solution! Mediocre or "you are on your own" is not good enough in system level programming anymore.
Nov 17 2021
prev sibling parent reply Per =?UTF-8?B?Tm9yZGzDtnc=?= <per.nordlow gmail.com> writes:
On Wednesday, 17 November 2021 at 07:25:46 UTC, Robert Schadek 
wrote:
 Also you can not have string interpolation without dynamic 
 memory.
With dynamic memory I presume you mean GC-allocated memory. Dynamic memory allocation is a broader term and covers cases such as

```d
@safe pure unittest {
    scope a = [1, 2];
}
```

which doesn't use the GC, thanks to `a` being `scope`. However, note that both

```d
@safe pure @nogc unittest {
    scope a = [1, 2];
}
```

and

```d
@safe pure @nogc unittest {
    string x, y;
    scope a = x ~ y;
}
```

currently fail, but I don't think they should, because their end-result `a` has a life-time limited to the unittest block, and therefore has deterministic destruction without needing the GC, and should therefore be allowed in betterC.
Nov 24 2021
next sibling parent reply Elronnd <elronnd elronnd.net> writes:
On Wednesday, 24 November 2021 at 14:13:35 UTC, Per Nordlöw wrote:
 However, note that both
 *snip*
 currently fail but I don't think they should because their 
 end-result `a` has life-time limited to the unittest block. And 
 therefore having deterministic destruction and without the need 
 for the GC and should therefore be allowed in betterC.
The latter examaple requires potentially unbounded space, so it cannot be stack-allocated.
Nov 24 2021
parent reply max haughton <maxhaton gmail.com> writes:
On Wednesday, 24 November 2021 at 17:56:57 UTC, Elronnd wrote:
 On Wednesday, 24 November 2021 at 14:13:35 UTC, Per Nordlöw 
 wrote:
 However, note that both
 *snip*
 currently fail but I don't think they should because their 
 end-result `a` has life-time limited to the unittest block. 
 And therefore having deterministic destruction and without the 
 need for athe GC and should therefore be allowed in betterC.
The latter examaple requires potentially unbounded space, so it cannot be stack-allocated.
Ignoring that the example probably could be stack allocated (if you stick a branch in there) in practice, the key thing with this scope transformation isn't stack allocation but rather moving the allocation anywhere other than the GC.
Nov 24 2021
parent reply Elronnd <elronnd elronnd.net> writes:
On Thursday, 25 November 2021 at 02:32:41 UTC, max haughton wrote:
 the key thing with this scope transformation isn't stack 
 allocation but rather moving the allocation anywhere other than 
 the GC.
I am not quite sure what you are saying. Are you saying that:

1. The problem is moving the allocation somewhere other than the gc, including potentially to the stack, or to an alternate heap, or to somewhere else; or,
2. Converting allocations from gc to the stack is fine; but moving them anywhere else is problematic

In either case, this compiles currently and produces GC-free code:

@nogc:
void f(scope int[] x);
void g(int x, int y) {
    f([x, y]);
}

I'm not sure where you would want to move scope allocations to aside from the stack, as scope guarantees LIFO, and so is the perfect fit for a stack. Perhaps an alternate stack (cf 'brk') managed in druntime, to avoid stack overflow? That would be good, but is hardly a challenging transformation if you can already produce regular stack allocation.
Nov 25 2021
next sibling parent max haughton <maxhaton gmail.com> writes:
On Thursday, 25 November 2021 at 09:29:49 UTC, Elronnd wrote:
 On Thursday, 25 November 2021 at 02:32:41 UTC, max haughton 
 wrote:
 the key thing with this scope transformation isn't stack 
 allocation but rather moving the allocation anywhere other 
 than the GC.
I am not quite sure what you are saying. Are you saying that: 1. The problem is moving the allocation somewhere other than the gc, including potentially to the stack, or to an alternate heap, or to somewhere else; or, 2. Converting allocations from gc to the stack is fine; but moving them anywhere else is problematic In either case, this compiles currently and produces GC-free code: nogc: void f(scope int[] x); void g(int x, int y) { f([x, y]); } I'm not sure where you would want to move scope allocations to aside from the stack, as scope guarantees LIFO, and so is the perfect fit for a stack. Perhaps an alternate stack (cf 'brk') managed in druntime, to avoid stack overflow? That would be good, but is hardly a challenging transformation if you can already produce regular stack allocation.
Just use malloc and free. If you want to be clever stick a branch in there or forward to a tuned allocator. The fact that the stack is LIFO makes absolutely no difference compared to the cost of calling into the GC and even worse potentially causing a collection.
Nov 25 2021
prev sibling parent reply max haughton <maxhaton gmail.com> writes:
On Thursday, 25 November 2021 at 09:29:49 UTC, Elronnd wrote:
 On Thursday, 25 November 2021 at 02:32:41 UTC, max haughton 
 wrote:
 the key thing with this scope transformation isn't stack 
 allocation but rather moving the allocation anywhere other 
 than the GC.
I am not quite sure what you are saying. Are you saying that: 1. The problem is moving the allocation somewhere other than the gc, including potentially to the stack, or to an alternate heap, or to somewhere else; or, 2. Converting allocations from gc to the stack is fine; but moving them anywhere else is problematic In either case, this compiles currently and produces GC-free code: nogc: void f(scope int[] x); void g(int x, int y) { f([x, y]); } I'm not sure where you would want to move scope allocations to aside from the stack, as scope guarantees LIFO, and so is the perfect fit for a stack. Perhaps an alternate stack (cf 'brk') managed in druntime, to avoid stack overflow? That would be good, but is hardly a challenging transformation if you can already produce regular stack allocation.
Moving an unbounded allocation to malloc and free is fine. LIFO is totally irrelevant. The point is avoiding putting pressure on the GC. If you want to be clever you can probably come up with some hybrid arrangement that tries to use the stack if it can or has a buffer somewhere.
Nov 25 2021
parent reply Elronnd <elronnd elronnd.net> writes:
On Thursday, 25 November 2021 at 15:42:28 UTC, max haughton wrote:
 Moving an unbounded allocation to malloc and free is fine. LIFO 
 is totally irrelevant.

 The point is avoiding putting pressure on the GC. If you want 
 to be clever you can probably come up with some hybrid 
 arrangement that tries to use the stack if it can or has a 
 buffer somewhere.
LIFO is a useful property that you can take advantage of. Why would you _want_ to allocate on the heap when you could allocate on [some kind of] stack? Re GC pressure, you get the same amount of pressure (amortized, assuming bounded allocation size and call-stack depth) if you allocate with the gc and then GC.free at the end of the scope.
Nov 25 2021
parent max haughton <maxhaton gmail.com> writes:
On Thursday, 25 November 2021 at 20:41:07 UTC, Elronnd wrote:
 On Thursday, 25 November 2021 at 15:42:28 UTC, max haughton 
 wrote:
 Moving an unbounded allocation to malloc and free is fine. 
 LIFO is totally irrelevant.

 The point is avoiding putting pressure on the GC. If you want 
 to be clever you can probably come up with some hybrid 
 arrangement that tries to use the stack if it can or has a 
 buffer somewhere.
LIFO is a useful property that you can take advantage of. Why would you _want_ to allocate on the heap when you could allocate on [some kind of] stack? Re GC pressure, you get the same amount of pressure (amortized, assuming bounded allocation size and call-stack depth) if you allocate with the gc and then GC.free at the end of the scope.
Because it's overly complicated. You can have a parallel stack if you want but KISS. The GC could collect at the first allocation. The point is to avoid the GC entirely, more of a latency thing than throughput. Assuming the analysis is reliable in the frontend this also allows the function to be nogc without forcing the programmer to do the memory allocation themselves.
Nov 25 2021
prev sibling parent Robert Schadek <rburners gmail.com> writes:
On Wednesday, 24 November 2021 at 14:13:35 UTC, Per Nordlöw wrote:
 On Wednesday, 17 November 2021 at 07:25:46 UTC, Robert Schadek 
 wrote:
 Also you can not have string interpolation without dynamic 
 memory.
With dynamic memory I presume you mean GC-allocated memory. dynamic memory allocation is broader term and covers cases such
Yes I mean GC-allocated memory
Nov 25 2021
prev sibling next sibling parent reply bachmeier <no spam.net> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:

 I fully understand that D is a community project and that we 
 can not tell the
 bulk of the contributors to work on issue X or milestone Y, but 
 we could ask
 them nicely.
 And if we follow our own project planing, they might just 
 follow along as well.
That's exactly the reason you need to lay out the plan clearly. It's a lot easier to get someone to contribute if they see the benefit. Feeling as if you're wasting your time is not motivating.


 Batteries included, all of them, even the small flat strange 
 ones.
I agree. But only if the batteries are high quality (bug-free) and high quality (make it easy to do the important tasks). Otherwise it's better to leave them as third-party libraries that are simple to add to your project.


 To summarize, GC and GC only.
I'm not sure about "GC only", but yes, D is only relevant if it has a GC. Going after the GC-free segment of the market is like releasing OpenBSD-only binaries. It's just too small to be worth the effort, especially with a well-funded competitor already in that space.


 betterC is, at best, a waste-by-product, if we have to use 
 betterC to write
 something for WASM, or anything significant, we might as well 
 start learning
 rust right now.
We don't want to promote it, but it does have an appeal to current C programmers, who often prefer an updated version of C to learning a new language.


 ImportC must have a preprocessor, or it is DOA.
 Shelling out to gcc or clang to preprocess, makes the build 
 system horrible
 which in turn will make the compiler library daemon thing 
 difficult to build.
 This is also important for the language interop, as I imagine 
 that most
 interop will go through a layer of C.
 When ImportC can use openssl 1.0.2s or so it is good enough.
 Having done some usage of openssl recently, my eyes can not 
 un-see the
 terribleness that is the openssl usage of C.
It makes sense to publish preprocessed versions of popular C libraries as Dub packages/standalone files that can be included in D programs. This can, to some extent, be done with what we already have. Finally, on interop, there should also be support for R, Matlab, Julia, and Fortran. D is a natural fit for data processing. It also would not be that hard, since all of those languages are designed to work easily with C libraries. And all but Matlab (not sure about that one) are easy to call from D.
Nov 16 2021
next sibling parent Robert Schadek <rburners gmail.com> writes:
On Tuesday, 16 November 2021 at 22:46:24 UTC, bachmeier wrote:
 Batteries included, all of them, even the small flat strange 
 ones.
I agree. But only if the batteries are high quality (bug-free) and high quality (make it easy to do the important tasks). Otherwise it's better to leave them as third-party libraries that are simple to add to your project.
No, nothing is ever bug-free. Having stuff on code.dlang reduces the visibility of the thing, which reduces the number of people using it, which increases the number of bugs. Please don't get me wrong, I'm not saying get any crap code into phobos, but aiming for perfect will get us nothing, while aiming for good will get us a lot.
 We don't want to promote it, but it does have an appeal to 
 current C programmers, who often prefer an updated version of C 
 to learning a new language.
Frankly, who cares? C programmers are not graduating from 12-week coding camps these days, they are retiring. Consider that, at least for now, we have limited resources; why waste them?
 It makes sense to publish preprocessed versions of popular C 
 libraries as Dub packages/standalone files that can be included 
 in D programs. This can, to some extent, be done with what we 
 already have.
Again, visibility and friction.
 Finally, on interop, there should also be support for R, 
 Matlab, Julia, and Fortran. D is a natural fit for data 
 processing. It also would not be that hard, since all of those 
 languages are designed to work easily with C libraries. And all 
 but Matlab (not sure about that one) are easy to call from D.
Fair enough, I didn't list all candidates as the *vision* was already quite long. I think working on interop with different languages can be parallelized quite well.
Nov 16 2021
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 16 November 2021 at 22:46:24 UTC, bachmeier wrote:
 I'm not sure about "GC only", but yes, D is only relevant if it 
 has a GC. Going after the GC-free segment of the market is like 
 releasing OpenBSD-only binaries. It's just too small to be 
 worth the effort, especially with a well-funded competitor 
 already in that space.
This is not true at all. Lots of people want a cleaned up C++, with C-like syntax. *Without* global GC. C++ will never be productive for application level programming. I don't think Rust will either.

What is primarily holding D back is that the project is not run by sound *software engineering* practices. If you were a professional, would you put a tool in your foundation whose design and development practices are not better than your own? I would think not. One reason that developers trust tools made by Google, Apple and Mozilla is that they assume those companies use proven software development methods. Small projects have to prove that they do. If they don't, they are off the table.

Restructuring the compiler, cleaning up the language and having *zero* regressions in releases is the first step to producing a tool that appeals to professional use.

Right now, D appeals to hobbyists, and that is ok. It is a nice hobby. Nothing wrong with that. And the big advantage of D primarily appealing to hobbyists is that the cost of breaking changes is small, if you don't do them all the time, but collect them and do them at once.
Nov 17 2021
parent reply maltedbarley97 <not.disclosing.here example.com> writes:
On Wednesday, 17 November 2021 at 11:01:41 UTC, Ola Fosheim 
Grøstad wrote:
 On Tuesday, 16 November 2021 at 22:46:24 UTC, bachmeier wrote:
 I'm not sure about "GC only", but yes, D is only relevant if 
 it has a GC. Going after the GC-free segment of the market is 
 like releasing OpenBSD-only binaries. It's just too small to 
 be worth the effort, especially with a well-funded competitor 
 already in that space.
This is not true at all. Lots of people want a cleaned up C++, with C-like syntax. *Without* global GC. C++ will never be productive for application level programming. I don't think Rust will either. What is primarily holding D back is that the project is not run by sound *software engineering* practices. If you were a professional, would you put a tool in your foundation where the design and development practices are not better than your own practices? I would think not. One reason that developers trust tools made by Google, Apple and Mozilla is that they assume that they use proven software development methods. Small projects have to prove that they do. If they don't they are off the table. Restructuring the compiler, cleaning up the language and having *zero* regressions in releases is the first step to producing a tool that appeal to professional use. Right now, D appeals to hobbyists, and that is ok. It is a nice hobby. Nothing wrong with that. And the big advantage of D primarily appealing to hobbyists is that the cost of breaking changes are small, if you don't do them all the time, but collect them and do them at once.
Hobbyism is a nice way of putting it, and at this point, I think you're right. Project goals were last relevant about a decade ago.

I think the undesired outcome of D's place in the world comes down to the project management style that ultimately creeps into the final product. Basically it is proof of how this style of pleasing everybody fails with limited manpower and influence. C++ can do it, D can not. That the principal author spent all his life in the gravitas of C++ does not mean there is a reward for good behaviour. By that, I mean obviously the crowd-pleasing attitude (for the crowd you want, not the one you have). As mentioned somewhere, implementing new features on a whim goes against being community-led - it seems he would prefer people working for him instead of with him. Ironically, it becomes the exact opposite of pleasing everybody.

By the way, I haven't seen Walter engaging in these "future" conversations in a while. Maybe he's getting deja vu too? The forums look exactly the same as 5 years ago, debating the same things. Making them quite a sad place on the internet.
Nov 17 2021
parent reply Tejas <notrealemail gmail.com> writes:
On Wednesday, 17 November 2021 at 13:27:16 UTC, maltedbarley97 
wrote:
 On Wednesday, 17 November 2021 at 11:01:41 UTC, Ola Fosheim 
 Grøstad wrote:
 On Tuesday, 16 November 2021 at 22:46:24 UTC, bachmeier wrote:
 I'm not sure about "GC only", but yes, D is only relevant if 
 it has a GC. Going after the GC-free segment of the market is 
 like releasing OpenBSD-only binaries. It's just too small to 
 be worth the effort, especially with a well-funded competitor 
 already in that space.
This is not true at all. Lots of people want a cleaned up C++, with C-like syntax. *Without* global GC. C++ will never be productive for application level programming. I don't think Rust will either. What is primarily holding D back is that the project is not run by sound *software engineering* practices. If you were a professional, would you put a tool in your foundation where the design and development practices are not better than your own practices? I would think not. One reason that developers trust tools made by Google, Apple and Mozilla is that they assume that they use proven software development methods. Small projects have to prove that they do. If they don't they are off the table. Restructuring the compiler, cleaning up the language and having *zero* regressions in releases is the first step to producing a tool that appeal to professional use. Right now, D appeals to hobbyists, and that is ok. It is a nice hobby. Nothing wrong with that. And the big advantage of D primarily appealing to hobbyists is that the cost of breaking changes are small, if you don't do them all the time, but collect them and do them at once.
Hobbyism is a nice way of putting it, and at this point, I think you're right. Project goals were last relevant about a decade ago. I think the undesired outcome of D's place in the world comes down to the project management style that ultimately creeps in to the final product. Basically it is the proof of how this style of pleasing everybody fails on limited manpower and influence. C++ can do it, D can not. Even when the principal author spent all his life in the gravitas of C++, does not mean there is reward for good behaviour. By that, I mean obviously the crowd-pleasing attitude (for the crowd you want, not the one you have). As mentioned somewhere, implementing new features on a whim goes against being community-led - it seems he would prefer people working for him instead of with him. Ironically, it becomes the exact opposite of pleasing everybody. By the way, I haven't seen Walter engaging in these "future" conversations in a while. Maybe he's getting deja vu too? Forums look exactly the same as 5 years ago, debating the same things. Making them quite a sad place on the internet.
Stuff can get pretty negative down here, as well as on other platforms (regrettably, I myself spat venom in our Discord channel just a couple of days ago...).

He recently came up with another idea that will help D gain traction: integrating a C compiler (parser, actually) directly into the front-end. How much skepticism and criticism was thrown upon him (I'm not judging whether that was good or bad)? He's trying, honest to god he's doing everything he thinks can help, but it's not working out :(

Plus, as you say, things are getting a little circular here (although there is serious action behind `stdv2` now, so maybe a little bit of genuine progress is being made), so I don't blame him for thinking that it's better to keep your head down and work rather than engage in fruitless discussion.

Plus Plus, I think he _was_ interacting relatively recently in the refcounting thread, plus a discussion in ImportC as well, so it's not as if he's completely absent from the forums.
Nov 17 2021
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 17 November 2021 at 13:51:09 UTC, Tejas wrote:
 He recently came up with another idea that will help D gain 
 traction: integrating a C compiler(parser, actually) directly 
 into the front-end.

 How much skepticism and criticism was thrown upon him(I'm not 
 judging whether that was good or bad)?
Nothing wrong with integrating with C, it is interesting, but it has to happen in the right order. The compiler internals should be cleaned up first. It is kinda like eating the dessert before eating dinner. We are all guilty of that, I assume, but in a compiler the long term costs are higher than usual. So the vision was ok, but the software engineering part of it is questionable.
Nov 17 2021
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 16.11.21 22:00, Robert Schadek wrote:

 
 D -- The best programming language!
 
 I imagine a DConf where you guys yell at me, not because we disagree,
 but because I'm old and forgot my hearing aid.
 
 This is what I think needs to be done to get us there.
 

 ...
 
 Github has an okay api, I bet we can replicate 99% of the features that are
 missing with very little code executed by some bots.
 ...
Why not start by synching github issues into bugzilla using those bots?
 And of course, laying out new directions and goal for the
 language and library.
 Not short term but long term e.g. ~5 years.
 Only after that work is done comes the developing.
 Having more development time left would be the measure of success for the
 leadership side.
 ...
Walter's main priority nowadays seems to be @safe memory management without GC and C/C++ interoperability. Andrei's current main priority seems to be, more specifically, @safe reference counting.

 
 Get your tomatoes and eggs ready.
 

 
 There GC is here to stay, you don't do manual memory management (MMM) in a
 compiler daemon that tracks dependency.
 I don't care how smart you are, you are not that smart.
That's why Walter and Andrei want things to be @safe.
 ...
 
 To summarize, GC and GC only.
 ...
See above. It does not seem to me like that aligns well with the goals of W&A.
 ...
 

 
 I'm writing this section as one of the last.
 This is maybe one of the most important parts, but also the hardest
 to validate.
 When reading the forum, or the github PR's I get the feeling that people 
 think
 that D is a consensus driven, meritocracy.
 That is not the case, and that is okay.
 ...
(Earlier you noted there is a low number of contributors.)
 ...
 D 27:
 
 * Compiler daemon thing.
 
 The work on the compiler daemon thing, will have to start before 2025.
 
Even if there was a consensus on that, you already noted that it does not matter. This requires massive refactoring of the DMD code base.
Nov 16 2021
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Tuesday, 16 November 2021 at 23:55:03 UTC, Timon Gehr wrote:

 ...
 D 27:
 
 * Compiler daemon thing.
 
 The work on the compiler daemon thing, will have to start 
 before 2025.
 
Even if there was a consensus on that, you already noted that it does not matter. This requires massive refactoring of the DMD code base.
Refactoring is putting it mildly. It needs a fundamentally different approach to how we do semantic analysis / code transformation / code insertion / code expansion (as you know well). I believe that if we go down that path (I am already on that path), only the parser and the code generator will remain as they are.
Nov 16 2021
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Nov 17, 2021 at 12:04:57AM +0000, Stefan Koch via Digitalmars-d wrote:
 On Tuesday, 16 November 2021 at 23:55:03 UTC, Timon Gehr wrote:
 ...
 D 27:
 
 * Compiler daemon thing.
[...]
 Even if there was a consensus on that, you already noted that it
 does not matter. This requires massive refactoring of the DMD code
 base.
Refactoring is putting it mildly. It needs a fundamental different approach to how we do semantic-analysis/code-transformation/code-insertion/code-expansion.
Yeah, no kidding. It will require basically a rewrite of everything in the compiler except for parsing / codegen, as Stefan says. Actually, I'm not even sure about the parsing part, if you're going to expect the compiler-as-a-daemon to handle on-the-fly code changes.

And given the number of quirks in the current DMDFE, such a thing will probably turn out to have different interpretations of the language when fed existing code. IIRC SDC ran into this problem: after a certain point deadalnix discovered ambiguities in the spec that would lead to divergent parses / semantics of the same code. So in order for such an effort not to turn out to be a waste, the spec must be nailed down first.


T

-- 
Elegant or ugly code as well as fine or rude sentences have something in common: they don't depend on the language. -- Luca De Vitis
Nov 16 2021
prev sibling parent Robert Schadek <rburners gmail.com> writes:
On Tuesday, 16 November 2021 at 23:55:03 UTC, Timon Gehr wrote:
 Why not start by synching github issues into bugzilla using 
 those bots?
there is already a tool, https://github.com/wilzbach/bugzilla-migration/blob/master/bugzilla2github.py, and the fact that it was written in Python I consider vindication for my claims in the post.
 Walter's main priority nowadays seems to be @safe memory 
 management without GC and C/C++ interoperability.
 Andrei's current main priority seems to be more specifically 
 @safe reference counting.
People can change their mind.
 That's why Walter and Andrei want things to be @safe.
As said, that is too little: @safe by default, with GC only. If you want to shoot your own foot, you must be required to write @system.
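A minimal sketch of what that rule would mean in practice (hypothetical functions, not from the vision itself):

```D
@safe:

// Fine under the default: bounds-checked indexing and GC allocation.
int[] doubled(const int[] xs)
{
    auto r = new int[](xs.length);
    foreach (i, x; xs)
        r[i] = 2 * x;
    return r;
}

// Pointer arithmetic is rejected in @safe code, so foot-shooting has to be
// requested explicitly:
@system int peek(int* p, size_t offset)
{
    return *(p + offset);
}
```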
 To summarize, GC and GC only.
 ...
See above. It does not seem to me like that aligns well with the goals of W&A.
People can change their mind.
 ...
 

 
 I'm writing this section as one of the last.
 This is maybe one of the most important parts, but also the 
 hardest
 to validate.
 When reading the forum, or the github PR's I get the feeling 
 that people think
 that D is a consensus driven, meritocracy.
 That is not the case, and that is okay.
 ...
(Earlier you noted there is a low number of contributors.)
I don't see any contradiction in these two statements. Please elaborate.
 Even if there was a consensus on that, you already noted that 
 it does not matter. This requires massive refactoring of the 
 DMD code base.
It requires a rewrite, so?
Nov 16 2021
prev sibling next sibling parent forkit <forkit gmail.com> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:

.....
Here is my vision for D .. to "create interesting software more easily" (D. Ritchie). That should be its value proposition. Indeed, that should become its motto.
Nov 16 2021
prev sibling next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:
 I imagine a DConf where you guys yell at me, not because we 
 disagree,
 but because I'm old and forgot my hearing aid.
Love the intro! :-D
 And if we follow our own project planing, they might just 
 follow along as well.
Just having a plan and showing how it correlates to progress is the first box to check in a risk assessment for a software project that considers using the language. But you also need a stable branch and a good versioning scheme (no breaking changes between major releases).
Imagine, a compiler daemon that you start once per
 project/program that keeps
 track of all files related to this project.
Yes, what is needed is a high-level IR that can be emitted. A formal spec of the compilation stages is needed too. It may require language adjustments. Speccing what the compiler does now will most likely not be what you want. But it is totally worth it. It will improve the language semantics.
 There GC is here to stay, you don't do manual memory management 
 (MMM) in a
 compiler daemon that tracks dependency.
 I don't care how smart you are, you are not that smart.
I think "smart" is the wrong argument. We have to also think about costs. So plain MMM is costly, true. But global GC scanning does not work well. Would you be ok with local GC scanning and RC for shared references?
 ImportC must have a preprocessor, or it is DOA.
It also has to emulate common GCC extensions :-). Reference implementations follow ISO C11, other code often does not.
 This was already partially discussed in the long term goals, 
 but needs better
 documentation or better yet a spec.
 The cool thing is, we don't need to be an ISO spec aka. a pdf.
 We could very well be a long .d file with lots of comments and 
 unittests.
Well, a slow reference implementation that validates input thoroughly is better than just documentation. But then you need to streamline the language semantics, otherwise the reference will be incomprehensible, I think. But the language would be better if it was done, so not a disadvantage. Some breakage would have to be expected. If you design a high-level IR, then you only need to emit that from the reference compiler.
 The dart language people shall come to fear our binding 
 generation
 capabilities.
I don't think so. Dart supports live coding. You can modify it when the application is running. Anyway iOS/Android are moving targets. Too expensive to do well.
 Being a bit dramatic, given people false hope, that gets 
 disappointed, will
 drive them away from D.
Not from D, but maybe from compiler development. But that is a big time investment. You cannot expect people to be willing to invest so much in an uncertain outcome.
 A simple solution, IMO, is to take clear stance on issues.
Yes. Why invest time in extending the compiler if you don't know where D is heading or what is considered a good addition?
 I see the DIP process troublesome as it gives the impression of 
 say of what D
 will become.
Yes. I think a reference compiler would be better. Then people can implement what they want and show that as a proof of concept. If it is interesting then it will gain momentum.
 I'm serious about the motto at the top.
 When people start complaining that their language is better, 
 its free
 marketing for D.
Ok, I don't think mottos matter much; maybe for the insiders they do, but to outsiders they can come across as childish... White papers and vision documents matter. If you have a clear vision, then there is no need for a motto. It is self evident.

Kudos again for taking the time to write a thoughtful and *passionate* post! Some changes are necessary, and they will only happen if people show their passion. So what you do here is important, I think.
Nov 17 2021
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 17 November 2021 at 12:54:10 UTC, Ola Fosheim 
Grøstad wrote:
 Yes. I think a reference compiler would be better. Then people 
 can implement what they want and show that as a proof of 
 concept. If it is interesting then it will gain momentum.
An interesting aspect of using a reference implementation written for clarity instead of a spec is that if the proof of concept is good, then people can use the reference compiler. If enough people do that then that proof-of-concept feature will make it into the main compiler (or the main compiler will die off). Never thought about that before, it is an interesting idea.
Nov 17 2021
prev sibling next sibling parent Guillaume Piolat <first.last gmail.com> writes:
Lots of good ideas there.

Violently agree with:

     1. @safe -> safe
        @trusted -> trusted
        @system -> system
        @nogc -> nogc

        No it's not that disruptive.

2. If I understand correctly auto-decoding is being worked on.

3. WebASM being important. YES.

4. Other languages recruit in Python and Web developer circles, 
since that's what people are taught first.

Some of us don't want GC or Phobos and _need_ to live under 
@nogc; a disabled runtime (not -betterC necessarily) is super 
useful. I'd want optional Phobos.

I disagree with:

     - Formats like JSON Schema need champions from the community 
who will produce and maintain a DUB package. Likewise for HTTP 
2/3.
       Different for the event loop, as it has network effects. 
Heck, people create D libraries all the time, it just needs 
direction maybe.

- No mention of DUB aesthetics and usability! It is paramount, 
more colors and animations would go a long way.

- @safe by default: I really don't care about that. I guess it 
would be a positive? Will I earn more money selling D software? 
Perhaps slightly more. Why not do it.

Anyway, Robert for president!
Nov 17 2021
prev sibling next sibling parent reply JN <666total wp.pl> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:


 betterC is, at best, a waste-by-product, if we have to use 
 betterC to write
 something for WASM, or anything significant, we might as well 
 start learning
 rust right now.
I think Zig will be a more powerful competitor in the future than Rust. Rust appeals more to C++ programmers, but Zig is targeting C programmers more. I've been looking at Zig lately, and I have to say I think it's a very interesting language. It has optional standard library and bring-your-own-allocator memory management design, which is very tempting for C folks who like to manage their own memory.
Nov 17 2021
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 17 November 2021 at 20:36:10 UTC, JN wrote:
 targeting C programmers more. I've been looking at Zig lately, 
 and I have to say I think it's a very interesting language. It 
 has optional standard library and bring-your-own-allocator 
 memory management design, which is very tempting for C folks 
 who like to manage their own memory.
Good support for "runtimeless" is generally viewed as a requirement for a language to be considered a true system level programming language. I wish someone would do a Zig vs D comparison in the forums as it didn't look all that impressive to me, but I could be wrong. You also have this for WASM (I know absolutely nothing about it): https://github.com/AssemblyScript/assemblyscript One issue with WASM is that you cannot take the address of items on the stack (IIRC) so you have to create an inefficient shadow-stack if the language expects that you can do it. So, really, a restricted D subset tailored to WASM could be better for those who want to use WASM for performance reasons (like writing a game).
Nov 17 2021
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Wednesday, 17 November 2021 at 20:36:10 UTC, JN wrote:
 On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
 wrote:


 betterC is, at best, a waste-by-product, if we have to use 
 betterC to write
 something for WASM, or anything significant, we might as well 
 start learning
 rust right now.
I think Zig will be a more powerful competitor in the future than Rust. Rust appeals more to C++ programmers, but Zig is targeting C programmers more. I've been looking at Zig lately, and I have to say I think it's a very interesting language. It has optional standard library and bring-your-own-allocator memory management design, which is very tempting for C folks who like to manage their own memory.
Zig is targeting former Objective-C developers with @ everywhere, has a module system based on the JavaScript AMD model with its @import, and doesn't fix use-after-free.

C folks like to manage their own memory and they get it wrong most of the time.

https://support.apple.com/en-us/HT212869
Nov 17 2021
parent reply forkit <forkit gmail.com> writes:
On Thursday, 18 November 2021 at 06:54:33 UTC, Paulo Pinto wrote:
 C folks like to manage their own memory and they get it wrong 
 most of the time.
god help us all, if that assertion is true. (which of course, it's not)

'some of the time', sure, even for the best, but 'most of the time'.. that is hyperbole - except perhaps in the case of novices.

The tools we have available these days to assist C programmers are something to be factored in when speaking about the C language.

btw. C powers the world.

https://www.toptal.com/c/after-all-these-years-the-world-is-still-powered-by-c-programming
Nov 18 2021
next sibling parent Atila Neves <atila.neves gmail.com> writes:
On Thursday, 18 November 2021 at 09:28:08 UTC, forkit wrote:
 On Thursday, 18 November 2021 at 06:54:33 UTC, Paulo Pinto 
 wrote:
 C folks like to manage their own memory and they get it wrong 
 most of the time.
god help us all, if that assertion is true.
It is. It always has been, and decades and many tools later, it still is. This isn't a matter of opinion, it's provable fact.
 (which of course, it's not).
Huge if true.
 'some of the time', sure, even the best, but 'most of the 
 time'..
All of the time, if one only counts projects of a certain size and up. Of course it's trivial to get it right in a 50 line program.
 that is hyperbole - except perhaps, in the case of novices.
Is Walter a novice? Am I? Is Andrei? Is \<insert name here>...? I have no idea how or why this myth of the "sufficiently competent C programmer" persists. They don't exist.
 The tools we have available these days, to assist C 
 programmers, is something to factored in when speaking about 
 the C language.
I've used them.
 btw. C powers the world.
So does burning coal, but I wouldn't recommend that either.
Nov 18 2021
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Thursday, 18 November 2021 at 09:28:08 UTC, forkit wrote:
 On Thursday, 18 November 2021 at 06:54:33 UTC, Paulo Pinto 
 wrote:
 C folks like to manage their own memory and they get it wrong 
 most of the time.
god help us all, if that assertion is true. (which of course, it's not). 'some of the time', sure, even the best, but 'most of the time'.. that is hyperbole - except perhaps, in the case of novices. The tools we have available these days, to assist C programmers, is something to factored in when speaking about the C language. btw. C powers the world. https://www.toptal.com/c/after-all-these-years-the-world-is-still-powered-by-c-programming
Unfortunately,

"~70% of the vulnerabilities Microsoft assigns a CVE each year continue to be memory safety issues"

https://msrc-blog.microsoft.com/2019/07/16/a-proactive-approach-to-more-secure-code/

"As part of our continuous commitment to improve the security of the Android ecosystem, we are partnering with Arm to design the memory tagging extension (MTE). Memory safety bugs, common in C and C++, remain one of the largest vulnerabilities in the Android platform and although there have been previous hardening efforts, memory safety bugs comprised more than half of the high priority security bugs in Android 9."

https://security.googleblog.com/2019/08/adopting-arm-memory-tagging-extension.html

C only powers the world because lawsuits over security exploits still aren't common practice in software, unlike in other industries where liability is legally enforced. The day this changes, C won't power anything for much longer.
Nov 18 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Nov 18, 2021 at 02:59:24PM +0000, Paulo Pinto via Digitalmars-d wrote:
 On Thursday, 18 November 2021 at 09:28:08 UTC, forkit wrote:
[...]
 btw. C powers the world.
 
 https://www.toptal.com/c/after-all-these-years-the-world-is-still-powered-by-c-programming
Unfortunely, "~70% of the vulnerabilities Microsoft assigns a CVE each year continue to be memory safety issues" https://msrc-blog.microsoft.com/2019/07/16/a-proactive-approach-to-more-secure-code/ "As part of our continuous commitment to improve the security of the Android ecosystem, we are partnering with Arm to design the memory tagging extension (MTE). Memory safety bugs, common in C and C++, remain one of the largest vulnerabilities in the Android platform and although there have been previous hardening efforts, memory safety bugs comprised more than half of the high priority security bugs in Android 9." https://security.googleblog.com/2019/08/adopting-arm-memory-tagging-extension.html C only powers the world, because lawsuits due to security exploits still aren't a common practice in software like in other industries, where liability is legally enforced. The day this changes, C won't power anything for much longer.
Exactly what I said. The day will come when the world realizes just how much of a liability an inherently-unsafe language is, and how much it's costing businesses, and the tables will turn.


T

-- 
Being forced to write comments actually improves code, because it is easier to fix a crock than to explain it. -- G. Steele
Nov 18 2021
next sibling parent reply SealabJaster <sealabjaster gmail.com> writes:
On Thursday, 18 November 2021 at 17:52:44 UTC, H. S. Teoh wrote:
 Exactly what I said.  The day will come when the world realizes 
 just how much of a liability an inherently-unsafe language is, 
 and how much it's costing businesses, and the tables will turn.


 T
A shame we didn't get @safe by default pushed through, because (from what I recall) extern(C) functions were for some reason also considered @safe by default, which caused too much backlash. And to quote a similar post I made before, "fuck it, let's just scrap it so everyone loses out instead" is essentially what I remember happening.
Nov 18 2021
next sibling parent reply Greg Strong <mageofmaple protonmail.com> writes:
On Thursday, 18 November 2021 at 23:51:09 UTC, SealabJaster wrote:
 "fuck it, let's just scrap it so everyone looses out instead"
Ding, ding, ding! I think you just stumbled on D's new motto :)
Nov 18 2021
parent SealabJaster <sealabjaster gmail.com> writes:
On Friday, 19 November 2021 at 00:03:39 UTC, Greg Strong wrote:
 On Thursday, 18 November 2021 at 23:51:09 UTC, SealabJaster 
 wrote:
 "fuck it, let's just scrap it so everyone looses out instead"
Ding, ding, ding! I think you just stumbled on D's new motto :)
Depressingly true :D
Nov 18 2021
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Nov 18, 2021 at 11:51:09PM +0000, SealabJaster via Digitalmars-d wrote:
 On Thursday, 18 November 2021 at 17:52:44 UTC, H. S. Teoh wrote:
 Exactly what I said.  The day will come when the world realizes just
 how much of a liability an inherently-unsafe language is, and how
 much it's costing businesses, and the tables will turn.
[...]
 A shame we didn't get @safe by default pushed through, because (from
 what I recall) extern(C) functions were for some reason also
 considered @safe by default, which caused too much backlash.
[...]

Honestly, it was a big loss for D that @safe by default failed to get through simply due to such a small detail. IMO the benefits of @safe by default far exceed any squabble we may have over how extern(C) functions should behave.

Still, @safe itself leaves much to be desired:

https://issues.dlang.org/buglist.cgi?keywords=safe&list_id=238237&resolution=---

It's not bad in its current state, but it could be so much more had a more complete job been done. The fact that it's implemented as a blacklist rather than a whitelist also means that there are likely many holes in it that we just haven't found yet.

What should have been done is to implement it as a whitelist, and then each time somebody gets blocked by @safe for something that's actually safe, we can review it and conservatively expand the whitelist. With a blacklist implementation, it's anybody's guess where, in the exponentially-many combinations of language features, there might be loopholes in @safe, which is a far less tractable problem.

(Yes, a blacklist implementation and a whitelist implementation will eventually both converge to the same thing. But a blacklist implementation will continue to have loopholes until it converges, whereas a whitelist implementation is guaranteed safe, with only the occasional inconvenience when a valid operation is wrongly blocked. When it comes to memory safety and potential security exploits, it's always better to err on the safe side.)


T

-- 
Don't throw out the baby with the bathwater. Use your hands...
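To make the whitelist/blacklist distinction concrete, here is a toy sketch; the `Op` enum and the lists are invented and are not how the compiler actually models @safe:

```D
enum Op
{
    boundsCheckedIndex,
    callSafeFunction,
    pointerArithmetic,
    castIntToPointer,
    untaggedUnionAccess, // imagine nobody has reviewed this one yet
}

// Blacklist: allowed unless somebody already found it to be unsafe.
bool allowedBlacklist(Op op) @safe pure nothrow
{
    static immutable knownBad = [Op.pointerArithmetic, Op.castIntToPointer];
    foreach (bad; knownBad)
        if (op == bad)
            return false;
    return true; // unreviewed operations slip through
}

// Whitelist: rejected unless somebody already proved it to be safe.
bool allowedWhitelist(Op op) @safe pure nothrow
{
    static immutable knownGood = [Op.boundsCheckedIndex, Op.callSafeFunction];
    foreach (good; knownGood)
        if (op == good)
            return true;
    return false; // unreviewed operations are conservatively blocked
}
```

`allowedBlacklist(Op.untaggedUnionAccess)` returns true even though nobody has looked at it, which is exactly the loophole problem described above.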
Nov 18 2021
parent reply SealabJaster <sealabjaster gmail.com> writes:
On Friday, 19 November 2021 at 00:09:20 UTC, H. S. Teoh wrote:
 ...
I wonder if we should rename the language to "swiss cheese" considering the amount of holes we have.
Nov 18 2021
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Nov 19, 2021 at 12:35:49AM +0000, SealabJaster via Digitalmars-d wrote:
 On Friday, 19 November 2021 at 00:09:20 UTC, H. S. Teoh wrote:
 ...
I wonder if we should rename the language to "swiss cheese" considering the amount of holes we have.
No, cheese grater. :-P T -- Making non-nullable pointers is just plugging one hole in a cheese grater. -- Walter Bright
Nov 18 2021
prev sibling parent Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Friday, 19 November 2021 at 00:35:49 UTC, SealabJaster wrote:
 On Friday, 19 November 2021 at 00:09:20 UTC, H. S. Teoh wrote:
 ...
I wonder if we should rename the language to "swiss cheese" considering the amount of holes we have.
D - the quantum cheese! No, I finally got it: "It might look like a pile of shit, but it's actually cool" The holes are obviously a feature: https://www.asme.org/topics-resources/content/what-termites-can-teach-engineers
Nov 18 2021
prev sibling parent reply forkit <forkit gmail.com> writes:
On Thursday, 18 November 2021 at 17:52:44 UTC, H. S. Teoh wrote:
 Exactly what I said.  The day will come when the world realizes 
 just how much of a liability an inherently-unsafe language is, 
 and how much it's costing businesses, and the tables will turn.


 T
unsafe really is subjective (and we can learn this fact from observing how different countries, and different people, dealt with the pandemic). There is a tolerance for risk, always, up to a point.

In any case... C has, and will always have, value.

I like this statement below:

"..because we see everything with its real nature in C"

- quoted from: https://forum.dlang.org/post/vfsxzwrieivwqipicgka@forum.dlang.org

Any language that tries to improve on C, no matter how worthwhile the effort, is inherently deceptive.
Nov 18 2021
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 19.11.21 01:08, forkit wrote:
 On Thursday, 18 November 2021 at 17:52:44 UTC, H. S. Teoh wrote:
 Exactly what I said.  The day will come when the world realizes just 
 how much of a liability an inherently-unsafe language is, and how much 
 it's costing businesses, and the tables will turn.


 T
unsafe really is subjective (and we can learn this fact from observing how different countries, and different people dealt with the pandemic). There is a tolerance for risk, always, up to a point. In anycase... C has, and will always have, value. I like this statement below: "..because we see everything with its real nature in C" - quoted from: https://forum.dlang.org/post/vfsxzwrieivwqipicgka forum.dlang.org Any langauge that tries to improve on C, no matter how worthwhile the effort, is inherently deceptive.
I think that statement itself is deceptive. C is not all that close to how modern hardware actually operates.
Nov 18 2021
parent reply forkit <forkit gmail.com> writes:
On Friday, 19 November 2021 at 00:38:15 UTC, Timon Gehr wrote:
 I think that statement itself is deceptive. C is not all that 
 close to how modern hardware actually operates.
I mean the abstraction of the C memory model... i.e. " ..one or more contiguous sequences of bytes. Each byte in memory has a unique address." This is still an appropriate abstraction, even in modern times. When working at a low-level, this is still, even today, an appropriate and suitable abstraction, on which to build your ideas. No language can model 'actual hardware', and even if it could, the human brain could never use such a language.
Nov 18 2021
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 19.11.21 03:36, forkit wrote:
 On Friday, 19 November 2021 at 00:38:15 UTC, Timon Gehr wrote:
 I think that statement itself is deceptive. C is not all that close to 
 how modern hardware actually operates.
I mean the abstraction of the C memory model... i.e. " ..one or more contiguous sequences of bytes. Each byte in memory has a unique address." This is still an appropriate abstraction, even in modern times. ...
Depends on what you want to do.
 When working at a low-level, this is still, even today, an appropriate 
 and suitable abstraction, on which to build your ideas.
 ...
Debatable, but this is not even what C gives you, so...
 No language can model 'actual hardware', and even if it could, the human 
 brain could never use such a language.
 
 
You are moving the goalposts. There were two statements: - "..because we see everything with its real nature in C" - "Any langauge that tries to improve on C, no matter how worthwhile the effort, is inherently deceptive." Those are not useful statements. They are not true.
Nov 18 2021
parent reply forkit <forkit gmail.com> writes:
On Friday, 19 November 2021 at 02:46:58 UTC, Timon Gehr wrote:
 ...
 Those are not useful statements. They are not true.
That is not a useful statement. It is not true.
Nov 18 2021
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 19.11.21 04:42, forkit wrote:
 On Friday, 19 November 2021 at 02:46:58 UTC, Timon Gehr wrote:
 ...
 Those are not useful statements. They are not true.
That is not a useful statement. It is not true.
https://en.wikipedia.org/wiki/Hitchens%27s_razor
Nov 18 2021
prev sibling next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 19 November 2021 at 02:36:31 UTC, forkit wrote:
 Each byte in memory has a unique address.
This is not true and has never been true, at least in the x86 world.
Nov 18 2021
parent reply forkit <forkit gmail.com> writes:
On Friday, 19 November 2021 at 02:54:08 UTC, Adam D Ruppe wrote:
 On Friday, 19 November 2021 at 02:36:31 UTC, forkit wrote:
 Each byte in memory has a unique address.
This is not true and has never been true, at least in the x86 world.
you're conflating memory addressing in hardware, and memory addressing in software. I'm only interested in the software abstraction (and even a description of the hardware addressing is itself likely to be an abstraction). The software abstraction is the lowest level I want to go. Which is an array of addressable memory. I build on that. I don't need to go any lower. Ultimately, everything in the universe is an abstraction, at some level or another.
Nov 18 2021
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 19 November 2021 at 03:40:59 UTC, forkit wrote:
 On Friday, 19 November 2021 at 02:54:08 UTC, Adam D Ruppe wrote:
 On Friday, 19 November 2021 at 02:36:31 UTC, forkit wrote:
 Each byte in memory has a unique address.
This is not true and has never been true, at least in the x86 world.
you're conflating memory addressing in hardware, and memory addressing in software. I'm only interested in the software abstraction (and even a description of the hardware addressing is itself likely to be an abstraction). The software abstraction is the lowest level I want to go. Which is an array of addressable memory. I build on that. I don't need to go any lower. Ultimately, everything in the universe is an abstraction, at some level or another.
C represents the hardware, except when it does not, got it.
Nov 18 2021
parent reply forkit <forkit gmail.com> writes:
On Friday, 19 November 2021 at 06:26:09 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 03:40:59 UTC, forkit wrote:
 On Friday, 19 November 2021 at 02:54:08 UTC, Adam D Ruppe 
 wrote:
 On Friday, 19 November 2021 at 02:36:31 UTC, forkit wrote:
 Each byte in memory has a unique address.
This is not true and has never been true, at least in the x86 world.
you're conflating memory addressing in hardware, and memory addressing in software. I'm only interested in the software abstraction (and even a description of the hardware addressing is itself likely to be an abstraction). The software abstraction is the lowest level I want to go. Which is an array of addressable memory. I build on that. I don't need to go any lower. Ultimately, everything in the universe is an abstraction, at some level or another.
C represents the hardware, except when it does not, got it.
C represents a workable (and portable) abstraction of hardware. Would you rather be programming in a language that requires you to send a command to trigger 5 volts along a specific wire, so you can read or store ... one bit? (and even that is an abstraction).
Nov 19 2021
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 19 November 2021 at 09:26:26 UTC, forkit wrote:
 C represents a workable (and portable) abstraction of hardware.

 Would you rather be programming in a language that requires you 
 to send a command to trigger 5 volts along a specific wire, so 
 you can read or store ... one bit? (and even that is an 
 abstraction).
Motorola 68000 assembly is nice, much more flexible than C.
Nov 19 2021
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 19 November 2021 at 09:26:26 UTC, forkit wrote:
 On Friday, 19 November 2021 at 06:26:09 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 03:40:59 UTC, forkit wrote:
 On Friday, 19 November 2021 at 02:54:08 UTC, Adam D Ruppe 
 wrote:
 On Friday, 19 November 2021 at 02:36:31 UTC, forkit wrote:
 Each byte in memory has a unique address.
This is not true and has never been true, at least in the x86 world.
you're conflating memory addressing in hardware, and memory addressing in software. I'm only interested in the software abstraction (and even a description of the hardware addressing is itself likely to be an abstraction). The software abstraction is the lowest level I want to go. Which is an array of addressable memory. I build on that. I don't need to go any lower. Ultimately, everything in the universe is an abstraction, at some level or another.
C represents the hardware, except when it does not, got it.
C represents a workable (and portable) abstraction of hardware. Would you rather be programming in a language that requires you to send a command to trigger 5 volts along a specific wire, so you can read or store ... one bit? (and even that is an abstraction).
I would rather be programming in languages like Modula-2, Ada, or heck, D, without the minefield that C brought into the world.

Sad historical fact: the set of languages created for systems programming in the 10 years before C came into the world, starting with JOVIAL, did provide bounds checking by default and proper strings.

The DoD security assessment of Multics, which contrary to UNIX folklore went on and had a life even after AT&T bailed out, was much higher than that of UNIX, thanks to PL/I.
 Although the first edition of K&R described most of the rules 
 that brought C's type structure to its present form, many 
 programs written in the older, more relaxed style persisted, 
 and so did compilers that tolerated it. To encourage people to 
 pay more attention to the official language rules, to detect 
 legal but suspicious constructions, and to help find interface 
 mismatches undetectable with simple mechanisms for separate 
 compilation, Steve Johnson adapted his pcc compiler to produce 
 lint [Johnson 79b], which scanned a set of files and remarked 
 on dubious constructions.
-- Dennis M. Ritchie on https://www.bell-labs.com/usr/dmr/www/chist.html

Dennis knew what he had created; C advocates to this day apparently do not.
Nov 19 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 19 November 2021 at 10:13:02 UTC, Paulo Pinto wrote:
 -- Dennis M. Ritchie on 
 https://www.bell-labs.com/usr/dmr/www/chist.html

 Dennis knew what he created, C advocates to this day apparently 
 not.
Thanks for the link! Important quote, take notice Boehm GC-defenders:

**«C is hostile to automatic garbage collection.»**

HOSTILE! Nothing less. And I agree.

Another important quote:

*«As should be clear from the history above, C evolved from typeless languages. It did not suddenly appear to its earliest users and developers as an entirely new language with its own rules; instead we continually had to adapt existing programs as the language developed, and make allowance for an existing body of code.»*

So, in 2021, we are stuck with the flaws introduced by the backwards compatibility requirements of the 1970s. \*cheers\*

D should not be so concerned with breakage, just do D3 and get it right.
Nov 19 2021
next sibling parent zjh <fqbqrr 163.com> writes:
On Friday, 19 November 2021 at 10:47:12 UTC, Ola Fosheim Grøstad 
wrote:
 D should not be so concerned with breakage, just do D3 and get 
 it right.
Right.
Nov 19 2021
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 19 November 2021 at 10:47:12 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 19 November 2021 at 10:13:02 UTC, Paulo Pinto wrote:
 -- Dennis M. Ritchie on 
 https://www.bell-labs.com/usr/dmr/www/chist.html

 Dennis knew what he created, C advocates to this day 
 apparently not.
Thanks for the link! Important quote, take notice Boehm GC-defenders: **«C is hostile to automatic garbage collection.»** HOSTILE! Nothing less. And I agree. ...
This is why the D community should take care when celebrating designs in other languages like Objective-C and Swift's ARC.

As I have already mentioned a couple of times, ARC did not come out as the best technical option for Objective-C just like that, but rather due to the failure to make Objective-C's tracing GC, based on Boehm's work, run without issues.

Still available on the documentation archive:

https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/GarbageCollection/Introduction.html#//apple_ref/doc/uid/TP40002431

Check "Architecture", "Design Patterns to Use, and to Avoid", "Inapplicable Patterns", just for starters.

So when Apple rebooted their design, they followed up on what Visual C++ extensions already did for COM (e.g. _com_ptr_t), and made the Objective-C compiler perform the retain/release Cocoa messages itself. They coined their reference counting approach ARC, made a whole marketing message of how ARC tops tracing GC, and the world of app devs, clueless about compiler design issues, cheered in unison.

Swift, as the natural evolution from Objective-C, with 1:1 interoperability goals with the Cocoa ecosystem and Objective-C runtime, naturally had to double down on ARC. The alternative with a tracing GC would require an engineering effort similar to how .NET interops with COM:

https://docs.microsoft.com/en-us/dotnet/standard/native-interop/cominterop

So beware when discussing what D should adopt from other languages; usually there is more to the whole story than just RC vs tracing GC.
Nov 19 2021
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 19 November 2021 at 12:22:05 UTC, Paulo Pinto wrote:
 This is why D community should take care when celebrating 
 designs in other languages like Objective-C and Swift's ARC.
Well, an Apple engineer did point out in this forum that one could achieve better performance than Objective-C ARC, because of the historic constraints Objective-C had to deal with.

Anyway, I am in favour of ARC for shared and local GC with actors for non-shared. I think that model fits well with what most D users expect and also with where hardware is heading. It would also set D apart from the other alternatives.
 So beware when discussing what D should adopt from other 
 languages, usually there is more to the whole story than just 
 RC vs tracing GC.
D should not try to become another language. It must differentiate itself by looking at the open positions in the design space. E.g. I am a bit torn by C++ compatibility. Yes, it is good for indie game developers, but can also be very limiting. Very difficult choice to make.
Nov 19 2021
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 20/11/2021 1:34 AM, Ola Fosheim Grøstad wrote:
 Anyway, I am in favour of ARC for shared and local GC with actors for 
 non-shared. I think that model fits well with most D users expect and 
 also where hardware is heading.
I'm more in favor of ARC completely. Tie it into scope, and the compiler can elide the calls. Add an operator overload so that you can get a non-scope reference to memory and it might be a very nice situation for us.
Nov 19 2021
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 19 November 2021 at 12:45:45 UTC, rikki cattermole 
wrote:
 On 20/11/2021 1:34 AM, Ola Fosheim Grøstad wrote:
 Anyway, I am in favour of ARC for shared and local GC with 
 actors for non-shared. I think that model fits well with most 
 D users expect and also where hardware is heading.
I'm more in favor of ARC completely. Tie it into scope, and the compiler can elide the calls. Add an operator overload so that you can get a non-scope reference to memory and it might be a very nice situation for us.
Just beware of relying too much on such optimizations,

"ARC in Swift: Basics and beyond"

"Learn about the basics of object lifetimes and ARC in Swift. Dive deep into what language features make object lifetimes observable, consequences of relying on observed object lifetimes and some safe techniques to fix them."

https://developer.apple.com/videos/play/wwdc2021/10216/
Nov 19 2021
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 19 November 2021 at 12:45:45 UTC, rikki cattermole 
wrote:
 I'm more in favor of ARC completely. Tie it into scope, and the 
 compiler can elide the calls. Add an operator overload so that 
 you can get a non-scope reference to memory and it might be a 
 very nice situation for us.
That is more conventional, and in some sense easier because it is more homogeneous. I think some people will complain about performance, but I am not against it.
Nov 19 2021
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 20/11/2021 2:37 AM, Ola Fosheim Grøstad wrote:
 On Friday, 19 November 2021 at 12:45:45 UTC, rikki cattermole wrote:
 I'm more in favor of ARC completely. Tie it into scope, and the 
 compiler can elide the calls. Add an operator overload so that you can 
 get a non-scope reference to memory and it might be a very nice 
 situation for us.
That is more conventional, and in some sense easier because it is more homogeneous. I think some people will complain about performance, but I am not against it.
For system resources, I think this is the best way forward, because threads actually matter here: you really need to "free" resources where they were allocated, while still allowing references to leak to other threads.

But yeah, a fiber-aware GC that could clean things up as it goes along would be absolutely amazing for stuff like web development.
Nov 19 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 20 November 2021 at 00:55:36 UTC, rikki cattermole 
wrote:
 For system resources, I think this is the best way forward. Due 
 to the fact that threads actually matter here, and you really 
 need to "free" resources where they were allocated. But also 
 allow references to leak to other threads.
Yes. Regardless, ARC requires more compiler restructuring. Right now regular RC + local GC is the easy implementation... But regular RC is not convincing anyone, sigh...
Nov 19 2021
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 20/11/2021 2:21 PM, Ola Fosheim Grøstad wrote:
 On Saturday, 20 November 2021 at 00:55:36 UTC, rikki cattermole wrote:
 For system resources, I think this is the best way forward. Due to the 
 fact that threads actually matter here, and you really need to "free" 
 resources where they were allocated. But also allow references to leak 
 to other threads.
Yes. Regardless, ARC requires more compiler restructuring.
To me ARC is just what we have now with a couple of compiler hooks. So it shouldn't need restructuring for this.
 Right now regular RC + local GC is the easy implementation...
 
 But regular RC is not convincing anyone, sigh...
I use it, but it is expensive, I know this. But it is the only way to make the resources go away guaranteed (unless something messes with the thread state).

However, a lot of the usage of the RC could work with scope and possibly even const. So there are a lot of potentially easy optimizations being missed due to the fact that we don't have methods to call specifically for RC. I.e. given `void someFunc(scope RCData data) { someOtherFunc(data); }`, at no point from there on would RC methods need to be called. But copy constructors, postblit and destructors would need to be called regardless on a struct.

It's a real shame.
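As a rough sketch of why a `scope` parameter would let the RC traffic be skipped, here is a hand-rolled reference-counted struct; `RCData` and its layout are invented for illustration:

```D
import core.stdc.stdlib : free, malloc;

struct RCData
{
    private int* count; // heap-allocated shared reference count

    static RCData create()
    {
        RCData r;
        r.count = cast(int*) malloc(int.sizeof);
        *r.count = 1;
        return r;
    }

    this(ref RCData other) // copy constructor: increment
    {
        count = other.count;
        if (count !is null)
            ++*count;
    }

    ~this() // destructor: decrement, free at zero
    {
        if (count !is null && --*count == 0)
            free(count);
    }
}

// By value: a copy is made, so one increment on entry and one decrement on exit.
void byValue(RCData data) {}

// By scope ref: no copy, hence no RC traffic at all -- the elision a compiler
// could in principle perform automatically for scope parameters.
void byScopeRef(scope ref RCData data) {}

void main()
{
    auto d = RCData.create();
    byValue(d);    // copy constructor and destructor fire
    byScopeRef(d); // nothing fires
}
```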
Nov 19 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 20 November 2021 at 01:59:07 UTC, rikki cattermole 
wrote:
 On 20/11/2021 2:21 PM, Ola Fosheim Grøstad wrote:
 On Saturday, 20 November 2021 at 00:55:36 UTC, rikki 
 cattermole wrote:
 For system resources, I think this is the best way forward. 
 Due to the fact that threads actually matter here, and you 
 really need to "free" resources where they were allocated. 
 But also allow references to leak to other threads.
Yes. Regardless, ARC requires more compiler restructuring.
To me ARC is just what we have now with a couple of compiler hooks. So it shouldn't need restructuring for this.
I had an idea for an LLVM hack, but Apple ARC engineers pointed out issues with it to me, so I consider that to be unworkable. Meaning: I trust their experience. So you have to do it over a suitable high-level IR. I don't think D has that... At least not to my knowledge.
Nov 19 2021
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 20/11/2021 4:27 PM, Ola Fosheim Grøstad wrote:
 I had an idea for a LLVM hack, but Apple ARC engineers pointed out that 
 there were issues to me so I consider that to be unworkable. Meaning: I 
 trust their experience. So you have to do it over a suitable high level 
 IR. I don't think D has that... At least not to my knowledge.
Yeah, the DFA probably does get gnarly beyond certain patterns. And truth be told I think we *do* need to build that IR at some point, because right now, with return ref and all those attributes, it's just garbage that humans have to write rather than letting the compiler figure it out.
Nov 19 2021
parent reply tsbockman <thomas.bockman gmail.com> writes:
On Saturday, 20 November 2021 at 03:38:01 UTC, rikki cattermole 
wrote:
 And truth be told I think we *do* need to build that IR at some 
 point, because right now with the return ref and all those 
 attributes its just garbage that humans have to write that 
 rather than letting the compiler figure it out.
The compiler already does figure most of it out. That's how it can emit compile time error messages for missing or incorrect attributes, or implementation violations of an attribute's guarantees. The attributes exist primarily for two reasons: 1) To verify programmer intent, that the inferred attributes of the code written match the intended attributes. 2) To specify APIs independent of implementation, for `extern` linking, `interface`s and base `class`es with multiple implementations that might imply different attributes, etc. Attribute soup is unavoidable for (2) public APIs in general; there is nowhere else the information *can* come from in many cases except from an explicit specification, regardless of how sophisticated the compiler is. Redundant specification of attributes for (1) verification purposes could be dropped, but I'd rather not since I find the compiler frequently catches mistakes or fuzzy thinking on my part by comparing explicit attributes to inferred.
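A small D illustration of those two roles (my own example, not from the post):

```D
// (1) Implementation: templates get their attributes inferred.
auto product(T)(T a, T b) { return a * b; } // for int: inferred pure nothrow @nogc @safe

// (2) Specification: a public, non-template API spells its attributes out,
// and the compiler verifies the body against them.
int productOfInts(int a, int b) pure nothrow @nogc @safe
{
    return a * b;
    // auto p = new int; // would be rejected here: allocation violates @nogc
}
```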
Nov 19 2021
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 20/11/2021 6:13 PM, tsbockman wrote:
 2) To specify APIs independent of implementation, for `extern` linking, 
 `interface`s and base `class`es with multiple implementations that might 
 imply different attributes, etc.
Indeed, there is no way around that. The compiler can emit them for .di files though.
 Attribute soup is unavoidable for (2) public APIs in general; there is 
 nowhere else the information *can* come from in many cases except from 
 an explicit specification, regardless of how sophisticated the compiler is.
 
 Redundant specification of attributes for (1) verification purposes 
 could be dropped, but I'd rather not since I find the compiler 
 frequently catches mistakes or fuzzy thinking on my part by comparing 
 explicit attributes to inferred.
If it works for you, go for it. But this approach does not make memory safety easy. It makes it harder for the majority of people and that is the problem.
Nov 19 2021
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 20 November 2021 at 05:51:17 UTC, rikki cattermole 
wrote:
 But this approach does not make memory safety easy. It makes it 
 harder for the majority of people and that is the problem.
It is not only about it being harder, it is also about not having clutter. When people can choose between system development language 1 and 2... clutter matters.
Nov 20 2021
prev sibling parent reply IGotD- <nise nise.com> writes:
On Friday, 19 November 2021 at 12:22:05 UTC, Paulo Pinto wrote:
 Swift as natural evolution from Objective-C, with 1:1 
 interoperability goals with the Cocoa ecosystem and Objective-C 
 runtime, naturally had double down on ARC.

 The alternative with a tracing GC would require an engineering 
 effort similar to how .NET interops with COM,

 https://docs.microsoft.com/en-us/dotnet/standard/native-interop/cominterop

 So beware when discussing what D should adopt from other 
 languages, usually there is more to the whole story than just 
 RC vs tracing GC.
I think it is beside the point which is best, tracing GC or reference-counted GC or whatever. The truth is that both have their advantages and disadvantages. The problem for D is that the language is limited to a tracing GC, and I think that is a mistake. The goal should be to create a language that can potentially support any GC type, preferably with as little change as possible when switching between them. This might also include support for compacting the heap. It's challenging to create a language that supports all these different types, but I think D should have gone a step higher in abstraction in order to be more versatile. It is also clear that memory management is still evolving, and maybe in the future someone will come up with an even better method that is interesting for D to adopt. That's why the language design should take that into consideration.
Nov 19 2021
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 19 November 2021 at 15:20:06 UTC, IGotD- wrote:
 On Friday, 19 November 2021 at 12:22:05 UTC, Paulo Pinto wrote:
 Swift as natural evolution from Objective-C, with 1:1 
 interoperability goals with the Cocoa ecosystem and 
 Objective-C runtime, naturally had double down on ARC.

 The alternative with a tracing GC would require an engineering 
 effort similar to how .NET interops with COM,

 https://docs.microsoft.com/en-us/dotnet/standard/native-interop/cominterop

 So beware when discussing what D should adopt from other 
 languages, usually there is more to the whole story than just 
 RC vs tracing GC.
I think it is beside the point which is best, tracing GC or reference-counted GC or whatever. The truth is that both have their advantages and disadvantages. The problem for D is that the language is limited to a tracing GC, and I think that is a mistake. The goal should be to create a language that can potentially support any GC type, preferably with as little change as possible when switching between them. This might also include support for compacting the heap. It's challenging to create a language that supports all these different types, but I think D should have gone a step higher in abstraction in order to be more versatile. It is also clear that memory management is still evolving, and maybe in the future someone will come up with an even better method that is interesting for D to adopt. That's why the language design should take that into consideration.
That would be affine/linear types; the best ergonomics for them is to combine a tracing GC with such types. Instead of having them all around the program, only make use of them when performance requirements demand it. This is the approach being taken by Swift, Haskell, OCaml, and even D with live functions. It would be interesting to see live + GC settle all discussions on D's approach to memory management, but somehow I feel it won't happen.
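As a rough sketch of what that combination can already look like in D (the `@live` ownership/borrowing checks are experimental, the exact diagnostics are not guaranteed, and the function names below are mine):

```D
import core.stdc.stdlib : free, malloc;

// A manually managed corner of the program, opted into ownership tracking.
@live @system void manualPart()
{
    int* p = cast(int*) malloc(int.sizeof); // p owns the allocation
    *p = 42;
    free(p); // ownership ends here; this is the lifetime the checker tracks
}

// The rest of the program keeps using the tracing GC as before.
void gcPart()
{
    auto numbers = new int[](8);
    numbers[0] = 1;
}
```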
Nov 19 2021
parent reply Tejas <notrealemail gmail.com> writes:
On Friday, 19 November 2021 at 15:37:04 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 15:20:06 UTC, IGotD- wrote:
 [...]
That would be affine/linear types; the best ergonomics for them is to combine a tracing GC with such types. Instead of having them all around the program, only make use of them when performance requirements demand it. This is the approach being taken by Swift, Haskell, OCaml, and even D with live functions. It would be interesting to see live + GC settle all discussions on D's approach to memory management, but somehow I feel it won't happen.
What do you think about Nim's ARC + GC solution? They call it ORC : https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html
Nov 19 2021
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 19 November 2021 at 15:46:15 UTC, Tejas wrote:
 On Friday, 19 November 2021 at 15:37:04 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 15:20:06 UTC, IGotD- wrote:
 [...]
That would be affine/linear types; the best ergonomics for them is to combine a tracing GC with such types. Instead of having them all around the program, only make use of them when performance requirements demand it. This is the approach being taken by Swift, Haskell, OCaml, and even D with live functions. It would be interesting to see live + GC settle all discussions on D's approach to memory management, but somehow I feel it won't happen.
What do you think about Nim's ARC + GC solution? They call it ORC : https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html
It is an old idea that goes back to systems like Mesa/Cedar in the early 1980's. https://archive.org/details/bitsavers_xeroxparctddingGarbageCollectionandRuntimeTypestoa_1765837 Used to create this workstation OS at Xerox PARC, https://m.youtube.com/watch?v=z_dt7NG38V4
Nov 19 2021
parent reply Araq <rumpf_a web.de> writes:
On Friday, 19 November 2021 at 17:41:23 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 15:46:15 UTC, Tejas wrote:
 On Friday, 19 November 2021 at 15:37:04 UTC, Paulo Pinto wrote:
 [...]
What do you think about Nim's ARC + GC solution? They call it ORC : https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html
It is an old idea that goes back to systems like Mesa/Cedar in the early 1980's. https://archive.org/details/bitsavers_xeroxparctddingGarbageCollectionandRuntimeTypestoa_1765837 Used to create this workstation OS at Xerox PARC, https://m.youtube.com/watch?v=z_dt7NG38V4
Mesa/Cedar used deferred reference counting plus a cycle collector, that's Nim's old default GC, ORC is completely different... But hey, what do I know, I only implemented both.
Nov 19 2021
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 19 November 2021 at 19:02:45 UTC, Araq wrote:
 On Friday, 19 November 2021 at 17:41:23 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 15:46:15 UTC, Tejas wrote:
 On Friday, 19 November 2021 at 15:37:04 UTC, Paulo Pinto 
 wrote:
 [...]
What do you think about Nim's ARC + GC solution? They call it ORC : https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html
It is an old idea that goes back to systems like Mesa/Cedar in the early 1980's. https://archive.org/details/bitsavers_xeroxparctddingGarbageCollectionandRuntimeTypestoa_1765837 Used to create this workstation OS at Xerox PARC, https://m.youtube.com/watch?v=z_dt7NG38V4
Mesa/Cedar used deferred reference counting plus a cycle collector, that's Nim's old default GC, ORC is completely different... But hey, what do I know, I only implemented both.
Different in what way, given the optimizations referred to in the paper and the plans for future work, which unfortunately never materialised given the team's move to Olivetti, where they eventually created Modula-2+ and Modula-3?
Nov 19 2021
parent reply Araq <rumpf_a web.de> writes:
On Friday, 19 November 2021 at 19:41:59 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 19:02:45 UTC, Araq wrote:
 On Friday, 19 November 2021 at 17:41:23 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 15:46:15 UTC, Tejas wrote:
 On Friday, 19 November 2021 at 15:37:04 UTC, Paulo Pinto 
 wrote:
 [...]
What do you think about Nim's ARC + GC solution? They call it ORC : https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html
It is an old idea that goes back to systems like Mesa/Cedar in the early 1980's. https://archive.org/details/bitsavers_xeroxparctddingGarbageCollectionandRuntimeTypestoa_1765837 Used to create this workstation OS at Xerox PARC, https://m.youtube.com/watch?v=z_dt7NG38V4
Mesa/Cedar used deferred reference counting plus a cycle collector, that's Nim's old default GC, ORC is completely different... But hey, what do I know, I only implemented both.
Different in what way, given the optimizations referred in the paper and plans for future work, which unfortunately never realised given the team's move into Olivetti, where they eventually created Modula-2+ and Modula-3.
ORC is precise, it doesn't do conservative stack marking, ORC's cycle detector uses "trial deletion", not "mark and sweep", ORC removes cycle candidates in O(1) which means it can exploit acyclic structures at runtime better than previous algorithms, ORC has a heuristic for "bulk cycle detection"...
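For readers who haven't met the term, here is a compressed sketch of the "trial deletion" idea (after Bacon and Rajan, which ORC builds on). This is an illustration only, not Nim's actual implementation; `Node`, `rc` and `children` are stand-ins for whatever the runtime really uses:

```D
class Node
{
    enum Color { black, gray, white }
    int rc;              // reference count
    Node[] children;     // outgoing references
    Color color = Color.black;
}

// Phase 1: pretend to delete the candidate subgraph's internal references.
void markGray(Node n)
{
    if (n.color == Node.Color.gray) return;
    n.color = Node.Color.gray;
    foreach (c; n.children)
    {
        c.rc -= 1;       // trial deletion of the edge n -> c
        markGray(c);
    }
}

// Phase 2: anything still externally referenced gets restored.
void scan(Node n)
{
    if (n.color != Node.Color.gray) return;
    if (n.rc > 0) { scanBlack(n); return; }
    n.color = Node.Color.white;          // provisionally garbage
    foreach (c; n.children) scan(c);
}

void scanBlack(Node n)
{
    n.color = Node.Color.black;
    foreach (c; n.children)
    {
        c.rc += 1;                       // undo the trial deletion
        if (c.color != Node.Color.black) scanBlack(c);
    }
}

// Phase 3: everything still white is an unreachable cycle; collect it.
void collectWhite(Node n, ref Node[] garbage)
{
    if (n.color != Node.Color.white) return;
    n.color = Node.Color.black;          // avoid revisiting
    foreach (c; n.children) collectWhite(c, garbage);
    garbage ~= n;
}
```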
Nov 22 2021
next sibling parent reply Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Monday, 22 November 2021 at 10:16:28 UTC, Araq wrote:
 On Friday, 19 November 2021 at 19:41:59 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 19:02:45 UTC, Araq wrote:
 [...]
Different in what way, given the optimizations referred in the paper and plans for future work, which unfortunately never realised given the team's move into Olivetti, where they eventually created Modula-2+ and Modula-3.
ORC is precise, it doesn't do conservative stack marking, ORC's cycle detector uses "trial deletion", not "mark and sweep", ORC removes cycle candidates in O(1) which means it can exploit acyclic structures at runtime better than previous algorithms, ORC has a heuristic for "bulk cycle detection"...
ORC seems like a pretty nice solution
Nov 22 2021
parent reply IGotD- <nise nise.com> writes:
On Monday, 22 November 2021 at 12:33:46 UTC, Imperatorn wrote:
 ORC seems like a pretty nice solution
Yes, since the cycle detection is automatic there is no necessity to badge references as 'weak' in order to avoid cyclic references. So it is a compromise between reference counting and tracing GC. Nice really since automatic memory management should really be automatic.
Nov 22 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Nov 22, 2021 at 02:27:00PM +0000, IGotD- via Digitalmars-d wrote:
 On Monday, 22 November 2021 at 12:33:46 UTC, Imperatorn wrote:
 
 ORC seems like a pretty nice solution
Yes, since the cycle detection is automatic there is no necessity to badge references as 'weak' in order to avoid cyclic references. So it is a compromise between reference counting and tracing GC. Nice really since automatic memory management should really be automatic.
I skimmed through the paper yesterday. Very interesting indeed! The nicest thing about it is that it's completely transparent: application code doesn't have to know that ORC is being used (rather than, e.g., tracing GC). As long as the refcount is updated correctly (under the hood by the language), the rest just takes care of itself. As far as the user is concerned, it might as well be a tracing GC instead. This is good news, because it means that if we hypothetically implement such a scheme in D, you could literally just flip a compiler switch to switch between tracing GC and ORC, and pretty much the code would Just Work(tm). (Unfortunately, a runtime switch isn't possible because this is still a ref-counting system, so pointer updates will have to be done differently when using ORC.) T -- GEEK = Gatherer of Extremely Enlightening Knowledge
Nov 23 2021
parent reply Araq <rumpf_a web.de> writes:
On Tuesday, 23 November 2021 at 17:55:30 UTC, H. S. Teoh wrote:
 On Mon, Nov 22, 2021 at 02:27:00PM +0000, IGotD- via 
 Digitalmars-d wrote:
 On Monday, 22 November 2021 at 12:33:46 UTC, Imperatorn wrote:
 
 ORC seems like a pretty nice solution
Yes, since the cycle detection is automatic there is no necessity to badge references as 'weak' in order to avoid cyclic references. So it is a compromise between reference counting and tracing GC. Nice really since automatic memory management should really be automatic.
I skimmed through the paper yesterday. Very interesting indeed! The nicest thing about it is that it's completely transparent: application code doesn't have to know that ORC is being used (rather than, e.g., tracing GC). As long as the refcount is updated correctly (under the hood by the language), the rest just takes care of itself. As far as the user is concerned, it might as well be a tracing GC instead. This is good news, because it means that if we hypothetically implement such a scheme in D, you could literally just flip a compiler switch to switch between tracing GC and ORC, and pretty much the code would Just Work(tm). (Unfortunately, a runtime switch isn't possible because this is still a ref-counting system, so pointer updates will have to be done differently when using ORC.) T
As long as D doesn't distinguish GC'ed pointers from non-GC'ed pointers and allows for unprincipled unions I fail to see how it's "good news". Multi-threading is also a problem, in Nim we can track global variables and pass "isolated" subgraphs between threads so that the RC ops do not have to be atomic. Copying ORC over to D is quite some work and in the end you might have a D that is just a Nim with braces. Well ... you would still have plenty of D specific quirks left I guess.
Nov 23 2021
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 23 November 2021 at 19:22:11 UTC, Araq wrote:
 As long as D doesn't distinguish GC'ed pointers from non-GC'ed 
 pointers and allows for unprincipled unions I fail to see how 
 it's "good news".
The union issue can be fixed by using a selector-function, but it sounds like Nim and its memory management solution are higher level than D. It would probably be a mistake for D to follow there, given the focus on ```importC``` etc.
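One hedged reading of "selector-function" (all names below are hypothetical, not an existing D or Nim API): pair the union with a tag and a function that a precise collector could query to learn whether the overlapping member currently holds a pointer that must be traced.

```D
struct IntOrObject
{
    union
    {
        size_t asInt;
        Object asObject;
    }
    bool holdsObject;

    // Hypothetical hook for a precise collector: report the pointer member
    // only when it is actually the active one.
    inout(Object) activePointer() inout
    {
        return holdsObject ? asObject : null;
    }

    void set(size_t v) { asInt = v;    holdsObject = false; }
    void set(Object o) { asObject = o; holdsObject = true;  }
}
```

A collector could then trace `asObject` only when `holdsObject` is set, instead of having to treat the overlapping bits as a potential pointer.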
Nov 23 2021
parent reply IGotD- <nise nise.com> writes:
On Tuesday, 23 November 2021 at 21:14:39 UTC, Ola Fosheim Grøstad 
wrote:
 It would probably be a mistake for D to follow there given the 
 focus on ```importC``` etc.
Not sure if I'm interpreting your answer correctly, but there is no contradiction between managed pointers and raw pointers when it comes to interoperability. Nim can simply cast raw pointers from managed pointers and pass them to the FFI. A higher level of abstraction when it comes to memory management will not hurt FFI at all. The same cautions with FFIs as we have today must of course be taken.
Nov 23 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 23 November 2021 at 22:21:51 UTC, IGotD- wrote:
 Not sure if I'm interpreting your answer correctly but there is 
 no contradiction between managed pointers and raw pointers when 
 it comes to interoperability.
I don't know enough about Nim, but it would be a mistake to replace one memory management solution with another one if it: 1. still does not satisfy people who want something slightly higher level than C++, but low level enough to create competitive game engines; 2. requires semantic changes that make D more like Nim; 3. increases the complexity of the compiler unnecessarily. I would strongly favour simple schemes. So I'd rather see actor-local GC + ARC (without cycle detection). Complex schemes tend to go haywire when people go hard in on low level hand-optimization. Programmers need to understand what goes on. As can be seen in the forums, many have a hard time understanding how the current GC works (despite it being quite simplistic). I can only imagine how many will fail to understand ORC…
Nov 23 2021
parent reply Araq <rumpf_a web.de> writes:
On Tuesday, 23 November 2021 at 22:40:20 UTC, Ola Fosheim Grøstad 
wrote:
 I would strongly favour simple schemes. So I'd rather see 
 actor-local GC + ARC (without cycle detection).
That's only simple because it's just a vague idea in your head. See, for example, https://www.ponylang.io/media/papers/orca_gc_and_type_system_co-design_for_actor_languages.pdf for a real implementation...
Nov 24 2021
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 24 November 2021 at 11:19:09 UTC, Araq wrote:
 On Tuesday, 23 November 2021 at 22:40:20 UTC, Ola Fosheim 
 Grøstad wrote:
 I would strongly favour simple schemes. So I'd rather see 
 actor-local GC + ARC (without cycle detection).
That's only simple because it's just a vague idea in your head. See, for example, https://www.ponylang.io/media/papers/orca_gc_and_type_system_co-design_for_actor_languages.pdf for a real implementation...
You are very presumptuous. Pony is a high level language.
Nov 24 2021
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Nov 23, 2021 at 07:22:11PM +0000, Araq via Digitalmars-d wrote:
[...]
 As long as D doesn't distinguish GC'ed pointers from non-GC'ed
 pointers and allows for unprincipled unions I fail to see how it's
 "good news".
Hmm you're right, unprincipled unions throw a monkey wrench into the works. :-/ I don't see GC'ed vs. non-GC'ed pointers as a problem; the collector could tell them apart from their values (whether they fall into the range of GC-managed heap), just like is done with today's D's GC.
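For what it's worth, today's druntime already exposes that classification; a minimal check using the documented `core.memory.GC.addrOf`:

```D
import core.memory : GC;
import core.stdc.stdlib : free, malloc;

void main()
{
    auto gcPtr = new int;                       // GC heap
    auto cPtr  = cast(int*) malloc(int.sizeof); // C heap
    scope(exit) free(cPtr);

    assert(GC.addrOf(gcPtr) !is null); // falls inside the GC-managed heap
    assert(GC.addrOf(cPtr)  is null);  // not part of the GC heap
}
```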
 Multi-threading is also a problem, in Nim we can track global
 variables and pass "isolated" subgraphs between threads so that the RC
 ops do not have to be atomic.
[...] Having thread-local heaps would solve this, except for immutable which is implicitly shared. Well, that, and the mess with `shared` and passing stuff between threads... Hmm. I wonder if this could be addressed by repurposing `shared` to qualify data that could have references from multiple threads. This would make it a LOT more useful than it is now, and would let us do ORC-like memory management with tracing of non-shared data without locks. Shared stuff would have to be handled differently, of course. T -- English is useful because it is a mess. Since English is a mess, it maps well onto the problem space, which is also a mess, which we call reality. Similarly, Perl was designed to be a mess, though in the nicest of all possible ways. -- Larry Wall
Nov 24 2021
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Monday, 22 November 2021 at 10:16:28 UTC, Araq wrote:
 On Friday, 19 November 2021 at 19:41:59 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 19:02:45 UTC, Araq wrote:
 [...]
Different in what way, given the optimizations referred in the paper and plans for future work, which unfortunately never realised given the team's move into Olivetti, where they eventually created Modula-2+ and Modula-3.
ORC is precise, it doesn't do conservative stack marking, ORC's cycle detector uses "trial deletion", not "mark and sweep", ORC removes cycle candidates in O(1) which means it can exploit acyclic structures at runtime better than previous algorithms, ORC has a heuristic for "bulk cycle detection"...
Thanks.
Nov 22 2021
prev sibling parent Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Friday, 19 November 2021 at 19:02:45 UTC, Araq wrote:
 On Friday, 19 November 2021 at 17:41:23 UTC, Paulo Pinto wrote:
 On Friday, 19 November 2021 at 15:46:15 UTC, Tejas wrote:
 [...]
It is an old idea that goes back to systems like Mesa/Cedar in the early 1980's. https://archive.org/details/bitsavers_xeroxparctddingGarbageCollectionandRuntimeTypestoa_1765837 Used to create this workstation OS at Xerox PARC, https://m.youtube.com/watch?v=z_dt7NG38V4
Mesa/Cedar used deferred reference counting plus a cycle collector, that's Nim's old default GC, ORC is completely different... But hey, what do I know, I only implemented both.
Love your work on Nim btw /Secret fan
Nov 19 2021
prev sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Friday, 19 November 2021 at 02:36:31 UTC, forkit wrote:
 On Friday, 19 November 2021 at 00:38:15 UTC, Timon Gehr wrote:
 I think that statement itself is deceptive. C is not all that 
 close to how modern hardware actually operates.
I mean the abstraction of the C memory model... i.e. " ..one or more contiguous sequences of bytes. Each byte in memory has a unique address." This is still an appropriate abstraction, even in modern times.
Funny that x86 real mode contradicts that definition. Each byte had 4096 different possible addresses.
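The 4096 figure follows from the segment:offset arithmetic (physical = segment * 16 + offset); a small sketch (mine, not from the thread) that counts the aliases of one physical address:

```D
import std.stdio;

size_t countAliases(uint physical)
{
    size_t n = 0;
    foreach (seg; 0 .. 0x1_0000)
    {
        long off = cast(long) physical - cast(long) seg * 16;
        if (off >= 0 && off <= 0xFFFF) ++n;
    }
    return n;
}

void main()
{
    writeln(countAliases(0x12345)); // 4096 for a mid-range address
    writeln(countAliases(0x00005)); // low addresses have fewer aliases (here 1)
}
```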
 When working at a low-level, this is still, even today, an 
 appropriate and suitable abstraction, on which to build your 
 ideas.

 No language can model 'actual hardware', and even if it could, 
 the human brain could never use such a language.
Nov 19 2021
parent forkit <forkit gmail.com> writes:
On Friday, 19 November 2021 at 08:13:48 UTC, Patrick Schluter 
wrote:
 Funny that x86 real mode contradicts that definition. Each byte 
 had 4096 different possible addresses.
That may be true. But C was designed around the concept of memory being a linear array of cells. And to this day, that remains a sound and relevant abstraction. That array is my playground ;-) And I still think C is the best language when that is the level of abstraction you're working with.
Nov 19 2021
prev sibling next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:


 D -- The best programming language!

 [...]
I *really* want to reply to this in detail but as it turns out my DConf talk does a lot of that already. Which is weird for something I wrote and recorded last week :P I obviously love the compiler daemon idea, but I don't know how that's practically feasible given the codebase we have right now.
Nov 17 2021
parent reply Robert Schadek <rburners gmail.com> writes:
On Wednesday, 17 November 2021 at 21:58:17 UTC, Atila Neves wrote:
 On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
 wrote:


 D -- The best programming language!

 [...]
I *really* want to reply to this in detail but as it turns out my DConf talk does a lot of that already. Which is weird for something I wrote and recorded last week :P
I do not think I said anything new. I believe I have been saying the same things for as long as I have been part of the quarterly industry meetings.
 I obviously love the compiler daemon idea, but I don't know how 
 that's practically feasible given the codebase we have right 
 now.
The dmd codebase will have to compile the new codebase, and serve as a reference for the language. Nothing more. This is a rewrite!
Nov 17 2021
parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 18 November 2021 at 07:50:44 UTC, Robert Schadek 
wrote:
 The dmd codebase will have to compile the new codebase, and 
 serve as a reference for the language.
 Nothing more. This is a rewrite!
Yep. I'd keep the DMD parser. (Just because I think writing parsers is a pain in the butt.) But the rest is going to be an informed rewrite to avoid mistakes of the past. (Which are reasonable since we didn't know what D would become back then.) As I said previously I am already working on a prototype. I am going to announce it as soon as it can compile more than a simple test.
Nov 18 2021
parent Tejas <notrealemail gmail.com> writes:
On Thursday, 18 November 2021 at 09:08:43 UTC, Stefan Koch wrote:
 On Thursday, 18 November 2021 at 07:50:44 UTC, Robert Schadek 
 wrote:
 The dmd codebase will have to compile the new codebase, and 
 serve as a reference for the language.
 Nothing more. This is a rewrite!
Yep. I'd keep the DMD parser. (Just because I think writing parsers is a pain in the butt.) But the rest is going to be an informed rewrite to avoid mistakes of the past. (Which are reasonable since we didn't know what D would become back then.) As I said previously I am already working on a prototype. I am going to announce it as soon as it can compile more than a simple test.
Very excited to see that prototype :D Just curious, how do you find the time for this? You're working on `newCTFE`, `core.reflect`/`codegen`, a `JIT`, taskifying `dmd`, and now this as well? What's your secret to such ultra-productive programming output?
Nov 18 2021
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/16/2021 1:00 PM, Robert Schadek wrote:
 GitHub has >10^7 accounts, D's bugzilla has what, 10^3?
 No matter what feature github is missing there is no reason to not migrate to
 github.
Sebastian wanted to do this, and we already gave him the go-ahead on it a couple years ago.
Nov 19 2021
next sibling parent reply Dukc <ajieskola gmail.com> writes:
On Saturday, 20 November 2021 at 07:07:55 UTC, Walter Bright 
wrote:
 On 11/16/2021 1:00 PM, Robert Schadek wrote:
 GitHub has >10^7 accounts, D's bugzilla has what, 10^3?
 No matter what feature github is missing there is no reason to 
 not migrate to
 github.
Sebastian wanted to do this, and we already gave him the go-ahead on it a couple years ago.
You mean Wilzbach, not Koppe? Yes, I realize Sebastiaan Koppe is spelled with two a's, but it's so easy to mix those up that it's hard to be sure without the surname.
Nov 20 2021
parent Sebastiaan Koppe <mail skoppe.eu> writes:
On Saturday, 20 November 2021 at 15:32:08 UTC, Dukc wrote:
 On Saturday, 20 November 2021 at 07:07:55 UTC, Walter Bright 
 wrote:
 Sebastian wanted to do this, and we already gave him the 
 go-ahead on it a couple years ago.
You mean Wilzbach, not Koppe? Yes I realize Sebastiaan Koppe is two a's in the name but it's so easy to mess those up that it's hard to be sure without the surname.
Ha, even for us :)
Nov 20 2021
prev sibling parent reply Robert Schadek <rburners gmail.com> writes:
On Saturday, 20 November 2021 at 07:07:55 UTC, Walter Bright 
wrote:
 Sebastian wanted to do this, and we already gave him the 
 go-ahead on it a couple years ago.
I saw that, but the discussion degenerated (https://github.com/dlang/projects/issues/43) and then nobody took charge.
Nov 20 2021
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/20/2021 7:34 AM, Robert Schadek wrote:
 On Saturday, 20 November 2021 at 07:07:55 UTC, Walter Bright wrote:
 Sebastian wanted to do this, and we already gave him the go-ahead on it a 
 couple years ago.
I saw that and the discussion degenerated https://github.com/dlang/projects/issues/43 and then nobody took charge.
You're right, it needs someone to take charge and keep hammering on it.
Nov 20 2021
parent Robert Schadek <rburners gmail.com> writes:
I'm looking into it right now.

The migration tool is not that great IMO, and this is a nice side 
project during dconf.

But to reiterate my point, I don't think this is an argument 
about the features of the tool but a social/visibility/ease of 
entry issue.

If I have something I'll post.
Nov 20 2021
prev sibling next sibling parent reply Gleb <gleb.tsk gmail.com> writes:
Gentlemen, good afternoon.
Let me make a couple of comments.
I think D is an excellent language for rapid development and I am 
trying to
popularize it here in this vein.

This year I (suddenly) resumed my education
(a second degree) and suddenly found myself as an old, experienced dude
in a young student environment.

Tasks to be solved include programming. D fits almost
all occasions ideally.
It would seem that we are going "forward and upward". But...
there are several
stoppers at once.

First of all, the GUI, and the huge difficulty (for a student ---
not a professional programmer, but only a programming user) of
drawing the GUI
elements.
A data plotter window in D is an almost insoluble problem for a
student.

There are no full-fledged signals and slots with the ability to
exchange
data between threads (in the style of Qt), so whole familiar
sections
immediately drop out.

In general, the absence of bindings to Qt and the
laboriousness of creating bindings to ordinary libraries is 
already a huge
stopper.

Async? Oops.

Finally, there is the situation that most of the packages in the
dub registry are simply not compilable with the current versions of
the compilers
(at least not without shamanic dances around "deprecated", which is
inaccessible to the
average student).

IMHO, I would like attention to be drawn to this first; then
the spread of the language could follow the same path as Python
in the recent
past: from students to experienced people.

Automatically and successfully.
Nov 24 2021
next sibling parent reply zjh <fqbqrr 163.com> writes:
On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:

 In general, the absence of bindings to Qt and the
 laboriousness of creating bindings to ordinary libraries is 
 already a huge stopper.
QtE5.
Nov 24 2021
parent Gleb <gleb.tsk gmail.com> writes:
On Wednesday, 24 November 2021 at 10:29:54 UTC, zjh wrote:
 On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:

 In general, the absence of bindings to Qt and the
 laboriousness of creating bindings to ordinary libraries is 
 already a huge stopper.
QtE5.
Yes. And nope. It's very incomplete. Unfortunately. It's impossible to use basic popular widgets, like QCustomPlot, for example.
Nov 24 2021
prev sibling parent reply IGotD- <nise nise.com> writes:
On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:
 There are no full-fledged signals --- slots with the ability to 
 exchange
 data between threads (in the style of Qt) => whole familiar 
 sections
 immediately drop out.
D has a native message-passing system between threads. Qt signals are a special case, as they can also be used within the same thread, in which case a signal is just a function call. Also, there is syntax sugar for declaring Qt signals in C++. I don't know of any language that natively implements signals the way Qt does.
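For reference, a minimal example of that native message passing, using `std.concurrency` from Phobos:

```D
import std.concurrency;
import std.stdio;

void worker()
{
    // Block until a (string, Tid) message arrives, then answer the sender.
    receive((string msg, Tid sender) {
        sender.send("worker got: " ~ msg);
    });
}

void main()
{
    auto tid = spawn(&worker);
    tid.send("hello", thisTid);    // message passing between threads
    writeln(receiveOnly!string()); // prints: worker got: hello
}
```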
Nov 24 2021
next sibling parent reply Gleb <gleb.tsk gmail.com> writes:
On Wednesday, 24 November 2021 at 10:40:39 UTC, IGotD- wrote:
 On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:
 There are no full-fledged signals --- slots with the ability 
 to exchange
 data between threads (in the style of Qt) => whole familiar 
 sections
 immediately drop out.
D has a native message system between threads. Qt signals is a special case as it also can be used in the same thread and then it is just a function call. Also there are syntax sugar for declaring Qt signals in C++. I don't know any language that natively implements signals as Qt does.
Yes. But see: there is no standard (or well-known) library that provides functionality like Qt does (the non-GUI part). So designs that map naturally onto the signal/slot paradigm cannot be expressed simply in D. Sadly...
Nov 24 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 24 November 2021 at 11:36:21 UTC, Gleb wrote:
 functionality like Qt provide (non-GUI part). So methods that 
 good lies to sig-slot paradigm can not be simply expressed in 
 D. Sadly...
Not sure what you mean, but I don't think "slot-signals" are in high demand. I've certainly never felt a need to use: https://dlang.org/phobos/std_signals.html But if you want to improve on it, you probably could. So it is really up to you, if you are interested.
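For completeness, the module linked above does cover the basic single-threaded case; a minimal example using only its documented `connect`/`emit` API (it does not deliver signals across threads, which is the part being asked for here):

```D
import std.signals;
import std.stdio;

class Button
{
    mixin Signal!(string) clicked; // a named signal carrying one string argument
}

class Logger
{
    void onClicked(string label) { writeln("clicked: ", label); }
}

void main()
{
    auto button = new Button;
    auto logger = new Logger;
    button.clicked.connect(&logger.onClicked); // wire slot to signal
    button.clicked.emit("OK");                 // prints: clicked: OK
}
```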
Nov 24 2021
parent Gleb Kulikov <gleb.tsk gmail.com> writes:
On Wednesday, 24 November 2021 at 14:55:39 UTC, Ola Fosheim 
Grøstad wrote:

 functionality like Qt provide (non-GUI part). So methods that 
 good lies to sig-slot paradigm can not be simply expressed in 
 D. Sadly...
 But if you want to improve on it, you probably could. So it is 
 really up to you, if you are interested.
Well, yes... no, I can't. The message loop is a basic feature and should be designed by the founders of the library. Isn't it so? It seems to me that now is the time to postpone new language changes a little and focus on the applied side. And, perhaps, on compatibility.
Nov 25 2021
prev sibling next sibling parent Gleb <gleb.tsk gmail.com> writes:
On Wednesday, 24 November 2021 at 10:40:39 UTC, IGotD- wrote:
 On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:
 There are no full-fledged signals --- slots with the ability 
 to exchange
 data between threads (in the style of Qt) => whole familiar 
 sections
 immediately drop out.
D has a native message system between threads. Qt signals is a special case as it also can be used in the same thread and then it is just a function call. Also there are syntax sugar for declaring Qt signals in C++. I don't know any language that natively implements signals as Qt does.
And yes, it is absolutely certain that Phobos should include a message loop, preferably in the form of signals and slots.
Nov 24 2021
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Wednesday, 24 November 2021 at 10:40:39 UTC, IGotD- wrote:
 On Wednesday, 24 November 2021 at 10:23:22 UTC, Gleb wrote:
 There are no full-fledged signals --- slots with the ability 
 to exchange
 data between threads (in the style of Qt) => whole familiar 
 sections
 immediately drop out.
D has a native message system between threads. Qt signals is a special case as it also can be used in the same thread and then it is just a function call. Also there are syntax sugar for declaring Qt signals in C++. I don't know any language that natively implements signals as Qt does.
Delphi and .NET events.
Nov 24 2021
prev sibling next sibling parent Vinod K Chandran <kcvinu82 gmail.com> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:


 The casual D user, when he finds a bug, will never report it we 
 he has to
 create a special account on our bugzilla.
That's true. Last year I found a bug in a process-related function. Actually, I didn't know it was a bug. Mr. M Parker happened to see my code and said it was a bug. Then I asked him what to do next. He said: report it. I said "Okay" and then I forgot about it.
Nov 25 2021
prev sibling parent Robert Schadek <rburners gmail.com> writes:
On Tuesday, 16 November 2021 at 21:00:48 UTC, Robert Schadek 
wrote:

 This can be fixed quite easily as well:

 ```D
 private auto someFunIR(R)(R r) { ... }

 private auto someFunRAR(R)(R r) { ...  }

 auto somFun(R)(R r) {
 	static if(isInputRange!R) {
 		someFunIR(r);
 	} else static if(isRandomAccessRange!R) {
 		someFunRAR(r);
 	} else {
 		static assert(false, "R should either be an "
 				~ "InputRange but " ~ inputRangeErrorFormatter!R
 				~ "\n or R should be a RandomAccessRange but "
 				~ randomAccessRangeErrorFormatter!R
 				~ "\n therefore you can not call " ~ __FUNCTION__);
 	}
 }
 ```
Actually done and announced here https://forum.dlang.org/thread/clyiounnxlnupinbafpy forum.dlang.org
Jan 05 2022