digitalmars.D - Inline assembler for AArch64 code generator
- Walter Bright (4/4) Nov 24 Pete Williamson was interested in a D project, so I suggested writing an...
- jmh530 (5/11) Nov 25 AIs are getting better and better at programming. They still make
- Sergey (7/11) Nov 25 Full rewrite of DMD with
- jmh530 (10/21) Nov 25 Haha, to one shot it? Definitely not yet. Limiting factor is
- Walter Bright (2/9) Nov 26 Well-defined components of it, maybe!
- matheus (12/20) Nov 25 Yes and using data for training without user consent.
- jmh530 (11/20) Nov 25 Well if your code has a license that anyone can do whatever they
- matheus (7/16) Nov 25 Well I wasn't talking about this kind of code or data. I'm
- Walter Bright (3/4) Nov 26 Pete told me he used the DMD instruction generator instr.d as training d...
- jmh530 (11/15) Nov 26 At least with ChatGPT, you can make your own custom GPT (a RAG)
- Walter Bright (3/6) Nov 26 The generated code is straightforward and looks good, but it could be sh...
Pete Williamson was interested in a D project, so I suggested writing an inline assembler for the DMD AArch64 code generator project. This evening at the D Coffee Haus he showed his work so far. He used AI to generate the code! He's going to polish it up and submit it as a PR.
Nov 24
On Tuesday, 25 November 2025 at 07:45:01 UTC, Walter Bright wrote:
> Pete Williamson was interested in a D project, so I suggested writing an inline assembler for the DMD AArch64 code generator project. This evening at the D Coffee Haus he showed his work so far. He used AI to generate the code! He's going to polish it up and submit it as a PR.

AIs are getting better and better at programming. They still make mistakes and everything they write should be checked and tested thoroughly, but they’re much better than they were even two years ago.
Nov 25
On Tuesday, 25 November 2025 at 12:49:55 UTC, jmh530 wrote:
> AIs are getting better and better at programming. They still make mistakes and everything they write should be checked and tested thoroughly, but they’re much better than they were even two years ago.

Full rewrite of DMD with
- Opus 4.5/GPT5/Gemini3 Pro/Grok 5 team
- from scratch in pure D (without C)
- without the old single-data-structure design
- portable and performant

When? :)
Nov 25
On Tuesday, 25 November 2025 at 13:00:59 UTC, Sergey wrote:
> On Tuesday, 25 November 2025 at 12:49:55 UTC, jmh530 wrote:
>> AIs are getting better and better at programming. They still make mistakes and everything they write should be checked and tested thoroughly, but they’re much better than they were even two years ago.
>
> Full rewrite of DMD with
> - Opus 4.5/GPT5/Gemini3 Pro/Grok 5 team
> - from scratch in pure D (without C)
> - without the old single-data-structure design
> - portable and performant
>
> When? :)

Haha, to one-shot it? Definitely not yet. The limiting factor right now is probably the size of the context window. If we assume Moore's-law-style growth in context limits [1], then something like this is probably a few years off (just guessing, without knowing how much context you would actually need). Experiments should probably start with attempting to modernize smaller pieces of the code base.

[1] https://towardsdatascience.com/towards-infinite-llm-context-windows-e099225abaaf/
Nov 25
On 11/25/2025 5:00 AM, Sergey wrote:
> Full rewrite of DMD with
> - Opus 4.5/GPT5/Gemini3 Pro/Grok 5 team
> - from scratch in pure D (without C)
> - without old design of single data structure
> - portable and performant
>
> When? :)

Well-defined components of it, maybe!
Nov 26
On Tuesday, 25 November 2025 at 12:49:55 UTC, jmh530 wrote:
> AIs are getting better and better at programming. They still make mistakes and everything they write should be checked and tested thoroughly, but they’re much better than they were even two years ago.

Yes, and they use data for training without user consent. We will turn from thinkers into reviewers, and maybe not even that in the future.

By the way, I just learned that Coca-Cola, a $47 B company, created their new holiday commercial with AI, and people are pointing out some weird things in it. It will be interesting to watch DConf with AI presenters in the coming years.

PS: I'm retiring from this industry, but I feel for the new generation.

Matheus.
Nov 25
On Tuesday, 25 November 2025 at 16:05:25 UTC, matheus wrote:
> [snip]
> Yes, and they use data for training without user consent.

Well, if your code has a license saying that anyone can do whatever they want with it, then people will do whatever they want with it...

> We will turn from thinkers into reviewers, and maybe not even that in the future.

I think things will change, but I don't have a good sense of whether what you say will come true. Part of what the vibe-coding trend has revealed is that there is a huge pent-up demand for the skills that historically were the province of programmers.

> By the way, I just learned that Coca-Cola, a $47 B company, created their new holiday commercial with AI, and people are pointing out some weird things in it.

I think the internet has universally agreed that it was AI slop.

> It will be interesting to watch DConf with AI presenters in the coming years.

I've done things like ask ChatGPT what D should do to evolve and improve. The answers weren't bad.
Nov 25
On Tuesday, 25 November 2025 at 18:07:30 UTC, jmh530 wrote:
> On Tuesday, 25 November 2025 at 16:05:25 UTC, matheus wrote:
>> [snip]
>> Yes, and they use data for training without user consent.
>
> Well, if your code has a license saying that anyone can do whatever they want with it, then people will do whatever they want with it...
> [...]

Well, I wasn't talking about that kind of code or data. I'm talking about the use of data without consent or knowledge, or of everything stored in the cloud. There are many articles, and even some legal cases being discussed, about this. Anyway, we will have to wait and see how this turns out.

Matheus.
Nov 25
On 11/25/2025 8:05 AM, matheus wrote:
> Yes and using data for training without user consent.

Pete told me he used the DMD instruction generator instr.d as training data for the AI!
Nov 26
On Wednesday, 26 November 2025 at 08:55:36 UTC, Walter Bright wrote:
> On 11/25/2025 8:05 AM, matheus wrote:
>> Yes and using data for training without user consent.
>
> Pete told me he used the DMD instruction generator instr.d as training data for the AI!

At least with ChatGPT, you can make your own custom GPT (a RAG) by feeding in files and instructions. I don’t think this is the same as training or fine-tuning, because the weights don’t change, but it becomes like a knowledge base that the GPT can check first. I built a custom D GPT by feeding in the spec. You could probably take the same approach and feed in the whole of the DMD code base. There are limits on the size and number of files, so you would have to combine multiple files together. I’ve been too busy with other stuff to try.
Nov 26
On 11/25/2025 4:49 AM, jmh530 wrote:
> AIs are getting better and better at programming. They still make mistakes and everything they write should be checked and tested thoroughly, but they’re much better than they were even two years ago.

The generated code is straightforward and looks good, but it could be shrunk considerably by coalescing similar code and using tables. But it's a good start.
Nov 26
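Walter's suggestion above, coalescing similar per-instruction routines into one routine driven by a table, can be sketched in D. The base opcodes below are the standard 64-bit shifted-register encodings (shift amount 0) from the Arm architecture manual; the table layout, names, and small mnemonic set are illustrative assumptions, not code from instr.d or from Pete's work.

```d
import std.stdio;

struct Entry
{
    string mnemonic; // assembler mnemonic
    uint base;       // opcode with the Rd/Rn/Rm fields zeroed
}

// One table row per mnemonic that follows the "Rd, Rn, Rm" pattern
// (64-bit shifted-register forms).
immutable Entry[] table =
[
    Entry("add", 0x8B00_0000),
    Entry("sub", 0xCB00_0000),
    Entry("and", 0x8A00_0000),
    Entry("orr", 0xAA00_0000),
];

// A single encoder shared by every row, instead of one routine per mnemonic.
uint encode(string mnemonic, uint rd, uint rn, uint rm)
{
    foreach (e; table)
        if (e.mnemonic == mnemonic)
            return e.base | (rm << 16) | (rn << 5) | rd;
    assert(0, "unknown mnemonic");
}

void main()
{
    writefln("%08X", encode("add", 0, 1, 2)); // add x0, x1, x2 -> 8B020020
}
```

Adding another instruction of the same shape then costs one table row rather than another copy of the encoder, which is where the shrinkage comes from.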