
D - synchronize issues

reply Kevin Quick <kevin.quick surgient.com> writes:
I would recommend against keeping the "synchronize" keyword.

* Implementing this requires knowledge of the current hardware
  environment (UP vs. SMP).  While a simple test-and-set spinloop
  would presumably never be contended in a UP environment, it is still
  overhead, especially because it may require cache synchronization
  wait cycles.  (A minimal sketch of such a spinloop appears after
  this list.)

* What if the statement is a complex statement?
  - The synchronization requirement may not follow code scoping.  For
    example, how would the following C excerpt be re-coded into equally
    efficient D?

        thread_lock(X);
        while (protected_test) {
            if (protected_test2) {
                thread_unlock(X);
                return;
            }
            if (protected_test3) continue;
            thread_unlock(X);
            do_lots_of_stuff;
            thread_lock(X);
          increment_pvar:
            protected_var += 1;
            thread_unlock(X);
            do_lots_more_stuff;
            read(open_fildesc, buf, 100);
            thread_lock(X);
        }

  - Scope management now has to include synchronization shells, which
    can become very complex, especially with the "goto" statement.
    Not only must the lock be released on a goto out of the complex
    statement, it must also be acquired on a goto into the statement.
    Continuing the above example, what if code later on stated:

        thread_lock(Y);
        if (a_different_protected_test) {
            thread_unlock(Y);
            thread_lock(X);
            goto increment_pvar;
        } else
            thread_unlock(Y);

* For synchronization related to an Object (using the Expression), how
  are multiple disparate or nested synchronizations handled?

        synchronize ( my_obj ) {
            do_part_1;
            func_a(my_obj);
            do_part_2;
            func_a(a_different_obj);
        }

        func_a(an_obj)
        {
            synchronize (an_obj) {
                an_obj++;
            }
        }

  Is the second synchronize allowed?  If so, how much storage do you
  reserve in Object for synchronization state?  I'll want that so that
  when another thread calls "func_a(a_different_obj)" while the
  first is in "do_part_2", I can figure out how the deadlock
  occurred.

* The statement makes assumptions about the underlying OS's scheduling
  model, and may not expose the richness of the scheduling capabilities
  that the OS's scheduler provides.
  - Is this a pre-emptive threads model, so that a thread spinning on a
    lock can be interrupted by the scheduler?
  - If the object is locked, the OS might decide to schedule the thread
    holding the object.

* I think this will interfere with the determinism that a real-time
  application would desire:  The more code protected by a spinlock,
  the greater the range of CPU spin times and the more imprecise the
  real-time determinism becomes.
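
For concreteness, here is a minimal sketch of the kind of test-and-set
spinloop mentioned in the first bullet, written with GCC's __sync builtins
(the names and the sched_yield() fallback are illustrative choices, not
part of any proposal).  Even on a UP machine, where the loop never
actually spins, every acquire and release still pays for the atomic
operation and whatever cache traffic it implies:

        /* Hypothetical test-and-set spinlock; all names are invented. */
        #include <sched.h>

        typedef struct { int locked; } tas_lock;

        static void tas_acquire(tas_lock *l)
        {
            /* Atomically set 'locked' to 1; retry while the previous
               value was already 1 (someone else holds the lock). */
            while (__sync_lock_test_and_set(&l->locked, 1))
                sched_yield();      /* give the holder a chance to run */
        }

        static void tas_release(tas_lock *l)
        {
            __sync_lock_release(&l->locked);    /* store 0 with release
                                                   semantics */
        }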


-- 
________________________________________________________________________
Kevin Quick                  Surgient Networks               Project UDI
kevin.quick surgient.com      Austin,  Texas                      Editor
+1 512 241 4801              www.surgient.com         www.projectudi.org
Aug 20 2001
parent reply "Walter" <walter digitalmars.com> writes:
Synchronized statements are not a new idea, they work well in other
languages. They're basically just syntactic sugar around a try-finally
construct, where the first statement in the try acquires the mutex and the
finally releases it. Any operating system that supports multithreaded
programming should be able to provide a suitable primitive mutex
object. -Walter
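
For concreteness, here is roughly what that lowering amounts to when
written out by hand in C against pthreads (C has no try-finally, so the
unlock has to be repeated on every exit path, which is exactly the
bookkeeping a synchronize statement would hide); all names are invented:

        #include <pthread.h>

        static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        static int shared_counter;

        /* Hand-written equivalent of a hypothetical
         *     synchronize { if (shared_counter > 10) return; shared_counter++; }
         */
        void bump(void)
        {
            pthread_mutex_lock(&m);        /* the "first statement in the try" */
            if (shared_counter > 10) {
                pthread_mutex_unlock(&m);  /* release on the early exit path */
                return;
            }
            shared_counter++;
            pthread_mutex_unlock(&m);      /* release on the normal path: the
                                              role the "finally" plays */
        }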


Kevin Quick wrote in message ...
I would recommend against keeping the "synchronize" keyword.

Aug 20 2001
parent reply Kevin Quick <kevin.quick surgient.com> writes:
"Walter" <walter digitalmars.com> writes:

 Synchronized statements are not a new idea, they work well in other
 languages. They're basically just syntactic sugar around a try-finally
 construct, where the first statement in the try acquires the mutex and the
 finally releases it. Any operating system that supports multithreaded
 programming should be able to provide a suitable primitive mutex
 object. -Walter
By your last sentence you appear to be tying D to a specific operating
system implementation, since D will have to generate mutex calls specific
to the operating system.  The C language is pure and can be implemented
anywhere.  It sounds like D is restricted to:

  * A specific operating system for which it generates the right mutex
    calls?
  * The subset of operating systems that are thread-based.

Defining D as a new language is a noble effort... keep the language pure
and portable; don't target x86, Linux, *nix, or any other specific
environment.

Your first sentence isn't much of an answer either... are those languages
intended to operate at the same level as C and D, or are they higher-level
languages with less focus on implementation?  One of the attractions C has
held for so long is that it provides a (usually) reasonable balance
between higher-order programmatic abstractions and lower-level
implementation control; D should have the same goal if it seeks to gain
similar levels of acceptance.

Have you analyzed the implementation of synchronization in those other
languages relative to the issues raised in my original post?  Those are
legitimate implementation issues, and you need to at least have an answer
for them even if you retain the synchronize statement.

-Kevin

P.S. Along the lines of language purity, my next recommendation would be
to drop the asm statement:
  * It forces you to maintain two languages, not one, and to implement the
    compiler accordingly.
  * It introduces impure LOW-LEVEL non-portable elements into the code.
  * Modern build and link tools are good enough that any desired assembly
    can be implemented in .s files, run through a separate assembler, and
    then linked with the D-generated code.  Modern linkers can still
    perform inline optimization to provide efficiency.
  * This allows ABI-specific elements to be placed in ABI-specific
    locations, rather than being embedded in core code (an issue I
    noticed in a separate thread here as well).

--
________________________________________________________________________
Kevin Quick                  Surgient Networks               Project UDI
kevin.quick surgient.com      Austin,  Texas                      Editor
+1 512 241 4801              www.surgient.com         www.projectudi.org
Aug 21 2001
next sibling parent reply Russell Bornschlegel <kaleja estarcion.com> writes:
Kevin Quick wrote:
 P.S. Along the lines of language purity, my next recommendation would be
 to drop the asm statement:
   * It forces you to maintain two languages, not one, and to implement the
     compiler accordingly.
   * It introduces impure LOW-LEVEL non-portable elements into the code
   * Modern build and link tools are good enough that any desired assembly
     can be implemented in .s files, run through a separate assembler, and
     then linked with the D-generated code.  Modern linkers can still
     perform inline optimization to provide efficiency.
   * This allows ABI-specific elements to be placed in ABI-specific
     locations, rather than being embedded in core code (an issue I
     noticed in a separate thread here as well).
Ah, on this note, while I don't feel strongly one way or another about the
presence/absence of an asm statement, I _do_ think that "volatile" should
be retained in D; if you have reasonably portable multithreading, the
volatile attribute is very useful and far more portable than inline asm.
Not to mention that it shouldn't be difficult to implement for a compiler
savant. :)

Alternately, maybe there's something you can do with a synchronize block
to ensure that variables are always reloaded on their first use within a
synch -- i.e. implicitly volatile.

(Ooh! "synch"! There's your solution to synchronize/synchronise!)

-RB
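
As a data point for the "implicitly volatile" idea: with pthreads in C
much of this effect falls out already, because the lock call is an opaque
external function (and an acquire barrier), so the compiler cannot keep a
shared variable cached in a register across it.  A minimal sketch, with
invented names:

        #include <pthread.h>

        static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        static int shared_flag;    /* written by another thread under 'm' */

        void wait_for_flag(void)
        {
            int done = 0;
            while (!done) {
                pthread_mutex_lock(&m);    /* opaque call: 'shared_flag' must
                                              be re-read from memory after
                                              this, not reused from a register */
                done = shared_flag;
                pthread_mutex_unlock(&m);
            }
        }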
Aug 21 2001
parent reply Dan Hursh <hursh infonet.isl.net> writes:
Russell Bornschlegel wrote:
 Ah, on this note, while I don't feel strongly one way or another about the
 presence/absence of an asm statement, I _do_ think that "volatile" should
 be retained in D; if you have reasonably portable multithreading, the
 volatile attribute is very useful and far more portable than inline asm.
 Not to mention that it shouldn't be difficult to implement for a compiler
 savant. :)
 
 Alternately, maybe there's something you can do with a synchronize block
 to ensure that variables are always reloaded on their first use within a
 synch -- i.e. implicitly volatile.
 
 (Ooh! "synch"! There's your solution to synchronize/synchronise!)
 
 -RB
WRT the volatile functionality, I'd be curious what the D way of doing
this is.  My first guess would be to use an object, a global, or a
dynamically allocated variable reference (like an object).

Dan
Aug 25 2001
parent reply "Walter" <walter digitalmars.com> writes:
Dan Hursh wrote in message <3B8890A9.82A61BDA infonet.isl.net>...
WRT the volatile functionality, I'd be curious what the D way of doing
this is.  My first guess would be to us an object, a global or a
dynamicaly allocated variable reference (like an object).
The way to do it in D is:

        synchronize { foo = blah; }

Whatever happens in the synchronize block is guaranteed to be atomic.

My problem with the volatile keyword is that the x86 CPU is not helpful -
it doesn't guarantee writes of over 32 bits to be atomic (this includes
80 bit floats).  Hence, to implement volatile, it would have to be
wrapped in a mutex anyway.
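
Spelled out by hand, that wrapping looks something like the following
sketch (pthreads and invented names assumed).  Without the lock, a 64-bit
store on ia32 is typically compiled as two 32-bit writes, so a concurrent
reader could observe half of each:

        #include <pthread.h>

        static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        static long long wide_value;    /* 64 bits: too wide for an atomic
                                           plain store on a 32-bit x86 */

        void set_wide(long long v)
        {
            pthread_mutex_lock(&m);
            wide_value = v;             /* both halves updated under the lock */
            pthread_mutex_unlock(&m);
        }

        long long get_wide(void)
        {
            long long v;
            pthread_mutex_lock(&m);
            v = wide_value;
            pthread_mutex_unlock(&m);
            return v;
        }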
Aug 26 2001
parent reply Russell Bornschlegel <kaleja estarcion.com> writes:
Walter wrote:
 
 Dan Hursh wrote in message <3B8890A9.82A61BDA infonet.isl.net>...
WRT the volatile functionality, I'd be curious what the D way of doing
this is.  My first guess would be to us an object, a global or a
dynamicaly allocated variable reference (like an object).
The way to do it in D is: synchronize { foo = blah; } Whatever happens in the synchronize block is guaranteed to be atomic. My problem with the volatile keyword is the x86 CPU is not helpful - it doesn't guarantee writes of over 32 bits to be atomic (this includes 80 bit floats). Hence, to implement volatile, it would have to be wrapped in a mutex anyway.
My understanding of "volatile" is different from this -- I'd have to read the C spec to be sure, but I've always assumed that I was "on my own" for atomicity and that volatile merely told the compiler/optimizer that something outside the current thread of execution could modify the variable. -RB
Aug 26 2001
parent reply "Walter" <walter digitalmars.com> writes:
Russell Bornschlegel wrote in message <3B89DEA1.10F530B2 estarcion.com>...
My understanding of "volatile" is different from this -- I'd
have to read the C spec to be sure, but I've always assumed
that I was "on my own" for atomicity and that volatile merely
told the compiler/optimizer that something outside the current
thread of execution could modify the variable.
In C, volatile essentially means the generated code should not cache the
value in a register.  But with the advent of multithreaded programming,
volatile has been generalized to mean that it is accessed atomically.
Today, I believe the C definition of volatile is obsolete and the atomic
meaning is what is needed.

Volatile is such a rarely used attribute in C that it also seems an
excessive burden to propagate it all through the internal typing system
of the compiler.
Aug 27 2001
parent reply Kevin Quick <kevin.quick surgient.com> writes:
"Walter" <walter digitalmars.com> writes:

 Russell Bornschlegel wrote in message <3B89DEA1.10F530B2 estarcion.com>...
My understanding of "volatile" is different from this -- I'd
have to read the C spec to be sure, but I've always assumed
that I was "on my own" for atomicity and that volatile merely
told the compiler/optimizer that something outside the current
thread of execution could modify the variable.
In C, volatile essentially means the generated code should not cache the value in a register. But in the advent of multithreaded programming, volatile has been generalized to mean that it is accessed atomically. Today, I believe the C definition of volatile is obsolete and the atomic meaning is what is needed.
volatile in its original C interpretation is very important for accessing
device registers or local memory that may be DMA'd to by hardware.
Essentially it tells the compiler that it's *not* OK to reuse the value it
read 50 instructions ago that happens to still be in a register, because
something *external* to the program may have changed the location's value.

-Kevin
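
A classic illustration of that original interpretation, with a made-up
register address (everything here is hypothetical): without the volatile
qualifier the compiler could legally load the status word once and spin on
the stale copy forever.

        #include <stdint.h>

        /* Hypothetical memory-mapped status register; the address is
           made up for illustration. */
        #define DEV_STATUS  ((volatile uint32_t *)0xFEED0000)
        #define DEV_READY   0x1u

        void wait_for_device(void)
        {
            /* volatile forces a fresh load from the device each iteration */
            while ((*DEV_STATUS & DEV_READY) == 0)
                ;   /* busy-wait until the hardware sets the ready bit */
        }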
Aug 27 2001
parent reply weingart cs.ualberta.ca (Tobias Weingartner) writes:
In article <m3d75hzdu8.fsf surgient.com>, Kevin Quick wrote:
 "Walter" <walter digitalmars.com> writes:
 
 Russell Bornschlegel wrote in message <3B89DEA1.10F530B2 estarcion.com>
My understanding of "volatile" is different from this -- I'd
have to read the C spec to be sure, but I've always assumed
that I was "on my own" for atomicity and that volatile merely
told the compiler/optimizer that something outside the current
thread of execution could modify the variable.
In C, volatile essentially means the generated code should not cache the value in a register. But in the advent of multithreaded programming, volatile has been generalized to mean that it is accessed atomically. Today, I believe the C definition of volatile is obsolete and the atomic meaning is what is needed.
volatile in its original C interpretation is very important for accessing device registers or local memory that may be DMA'd to by hardware. Essentially it tells the compiler that it's *not* OK to use the value it read 50 instructions ago that happens to still be in a register because something *external* to the program may have changed the location's value.
There is more to this than meets the eye.  Allow me.

1) There is volatile, the ability of something outside of the compiler's
   control to modify some state in the environment the program is running
   in.  DMA, control registers, etc.

2) There is atomic, the ability to make sure things get modified in one
   swoop.  The ability of multiple threads/processes to modify the same
   thing at the same time, and only 1 of N will be there.  There is no
   corruption happening, even with more than one thing going after the
   same memory location.

3) There is synchronisation, the ability to have a certain set of things
   finished before you test/modify something else.  The most common case
   is to have an alpha cpu do a whole bunch of floating-point operations
   in an asynchronous mode (the quickest available), and at the end of
   them, synchronise, and let any exceptions filter through (this requires
   the equivalent of a pipeline flush).  Note that the synchronisation
   primitive can not be "moved".  IE: strength-reduce in a loop, etc.

4) There is ordering, which can be (although tedious) done by using
   synchronisation primitives.

Anyhow, if D supports all of these, that would be cool... Note, C only

--Toby.
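
Points 3) and 4) are the ones that neither volatile nor a mutex by itself
addresses; here is a small sketch of the usual publish/consume pattern,
using GCC's __sync_synchronize() purely as a stand-in for whatever barrier
primitive the language or OS would actually expose:

        static int payload;           /* data being handed off            */
        static volatile int ready;    /* flag the consumer polls          */

        void producer(void)
        {
            payload = 42;
            __sync_synchronize();     /* barrier: the payload store must
                                         become visible before the flag   */
            ready = 1;
        }

        int consumer(void)
        {
            while (!ready)
                ;                     /* volatile: re-read the flag        */
            __sync_synchronize();     /* barrier: don't let the payload
                                         load move above the flag test     */
            return payload;
        }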
Aug 27 2001
parent Dan Hursh <hursh infonet.isl.net> writes:
Tobias Weingartner wrote:
 
There is more to this that meets the eye. Allow me. 1) There is volatile, the ability something outside of the compilers control to modify some state in the environment the program is running in. DMA, control registers, etc. 2) There is atomic, the ability to make sure things get modified in one swoop. The ability of multiple threads/processes to modify the same thing at the same time, and only 1 of N will be there. There is no corruption happening, even with more than one thing going after the same memory location. 3) There is syncronisation, the ability to have a certain set of things finished, before you test/modify something else. The most common case is to have an alpha cpu do a whole bunch of floating-point operations in an asynchronous mode (the quickest available), and at the end of them, synchronise, and let any exceptions filter through (this requires the equivelant of a pipeline flush). Note that the synchronisation primitive can not be "moved". IE: strength-recude in a loop, etc. 4) There is ordering, which can be (although tedious) done by using synchronisation primitives. Anyhow, if D supports all of these, that would be cool... Note, C only
I agree that it would be great if D supported all of these.  I guess I
just took the synchronize keyword to mean "atomic" but not "don't cache".
I was just wondering how I would read a flag from an interrupt handler or
another thread w/o caching.  I guess the loads and stores are supposed to
be a part of the atomic operation.

Dan
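
For the read-a-flag-from-an-interrupt-handler case, the conventional C
answer is a volatile sig_atomic_t, which the standard guarantees can be
read and written atomically with respect to a signal handler; a minimal
sketch:

        #include <signal.h>

        static volatile sig_atomic_t got_signal;

        static void on_signal(int signo)
        {
            (void)signo;
            got_signal = 1;     /* setting such a flag is about all a
                                   handler may safely do with it */
        }

        void wait_for_signal(void)
        {
            signal(SIGUSR1, on_signal);
            while (!got_signal)
                ;               /* volatile: the flag is reloaded each pass */
        }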
Aug 27 2001
prev sibling next sibling parent reply "Bradeeoh" <bradeeoh crosswinds.net> writes:
I think if Walter decides to gear the language he is developing towards
multi-threading operating systems, that's his prerogative.  :)  As long as
it's platform independent amongst those o/ses   :)

-Brady

"Kevin Quick" <kevin.quick surgient.com> wrote in message
news:m3u1z1cvqt.fsf surgient.com...
 "Walter" <walter digitalmars.com> writes:

 Synchronized statements are not a new idea, they work well in other
 languages. They're basically just syntactic sugar around a try-finally
 construct, where the first statement in the try acquires the mutex and
the
 finally releases it. Any operating system that supports multithreaded
 programming should be able to provide a suitable primitive mutex
 object. -Walter
By your last sentence you appear to by tying D to a specific operating system implementation, since D will have to generate mutex calls specific to the operating system. The C language is pure and can be implemented anywhere. It sounds like D is restricted to: * A specific operating system for which it generates the right mutex calls? * The subset of operating systems that are thread-based Defining D as a new language is a noble effort... keep the language pure and portable, don't target x86, Linux, *nix, or any other specific environment. Your first sentence isn't much of an answer either... are those languages intended to operate at the same level as C and D, or are they higher-level languages which have less focus on implementation? One of the attractions C has held for so long is that it provides a (usually) reasonable balance between higher-order programmatic abstractions and lower-level implementation control; D should have the same goal if it seeks to gain similar levels of acceptance. Have you analyzed the implementation of synchronization in those other languages relative to the issues raised in my original post? Those are legitimate implementation issues and you need to at least have an answer for them even if you retain the synchronize statement. -Kevin P.S. Along the lines of language purity, my next recommendation would be to drop the asm statement: * It forces you to maintain two languages, not one, and to implement the compiler accordingly. * It introduces impure LOW-LEVEL non-portable elements into the code * Modern build and link tools are good enough that any desired assembly can be implemented in .s files, run through a separate assembler, and then linked with the D-generated code. Modern linkers can still perform inline optimization to provide efficiency. * This allows ABI-specific elements to be placed in ABI-specific locations, rather than being embedded in core code (an issue I noticed in a separate thread here as well). -- ________________________________________________________________________ Kevin Quick Surgient Networks Project UDI kevin.quick surgient.com Austin, Texas Editor +1 512 241 4801 www.surgient.com www.projectudi.org
Aug 21 2001
parent reply "Walter" <walter digitalmars.com> writes:
"Bradeeoh" <bradeeoh crosswinds.net> wrote in message
news:9lu77h$2glk$1 digitaldaemon.com...
 I think if Walter decides to gear the language he is developing towards
 multi-threading operating systems, that's his peroggative.  :)  As longs
as
 it's platform independant amongst those o/ses   :)
If the platform doesn't support multiple threads, then the synchronize statement would just revert to a no-op.
Aug 23 2001
parent reply Kevin Quick <kevin.quick surgient.com> writes:
"Walter" <walter digitalmars.com> writes:
 
 If the platform doesn't support multiple threads, then the synchronize
 statement would just revert to a no-op.
I'm afraid I'm still unconvinced.

Linux has several implementations of threads available, including
pthreads and sstthreads (http://sourceforge.net/projects/ssthreads/).
Because of this, D itself has to be rebuilt depending on which threads
package I'm intending to use, even on the same platform.

This also removes any binary portability of D output... I can't take a
D executable I built on Linux and run it on another POSIX x86 OS.

-Kevin
Aug 24 2001
next sibling parent reply Russ Lewis <russ deming-os.org> writes:
Kevin Quick wrote:

 "Walter" <walter digitalmars.com> writes:
 If the platform doesn't support multiple threads, then the synchronize
 statement would just revert to a no-op.
I'm afraid I'm still unconvinced. Linux has several implementations of threads available, including pthreads and sstthreads (http://sourceforge.net/projects/ssthreads/). Because of this, D itself has to be rebuilt depending on which threads package I'm intending to use, even on the same platform. This also removes any binary portability of D output... I can't take a D exectuable I built on Linux and run it on another POSIX x86 OS.
Why not dynamically link your D executable with a library?  You could
export your threading features to a binary library.

Better yet, export it to a .d module; then the author of that .d module
can call a C function directly, include assembly language code, or just
declare the function (which means that D will look for it in a library
you import).
Aug 24 2001
parent Kevin Quick <kevin.quick surgient.com> writes:
Russ Lewis <russ deming-os.org> writes:

Why not dynamically link your D executable with a library? You could export your threading features to a binary library. Better yet, export it to a .d module; then the author of that .d module can call a C function directly, include assembly language code, or just declare the function (which means that D will look for it in a library you import).
That library would need to provide a full threading implementation
(start_thread, stop_thread, signal_thread, ...) as well as the mutex
operations.  Since this library might be provided by someone else, the
library's API needs to be standardized: partly so that the D compiler can
generate the correct thread_mutex calls, and partly so that my code can do
its thread stuff.  This is necessary even to provide source compatibility
between platforms.

The current D specification assumes the first part (above), and makes no
provision for the second part.  IMHO, it needs to jump off the fence onto
one side or the other:

  (a) no process/thread/sync in the language, leaving it all to libraries
      (dynamically or statically linked), or
  (b) a full threading interface in the language (even if the language
      then simply generates *specified*, standardized API calls to a
      thread library provided on the target).

-Kevin
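
As a sketch of the kind of standardized layer being asked for (every name
below is invented for illustration; nothing here is part of any D
specification): a small, fixed C API the compiler could target for
synchronize, with the platform's actual threads package hidden behind it.

        /* Hypothetical runtime shim the compiler could emit calls to. */
        #include <pthread.h>
        #include <stdlib.h>

        typedef struct d_mutex d_mutex;        /* opaque to generated code */

        d_mutex *d_mutex_create(void);         /* one per synchronized object */
        void     d_mutex_lock(d_mutex *m);     /* emitted at block entry      */
        void     d_mutex_unlock(d_mutex *m);   /* emitted at block exit       */
        void     d_mutex_destroy(d_mutex *m);

        /* One possible implementation, backed by pthreads: */
        struct d_mutex { pthread_mutex_t pm; };

        d_mutex *d_mutex_create(void)
        {
            d_mutex *m = malloc(sizeof *m);
            if (m)
                pthread_mutex_init(&m->pm, NULL);
            return m;
        }

        void d_mutex_lock(d_mutex *m)    { pthread_mutex_lock(&m->pm); }
        void d_mutex_unlock(d_mutex *m)  { pthread_mutex_unlock(&m->pm); }
        void d_mutex_destroy(d_mutex *m) { pthread_mutex_destroy(&m->pm); free(m); }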
Aug 27 2001
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
I confess I don't see the problem. D binaries will be as portable or
unportable as C ones. The source will be more portable, as with the
synchronize statement you needn't know or care which threading library is
used.

Kevin Quick wrote in message ...
"Walter" <walter digitalmars.com> writes:
 If the platform doesn't support multiple threads, then the synchronize
 statement would just revert to a no-op.
I'm afraid I'm still unconvinced. Linux has several implementations of threads available, including pthreads and sstthreads (http://sourceforge.net/projects/ssthreads/). Because of this, D itself has to be rebuilt depending on which threads package I'm intending to use, even on the same platform. This also removes any binary portability of D output... I can't take a D exectuable I built on Linux and run it on another POSIX x86 OS. -Kevin
Aug 24 2001
parent Kevin Quick <kevin.quick surgient.com> writes:
"Walter" <walter digitalmars.com> writes:

 I confess I don't see the problem. D binaries will be as portable or
 unportable as C ones. The source will be more portable, as with the
 synchronize statement you needn't know or care which threading library is
 used.
None of the statements in the C language depend on external libraries.
I can compile pure C code on a Linux x86 and run it under another OS
supporting ELF object formats (e.g. FreeBSD, NetBSD, UnixWare, Solaris,
etc.).  If I choose to use a library (e.g. stdio for printf, etc.), I can
use a standardized library which is *usually* available on most platforms,
or I can use a different library which may be available only on the subset
of platforms interesting to me.  However, use of that library is through
*function calls* with a defined API.  I'd expect a call to printf embedded
in my executable to work the same way on the above-named platforms.

D will need to translate the "synchronize" statement into an external
library call.  The threads library on the target platform must support
that call, therefore the API must be specified (as part of D).  If this
isn't done, then the executable I create for Linux x86 pthreads won't run
under any other OS (unless it has the same pthreads support).  That's the
first part of the problem.

The second part of the problem is allowing the programmer to select the
threading support base.  For example, if I'm using Linux, D may be built
using pthreads, which is a widely used threading library.  However, what
if I wanted to use the D implementation of SST threads instead?  My D code
would use calls like sst_thread_create, etc., but the synchronize
statement would still call pthread_mutex_lock... not good.  I have to care
what threading library I'm using unless D fully specifies all
thread-related functionality (e.g. thread_create, thread_delete,
thread_signal, etc.) so that I don't need to call the thread library
myself.

-Kevin
Aug 27 2001
prev sibling parent "Walter" <walter digitalmars.com> writes:
"Kevin Quick" <kevin.quick surgient.com> wrote in message
news:m3u1z1cvqt.fsf surgient.com...
 P.S. Along the lines of language purity, my next recommendation would be
 to drop the asm statement:
   * It forces you to maintain two languages, not one, and to implement the
     compiler accordingly.
   * It introduces impure LOW-LEVEL non-portable elements into the code
   * Modern build and link tools are good enough that any desired assembly
     can be implemented in .s files, run through a separate assembler, and
     then linked with the D-generated code.  Modern linkers can still
     perform inline optimization to provide efficiency.
   * This allows ABI-specific elements to be placed in ABI-specific
     locations, rather than being embedded in core code (an issue I
     noticed in a separate thread here as well).
I relied for years on a separate assembler.  The trouble was, there were
always random bugs in random versions of the assembler.  Couple that with
Microsoft repeatedly changing the syntax of the asm, and even revamping
the command line interface.  It was just a lot of grief.  Implementing the
inline asm in the C compiler just eliminated these problems - I had a
consistent, reliable assembler to use.

If there is no asm statement in D, then I'd be using the native C compiler
for its asm statement.  If you've ever tried gcc's inline assembler,
you'll know why that's a bad idea!

Yes, of course, inline assembler is not portable between CPUs.  But it
*is* portable between, say, win32 and linux, and I see its primary use as
implementing system dependent code anyway.
Aug 23 2001