
digitalmars.D - Opportunity: Software Execution Time Determinism

reply Nordlöw <per.nordlow gmail.com> writes:
I'm currently in an industry with extremely high demands on 
software determinism, both in space and time (hard realtime).

My conclusion so far is that most safety-critical industries 
today are in desperate need of better (any) language support for 
guaranteeing determinism, especially time. The amount of 
time/money spent/wasted on complicated extra tooling processes 
and tests to assure time-determinism is just insane, at least in 
the avionics industry.

If D were to attack this problem in the same systematic way it 
restricts execution behaviour with `@safe pure nothrow @nogc`, D 
would have yet another killer feature in its feature pack.

I'm aware of the lack of absolute time-determinism in the CPU 
architectures of today. But the industry still uses such 
architectures, sometimes with memory caches disabled and 
multi-core/multi-CPU forbidden in its products.

Has anybody thought about adding support for this in D? I assume 
it would have to integrate with the backend in various 
complicated ways. Both the frontend and the backend would need to 
have options for generating code that promotes deterministic 
execution over smallest average execution time (which is 
currently the default optimization route taken by most compilers 
and library algorithms).

Are there any languages or compilers that try to attack this 
problem?

Note that this problem is highly related to the concept of 
"cyclomatic complexity".

See also:

https://en.wikipedia.org/wiki/Cyclomatic_complexity
Apr 13 2016
next sibling parent reply Nick B <nick.barbalich gmail.com> writes:
On Wednesday, 13 April 2016 at 15:50:40 UTC, Nordlöw wrote:
 I'm currently in an industry with extremely high demands on 
 software determinism, both in space and time (hard realtime).
 I'm aware of the lack of absolute time-determinism in the CPU 
 architectures of today.
What is absolute time-determinism in a CPU architecture, and why is it important in hard real-time environments?

Nick
Apr 13 2016
parent reply Nordlöw <per.nordlow gmail.com> writes:
On Wednesday, 13 April 2016 at 22:25:12 UTC, Nick B wrote:
 What is absolute time-determinism in a CPU architecture?
Take the expression "absolute time-determinism" with a grain of salt. I'm saying that even though the machine code doesn't contain any branches, and caches have been invalidated prior to the start of each frame, the execution time of the program may still contain variations that depend on the *values* being fed into the calculation.

The reason for this is that, on some CPU architectures, some instructions, such as trigonometric functions, are implemented using microcode that requires a different number of clock cycles depending on the parameter(s) passed to them. At my work, these variations have actually been measured and documented and are used to calculate worst-case variations of the WCET. A compiler backend, such as DMD's, could be enhanced to leverage these variations automatically.
Apr 15 2016
parent reply Observer <here nowhere.org> writes:
On Friday, 15 April 2016 at 08:03:53 UTC, Nordlöw wrote:
 On Wednesday, 13 April 2016 at 22:25:12 UTC, Nick B wrote:
 What is absolute time-determinism in a CPU architecture?
 Take the expression "absolute time-determinism" with a grain of salt. I'm saying that even though the machine code doesn't contain any branches, and caches have been invalidated prior to the start of each frame, the execution time of the program may still contain variations that depend on the *values* being fed into the calculation.

 The reason for this is that, on some CPU architectures, some instructions, such as trigonometric functions, are implemented using microcode that requires a different number of clock cycles depending on the parameter(s) passed to them. At my work, these variations have actually been measured and documented and are used to calculate worst-case variations of the WCET. A compiler backend, such as DMD's, could be enhanced to leverage these variations automatically.
It seems to me that you're also a slave to many details of the compiler back-end, notably exactly what instructions are output. That will likely change under different optimization levels, and can also change in unexpected ways when nearby code changes and instructions get re-ordered by a peephole optimizer that decides it now has a chance to kick in and modify surrounding code. Not to mention that you're subject to optimizer changes over time in successive versions of the compiler. I'm curious: how often do you consider it necessary to re-validate all the assumptions that were made in a particular code review?
Apr 16 2016
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 16/04/2016 7:09 PM, Observer wrote:
 It seems to me that you're also a slave to many details of the
 compiler back-end, notably exactly what instructions are output.
 That will likely change under different optimization levels, and
 can also change in unexpected ways when nearby code changes and
 instructions get re-ordered by a peephole optimizer that decides
 it now has a chance to kick in and modify surrounding code.  Not
 to mention that you're subject to optimizer changes over time in
 successive versions of the compiler.  I'm curious:  how often do
 you consider it necessary to re-validate all the assumptions that
 were made in a particular code review?
Random thought: we could piggy-back on -cov/-profile and allow real-world usage to show what the execution time is (min/max/mean).
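A very rough sketch of what such a harness could record per function (plain library code, not tied to -cov/-profile in any way; the measure helper is hypothetical):

import core.time : Duration, MonoTime;

// Run fn `runs` times and report [min, max, mean] wall-clock time.
// Assumes runs > 0; real -profile integration would hook this into
// the instrumented prologue/epilogue instead.
Duration[3] measure(void delegate() fn, size_t runs)
{
    auto min = Duration.max;
    auto max = Duration.zero;
    Duration total;
    foreach (_; 0 .. runs)
    {
        immutable t0 = MonoTime.currTime;
        fn();
        immutable dt = MonoTime.currTime - t0;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
        total += dt;
    }
    return [min, max, total / runs];
}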
Apr 16 2016
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
The compiler could fairly easily compute cyclomatic complexity, but how 
that would be used to determine max time escapes me.

For example, how many times would a particular loop be executed? Isn't this the 
halting problem, i.e. not computable?

Andrei has done some great work on determining big O complexity, but that's 
only a small part of this problem.

I don't know about any work in this area, but I can see it would be valuable.
Apr 13 2016
next sibling parent reply Simen Kjaeraas <simen.kjaras gmail.com> writes:
On Wednesday, 13 April 2016 at 22:58:26 UTC, Walter Bright wrote:
 The compiler could fairly easily compute cyclomatic 
 complexity, but how that would be used to determine max time 
 escapes me.

 For example, how many times would a particular loop be 
 executed? Isn't this the halting problem, i.e. not computable?
The first step is simple - we care only about functions being constant-time. Let's invent a keyword for that: @constanttime.

@constanttime functions can only call other functions marked @constanttime, and may not contain conditionals, gotos or while-loops.

@constanttime functions may contain for- and foreach-loops, iff the number of iterations is known at compile time, and 'break' is never used.

The part about conditionals seems a bit harsh, but it's got to be there for determinism.

Constant time is easy, and may or may not be enough to cover Nordlöw's needs. Anything beyond it is very likely to be halting-problem stuff.
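As a minimal sketch, today's UDA syntax could already express the marker, with enforcement left to a hypothetical compiler pass (nothing below is checked by any existing compiler):

// Hypothetical marker attribute; an enforcing compiler would reject
// conditionals, while-loops, gotos, breaks and calls to unmarked functions.
enum constanttime;

@constanttime int dot4(ref const int[4] a, ref const int[4] b)
{
    int sum = 0;
    foreach (i; 0 .. 4)         // iteration count known at compile time
        sum += a[i] * b[i];     // straight-line body, no branches
    return sum;
}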
 Andrei has done some great work on determining big O complexity,
 but that's only a small part of this problem.
I have a friend who works on timing attacks in cryptography. Knowing the implementation and measuring the time it takes to multiply two bigints can help you guess what the numbers are, so in his case @constanttime would be exactly what he wants, while big-O would be useless.

Not knowing Nordlöw's use case, I can't say for sure what he actually needs.

-- Simen
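For reference, the kind of routine that work cares about is the classic constant-time comparison - a standard pattern, not anything specific to this proposal:

// Equality check whose running time is independent of *where* the
// inputs differ, so timing reveals nothing about the contents.
bool constantTimeEquals(const(ubyte)[] a, const(ubyte)[] b)
{
    if (a.length != b.length)
        return false;           // lengths are usually public anyway
    ubyte diff = 0;
    foreach (i; 0 .. a.length)
        diff |= a[i] ^ b[i];    // accumulate differences without branching
    return diff == 0;
}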
Apr 13 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/13/2016 5:31 PM, Simen Kjaeraas wrote:
 On Wednesday, 13 April 2016 at 22:58:26 UTC, Walter Bright wrote:
 The compiler could fairly easily compute cyclomatic complexity, but
 that would be used to determine max time escapes me.

 For example, how many times would a particular loop be executed? Isn't this
 the halting problem, i.e. not computable?
 The first step is simple - we care only about functions being constant-time. Let's invent a keyword for that: @constanttime. @constanttime functions can only call other functions marked @constanttime, and may not contain conditionals, gotos or while-loops. @constanttime functions may contain for- and foreach-loops, iff the number of iterations is known at compile time, and 'break' is never used.
Very interesting. Recursion would have to be disallowed as well.
 The part about conditionals seems a bit harsh, but it's got to
 be there for determinism.
Understood.
 Not knowing Nordlöw's use case I can't say for sure what he
 actually needs.
Your ideas are good. Let's see what Nordlöw says.
Apr 13 2016
next sibling parent reply Johannes Pfau <nospam example.com> writes:
On Wed, 13 Apr 2016 18:35:34 -0700,
Walter Bright <newshound2 digitalmars.com> wrote:

 On 4/13/2016 5:31 PM, Simen Kjaeraas wrote:
 On Wednesday, 13 April 2016 at 22:58:26 UTC, Walter Bright wrote:
 The compiler could fairly easily compute cyclomatic complexity,
 but how that would be used to determine max time escapes me.

 For example, how many times would a particular loop be executed?
 Isn't this the halting problem, i.e. not computable?
 The first step is simple - we care only about functions being
 constant-time. Let's invent a keyword for that: @constanttime.
 @constanttime functions can only call other functions marked
 @constanttime, and may not contain conditionals, gotos or
 while-loops. @constanttime functions may contain for- and
 foreach-loops, iff the number of iterations is known at compile
 time, and 'break' is never used.
 Very interesting. Recursion would have to be disallowed as well.
Unless you can calculate the recursion 'depth'. Though that could get very complicated.
 The part about conditionals seems a bit harsh, but it's got to
 be there for determinism.
 Understood.
 Not knowing Nordlöw's use case I can't say for sure what he
 actually needs.
 Your ideas are good. Let's see what Nordlöw says.
Such deterministic code is usually very restricted. This is expected. See also: https://en.wikipedia.org/wiki/Worst-case_execution_time

I assume Nordlöw only cares about the WCET, not 'complete' determinism (if a loop executes 5 or 6 times depending on input data, the run time is not deterministic, but the maximum is).

As a compiler we can't give a useful estimate of WCET without knowing the target architecture very well (calculating some upper bound is not too difficult, but if you ignore pipelining and caches the calculated WCET might be 4-5x higher than the real WCET). What we can do, though, is limit the language to a subset which can be WCET-analyzed by other tools. As these tools have to determine whether code is time-deterministic as well, it might make sense to have a closer look at some WCET tools:

http://users.ece.utexas.edu/~bevans/courses/ee382c/lectures/spring2000/23_hwsw/cinderella.pdf
https://www.rapitasystems.com/WCET-Tools
Apr 14 2016
next sibling parent dgls <dglsparsons gmail.com> writes:
From my understanding of real-time systems, WCET typically depends 
not only on the architecture of the target platform, but also on 
the criticality of the system.

On Thursday, 14 April 2016 at 08:14:59 UTC, Johannes Pfau wrote:
 Such deterministic code is usually very restricted. This is 
 expected. See also: 
 https://en.wikipedia.org/wiki/Worst-case_execution_time
Perhaps the best example of a restricted subset of a language would be Ada's Ravenscar profile, for high-integrity real-time systems: http://www.dit.upm.es//~str/proyectos/ork/documents/RP_spec.pdf
 As a compiler we can't give a useful estimate of WCET without 
 knowing the target architecture very well (calculating some 
 upper bound is not too difficult, but if you ignore pipelining 
 and caches the calculated WCET might be 4-5x higher than the 
 real WCET).
Agreed; I fail to see how a compiler could provide any useful estimate. Not only does this measurement depend on the target hardware, but also on the criticality of the task - a calculated upper bound is used for tasks with hard deadlines, but measurement is usually sufficient for tasks with soft deadlines.
Apr 14 2016
prev sibling parent Nordlöw <per.nordlow gmail.com> writes:
On Thursday, 14 April 2016 at 08:14:59 UTC, Johannes Pfau wrote:
 I assume Nordlöw only cares about the WCET, not 'complete'
 determinism (if a loop executes 5 or 6 times depending on input 
 data the run time is not deterministic but the maximum is)
Yep, that's what is meant here.
Apr 14 2016
prev sibling parent reply qznc <qznc web.de> writes:
On Thursday, 14 April 2016 at 01:35:34 UTC, Walter Bright wrote:
 On 4/13/2016 5:31 PM, Simen Kjaeraas wrote:
 On Wednesday, 13 April 2016 at 22:58:26 UTC, Walter Bright 
 wrote:
 The compiler could fairly easily compute cyclomatic 
 complexity, but how that would be used to determine max time 
 escapes me.

 For example, how many times would a particular loop be 
 executed? Isn't this
 the halting problem, i.e. not computable?
 The first step is simple - we care only about functions being constant-time. Let's invent a keyword for that: @constanttime. @constanttime functions can only call other functions marked @constanttime, and may not contain conditionals, gotos or while-loops. @constanttime functions may contain for- and foreach-loops, iff the number of iterations is known at compile time, and 'break' is never used.
Very interesting. Recursion would have to be disallowed as well.
 The part about conditionals seems a bit harsh, but it's got to
 be there for determinism.
Understood.
A limited version of conditionals could be allowed. For example, on x86 there is the CMOV instruction. The basic idea: you compute all branches and then select the result. This is about code generation and cannot be checked via annotation.
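For illustration, here is that compute-both-and-select idea written out by hand in D (explicit masking; whether a backend emits CMOV for the equivalent ternary is codegen-dependent):

// Branchless select: evaluate both alternatives, then pick one with
// bit masking so the running time does not depend on `cond`.
int select(bool cond, int ifTrue, int ifFalse)
{
    immutable int mask = -(cast(int) cond);   // all ones if cond, else zero
    return (ifTrue & mask) | (ifFalse & ~mask);
}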
Apr 14 2016
parent Tamas <user dlang.io> writes:
I think the problem can be separated into two:
1. Guarantee a given big-O complexity of a function (that's what 
matters for most cases).
2. Guarantee / tell the worst-case execution time / CPU cycles of 
a given @constanttime function.
Combining these two, we can calculate the worst-case execution 
time for the full program.

@constanttime can have branches, each branch with a different worst-case 
execution time; it won't change the @constanttime nature of 
the function. (But it makes it harder to calculate the worst-case 
execution time, and the average case and the worst case won't be 
the same.)

@constanttime could also have loops / recursion, if the number of 
iterations / recursions is limited. E.g. sorting 5 numbers inside 
a median-of-5 algorithm is @constanttime, even if it calls a 
sort function which is O(N log N). I.e. this is valid:

import std.algorithm.mutation : swap;

@constanttime auto sort5(Range)(Range values)
{
   assert(values.length <= 5);  // bounded input size => bounded running time
   return values.bubblesort();
}

/* inferred: @quadratic */ int[] bubblesort(int[] values)
{
   foreach (i; 0 .. values.length)
     foreach (j; i + 1 .. values.length)
       if (values[j] < values[i])
         swap(values[i], values[j]);
   return values;  // the declared int[] result was missing
}

If the number of iterations in a function is proportional to the 
input's length, then it's @linear or @BigO_N. If there are two 
such iterations nested, then it's @quadratic or @BigO_N2, and so 
on. These could very well be inferred and guaranteed by the compiler.

But the compiler might not realize that the amortized time is 
less than what it looks like at first glance. So the user could 
annotate a function inferred as @quadratic as @linear. Then the 
linearity of the function could be validated / guaranteed at 
runtime using the amortization credit system: the function would 
allocate credits based on the input size and its promise, then 
use this credit to call @constanttime functions, while not 
running out of credits. These credits could be passed on to 
@linear functions as well; those will use more credits, according 
to their input sizes. If the promise is broken and the program 
runs out of credit, it should break immediately. This validation 
could be turned off, similar to assert calls.
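A rough sketch of that runtime credit check (CreditMeter and charge are hypothetical names; real support would presumably have the compiler insert the charges):

// Runtime validation of a complexity promise via amortization credits.
struct CreditMeter
{
    size_t credits;

    void charge(size_t n = 1)
    {
        assert(credits >= n, "complexity promise broken");
        credits -= n;
    }
}

// A function promised to be @linear allocates credits from its input
// size and spends one per constant-time step.
int sumLinear(const int[] xs)
{
    auto meter = CreditMeter(xs.length);
    int s = 0;
    foreach (x; xs)
    {
        meter.charge();
        s += x;
    }
    return s;
}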
Apr 14 2016
prev sibling next sibling parent reply Marc Schütz <schuetzm gmx.net> writes:
On Thursday, 14 April 2016 at 00:31:34 UTC, Simen Kjaeraas wrote:
 @constanttime functions can only call other functions marked
 @constanttime, and may not contain conditionals, gotos or
 while-loops.

 @constanttime functions may contain for- and foreach-loops, iff
 the number of iterations is known at compile time, and 'break'
 is never used.

 The part about conditionals seems a bit harsh, but it's got to
 be there for determinism.
It can be relaxed: all alternative branches must take the same number of cycles, though this can be hard to determine.

Recursion is an interesting problem, though: to reliably detect it, the compiler has to know the entire call graph, i.e. no extern functions are allowed. On the other hand, without conditionals, recursion will always result in an endless loop, which would immediately be recognized in testing.
Apr 14 2016
parent Alex Burton <alexibu mac.com> writes:
On Thursday, 14 April 2016 at 09:46:34 UTC, Marc Schütz wrote:
 On Thursday, 14 April 2016 at 00:31:34 UTC, Simen Kjaeraas 
 wrote:
 @constanttime functions can only call other functions marked
 @constanttime, and may not contain conditionals, gotos or
 while-loops.

 @constanttime functions may contain for- and foreach-loops, iff
 the number of iterations is known at compile time, and 'break'
 is never used.

 The part about conditionals seems a bit harsh, but it's got to
 be there for determinism.
 It can be relaxed: all alternative branches must take the same number of cycles, though this can be hard to determine.
I've done this manually on simple microcontrollers. You add up the time taken by the instructions on each path and balance them with nops. It's probably impossible on a CPU instruction set which might run on many different processors with different pipeline lengths, parallel processing units, etc. You'd definitely need hyperthreading off :)
Apr 14 2016
prev sibling parent reply Nordlöw <per.nordlow gmail.com> writes:
On Thursday, 14 April 2016 at 00:31:34 UTC, Simen Kjaeraas wrote:
 The first step is simple - we care only about functions being
 constant-time. Let's invent a keyword for that: @constanttime.

 @constanttime functions can only call other functions marked
 @constanttime, and may not contain conditionals, gotos or
 while-loops.

 @constanttime functions may contain for- and foreach-loops, iff
 the number of iterations is known at compile time, and 'break'
 is never used.

 The part about conditionals seems a bit harsh, but it's got to
 be there for determinism.

 Constant time is easy, and may or may not be enough to cover
 Nordlöw's needs. Anything beyond it is very likely to be
 halting-problem stuff.
Yes, that's what I had in mind. Those mentioned are the low-hanging fruits.

The situation is as follows: a set of processes is tightly scheduled to run after one another within a 1/N frame, where N is typically 60 Hz. Each of these processes runs on a single CPU. The scheduler defines a specific order for the execution of the different processes. After each process has completed its execution within a frame, it copies its out-values to the in-values of the other processes that use its results, which are then executed.

Each process is assigned a time- and space-budget, collectively called WCRU (worst-case resource usage). Execution space can be restricted by forbidding any kind of dynamic memory (de)allocation. This is often straightforward, but it could be useful to have a restriction qualifier for it as well, for instance `@noheap`.

Then a set of tedious manual code reviews is performed on the executable code to assert that it doesn't contain any non-deterministic constructs. After that, the main function of each process is run 100-1000 times, and the WCET (worst-case execution time) paired with the WCSU (worst-case stack usage) is calculated as the maximum over this set of executions, with the correct "stimuli" as input. Figuring out this stimuli is also cumbersome manual work.

To conclude, the manual code review could be greatly simplified or perhaps completely removed if a function attribute, say @constanttime as mentioned above, were available in the language and respected by the compiler.
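To make the setup concrete, here is a rough model of such a frame loop (Process, budget and the assert-based overrun check are illustrative inventions, not our actual scheduler):

import core.time : Duration, MonoTime;

// One schedulable process and its share of the 1/N-second frame.
struct Process
{
    void delegate() run;   // the process's main function
    Duration budget;       // its WCET budget within the frame
}

// One frame: run every process in scheduler-defined order on one CPU.
void frame(Process[] schedule)
{
    foreach (p; schedule)
    {
        immutable start = MonoTime.currTime;
        p.run();  // afterwards its out-values are copied to its consumers
        assert(MonoTime.currTime - start <= p.budget,
               "process exceeded its WCET budget");
    }
}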
Apr 14 2016
parent Nordlöw <per.nordlow gmail.com> writes:
On Thursday, 14 April 2016 at 22:20:21 UTC, Nordlöw wrote:
 Then a set of tedious manual code reviews are performed on the 
 executable code
Executable source code, that is.
Apr 14 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/13/16 6:58 PM, Walter Bright wrote:
 The compiler could fairly easily compute cyclomatic complexity, but
 how that would be used to determine max time escapes me.

 For example, how many times would a particular loop be executed? Isn't
 this the halting problem, i.e. not computable?

 Andrei has done some great work on determining big O complexity, but
 that's only a small part of this problem.

 I don't know about any work in this area, but I can see it would be
 valuable.
Tarjan was among the first to study the problem: http://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/Amortized.pdf. He assigned "computational credits" to functions and designed a type system in which functions cannot consume more credits than assigned.

One simpler thing we can do is add an attribute:

void fun() @credits(2);

meaning fun() consumes two computational credits. Then a function that calls fun() n times consumes n*2 credits, etc. Generally we wouldn't aim for automatically determining the credits consumed, but we can define a library that allows the user to declare credits appropriately.


Andrei
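As a sketch, a plain user-defined attribute could carry the declaration today, with the checking left to a library (the credits struct and the declaredCredits helper below are hypothetical):

// Cost declaration carried as an ordinary UDA.
struct credits { uint n; }

@credits(2) void fun() { /* some fixed-cost work */ }

// Compile-time query a checking library could build on.
template declaredCredits(alias f)
{
    import std.traits : getUDAs;
    enum declaredCredits = getUDAs!(f, credits)[0].n;
}

static assert(declaredCredits!fun == 2);  // a loop calling fun() n times costs n * 2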
Apr 13 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/13/2016 6:13 PM, Andrei Alexandrescu wrote:
 Tarjan was among the first to study the problem:
 http://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/Amortized.pdf.
 He assigned "computational credits" to functions and designed a type system in
 which functions cannot consume more credits than assigned.

 One simpler thing we can do is add an attribute:

 void fun() @credits(2);

 meaning fun() consumes two computational credits. Then a function that calls
 fun() n times consumes n*2 credits etc. Generally we wouldn't aim for
 automatically determining credits consumed, but we can define a library that
 allows the user to declare credits appropriately.
This looks like it could be combined with Simen's @constanttime idea. After all, a function would have to run in constant time in order for the credits to be consistent.

Or maybe go one step further and use BigO(2)? BigO(2) would imply constant time. This would fit in with your research on combining complexities.
Apr 13 2016
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/13/2016 8:50 AM, Nordlöw wrote:
 I'm currently in an industry with extremely high demands on software
 determinism, both in space and time (hard realtime).

 My conclusion so far is that most safety-critical industries today are in
 desperate need of better (any) language support for guaranteeing determinism,
 especially time. The amount of time/money spent/wasted on complicated extra
 tooling processes and tests to assure time-determinism is just insane, at least
 in the avionics industry.
I'd be interested if you can give an overview of the existing tools/techniques for dealing with this.
Apr 13 2016
parent reply Nordlöw <per.nordlow gmail.com> writes:
On Thursday, 14 April 2016 at 01:49:03 UTC, Walter Bright wrote:
 I'd be interested if you can give an overview of the existing 
 tools/techniques for dealing with this.
A combination of time-consuming boring manual reviews (over and over again) and time-consuming calls to QAC++. There may be more I'm not yet aware of or have forgotten :)
Apr 14 2016
parent reply Observer <here nowhere.org> writes:
On Thursday, 14 April 2016 at 22:33:15 UTC, Nordlöw wrote:
 On Thursday, 14 April 2016 at 01:49:03 UTC, Walter Bright wrote:
 I'd be interested if you can give an overview of the existing 
 tools/techniques for dealing with this.
 A combination of time-consuming boring manual reviews (over and over again) and time-consuming calls to QAC++. There may be more I'm not yet aware of or have forgotten :)
It's been a long time since I was involved in real-time work, but back in that time frame, I used to collect documents on all sorts of computer-related topics. My set of papers on "Real-Time Programming and Scheduling" runs to a dozen volumes. The point is, nobody should think that this area can be suitably addressed with just a few language tweaks. It's really a thesis-level topic (and was, many times in those days, and I would expect so since then as well). Before you start thinking about language-level support, educate yourself about the larger context. Start with "Rate Monotonic Analysis" and follow leads from there.
Apr 14 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/14/2016 5:28 PM, Observer wrote:
 It's been a long time since I was involved in real-time work,
 but back in that time frame, I used to collect documents on all
 sorts of computer-related topics.  My set of papers on "Real-Time
 Programming and Scheduling" runs to a dozen volumes.  The point
 is, nobody should think that this area can be suitably addressed
 with just a few language tweaks.  It's really a thesis-level topic
 (and was, many times in those days, and I would expect so since
 then as well).  Before you start thinking about language-level
 support, educate yourself about the larger context.  Start with
 "Rate Monotonic Analysis" and follow leads from there.
My worry would be coming up with a language feature, implementing it, and then discovering it is useless.
Apr 14 2016
parent reply Observer <here nowhere.org> writes:
On Friday, 15 April 2016 at 02:29:12 UTC, Walter Bright wrote:
 On 4/14/2016 5:28 PM, Observer wrote:
 Nobody should think that this area can be suitably addressed 
 with just a few language tweaks.  It's
 really a thesis-level topic.
 My worry would be coming up with a language feature, implementing it, and then discovering it is useless.
I don't want to be entirely discouraging about this. Much has happened in the programming world over decades of thinking and development, and real-time work is certainly an interesting problem, especially as we evolve computing toward IoT. But it will take sustained effort. Someone like Nordlöw, who has a personal stake in the outcome, will have to pick up the ball and run with it.

I think the right approach would be the D equivalent of a strong technical proposal such as is done in the N-series papers in the C and C++ language-standards evolution process. That is, papers that include motivation, background, scope, proposed-design, and impact sections. I don't know whether DIPs as they are presently practiced are up to grade for this; the few that I've scanned seem light on background compared to what I believe would be necessary for a topic as complex as real-time work.
Apr 14 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/14/2016 8:24 PM, Observer wrote:
 On Friday, 15 April 2016 at 02:29:12 UTC, Walter Bright wrote:
 On 4/14/2016 5:28 PM, Observer wrote:
 Nobody should think that this area can be suitably addressed with just a few
 language tweaks.  It's
 really a thesis-level topic.
 My worry would be coming up with a language feature, implementing it, and then discovering it is useless.

 I don't want to be entirely discouraging about this. Much has happened in the programming world over decades of thinking and development, and real-time work is certainly an interesting problem, especially as we evolve computing toward IoT. But it will take sustained effort. Someone like Nordlöw, who has a personal stake in the outcome, will have to pick up the ball and run with it.

 I think the right approach would be the D equivalent of a strong technical proposal such as is done in the N-series papers in the C and C++ language-standards evolution process. That is, papers that include motivation, background, scope, proposed-design, and impact sections. I don't know whether DIPs as they are presently practiced are up to grade for this; the few that I've scanned seem light on background compared to what I believe would be necessary for a topic as complex as real-time work.
Yeah, I'd like to see a proposal from Per who actually works in the field.
Apr 14 2016
parent reply Nordlöw <per.nordlow gmail.com> writes:
On Friday, 15 April 2016 at 04:01:16 UTC, Walter Bright wrote:
 Yeah, I'd like to see a proposal from Per who actually works in 
 the field.
Do you mean a DIP? Is there anything else that's unclear about the needs of my company?
Apr 15 2016
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/15/2016 12:10 PM, Nordlöw wrote:
 On Friday, 15 April 2016 at 04:01:16 UTC, Walter Bright wrote:
 Yeah, I'd like to see a proposal from Per who actually works in the field.
 Do you mean a DIP?
Yes.
Apr 16 2016
prev sibling parent Nordlöw <per.nordlow gmail.com> writes:
On Wednesday, 13 April 2016 at 15:50:40 UTC, Nordlöw wrote:
 The amount of time/money spent/wasted on complicated extra 
 tooling processes and tests to assure time-determinism is just 
 insane, at least in the avionics industry.
The controlling standards here are: https://en.wikipedia.org/wiki/DO-178B https://en.wikipedia.org/wiki/ARINC_653
Apr 14 2016