
digitalmars.D - If you needed any more evidence that memory safety is the future...

reply Jack Stouffer <jack jackstouffer.com> writes:
https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

A buffer overflow bug caused Heartbleed 2.0 for hundreds of 
thousands of sites. Here we are, 57 years after ALGOL 60, which 
had bounds checking, and we're still dealing with bugs from C's 
massive mistake.

This is something that valgrind could have easily picked up, but 
the devs just didn't use it for some reason. Runtime checking of 
this stuff is important, so please, don't disable safety checks 
with DMD if you're dealing with personal info.

If you use a site on this list 
https://github.com/pirate/sites-using-cloudflare and you're not 
using two factor auth, please change your password ASAP.
Feb 23
next sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
Jack Stouffer wrote:

 This is something that valgrind could have easily picked up, but the 
 devs just didn't use it for some reason. Runtime checking of this 
 stuff is important, so please, don't disable safety checks with DMD 
 if you're dealing with personal info.
or, even better: don't disable bounds checking at all. never. if you are *absolutely* sure that bounds checking *IS* the bottleneck (you *did* use your profiler to find this out, did you?), you can selectively avoid bounds checking by using `arr.ptr[i]` instead of `arr[i]` (and yes, this is unsafe; but what would you expect from removing safety checks?). forget about "-release" dmd arg. forget about "-boundscheck=off". no, really, they won't do you any good. after all, catching a bug in your program when it doesn't run in a controlled environment is even more important than catching a bug in a debugging session! don't hate your users by giving 'em software with all safety measures removed! please.
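[A minimal, self-contained D sketch of the selective trick ketmar mentions; the array and values are made up for illustration:]

```d
import std.stdio : writeln;

void main()
{
    int[] arr = [10, 20, 30];

    // Normal indexing is bounds-checked: arr[5] would throw a
    // RangeError at runtime instead of corrupting memory.
    writeln(arr[1]);

    // arr.ptr[i] bypasses the bounds check for this one access only.
    // This is @system territory: an out-of-range i here is undefined
    // behaviour, exactly the trade-off ketmar warns about.
    writeln(arr.ptr[1]);
}
```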
Feb 23
parent reply Chris Wright <dhasenan gmail.com> writes:
On Fri, 24 Feb 2017 09:14:24 +0200, ketmar wrote:
 forget about "-release" dmd arg. forget about "-boundscheck=off". no,
 really, they won't do you any good. after all, catching a bug in your
 program when it doesn't run in controlled environment is even more
 important than catching a bug in debugging session! don't hate your
 users by giving 'em software with all safety measures removed! please.
Especially since -release disables assertions and contracts. If you really want extra validation that's too expensive in the general case, you can use `version(ExpensiveValidation)` or the like.
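[The pattern Chris suggests could look like this sketch; `ExpensiveValidation` is an arbitrary, hypothetical version identifier, enabled with `dmd -version=ExpensiveValidation`:]

```d
bool expensiveInvariant(const int[] data)
{
    // Stand-in for a costly O(n^2) consistency check.
    foreach (i, x; data)
        foreach (y; data[i + 1 .. $])
            if (x > y + 1_000_000)
                return false;
    return true;
}

void push(ref int[] stack, int value)
{
    stack ~= value;

    // Cheap check: always on.
    assert(stack.length > 0);

    // Expensive check: compiled in only with -version=ExpensiveValidation,
    // so there is no need to reach for -release just to drop it.
    version (ExpensiveValidation)
        assert(expensiveInvariant(stack));
}
```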
Feb 24
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 24.02.2017 16:29, Chris Wright wrote:
 On Fri, 24 Feb 2017 09:14:24 +0200, ketmar wrote:
 forget about "-release" dmd arg. forget about "-boundscheck=off". no,
 really, they won't do you any good. after all, catching a bug in your
 program when it doesn't run in controlled environment is even more
 important than catching a bug in debugging session! don't hate your
 users by giving 'em software with all safety measures removed! please.
Especially since -release disables assertions and contracts.
No. Worse. It turns failures into UB.
Feb 24
next sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Fri, 24 Feb 2017 21:16:28 +0100, Timon Gehr wrote:

 On 24.02.2017 16:29, Chris Wright wrote:
 On Fri, 24 Feb 2017 09:14:24 +0200, ketmar wrote:
 forget about "-release" dmd arg. forget about "-boundscheck=off". no,
 really, they won't do you any good. after all, catching a bug in your
 program when it doesn't run in controlled environment is even more
 important than catching a bug in debugging session! don't hate your
 users by giving 'em software with all safety measures removed! please.
Especially since -release disables assertions and contracts.
No.
It does in fact disable assertions and contracts.
 Worse. It turns failures into UB.
Which is what ketmar described.
Feb 24
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 February 2017 at 00:50:36 UTC, Chris Wright wrote:
 On Fri, 24 Feb 2017 21:16:28 +0100, Timon Gehr wrote:
 Worse. It turns failures into UB.
Which is what ketmar described.
D allows asserts to be turned into assumes, which is potentially unsound.
Feb 24
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 25.02.2017 01:50, Chris Wright wrote:
 On Fri, 24 Feb 2017 21:16:28 +0100, Timon Gehr wrote:

 On 24.02.2017 16:29, Chris Wright wrote:
 On Fri, 24 Feb 2017 09:14:24 +0200, ketmar wrote:
 forget about "-release" dmd arg. forget about "-boundscheck=off". no,
 really, they won't do you any good. after all, catching a bug in your
 program when it doesn't run in controlled environment is even more
 important than catching a bug in debugging session! don't hate your
 users by giving 'em software with all safety measures removed! please.
Especially since -release disables assertions and contracts.
No.
It does in fact disable assertions and contracts. ...
If 'disable' (as can be reasonably expected) means the compiler will behave as if they were never present, then it does not. If it means AssertErrors will not be thrown, then this is indeed what DMD will do in practice, but it is not guaranteed by the spec.
 Worse. It turns failures into UB.
Which is what ketmar described.
Ketmar described the removal of safety measures. With -release, assertions pose an additional safety risk.
Feb 25
parent reply Chris Wright <dhasenan gmail.com> writes:
On Sat, 25 Feb 2017 13:23:03 +0100, Timon Gehr wrote:
 If 'disable' (as can be reasonably expected) means the compiler will
 behave as if they were never present, then it does not.
https://dlang.org/dmd-linux.html#switch-release Plus I actually tested it.
 Ketmar described the removal of safety measures. With -release,
 assertions pose an additional safety risk.
Assertions not executing is not undefined behavior.
Feb 25
next sibling parent Ola Fosheim Grøstad writes:
On Saturday, 25 February 2017 at 14:38:33 UTC, Chris Wright wrote:
 On Sat, 25 Feb 2017 13:23:03 +0100, Timon Gehr wrote:
 If 'disable' (as can be reasonably expected) means the 
 compiler will behave as if they were never present, then it 
 does not.
https://dlang.org/dmd-linux.html#switch-release Plus I actually tested it.
 Ketmar described the removal of safety measures. With 
 -release, assertions pose an additional safety risk.
Assertions not executing is not undefined behavior.
http://forum.dlang.org/thread/hqxoldeyugkazolllsna@forum.dlang.org
Feb 25
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 25.02.2017 15:38, Chris Wright wrote:
 On Sat, 25 Feb 2017 13:23:03 +0100, Timon Gehr wrote:
 If 'disable' (as can be reasonably expected) means the compiler will
 behave as if they were never present, then it does not.
https://dlang.org/dmd-linux.html#switch-release
This literally says "[...] assertion failures are undefined behaviour". https://en.wikipedia.org/wiki/Confirmation_bias
 Plus I actually tested it.
 ...
Why would that matter?
 Ketmar described the removal of safety measures. With -release,
 assertions pose an additional safety risk.
Assertions not executing is not undefined behavior.
I didn't say it was. I know my claim seems insane, but it is actually true. http://forum.dlang.org/post/lr4kek$2rd$1@digitalmars.com
Feb 25
next sibling parent reply Stefan Koch <uplink.coder googlemail.com> writes:
On Saturday, 25 February 2017 at 21:12:13 UTC, Timon Gehr wrote:

 I know my claim seems insane, but it is actually true.

 http://forum.dlang.org/post/lr4kek$2rd$1@digitalmars.com
The optimizer can currently not take advantage of it, and I don't see how that would change in the near future.
Feb 25
parent Johannes Pfau <nospam example.com> writes:
Am Sat, 25 Feb 2017 21:19:59 +0000
schrieb Stefan Koch <uplink.coder googlemail.com>:

 On Saturday, 25 February 2017 at 21:12:13 UTC, Timon Gehr wrote:
 
 I know my claim seems insane, but it is actually true.

 http://forum.dlang.org/post/lr4kek$2rd$1@digitalmars.com
The optimizer can currently not take advantage of it, and I don't see how that would change in the near future.
In GCC/GDC, with -release: assert(expr); ==> if (!expr) __builtin_unreachable(); Would be trivial to implement, but with unpredictable consequences. -- Johannes
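[Spelled out as a D sketch of the lowering Johannes describes. This is illustrative, not actual GDC output; `__builtin_unreachable` is the GCC builtin, importable from `gcc.builtins` under GDC (guarded here with `version (GNU)`), and `checkedIndex` is a made-up example function:]

```d
// What you write:           assert(i >= 0 && i < len);
// What a -release GDC build could emit instead:
//                           if (!(i >= 0 && i < len)) __builtin_unreachable();
int checkedIndex(int i, int len)
{
    version (GNU)
    {
        import gcc.builtins : __builtin_unreachable;
        // No runtime check is emitted, but the optimizer may assume
        // the condition holds; if it doesn't, behaviour is undefined.
        if (!(i >= 0 && i < len))
            __builtin_unreachable();
    }
    else
    {
        // Normal, checked build: a failure throws AssertError.
        assert(i >= 0 && i < len);
    }
    return i;
}
```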
Feb 26
prev sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Sat, 25 Feb 2017 22:12:13 +0100, Timon Gehr wrote:

 On 25.02.2017 15:38, Chris Wright wrote:
 On Sat, 25 Feb 2017 13:23:03 +0100, Timon Gehr wrote:
 If 'disable' (as can be reasonably expected) means the compiler will
 behave as if they were never present, then it does not.
https://dlang.org/dmd-linux.html#switch-release
This literally says "[...] assertion failures are undefined behaviour".
... It says it doesn't emit code for assertions. Then it says assertion failures are undefined behavior. How does that even work?
 https://en.wikipedia.org/wiki/Confirmation_bias
Fuck you.
Feb 25
next sibling parent reply Chris Wright <dhasenan gmail.com> writes:
On Sat, 25 Feb 2017 21:49:43 +0000, Chris Wright wrote:

 On Sat, 25 Feb 2017 22:12:13 +0100, Timon Gehr wrote:
 
 On 25.02.2017 15:38, Chris Wright wrote:
 On Sat, 25 Feb 2017 13:23:03 +0100, Timon Gehr wrote:
 If 'disable' (as can be reasonably expected) means the compiler will
 behave as if they were never present, then it does not.
https://dlang.org/dmd-linux.html#switch-release
This literally says "[...] assertion failures are undefined behaviour".
... It says it doesn't emit code for assertions. Then it says assertion failures are undefined behavior. How does that even work?
As far as I can tell, it's worded poorly enough to be incorrect. The undefined behavior is what happens after the would-be assertion failure occurs. The compiler is free to emit code as if the assertion passed, or if there is no way for the assertion to pass, it is free to do anything it wants. However, the assertion isn't emitted, so there is no assertion failure. That part is defined behavior; it was defined in the preceding sentence.
Feb 25
parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 February 2017 at 22:37:15 UTC, Chris Wright wrote:
 The undefined behavior is what happens after the would-be 
 assertion failure occurs. The compiler is free to emit code as 
 if the assertion passed, or if there is no way for the 
 assertion to pass, it is free to do anything it wants.
No. That would be implementation defined behaviour. Undefined behaviour means the whole program is illegal, i.e. not covered by the language at all.
Feb 25
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 02/26/2017 12:17 AM, Ola Fosheim Grostad wrote:
 On Saturday, 25 February 2017 at 22:37:15 UTC, Chris Wright wrote:
 The undefined behavior is what happens after the would-be assertion
 failure occurs. The compiler is free to emit code as if the assertion
 passed, or if there is no way for the assertion to pass, it is free to
 do anything it wants.
No. That would be implementation defined behaviour. Undefined behaviour means the whole program is illegal, i.e. not covered by the language at all.
"Bad things happen" by a different name smells just as foul.
Feb 25
parent Ola Fosheim Grøstad writes:
On Sunday, 26 February 2017 at 06:02:59 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 02/26/2017 12:17 AM, Ola Fosheim Grostad wrote:
 On Saturday, 25 February 2017 at 22:37:15 UTC, Chris Wright 
 wrote:
 The undefined behavior is what happens after the would-be 
 assertion
 failure occurs. The compiler is free to emit code as if the 
 assertion
 passed, or if there is no way for the assertion to pass, it 
 is free to
 do anything it wants.
No. That would be implementation defined behaviour. Undefined behaviour means the whole program is illegal, i.e. not covered by the language at all.
"Bad things happen" by a different name smells just as foul.
Most languages don't accept undefined behaviour, or rather, require it to be detected at either compile time or run time. Are there any languages outside the C family that allow illegal programs to compile and run undetected, under the assumption that such source code will never be compiled (assuming that the programmer will ensure this never happens)? Implementation defined is different, as the spec can put constraints on the implementation; e.g. how a program terminates if you run out of memory might vary, but the spec might specify that an exception should be issued before terminating.
Feb 26
prev sibling next sibling parent Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 February 2017 at 21:49:43 UTC, Chris Wright wrote:
 On Sat, 25 Feb 2017 22:12:13 +0100, Timon Gehr wrote:

 On 25.02.2017 15:38, Chris Wright wrote:
 On Sat, 25 Feb 2017 13:23:03 +0100, Timon Gehr wrote:
 If 'disable' (as can be reasonably expected) means the 
 compiler will behave as if they were never present, then it 
 does not.
https://dlang.org/dmd-linux.html#switch-release
This literally says "[...] assertion failures are undefined behaviour".
... It says it doesn't emit code for assertions. Then it says assertion failures are undefined behavior. How does that even work?
LLVM and other optimizers provide functionality for introducing axioms directly. D allows compilers to turn asserts into axioms without proof. If axioms contradict each other, the whole program becomes potentially undefined (i.e. true and false become arbitrary).
Feb 25
prev sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 02/25/2017 04:49 PM, Chris Wright wrote:
 It says it doesn't emit code for assertions.

 Then it says assertion failures are undefined behavior.

 How does that even work?
Obviously it means the would-be failure. No need for the docs to be pedantic about everything; they'd read like the average RFC, for the few people who would bother trying to read them.
Feb 25
prev sibling next sibling parent reply Chris M <chrismohrfeld comcast.net> writes:
On Friday, 24 February 2017 at 20:16:28 UTC, Timon Gehr wrote:
 On 24.02.2017 16:29, Chris Wright wrote:
 On Fri, 24 Feb 2017 09:14:24 +0200, ketmar wrote:
 forget about "-release" dmd arg. forget about 
 "-boundscheck=off". no,
 really, they won't do you any good. after all, catching a bug 
 in your
 program when it doesn't run in controlled environment is even 
 more
 important than catching a bug in debugging session! don't 
 hate your
 users by giving 'em software with all safety measures 
 removed! please.
Especially since -release disables assertions and contracts.
No. Worse. It turns failures into UB.
How so?
Feb 24
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 25.02.2017 04:12, Chris M wrote:
 On Friday, 24 February 2017 at 20:16:28 UTC, Timon Gehr wrote:
 On 24.02.2017 16:29, Chris Wright wrote:
 On Fri, 24 Feb 2017 09:14:24 +0200, ketmar wrote:
 forget about "-release" dmd arg. forget about "-boundscheck=off". no,
 really, they won't do you any good. after all, catching a bug in your
 program when it doesn't run in controlled environment is even more
 important than catching a bug in debugging session! don't hate your
 users by giving 'em software with all safety measures removed! please.
Especially since -release disables assertions and contracts.
No. Worse. It turns failures into UB.
How so?
With -release, the optimizer is allowed to assume that assertions pass. There is no switch to disable assertions. https://dlang.org/dmd-linux.html#switch-release "compile release version, which means not emitting run-time checks for contracts and asserts. Array bounds checking is not done for @system and @trusted functions, and assertion failures are undefined behaviour."
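[A concrete way to see the difference, as a sketch; the file name and function are hypothetical, and the -release behaviour is as described in the documentation quoted above:]

```d
// fail.d -- a deliberately failing assertion
import core.exception : AssertError;
import std.stdio : writeln;

void willFail()
{
    int x = 1;
    assert(x == 2, "x should be 2");
}

void main()
{
    // dmd fail.d && ./fail          -> willFail() throws AssertError
    // dmd -release fail.d && ./fail -> the check is not emitted at all;
    //     per the spec the failed assertion is then undefined behaviour,
    //     not a catchable error
    try
        willFail();
    catch (AssertError e)
        writeln("caught: ", e.msg); // only reached without -release
}
```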
Feb 25
prev sibling parent reply Kagamin <spam here.lot> writes:
On Friday, 24 February 2017 at 20:16:28 UTC, Timon Gehr wrote:
 No. Worse. It turns failures into UB.
On the other hand disabled bounds check can result in buffer overflow, which is already UB enough, so asserts turned into assumes won't add anything new.
Mar 03
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03.03.2017 16:52, Kagamin wrote:
 On Friday, 24 February 2017 at 20:16:28 UTC, Timon Gehr wrote:
 No. Worse. It turns failures into UB.
On the other hand disabled bounds check can result in buffer overflow, which is already UB enough, so asserts turned into assumes won't add anything new.
Not every program with a wrong assertion in it exceeds array bounds.
Mar 06
parent reply Kagamin <spam here.lot> writes:
On Monday, 6 March 2017 at 21:05:13 UTC, Timon Gehr wrote:
 Not every program with a wrong assertion in it exceeds array 
 bounds.
Until it does.
Mar 07
next sibling parent Ola Fosheim Grøstad writes:
On Tuesday, 7 March 2017 at 15:48:12 UTC, Kagamin wrote:
 On Monday, 6 March 2017 at 21:05:13 UTC, Timon Gehr wrote:
 Not every program with a wrong assertion in it exceeds array 
 bounds.
Until it does.
Going outside array bounds isn't necessarily the same as a contradiction.
Mar 07
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 07.03.2017 16:48, Kagamin wrote:
 On Monday, 6 March 2017 at 21:05:13 UTC, Timon Gehr wrote:
 Not every program with a wrong assertion in it exceeds array bounds.
Until it does.
Not necessarily so. With -release, it will be able to both exceed and not exceed array bounds at the same time in some circumstances.

What I'm not buying is that the existence of UB in some circumstances justifies introducing more cases where UB is unexpectedly introduced. It's a continuum. Generally, if you add more failure modes, you will have more exploits.

I might need to point out that -release does not disable bounds checking in @safe code, while it has been stated that -release introduces UB for assertion failures in @safe code. There is no flag for disabling assertion/contract checking without potentially introducing new UB. Why is this the best possible situation?
Mar 08
next sibling parent reply Dukc <ajieskola gmail.com> writes:
On Wednesday, 8 March 2017 at 15:48:47 UTC, Timon Gehr wrote:
 On 07.03.2017 16:48, Kagamin wrote:
 I might need to point out that -release does not disable bounds 
 checking in @safe code while it has been stated that -release 
 introduces UB for assertion failures in @safe code.

 There is no flag for disabling assertion/contract checking 
 without potentially introducing new UB.

 Why is this the best possible situation?
Even with a failed assertion, I believe @safe still guarantees that no memory violations will happen. The program will go awry, but it will just misbehave: it won't stomp memory that might be of another type, or even executable code. I believe that's why it's done how it is.
Mar 08
parent Dukc <ajieskola gmail.com> writes:
On Wednesday, 8 March 2017 at 19:21:58 UTC, Dukc wrote:
 On Wednesday, 8 March 2017 at 15:48:47 UTC, Timon Gehr wrote:
 On 07.03.2017 16:48, Kagamin wrote:
 [snip]
Sorry, accidently accounted that quote to a wrong person.
Mar 08
prev sibling parent Kagamin <spam here.lot> writes:
On Wednesday, 8 March 2017 at 15:48:47 UTC, Timon Gehr wrote:
 What I'm not buying is that the existence of UB in some 
 circumstances justifies introducing more cases where UB is 
 unexpectedly introduced. It's a continuum. Generally, if you 
 add more failure modes, you will have more exploits.
With buffer overflows you're already sort of screwed, so assumes don't really change the picture. If you chose UB yourself, why would you care? Performance obviously took precedence.
 I might need to point out that -release does not disable bounds 
 checking in @safe code while it has been stated that -release 
 introduces UB for assertion failures in @safe code.
UB in @safe code doesn't sound good no matter the cause.
Mar 09
prev sibling next sibling parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Friday, 24 February 2017 at 06:59:16 UTC, Jack Stouffer wrote:
 https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

 [...]
This isn't evidence that memory safety is "the future", though. This is evidence that people do not follow basic engineering practices (for whatever seemingly valid reasons - such as a project deadline - at the time).

Writing a program (with manual memory management) that does not have dangerous memory issues is not an intrinsically hard task. It does, however, require you to *design* your program, not *grow* it (which, btw, is what a software *engineer* should do anyway).

Systems such as memory ownership+borrowing, garbage collection, and (automatic) reference counting can mitigate the symptoms (and I happily use any or all of them when they are the best tool for the task at hand), but none of them will solve the real issue: the person in front of the screen (which includes you and me).
Feb 24
next sibling parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner 
wrote:
 This isn't evidence that memory safety is "the future", though.
 This is evidence that people do not follow basic engineering 
 practices (for whatever seemingly valid reasons - such as a 
 project deadline - at the time).

 Writing a program (with manual memory management) that does not 
 have dangerous memory issues is not an intrinsically hard task. 
 It does, however, require you to *design* your program, not 
 *grow* it (which, btw, is what a software *engineer* should do 
 anyway).
If the system in practice does not bear any resemblance to the system in theory, then one cannot defend the theory. If, in practice, programming languages without safety checks produce very common bugs which have caused millions of dollars in damage, then defending the language on the theory that you might be able to make it safe with the right effort is untenable.

Why is it that test CIs catch bugs when people should be running tests locally? Why is it that adding unittest blocks to the language made unit tests in D way more popular, when people should always be writing tests? Because we're human. We make mistakes. We put things off that shouldn't be put off.

It's like the new safety features on handheld buzzsaws which make it basically impossible to cut yourself. Should people be using these things safely? Yes. But accidents happen, so the tool's design takes human behavior into account and we're all the better for it.

Using a programming language which doesn't take human error into account is a recipe for disaster.
Feb 24
next sibling parent reply Ola Fosheim Grøstad writes:
On Friday, 24 February 2017 at 14:35:44 UTC, Jack Stouffer wrote:
 It's like the new safety features on handheld buzzsaws which 
 make it basically impossible to cut yourself. Should people be 
 using these things safely? Yes. But, accidents happen, so the 
 tool's design takes human behavior into account and we're all 
 the better for it.
Chainsaws are effective, but dangerous. So you should have both training and use safety equipment. Training and safety equipment is available for C-like languages (to the level of provable correctness), and such that it doesn't change the runtime performance. But at the end of the day it all depends, for some context it matters less if program occasionally fails than others. It is easier to get a small module correct than a big application with many interdependencies etc. If you don't want to max out performance you might as well consider Go, Java, C#, Swift etc. I don't really buy into the idea that a single language has to cover all bases.
Feb 24
next sibling parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Friday, 24 February 2017 at 15:15:00 UTC, Ola Fosheim Grøstad 
wrote:
 I don't really buy into the idea that a single language has to 
 cover all bases.
Neither do I. But, the progenitor of that idea is that languages have understood use-cases, and that using them outside of those areas is non-optimal. I've come to believe that any program that handles personal user data made in a language without memory safety features is not only non-optimal, but irresponsible.
Feb 24
parent reply Ola Fosheim Grøstad writes:
On Friday, 24 February 2017 at 17:18:03 UTC, Jack Stouffer wrote:
 Neither do I. But, the progenitor of that idea is that 
 languages have understood use-cases, and that using them 
 outside of those areas is non-optimal.
The way I see it, system level programming is usually not well supported by languages. What I want is not "memory safety", but adequate tools for doing dangerous stuff like pointer arithmetic to and from SIMD representations on the stack, with less chance of making mistakes. But I don't want any performance/flexibility/memory layout sacrifices or code bloat.

I don't really buy that bullet-proof and under-performing solutions improve system level programming. They are an improvement for application level programming and performant libraries.

A language that prevents me from using dangerous constructs is a non-solution. A language that detects that I might spuriously end up overwriting an unintended stack frame/storage is a solution. Of course, the latter is also a lot harder to create (it requires formal proofs).
 I've come to believe that any program that handles personal 
 user data made in a language without memory safety features is 
 not only non-optimal, but irresponsible.
Maybe, but most personal user data is at some level handled by programs written in C: database engines and operating systems. Although I've noticed that the current trend is to focus less on performance and more on scaling, e.g. CockroachDB is an implementation of a Spanner-like SQL database in Go.
Feb 24
parent Kagamin <spam here.lot> writes:
On Friday, 24 February 2017 at 21:22:10 UTC, Ola Fosheim Grøstad 
wrote:
 I don't really buy that bullet-proof and under-performing 
 solutions is improving on system level programming. It is an 
 improvement for application level programming and performant 
 libraries.

 Maybe, but most personal user data is at some level handled by 
 programs written in C: database engines and operating systems. 
 Although I've noticed that the current trend is to focus less 
 on performance and more on scaling, e.g. CockroachDB is an 
 implementation of a Spanner-like SQL database in Go.
If it doesn't scale, then it's slow no matter what it's written in. For example, SQL is slow even though it's very optimized: you simply can't handle a millionfold increase in server load and data size with C optimizations, and that increase happens just fine. If it's 1usec vs 1msec it doesn't matter, because the user doesn't see such a difference; if it's 30sec vs 60sec it still doesn't matter, because both are beyond user patience. Performance doesn't work incrementally, it either works or it doesn't, so you're unlikely to achieve anything by making it twice as fast.

Also, why did Cloudflare write a new parser? Because the Ragel parser was slow. It's written in C and does all the funny C stuff, but is slow. So where's the famous C performance?
Mar 03
prev sibling next sibling parent Dukc <ajieskola gmail.com> writes:
On Friday, 24 February 2017 at 15:15:00 UTC, Ola Fosheim Grøstad 
wrote:
 Chainsaws are effective, but dangerous. So you should have both 
 training and use safety equipment. Training and safety 
 equipment is available for C-like languages (to the level of 
 provable correctness), and such that it doesn't change the 
 runtime performance.
With chainsaws, those are probably provided if you use one professionally. But an average Joe getting his firewood from his small personal wood plantation is somewhat unlikely to have both. I don't know how common chainsaws and their usage are among non-professionals elsewhere, but here they are common.

The same thing applies to programming languages. A pro might be able to verify the safety of C with some advanced LLVM tools or whatever, but not all coders are experienced or skillful. For a team with many such members, using a language in that manner is too elitist: too many things to learn and care about, and thus it won't be done. And you can't have code being written only, or even primarily, by the best, because the less advanced need experience too.

That's not to say chainsaws or C should be banned. But it is to say that the less extra effort safety requires, the more effective it is.
Feb 24
prev sibling parent Kagamin <spam here.lot> writes:
On Friday, 24 February 2017 at 15:15:00 UTC, Ola Fosheim Grøstad 
wrote:
 If you don't want to max out performance you might as well 
 consider Go, Java, C#, Swift etc. I don't really buy into the 
 idea that a single language has to cover all bases.
Ewww, java? Why not COBOL?
Mar 03
prev sibling next sibling parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Friday, 24 February 2017 at 14:35:44 UTC, Jack Stouffer wrote:
 On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner 
 wrote:
 This isn't evidence that memory safety is "the future", though.
 This is evidence that people do not follow basic engineering 
 practices (for whatever seemingly valid reasons - such as a 
 project deadline - at the time).

 Writing a program (with manual memory management) that does 
 not have dangerous memory issues is not an intrinsically hard 
 task. It does, however, require you to *design* your program, 
 not *grow* it (which, btw, is what a software *engineer* 
 should do anyway).
If the system in practice does not bear any resemblance to the system in theory, then one cannot defend the theory. If, in practice, programming languages without safety checks produce very common bugs which have caused millions of dollars in damage, then defending the language on the theory that you might be able to make it safe with the right effort is untenable.
Since I have not defended anything, this is missing the point.
 Why is it that test CIs catch bugs when people should be 
 running tests locally? Why is it that adding unittest blocks to 
 the language made unit tests in D way more popular when people 
 should always be writing tests?
These are fallacies of presupposition.
 Because we're human. We make mistakes.
I agree, but still missing the point I made.
 We put things off that shouldn't be put off.
Assumption, but I won't dispute it in my personal case.
 It's like the new safety features on handheld buzzsaws which 
 make it basically impossible to cut yourself. Should people be 
 using these things safely? Yes. But, accidents happen, so the 
 tool's design takes human behavior into account and we're all 
 the better for it.
Quite, but that's not exclusive to memory bugs (though they are usually the ones with the most serious implications), and it still misses the point of my argument.

If you want *evidence of memory safety being the future*, you have to write programs making use of *memory safety*, put them out into the wild, and let people try to break them for at least 10-15 years (test of time). *Then* you have to provide conclusive (or at the very least hard to refute) proof that the reason no one could break them was the memory safety features; and then, *finally*, you can point to all the people *still not using memory safe languages* and say "Told you so". I know it sucks, but that's the price as far as I'm concerned; and it's one *I'm* trying to help pay by using a language like D with a GC, automatic reference counting, and scope guards for memory safety.

You *cannot* appropriate one (or even a handful of) examples of someone doing something wrong in language A as evidence for language feature C (still missing from A) being *the future*, just because feature C is *supposed* to make doing those things wrong harder. They are evidence that there's something wrong and it needs fixing. I personally think memory safety might be one viable option for that (even if it only addresses one symptom), but I've only ever witnessed over-promises such as "X is the future" in anything engineering related play out to less than what was promised.
 Using a programing language which doesn't take human error into 
 account is a recipe for disaster.
Since you're going for extreme generalization, I'll bite: Humans are a recipe for disaster.
Feb 24
parent reply Kagamin <spam here.lot> writes:
On Friday, 24 February 2017 at 19:19:57 UTC, Moritz Maxeiner 
wrote:
 *Then* you have to provide conclusive (or at the very least 
 hard to refute) proof that the reason that no one could break 
 them were the memory safety features; and then, *finally*, you 
 can point to all the people *still not using memory safe 
 languages* and say "Told you so".
Such proof is impossible because correct programs can be written in unsafe languages.
Mar 03
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Friday, 3 March 2017 at 16:43:05 UTC, Kagamin wrote:
 On Friday, 24 February 2017 at 19:19:57 UTC, Moritz Maxeiner 
 wrote:
 *Then* you have to provide conclusive (or at the very least 
 hard to refute) proof that the reason that no one could break 
 them were the memory safety features; and then, *finally*, you 
 can point to all the people *still not using memory safe 
 languages* and say "Told you so".
Such proof is impossible because correct programs can be written in unsafe languages.
And you can write memory incorrect programs in what's currently called memory safe languages[1], which is why we need more programs in such languages to reach a reasonable sample size for comparison and analysis against programs in classic languages such as C/C++. A formal, mathematical proof is impossible, yes, but if you have a large enough sample size of programs in a memory safe(r) language, *and* can verify that they are indeed memory correct (and thus not open to all the usual attack vectors), then that falls under what I'd categorize as "hard to refute". But you're right, I should've been more specific, my bad.

[1] https://www.x41-dsec.de/reports/Kudelski-X41-Wire-Report-phase1-20170208.pdf
Mar 03
parent reply Kagamin <spam here.lot> writes:
On Friday, 3 March 2017 at 17:33:14 UTC, Moritz Maxeiner wrote:
 And you can write memory incorrect programs in what's currently 
 called memory safe languages[1]
Those look like mistakes in interfacing between C and Rust, so it's not really written in a safe language. And most of them are in cryptographic security rather than memory safety; safe languages give no advantage there. But it still does demonstrate a lack of safety issues.
 A formal, mathematical proof is impossible, yes, but if you 
 have a large enough sample size of programs in a memory safe(r) 
 language, *and* can verify that they are indeed memory correct 
 (and thus not open to all the usual attack vectors), then that 
 falls what I'd categorize under "hard to refute". But you're 
 right, I should've been more specific, my bad.
Does anybody try to refute it? Safe languages are not rejected for their safety.
Mar 07
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Tuesday, 7 March 2017 at 16:18:01 UTC, Kagamin wrote:
 On Friday, 3 March 2017 at 17:33:14 UTC, Moritz Maxeiner wrote:
 And you can write memory incorrect programs in what's 
 currently called memory safe languages[1]
Those look like mistakes in interfacing between C and Rust. So it's not really written in a safe language. And most of them are in cryptographic security rather than memory safety. Safe languages give no advantage there. But it still does demonstrate lack of safety issues.
Then we need to define "memory safe language" a lot more strictly than the term is currently being used, and both D and Rust won't qualify as memory safe (since you can write unsafe code in them).
 A formal, mathematical proof is impossible, yes, but if you 
 have a large enough sample size of programs in a memory 
 safe(r) language, *and* can verify that they are indeed memory 
 correct (and thus not open to all the usual attack vectors), 
 then that falls what I'd categorize under "hard to refute". 
 But you're right, I should've been more specific, my bad.
Does anybody try to refute it? Safe languages are not rejected for their safety.
Right now, of course not, since the burden of proof is on the side advocating memory safety (i.e. us).
Mar 07
parent reply XavierAP <n3minis-git yahoo.es> writes:
On Tuesday, 7 March 2017 at 21:24:43 UTC, Moritz Maxeiner wrote:
 Then we need to define "memory safe language" a lot stricter 
 than it's currently being used, and both D and Rust won't 
 qualify as memory safe (since you can write unsafe code in 
 them).
D does not claim to be memory-safe always. It does afaik do so within @safe environments (barring internal runtime or compiler bugs of course). Even C# has the same approach of allowing "unsafe" environments.
 A formal, mathematical proof is impossible, yes, but if you 
 have a large enough sample size of programs in a memory 
 safe(r) language, *and* can verify that they are indeed 
 memory correct (and thus not open to all the usual attack 
 vectors), then that falls what I'd categorize under "hard to 
 refute". But you're right, I should've been more specific, my 
 bad.
Does anybody try to refute it? Safe languages are not rejected for their safety.
Right now, of course not, since the burden of proof is on the side advocating memory safety (i.e. us).
I don't agree on the burden of proof. It is a safe assumption that if you increase safety checks, safety will be improved. It cannot, and needn't, be proven. If someone proposes installing railing in a stairway, or a fence along a railway, to decrease accidents, who would demand this to be proven? How, in a sandbox parallel universe that we control as gods and can rewind in time? Because there is no other way.

Plus statistics can prove nothing -- this logical truth cannot be overstated. Even if you invested, for the sake of an experiment, in setting up a race between huge teams of equally qualified programmers given the same exact tasks, nothing could be truly proven. But we're even talking about simply having more experience from completely different projects and developers among the evaluated languages or families.

Actually we have quite a lot of experience already: by now Java and later .NET have been around for most of the time C++ has, to name just one example.
Mar 07
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Tuesday, 7 March 2017 at 22:07:51 UTC, XavierAP wrote:
 On Tuesday, 7 March 2017 at 21:24:43 UTC, Moritz Maxeiner wrote:
 [...]
D does not claim to be memory-safe always. It does afaik do so within @safe environments (barring internal runtime or compiler bugs of course). Even C# has the same approach of allowing "unsafe" environments.
And as I've pointed out before, if your safe code can call hidden, unsafe code it doesn't even know about then your guarantees mean nothing and you're back to trusting programmers.
 [...]
Does anybody try to refute it? Safe languages are not rejected for their safety.
Right now, of course not, since the burden of proof is on the side advocating memory safety (i.e. us).
I don't agree on the burden of proof. It is a safe assumption that if you increase safety checks, safety will be improved.
If those safety checks actually get applied to those parts that need them (i.e. by the programmers writing programs in that language), I'd probably agree. But there's no guarantee that that is the case, as your friend, hidden unsafe code, is still there. Besides that, it's a hypothesis, and like with *all* of them the burden of proof lies with the people proposing/claiming it.
 It cannot or needn't be proven. If someone proposes installing 
 railing in a stairway, or a fence along a railway, to decrease 
 accidents, who would demand this to be proven?
A person with a good sense of engineering (or for that matter the scientific method) in them ought to demand that both your railing and your fence be proven to actually deal with the kinds of issues they are supposed to deal with before approving their installation. Which is what institutions like [1] are for with regards to material engineering products. Doing anything else is reckless endangerment, since it gives you the feeling of being safe without actually being safe. Like using @safe in D, or Rust, and being unaware of unsafe code hidden from you behind "safe" facades.
 Plus statistics can prove nothing -- this logical truth cannot 
 be overstated.
It's called empirical evidence and it's one of the most important techniques in science[2] to create foundation for a hypothesis. [1] https://en.wikipedia.org/wiki/Technischer_%C3%9Cberwachungsverein [2] http://www.juliantrubin.com/bigten/millikanoildrop.html
Mar 08
next sibling parent reply XavierAP <n3minis-git yahoo.es> writes:
On Wednesday, 8 March 2017 at 12:42:37 UTC, Moritz Maxeiner wrote:
 On Tuesday, 7 March 2017 at 22:07:51 UTC, XavierAP wrote:
 Plus statistics can prove nothing -- this logical truth cannot 
 be overstated.
It's called empirical evidence and it's one of the most important techniques in science[2] to create foundation for a hypothesis.
No, mistaking historical data for empirically valid data is the most dangerous scientific mistake. The empirical method requires all conditions to be controlled, in order for factors to be isolated, and every experiment to be reproducible.
Mar 08
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Wednesday, 8 March 2017 at 13:14:19 UTC, XavierAP wrote:
 On Wednesday, 8 March 2017 at 12:42:37 UTC, Moritz Maxeiner 
 wrote:
 On Tuesday, 7 March 2017 at 22:07:51 UTC, XavierAP wrote:
 Plus statistics can prove nothing -- this logical truth 
 cannot be overstated.
It's called empirical evidence and it's one of the most important techniques in science[2] to create foundation for a hypothesis.
No, mistaking historical data as empirically valid is the most dangerous scientific mistake. The empirical method requires all conditions to be controlled, in order for factors to be isolated, and every experiment to be reproducible.
This is true for controlled experiments like the one I pointed to and this model works fine for those sciences where controlled experiments are applicable (e.g. physics). For (soft) sciences where human behaviour is a factor - and it usually is one you cannot reliably control - using quasi-experiments with a high sample size is a generally accepted practice to accumulate empirical data.
Mar 08
parent reply XavierAP <n3minis-git yahoo.es> writes:
On Wednesday, 8 March 2017 at 14:02:40 UTC, Moritz Maxeiner wrote:
 On Wednesday, 8 March 2017 at 13:14:19 UTC, XavierAP wrote:
 On Wednesday, 8 March 2017 at 12:42:37 UTC, Moritz Maxeiner 
 wrote:
 On Tuesday, 7 March 2017 at 22:07:51 UTC, XavierAP wrote:
 Plus statistics can prove nothing -- this logical truth 
 cannot be overstated.
It's called empirical evidence and it's one of the most important techniques in science[2] to create foundation for a hypothesis.
No, mistaking historical data as empirically valid is the most dangerous scientific mistake. The empirical method requires all conditions to be controlled, in order for factors to be isolated, and every experiment to be reproducible.
This is true for controlled experiments like the one I pointed to and this model works fine for those sciences where controlled experiments are applicable (e.g. physics). For (soft) sciences where human behaviour is a factor - and it usually is one you cannot reliably control - using quasi-experiments with a high sample size is a generally accepted practice to accumulate empirical data.
Right, but that's why "soft" sciences that use any "soft" version of the empirical method, have no true claim to being actual sciences. And it's why whenever you don't like an economist's opinion, you can easily find another with the opposite opinion and his own model. There are other sane approaches for "soft" sciences where (controlled) experiments aren't possible: https://en.wikipedia.org/wiki/Praxeology#Origin_and_etymology Of course these methods have limits on what can be inferred, whereas with models tuned onto garbage historical statistics you can keep publishing to scientific journals forever, and never reach any incontestable conclusion.
Mar 08
parent Moritz Maxeiner <moritz ucworks.org> writes:
On Wednesday, 8 March 2017 at 14:50:18 UTC, XavierAP wrote:
 On Wednesday, 8 March 2017 at 14:02:40 UTC, Moritz Maxeiner 
 wrote:
 [...]

 This is true for controlled experiments like the one I pointed 
 to and this model works fine for those sciences where 
 controlled experiments are applicable (e.g. physics).
 For (soft) sciences where human behaviour is a factor - and it 
 usually is one you cannot reliably control - using 
 quasi-experiments with a high sample size is a generally 
 accepted practice to accumulate empirical data.
Right, but that's why "soft" sciences that use any "soft" version of the empirical method, have no true claim to being actual sciences.
That is an opinion, though; same as my initial position that enough empirical data about whether people in memory safe languages (but where your safe code can call hidden unsafe code without you knowing it) actually end up creating memory safe programs could provide enough foundation to exclaim "I told you so" if it turns out that the discrepancy is significant enough (what significant means in this context is, of course, another opinion).
 And it's why whenever you don't like an economist's opinion, 
 you can easily find another with the opposite opinion and his 
 own model.
I'm not an economist and can neither speak to the assumptions in this, nor the conclusion.
 There are other sane approaches for "soft" sciences where 
 (controlled) experiments aren't possible:

 https://en.wikipedia.org/wiki/Praxeology#Origin_and_etymology

 Of course these methods have limits on what can be inferred, 
 whereas with models tuned onto garbage historical statistics 
 you can keep publishing to scientific journals forever, and 
 never reach any incontestable conclusion.
Thank you, I'll put praxeology on my list of things to read up on.
Mar 08
prev sibling parent reply XavierAP <n3minis-git yahoo.es> writes:
On Wednesday, 8 March 2017 at 12:42:37 UTC, Moritz Maxeiner wrote:
 Doing anything else is reckless endangerment since it gives you 
 the feeling of being safe without actually being safe. Like 
 using @safe in D, or Rust, and being unaware of unsafe code 
 hidden from you behind "safe" facades.
Safe code should be unable to call unsafe code -- including interop with any non-D or binary code, here I agree. I was supposing this is already the case in D but I'm not really sure.
Mar 08
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Wednesday, 8 March 2017 at 13:30:42 UTC, XavierAP wrote:
 On Wednesday, 8 March 2017 at 12:42:37 UTC, Moritz Maxeiner 
 wrote:
 Doing anything else is reckless endangerment since it gives 
 you the feeling of being safe without actually being safe. 
 Like using @safe in D, or Rust, and being unaware of unsafe 
 code hidden from you behind "safe" facades.
Safe code should be unable to call unsafe code -- including interop with any non-D or binary code, here I agree. I was supposing this is already the case in D but I'm not really sure.
You can hide unsafe code in D by annotating a function with @trusted, the same way you can hide unsafe code in Rust with unsafe blocks.
Mar 08
parent reply Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 3/8/2017 5:56 AM, Moritz Maxeiner via Digitalmars-d wrote:
 On Wednesday, 8 March 2017 at 13:30:42 UTC, XavierAP wrote:
 On Wednesday, 8 March 2017 at 12:42:37 UTC, Moritz Maxeiner wrote:
 Doing anything else is reckless endangerment since it gives you the 
 feeling of being safe without actually being safe. Like using @safe 
 in D, or Rust, and being unaware of unsafe code hidden from you 
 behind "safe" facades.
Safe code should be unable to call unsafe code -- including interop with any non-D or binary code, here I agree. I was supposing this is already the case in D but I'm not really sure.
You can hide unsafe code in D by annotating a function with @trusted, the same way you can hide unsafe code in Rust with unsafe blocks.
Clearly marked is an interesting definition of hidden.
Mar 08
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Wednesday, 8 March 2017 at 17:40:29 UTC, Brad Roberts wrote:
 [...]
 You can hide unsafe code in D by annotating a function with 
 @trusted the same way you can hide unsafe code in Rust with 
 unsafe blocks.
Clearly marked is an interesting definition of hidden.
---
module mymemorysafexyzlibrary;

import core.stdc.stdlib : malloc;

struct Context { /* ... */ }

@safe Context* createContextSafely()
{
    return () @trusted {
        // What's gonna happen if you use this?
        // Ask your memory allocation professional
        // (it deliberately allocates one byte too few):
        void* foo = malloc(Context.sizeof - 1);
        return cast(Context*) foo;
    }();
}
---

The operating word here being "can". The above is semantically equivalent (assuming the delegate gets optimized out) to an unsafe block inside a Rust function. And yes, that's what I consider hidden unsafe code, and it means that if you call function `bar` from a @safe function `foo`, `bar` being marked as @safe does not save you from auditing `bar`'s source code.
Mar 08
parent reply XavierAP <n3minis-git yahoo.es> writes:
On Wednesday, 8 March 2017 at 21:02:23 UTC, Moritz Maxeiner wrote:
 On Wednesday, 8 March 2017 at 17:40:29 UTC, Brad Roberts wrote:
 [...]
 You can hide unsafe code in D by annotating a function with 
 @trusted the same way you can hide unsafe code in Rust with 
 unsafe blocks.
Clearly marked is an interesting definition of hidden.
The operating word here being "can". The above is semantically equivalent (assuming the delegate gets optimized out) to an unsafe block inside a Rust function. And yes, that's what I consider hidden unsafe code, and it means that if you call function `bar` from a @safe function `foo`, `bar` being marked as @safe does not save you from auditing `bar`'s source code.
Indeed safety isn't transitive as I thought. @safe may call @trusted, which may include any unsafe implementation as long as the external interface does not. I suppose it was decided back at the time that the opposite would be too restrictive.

Then truly safe client code can rely on simple trust established from the bottom up, originating from @system unsafe code that is at least hopefully long lasting, stable and more tested (even if manually lol). If client code, often rapidly updated, scarcely tested and under pressure of feature creep, is written in @safe D, this can still reduce the number of failure modes.

Also, at least as of 2010 Andrei's book stated that "At the time of this writing, SafeD is of alpha quality -- meaning that there may be unsafe programs [@safe code blocks] that pass compilation, and safe programs that don't -- but is an area of active development." And 7 years later in this forum I'm hearing many screams for @nogc but little love for @safe...
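To make that non-transitivity concrete, here's a minimal sketch (hypothetical names) of what "@safe may call @trusted" means in practice: the compiler checks nothing *inside* the @trusted function, it only takes the annotation's word that the external interface is safe.

```d
@trusted int first(int[] arr)
{
    // Inside @trusted the compiler performs no safety checks:
    // indexing through .ptr skips bounds checking entirely,
    // so this is undefined behaviour if arr is empty.
    return arr.ptr[0];
}

@safe int caller()
{
    // Compiles without complaint: @trusted is callable from @safe,
    // and the unchecked access above is invisible from here.
    return first([1, 2, 3]);
}
```

So a @safe signature alone tells you nothing about the @trusted code it may reach; only an audit of the callee does.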
Mar 08
next sibling parent Moritz Maxeiner <moritz ucworks.org> writes:
On Wednesday, 8 March 2017 at 22:38:24 UTC, XavierAP wrote:
 On Wednesday, 8 March 2017 at 21:02:23 UTC, Moritz Maxeiner 
 wrote:
 [...]

 The operating word here being "can". The above is semantically 
 equivalent (assuming the delegate gets optimized out)  to an 
 unsafe block inside a Rust function. And yes, that's what I 
 consider hidden unsafe code, and it means that if you call 
 function `bar` from a @safe function `foo`, `bar` being marked 
 as @safe does not save you from auditing `bar`'s source code.
Indeed safety isn't transitive as I thought. @safe may call @trusted, which may include any unsafe implementation as long as the external interface does not. I suppose it was decided back at the time that the opposite would be too restrictive. Then truly safe client code can rely on simple trust established from the bottom up originating from @system unsafe code that is at least hopefully long lasting and stable and more tested (even if manually lol).
If the use case has no problem with that kind of trust, indeed. Unfortunately even already long established, and presumably stable C APIs have tended to turn into horrible nightmares on many an occasion. *cough* openssl *cough*, so this will need to be something to evaluate on a project by project, dependency by dependency basis imho.
 If client code, often rapidly updated, scarcely tested and 
 under pressure of feature creep, is written in  safe D, this 
 can still reduce the amount of failure modes.
I don't disagree with that. Writing your own code in @safe has considerable advantages (first and foremost personal peace of mind :) ). It's just that other people writing their code in @safe does not provide you as a potential user of their code with any guarantees. You need to either extend those people the exact kind of trust you would if they had written their code in @system, or audit their code.

It does make auditing considerably faster, though, since you can search for all instances of @trusted and evaluate their internals and how they're being interfaced with (i.e. you can omit auditing @safe functions that don't call @trusted functions).
 Also at least as of 2010 Andrei's book stated that "At the time 
 of this writing, SafeD is of alpha quality -- meaning that 
 there may be unsafe programs [@safe code blocks] that pass 
 compilation, and safe programs that don't -- but is an area of 
 active development." And 7 years later in this forum I'm 
 hearing many screams for @nogc but little love for @safe...
Well, I can't speak for others, but I generally just use the GC for most things (which is by definition memory safe sans any bugs) and when I do need to step outside of it I use scope guards, refcounting, and have valgrind run (the only annoying part about valgrind with D is that there are some 96 bytes that it always reports as possibly lost and you have to suppress that).

Also, when I look at the list of things forbidden in @safe[1] I don't see anything I actually do anyway, so the current implementation status of @safe has so far not been a particular concern of mine.

[1] https://dlang.org/spec/function.html#safe-functions
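For illustration, the scope-guard part of that workflow can be sketched like this (hypothetical function; assumes C's calloc/free as the manual allocator). scope(exit) runs when control leaves the scope on *any* path, including via exceptions, so the cleanup can't be forgotten:

```d
import core.stdc.stdlib : calloc, free;

// Manual allocation outside the GC, with leak-proof cleanup:
// scope(exit) pairs the free() right next to the allocation.
int sumSquares(int n) @system
{
    auto buf = cast(int*) calloc(n, int.sizeof);
    if (buf is null)
        throw new Exception("allocation failed");
    scope(exit) free(buf); // runs on return *and* on throw

    int total = 0;
    foreach (i; 0 .. n)
    {
        buf[i] = i * i;  // fill the manual buffer
        total += buf[i];
    }
    return total;
}
```

Even if the loop body threw, the guard would still release the buffer, which is exactly what makes this pattern pleasant to verify with valgrind.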
Mar 08
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Wed, Mar 08, 2017 at 10:38:24PM +0000, XavierAP via Digitalmars-d wrote:
[...]
 Also at least as of 2010 Andrei's book stated that "At the time of
 this writing, SafeD is of alpha quality -- meaning that there may be
 unsafe programs [@safe code blocks] that pass compilation, and safe
 programs that don't -- but is an area of active development." And 7
 years later in this forum I'm hearing many screams for @nogc but
 little love for @safe...
To be fair, though, in the past several months Walter has merged quite a number of PRs to dmd that close many of the holes found in @safe. I don't think we can say @safe is bulletproof yet, but it would be unfair to say that no progress has been made.

T

-- 
Leather is waterproof. Ever see a cow with an umbrella?
Mar 08
prev sibling parent Jerry <hurricane hereiam.com> writes:
On Friday, 24 February 2017 at 14:35:44 UTC, Jack Stouffer wrote:
 Why is it that test CIs catch bugs when people should be 
 running tests locally?
CI tests all platforms, not just the one a user is on, and it does so simultaneously. In the case of something like DMD, the test suite is a pain in the ass to set up and run, and there's no documentation on how to do it either. I think LDC's wiki has some information on how it needs to be set up, but it's a bit different, as they describe how to run the tests the way LDC has them set up, which is different from how it is done in DMD.
Feb 25
prev sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner 
wrote:
 On Friday, 24 February 2017 at 06:59:16 UTC, Jack Stouffer 
 wrote:
 https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

 [...]
This isn't evidence that memory safety is "the future", though.
Completely agreed. This only shows that memory safety is not the present. Not that it is "the future". This reasoning reminds me of the Keynes quote: "The market can stay irrational longer than you can stay solvent."
Mar 02
next sibling parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Thursday, 2 March 2017 at 23:00:34 UTC, Guillaume Piolat wrote:
 On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner 
 wrote:
 On Friday, 24 February 2017 at 06:59:16 UTC, Jack Stouffer 
 wrote:
 https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

 [...]
This isn't evidence that memory safety is "the future", though.
Completely agreed. This only shows that memory safety is not the present. Not that it is "the future".
For what it's worth: I do hope memory safety becomes a common feature, and what languages like D and Rust do on that front is great (even though both D's still heavily integrated GC as well as Rust's static analysis have their downsides).

My major gripe, though, is still that people tend to create "safe" wrappers around "unsafe" (mostly) C libraries, which (in the sense of safety) doesn't really help me as a developer at all: Now I not only have to trust that the C library doesn't do horrible stuff (or audit its source), I *also* have to extend the same trust/time to the wrapper, because since it must interface with C, all possible compiler guarantees for what that wrapper actually *does* are null and void (-> D's @system / Rust's unsafe blocks).

Great, if I *truly* care about safety my workload has increased significantly compared to just using the "unsafe" C APIs myself (which is easy in D and a PITA in Rust)! In reality, of course, I just use the wrapper and die a little inside about the fact that I have to trust even more people to get things right when all evidence shows that I totally shouldn't.

TL/DR: I wish people would write more native libraries in safe languages, but who has the time for that?
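A tiny sketch of such a wrapper (the wrapper name is made up): the compiler cannot see across the extern(C) boundary, so marking it @trusted is a human promise, not a machine-checked guarantee.

```d
import core.stdc.string : strlen;  // unchecked C function, @system
import std.string : toStringz;     // makes a NUL-terminated copy

// A "safe" facade over a C API: callable from @safe code, but its
// actual safety rests entirely on the author upholding strlen's
// precondition (a NUL-terminated string), which toStringz provides.
@trusted size_t cLength(string s)
{
    return strlen(toStringz(s));
}
```

Drop the toStringz and pass `s.ptr` directly and the wrapper still compiles, still claims @trusted, and is now a buffer over-read waiting to happen; that's the trust gap being complained about above.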
Mar 02
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2017-03-03 03:11, Moritz Maxeiner wrote:

 For what it's worth: I do hope memory safety becomes a common feature
 and what languages like D and Rust do on that front is great (even
 though both D's still heavily integrated GC as well as Rust's static
 analysis have their downsides).
 My major gripe, though, is still that people tend to create "safe"
 wrappers around "unsafe" (mostly) C libraries, which (in the sense of
 safety) doesn't really help me as a developer at all:
 Now I not only have to trust that the C library doesn't do horrible
 stuff (or audit its source), I *also* have to extend the same trust/time
 to the wrapper, because since it must interface with C all possible
 compiler guarantees for what that wrapper actually *does* are null and
 void (-> D's @system / Rust's unsafe blocks).
 Great, if I *truly* care about safety my workload has increased
 significantly compared to just using the "unsafe" C APIs myself (which
 is easy in D and a PITA in Rust)!
 In reality, of course, I just use the wrapper and die a little inside
 about the fact that I have to trust even more people to get things right
 when all evidence shows that I totally shouldn't.
 TL/DR: I wish people would write more native libraries in safe
 languages, but who has the time for that?
So we need operating systems and the core libraries to be built from the ground up with memory safety in mind, in these kinds of languages.

-- 
/Jacob Carlborg
Mar 03
parent reply Moritz Maxeiner <moritz ucworks.org> writes:
On Friday, 3 March 2017 at 09:22:31 UTC, Jacob Carlborg wrote:
 On 2017-03-03 03:11, Moritz Maxeiner wrote:

 [...]
 TL/DR: I wish people would write more native libraries in safe
 languages, but who has the time for that?
So we need operating systems and the core libraries to be built from the ground up with memory safety in mind in these kind of languages.
That would be a good next step from an engineering standpoint, I agree, to proceed to minimize the amount of trust in people you need to have vs verifiable safety.

I have considered porting something like seL4[1] to Rust, but ultimately this would take a significant amount of time, and even if done you'd then have the biggest problem any new kernel faces: hardware support. Driver development is AFAIK mostly done by people working for the hardware manufacturer, and you're going to have a hard (probably closer to impossible) time convincing them to spend money on driver development for you. And if they don't, you'll have close to 30 years of hardware support to catch up on by yourself.

But suppose you limit yourself to a single (or at most a handful of homogeneous) platform(s) like [2], e.g. a new AArch64 board. Suppose you even take one where the hardware is open so you can audit its schematics; then you'll *still* either have to use proprietary firmware for the (partially onboard) periphery (and have unsafe interfaces to them), or - once again - write all the device firmware yourself.

And once you've done all of that you're still missing userspace, i.e. you have a nice new OS without any actual use for it (yet). So you either start writing your own incompatible, safe userspace, or you decide to integrate the userspace of existing OSs (probably POSIX?) into your new OS, which means writing your own (safe) libc, (safe) pthread, etc., exposing (once again) unsafe APIs at the top. It will be safer than what we currently have on e.g. Linux, since you can probably make sure that unsafe use of them won't result in kernel exploits; this will, of course, take even more time.

Finally, at the arduous end of your journey you're likely going to notice what - in my experience - most new OSs I've observed over the years experience: essentially nobody is interested in actually switching to a volunteer-based OS.
Honestly, I think you need serious corporate backing, a dedicated team, and like 5-10 years (low estimate) of guaranteed development time to have a snowballs chance in hell to pull this off and the only possible sponsors for this I'm both aware of and would currently trust not to cut you off in the middle are either already working on their own OS[3], or have dedicated their R&D to other things[4]. [1] https://sel4.systems/ [2] https://genode.org/ [3] http://fuchsia.googlesource.com/ [4] https://www.ibm.com/watson/
Mar 03
parent reply Jacob Carlborg <doob me.com> writes:
On 2017-03-03 16:23, Moritz Maxeiner wrote:

 That would be a good next step from an engineering standpoint, I agree,
 to proceed to minimize the amount of trust in people you need to have vs
 verifiable safety.
 I have considered porting something like seL4[1] to Rust, but ultimately
 this would take a significant amount of time and even if done you'd then
 have the biggest problem any new kernel faces: Hardware support. Driver
 development is AFAIK mostly done by people working for the hardware
 manufacturer and you're going to have a hard (probably closer to
 impossible) time convincing them to spend money on driver development
 for you. And if they don't you'll have close to 30 years of hardware
 support to catch up on by yourself.
 But suppose you limit yourself to a single (or at most a handful of
 homogeneous) platform(s) like [2], e.g. a new AArch64 board. Suppose you
 even take one where the hardware is open so you can audit its
 schematics, then you'll *still* either have to use proprietary firmware
 for the (partially onboard) periphery (and have unsafe interfaces to
 them), or - once again - write all the device firmware yourself.
 And once you've done all of that you're still missing userspace, i.e.
 you have a nice new OS without any actual use for it (yet). So you
 either start writing your own incompatible, safe userspace, or you're
 going to decide to integrate the userspace of existing OSs (probably
 POSIX?) to your new OS, so you're going to be writing your own (safe)
 libc, (safe) pthread, etc, exposing (once again) unsafe APIs to the top.
 It will be safer than what we currently have on e.g Linux since you can
 probably make sure that unsafe use of them won't result in kernel
 exploits, though; this will, of course, take even more time.
 Finally, at the arduous end of your journey you're likely going to
 notice what - in my experience - most new OSs I've observed over the years
 experience: Essentially nobody is interested in actually switching to a
 volunteer-based OS.
 Honestly, I think you need serious corporate backing, a dedicated team,
 and like 5-10 years (low estimate) of guaranteed development time to
 have a snowball's chance in hell of pulling this off and the only
 sponsors for this I'm both aware of and would currently trust not to cut
 you off in the middle are either already working on their own OS[3], or
 have dedicated their R&D to other things[4].
I agree. The only potential hope I see would be to port Linux to a memory safe language. -- /Jacob Carlborg
Mar 05
next sibling parent Moritz Maxeiner <moritz ucworks.org> writes:
On Sunday, 5 March 2017 at 11:48:23 UTC, Jacob Carlborg wrote:
 [...]

 I agree. The only potential hope I see would be to port Linux 
 to a memory safe language.
That would indeed eliminate essentially all of those tasks; unfortunately, porting Linux is in itself a tremendous amount of work. The only realistic way I could think of to do this would be to follow what was done e.g. in the dmd frontend and is currently being done with remacs[1]: iteratively translate file by file, function by function. By the time you are done doing that with the Linux kernel, however - and I'm guessing 5 years is again a low estimate for the amount of work - your version will have become horribly out of sync with upstream, and then you'll continuously have to catch up with it. Unless, of course, you eventually decide that from point X onward you don't sync with upstream anymore and lose future driver support (since the Linux kernel's internal API changes with every minor release).

I'm not saying it shouldn't be attempted, btw, but anyone trying needs to be fully aware of what he/she is getting into and assemble a sizeable, reliable group of people dedicated to the task, imho.

[1] https://github.com/Wilfred/remacs
Mar 05
prev sibling parent reply Minty Fresh <minty fresh.com> writes:
On Sunday, 5 March 2017 at 11:48:23 UTC, Jacob Carlborg wrote:
 On 2017-03-03 16:23, Moritz Maxeiner wrote:

 [...]
I agree. The only potential hope I see would be to port Linux to a memory safe language.
By Linux, I hope you don't mean the kernel itself. Because outside of being an entirely fruitless venture, it shows a lack of understanding of what's involved in kernel programming. Most memory safe languages I know don't take well to bit arithmetic on pointers, deliberately smashing the stack, self-modifying code, and the whole plethora of other things required to work with the CPU in a freestanding environment. Within the span of a single function call, the same address can easily come to refer to a different location in physical memory. Forgive me if I'm wrong, but I don't think you get much benefit out of memory safety when you change the stack pointer manually and start prefilling the stack with new values for the general registers.
Mar 08
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Wednesday, 8 March 2017 at 13:12:12 UTC, Minty Fresh wrote:
 On Sunday, 5 March 2017 at 11:48:23 UTC, Jacob Carlborg wrote:
 On 2017-03-03 16:23, Moritz Maxeiner wrote:

 [...]
I agree. The only potential hope I see would be to port Linux to a memory safe language.
By Linux, I hope you don't mean the kernel itself. [...]
I will just leave this here. https://muen.codelabs.ch/
Mar 08
parent Moritz Maxeiner <moritz ucworks.org> writes:
On Wednesday, 8 March 2017 at 13:50:28 UTC, Paulo Pinto wrote:
 [...]

 I will just leave this here.

 https://muen.codelabs.ch/
This seems really cool, but I thought seL4[1] was the first in the field. Guess I'll have some more research to do :p

[1] https://sel4.systems/
Mar 08
prev sibling parent reply Kagamin <spam here.lot> writes:
On Friday, 3 March 2017 at 02:11:38 UTC, Moritz Maxeiner wrote:
 My major gripe, though, is still that people tend to create 
 "safe" wrappers around "unsafe" (mostly) C libraries, which (in 
 the sense of safety) doesn't really help me as a developer at 
 all
Wrappers are needed because C libraries have unsafe (and underdocumented) APIs that are easy to get wrong. I saw it happen twice in druntime. Safety is like optimization: you can handle it once or twice, but code handles it always, and that makes the difference.
Mar 03
parent Moritz Maxeiner <moritz ucworks.org> writes:
On Friday, 3 March 2017 at 16:38:52 UTC, Kagamin wrote:
 On Friday, 3 March 2017 at 02:11:38 UTC, Moritz Maxeiner wrote:
 My major gripe, though, is still that people tend to create 
 "safe" wrappers around "unsafe" (mostly) C libraries, which 
 (in the sense of safety) doesn't really help me as a developer 
 at all
Wrappers are needed because C libraries have unsafe (and underdocumented) APIs that are easy to get wrong. I saw it happen twice in druntime. Safety is like optimization: you can handle it once or twice, but code handles it always, and that makes the difference.
And the wrappers can get it wrong just the same as if I'd done it myself, i.e. I need to either audit the wrapper's code or trust yet another person (or several) to get things right. Of course you're right about the reduction of points of failure, but that still doesn't give me more confidence in them.
Mar 03
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 03/02/2017 06:00 PM, Guillaume Piolat wrote:
 On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner wrote:
 On Friday, 24 February 2017 at 06:59:16 UTC, Jack Stouffer wrote:
 https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

 [...]
This isn't evidence that memory safety is "the future", though.
Completely agreed. This only shows that memory safety is not the present. Not that it is "the future".
I think it's safe enough to just go ahead and interpret it as "...evidence that memory safety is important and SHOULD be the direction we take." It's English, not an ISO RFC.
Mar 02
next sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Friday, 3 March 2017 at 02:48:46 UTC, Nick Sabalausky 
(Abscissa) wrote:
 I think it's safe enough to just go ahead and interpret it as 
 "...evidence that memory safety is important and SHOULD be the 
 direction we take."
In D you have less memory corruption than in C++, which in its modern incarnation has much less than C, etc. That C programs have a lot of headline-making memory corruptions says not much about D.

Even unsafe D provides:
- initialization
- bounds checking
- slices

and that takes away a lot of memory corruptions. My point is that memory safety beyond what unsafe D provides may not add as much value as it sounds. I could quote the Pareto rule, which is gimmicky but exactly how I feel about it.
Mar 03
prev sibling parent Moritz Maxeiner <moritz ucworks.org> writes:
On Friday, 3 March 2017 at 02:48:46 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 03/02/2017 06:00 PM, Guillaume Piolat wrote:
 On Friday, 24 February 2017 at 13:38:57 UTC, Moritz Maxeiner 
 wrote:
 On Friday, 24 February 2017 at 06:59:16 UTC, Jack Stouffer 
 wrote:
 https://bugs.chromium.org/p/project-zero/issues/detail?id=1139

 [...]
This isn't evidence that memory safety is "the future", though.
Completely agreed. This only shows that memory safety is not the present. Not that it is "the future".
I think it's safe enough to just go ahead and interpret it as "...evidence that memory safety is important and SHOULD be the direction we take."
I agree with the sentiment that taking that direction is likely to yield significant benefits in the long run for both developers and end users. But "important" is another one of those things that are entirely dependent on one's viewpoint: if I run a business whose risk analysis - based on penalties due to past bug occurrences and the likely presence of more bugs in my software - concludes that investing in a transition to a memory-safer language (however we define that) is just not worth the associated costs, then it's not important to me.

I'll assume for the moment, though, that you mean the D community and the direction D should take in the future. In that case I agree, though technically a correct garbage collector is memory safe by definition (unless I missed something). What kind of changes to D (spec, druntime, phobos) would you envision (I'm honestly curious)? And are they possible without breaking existing user code? Because I don't think that, at its current userbase size, D can survive yet another break of the phobos/tango or D1/D2 dimensions.
 It's English, not an ISO RFC.
Interpretations in engineering are often necessary (I'm looking at you, ISO "specification" 7814-4), but in a technical discussion I don't want to interpret. I want to discuss the topic at hand; and I consider hyperboles such as "X is the future" to be detrimental to the effort of X, whatever X is.

And besides, while I consider memory safety to be important and use it whenever viable, unless there is sufficient proof that people using languages with memory safety built in actually produce memory-safe(r) programs, we don't have a leg to stand on. And while this may seem an intuitive and reasonable hypothesis, it's still something that has to be proven; one current case shows me, at least, that people writing Rust code can (and sometimes do) make the same kinds of mistakes regarding memory safety that they would've made in C[1]. Which does not surprise me, honestly, since all of these languages I'm aware of currently allow you to expose a "safe" API over "unsafe" internal mechanics (or, in the linked example, the other way around), and if the unsafe code is broken you're screwed. Period.

The only kind of language for which I'd implicitly accept the conclusion that writing in it produces more memory-safe programs than others is one where unsafe operations are utterly forbidden. This, of course, is impractical, since it means no C interop and would make such a language more or less irrelevant.

[1] https://www.x41-dsec.de/reports/Kudelski-X41-Wire-Report-phase1-20170208.pdf
Mar 03
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Feb 24, 2017 at 06:59:16AM +0000, Jack Stouffer via Digitalmars-d wrote:
 https://bugs.chromium.org/p/project-zero/issues/detail?id=1139
 
 A buffer overflow bug caused heartblead 2.0 for hundreds of thousands
 of sites. Here we are 57 years after ALGOL 60 which had bounds
 checking, and we're still dealing with bugs from C's massive mistake.
Walter was right that the biggest mistake of C was conflating pointers and arrays. That single decision, which seemed like a clever idea in a day and age where saving a couple of bytes seemed so important (how times have changed!), has cost the industry who knows how much as a consequence. More scarily yet, this particular pointer bug was obscured because it occurred in *generated* code. The language it was generated from (Ragel) appears not to have any safety checks in this respect, but "blindly" generated C code that simply followed whatever the source code said. As if pointer bugs aren't already too easy to inadvertently write, now we have an additional layer of abstraction to make them even less obvious to the programmer, who now has to mentally translate the higher-level constructs into low-level pointer manipulations in order to even realize something may have gone wrong. Talk about leaky(!) abstractions...
 This is something that valgrind could have easily picked up, but the
 devs just didn't use it for some reason. Runtime checking of this
 stuff is important, so please, don't disable safety checks with DMD if
 you're dealing with personal info.
[...] The elephant in the room is that the recent craze surrounding the "cloud" has conveniently collected large numbers of online services under a small number of umbrellas, thereby greatly expanding the impact of any bug that occurs in the umbrella. Instead of a nasty bug that impacts merely one or two domains, we now have a nasty bug that singlehandedly affects 4 *million* domains. Way to go, "cloud" technology! T -- Spaghetti code may be tangly, but lasagna code is just cheesy.
Feb 24
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 02/24/2017 12:47 PM, H. S. Teoh via Digitalmars-d wrote:
 The elephant in the room is that the recent craze surrounding the
 "cloud" has conveniently collected large numbers of online services
 under a small number of umbrellas, thereby greatly expanding the impact
 of any bug that occurs in the umbrella.  Instead of a nasty bug that
 impacts merely one or two domains, we now have a nasty bug that
 singlehandedly affects 4 *million* domains.  Way to go, "cloud"
 technology!
Indeed. The big original *point* of what became the internet, and why the internet got as successful as it did, was decentralization. The past decade or so of recentralization is a shame, to say the least. But I suppose it was inevitable: Now that corporations are involved, corporate interests are involved, and corporate motivator #1 is "control as much of the territory as you can: size == profit".
Feb 25
parent reply H. S. Teoh <hsteoh quickfur.ath.cx> writes:
On Sunday, 26 February 2017 at 03:54:54 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 02/24/2017 12:47 PM, H. S. Teoh via Digitalmars-d wrote:
 The elephant in the room is that the recent craze surrounding 
 the "cloud" has conveniently collected large numbers of online 
 services under a small number of umbrellas, thereby greatly
 expanding the impact of any bug that occurs in the umbrella.
 Instead of a nasty bug that impacts merely one or two domains,
 we now have a nasty bug that singlehandedly affects 4
 *million* domains.  Way to go, "cloud" technology!
Indeed. The big original *point* of what became the internet, and why the internet got as successful as it did, was decentralization. The past decade or so of recentralization is a shame, to say the least. But I suppose it was inevitable: Now that corporations are involved, corporate interests are involved, and corporate motivator #1 is "control as much of the territory as you can: size == profit".
Yet another nail in the coffin: https://www.theregister.co.uk/2017/03/01/aws_s3_outage/ --T
Mar 02
parent reply ketmar <ketmar ketmar.no-ip.org> writes:
H. S. Teoh wrote:

 Yet another nail in the coffin:

 https://www.theregister.co.uk/2017/03/01/aws_s3_outage/
i just can't stop laughing.
Mar 02
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 2 March 2017 at 20:59:44 UTC, ketmar wrote:
 H. S. Teoh wrote:

 Yet another nail in the coffin:

 https://www.theregister.co.uk/2017/03/01/aws_s3_outage/
i just can't stop laughing.
Seems like it was a fat finger error http://www.geekwire.com/2017/amazon-explains-massive-aws-outage-says-employee-error-took-servers-offline-promises-changes/
Mar 02
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Mar 02, 2017 at 09:51:44PM +0000, jmh530 via Digitalmars-d wrote:
 On Thursday, 2 March 2017 at 20:59:44 UTC, ketmar wrote:
 H. S. Teoh wrote:
 
 Yet another nail in the coffin:
 
 https://www.theregister.co.uk/2017/03/01/aws_s3_outage/
i just can't stop laughing.
Seems like it was a fat finger error http://www.geekwire.com/2017/amazon-explains-massive-aws-outage-says-employee-error-took-servers-offline-promises-changes/
Yes, which inevitably happens every now and then, because of human fallibility. But again, the elephant in the room is that in the good ole clear-weather days, such an error would at most take out one or two (or a small handful of) related sites; whereas in today's cloudy situation a single error in umbrella services like AWS can mean the outage of thousands or maybe even millions of otherwise-unrelated sites.

And thanks to the frightening notion of the Internet of Things, one day all it will take is a single failure and society will stop functioning altogether. (Even more frightening than catastrophic failure is when a security vulnerability is replicated across redundant umbrella systems, effectively making the attack surface universally wide -- and then there's no telling what kind of disastrous consequences will ensue. Cloudbleed is only the tip of the iceberg.)

T -- Let X be the set not defined by this sentence...
Mar 02
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 2 March 2017 at 22:25:49 UTC, H. S. Teoh wrote:
 But again, the elephant in the room is that in the good ole 
 clear-weather days, such an error would at most take out one or 
 two (or a small handful) of related sites; whereas in today's 
 cloudy situation a single error in umbrella services like AWS 
 can mean the outage of thousands or maybe even millions of 
 otherwise-unrelated sites.

 And thanks to the frightening notion of the Internet of Things, 
 one day all it will take is a single failure and society would 
 stop functioning altogether.
Reminds me of Nassim Taleb's work on black swans and antifragility.
Mar 02
prev sibling next sibling parent Moritz Maxeiner <moritz ucworks.org> writes:
On Thursday, 2 March 2017 at 22:25:49 UTC, H. S. Teoh wrote:
 [...]
 
 http://www.geekwire.com/2017/amazon-explains-massive-aws-outage-says-employee-error-took-servers-offline-promises-changes/
Yes, which inevitably happens every now and then, because of human fallibility. But again, the elephant in the room is that in the good ole clear-weather days, such an error would at most take out one or two (or a small handful of) related sites; whereas in today's cloudy situation a single error in umbrella services like AWS can mean the outage of thousands or maybe even millions of otherwise-unrelated sites.
To me it seems like a lot of people - once again - gambled (and lost) on one of the primary criteria of reliable engineering: Redundancy. The relevant question now, I think, is why do people keep doing this (as this is not a new phenomenon)? My current favorite hypothesis (as I don't have enough reliable data) is that they simply don't *have* to care about a couple of hours of downtime in the sense that whatever profits they may lose per year related to those outages does not come close to what they save by not paying for redundancy.
 And thanks to the frightening notion of the Internet of Things, 
 one day all it will take is a single failure and society would 
 stop functioning altogether.
One of the primary reasons (for us all) to invest in technological heterogeneity, imho: Multiple competing hardware platforms, operating systems, software stacks, etc. The more entities we have that perform similar functions but don't necessarily work the same the higher our resistance against this kind of outcome (analogous to - IIRC - how diverse ecosystems tend to be more resistant to unforeseen changes).
Mar 02
prev sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 2 March 2017 at 22:25:49 UTC, H. S. Teoh wrote:
 But again, the elephant in the room is that in the good ole 
 clear-weather days, such an error would at most take out one or 
 two (or a small handful) of related sites; whereas in today's 
 cloudy situation a single error in umbrella services like AWS 
 can mean the outage of thousands or maybe even millions of 
 otherwise-unrelated sites.
Well, but on average the outcome (SLA) is better, assuming that they can have more specialised personnel and spend more time on harnessing the infrastructure. It's just that you get global-scale downtime.
Mar 02