
c++.stlsoft - example of <list> with auto_buffer ?

reply Denis <denis-bz-py t-online.de> writes:
Would anyone have a short example or two of <list> s with auto_buffer,
i.e. "STLSoft for dummies" ?  I'm looking at somebody else's code with <list>,
just want to speed it up.
Thanks,
cheers
  -- denis
Jul 28 2009
parent reply "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
Hi Denis

I'm a bit unclear what you want to achieve. Can you maybe provide a bit of
pseudo-code to illustrate your point?

Cheers

Matt

"Denis" <denis-bz-py t-online.de> wrote in message
news:h4n2of$2n75$1 digitalmars.com...
 Would anyone have a short example or two of <list> s with auto_buffer,
 i.e. "STLSoft for dummies" ?  I'm looking at somebody else's code with <list>,
 just want to speed it up.
 Thanks,
 cheers
   -- denis

Jul 29 2009
parent reply Denis <denis-bz-py t-online.de> writes:
Matthew Wilson Wrote:

 Hi Denis
 
 I'm a bit unclear what you want to achieve. Can you maybe provide a bit of
pseudo-code to illustrate your point ?

Right; how do I do the below ? I thought buffer_t would be an allocator, wrong --

// vector on stack, stlsoft::auto_buffer ?
#include <stlsoft/memory/auto_buffer.hpp>
#include <vector>
using namespace std;

int main( int argc, char* argv[] )
{
    typedef stlsoft::auto_buffer<int, 64> buffer_t;
    vector<int, buffer_t> v;  // <-- faster vector<int> ? wrong
    int n = (argv[1] ? atoi( argv[1] ) : 10000000);
    while( --n >= 0 ){
        v.push_back( 0 );
        v.pop_back();
    }
}
Jul 31 2009
parent reply "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
Ah, I see

This is accounted for by using the pod_vector class template, as in:

    #include <stlsoft/containers/pod_vector.hpp>

    stlsoft::pod_vector<int, std::allocator<int>, 64> v;
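To see the technique in isolation, here is a std-only sketch of the small-buffer idea that auto_buffer (and hence pod_vector) relies on: elements live in an internal array until the capacity is exceeded, and only then spill to the heap. This is an illustrative approximation for POD element types, not the STLSoft implementation; the class name SmallVec is made up.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Illustrative small-buffer vector for POD types (NOT the STLSoft code).
// No heap allocation happens while size() <= N; beyond that, elements
// are copied to a heap block, which doubles on each subsequent growth.
template <typename T, std::size_t N>
class SmallVec
{
public:
    SmallVec() : m_data(m_internal), m_size(0), m_capacity(N) {}
    ~SmallVec() { if (m_data != m_internal) std::free(m_data); }

    void push_back(T const& t)
    {
        if (m_size == m_capacity) grow();
        m_data[m_size++] = t;
    }
    void pop_back() { --m_size; }
    T& operator[](std::size_t i) { return m_data[i]; }
    std::size_t size() const { return m_size; }
    bool on_stack() const { return m_data == m_internal; }

private:
    void grow()
    {
        std::size_t newcap = m_capacity * 2;
        T* p = static_cast<T*>(std::malloc(newcap * sizeof(T)));
        std::memcpy(p, m_data, m_size * sizeof(T));  // POD: memcpy is fine
        if (m_data != m_internal) std::free(m_data);
        m_data = p;
        m_capacity = newcap;
    }

    T           m_internal[N];  // the "internal size": no allocation below N
    T*          m_data;
    std::size_t m_size;
    std::size_t m_capacity;
};
```

The performance story follows directly: while the working size stays at or below N, push_back is a store and an increment, with no allocator traffic at all.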

HTH

Matt


"Denis" <denis-bz-py t-online.de> wrote in message
news:h4udcv$1vvi$1 digitalmars.com...
 Matthew Wilson Wrote:

 Hi Denis

 I'm a bit unclear what you want to achieve. Can you maybe provide a bit of
pseudo-code to illustrate your point ?

Right; how do I do the below ? I thought buffer_t would be an allocator, wrong --

// vector on stack, stlsoft::auto_buffer ?
#include <stlsoft/memory/auto_buffer.hpp>
#include <vector>
using namespace std;

int main( int argc, char* argv[] )
{
    typedef stlsoft::auto_buffer<int, 64> buffer_t;
    vector<int, buffer_t> v;  // <-- faster vector<int> ? wrong
    int n = (argv[1] ? atoi( argv[1] ) : 10000000);
    while( --n >= 0 ){
        v.push_back( 0 );
        v.pop_back();
    }
}

Aug 02 2009
parent reply Denis <denis-bz-py t-online.de> writes:
Thanks Matt,
  what I really want is vector<int> / stack<int> faster than stl;
thought naively that stack allocation would be fast
(for macs, gcc -- I have no idea if apple have optimized stl at all.)
What would you suggest ?
Thanks, cheers
  -- denis
Aug 03 2009
parent reply Matt Wilson <matthewwilson acm.org> writes:
Denis Wrote:

 Thanks Matt,
   what I really want is vector<int> / stack<int> faster than stl;
 thought naively that stack allocation would be fast

stlsoft::pod_vector<int> is effectively vector<int> using the stack. In
certain circumstances it is faster. See section 32.2.8 of Imperfect C++.
 (for macs, gcc -- I have no idea if apple have optimized stl at all.)
 What would you suggest ?

Without further detail on your requirements, I'd suggest pod_vector.
 Thanks, cheers
   -- denis
 

Aug 03 2009
parent reply Denis <denis-bz-py t-online.de> writes:
Matt,
  what I'm looking for is fast <vector> push / pop / []
but stlsoft is quite a bit slower --

    * 4 ns int[10], fixed size on the stack
    * 40 ns <vector>
    * 1300 ns <stlsoft/containers/pod_vector.hpp>

for the stupid test below -- just 2 push, v[0] v[1], 2 pop, on one platform,
mac ppc, only; your mileage will vary.
Nonetheless factors > 2 surprise me.
("The purpose of computing is insight, not numbers" -- but for timing, you need
numbers.)

#include <stlsoft/containers/pod_vector.hpp>
#include <stdio.h>
using namespace std;

int main( int argc, char* argv[] )
{
        // times for 2 push, v[0] v[1], 2 pop, mac g4 ppc gcc-4.2 -O3 --
    // Vecint10 v;  // stack int[10]: 4 ns
    vector<int> v;  // 40 ns
    // stlsoft::pod_vector<int> v;  // 1300 ns
    // stlsoft::pod_vector<int, std::allocator<int>, 64> v;

    int n = (argv[1] ? atoi( argv[1] ) : 10) * 1000000;
    int sum = 0;

    while( --n >= 0 ){
        v.push_back( n );
        v.push_back( n );
        sum += v[0] + v[1];
        v.pop_back();
        v.pop_back();
    }
    printf( "sum: %d\n", sum );

}
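For timing numbers without platform-specific counters, std::chrono (C++11, so post-dating this thread) is enough; a sketch of the same push/pop loop, std-only, with a made-up helper name:

```cpp
#include <chrono>
#include <vector>

// Time the 2-push / 2-pop loop and report average ns per iteration.
// The running sum is returned through sum_out so the compiler cannot
// optimise the loop body away.
inline double time_push_pop_ns(int iterations, long long* sum_out)
{
    std::vector<int> v;
    long long sum = 0;

    std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();
    for (int n = iterations; --n >= 0; )
    {
        v.push_back(n);
        v.push_back(n);
        sum += v[0] + v[1];  // adds 2n per iteration
        v.pop_back();
        v.pop_back();
    }
    std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();

    *sum_out = sum;
    std::chrono::nanoseconds elapsed =
        std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
    return double(elapsed.count()) / double(iterations);
}
```

Checking the sum against the closed form (2 * sum of 0..iterations-1, i.e. iterations * (iterations - 1)) also guards against the loop being elided.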
Aug 04 2009
parent reply "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
First off, the performance advantage of pod_vector (and auto_buffer) lies in
the assumption that _in the majority of cases_ the number of elements you will
be using is <= the internal size.

Second, your test is very specific. It's not hard to come up with another test
that shows different results. I've attached one
similar to yours, but that pushes N and then pops them all.

The results for GCC 3.4 are:

std::vector:
 10: 10us (100998990)
 100: 9us (100998990)
 1000: 23us (100998990)
 10000: 226us (100998990)
stlsoft::pod_vector<int> (64):
 10: 3us (100998990)
 100: 6us (100998990)
 1000: 67us (100998990)
 10000: 3721us (100998990)
stlsoft::pod_vector<int> (256):
 10: 3us (100998990)
 100: 5us (100998990)
 1000: 50us (100998990)
 10000: 1518us (100998990)
stlsoft::pod_vector<int> (2048):
 10: 10us (100998990)
 100: 5us (100998990)
 1000: 39us (100998990)
 10000: 530us (100998990)

And for VC++ 7.1:

std::vector:
 10: 8us (100998990)
 100: 11us (100998990)
 1000: 27us (100998990)
 10000: 277us (100998990)
stlsoft::pod_vector<int> (64):
 10: 3us (100998990)
 100: 4us (100998990)
 1000: 37us (100998990)
 10000: 4320us (100998990)
stlsoft::pod_vector<int> (256):
 10: 3us (100998990)
 100: 4us (100998990)
 1000: 31us (100998990)
 10000: 1250us (100998990)
stlsoft::pod_vector<int> (2048):
 10: 10us (100998990)
 100: 5us (100998990)
 1000: 27us (100998990)
 10000: 421us (100998990)

As is often the case with these things, you need to determine in real program
circumstances whether it affords you a performance
advantage. Thankfully, you can swap it in/out with std::vector via the
pre-processor.
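The pre-processor swap can be as simple as a typedef behind a macro, with all code written against the typedef. A sketch (USE_POD_VECTOR is a made-up macro name; the default branch compiles with the standard library alone):

```cpp
#include <vector>

#ifdef USE_POD_VECTOR
# include <stlsoft/containers/pod_vector.hpp>
typedef stlsoft::pod_vector<int, std::allocator<int>, 64> vec_t;
#else
typedef std::vector<int>                                  vec_t;
#endif

// Application / benchmark code sees only vec_t, so the container is
// switched by recompiling with -DUSE_POD_VECTOR - no source changes.
inline long long push_pop_sum(int n)
{
    vec_t v;
    long long sum = 0;
    while (--n >= 0)
    {
        v.push_back(n);
        sum += v[0];  // v holds exactly one element here, namely n
        v.pop_back();
    }
    return sum;
}
```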

HTH

Matt

"Denis" <denis-bz-py t-online.de> wrote in message
news:h59in6$2ev8$1 digitalmars.com...
 Matt,
   what I'm looking for is fast <vector> push / pop / []
 but stlsoft is quite a bit slower --

     * 4 ns int[10], fixed size on the stack
     * 40 ns <vector>
     * 1300 ns <stlsoft/containers/pod_vector.hpp>

 for the stupid test below -- just 2 push, v[0] v[1], 2 pop, on one platform,
mac ppc, only; your mileage will vary.
 Nonetheless factors > 2 surprise me.
 ("The purpose of computing is insight, not numbers" -- but for timing, you
need numbers.)

 #include <stlsoft/containers/pod_vector.hpp>
 #include <stdio.h>
 using namespace std;

 int main( int argc, char* argv[] )
 {
         // times for 2 push, v[0] v[1], 2 pop, mac g4 ppc gcc-4.2 -O3 --
     // Vecint10 v;  // stack int[10]: 4 ns
     vector<int> v;  // 40 ns
     // stlsoft::pod_vector<int> v;  // 1300 ns
     // stlsoft::pod_vector<int, std::allocator<int>, 64> v;

     int n = (argv[1] ? atoi( argv[1] ) : 10) * 1000000;
     int sum = 0;

     while( --n >= 0 ){
         v.push_back( n );
         v.push_back( n );
         sum += v[0] + v[1];
         v.pop_back();
         v.pop_back();
     }
     printf( "sum: %d\n", sum );

 }

Aug 04 2009
next sibling parent reply Denis <denis-bz-py t-online.de> writes:
Matt,
  you're right, it's a trivial specific test, plus I'm missing something:
how do I use pod_vector with auto_buffer ?
Unfortunately I can't open your attachment, Firefox gets newsgroup.php ?

A funny thing is that in the test exactly as below, stlsoft::pod_vector<int> v( 64 )
gives a different long long sum than stlsoft::pod_vector<int> v --
impossible, it must be n * (n-1) in all cases ?
Thanks,
cheers
  -- denis

int main( int argc, char* argv[] )
{
        // times for 2 push, v[0] v[1], 2 pop, mac g4 ppc gcc-4.2 -O3 --
    // vector<int> v;  // 40 ns
    // stlsoft::pod_vector<int> v;
    stlsoft::pod_vector<int> v( 64 );  // sum: -10737373320000000 ??
    // stlsoft::pod_vector<int, std::allocator<int>, 64> v;

    int n = (argv[1] ? atoi( argv[1] ) : 10) * 1000000;
    long long sum = 0;

    while( --n >= 0 ){
        v.push_back( n );
        v.push_back( n );
        sum += v[0] + v[1];
        v.pop_back();
        v.pop_back();
    }
    printf( "sum: %lld\n", sum );  // 10M: 99999990000000
}




Matthew Wilson Wrote:

 First off, the performance advantage of pod_vector (and auto_buffer) lies in
the assumption that _in the majority of cases_ the
 number of elements you will be using <= the internal size.
 
 Second, your test is very specific. It's not hard to come up with another test
that shows different results. I've attached one
 similar to yours, but that pushes N and then pops them all.

Aug 05 2009
next sibling parent "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
"Denis" <denis-bz-py t-online.de> wrote in message
news:h5bl90$2nsh$1 digitalmars.com...
 Matt,
   you're right, it's a trivial specific test, plus I'm missing something:
 how do I use pod_vector with auto_buffer ?

Ah, sorry. I didn't realise this was a source of confusion. pod_vector is implemented in terms of auto_buffer.
 Unfortunately I can't open your attachment, Firefox gets newsgroup.php ?

Ok, will repost as content

Matt
Aug 05 2009
prev sibling parent "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
Here's the test program I used



#include <stlsoft/containers/pod_vector.hpp>
#include <platformstl/performance/performance_counter.hpp>
#include <vector>
#include <stdio.h>


template <typename V>
int calculate(int n, V* = NULL)
{
  V v;

  int sum = 0;

  while( --n >= 0 )
  {
    v.push_back(n);
    //v.push_back(n);
    sum += v[0] /* + v[1] */;
    //v.pop_back();
    //v.pop_back();
  }

  while(!v.empty())
  {
    v.pop_back();
  }

  return sum;
}


int main( int argc, char* argv[] )
{
  typedef std::vector<int>                                     vector_t;
  typedef stlsoft::pod_vector<int>                             pod_vector_64_t;
  typedef stlsoft::pod_vector<int, std::allocator<int>, 256>   pod_vector_256_t;
  typedef stlsoft::pod_vector<int, std::allocator<int>, 2048>  pod_vector_2028_t;

  const int REPEATS = 100;

  int ITERATIONS[] = { 10, 100, 1000, 10000 };

  int total = 0;

  platformstl::performance_counter  counter;

  platformstl::performance_counter::interval_type  times[4][STLSOFT_NUM_ELEMENTS(ITERATIONS)];

  int sums[4] = { 0, 0, 0, 0 };

  { for(int WARMUPS = 2; 0 != --WARMUPS; )
  {
    { for(size_t i = 0; i != STLSOFT_NUM_ELEMENTS(ITERATIONS); ++i)
    {
      int ITERATION = ITERATIONS[i];

      counter.start();
      sums[0] += calculate<vector_t>(ITERATION);
      counter.stop();
      times[0][i] = counter.get_microseconds();

      counter.start();
      sums[1] += calculate<pod_vector_64_t>(ITERATION);
      counter.stop();
      times[1][i] = counter.get_microseconds();

      counter.start();
      sums[2] += calculate<pod_vector_256_t>(ITERATION);
      counter.stop();
      times[2][i] = counter.get_microseconds();

      counter.start();
      sums[3] += calculate<pod_vector_2028_t>(ITERATION);
      counter.stop();
      times[3][i] = counter.get_microseconds();
    }}
  }}

  puts("std::vector:");
  { for(size_t i = 0; i != STLSOFT_NUM_ELEMENTS(ITERATIONS); ++i)
  {
    printf("\t%d:\t%dus\t(%d)\n", ITERATIONS[i], (int)times[0][i], sums[0]);
  }}

  puts("stlsoft::pod_vector<int> (64):");
  { for(size_t i = 0; i != STLSOFT_NUM_ELEMENTS(ITERATIONS); ++i)
  {
    printf("\t%d:\t%dus\t(%d)\n", ITERATIONS[i], (int)times[1][i], sums[1]);
  }}

  puts("stlsoft::pod_vector<int> (256):");
  { for(size_t i = 0; i != STLSOFT_NUM_ELEMENTS(ITERATIONS); ++i)
  {
    printf("\t%d:\t%dus\t(%d)\n", ITERATIONS[i], (int)times[2][i], sums[2]);
  }}

  puts("stlsoft::pod_vector<int> (2048):");
  { for(size_t i = 0; i != STLSOFT_NUM_ELEMENTS(ITERATIONS); ++i)
  {
    printf("\t%d:\t%dus\t(%d)\n", ITERATIONS[i], (int)times[3][i], sums[3]);
  }}


  return 0;
}
Aug 05 2009
prev sibling parent reply Denis <denis-bz-py t-online.de> writes:
Matt, ignore the previous mail on funny sum, dumb of me
 (vector<int> v( 64 ) inits to 0, pod_vector to what ?)
  -- d
Aug 05 2009
parent reply "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
It doesn't initialise content, that's right. (Hence, in part, the name
pod_vector.)
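For contrast, the standard containers do initialise: std::vector<int> v(64) value-initialises all 64 elements to zero, which is exactly the difference behind the surprising sum. A quick std-only check (helper name made up):

```cpp
#include <cstddef>
#include <vector>

// std::vector's sizing constructor value-initialises its elements, so
// every int is zero. A POD container that merely sizes its buffer would
// leave the elements indeterminate instead.
inline bool all_zero_after_sized_construction(std::size_t n)
{
    std::vector<int> v(n);
    for (std::size_t i = 0; i != v.size(); ++i)
    {
        if (v[i] != 0) return false;
    }
    return v.size() == n;
}
```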

Actually, I thought it was able to do so on a policy-basis, as can the
multidimensional arrays, but I'm wrong. Possibly the simplest
workaround for you at the moment is to derive a class from your intended
specialisation and provide a ctor with a call to memset.

class mypv
  : public stlsoft::pod_vector<int, std::allocator<int>, 512>
{
public:
  typedef stlsoft::pod_vector<int, std::allocator<int>, 512>  parent_class_type;

public:
  explicit mypv(size_t n)
    : parent_class_type(n)
  {
    memset(&(*this)[0], 0, sizeof(int) * n);
  }
};

Deriving a non-polymorphic class is something of a design smell, but as long as
you don't add member variables (or virtual methods),
you should be ok.

In hindsight, I think that pod_vector not initialising in that ctor is a design
flaw, and I'm considering changing it. I will
probably do so for STLSoft 1.10.

Matt


----- Original Message ----- 
From: "Denis" <denis-bz-py t-online.de>
Newsgroups: c++.stlsoft
Sent: Wednesday, August 05, 2009 8:18 PM
Subject: Re: example of <list> with auto_buffer ?


 Matt, ignore the previous mail on funny sum, dumb of me
  (vector<int> v( 64 ) inits to 0, pod_vector to what ?)
   -- d

Aug 05 2009
parent reply "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
 In hindsight, I think that pod_vector not initialising in that ctor is a
design flaw, and I'm considering changing it. I will
 probably do so for STLSoft 1.10.

I've now made this change to the STLSoft 1.10 branch, the latest alpha release
of which will be made in a few days' time.

Note that I also plan to change the template parameter ordering - to move the
allocator type into last place - in the same way as was done for auto_buffer
with STLSoft 1.9. As with auto_buffer, a pod_vector_old template will be added
for backwards compatibility.

Cheers

Matt

P.S. May I ask how you came to hear about STLSoft, and whether you're using any
other of its facilities?
Aug 05 2009
parent "Matthew Wilson" <matthew hat.stlsoft.dot.org> writes:
This is now available in STLSoft 1.10 (alpha 12)

Let me know how you go

Matt

"Matthew Wilson" <matthew hat.stlsoft.dot.org> wrote in message
news:h5d0kv$1p20$1 digitalmars.com...
 In hindsight, I think that pod_vector not initialising in that ctor is a
design flaw, and I'm considering changing it. I will
 probably do so for STLSoft 1.10.

I've now made this change to the STLSoft 1.10 branch, the latest alpha release
of which will be made in a few days' time. Note that I also plan to change the
template parameter ordering - to move the allocator type into last place - in
the same way as was done for auto_buffer with STLSoft 1.9. As with auto_buffer,
a pod_vector_old template will be added for backwards compatibility.

 Cheers

 Matt

 P.S. May I ask how you came to hear about STLSoft, and whether you're using
any other of its facilities?

Aug 11 2009