
digitalmars.D.learn - Error: variable i cannot be read at compile time

reply Vino <vino.bheeman hotmail.com> writes:
Hi All,

  Requesting your help with the below error in the below program.

Error:
CReadCol.d(20): Error: variable i cannot be read at compile time
CReadCol.d(21): Error: variable i cannot be read at compile time
CReadCol.d(22): Error: variable i cannot be read at compile time


Program:

import std.algorithm: joiner, sort, countUntil, uniq;
import std.container.array;
import std.csv: csvReader;
import std.stdio: File, writeln;
import std.typecons: Tuple, tuple;

auto read () {
    Array!string Ucol1, Ucol2;
    Array!int Ucol3;
    int rSize;
    auto file = File("C:\\Users\\bheev1\\Desktop\\Current\\Script\\Others\\ColRead.csv", "r");
    foreach (record; file.byLineCopy.joiner("\n").csvReader!(Tuple!(string, string, int)))
    {
        Ucol1.insertBack(record[0]);
        Ucol2.insertBack(record[1]);
        Ucol3.insertBack(record[2]);
        rSize = record.length;
    }
    return tuple(Ucol1, Ucol2, Ucol3, rSize);
}
/***********************************************************************************/
auto Master (int Size) {
    Array!int Keycol;
    for(int i = 0; i < Size; i++) {
        typeof(read()[i]) Datacol;
        Datacol.insertBack(sort(read[i].dup[]).uniq);
        foreach(k; read[i]) {
            Keycol.insertBack(Datacol[].countUntil(k));
        }
    }
    return tuple (Datacol[], Keycol[]);
}

void main () {
    int Size = read[3];
    writeln(Master(Size));
}

From,
Vino.B
Jan 04
parent reply =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 01/04/2018 08:51 AM, Vino wrote:

 auto read () {
[...]
 return tuple(Ucol1, Ucol2, Ucol3, rSize);
 }
read() returns a tuple of values of different types.
 for(int i = 0; i < Size; i++) {
 typeof(read()[i]) Datacol;
typeof is a compile-time expression, but there cannot be a consistent result for that expression when i is not known at compile time. You might try using a 'static foreach', but this time Size is not a compile-time expression:

static foreach(i; 0 .. Size) {
    typeof(read()[i]) Datacol;

Error: variable Size cannot be read at compile time

Ali
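One way around the compile-time index problem: the tuple's length is itself a compile-time constant even when Size is not, so a plain foreach over the tuple is unrolled at compile time. A minimal sketch, with a hypothetical stub standing in for the real read():

```d
import std.stdio : writeln;
import std.typecons : tuple;

// Stub with the same shape as the thread's read(); the real one parses the CSV.
auto read() {
    return tuple(["Miller", "John"], ["America", "India"], [23, 42], 2);
}

void main() {
    auto t = read();
    // Unrolled at compile time: in each iteration i is a compile-time
    // constant, so typeof(col) may differ from one iteration to the next.
    foreach (i, col; t.expand) {
        writeln(i, ": ", col);
    }
}
```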
Jan 04
parent reply Vino <vino.bheeman hotmail.com> writes:
On Thursday, 4 January 2018 at 18:49:21 UTC, Ali Çehreli wrote:
 [...]
Hi Ali,

Thank you very much. Can you suggest the best way around this issue?

From,
Vino.B
Jan 05
parent reply thedeemon <dlang thedeemon.com> writes:
On Friday, 5 January 2018 at 09:09:00 UTC, Vino wrote:
   Thank you very much, can you suggest the best way around this 
 issue.
What exactly are you trying to do in Master()? The code seems very broken. Each time you write read[i] it will call read() and read the whole file; you're going to read the file many times in this code. I don't think that was the intent.
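The repeated reads can be avoided by calling read() once and reusing the result. A sketch, with a hypothetical stub standing in for the thread's file-reading read():

```d
import std.stdio : writeln;
import std.typecons : tuple;

// Stub standing in for the thread's read(); the real one parses the CSV file.
auto read() {
    return tuple(["Miller", "John"], ["America", "India"], [23, 42], 2);
}

void main() {
    auto data = read();  // read the file once
    auto size = data[3]; // reuse the stored tuple instead of calling read() again
    writeln(size);
}
```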
Jan 05
parent reply Vino <vino.bheeman hotmail.com> writes:
On Friday, 5 January 2018 at 12:10:33 UTC, thedeemon wrote:
 On Friday, 5 January 2018 at 09:09:00 UTC, Vino wrote:
   Thank you very much, can you suggest the best way around 
 this issue.
What exactly are you trying to do in Master()? The code seems very broken. Each time you write read[i] it will call read() and read the whole file; you're going to read the file many times in this code. I don't think that was the intent.
Hi,

Please find the full code. The below code will read a ColRead.csv file which contains the below entries:

Miller America 23
John India 42
Baker Austrilia 21
Zsuwalski Japan 45
Baker America 45
Miller India 23

import std.algorithm: countUntil, joiner, sort, uniq;
import std.container.array;
import std.csv: csvReader;
import std.stdio: File, writeln;
import std.typecons: Tuple, tuple;

auto read (){
    Array!string Ucol1, Ucol2;
    Array!int Ucol3;
    int rSize;
    auto file = File("C:\\Users\\bheev1\\Desktop\\Current\\Script\\Others\\ColRead.csv", "r");
    foreach (record; file.byLineCopy.joiner("\n").csvReader!(Tuple!(string, string, int)))
    {
        Ucol1.insertBack(record[0]);
        Ucol2.insertBack(record[1]);
        Ucol3.insertBack(record[2]);
        rSize = record.length;
    }
    return tuple(Ucol1, Ucol2, Ucol3, rSize);
}

void main () {
    Array!int Key;
    int Size = read[3];
    static foreach(i; 0 .. Size) {
        typeof(read()[i]) Data;
        Data.insertBack(sort(read[0].dup[]).uniq);
        foreach(i; read[i]) {
            Key.insertBack(Data[].countUntil(i));
        }
    }
}
Jan 05
parent reply thedeemon <dlang thedeemon.com> writes:
On Friday, 5 January 2018 at 12:40:41 UTC, Vino wrote:
 What exactly are you trying to do in Master()?
Please find the full code,
Sorry, I'm asking what problem are you solving, what the program should do, what is its idea. Not what code you have written.
Jan 05
parent reply Vino <vino.bheeman hotmail.com> writes:
On Friday, 5 January 2018 at 12:47:39 UTC, thedeemon wrote:
 On Friday, 5 January 2018 at 12:40:41 UTC, Vino wrote:
 What exactly are you trying to do in Master()?
Please find the full code,
Sorry, I'm asking what problem are you solving, what the program should do, what is its idea. Not what code you have written.
Hi,

I am trying to implement data dictionary compression. Below is what each function of the program does.

Function read:
This function reads a csv file which contains 3 columns, stores the values of each column in an array (Col1: Array1 (Ucol1), Col2: Array2 (Ucol2), Col3: Array3 (Ucol3)), and returns the data.

CSV file content:
Miller America 23
John India 42
Baker Australia 21
Zsuwalski Japan 45
Baker America 45
Miller India 23

Function Main:
This function receives the data from the function read. It creates an array based on the function return type ( typeof(read()[i]) Data ), sorts the data, removes the duplicates, and stores the result in the above array. Then, using the "countUntil" function, we can accomplish the data dictionary compression.

Result:
The above file will be stored as

Data File:
Data-Col1.txt which contains [Baker, John, Miller, Zsuwalski]
Data-Col2.txt which contains [America, Australia, India, Japan]
Data-Col3.txt which contains [21, 23, 42, 45]

Index File:
Index-Col1.txt which contains [2, 1, 0, 3, 0, 2]
Index-Col2.txt which contains [0, 2, 1, 3, 0, 2]
Index-Col3.txt which contains [1, 2, 0, 3, 3, 1]

The program works for a single column.

From,
Vino.B
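For a single column, the sort/uniq/countUntil scheme described above can be sketched with plain arrays (using the sample first column from the CSV):

```d
import std.algorithm : countUntil, map, sort, uniq;
import std.array : array;
import std.stdio : writeln;

void main() {
    auto col = ["Miller", "John", "Baker", "Zsuwalski", "Baker", "Miller"];
    // Dictionary: sorted unique values of the column.
    auto dict = sort(col.dup).uniq.array;
    // Keys: for each original value, its position in the dictionary.
    auto keys = col.map!(v => dict.countUntil(v)).array;
    writeln(dict); // ["Baker", "John", "Miller", "Zsuwalski"]
    writeln(keys); // [2, 1, 0, 3, 0, 2]
}
```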
Jan 05
next sibling parent Vino <vino.bheeman hotmail.com> writes:
On Friday, 5 January 2018 at 13:09:25 UTC, Vino wrote:
 On Friday, 5 January 2018 at 12:47:39 UTC, thedeemon wrote:
 On Friday, 5 January 2018 at 12:40:41 UTC, Vino wrote:
 What exactly are you trying to do in Master()?
Please find the full code,
Sorry, I'm asking what problem are you solving, what the program should do, what is its idea. Not what code you have written.
[...]
More Info:

If we change the below line:

static foreach(i; 0 .. 1)
Output: ["Baker", "John", "Miller", "Zsuwalski"][2, 1, 0, 3, 0, 2]

static foreach(i; 1 .. 2)
Output: ["America", "Austrilia", "India", "Japan"][0, 2, 1, 3, 0, 2]

static foreach(i; 2 .. 3)
Output: [21, 23, 42, 45][1, 2, 0, 3, 3, 1]

Instead of manually changing the values I used the variable Size, where the value of Size comes from the read function (read[3], where read[3] is rSize = record.length). If I use the variable Size as

static foreach(i; 0 .. Size)

I am getting an error: "Error: variable Size cannot be read at compile time".

From,
Vino.B
Jan 05
prev sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Friday, 5 January 2018 at 13:09:25 UTC, Vino wrote:
 Sorry, I'm asking what problem are you solving, what the 
 program should do, what is its idea. Not what code you have 
 written.
[...]
Thank you for the explanation, this is a nice little task. Here's my version of the solution. I've used ordinary arrays instead of std.container.array, since the data itself is in the GC'ed heap anyway. I used a csv file separated by tabs, so I told csvReader to use '\t' as the delimiter.

import std.algorithm: countUntil, joiner, sort, uniq, map;
import std.csv: csvReader;
import std.stdio: File, writeln;
import std.typecons: Tuple, tuple;
import std.meta;
import std.array : array;

// we know types of columns, so let's state them once
alias ColumnTypes = AliasSeq!(string, string, int);
alias Arr(T) = T[];

auto readData() {
    auto file = File("data.csv", "r");
    Tuple!( staticMap!(Arr, ColumnTypes) ) res; // tuple of arrays
    foreach (record; file.byLineCopy.joiner("\n").csvReader!(Tuple!ColumnTypes)('\t'))
        foreach(i, T; ColumnTypes)
            res[i] ~= record[i]; // here res[i] can have different types
    return res;
}

// compress a single column
auto compress(T)(T[] col) {
    T[] vals = sort(col.dup[]).uniq.array;
    auto ks = col.map!(v => col.countUntil(v)).array;
    return tuple(vals, ks);
}

void main() {
    auto columns = readData();
    foreach(i, ColT; ColumnTypes) {
        // here the data can have different type for different i
        auto vk = compress(columns[i]);
        writeln(vk[0][]); // output data, you can write files here
        writeln(vk[1][]); // output indices
    }
}
Jan 05
parent reply thedeemon <dlang thedeemon.com> writes:
On Friday, 5 January 2018 at 17:50:13 UTC, thedeemon wrote:
 Here's my version of solution. I've used ordinary arrays 
 instead of std.container.array, since the data itself is in 
 GC'ed heap anyway.
 I used csv file separated by tabs, so told csvReader to use 
 '\t' for delimiter.
And since lines of the file are copied to the heap anyway, it's easier to skip the unnecessary line splitting and joining and do the reading more simply:

import std.file : readText;

auto readData(string fname) {
    Tuple!( staticMap!(Arr, ColumnTypes) ) res; // array of tuples
    foreach (record; fname.readText.csvReader!(Tuple!ColumnTypes)('\t'))
        foreach(i, T; ColumnTypes)
            res[i] ~= record[i];
    return res;
}
Jan 05
parent reply thedeemon <dlang thedeemon.com> writes:
On Friday, 5 January 2018 at 17:59:32 UTC, thedeemon wrote:
     Tuple!( staticMap!(Arr, ColumnTypes) ) res; // array of 
 tuples
Sorry, I meant tuple of arrays, of course.
Jan 05
parent reply Vino <vino.bheeman hotmail.com> writes:
On Friday, 5 January 2018 at 18:00:34 UTC, thedeemon wrote:
 On Friday, 5 January 2018 at 17:59:32 UTC, thedeemon wrote:
     Tuple!( staticMap!(Arr, ColumnTypes) ) res; // array of 
 tuples
Sorry, I meant tuple of arrays, of course.
Hi Deemon,

Thank you very much. I tested your code; initially it did not produce the expected output, and I found an issue in the key line of code below. After updating it, the output was as expected. Can you please let me know how to change the array from standard array to container array?

auto ks = col.map!(v => col.countUntil(v)).array;  // Your code (col.countUntil)
auto ks = col.map!(v => vals.countUntil(v)).array; // Changed code (vals.countUntil)

From,
Vino.B
Jan 05
next sibling parent Vino <vino.bheeman hotmail.com> writes:
On Saturday, 6 January 2018 at 06:47:33 UTC, Vino wrote:
 On Friday, 5 January 2018 at 18:00:34 UTC, thedeemon wrote:
 On Friday, 5 January 2018 at 17:59:32 UTC, thedeemon wrote:
     Tuple!( staticMap!(Arr, ColumnTypes) ) res; // array of 
 tuples
Sorry, I meant tuple of arrays, of course.
[...]
Hi Deemon,

I was able to convert 50% of the code to container Array and am facing some issues:

import std.algorithm: countUntil, joiner, sort, uniq, map;
import std.csv: csvReader;
import std.stdio: File, writeln;
import std.typecons: Tuple, tuple;
import std.meta: AliasSeq;
import std.container.array;

alias ColumnTypes = AliasSeq!(string, string, int);
alias Arr(T) = Array!T;

auto readData() {
    auto file = File("C:\\Users\\bheev1\\Desktop\\Current\\Script\\Others\\TColRead.csv", "r");
    Arr!(Tuple!ColumnTypes) res;
    foreach (record; file.byLineCopy.joiner("\n").csvReader!(Tuple!ColumnTypes)) {
        res.insertBack(record);
    }
    return tuple(res[]); // replacing this line with writeln(res[]) gives the expected output
}

auto compress(T)(Array!T col) {
    Arr!int ks;
    Array!T vals;
    vals.insertBack(sort(col.dup[]).uniq);
    ks.insertBack(col.map!(v => vals.countUntil(v)));
    return tuple(vals, ks);
}

void main() {
    auto columns = readData();
    foreach(i, ColT; ColumnTypes) { // Facing some issue at this point
        auto vk = compress(columns[i]);
        writeln(vk[0][], vk[1][]);
    }
}

From,
Vino.B
Jan 06
prev sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Saturday, 6 January 2018 at 06:47:33 UTC, Vino wrote:
 On Friday, 5 January 2018 at 18:00:34 UTC, thedeemon wrote:
 On Friday, 5 January 2018 at 17:59:32 UTC, thedeemon wrote:
     Tuple!( staticMap!(Arr, ColumnTypes) ) res; // array of 
 tuples
Sorry, I meant tuple of arrays, of course.
Hi Deemon, [...] Can you please let me know how to change the array from standard array to container array?
Here's a version with Array, it's very similar:

import std.algorithm: countUntil, joiner, sort, uniq, map;
import std.csv: csvReader;
import std.stdio: File, writeln;
import std.typecons: Tuple, tuple;
import std.meta;
import std.file : readText;
import std.container.array;

alias ColumnTypes = AliasSeq!(string, string, int);

auto readData(string fname) {
    // returns tuple of Arrays
    Tuple!( staticMap!(Array, ColumnTypes) ) res;
    foreach (record; fname.readText.csvReader!(Tuple!ColumnTypes)('\t'))
        foreach(i, T; ColumnTypes)
            res[i].insert(record[i]);
    return res;
}

auto compress(T)(ref Array!T col) {
    auto vals = Array!T( sort(col.dup[]).uniq );
    auto ks = Array!ptrdiff_t( col[].map!(v => vals[].countUntil(v)) );
    return tuple(vals, ks);
}

void main() {
    auto columns = readData("data.csv");
    foreach(i, ColT; ColumnTypes) {
        auto vk = compress(columns[i]);
        writeln(vk[0][]); // output data, you can write files here
        writeln(vk[1][]); // output indices
    }
}
Jan 06
parent reply Vino <vino.bheeman hotmail.com> writes:
On Saturday, 6 January 2018 at 15:32:14 UTC, thedeemon wrote:
 On Saturday, 6 January 2018 at 06:47:33 UTC, Vino wrote:
 [...]
Here's a version with Array, it's very similar: import std.algorithm: countUntil, joiner, sort, uniq, map; import std.csv: csvReader; import std.stdio: File, writeln; import std.typecons: Tuple, tuple; import std.meta; import std.file : readText; import std.container.array; [...]
Hi Deemon, Thank you very much, moving to second phase. From, Vino.B
Jan 07
parent reply Vino <vino.bheeman hotmail.com> writes:
On Sunday, 7 January 2018 at 12:09:32 UTC, Vino wrote:
 On Saturday, 6 January 2018 at 15:32:14 UTC, thedeemon wrote:
 On Saturday, 6 January 2018 at 06:47:33 UTC, Vino wrote:
 [...]
Here's a version with Array, it's very similar: import std.algorithm: countUntil, joiner, sort, uniq, map; import std.csv: csvReader; import std.stdio: File, writeln; import std.typecons: Tuple, tuple; import std.meta; import std.file : readText; import std.container.array; [...]
Hi Deemon, Thank you very much, moving to second phase. From, Vino.B
Hi Deemon,

I just noticed that the output writes the data and key as 2 values, but the requirement is to write to six files, e.g.

Data File 1
["Baker", "John", "Johnson", "Jones", "Miller", "Millers", "Millman", "Zsuwalski"]
Key File 1
[4, 1, 6, 7, 0, 4, 3, 4, 2, 1, 6, 5]

Data File 2
["America", "Austrilia", "Canada", "Chile", "China", "India", "Japan", "Netherlands"]
Key File 2
[0, 5, 1, 6, 4, 2, 1, 5, 7, 0, 4, 3]

Data File 3
[18, 21, 23, 42, 45]
Key File 3
[2, 3, 1, 4, 4, 2, 0, 1, 3, 3, 3, 2]

From,
Vino.B
Jan 07
parent reply thedeemon <dlang thedeemon.com> writes:
On Sunday, 7 January 2018 at 12:59:10 UTC, Vino wrote:
  Just noticed that the output writes the data and key as 2 
 values, but the requirement is to write to six files, e.g.
That's the part you can implement yourself. Just replace those writelns with writing to corresponding files.
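A minimal sketch of that replacement; the writeColumn helper and the Data/Index file-name scheme are my own illustration, not something fixed by the thread:

```d
import std.format : format;
import std.stdio : File;

// Hypothetical helper: writes one value per line to e.g. "Data-Col1.txt".
void writeColumn(R)(R vals, string prefix, size_t idx) {
    auto f = File(format("%s-Col%d.txt", prefix, idx + 1), "w");
    foreach (v; vals)
        f.writeln(v);
}

void main() {
    // In the main loop of the earlier version, the two writelns would become:
    //   writeColumn(vk[0][], "Data", i);
    //   writeColumn(vk[1][], "Index", i);
    writeColumn(["Baker", "John"], "Data", 0); // writes Data-Col1.txt
}
```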
Jan 07
parent reply Vino <vino.bheeman hotmail.com> writes:
On Sunday, 7 January 2018 at 17:23:20 UTC, thedeemon wrote:
 On Sunday, 7 January 2018 at 12:59:10 UTC, Vino wrote:
  Just noticed that the output writes the data and key as 2 
 values , but the requirnment is to write to six files, e.g
That's the part you can implement yourself. Just replace those writelns with writing to corresponding files.
Hi Deemon,

I tried to manipulate the writelns as below, but the output is not as expected: it prints the data row-wise, whereas we need it column-wise.

writeln(vk[0][0]);

Baker America 18

From,
Vino.B
Jan 07
parent reply thedeemon <dlang thedeemon.com> writes:
On Sunday, 7 January 2018 at 17:30:26 UTC, Vino wrote:

  I tried to manipulate the writeln's as below but the output is 
 not as expected as it prints the data in row wise, where as we 
 need it in column wise.
You've said before you need 6 different files, not some tables. Also, after the "compression" data columns will have different length. How exactly do you want to combine them into a table?
Jan 07
parent reply Vino <vino.bheeman hotmail.com> writes:
On Monday, 8 January 2018 at 05:38:44 UTC, thedeemon wrote:
 On Sunday, 7 January 2018 at 17:30:26 UTC, Vino wrote:

  I tried to manipulate the writeln's as below but the output 
 is not as expected as it prints the data in row wise, where as 
 we need it in column wise.
You've said before you need 6 different files, not some tables. Also, after the "compression" data columns will have different length. How exactly do you want to combine them into a table?
Hi Deemon,

The output required is like this:

(1) Read the table data from the csv file:

John, America, 23
John, India, 22
Astro, Canada, 21

(2) Sort and remove the duplicates from each column, column by column. Take a copy of each column (col.dup), sort it, remove the duplicates, and store the resultant data of each column in separate files like below.

Column 1 (store the data to text Datafile1):
Astro
John

Column 2 (store the data to text Datafile2):
America
Canada
India

Column 3 (store the data to text Datafile3):
21
22
23

(3) Using the function countUntil, find the keys for each of the columns and store the result of each column in another set of files:

Key for column 1 to text Keyfile1: original column1[].map!(v => sorted column1[].countUntil(v))
Key for column 2 to text Keyfile2: original column2[].map!(v => sorted column2[].countUntil(v))
Key for column 3 to text Keyfile3: original column3[].map!(v => sorted column3[].countUntil(v))

From,
Vino.B
Jan 07
next sibling parent thedeemon <dlang thedeemon.com> writes:
On Monday, 8 January 2018 at 07:37:31 UTC, Vino wrote:
  I tried to manipulate the writeln's as below but the output 
 is not as expected as it prints the data in row wise, where 
 as we need it in column wise.
...
 Using the function countUntil find the keys for each of the 
 column and store the result of each column in another files.
So you don't need row-wise output.
Jan 08
prev sibling parent reply thedeemon <dlang thedeemon.com> writes:
On Monday, 8 January 2018 at 07:37:31 UTC, Vino wrote:
  I tried to manipulate the writeln's as below but the output 
 is not as expected as it prints the data in row wise, where 
 as we need it in column wise.
Ah, sorry, now I think I get it. Your problem is you get output like

["a","b","c"]

and instead you want

a
b
c

right? Well, I think you know how to write a loop and output one item per line inside this loop.
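That loop, spelled out with the sample values:

```d
import std.stdio : writeln;

void main() {
    auto vals = ["a", "b", "c"];
    foreach (v; vals)
        writeln(v); // one item per line: a, then b, then c
}
```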
Jan 08
parent Vino <vino.bheeman hotmail.com> writes:
On Monday, 8 January 2018 at 08:22:21 UTC, thedeemon wrote:
 On Monday, 8 January 2018 at 07:37:31 UTC, Vino wrote:
  I tried to manipulate the writeln's as below but the output 
 is not as expected as it prints the data in row wise, where 
 as we need it in column wise.
Ah, sorry, now I think I get it. Your problem is you get output like ["a","b","c"] and instead you want a b c right? Well, I think you know how to write a loop and output one item per line inside this loop.
Hi Deemon,

Yes, the output is required as below, and I am trying the same, but still no luck.

a
b
c

From,
Vino.B
Jan 08