If the only thing that varies is the number of columns at the end, then that should be easy to do.
Let's convert your two example lines into an actual file we can use for testing.
filename sample temp;
options parmcards=sample;
parmcards;
ENTI|CIRBE|OPE|FIN|TPR|SIT|CONT|SEC|ACTE|PR|IMP-RIDISPU|IMP-RIDISPO|A|1|2|3|4|5|6|7|8|
9999|00000000000000000009999999|0000000000000000000000000000099999|K12|V44|I22|0020| | |01|000000000000063.12|000000000000000.00|N|0|0|0|0|0|0|0|0|
;
Now let's read just the header line, pull that trailing 8 off the end of it, and put it into a macro variable.
data n_vars ;
infile sample obs=1;
input ;
n_vars = scan(trim(_infile_),-1,'|');
call symputx('n_vars',n_vars);
run;
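If you want to sanity-check what was captured before running the main step, you can echo the macro variable to the log. This is just an optional check, not part of the solution (the &= shorthand requires SAS 9.3 or later):

```sas
/* Write the name and value of N_VARS to the log, e.g. N_VARS=8 */
%put NOTE: &=n_vars;
```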
Now let's write a data step to read the file. We will use the macro variable N_VARS to set the upper bound on how many of those variables to read from the end. I just guessed at how you want to define the variables from what the one example record looks like. Notice how I used valid SAS names for the variables instead of trying to use the column headers exactly: I replaced the hyphens with underscores, and I named the variables at the end of the lines VAR1, VAR2, ... since a variable name cannot start with a digit.
data want ;
  infile sample dsd dlm='|' truncover firstobs=2;
  length
    ENTI $10
    CIRBE $20
    OPE $30
    FIN $8
    TPR $8
    SIT $8
    CONT $10
    SEC $8
    ACTE $8
    PR $8
    IMP_RIDISPU 8
    IMP_RIDISPO 8
    A $8
    var1-var&n_vars 8
  ;
  input enti -- var&n_vars ;
run;
Writing your own data step gives you complete control over how the variables are defined. It also ensures that the variables are defined the same way every time, even when some of the variables happen to be empty in a particular version of the text file. Also notice how much simpler the code is when you write your own data step instead of the gibberish that PROC IMPORT generates.