I have a file to load that doesn't seem to be in an especially unusual format, but I'm not sure whether it can be loaded using TPT. Can anyone tell me whether this is possible, or whether I have to write some unix script to edit the file before I can load it?
The data looks like this:
SVC_NAME,OPUNIT_NO,DEPOT_CODE,OPU_TYPE,REGION_NUMBER
"Aberdeen",7240,"051","M",4
"Aberystwyth SC",7252,"067","M",6
"Middleton",7130,"002","M",3
"Aylesham",7120,"061","M",5
The first row is a header which has the column names from the source system.
The data rows have char/varchar fields which are quoted, and number fields which are not. The fields are delimited by commas.
If I define the file as Delimited, TPT states that all fields must be VARCHAR/VARDATE. That would mean every field would need to be quoted, which they aren't.
I also want to skip the first row, but it seems that even with SkipRows=1, TPT still checks the header row against the same schema as the data rows, so each of its fields would need to be quoted too.
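For reference, this is roughly the DataConnector producer definition I've been experimenting with. It's only a sketch: the file name and schema name are mine, and I'm not sure the QuotedData/OpenQuoteMark attributes even exist in my TPT release, which is partly why I'm asking.

```
DEFINE SCHEMA OPUNIT_SCHEMA
(
    SVC_NAME      VARCHAR(50),
    OPUNIT_NO     VARCHAR(10),
    DEPOT_CODE    VARCHAR(10),
    OPU_TYPE      VARCHAR(5),
    REGION_NUMBER VARCHAR(10)
);

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA OPUNIT_SCHEMA
ATTRIBUTES
(
    VARCHAR FileName      = 'opunits.csv',
    VARCHAR OpenMode      = 'Read',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = ',',
    INTEGER SkipRows      = 1,
    /* The two attributes below are what I'd hope would handle the
       optional quoting, if my TPT version supports them: */
    VARCHAR QuotedData    = 'Optional',
    VARCHAR OpenQuoteMark = '"'
);
```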
I don't think this is an unusual format; in fact, the CSVLD function can split it out into the correct fields. Unfortunately, when I tried CSVLD on a fairly large table of unformatted records like the above, it crashed Teradata. It seems more sensible to load the file into the correct fields with TPT in the first place, which is why I'm trying to find out whether that is possible. Can anyone let me know whether it is?
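If TPT can't do this natively, the fallback "unix script" I have in mind is something like the sketch below: drop the header row and re-quote every field so the whole file is uniformly quoted. The file names are made up, and the awk assumes no embedded commas inside quoted fields, which holds for my data.

```shell
# Create a small sample of the incoming file (hypothetical name opunits.csv)
printf '%s\n' \
  'SVC_NAME,OPUNIT_NO,DEPOT_CODE,OPU_TYPE,REGION_NUMBER' \
  '"Aberdeen",7240,"051","M",4' \
  '"Aberystwyth SC",7252,"067","M",6' > opunits.csv

# Drop the header, then quote every field so the file is uniformly
# quoted for TPT's Delimited format.
# Caveat: assumes no embedded commas inside quoted fields.
tail -n +2 opunits.csv |
awk -F',' 'BEGIN { OFS = "," }
{
    for (i = 1; i <= NF; i++) {
        gsub(/^"|"$/, "", $i)    # strip any existing quotes
        $i = "\"" $i "\""        # re-quote the field
    }
    print
}' > opunits_quoted.csv

cat opunits_quoted.csv
```

Even if this works, it feels like an extra step that TPT ought to be able to avoid.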