Channel: Teradata Forums - Tools
Viewing all 4252 articles

Problem to import a delimited text file in teradata table - forum topic by Jorge_rvt


Good afternoon, everyone.

I'm completely new to the world of Teradata.

I have a problem and need help from people with much more experience than me.

The problem: when I import (via MULTILOAD) a text file delimited with the '|' symbol into an auxiliary Teradata table (created by me), the insertion of the data fails.

Can you help me? I swear I have been investigating for two weeks and did not find a solution, because none of the examples I found work. I think the problem may be the input file.

Can anyone HELP ME?

Thank you very much to all.

 

 

The delimited text file

---------------------------

 

IBS|ByC|5|FECHA|20070101|-1|-1|        6684,5061|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070102|-1|-1|       -282593,15|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070103|-1|-1|       2756278,31|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070104|-1|-1|       -1010329,2|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070105|-1|-1|       115768,226|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070106|-1|-1|       1936545,95|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070107|-1|-1|       2687359,46|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070108|-1|-1|       1453068,16|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070109|-1|-1|       -2690372,1|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070110|-1|-1|       -2085283,5|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070111|-1|-1|       267984,834|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070112|-1|-1|       289931,665|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070113|-1|-1|       1290998,73|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070114|-1|-1|       1055463,79|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070115|-1|-1|       -103615,62|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070116|-1|-1|       -2233793,3|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070117|-1|-1|       -276442,39|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070118|-1|-1|       -382398,29|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070119|-1|-1|        -26900,09|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070120|-1|-1|       1003009,66|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070121|-1|-1|       1179042,28|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070122|-1|-1|       117305,433|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070123|-1|-1|       -1576164,8|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070124|-1|-1|       71149,2559|SUMA_AMT_INCL_TAX

IBS|ByC|5|FECHA|20070125|-1|-1|       138511,473|SUMA_AMT_INCL_TAX

 

 

My aux Table

----------------

My table has 9 columns, and all of them are defined as CHAR.


the .ml file

-------------

 

 .LOGTABLE    DTV_HMG_CTL.VALIDACION_MLOG;

    .logon 172.22.138.40/CONS_HMG_USER,CONS_HMG_USER;

     DATABASE DTV_HMG_CTL;

 

    DROP TABLE  DTV_HMG_CTL.VALIDACION_wk;

    DROP TABLE  DTV_HMG_CTL.VALIDACION_et;

    DROP TABLE  DTV_HMG_CTL.VALIDACION_uv;

 

    .BEGIN IMPORT MLOAD AMPCHECK NONE TABLES DTV_HMG_CTL.VALIDACION_TERADATA9 

 

    WORKTABLES  DTV_HMG_CTL.VALIDACION_wk

    ERRORTABLES DTV_HMG_CTL.VALIDACION_et

                DTV_HMG_CTL.VALIDACION_uv

     SLEEP 4    

     TENACITY 100    

     SESSIONS 4     ;

 

    .layout LAYOUT1;

 

                .FIELD AMBIENTE * VARCHAR(3);

                .FIELD MODELO * VARCHAR(30);

                .FIELD COM_ID * VARCHAR(1);

                .FIELD APERTURA_1 * VARCHAR(30);

                .FIELD APERTURA_VALOR1 * VARCHAR(30);

                .FIELD APERTURA_2 * VARCHAR(30);

                .FIELD APERTURA_VALOR2 * VARCHAR(30);

                .FIELD CANTIDAD * VARCHAR(30);

                .FIELD OBJETO_VALIDADOR * VARCHAR(30);

                .FIELD NewLine * VARCHAR(1);

    

     .DML LABEL LABELA;

 

    INSERT INTO DTV_HMG_CTL.VALIDACION_TERADATA9 

    (

    AMBIENTE

    , MODELO

    , COM_ID

    , APERTURA_1

    , APERTURA_VALOR1

    , APERTURA_2

    , APERTURA_VALOR2

    , CANTIDAD

    , OBJETO_VALIDADOR

    )

VALUES

(

 AMBIENTE = :AMBIENTE 

,MODELO = :MODELO 

,COM_ID = :COM_ID

,APERTURA_1 = :APERTURA_1 

,APERTURA_VALOR1 = :APERTURA_VALOR1 

,APERTURA_2 = :APERTURA_2 

,APERTURA_VALOR2 = :APERTURA_VALOR2 

,CANTIDAD = :CANTIDAD 

,OBJETO_VALIDADOR = :OBJETO_VALIDADOR

);

    

.import infile C:\Teradata\Pirri\IBS_Ventas_Colombia.TXT  

format vartext '|'

             layout LAYOUT1

             apply LABELA;   

    

    .END MLOAD;

    .LOGOFF;

EOF

 

# END of MultiLoad

 

When I import, here is the error:
 

0021 EOF

     # END of MultiLoad

**** 10:30:31 UTY3410 A semi-colon was not found to terminate the current

     statement.

 


Problem to import a delimited text file in teradata table - response (1) by dnoeth


There should be no

EOF
# END of MultiLoad

at the end of your script, and you don't have to define the newline field:

                .FIELD NewLine * VARCHAR(1);

The line break is handled as part of the VARTEXT record definition, 1 or 2 bytes depending on Unix/Windows.

And you should check whether there's a linebreak after the last row in your input file.

Dieter
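To check for that missing linebreak after the last row, a quick sketch in Python (the check is generic; point it at your own input file):

```python
def ends_with_newline(path):
    """Return True if the file's last byte is a newline character."""
    with open(path, "rb") as f:
        f.seek(0, 2)                  # seek to end of file
        if f.tell() == 0:
            return False              # an empty file has no final newline
        f.seek(-1, 2)                 # back up one byte from the end
        return f.read(1) in (b"\n", b"\r")
```

If this returns False for the input file, appending a final newline before running the MultiLoad job avoids a truncated last record.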

Mload error - 2794 - UPI is an identity column - response (7) by cheeli


Thank you, Ivy.  We have checked the max value for the identity column in the landing table and could see that the max value for the ID column is far smaller than the maximum of 2147483646 defined in the identity column definition.  We have also checked whether there are any negative values for the ID column (which should appear once the max value is reached), but couldn't find them either.
 
I have gone through the manual and found that an identity column will start to cycle (repeat) values after it reaches the max value defined for it.  Whereas we are receiving duplicates on an identity column that has not reached the max value.  Have you ever faced this scenario?  Why is this happening? I am a bit perplexed.

Mload - upsert scenario when Target row updated by multiple source rows - forum topic by cheeli


Greetings Experts,
Does a MultiLoad job ever fail due to a data issue?  If so, under what circumstances does it fail before completing the APPLICATION phase?

For instance, one of our jobs (which does not use MultiLoad) occasionally fails in a MERGE statement with the error "Target row updated by multiple source rows" due to duplicates from the source.

If we use MLoad in this scenario (the target table has a NUPI), will the upsert logic fail (as it finds multiple source rows to update a single target row) or run to success?

If it fails, does that imply that MLoad ignores the duplicates on the PI even though it is a NUPI, which is not the case?

If it completes successfully, will the qualified rows in the target table be updated twice (as there are 2 rows with the same PI value in the source), or just once, since "mark duplicate update rows" is the default error treatment option, thereby marking the duplicate update rows from the source to the UV table?

How does MLoad distinguish entirely duplicate records present in the source from records that have come again after a restart (reading the same rows after a checkpoint that have already been sent to the work table)?

 


Keeping idle connections alive! - forum topic by barani_sachin


Hi All,
    Is there any way to keep the connections alive in SQL Assistant? On the system I am using, the session expires every 3 minutes and I have to log in again. Any shortcuts?
Thanks in advance.


Loading a timestamp column with Fastload - response (3) by chinmayi


Hi,
I am facing a problem with TPT FASTLOAD. I have a column in my Teradata table with datatype TIMESTAMP(6). When I try to load data from a binary flat file (created using TPT FASTEXPORT) into my TD table, I run into some problems.
My TD table is :
create table edw_base.test_binary_load (col1  CHAR(20) , col2 TIMESTAMP(6));
Now, the problems I am facing with FASTLOAD are:
1. When I tried
:col2(timestamp(6),format 'yyyy-mm-ddBhh:mi:ss.SSSSSS')
I got the error:
TPT_INFRA: At "yyyy" missing RPAREN_ in Rule: DML Statement List
TPT_INFRA: TPT02932: Error: Invalid token near line 33 (text was ':')
TPT_INFRA: TPT02932: Error: Invalid token near line 33 (text was ':')
Compilation failed due to errors. Execution Plan was not generated.
Job script compilation failed.
2. When I tried
:col2(timestamp(6),format "yyyy-mm-ddBhh:mi:ss.SSSSSS")
I got the error:
Syntax error, expected something like a string or a Unicode character literal between the 'format' keyword and the word 'yyyy-mm-ddBhh:mi:ss.SSSSSS'.
3. Another problem is with NULL values. The column "col2" also accepts NULL. When I try to load data from the binary flat file into the TD table, I get error code 6760.
 
Please tell me how to solve these problems.
 

Loading "N" Number of table of different structure (MYSQL) in to "N' number of Table (Teradata)Using single Script - forum topic by sujitvarpe


Our requirement: we have 'N' source tables (MySQL) with different structures that need to be loaded into Teradata tables using one script. If the source and target structures change (a column is added or deleted), the script should not change. For that, we have been given a configuration table/file containing the source table name, the target table name, and the columns to be excluded. For example, if the employee table has the columns emp no, emp name, and emp address, and the configuration table lists emp address as excluded, then we need to load emp no and emp name into Teradata. One script should work for all tables with different structures. Can we achieve this using a Teradata utility? A reply will be appreciated.

 

MySQL Table                                                                       

 

 Empolyee_MySQL(empno,empname,adress)                              

 

department_MySQL (dept name,dept id,manager )

 

 

 Configuration Table                                               

 

Config(Source Table (Empolyee_MySQL),Target Table (Empolyee_Teradata),Address (need to exclude))

 

Teradata Table

 

 Empolyee_Teradata(empno,empname,adress)                              

 

department_Teradata (dept name,dept id,manager )
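One possible approach (outside the Teradata utilities themselves) is to generate the load SQL from the configuration, so that a structure change only touches the config, not the script. A minimal Python sketch, with all table and column names taken from the example above:

```python
# Hypothetical config-driven loader sketch: each source table maps to its
# target and a set of columns to exclude; the INSERT/SELECT is generated.
config = {
    "Empolyee_MySQL": {"target": "Empolyee_Teradata", "exclude": {"adress"}},
    "department_MySQL": {"target": "department_Teradata", "exclude": set()},
}

def build_load_sql(source, source_columns):
    """Generate an INSERT ... SELECT for one source table from the config."""
    cfg = config[source]
    cols = [c for c in source_columns if c not in cfg["exclude"]]
    col_list = ", ".join(cols)
    return f"INSERT INTO {cfg['target']} ({col_list}) SELECT {col_list} FROM {source};"

print(build_load_sql("Empolyee_MySQL", ["empno", "empname", "adress"]))
# INSERT INTO Empolyee_Teradata (empno, empname) SELECT empno, empname FROM Empolyee_MySQL;
```

The generated statement could then be fed to BTEQ or used to build a TPT job variable file per table.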



Keeping idle connections alive! - response (1) by dnoeth


 
There is no such setting in SQLA.
But your DBA might be automatically disconnecting idle sessions (although 3 minutes is quite short).
 
Dieter


Mload - upsert scenario when Target row updated by multiple source rows - response (1) by dnoeth


Hi Cheeli,
MLoad will simply apply all INSERTs/UPDATEs to the target row in the correct order. If the target table is SET and a duplicate row is inserted, or an update results in a duplicate row, it might be logged to the error table based on MARK/IGNORE, but it will never fail.
MLoad adds a unique "match tag" to each record; based on this it can distinguish between duplicate rows in the input file and rows sent a second time after a restart.
Depending on your needs, you might also try to create an error table for the target and add LOGGING ALL ERRORS to the MERGE, but then the row causing the "Target row updated by multiple source rows" error will not be updated.
Or you apply GROUP BY to your USING select to pre-aggregate.
 
Dieter
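The GROUP BY suggestion could look like this (a sketch; table and column names are invented for illustration, and the aggregate chosen decides which of the duplicate source rows wins):

```sql
MERGE INTO tgt
USING (
    SELECT pi_col, MAX(amt) AS amt   -- collapse duplicates: one row per PI value
    FROM   src
    GROUP  BY pi_col
) s
ON (tgt.pi_col = s.pi_col)
WHEN MATCHED THEN
    UPDATE SET amt = s.amt
WHEN NOT MATCHED THEN
    INSERT (pi_col, amt) VALUES (s.pi_col, s.amt);
```

With the source pre-aggregated to one row per join key, the "Target row updated by multiple source rows" error cannot occur.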

TPT 64bit on Windows 7 sits on FINISH indefinitely - response (14) by Albi


Which version of JRE is needed for TPT Wizard 14.10 to work?
Many thanks!
Albi

TPT Installation Verification Script - forum topic by goldminer


At one time I found a script that I was able to run to verify a TPT installation.  I am upgrading the TTU from 13.10 to 14.0 on an ETL Windows server.  I would like to use this script again but can't seem to locate it.  Is anyone in this forum familiar with this Teradata-provided script?  It basically sets everything up for a TPT load and then loads about 10 records... like I said, just to verify everything is working.
 
Thanks,
 
Joe


TPT Installation Verification Script - response (1) by goldminer


I believe I have found it so thought I would also share...
The script is called tptvalidate.bat and is located in the following directory:
C:\Program Files (x86)\Teradata\Client\14.00\Teradata Parallel Transporter\sample\validate
Contents from the readme file:
Overview:
  This directory contains a quick start validation tool to validate Teradata
  Parallel Transporter (TPT) after installation.
Directory Contents:
  tptvalidate.bat -- This script is installed on Windows platform. The script
  will execute the following job scripts in the quickstart directory:
  qsetup1.txt, qstart1.txt, qsetup2.txt, qstart2.txt and qcleanup.txt.
 
  Usage: tptvalidate.bat [TdpId] [UserName] [UserPassword]
  where: [TdpId] is a database name ID.
         [UserName] is a database user name.
         [UserPassword] is a database user password.
  tptvalidate.ksh -- This script is installed on unix platforms. The script
  will execute the following job scripts in the quickstart directory:
  qsetup1.txt, qstart1.txt, qsetup2.txt, qstart2.txt and qcleanup.txt.
  Usage: ./tptvalidate.ksh <TdpId> <UserName> <UserPassword>
  where: <TdpId> is a database name ID.
         <UserName> is a database user name.
         <UserPassword> is a database user password.

Bteq didn't give error message - forum topic by bahadirbabacan


Hi everyone,
We are calling a stored procedure from BTEQ; sometimes it exits with BTEQ exit code 8, but it doesn't give any error messages.
Here is the bteq log:
 

.logon TD_2700/TPTLoader,

 

 *** Logon successfully completed.

 *** Teradata Database Release is 14.00.03.03

 *** Teradata Database Version is 14.00.03.03

 *** Transaction Semantics are BTET.

 *** Session Character Set Name is 'ASCII'.

 

 *** Total elapsed time was 1 second.

 

+---------+---------+---------+---------+---------+---------+---------+----

 

call  TRANSFORM_GN(136914);

 *** Total elapsed time was 45 seconds.

 

+---------+---------+---------+---------+---------+---------+---------+----

 

.LOGOFF

 *** You are now logged off from the DBC.

+---------+---------+---------+---------+---------+---------+---------+----

.EXIT

 *** Exiting BTEQ...

 *** RC (return code) = 8

BTEQ 14.00.00.05 Fri Aug 30 00:53:41 2013

 


Bteq didn't give error message - response (1) by bahadirbabacan


When I queried the query log table for the error code, it showed a partition violation error, but we couldn't see that error in the BTEQ session.
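One way to surface the error in the BTEQ session itself is to route error output to stdout and test the status code right after the call (a sketch; the procedure name is taken from the log above):

```
.SET ERROROUT STDOUT
call TRANSFORM_GN(136914);
.IF ERRORCODE > 0 THEN .QUIT 8
```

This makes the failing statement's error message appear in the same log that shows the return code.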

Mload - upsert scenario when Target row updated by multiple source rows - response (2) by cheeli


Hi Dieter,

If the target table (which doesn't have a unique constraint) is MULTISET and has a NUPI, and the source has duplicates on the PI (but not entirely duplicate records, i.e., for each PI combination there are, say, 2 rows), then MultiLoad will update the one qualifying row in the target 2 times.

As an upsert is the equivalent of a MERGE statement (at least in my view), which is deterministic, the MERGE statement fails in this case.  So, doesn't the upsert fail in this case too?  If the upsert succeeds, is this the difference between upsert and MERGE?

If a default value is specified for one of the columns, say col1, of the target table, and that column is not specified in the INSERT statement of the DML clause of the MultiLoad, will the work table contain these default values for the rows, or will the default values be applied only when inserting into the target table?
 


Mload - identify the erroneous record using the sourcesequence in Informatica - forum topic by cheeli


Greetings Experts,
How do I identify the erroneous rows in the source that landed in the ET table for an MLoad job run through Informatica?  Is the source sequence the exact location of the client row data, i.e., if the source sequence in the ET table is 570, should I look at row 570 of the source?
 
Say the errlimit is set to 10, there are only 8 rows in the ET table, and the job succeeds.
Will the source file or named pipe still exist to identify the erroneous records, or should we refer directly to the base table that is read by Informatica?
 
If it is from a source table, how do I identify the exact row that caused the issue using the source sequence (without a WHERE clause on the column values from the ET table)?  Should I sort the table by the PI (or PPI and PI) and then find the 570th row using ROW_NUMBER(), or can row 570 be selected directly from the table without an ORDER BY clause?  If so, how reliable is a SELECT at extracting the 570th row directly from the table without an ORDER BY clause?
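On the last point: without an ORDER BY, a SELECT returns rows in no guaranteed order, so "the 570th row" is not well defined. A deterministic lookup needs an ordering column that mirrors the order the rows were read, for example (a sketch; the table and ordering column names are hypothetical, and this only works if load_seq really reflects the read order):

```sql
SELECT s.*
FROM source_table s
QUALIFY ROW_NUMBER() OVER (ORDER BY load_seq) = 570;
```

Sorting by PI alone is not sufficient, because the PI order generally has no relation to the order in which the client read the rows.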
 
 


TPT load source files with regular expression - forum topic by wm185023


We are now using TPT and we have a requirement to load multiple files at the same time. In the beginning we used (*) in the file name, but the wildcard character (*) presents a challenge. Is there any way for TPT to load data files matching a regular expression such as ABC_AAA_[0-9].dat or ABC_AAA_BBB_[0-9][0-9].dat?
Here is the example case:
File layout - 1
ABC_AAA_01.dat
ABC_AAA_02.dat
ABC_AAA_03.dat
File layout - 2
ABC_AAA_BBB_01.dat
ABC_AAA_BBB_02.dat
These 2 file layouts are located in the same directory. If we define FileName = 'ABC_AAA_*.dat' in the TPT script, all 5 source files will be picked up by TPT for loading, and the job will fail because the 2nd layout differs from the 1st.


TPT load source files with regular expression - response (1) by msk.khurram

TPT load source files with regular expression - response (2) by feinholz


TPT does not support "ABC_AAA_[0-9].dat" syntax.
If you want to use the wildcard syntax, all files must adhere to the same layout.
Thus, in this particular scenario, due to the way the files are named, you would need to separate out the files with the different layouts into separate directories.
 

TPT load source files with regular expression - response (3) by feinholz


Another thought: If you are trying to load the data from all 5 files, but you know you need to use 2 different load tasks to accomplish the job, you can use one step to load the files from ABC_AAA_BBB_*.dat, then use a subsequent TPT job step to move those files to an archive directory, then use yet another job step to load the data from ABC_AAA_*.dat.
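The separation into per-layout directories can also be done outside TPT, before the job runs. A minimal Python sketch (the directory names and regex patterns are illustrative, not TPT features):

```python
import re
import shutil
from pathlib import Path

# Most specific pattern first, so ABC_AAA_BBB_01.dat is not claimed by layout 1.
LAYOUTS = [
    (re.compile(r"ABC_AAA_BBB_\d{2}\.dat"), "layout2"),
    (re.compile(r"ABC_AAA_\d{2}\.dat"), "layout1"),
]

def sort_files(src_dir):
    """Move each .dat file into a per-layout subdirectory of src_dir."""
    src = Path(src_dir)
    for f in list(src.glob("*.dat")):          # snapshot before moving files
        for pattern, subdir in LAYOUTS:
            if pattern.fullmatch(f.name):
                dest = src / subdir
                dest.mkdir(exist_ok=True)
                shutil.move(str(f), str(dest / f.name))
                break
```

Each TPT job step can then point its FileName wildcard at one subdirectory, so every wildcard matches a single layout.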


