Channel: Teradata Forums - Tools
Viewing all 4252 articles

SQL ASSISTANT - forum topic by mahadevan.sureshkumar


Hello,
Can we run Teradata utilities such as BTEQ and MultiLoad through SQL Assistant?
-Suresh.
 


Teradata Parallel Transporter - Load error - response (2) by chinthucs


What are the possible causes of this error? For example, could a missing parameter cause it, or could there be a problem in the DLL?

Data mismatch while migrating data from SAS to Teradata using TPT - forum topic by EUsha


Hi,
I have been trying to load a 200GB dataset from SAS onto Teradata using the TPT MultiLoad option. The load was successful, but I noted a record-count mismatch between SAS and the Teradata table. What could be the reasons for this? No records were captured in the UV and ET tables. Please also suggest an approach to identify these mismatched records.


Teradata Parallel Transporter - Load error - response (3) by tomnolan


The Teradata JDBC Driver is the supported interface for Java applications to transfer data to and from the Teradata Database.
 
Using JNI with TPT is not supported.

Portable SQL Assistant? - forum topic by mauricio.rodriguez


Hello.
Does anybody know if there is a portable version of SQL Assistant?
Something like a folder I can copy to my PC and just run the .exe?
 
thanks


SQL ASSISTANT - response (1) by krishaneesh


They can be run from your client machine, but not from within SQL Assistant.
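For illustration, here is a minimal sketch of running BTEQ from the client's own shell; the logon string and file names are placeholders, not values from this thread:

```shell
# Hypothetical sketch: BTEQ runs from the client's command line, not from
# inside SQL Assistant. The logon values below are placeholders.
cat > select_date.bteq <<'EOF'
.LOGON mytdpid/myuser,mypassword;
SELECT DATE;
.LOGOFF;
.QUIT;
EOF
# On a client with the utilities installed, you would then run:
#   bteq < select_date.bteq > select_date.out 2>&1
```

The same pattern applies to MultiLoad, FastLoad, or TPT: they are standalone client programs driven by script files, independent of SQL Assistant.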

Data mismatch while migrating data from SAS to Teradata using TPT - response (1) by feinholz


There could be several reasons. But before I try to answer the questions, the best thing to do is to provide me with the entire output from the TPT job. It will tell us how many rows were actually sent to Teradata, along with other information that might point to where the discrepancies are.
 
The number of rows in the source table (that you think needed to be moved) would also be helpful.
 

“Warning: RDBMS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS” running bteq on AIX - forum topic by gneal


After a fresh install of Teradata version 14.0 on an AIX v6.1 test box, I log in using the "bteq" command and get the warning "Warning: RDBMS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS"; BTEQ then just hangs until I break out of it, at which point I receive "*** Warning: Exiting because of three BREAKs!   *** Exiting BTEQ...      *** RC (return code) = 2". Note: the /etc/services file lists only "tdmst  1025/tcp", and there is no "tpa" entry in /etc/init.d. All components installed successfully. Thank you.


Portable SQL Assistant? - response (1) by rpc1


Hi,
Try the SQuirreL SQL client. It's a Java application, so you only need a JRE to run it (plus the Teradata JDBC driver).

Data mismatch while migrating data from SAS to Teradata using TPT - response (2) by EUsha


Hi,
Please find below the script and the log of this run. The script also includes the SQL to populate the missing rows (the data-mismatch rows) into an error table (TABLE1_ERR); that query is also failing during this run.
=========================================================================
script:
=====
LIBNAME TD TERADATA USER=USER1 PASSWORD=xxxxxxxx SERVER="xx.xxx.xxx.xx" DATABASE=STG_DB;

OPTIONS DBIDIRECTEXEC;

libname cisdata "/COPY/isdata_subsets";

option msglevel=i nostsuffix sastrace=',,,s' sastraceloc=saslog
sql_ip_trace=source;

option dbfmtignore dbidirectexec nofmterr sqlgeneration=dbmust
sqlmapputto=sas_put;

options fullstimer;

options sastrace=',,,d' sastraceloc=saslog;

proc sql;

insert into TD.TABLE1
(
    TPT=YES MULTILOAD=YES
    TPT_MIN_SESSIONS=1
    TPT_MAX_SESSIONS=15
)
select
COL1                           
,COL2
,COL3
,COL4
,COL5
,COL6
,COL7
,COL8
,COL9
,COL10
,COL11
,COL12
,COL13
,COL14                      
,COL15
,COL16
,COL17
,COL18
,COL19
,COL20                    
,COL21
,COL22
,COL23
,COL24
,COL25
,COL26                      
,COL27
,COL28
,COL29                   
,COL30           
,2012
from CISDATA.TABLE1
;

insert into TD.TABLE1_ERR (FASTLOAD=YES SESSIONS=20)
SELECT
T1.COL1
,T1.COL2
,T1.COL3
,T1.COL4
,T1.COL5
,T1.COL6
,T1.COL7
,T1.COL8
,T1.COL9
,T1.COL10
,T1.COL11
,T1.COL12
,T1.COL13
,T1.COL14
,T1.COL15
,T1.COL16
,T1.COL17
,T1.COL18
,T1.COL19
,T1.COL20
,T1.COL21
,T1.COL22
,T1.COL23
,T1.COL24
,T1.COL25
,T1.COL26
,T1.COL27
,T1.COL28
,T1.COL29
,T1.COL30
FROM CISDATA.TABLE1 T1
LEFT OUTER JOIN
TD.TABLE1 T2
ON T1.COL5=T2.COL5
WHERE
T2.COL5 IS NULL
;
quit;

 
Log of this run:
===========
1 The SAS System                               06:07 Sunday, March 16, 2014

NOTE: Copyright (c) 2002-2010 by SAS Institute Inc., Cary, NC, USA.
NOTE: SAS (r) Proprietary Software 9.3 (TS1M2)
      
NOTE: This session is executing on the Linux 2.6.32-431.3.1.el6.x86_64 (LIN X64) platform.

NOTE: Enhanced analytical products:

SAS/STAT 12.1

You are running SAS 9. Some SAS 8 files will be automatically converted
by the V9 engine; others are incompatible.  Please see
http://support.sas.com/rnd/migration/planning/platform/64bit.html

PROC MIGRATE will preserve current SAS file attributes and is
recommended for converting all your SAS libraries from any
SAS 8 release to SAS 9.  For details and examples, please see
http://support.sas.com/rnd/migration/index.html

This message is contained in the SAS news file, and is presented upon
initialization.  Edit the file "news" in the "misc/base" directory to
display site-specific news and information in the program log.
The command line option "-nonews" will prevent this display.

NOTE: SAS initialization used:
      real time           0.33 seconds
      cpu time            0.02 seconds
      

NOTE: AUTOEXEC processing beginning; file is /phcommon/code/autoexec.sas.

NOTE: Libref IGS12M06 was successfully assigned as follows:
      Engine:        V9
      Physical Name: /data/smart06
NOTE: Libref IGS12M08 was successfully assigned as follows:
      Engine:        V9
      Physical Name: /data//smart08
NOTE: Libref LIBRARY was successfully assigned as follows:
      Engine:        V9
      Physical Name: /analysis/ref/fmt
NOTE: Libref HIF_TEMP was successfully assigned as follows:
      Engine:        V9
      Physical Name: /temp
NOTE: Libref SYSMART was successfully assigned as follows:
      Engine:        V9
      Physical Name: /analysis/ref/dmart
NOTE: Libref SCMACRO was successfully assigned as follows:
      Engine:        V9
      Physical Name: /analysis/test_macro_lib/testmacro
NOTE: Libref HESFMT was successfully assigned as follows:
      Engine:        V9
      Physical Name: /analysis/load/sas/load_template/formats

Data mismatch while migrating data from SAS to Teradata using TPT - response (3) by EUsha


The number of rows in the source table is 591290000.
Looks like the log hasn't been completely published.

“Warning: RDBMS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS” running bteq on AIX - response (1) by Fred


Most often, this is due to a network error of some sort which allows the initial logon handshake (very short messages) to succeed but data transmission (longer messages) fails.
You could enable CLIv2 trace (set COPANOMLOG=filename) and/or use a network trace to try to isolate the problem.
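As a rough sketch of the tracing suggestion, assuming a POSIX shell on the AIX client (the trace file path is an arbitrary choice, and the BTEQ logon is left commented out):

```shell
# Sketch: export COPANOMLOG so CLIv2 writes its trace to a file before BTEQ
# is started; the trace should show where the network exchange breaks down.
export COPANOMLOG=/tmp/cliv2_trace.log
# bteq <<'EOF'
# .LOGON tdpid/user,password;
# .QUIT;
# EOF
echo "CLIv2 trace target: $COPANOMLOG"
```

After reproducing the hang, the trace file can be compared against a network capture to see whether the longer data messages are being dropped.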

End of Record marker with TPT - forum topic by tstrick4


I have a CSV file, created in Excel, that I'm attempting to load via TPT. When I try to load the file with the appropriate number of delimiters, I get a BUFFERMAXSIZE error; when I add another delimiter to the end of each record, the file loads just fine. This isn't a huge issue, but I'm confused and would like to understand more about end-of-record markers. My schema definition is below, along with my DataConnector attributes.

DEFINE SCHEMA MVT_INPUT
	Description 'MOVEMENTS INPUT DEFINITION'
	(
		tk_num VARCHAR(5),
		tk_descr VARCHAR(30),
		rule VARCHAR(2),
		TagName1 VARCHAR(255),
		TagName2 VARCHAR(255)
	);

 

ATTRIBUTES
		(
			FileName = 'GravLoad.csv', 
			Format = 'DELIMITED', 
			OpenMode = 'Read', 
			DirectoryPath = 'D:\Tony\csv', 
			IndicatorMode = 'N', 
			TextDelimiter = ','
		)

 


Data mismatch while migrating data from SAS to Teradata using TPT - response (4) by feinholz


This looks like SAS is using DataStage to load the data, which also means the job is using TPTAPI and not script-based TPT (correct me if I am wrong).
 
The information you sent me was the SAS (or DataStage) information, and I am not familiar enough with those tools to be of any help there.
 
I need to see any type of logging from the TPTAPI operators; if there isn't any, then you must enable tracing in TPTAPI and capture the output so that I can see what the TPTAPI operators are doing and the number of rows they think they are loading.
 
With TPTAPI, the operators will send all rows they receive from the application to Teradata. If rows are missing from the target table, you might need to check whether DataStage is somehow filtering some rows.
 
Also, check whether there might be any duplicate rows in the source table that would not make it into the target table.
 
Is the target table a SET table or a MULTISET table?
 

Data mismatch while migrating data from SAS to Teradata using TPT - response (5) by sgarlapa


Please check for full row-level duplicates in the source. That count should explain why there are fewer records in the target table, since you are using the FastLoad functionality. (FastLoad cannot load row-level duplicates, even if the target is a MULTISET table.)
 
Regards,
Sri
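One way to sketch that duplicate check; the table name comes from the thread's script, but the column list is abbreviated here and in practice every source column would be listed:

```shell
# Hedged sketch: generate a full-row duplicate check. The column list below
# is illustrative; all source columns belong in it for a true row-level check.
COLS="COL1,COL2,COL3,COL4,COL5"
cat > dup_check.sql <<EOF
SELECT ${COLS}, COUNT(*) AS dup_cnt
FROM CISDATA.TABLE1
GROUP BY ${COLS}
HAVING COUNT(*) > 1;
EOF
# Since the source in this thread is a SAS dataset, the generated query could
# be submitted through SAS PROC SQL against the libref rather than via BTEQ.
```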


Data mismatch while migrating data from SAS to Teradata using TPT - response (6) by feinholz


The original post said:
 
"I have been trying to load a 200GB dataset from SAS onto Teradata using TPT multiload option."
 
Thus, they may be using the Update operator.
One of their INSERTs references MULTILOAD=YES; the other says FASTLOAD=YES.
So, the customer will have to verify.
 

BTEQ exporting multiple files based on column value in a table - response (1) by SuSeSi


You can try it with two BTEQ scripts. Run the 1st BTEQ script to export the unique IDs to a file. Then, with a shell script, read the exported file and, for each value, call the 2nd BTEQ script, which exports the data using a WHERE clause on the passed value.
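A minimal sketch of that shell-driven loop; the file names are placeholders, the actual BTEQ call is left as a comment, and the ID file (normally produced by the 1st script's .EXPORT) is simulated with sample values:

```shell
# Simulate the output of the 1st BTEQ script's .EXPORT of unique IDs.
printf '%s\n' 101 102 103 > ids.txt
while read -r id; do
  # Here the 2nd BTEQ script would be invoked with the ID substituted into
  # its WHERE clause, e.g.: bteq < export_one.bteq  (WHERE some_id = $id)
  echo "would export rows for ID ${id} to out_${id}.dat"
done < ids.txt
```

Each iteration produces one export file per ID, which is the per-column-value split the original question was after.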
 

Data mismatch while migrating data from SAS to Teradata using TPT - response (7) by EUsha


My target table is a MULTISET table.
The first INSERT in the script (which uses TPT MULTILOAD) loads the data from the SAS dataset into the Teradata target.
The second INSERT (which uses TPT FASTLOAD) is meant to identify the mismatched records and insert them into an error table, using a LEFT OUTER JOIN between the SAS dataset and the Teradata target table. But this second INSERT failed, as shown in the log, because the network connection was lost.
Log has not been completely published above. Find below the remaining log contents of this run:
NOTE: AUTOEXEC processing completed.

1          LIBNAME TD6700 TERADATA USER=USER1 PASSWORD=XXXXXXXXXX SERVER="xx.xxx.xxx.xx" DATABASE=STG_DB;
NOTE: Libref TD6700 was successfully assigned as follows:
      Engine:        TERADATA
      Physical Name: xx.xxx.xxx.xx
2          
3          
4          OPTIONS DBIDIRECTEXEC;
5          
6          libname cisdata "/COPY/isdata_subsets";
NOTE: Libref CISDATA was successfully assigned as follows:
      Engine:        V9
      Physical Name: /COPY/isdata_subsets
7          
8          
9          
10         
11         option msglevel=i nostsuffix sastrace=',,,s' sastraceloc=saslog
12         sql_ip_trace=source;
13         
14         option dbfmtignore dbidirectexec nofmterr sqlgeneration=dbmust
15         sqlmapputto=sas_put;
16         
17         
18         
19         options fullstimer;
20         
21         options sastrace=',,,d' sastraceloc=saslog;
22         
23         
24         
25         proc sql;
26         
27         
28         insert into TD6700.TABLE1
29         (
30             TPT=YES MULTILOAD=YES
31             TPT_MIN_SESSIONS=1
32             TPT_MAX_SESSIONS=15
33         )
34         select
35         COL1
36         ,COL2
37         ,COL3
38         ,COL4
39         ,COL5
40         ,COL6
41         ,COL7
42         ,COL8
43         ,COL9
44         ,COL10
45         ,COL11
46         ,COL12
47         ,COL13
48         ,COL14
49         ,COL15
50         ,COL16

Data mismatch while migrating data from SAS to Teradata using TPT - response (8) by EUsha


51         ,COL17
52         ,COL18
53         ,COL19
54         ,COL20
55         ,COL21
56         ,COL22
57         ,COL23
58         ,COL24
59         ,COL25
60         ,COL26
61         ,COL27
62         ,COL28
63         ,COL29
64         ,COL30
65         ,2012
66         from cisdata.TABLE1
67         ;
 
TERADATA_0: Prepared: on connection 1
SELECT * FROM STG_DB."TABLE1"
 
TERADATA: trforc: COMMIT WORK
NOTE: The SQL statement cannot be executed directly against the database because it involves heterogeneous data sources.
 
TERADATA_1: Prepared: on connection 1
SELECT * FROM STG_DB."TABLE1"
 
TERADATA: trforc: COMMIT WORK
NOTE: 591288139 rows were inserted into TD6700.TABLE1.

TERADATA: trforc: COMMIT WORK
68         insert into TD6700.TABLE1_ERR (FASTLOAD=YES SESSIONS=20)
69         SELECT
70         T1.COL1
71         ,T1.COL2
72         ,T1.COL3
73         ,T1.COL4
74         ,T1.COL5
75         ,T1.COL6
76         ,T1.COL7
77         ,T1.COL8
78         ,T1.COL9
79         ,T1.COL10
80         ,T1.COL11
81         ,T1.COL12
82         ,T1.COL13
83         ,T1.COL14
84         ,T1.COL15
85         ,T1.COL16
86         ,T1.COL17
87         ,T1.COL18
88         ,T1.COL19
89         ,T1.COL20
90         ,T1.COL21
91         ,T1.COL22
92         ,T1.COL23
93         ,T1.COL24
94         ,T1.COL25
95         ,T1.COL26
96         ,T1.COL27
97         ,T1.COL28
98         ,T1.COL29
99         ,T1.COL30
100        FROM CISDATA.TABLE1 T1
101        LEFT OUTER JOIN
102        TD6700.TABLE1 T2
103        ON T1.COL5=T2.COL5
104        WHERE
105        T2.COL5 IS NULL
106        ;
 
TERADATA_2: Prepared: on connection 1
SELECT * FROM STG_DB."TABLE1_ERR"
 
 
TERADATA_3: Prepared: on connection 2
SELECT * FROM STG_DB."TABLE1"
 
TERADATA: trforc: COMMIT WORK
TERADATA: trforc: COMMIT WORK
NOTE: The SQL statement cannot be executed directly against the database because it involves heterogeneous data sources.
 
TERADATA_4: Prepared: on connection 1
SELECT * FROM STG_DB."TABLE1_ERR"
 
 
TERADATA_5: Prepared: on connection 2
SELECT * FROM STG_DB."TABLE1"
 
NOTE: SAS threaded sort was used.
TERADATA: trqacol- No casting. Raw row size=10, Casted size=10, CAST_OVERHEAD_MAXPERCENT=20%
 
TERADATA_6: Prepared: on connection 2
SELECT "COL5" FROM STG_DB."TABLE1"
 
ERROR: Teradata prepare: MTDP: EM_DBC_CRASH_A(220): Network connection to the DBC was lost. SQL statement was: SELECT "COL5" FROM
       STG_DB."TABLE1" .
ERROR: PROC SQL runtime error for operation=sqxsrc.
ERROR: An error has occurred.
TERADATA: trforc: COMMIT WORK
TERADATA: trforc: COMMIT WORK  failed
ERROR: ERROR: Commit: MTDP: EM_DBC_CRASH_A(220): Network connection to the DBC was lost.
    ROLLBACK has been issued (Any rows processed after the last COMMIT are lost).
TERADATA: trforc: COMMIT WORK
TERADATA: trforc: COMMIT WORK  failed
ERROR: ERROR: Commit: MTDP: EM_DBC_CRASH_A(220): Network connection to the DBC was lost.
    ROLLBACK has been issued (Any rows processed after the last COMMIT are lost).
NOTE: PROC SQL set option NOEXEC and will continue to check the syntax of statements.
107        quit;
NOTE: The SAS System stopped processing this step because of errors.
NOTE: SAS set option OBS=0 and will continue to check statements. This might cause NOTE: No observations in data set.
NOTE: PROCEDURE SQL used (Total process time):
      real time           4:10:49.27
      user cpu time       1:06:02.68
      system cpu time     30:06.61
      memory              265945.73k
      OS Memory           272940.00k
      Timestamp           03/16/2014 10:18:35 AM
      Page Faults                       23
      Page Reclaims                     132698
      Page Swaps                        0
      Voluntary Context Switches        18277898
      Involuntary Context Switches      14349900
      Block Input Operations            726750456
      Block Output Operations           518342256
      

ERROR: Errors printed on page 4.

NOTE: SAS Institute Inc., SAS Campus Drive, Cary, NC USA 27513-2414
NOTE: The SAS System used:
      real time           4:10:50.78
      user cpu time       1:06:02.78
      system cpu time     30:06.66
      memory              270981.82k
      OS Memory           272940.00k
      Timestamp           03/16/2014 10:18:35 AM
      Page Faults                       43
      Page Reclaims                     139890
      Page Swaps                        0
      Voluntary Context Switches        18278062
      Involuntary Context Switches      14349909
      Block Input Operations            726754264
      Block Output Operations           518342552
      
 

End of Record marker with TPT - response (1) by SuSeSi


A buffer is created based on the schema. The record is scanned up to the next delimiter, and the value is copied into the buffer.
Now consider your case, where the last column does not have a trailing delimiter: when the value has more or fewer characters than the buffer size, the load errors out. Without a trailing delimiter, the last column must contain exactly the number of characters defined in the schema.
If a delimiter follows the last column, then TPT knows how many characters the column contains.
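A small illustration with made-up sample rows (not from the original job) shows how a trailing delimiter changes the field count a delimiter-driven parser sees:

```shell
# Same record, with and without a trailing delimiter after the last column.
printf '12345,desc one,AB,tag1,tag2\n'  > no_trailing.csv
printf '12345,desc one,AB,tag1,tag2,\n' > trailing.csv
awk -F',' '{print NF}' no_trailing.csv   # prints 5: last field ends at end-of-line
awk -F',' '{print NF}' trailing.csv      # prints 6: the extra delimiter closes field 5
```

With the trailing delimiter, the last schema column is explicitly terminated rather than being bounded only by the end-of-record marker.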
 
 
