Channel: Teradata Forums - Tools

Error Starting SQL Assistant v14.10.0.4 - response (3) by MikeDempsey


If this still isn't fixed, email me at mike.dempsey@Teradata.com.
It is actually a problem with your Windows setup rather than with SQL Assistant, but it is simply a matter of updating your registry to correct the pointer to 'My Documents'. (You will need admin rights to do it, so you should probably contact your IT support folks.)


Issues in querying Informix Database using Teradata SQL assistant via ODBC - response (2) by MikeDempsey


This was first reported a year or so back.
When I traced the application I found that the Microsoft .Net Data Provider for ODBC was specifying a length of 2 for the data buffer when retrieving character data.
That is only long enough to hold the null terminator at the end of a string, so the application always gets back empty strings.
This occurs only when using the newer Informix ODBC drivers (3.5 and above, I believe).
I tried changing a number of different settings within the ODBC DSN, but I could not persuade it to use a more reasonable buffer size.
Unfortunately SQL Assistant has no direct control over what is passed to the Informix driver. As a .Net application it always works indirectly through the .Net Data Provider, which in turn talks to the Informix ODBC driver.
Informix did tell us that they had 'streamlined' the ODBC driver for performance ... and apparently something they changed causes Microsoft's provider to choose a far too small buffer size, but we were never able to determine what that was.
As a result, NO version of SQL Assistant after 12.0 will work correctly with Informix unless you use the older ODBC driver.
(Version 12.0 should work fine: it is not a .Net application, so it talks to the Informix ODBC driver directly and uses the correct buffer size.)
 

Bug in SQL Assistant 15.00 - response (1) by MikeDempsey


In 15.0 we changed the shortcut for Toggle Comment to Ctrl+/.
This is more consistent with other apps and allows us to use Ctrl+D to remove (Delete) all bookmarks.
Although almost anything can be changed using 'Customize', there was code in the app that detected Ctrl+D and changed it back to Ctrl+/, so it was undoing your change every time the application restarted.
In the latest efix that should only happen the first time you run 15.0.
 

SQL Assistant v.15.00 Keyboard Shortcut problems - Comment - response (1) by MikeDempsey


In 15.0 Ctrl+D is used to remove Bookmarks while Ctrl+/ is used to toggle comments.
The latest efix also allows you to use Toggle Comment to comment the current line when no text is selected.
(It does however use /* ... */ comments rather than -- comments.)
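For example (illustrative SQL only), placing the cursor on the line

  SELECT * FROM Employee;

and using Toggle Comment with nothing selected would produce

  /*SELECT * FROM Employee;*/

rather than

  --SELECT * FROM Employee;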

SQL Assistant problem - response (4) by MikeDempsey


Starting with SQL Assistant 15.0, the option "Submit only the selected query text, when highlighted" now also applies to Import.

TERADATA 13.10 - DATASTAGE 8.5 Issues with FastLoad functionality - response (2) by Fred


Yes, the DataStage 8.5 Teradata connector uses TPTAPI.
A TPT job with LOAD STANDALONE operator can be used to release the lock.
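Roughly speaking, that standalone job is just an APPLY to the Load operator with no DML and no SELECT. A minimal sketch (the system name, logon and table names below are placeholders; the exact syntax, and any additional attributes such as LogTable/ErrorTable1/ErrorTable2, should be checked against the "standalone" Load operator examples in the TPT documentation):

DEFINE JOB RELEASE_LOAD_LOCK
DESCRIPTION 'Release the loader lock left by a failed load'
(
    DEFINE OPERATOR LOAD_OPERATOR
    TYPE LOAD
    SCHEMA *
    ATTRIBUTES
    (
        VARCHAR TdpId        = 'mysystem',          /* placeholder */
        VARCHAR UserName     = 'myuser',            /* placeholder */
        VARCHAR UserPassword = 'mypassword',        /* placeholder */
        VARCHAR TargetTable  = 'mydb.locked_table'  /* the table holding the lock */
    );

    /* No DML and no SELECT: the Load operator runs standalone against the target table */
    APPLY TO OPERATOR (LOAD_OPERATOR);
);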

Mload Performance Issue - response (4) by Ivyuan


Compared with the old output, this part looks suspicious:
**** 09:38:23 UTY0827 A checkpoint has been taken, recording that input record
     10000 has been processed for IMPORT 1 of this MultiLoad Import task.
**** 09:43:42 UTY1812 A checkpoint is being initiated because 20000 output
     records have been sent to the RDBMS.
Was the data fed in the same way? This could be checked by reviewing the "IMPORT" command in the MultiLoad script.
--Ivy.

MLOAD ERROR : "UTY4019 Access module error '4' received during 'File open' operation: 'Requested file not found'" - response (2) by SYAMALAMEDURI


We load data to Teradata tables from mainframe files using an MLOAD loader connection, so it might be the Named Pipe Access Module. If that is not the case, can you please let me know how to check which Access Module is being used?
We use MLOAD version 13.10.00.006.
Thanks in advance!!!


MLOAD ERROR : "UTY4019 Access module error '4' received during 'File open' operation: 'Requested file not found'" - response (3) by Ashoktera


Is it a scheduling issue? The job runs first and the mainframe file arrives later. That could explain why it finds the file on restart and runs successfully with no changes.

mload with NOSTOP option ends with Return Code 4 - forum topic by ECernega


Hello,
In my mload script I have the following:
0037 .IMPORT INFILE xxxx
        FROM 4
        FORMAT  VARTEXT '^\'NOSTOP
        LAYOUT &load_table.
        APPLY Inserts;
The MultiLoad skips the errors and inserts the records that satisfy the conditions:
 
     ========================================================================
     =          MultiLoad Initial Phase                                     =
     ========================================================================
**** 01:00:27 UTY0829 Options in effect for this MultiLoad import task:
     .       Sessions:    9 session(s).
     .       Checkpoint:  15 minute(s).
     .       Tenacity:    4 hour limit to successfully connect load sessions.
     .       Errlimit:    No limit in effect.
     .       AmpCheck:    In effect for apply phase transitions.
**** 01:00:28 UTY0812 MLOAD session(s) requested: 9.
**** 01:00:28 UTY0815 MLOAD session(s) connected: 9.
     ========================================================================
     =          MultiLoad Acquisition Phase                                 =
     ========================================================================
**** 01:00:31 UTY4014 Access module error '61' received during 'pmReadDDparse'
     operation: 'Warning, too few columns !ERROR! Delimited Data Parsing error:
     Too few columns in row 1'
**** 01:00:31 UTY1808 Record 1 of Import 1 rejected due to preceding error.
**** 01:00:31 UTY4014 Access module error '61' received during 'pmReadDDparse'
     operation: 'Warning, too few columns !ERROR! Delimited Data Parsing error:
     Too few columns in row 2'
**** 01:00:31 UTY1808 Record 2 of Import 1 rejected due to preceding error.
**** 01:00:31 UTY4014 Access module error '61' received during 'pmReadDDparse'
     operation: 'Warning, too few columns !ERROR! Delimited Data Parsing error:
     Too few columns in row 3'
**** 01:00:31 UTY1808 Record 3 of Import 1 rejected due to preceding error.
**** 01:00:32 UTY0826 A checkpoint has been taken, recording that end of file
     has been reached for IMPORT 1 of this MultiLoad Import task.
**** 01:00:32 UTY1803 Import processing statistics
     .                                       IMPORT  1     Total thus far
     .                                       =========     ==============
     Candidate records considered:........       11632.......       11632
     Apply conditions satisfied:..........       11629.......       11629
     Candidate records not applied:.......           0.......           0
     Candidate records rejected:..........           3.......           3
 
But it ends with return code 4:
     ========================================================================
     =          Logoff/Disconnect                                           =
     ========================================================================
**** 01:00:38 UTY6216 The restart log table has been dropped.
**** 01:00:38 UTY6212 A successful disconnect was made from the RDBMS.
**** 01:00:38 UTY2410 Total processor time used = '1.02 Seconds'
     .       Start : 01:00:26 - WED NOV 05, 2014
     .       End   : 01:00:38 - WED NOV 05, 2014
     .       Highest return code encountered = '4'
I have no other errors except the rejected records.
 
If I use TPump with the NOSTOP option, the return code is 0.
My question: is this the normal behavior of MultiLoad and TPump when the NOSTOP option is used?
I was expecting both (MLoad and TPump) to behave the same way and finish with return code 0.
I'm running on TD 14.10.
 
Thank you,
EC
 


how to select sessions,tenacity,sleep,checkpoint etc for fastload,multiload,tpump utilities - response (7) by vincent91


Hello,
We implemented this new architecture in our system: the PowerCenter real-time module receives messages from MQ, and each message is loaded into a Teradata table using the TPT STREAM operator.
My question concerns the right number of sessions to set. We have two parameters: Min sessions and Max sessions.
Min sessions = 1
Max sessions = ?
We think that Max sessions = 1 is enough because we load (TPT STREAM) only one short message at a time.
Can you tell me if this is a good choice or not?
Thanks

TERADATA 13.10 - DATASTAGE 8.5 Issues with FastLoad functionality - response (3) by jpicon


Hello Steve, Fred,
Your input is highly appreciated. I was clearly missing information on how the Teradata connector works.

Thank you both for commenting on this!
Best Regards

how to select sessions,tenacity,sleep,checkpoint etc for fastload,multiload,tpump utilities - response (8) by feinholz


If you are only loading 1 message at a time (and I am assuming that 1 message means 1 row), then 1 session is fine, and you should set the pack factor to 1 so that the row is sent immediately (and not waiting for a buffer of rows to fill up).
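In plain TPT script terms (PowerCenter exposes the same settings through its TPT connection attributes), the relevant Stream operator attributes would look roughly like this; the logon values are placeholders, and the attribute names should be verified against the Stream operator documentation:

DEFINE OPERATOR STREAM_OPERATOR
TYPE STREAM
SCHEMA *
ATTRIBUTES
(
    VARCHAR TdpId        = 'mysystem',   /* placeholder */
    VARCHAR UserName     = 'myuser',     /* placeholder */
    VARCHAR UserPassword = 'mypassword', /* placeholder */
    INTEGER MinSessions  = 1,
    INTEGER MaxSessions  = 1,
    INTEGER Pack         = 1             /* send each row immediately rather than waiting to fill a buffer */
);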
 

mload with NOSTOP option ends with Return Code 4 - response (1) by Ivyuan


Hi,
The return code handling for this case is different between MultiLoad and TPump.
MultiLoad sets the return code to 4 due to the rejected rows, while TPump does not.
Thanks!
--Ivy.

TPT 15.00 Teradata HDP Sandbox data movement - forum topic by chillerm


Good Evening,
 
I'm trying to prove out using TPT 15.00 to move data back and forth between Teradata and Hadoop. I'm hoping to get the benefit of using FastExport rather than a salvo of Sqoop/TDCH generated queries to pull my data. I've installed the 15.00 TTUs on the Hortonworks Sandbox VM, but am unable to get the jobs going. Currently I'm just trying to use the sample scripts that came with TPT 15.00.
 
Here are my job variables:
 

[root@sandbox tpt_testing]# cat jobvars2.txt
/********************************************************/
/* TPT LOAD Operator attributes                         */
/********************************************************/
TargetTdpId               = 'TD1410VPCOP1'
,TargetUserName           = 'SYSDBA'
,TargetUserPassword       = 'SYS_2012$'
,TargetTable              = 'PTS00030_TBL'

/********************************************************/
/* TPT Export Operator attributes                       */
/********************************************************/
,SourceTdpId              = 'TD1410VPCOP1'
,SourceUserName           = 'SYSDBA'
,SourceUserPassword       = 'SYS_2012$'
,SelectStmt               = 'select * from PTS00030_TBL'

/********************************************************/
/* TPT LOAD Operator attributes                         */
/********************************************************/
,DDLErrorList             = '3807'

/********************************************************/
/* TPT DataConnector Hadoop specific attributes         */
/********************************************************/
,HadoopHost               = '10.0.0.34'
,HadoopJobType            = 'hive'
,HadoopFileFormat         = 'rcfile'
,HadoopTable              = 'PTS00030_TBL'
,HadoopTableSchema        = 'COL1 INT, COL2 STRING, COL3 STRING'

/********************************************************/
/* APPLY STATEMENT parameters                           */
/********************************************************/
,LoadInstances            = 1
[root@sandbox tpt_testing]#

Here are some of the errors / warnings encountered:

     ===================================================================
     =                                                                 =
     =                      Module Identification                      =
     =                                                                 =
     ===================================================================

     Load Operator for Linux release 2.6.32-431.11.2.el6.x86_64 on sandbox.hortonworks.com
     LoadMain   : 15.00.00.05
     LoadCLI    : 15.00.00.04
     LoadUtil   : 14.10.00.01
     PcomCLI    : 15.00.00.34
     PcomMBCS   : 14.10.00.02
     PcomMsgs   : 15.00.00.01
     PcomNtfy   : 14.10.00.05
     PcomPx     : 15.00.00.08
     PcomUtil   : 15.00.00.08
     PXICU      : 15.00.00.02
Teradata Parallel Transporter Hive_table_reader[1]: TPT19006 Version 15.00.00.02
Hive_table_reader[1]: TPT19206 Attribute 'TraceLevel' value reset to 'Statistics Only'.
Hive_table_reader[1]: TPT19010 Instance 1 directing private log report to 'dtacop-root-7782-1'.
Hive_table_reader[1]: TPT19011 Instance 1 restarting.
Hive_table_reader[1]: TPT19003 NotifyMethod: 'None (default)'
Hive_table_reader[1]: TPT19008 DataConnector Producer operator Instances: 1
     TDICU      : 15.00.00.00
Hive_table_reader[1]: TPT19203 Required attribute 'OpenMode' not found.  Defaulting to 'Read'.
Hive_table_reader[1]: TPT19003 ECI operator ID: 'Hive_table_reader-7782'
     CLIv2      : 15.00.00.03

Hive_table_reader[1]: TPT19222 Operator instance 1 processing file 'PTS00030_TBL'.
Hive_table_reader[1]: TPT19424 pmRepos failed. Request unsupported by Access Module (24)
Hive_table_reader[1]: TPT19308 Fatal error repositioning data.
Hive_table_reader[1]: TPT19015 TPT Exit code set to 12.
TPT_INFRA: TPT02263: Error: Operator restart error, status = Fatal Error
Task(SELECT_2[0001]): restart completed, status = Fatal Error

from the TDCH log:

14/11/06 19:34:38 INFO tool.TeradataExportTool: TPTExportTool starts at 1415331278661
14/11/06 19:34:41 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/11/06 19:34:41 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/11/06 19:34:41 INFO hive.metastore: Trying to connect to metastore with URI thrift://sandbox.hortonworks.com:9083
14/11/06 19:34:42 INFO hive.metastore: Connected to metastore.
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
        at com.teradata.hive.mapreduce.TeradataHiveCombineFileInputFormat.getSplits(TeradataHiveCombineFileInputFormat.java:35)
        at com.teradata.hadoop.job.TPTExportJob.runJob(TPTExportJob.java:75)
        at com.teradata.hadoop.tool.TPTJobRunner.runExportJob(TPTJobRunner.java:193)
        at com.teradata.hadoop.tool.TPTExportTool.run(TPTExportTool.java:40)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at com.teradata.hadoop.tool.TPTExportTool.main(TPTExportTool.java:446)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
14/11/06 19:34:43 INFO tool.TeradataExportTool: job completed with exit code 10000

Looking online, the TDCH error seems to be due to an incompatibility between certain jars. However, I have the base HDP Sandbox (2.1) install plus Loom, so not much has really changed. Any help in troubleshooting would be greatly appreciated. I imagine learning how to track that error down to the actual jars that caused it would help...

Thanks!

Setup teradata server for studio express on personal machine - forum topic by aarsh.dave


Hi All,
I use Teradata SQL Assistant to connect to Teradata databases on my work machine, and I have installed Teradata Studio Express on my personal machine for practice purposes.
However, I don't know what server I should connect to. Does it come with a server like Oracle, where we can access tables like Employee and Dept, or create our own database and tables? Can I use my own machine as the server?
Please let me know your suggestions.
Thanks,
Aarsh


TPT 15.00 Teradata HDP Sandbox data movement - response (1) by chillerm


Hey Steve (Feinholz),
 
I told you at Partners I'd mention you by name in my next post. Well, this is it :) Any guidance you have would be great, though this is likely more suited to the TDCH folks than to TPT. Let me know if I should just open a ticket on TAYS too.
 
Thanks!!!

TPT 15.00 Teradata HDP Sandbox data movement - response (2) by feinholz


To better diagnose the issues, I would need to see the entire script.
There are inconsistencies in what is presented.
1. You mention the TPT Export operator, but the Export operator only supports Teradata.
2. Even though you mention the Export operator, the output seems to indicate you are using the DataConnector operator to interface with Hadoop. That would be correct (a rough sketch of that operator is shown below).
3. There is an error message involving the access module, but the DC operator should not be using any access modules when working with Hadoop.
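For reference, the Hadoop side of such a job is normally read by a DataConnector producer. This is a rough sketch only, using the Hadoop attribute names from your job variables; the schema definition is assumed to be declared elsewhere in the script and must match the Hive table:

DEFINE OPERATOR HIVE_TABLE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA HIVE_TABLE_SCHEMA            /* assumed DEFINE SCHEMA elsewhere in the script */
ATTRIBUTES
(
    VARCHAR HadoopHost        = @HadoopHost,
    VARCHAR HadoopJobType     = @HadoopJobType,
    VARCHAR HadoopFileFormat  = @HadoopFileFormat,
    VARCHAR HadoopTable       = @HadoopTable,
    VARCHAR HadoopTableSchema = @HadoopTableSchema
);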
 
Thus, please supply the entire TPT script, and we should be able to help out.
 

TPT Template operators with LDAP authentication - response (17) by KM185041


Thanks for the confirmation, Steve.

fastload on linux - response (8) by Harinath Vicky


IRI is handling this issue (data remapping, including field/column-level endian change support) and creating TD Fast/Multi-load configs via its CoSort or NextForm tools in Eclipse.
