Channel: Teradata Forums - Tools

Named Pipe jobs failing on checkpoint file - response (1) by feinholz

I am not quite sure what you mean by TPTLOAD "named pipe" jobs.
I am assuming you mean that the job is sending the data through named pipes to the Load operator.
Since you also mentioned the Data Connector operator, the job is doing one of two things: it is either trying to open and manage named pipes natively, or you are using the Named Pipe Access Module (NPAM).
If you are not using the NPAM, then you should. We do not recommend using native named pipes with TPT; they are very hard to manage and they do not support restarts.
The NPAM supports restarts because it creates its own checkpoint (also called fallback) file to store the data it sends to the DC operator.
In any event, the DC operator will always create a checkpoint file because it has information of its own to keep track of while a job is running. Users should not concern themselves with any temp files created under the covers.
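For reference, a DataConnector producer that reads from a pipe through the NPAM can be sketched roughly like this (the pipe path and the init string below are placeholders; check the Access Module Reference for the exact options on your platform):

DEFINE OPERATOR FILE_READER
DESCRIPTION 'Read rows from a named pipe via the Named Pipe Access Module'
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUT_SCHEMA
ATTRIBUTES
(
    VARCHAR FileName            = '/tmp/load_pipe',    /* placeholder pipe path */
    VARCHAR Format              = 'Delimited',
    VARCHAR TextDelimiter       = '|',
    VARCHAR OpenMode            = 'Read',
    VARCHAR AccessModuleName    = 'np_axsmod.so',      /* NPAM library on Linux/UNIX */
    VARCHAR AccessModuleInitStr = 'ld=/tmp/npam_logs'  /* placeholder init string; the NPAM
                                                          also maintains its fallback
                                                          (checkpoint) file for restarts */
);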
 


Write Hierarchical file using FastExport - forum topic by Archer

Hi All,
I would like to know if there is a way to write a hierarchical file using the FastExport utility.
For example, let's assume we have five levels of hierarchy (L1, L2, L3, L4, L5).
And let's assume each L1 can have two L2's, three L3's, one L4, and one L5.
Each of these levels is a separate table.
Assuming there are two records at the L1 level, the FastExport file should look something like:
L1 (1)
L2 (Record corresponding to L1(1))
L2 (Record corresponding to L1(1))
L3 (Record corresponding to L1(1))
L3 (Record corresponding to L1(1))
L3 (Record corresponding to L1(1))
L4 (Record corresponding to L1(1))
L5 (Record corresponding to L1(1))
L1 (2)
L2 (Record corresponding to L1(2))
L2 (Record corresponding to L1(2))
L3 (Record corresponding to L1(2))
L3 (Record corresponding to L1(2))
L3 (Record corresponding to L1(2))
L4 (Record corresponding to L1(2))
L5 (Record corresponding to L1(2))

 

Is it possible to write such a hierarchical file using FastExport? I would appreciate it if anyone could provide some pointers.

Write Hierarchical file using FastExport - response (1) by Raja_KT

To be clearer, maybe you can provide an example, because the description may be misleading.
I feel a combination of FastExport with Unix scripting may help here.
The word "hierarchical" suggests formats like XML, JSON, GeoJSON, and so on.
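As a rough sketch of that idea (all table and column names here are made up): tag each level's rows with the L1 key plus a level number, UNION them in one FastExport SELECT, and let ORDER BY group the output per L1 record:

.LOGTABLE utillog.fexp_log;
.LOGON tdpid/user,password;
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE hier_out.txt MODE RECORD FORMAT TEXT;

SELECT CAST(CAST(l1_id AS CHAR(10)) || '1' || l1_payload AS CHAR(80))
FROM   l1_table
UNION ALL
SELECT CAST(CAST(l1_id AS CHAR(10)) || '2' || l2_payload AS CHAR(80))
FROM   l2_table
/* ...repeat for l3_table, l4_table, l5_table with tags '3' to '5'... */
ORDER BY 1;   /* sorts on the L1 key first, then the level tag, so each
                 L1 record is followed by its L2..L5 children */

.END EXPORT;
.LOGOFF;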

Teradata SQL Grammar - response (1) by Raja_KT

Will the syntax diagrams in the document help you?

TERADATA TPUMP - forum topic by somasankaram

Hi all,
Can anyone let me know the detailed functionality of the TPump utility? On a PI table, the row distribution is fairly even. But in the case of NoPI, how is even distribution achieved?
 
In the case of FastLoad on NoPI tables, if the number of FastLoad sessions is less than the number of AMPs in the system, then only those AMPs will be used for loading the data, and the deblocker task will perform a round-robin technique to distribute the rows evenly to the other AMPs.
In the case of a TPump load on a NoPI table, I read that the hashing is done on the query ID and all the rows that TPump fetches will be loaded to that AMP. Let's say we have written a query to load the data into a NoPI table using TPump. The query may fetch one row or multiple rows (say, 100 rows). Since in TPump the hashing is done on the query ID, the output is a 32-bit row hash. If we take 16 or 20 bits to map to an AMP, all 100 rows go to the same AMP. If this is the case, does it not lead to skewing?
 
Is the deblocker task performing a round-robin technique, as in the case of FastLoad, to accomplish even distribution?
 
Please help me understand the TPump utility's functionality.
 
Thanks in advance,
 
Best Regards,
Shankar

TERADATA TPUMP - response (1) by dnoeth

Hi Shankar,
all rows go to the same AMP and are appended at the 'end' of the table, probably in a single datablock; that's why it's much more efficient than distributing them across multiple AMPs and storing them in multiple blocks (in the worst case, reading/writing one datablock for each input row).
The next pack of rows will be stored on a different AMP, based on the next query ID. If you have lots of rows, randomly distributing those packs normally results in a good distribution.
And if it's a small number of rows, you simply don't care :-)
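The "pack" is simply the PACK factor from the TPump script. A minimal sketch (all names made up) of where it is set:

.LOGTABLE utillog.tpump_log;
.LOGON tdpid/user,password;
.BEGIN LOAD
       ERRORTABLE mydb.tgt_err
       SESSIONS 4
       PACK 40     /* 40 statements per request; on a NoPI target the whole
                      pack hashes on its query ID and lands on one AMP,
                      the next pack on another */
       ROBUST ON;
.LAYOUT lay1;
.FIELD c1 * VARCHAR(10);
.FIELD c2 * VARCHAR(50);
.DML LABEL ins1;
INSERT INTO mydb.tgt (c1, c2) VALUES (:c1, :c2);
.IMPORT INFILE data.txt
        FORMAT VARTEXT '|'
        LAYOUT lay1
        APPLY ins1;
.END LOAD;
.LOGOFF;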
 

Named Pipe jobs failing on checkpoint file - response (2) by goldminer

As always, thanks Steve! We had a contracting firm set up the Data Services configuration, and they chose generic named pipes over the NPAM for some reason. I will try to get our ETL team to switch it over.
 
Joe

Write Hierarchical file using FastExport - response (2) by feinholz

The basic answer to the question is "no".
 
The record formats are explicitly documented in the FastExport manual, and hierarchical records are not supported.
 


MLOAD and loading of: Empty Strings and Null Strings - response (6) by TDDeveloper

Thanks Stevef. Even without the TRIM, the blank columns are loaded as empty strings with the TEXT format. With the VARTEXT format, both the blank columns and the NULL columns (two adjacent delimiter characters) are loaded as NULL. This is what I find inconsistent; do you agree?
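To make it concrete, for a hypothetical three-field record with '|' as the delimiter, my reading of the manual is that VARTEXT should distinguish these:

A||C      field 2: two adjacent delimiters -> NULL
A| |C     field 2: a single blank          -> the one-character string ' ', not NULL

but in my test both end up as NULL.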

MLOAD and loading of: Empty Strings and Null Strings - response (7) by feinholz

After the holidays we will take a look. Inconsistent? I would just call it wrong, if it is true.
 
Ivy posted above that she showed a VARTEXT job loading blanks into the DBS, not NULLs, when the source data had blanks.
 

Teradata SQL Grammar - response (2) by sundarvenkata

I want a textual representation of the grammar, like the BNF grammar here: http://savage.net.au/SQL/sql-92.bnf.html

TPT ODBC Operator - response (8) by secondino

SteveF,
I downloaded and installed the TTU 15.0 client, but I do not see the Progress DataDirect drivers anywhere. Is it because I need the eFixes and the license?
Scott

TERADATA TPUMP - response (2) by somasankaram

Hi dnoeth,
Thanks for the response.
Is there any document that explains TPump in detail?
 
Best Regards,
Shankar

TERADATA TPUMP - response (3) by dnoeth

Hi Shankar,
everything's in the manuals, but not always easy to find :-)
http://www.info.teradata.com/HTMLPubs/DB_TTU_15_00/Database_Management/B035_1094_015K/ch08.060.076.html

TPT Load Operator Table Lock - response (8) by alchang

I am curious how to release a TPT load job.
I tried to execute this SQL:
release tptload <Table_Name>;
but it failed.
Thanks.


TPT Load Operator Table Lock - response (9) by feinholz

If you are using the Load operator, the only way to release a lock is to complete the Application Phase.
 
The only other option is to drop the target table (and error tables) and start the job from the beginning.
 
Are you using the Load operator or Update operator?
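If you do go the drop-and-restart route, it amounts to something like this (the _ET/_UV names are only placeholders; use whatever your script set for TargetTable, ErrorTable1, and ErrorTable2):

DROP TABLE mydb.tgt;      /* target table locked by the Load operator    */
DROP TABLE mydb.tgt_ET;   /* acquisition-phase error table (placeholder) */
DROP TABLE mydb.tgt_UV;   /* application-phase error table (placeholder) */

Then recreate the target table and rerun the TPT job from the beginning. Note that RELEASE MLOAD applies only to MultiLoad (the Update operator), not to the Load operator.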

TPT ODBC Operator - response (9) by feinholz

Support for bundling the drivers was added post-GCA in an efix.
Please download all of the latest 15.0 efix patches for TPT.
You will then need to contact Mark Haydostian (Mark.Haydostian@Teradata.com) for the license key.
 

MLOAD and loading of: Empty Strings and Null Strings - response (8) by Ivyuan

Hi TDDeveloper,
Can you share with me your MultiLoad script and the target table definition? Thanks!
--Ivy.

Teradata tpt - response (7) by akd2k6

Thanks Steve, it worked.
I am unloading the data with FastExport or TPT to a comma-separated file using "select * from tab;" and trying to load the same unloaded CSV file into the target table.
The source and target table columns are not all VARCHAR; they can be CHAR, DATE, TIMESTAMP, or DECIMAL.
 
Can this loading be achived by MultiLoad? If so, can you please give the options in MultiLoad?

Teradata tpt - response (8) by feinholz

MultiLoad is only a load tool, so I am not sure what you mean by "archive".
MultiLoad can load CSV data. The record format is called "vartext".
In the schema definition, all columns must be defined as VARCHAR in order to use vartext loading.
The MultiLoad Reference Manual has the necessary information for you.
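For example, a layout for a three-column CSV might look like this (names and lengths are made up; the VARCHAR fields are converted to the CHAR/DATE/DECIMAL target columns on insert):

.LAYOUT csvlay;
.FIELD txn_id   * VARCHAR(18);    /* target column DECIMAL(18,0) */
.FIELD txn_date * VARCHAR(10);    /* target column DATE          */
.FIELD txn_desc * VARCHAR(100);   /* target column CHAR(100)     */

.IMPORT INFILE unload.csv
        FORMAT VARTEXT ','
        LAYOUT csvlay
        APPLY ins_label;          /* .DML LABEL ins_label defined elsewhere */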
 
Question: why are you unloading with FastExport (or TPT) and loading with MultiLoad? What is the ultimate goal here? And just FYI, you should always be using TPT.
