If we run a TPT script directly using tbuild, it generates a job id; based on that, we can find the current status using twbstat and view the log file using tlogview.
But if we run TPT from Informatica, how do we get the job id and the TPT log file?
Regards,
Sunny
TPT log file when run from Informatica - forum topic by adisunny2002
Teradata Sql Assistant V13.10 error - forum topic by rychitre
I installed Teradata SQL Assistant V13.10 on a new machine. When I open it, it gives the error 'Microsoft Access Database Engine [ACEDAO.dll] not found, or not registered correctly. History will not be available.' With this error, we are not able to see the history. How can I fix this?
TPT log file when run from Informatica - response (1) by feinholz
Teradata Sql Assistant V13.10 error - response (1) by MikeDempsey
We stopped using ACEDAO a long time ago because there were issues with Microsoft's support for multiple versions [32 vs 64 bit] of MS Office. (They don't allow multiple versions on the same system.)
You must be using a very old version of 13.10.
Download the latest version, 13.11.0.7. (This is a TTU 13.10 version.)
(That version was released 18 months ago.)
TPT & Quoted data bug? - response (1) by feinholz
This issue was fixed in 14.10.00.06.
Always a good idea, though, to download all of the latest patches for all packages of TPT 14.10.
Trying to understand the Secondary Index - response (10) by ranjith2009oracle
Hi,
With a NUSI, both the base table and the index subtable are distributed by the base table's primary index. That is the reason the base table row and its subtable row exist on the same AMP. With a USI, the base table is distributed by the primary index, while the subtable is distributed by the hash of the secondary index value, so the base table row and its subtable row generally end up on different AMPs.
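The distinction can be seen in plain DDL. A hypothetical table illustrating both index types (table and column names are assumptions):

```sql
-- Base table rows are hashed to AMPs by the primary index (order_id).
CREATE TABLE orders
( order_id INTEGER,
  cust_id  INTEGER,
  order_no CHAR(10) )
PRIMARY INDEX (order_id);

-- USI: index subtable rows are hashed on order_no, so a subtable row
-- usually lands on a different AMP than its base row (two-AMP lookup).
CREATE UNIQUE INDEX (order_no) ON orders;

-- NUSI: index subtable rows are stored on the same AMP as the base rows
-- they point to (all-AMP lookup).
CREATE INDEX (cust_id) ON orders;
```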
How to execute a macro using TPT operator? - response (8) by vincent91
I don't know how, but maybe someone has a suggestion on how to simulate a condition based on ACTIVITYCOUNT anyway?
I really need to introduce a condition in my job.
Thanks
MLOAD and loading of: Empty Strings and Null Strings - response (3) by TDDeveloper
Ivyuan/Fred, Thanks for your reply.
The behavior of loading NULL is what I expect. But try loading a fixed-length file with a column value of all blanks: that column is loaded as an empty string rather than NULL. So there is no way, except using a CASE statement, to load NULL instead of an empty string when loading from a fixed-format text file.
I am not sure whether this inconsistency in the treatment of loading a blank column is documented.
MLOAD and loading of: Empty Strings and Null Strings - response (4) by feinholz
Any column value with all blanks should be loaded as a column of all blanks. Not NULL or empty string. The only way to load an empty string is to use the quoted vartext feature and quote your data. The empty string would then be denoted by "".
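As a sketch of the quoted vartext idea with the DataConnector operator (attribute values here are assumptions; check them against your TTU version):

```
VARCHAR Format = 'Delimited',
VARCHAR TextDelimiter = ',',
VARCHAR QuotedData = 'Optional'
```

With quoting enabled, a record written as "John","","Smith" loads the middle field as an empty string, while an unquoted zero-width field, as in John,,Smith, is treated as NULL.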
MLOAD and loading of: Empty Strings and Null Strings - response (5) by feinholz
There might be another way.
You mentioned that you have fixed length fields.
This usually means you are using the TEXT format and not VARTEXT.
Have you tried the TRIM function?
A field of all blanks, once the blanks are trimmed, should yield an empty string.
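Putting the two suggestions together in the load statement (table and column names are assumptions), TRIM plus the CASE mentioned earlier turns an all-blank fixed-width field into NULL:

```sql
INSERT INTO mytable ( col1 )
VALUES ( CASE WHEN TRIM(:col1) = '' THEN NULL ELSE :col1 END );
```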
How to execute a macro using TPT operator? - response (9) by feinholz
As already noted, TPT does not currently support conditional logic.
As far as I know there is no other way to simulate conditional logic.
Why do you want to convert a BTEQ script to TPT?
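One common workaround is to push the conditional logic out to the shell that invokes tbuild. A minimal sketch, with stub functions standing in for the real tbuild calls (the script and job names would be yours):

```shell
#!/bin/sh
# Stubs standing in for the real invocations, e.g. "tbuild -f step1.tpt job1",
# so this sketch is self-contained; replace them with the real commands.
step1() { return 0; }
step2() { echo "step2 ran"; }

# The TPT script language has no IF/ELSE, but the calling shell can branch
# on the exit code of each tbuild run.
if step1; then
    step2                                    # run only when step1 succeeded
else
    echo "step1 failed; skipping step2" >&2
fi
```

The same pattern extends to checking a row count: export the count from the first step to a file, read it in the shell, and branch on it.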
plink ssh connection with no - forum topic by matan7779
I am trying to make an SSH connection from C# using plink, but I get empty output.
Process p = new Process();
p.StartInfo.FileName = @"plink.exe";
// Spaces are needed between the arguments; concatenating "" runs them together.
param = "-ssh -pw " + Pass + " " + User + "@" + Host + " " + Cmd;
string cd = Environment.CurrentDirectory;
if (File.Exists("plink.exe") == false)
{
throw new Exception("SSHClient: plink.exe not found.");
}
else
{
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardInput = true;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.StartInfo.CreateNoWindow = true;
p.StartInfo.Arguments = param;
p.Start();
// plink already receives the command through Arguments; writing it to
// standard input as well only confuses the remote shell.
standerOut = p.StandardOutput;
string dd = p.Responding.ToString();
// Read stdout in one place only: calling ReadToEnd() and then looping with
// ReadLine() on the same stream leaves nothing for the second reader.
string f = standerOut.ReadToEnd();
strReturn = f;
p.WaitForExit();
// ReadToEnd() returns the error text; ToString() on the reader returns only its type name.
string x = p.StandardError.ReadToEnd();
}
MessageBox.Show(strReturn);
1. dd is true
2. f is empty
3. for x I get "System.IO.StreamReader"
4. for strReturn I get "" (empty result)
Please please help
tx
matan
plink ssh connection with no - response (1) by matan7779
And from the cmd line it looks OK:
C:\>plink.exe -pw XXXXX -ssh root@XXXXXXXX ls -a
..
.history
.profile
..
..
..
C:\>
Any idea?
tx
Matan
plink ssh connection with no - response (2) by matan7779
I managed to fix it.
Thanks anyway, all :)
How to execute a macro using TPT operator? - response (10) by Terrybuck
Yes, in SQL there is the IF-ELSE statement, used to express any type of condition.
Teradata tpt - response (5) by akd2k6
Hi Steve, I am now using TPT unload and load. I need to execute the SQL below:
SQL = 'SELECT * FROM xxx.WZ1D02_BITPROC where CAST(fec_proceso as date format 'YYYY-MM-DD') >= '2014-12-02';'
I am keeping this in the job variable file but getting this error:
Teradata Parallel Transporter Version 14.10.00.05
TPT_INFRA: TPT02026: Error: Line 12 of Local Job Variable File:
Quoted string lacks a single quote terminator character (')
Can you please advise how I can write the SQL query in the job variable file? My current job variable file entry is below, and it is generated completely dynamically before calling tbuild.
UsrId = 'xxxxx'
Pwd = 'xxxxx'
Tab = 'WZ1D02_BITPROC'
DB = 'xxx'
Tdp = 'xxx'
PrivateLog1 = 'file_writer_privatelog1_25559114'
PrivateLog2 = 'file_writer_privatelog2_25559114'
PrivateLog3 = 'file_writer_privatelog3_25559114'
Database_Table = 'xxx.WZ1D02_BITPROC'
OP_FILE ='WZ1D02_BITPROC.20141227200452.dat'
DIR_PATH = 'xxx/xxx/'
SQL = 'SELECT * FROM xxx.WZ1D02_BITPROC where CAST(fec_proceso as date format 'YYYY-MM-DD') >= '2014-12-02';'
Cached credentials causing ABU jobs to fail? - response (1) by seven11
sounds odd
The only thing I can think of is that the password in question has a character that doesn't work with the configuration script. I would avoid passwords containing dollar signs, single quotes, or double quotes. I suggest testing with a plain password containing just numbers and letters (first character a letter).
FYI, you should be able to rerun the configuration script targeting just the RDBMS login details with the "-au" switch:
/opt/teradata/abu_service/service/bin/configure -au
Teradata tpt - response (6) by feinholz
A string attribute value must be enclosed in single quotes.
Thus, you need to escape the single quotes inside that string.
You do this by doubling every single quote inside the quoted string.
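Applied to the variable file above, the entry would become (only the inner quotes change, each one doubled):

```
SQL = 'SELECT * FROM xxx.WZ1D02_BITPROC where CAST(fec_proceso as date format ''YYYY-MM-DD'') >= ''2014-12-02'';'
```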
Named Pipe jobs failing on checkpoint file - forum topic by goldminer
We utilize an ETL tool from a vendor I wish to remain anonymous (starts with an S and ends with a P) :) This particular vendor has chosen to execute TPTLOAD via tbuild instead of the TPTAPI; there is no option in the GUI to execute via the TPTAPI. We are in the process of upgrading this tool and are starting to experience random named-pipe failures that may be related to checkpointing. When a TPTLOAD job fails, it creates a checkpoint file in the checkpoint directory on the server. This is created in case the user wants to restart the job from the failure point. That is all very logical and straightforward in my mind. Where things start to fall apart is when a "named pipes" TPTLOAD is executed via tbuild. By definition a named-pipe job cannot be restarted via a checkpoint file; that feature is reserved for physical files being read from disk. If a TPTLOAD "named pipes" job fails, why would the Data Connector look for a checkpoint file in the checkpoint directory on the server and then fail if it is not found? If the checkpoint file were found, it could not be used as part of a "named pipes" restart process anyway... correct?
If a TPTLOAD "named pipes" job fails, we normally just clean up the error tables and restart the job from the beginning. We can do this because we do not have very tight SLAs or very large amounts of data. Sometimes (not sure what triggers this), when we try to restart, the job fails again because it found a checkpoint file associated with a prior run failure. A couple of specific questions that I have are:
1. Are checkpoint files always created for TPTLOAD "named pipe" jobs even though they will never be used? If so, why?
2. When a TPTLOAD "named pipe" job fails, are the checkpoint files always created and left in checkpoint files directory on the server? If so, do these files always need to be deleted before the "named pipe" job can be re-submitted?
3. Any idea why a GUI generated tbuild "named pipes" job would fail, be re-submitted, and then fail again because a checkpoint file exists?
I am really trying to understand the relationship between "named pipe" jobs and checkpoint files since they seem to be mutually exclusive.
Thanks,
Joe
Teradata SQL Grammar - forum topic by sundarvenkata
Hi,
I am planning to write a custom parser for Teradata SQL. Is the Teradata SQL grammar available for download somewhere? Is this grammar an open specification?
Thanks,
Sundar
Script-based TPT is a batch job that runs by itself.
When Informatica is used, the ETL tool uses TPTAPI, meaning it has direct access to the operators themselves, but there is no TPT "job"; it is an ETL job.
TPTAPI does not generate a job id or a log.
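For contrast, the script-based flow described in the original question, where a job id and logs do exist, looks roughly like this (the script and job names are illustrative):

```
tbuild -f load_script.tpt myjob     (prints the generated job id, e.g. myjob-123)
twbstat                             (lists the status of running TPT jobs)
tlogview -j myjob-123               (displays the log for that job id)
```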