Teradata TPT Script - response (28) by ulrich
BTEQ not remove trailing blank - response (1) by dnoeth
Hi Peter,
AFAIK there's no way to get trailing blanks in a BTEQ REPORT; this format was used to print reports directly, and there's no reason to print trailing blanks :-)
A numeric value like the one in your example is usually right aligned, so if the last column was numeric this would avoid the truncation.
How does the Select within the macro look like?
Dieter
BTEQ not remove trailing blank - response (2) by PeterSchwennesen
Hi Dieter
The outer part of my select is:
So this means that if I format the last column like ________123 instead of 123________ then I get the defined record length? (_ is a space)
br
Peter Schwennesen
SELECT SUBSTR(CAST(A.UgeId AS CHAR(6)),3,6) ||
SUBSTR(CAST(A.ButikId AS CHAR(20)),1,7) ||
CAST(SUBSTR(CAST(PluId AS CHAR(14)), 1 , INDEX(CAST(PluId AS CHAR(14)), '.') - 1) || '' AS CHAR (13)) ||
SUBSTR(CAST((CASE
WHEN A.OmsSalgStk < 0 THEN 10000000
ELSE A.OmsSalgStk + 10000000
END) AS CHAR(8)) ,3) ||
SUBSTR(CAST((CASE
WHEN A.OmsSalgInclMomsDKK < 0 THEN 1000000000
ELSE A.OmsSalgInclMomsDKK + 1000000000
END) AS CHAR(10)),2) ||
SUBSTR(CAST((CASE
WHEN A.AvgPris < 0 THEN 10000000
ELSE A.AvgPris + 10000000
END) AS CHAR(8)) ,2) ||
SUBSTR(CAST(A.Varetekst AS CHAR(30)),1,30) ||
SUBSTR(A.MaengdeEnhedNavn,1,3) ||
SUBSTR(CAST((CASE
WHEN A.Volumen < 0 THEN 1000000
ELSE A.Volumen + 1000000
END) AS CHAR(7)),2) ||
SUBSTR(CAST(A.BestNr AS CHAR(13)),1,13)
(TITLE '') --AS detail
FROM (
SELECT ...
) AS A
ORDER BY 1 DESC
;
BTEQ not remove trailing blank - response (3) by dnoeth
Hi Peter,
yes, correct :-)
Just change the CAST from ANSI to Teradata syntax; an ANSI cast left-aligns numeric values, whereas the Teradata cast right-aligns them:
A.BestNr (CHAR(13))
Caution, if the maximum number of characters (based on the column's FORMAT) is greater than 13 this cast will strip some digits.
Assuming BestNr is a numeric column you might simply add a FORMAT:
A.BestNr (CHAR(13), FORMAT 'Z(13)')
Btw, you don't have to use a SUBSTR and CAST unless you want to extract a specific portion of the string.
And you don't have to concatenate all the columns; you probably do it because you don't want to add all those (TITLE '') clauses:
Change EXPORT REPORT to
EXPORT DATA followed by SET RECORDMODE OFF
This omits the title plus the title dashes (undocumented but quite nice).
If you don't want blanks between columns you can SET SEPARATOR '' or SET SEPARATOR 0.
And finally: instead of typecasting to CHAR/using SUBSTR you might better add FORMATs; this gives you better control over the formatting (and REPORT format automatically applies the cast to CHAR).
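The alignment difference can be sketched like this (BestNr and the FORMAT string are from the exchange above; `mytab` stands in for the derived table in the actual script):

```sql
-- ANSI syntax: left-aligned, so BTEQ REPORT trims what look like
-- trailing blanks at the end of the record
SELECT CAST(A.BestNr AS CHAR(13))          FROM mytab AS A;  -- '123          '

-- Teradata syntax with an explicit FORMAT: right-aligned, always
-- 13 characters wide, so the record keeps its defined length
SELECT A.BestNr (CHAR(13), FORMAT 'Z(13)') FROM mytab AS A;  -- '          123'
```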
Dieter
BTEQ not remove trailing blank - response (4) by PeterSchwennesen
Hi Dieter
Thanks for the information.
Basically this SQL is some code left over from a colleague of mine, and so as not to introduce too many "errors" I have kept the SUBSTR and CAST() as he wrote them.
But your input has been of great help and inspiration.
br
Peter Schwennesen
MLOAD Question - forum topic by rajesharra
BTEQ login help-LDAP and ksh - response (1) by garyadmin2
Assuming your Teradata system has been configured to use LDAP, then using your network userid and password try:
bteq
.logmech LDAP
.logon xxx/$USER,$PASSWD
and any gateway errors will be logged in:
DBC.Software_Event_LogV
To check whether your Teradata database is configured for LDAP, sign onto a node and run this, keying in your password when prompted:
/opt/teradata/tdat/tdgss/xxxx/bin/tdsbind -u userid
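Since the original question was about ksh scripting, the logon sequence above can be wrapped in a small script; `mytdp` and the demo credentials below are placeholders, and this is a sketch, not a tested wrapper:

```shell
#!/bin/sh
# Emit the BTEQ command stream; .logmech must come before .logon.
# $1 = tdpid, $2 = userid, $3 = password
make_logon() {
    printf '.logmech LDAP\n.logon %s/%s,%s\n.logoff\n.quit\n' "$1" "$2" "$3"
}

# In a real run you would read the password without echoing it:
#   printf 'LDAP password: '; stty -echo; read PASSWD; stty echo; echo
# and pipe the stream straight into bteq so it never touches disk:
#   make_logon mytdp "$USER" "$PASSWD" | bteq
make_logon mytdp demo_user demo_pw
```

Piping the commands on stdin avoids leaving the password in a script file on disk.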
BTEQ login help-LDAP and ksh - response (2) by garyadmin3
FastLoad Loop Through Multiple Files for Single Table - response (6) by TonyL
TPT is available on the TTU 13.10 media titled "Teradata Parallel Transporter".
MLOAD Question - response (1) by feinholz
Sequential.
Can BTEQ Prompt User for Password in Scripts - response (11) by sauravrout
What if I am using an LDAP-based ID?
why mload accepts duplicates and why dont fastload - forum topic by bharathsft
Hi,
Can you please tell why MLoad accepts duplicates, what happens internally, and why FastLoad does not allow duplicates even into MULTISET tables?
Please explain.
Thanks in advance
why mload accepts duplicates and why dont fastload - response (1) by dnoeth
FastLoad is older than MLoad; when FL was implemented there were no MULTISET tables in Teradata yet.
They were added later to follow Standard SQL.
FastLoad can't tell whether a row is a duplicate because it's an actual duplicate row or because there was a restart and the same row was sent a second time. FL stores data directly in the target table, and there's no place to store that kind of information.
If this feature were added it would mean more overhead, additional space usage, and a slower load, and then it would no longer be faster than MLoad. And why implement this when there's already a tool (MLoad) which has this feature?
Btw, FL will load duplicates into a NoPI table (because there's no sort involved).
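The NoPI remark can be illustrated with a sketch (database, table, and column names are hypothetical):

```sql
-- On a NoPI table FastLoad just appends the blocks it receives:
-- no hash sort, hence no duplicate-row elimination
CREATE TABLE stage_db.sales_stg (
    sale_id  INTEGER,
    amount   DECIMAL(12,2)
) NO PRIMARY INDEX;
```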
Dieter
Utilities to export data from Teradata to Oracle? - response (2) by vikk02
Oh OK, thanks Steve.
Fast load Vs Mload when AMP is down - forum topic by cheeli
Hello Experts,
No AMPs may go down (i.e., go offline) while FastLoad is processing, whereas MultiLoad can keep working based on the value set through the AMPCHECK parameter. How is this achieved? What kind of processing is done by the load utilities in this case?
Assuming both fallback and non-fallback tables in this scenario, can you please explain this relative to FastLoad and MLoad? Thank you for your time on this.
Mload acquisition and application phase - forum topic by cheeli
Hi Experts,
I am a bit confused about how MLoad works in the acquisition and application phases (how data moves, matching sequence numbers). A friend of mine told me that all the inserts, updates and deletes are done on the work tables (acquisition phase) and are finally just applied to the target table (application phase). Can you please elaborate on this?
Are the DML steps and data rows concatenated, or are they just compared during processing based on some conditions? Also, how is the same row being affected by multiple DMLs handled in MLoad?
I have been trying to get some definitive information on all of this, but without getting a complete picture. Thank you for your time on this.
Mload error - Highest return code encountered = '23' - forum topic by cheeli
Hello experts,
One of our Informatica jobs that uses MLoad (upsert) has failed with return code 23. On further checking, we found that there are some records in the UV table (in the cleanup phase).
SELECT COUNT(*) FROM databasename.UV_tgttable;
**** 21:02:09 UTY0820 Error table databasename.UV_tgttable contains 117 rows
We have verified and confirmed that there are no duplicates in the source table. What might have caused the issue?
We have checked with the DBAs on this and they replied:
"Either your SQL or data has some invalid characters. Please cross-check the source flat file for any invalid characters."
The question is: the UV_tgttable should contain UPI violations, while other violations (constraints and the like) should (may) be in the ET_tgttable. Why was UV_tgttable loaded with 117 rows even though there are no duplicates? Will UV_tgttable contain the rows that have invalid characters?
Later the staging and target tables were truncated and a higher-level team did a fresh load. Now it showed zero rows in UV_tgttable. (We are not aware of any changes done to the source flat file as we don't have access.)
What might be the issue?
If there are rows in the error tables in the cleanup phase, will the MLoad job fail? (I hope not.) So, if we are running MLoad jobs through a batch, how can we identify that records are missing, given that the error tables holding those records are dropped after MLoad completion?
Thank you for your time on this.
FASTLOAD Error - Loading Fixed Length file to Table - forum topic by kiranwt
Hi Experts,
I'm getting the below error while loading a flat file to a table using the FL facility. I'm new to TD and have tried all the options to resolve the issue, but no luck.
My input file (EMP_FLAT) looks like below:
1xxx101000
2yyy202000
3zzz303000
My FL code is:
.SET RECORD UNFORMATTED;
SESSIONS 2;
.LOGON
DROP TABLE EREMP1;
DROP TABLE EREMP2;
DELETE FROM EMP;
DEFINE
:EID(CHAR(1)),
:ENM(CHAR(5)),
:ECD(CHAR(2)),
:ESAL(CHAR(5))
FILE=EMP_FLAT.TXT;
BEGIN LOADING EMP ERRORFILES EREMP1,EREMP2 CHECKPOINT 5;
INSERT INTO EMP (EMPID, EMPNAME, EMPCODE, EMPSAL) VALUES (:EID, :ENM, :ECD, :ESAL);
END LOADING
.LOGOFF
The error I am hitting is:
0008 BEGIN LOADING FPARTY
ERRORFILES ERFPARTY, UVFPARTY
CHECKPOINT 5;
**** 21:47:35 Number of AMPs available: 2
**** 21:47:35 BEGIN LOADING COMPLETE
===================================================================
= =
= Insert Phase =
= =
===================================================================
0009 INSERT INTO FPARTY(PARTYID,PARTYNAME,PARTYCODE,PARTYINCOME)
VALUES (:I_PID,:I_PNM,:I_PCD,:I_PINC);
**** 21:47:35 FDL4816 Statement rejected, cannot match elements with
DEFINEs
I_PID is not defined
I_PNM is not defined
I_PCD is not defined
I_PINC is not defined
===================================================================
= =
= Logoff/Disconnect =
= =
===================================================================
Any inputs on this would be highly appreciated.
Thanks in Advance.
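For reference, the FDL4816 in the pasted log means the :variables in the INSERT don't match any DEFINE field names (note the log shows a different script, FPARTY with :I_PID etc., than the EMP script above). A consistent minimal pairing looks like this sketch; DEFINE field names are written without a leading colon, the widths are taken from the original script and may need adjusting to the actual record layout, and with RECORD UNFORMATTED the record terminator itself usually needs its own field:

```sql
.SET RECORD UNFORMATTED;
SESSIONS 2;
.LOGON tdpid/user,password;        /* placeholders */
DEFINE
  EID  (CHAR(1)),
  ENM  (CHAR(5)),
  ECD  (CHAR(2)),
  ESAL (CHAR(5)),
  EOL  (CHAR(1))                   /* newline byte in the unformatted record */
FILE = EMP_FLAT.TXT;
BEGIN LOADING EMP ERRORFILES EREMP1, EREMP2 CHECKPOINT 5;
INSERT INTO EMP (EMPID, EMPNAME, EMPCODE, EMPSAL)
VALUES (:EID, :ENM, :ECD, :ESAL); /* :names must match the DEFINEs exactly */
END LOADING;
.LOGOFF;
```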
Compression-issue - response (8) by amisaxen
Hi Dieter,
We have proposed MVC to our client on a T12 setup and have conveyed that, technically, compression will not impact the data either way. A developer has raised a good concern: some of these columns are derived from bases that can change, and if any of these bases change, the profile of the data in the tables will change, which means that a totally new set of 'ideal' compression values would apply.
He has also asked:
How often would the compression values be reviewed?
In my understanding, if the values of derived columns are more volatile, then we should not suggest applying compression. Can you share your input on this, if any?
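A periodic review could start from a simple frequency query per candidate column, re-run after the base data changes and compared against the current compress list (a sketch; database, table, and column names are hypothetical):

```sql
-- The most frequent values compress best with MVC; up to 255 values
-- may be listed per column, hence TOP 255
SELECT TOP 255 derived_col, COUNT(*) AS freq
FROM mydb.mytable
GROUP BY derived_col
ORDER BY freq DESC;
```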
Regards,
Amit
Try running
tbuild -V
on the command line.
If this does not give you any information, the installation is at least incomplete...