Hi Experts,
Could you please give me some details on setting up data filters for selective data replication in SLT version 7.3?
We cannot see the expected tabs in transaction IUUC_REPL_CONTENT.
Any help on this is much appreciated.
Regards
Devi
Experts
We have started seeing performance issues while replicating a table from our ECC system to HANA using SLT. The logging table shows 28 million records, but when I check the table in HANA, it is loading only about 5,000 records every few seconds. I checked the latency for this table in LTR and the maximum latency is "3 days", so the logging table contains records from the past three days.
I updated the statistics on the ECC system, but it did not help. So how do I find where the problem is? Can we increase the packet size? How do we check the performance of this load? Also, is there a place where I can check how many records have been loaded into HANA? I ran the table health check from the Expert Functions and it seems to be fine.
I have a total of 21 work processes, of which 10 are defined for data loads.
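For reference, this is roughly how I check the current record count on the HANA side today; the schema and table names below are placeholders for our actual ones:
    -- Count the rows currently present in the HANA target table
    -- (schema and table names are placeholders).
    SELECT COUNT(*) FROM "MY_SLT_SCHEMA"."MY_TABLE";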
Mahesh Shetty
Hello,
I have a query about data replication from a non-SAP source (DB2) to an SAP system (DB6); my doubt concerns the automatic replication of changes in this scenario. Are the trigger technology and the logging-table concept still applicable in this case?
Thanks
Anita
Hi
We want to use SQL Server as a non-ABAP source for SLT replication.
We are following this note
1774329 - Preparing your SAP instance to connect to remote SQL server
To create a connection from the SAP System to a SQL Server, you will need two database system-specific software components. These are the Database Shared Library (DBSL) from SAP and the Microsoft SQL Server SNAC client library. You can download the SAP DBSL from SAP Service Marketplace. The SNAC client can be downloaded from http://download.microsoft.com
We looked for the DBSL component but can only find it for SLT on Windows and on Linux - not on AIX.
Thanks
Diane
Hi,
I have created an SLT configuration to transfer data from an ERP system to HANA. The initial replication of the metadata tables (DD02L, DD02T and DD08L) has completed, but I am not able to access the "Data Preview" option for these tables. I receive the following error message whenever I select the Data Preview option.
I have also tried to grant schema SELECT rights to the _SYS_REPO user, but it failed.
Can anyone please provide guidance on how to rectify this issue?
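For reference, the grant I attempted looked roughly like this, run as the owner of the SLT target schema (the schema name is a placeholder for our actual one):
    -- Grant read access on the replication target schema to the repository user
    -- (schema name is a placeholder).
    GRANT SELECT ON SCHEMA "MY_SLT_SCHEMA" TO _SYS_REPO;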
Hello,
We replicate ECC tables containing fields whose technical names have more than 16 characters. Example: VBAP with FISCAL_INCENTIVE_ID "Incentive ID".
However, the DDIC in SLT accepts only 16 characters. The result is:
- duplicate fields in SLT, as FISCAL_INCENTIVE_ID becomes FISCAL_INCENTIVE, and FISCAL_INCENTIVE already exists with the description "Tax Incentive Type"
- errors when activating structures and tables in the DDIC and the ABAP Workbench objects (function groups); we then get dumps in ST22.
We use DMIS 2011_1_700 0004 SAPK-11304INDMIS.
We also have a sandbox using DMIS 2011_1_700 0007 SAPK-11307INDMIS, with the same behavior.
Is anyone experiencing the same problem?
Regards,
Laurent
Hello,
Is there a way with SLT to reload a table without dropping and creating it again?
Thanks,
Amir
Hello Experts,
I am facing an issue where a huge number of unprocessed logging-table records was found in the SLT system for one table. I have checked all settings and error logs but have not found any evidence of what is causing the unprocessed records. The HANA system also shows the table in replicated status. Could you please suggest something other than replicating the same table again, as that option is not possible at the moment?
Hi,
We have just updated our DMIS from SP4 to SP7 in order to replicate a view, as this is supported only from SP6 onwards. We are confused about how view replication works. In LTRS, we created the filter settings for the view. We then tried to add the view in LTRC, but it gave the error "Table class VIEW of table /POSDW/V_AGGRTI is not supported for data provisioning" with the options 'Start Load' or 'Start Replication'. The underlying tables of the view already exist in the target schema in HANA. I then tried the option 'Create Table or DB View'. It created the view in the HANA database, and the view then disappeared from the LTRC table overview. The view structure in HANA does not follow the filter settings; it is the same as in the source system.
My questions are:
1) Is 'Create Table or DB View' the only option for adding a view? Will the other options like Initial Load or Replication not work for it?
2) Why is the view not created according to the filter settings maintained in LTRS?
3) Why is the view not displayed in the Table Overview? Is this correct behaviour?
4) What difference does it make if we create the view directly in the HANA database instead of using SLT?
We have a table (VBRP) that is currently replicated to HANA from ECC through SLT. It currently has more than 2 billion entries in HANA. Now there is a business requirement to add one more field to the table in HANA.
If the table were small, we could simply update the filter settings in SLT and then stop/start replication, which would recreate the table in HANA with the additional field and perform a full data initialization.
But for such a big table, doing the data initialization again would take a long time. Is there a way to do it without a full data initialization?
Hi,
Is it possible to change the replication option in an SLT configuration, for example from 'Real Time' to 'Schedule by Interval' or vice versa?
Hi,
If we have a real-time replication scenario, is it mandatory to always keep the DD* tables in real-time replication, or can we stop them after the initial load?
Also, what are the negative impacts if we do not replicate the DD* tables after the initial load?
Hello team
My DMIS version is as below, so please check my points/questions accordingly and help me understand them.
Please check and help me understand the following points:
1. I have created a configuration and loaded the KNA1 table from the ECC system to the HANA system. It loaded successfully. I checked SM37 in the SLT server for job details, but I cannot find any job created for this run; in parallel I checked HANA Studio --> Data Provisioning --> Jobs, and there I also cannot find any job created for this load of KNA1 from ECC to HANA.
So I want to know where I can see the background jobs created to load KNA1 data into HANA, because I want to measure the time taken for loading KNA1 into HANA. Please help me get the job details - do I need to check SM37 in the ECC system? Also, please tell me where I can get complete statistics on the time taken to load KNA1 data into HANA from the ECC system.
2. I want to replicate the BSEG and BKPF tables from ECC to HANA, so please check and advise on the points below.
a) Do I need to make any additional settings to replicate BSEG and BKPF, given that these are cluster tables and contain a huge amount of data?
b) I have seen a blog from Tobias Koebler on "How To filter on the initial load & parallelize replication DMIS 2011 SP06 or higher"; please advise whether I need to follow these steps for replicating BSEG and BKPF.
c) I want to put a filter on BKPF and extract data from BKPF and BSEG based on that filter. Normally in ABAP we join these two tables using an inner join and put the filters in the WHERE condition (roughly as sketched below). Now I want to know how I can join BSEG and BKPF while replicating, because I do not want to extract all the data: based on the filter on one table, say BKPF, I want to extract only the matching BSEG data, or vice versa. Please advise how to apply such an inner join for these two tables in SLT replication.
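The kind of join and filter I mean, written here as plain SQL purely for illustration (the company code and fiscal year values are placeholders, not our real selection):
    -- Illustrative only: select BSEG items belonging to BKPF headers that match the filter.
    -- '1000' and '2015' are placeholder filter values.
    SELECT h.bukrs, h.belnr, h.gjahr, i.buzei
      FROM bkpf AS h
      INNER JOIN bseg AS i
        ON  i.bukrs = h.bukrs
        AND i.belnr = h.belnr
        AND i.gjahr = h.gjahr
     WHERE h.bukrs = '1000'
       AND h.gjahr = '2015';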
Additional inputs are welcome.
Thanks and Regards
Raj
Hi experts,
there is a problem with some tables during the initial load. In some cases the initial load for a table hangs at 75%.
Here are some facts:
- some of the affected tables: MARA (approx. 3.5 million records), MARC (approx. 220 million records), S912 (> 60 million records)
- number of tables in one initial load request: 43 tables
- number of tables for which the initial load finished successfully: 37
- average time for the initial load with the constellation above: approx. 6 hours
- abnormality in the loading statistics: the end time for the affected tables does not change any more, while there are still records in the logging tables and unfinished portions.
Settings of SLT / ERP system:
currently the following number of jobs is set in LTRC: 15 data transfer jobs, 15 initial load jobs, 6 calculation jobs
SLT system:
Oracle 11.2.0.3.0
Netweaver 7.4 SPS8, DMIS 2011_1-731 SPS 7
VMWare (6 VCPUs, 40 GB RAM)
Batch: 20 processes available
Dialog: 15 processes available
ERP system:
Oracle 11.2.0.3.0
Netweaver 7.4 SPS7, DMIS 2011_1-731 SPS 7
VMWare (4 VCPU, 32 GB RAM)
Batch: 20 processes available
Dialog: 30 processes available
Please assist me with these questions: what could be the reason, and what can I do to resolve and/or analyse the problem? As far as I know, restarting the SLT master job can help in some cases, but that is not a permanent solution.
Please let me know if you need more information.
Regards Thorsten Füg
Hello all,
I upgraded the DMIS of an intended SLT replication server to DMIS 2011 SP08 and created a new configuration with an ECC source system - successfully. However, it only submitted the IUC_REP_MSTR job; it did not submit any of the other jobs:
- IUC_REP_CNTR_NNN (IUUC_REPL_MASTER_CONTROLLER)
- IUC_DEF_COBJ_nnn (IUUC_CREATE_COBJS_IF_TBL_ENTRY)
- IUC_CALC_ACP_nnn (DMC_MT_PREC_ACP_CALCULATION)
- IUC_LOAD_MT_nnn (DMC_MT_STARTER_BATCH)
So DD02L, DD02T and DD08L remain in 'scheduled' status. I tried to trigger them manually from LTRC, but that does not do anything.
The other SLT replication server (still on SP07), which points to the same HANA DB, is working fine.
Any idea/hint on where to look? Thanks in advance.
Regards,
Terry
Hello,
We would like to alter the data in one of the tables we are replicating.
The table is too big to alter directly, so we are thinking of doing the following:
1. Suspend the replication.
2. Copy the table and its data with Data Services to a new table and make the changes.
3. Delete the data in the replicated table.
4. Copy the data back into the replicated table.
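In rough HANA SQL terms, steps 3 and 4 would look something like this (schema and table names are placeholders):
    -- Step 3: clear the replicated target table (placeholder names).
    DELETE FROM "MY_SLT_SCHEMA"."MY_TABLE";
    -- Step 4: copy the adjusted data back from the working copy.
    INSERT INTO "MY_SLT_SCHEMA"."MY_TABLE"
      SELECT * FROM "MY_SLT_SCHEMA"."MY_TABLE_COPY";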
Can someone advise what the influence on SLT will be?
Thanks,
Amir
From the SLT Operations Guide, section 5.2 I can see the following statement
5.2 Archiving Data in Source Systems
The trigger-based replication also considers the deletion in source tables by archive activities (since it is not possible to distinguish on the database level between delete actions caused by archiving versus regular deletion of data records). As a consequence, SAP LT Replication Server will also replicate archiving activities as delete actions in the SAP HANA database.
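To illustrate the point conceptually: the database trigger that feeds the logging table fires for any DELETE on the source table, regardless of whether it came from an archiving run or a normal application delete. A heavily simplified sketch follows (this is not the trigger SLT actually generates, and the table/column names are placeholders):
    -- Simplified, hypothetical delete trigger on a source table.
    -- Archiving deletes and regular deletes both fire it identically,
    -- so the logging table cannot tell them apart.
    CREATE TRIGGER "Z_MYTAB_DEL_LOG"
      AFTER DELETE ON "MYTAB"
      REFERENCING OLD ROW old_rec
      FOR EACH ROW
    BEGIN
      INSERT INTO "Z_LOGTAB_MYTAB" (mandt, keyfield, operation)
      VALUES (:old_rec.mandt, :old_rec.keyfield, 'D');
    END;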
In a typical standalone/sidecar implementation of SAP HANA, I would assume that in most cases this is not favorable behavior or a desired function. In typical DW/DataMart implementations, the data should be persisted in the target even after the source system data has been archived. I can refer back to how BW operates in this case: any new/changed data is extracted to BW, but archiving operations do not affect the already extracted data in the target system.
I know there is functionality available to load archived data into HANA, but that seems like a troublesome way to 'put the pieces back together' to get a holistic picture of all the historical data (online data + archived objects), and it would present some interesting challenges in the target (HANA).
Is there any way to disable the functionality that replicates deletions caused by archiving? Is there anyone with experience navigating this hurdle in a standalone/sidecar scenario who can shed some light on how they handled it?
Thanks,
Justin
Hi Team,
I have maintained source table entries (IUUC_PERF_OPTION) in SLT DEV, and I would like to promote these table entries from DEV to the QA landscape. Our mass transfer IDs are different in SLT DEV and QA. Could you please suggest an approach for promoting the entries from DEV to QA?
Thanks in advance for your help!
Regards,
Sathish