The Power of the Information Design Tool

Imagine this scenario: you have customer delivery data in your SAP BW system, while quality control data lives in nine separate location-specific Oracle databases. You need to pull the quality data for products delivered to a customer over the last several months, and you will want ongoing monitoring after that.

This would normally turn into a huge manual effort: a business analyst would pull the data from both systems and merge it together for the historical view, then repeat the exercise on an ongoing basis. Meanwhile, IT would put together a project to extract the data from one system into the other, or from both into a separate system, as an automated procedure. That type of project normally takes weeks, if not months, to complete once the funding has been secured and the project has been added to the release schedule. All things considered, it could easily take six months to a year. I just built an automated solution for this problem in two days using SAP BusinessObjects BI4 and the Information Design Tool Common Semantic Layer.

The best part is that we now have a semantic layer that lets business users build future reports as needs arise.

I created a new data foundation using a relational connection to the BW cube, allowing it to automatically create the tables and joins. The result looks messy, but if you let the business layer automatically create classes and objects, it does a pretty good job of giving you a starting point (much better than what the legacy Universe Design Tool builds from a BEx Query).

The quality data was already in an existing universe, which used nine Oracle connections to the different databases, all with the same table structures. To make reporting across systems possible, I used derived tables to union the nine systems together. All I had to do was copy and paste the derived tables into the new data foundation, then copy the classes and objects out of the existing business layer into my new universe. Lastly, I made a few minor alterations to the business layer and did some cleanup to get a new universe that joined with the cube.
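As a rough sketch of that derived-table pattern, the snippet below unions identically structured tables from several databases through a single connection. It uses SQLite instead of the actual nine Oracle instances, and the table and column names (quality_results, product, defect_count) are invented for illustration; it is only meant to show the shape the UNION ALL takes.

```python
import os
import sqlite3
import tempfile

# Three databases stand in for the nine location-specific instances.
db_dir = tempfile.mkdtemp()
paths = [os.path.join(db_dir, f"site{i}.db") for i in range(3)]

for i, path in enumerate(paths):
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE quality_results (product TEXT, defect_count INTEGER)")
    con.execute("INSERT INTO quality_results VALUES (?, ?)", (f"P{i}", i * 10))
    con.commit()
    con.close()

# One connection attaches the other databases, and the "derived table" is a
# UNION ALL over the identically structured tables -- roughly the shape the
# derived tables described above take across the separate systems.
con = sqlite3.connect(paths[0])
for i, path in enumerate(paths[1:], start=1):
    con.execute(f"ATTACH DATABASE ? AS site{i}", (path,))

derived_table = " UNION ALL ".join(
    ["SELECT product, defect_count FROM quality_results"]
    + [
        f"SELECT product, defect_count FROM site{i}.quality_results"
        for i in range(1, len(paths))
    ]
)
rows = sorted(con.execute(derived_table).fetchall())
print(rows)  # [('P0', 0), ('P1', 10), ('P2', 20)]
```

Because every source shares the same structure, reporting objects built on the unioned result never need to know how many underlying databases exist.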

Once all the universe work was done, the report development was straightforward, pulling data from both database sources with no one the wiser. Surprisingly, performance is also pretty good: crossing all nine Oracle connections (against live, operational databases) and the SAP BW source, the report refreshes in about two minutes when pulling a month's worth of data on demand.

2 thoughts on "The Power of the Information Design Tool"

  1. Hi,

     I'm not sure about performance in real-time scenarios, though, since the full data set comes from both Oracle and BW and is then joined at the data federation layer to show the relevant records in the report. How mature is this approach? Can you shed some light on it?

     Vamsi

  2. Vamsi, you make a valid point about the performance concerns. When we first started, I saw it doing whole-table scans on the BW side, pulling back millions of rows. Through trial and error we got it to apply restrictions on the BW side when it creates the query, which improved performance dramatically. Right now it still does a table scan on some lookup data, but it applies a date range to the cube itself. We have both current and historical data from Oracle, so the data federation layer runs 16 Oracle queries returning a combined total of around 500 thousand rows, and four BW queries combining for about 50 thousand rows. Once the data federation is done, the final result set is about 200 rows for a month of data, with roughly a five-minute run time. Given more time we would like to cut this down further, but the business considers the performance acceptable and it does not appear to be taxing our system resources much, so unless something changes, improving it further is low on our priorities list.
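The difference the reply describes, between scanning a whole table and getting the restriction applied in the generated query, can be sketched as follows. This uses SQLite and invented names (deliveries, delivered_on) purely as a stand-in for the BW side; it only illustrates why pushing the date range into the source query matters.

```python
import sqlite3

# A small table of dated rows stands in for the cube data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE deliveries (product TEXT, delivered_on TEXT)")
con.executemany(
    "INSERT INTO deliveries VALUES (?, ?)",
    [(f"P{i}", f"2013-{1 + i % 12:02d}-15") for i in range(1000)],
)

# Full scan: every row crosses to the client, and filtering happens afterward.
all_rows = con.execute("SELECT product, delivered_on FROM deliveries").fetchall()
month = [r for r in all_rows if "2013-03-01" <= r[1] <= "2013-03-31"]

# Pushed-down restriction: the source returns only the month being reported,
# which is the behavior that cut the table scans down dramatically.
pushed = con.execute(
    "SELECT product, delivered_on FROM deliveries "
    "WHERE delivered_on BETWEEN ? AND ?",
    ("2013-03-01", "2013-03-31"),
).fetchall()

assert sorted(month) == sorted(pushed)
print(len(all_rows), len(pushed))
```

The two result sets are identical; the only difference is how many rows travel across the federation layer before the filter is applied.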
