Cube Performance

Posted on 2014-3-17 02:12:49
I have a Summary Cube which reads data from Cube A, Cube B and Cube C via rules.

Each of these cubes has 4 scenarios (Summary, A, B, C).

All 4 scenarios belong to the "Current Version" on the Summary Cube. On an ongoing basis I archive the current version to become Version 1, Version 2, Version 3. Each archive includes all 4 scenarios.

The default view of the Summary cube is Scenario A of the Current Version.

As the Summary Cube is taking more than 2 minutes to load the current version, I thought it might be because there is too much data in the cube due to the archive process. So I removed the 3 scenarios that may not be necessary and left only one scenario.

After removing the 3 scenarios in the archived versions, I tried loading Scenario A of the Current Version again, and the loading time doesn't seem to have improved much.

1) I would like someone to verify my conclusion that, regardless of how much data there is in the Summary cube, the processing time to read data from Cube A, B and C will still be the same?

2) If the default view loaded in the Summary Cube is Scenario A of year 2012, and I only write rules to read Scenario A of year 2012 (a sketch of such a rule follows at the end of this post), will this reduce the loading time?

(I would say yes for question 2; for question 1, I tested it and the answer seems to be yes.)

Let me know your feedback.
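
To make question 2 concrete, here is a minimal sketch of a rule on the Summary cube scoped to a single scenario and year. All cube, dimension, and element names here are made up, and the DB() arguments assume a particular dimension order, so treat it only as an illustration of the idea:

    # Summary cube rule: only cells for Scenario A of 2012 in the Current Version
    # are rule-derived, so opening other slices does not trigger the cross-cube lookup
    SKIPCHECK;

    ['Current Version', 'Scenario A', '2012'] = N:
        DB('Cube A', !Customer, '2012', 'Scenario A', !Measure);

    FEEDERS;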

Posted on 2014-3-17 03:14:10
I'm not sure I fully understand your issue, but from the looks of it I'd say take a look at the following things:

- Are your rules fetching all the data from the other cubes or just the data you need?
- Are you using feeders / are you feeding the rules correctly? (a rough sketch follows after this list)
- Is your view simply too big for your server/machine to handle in a timely manner?
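
On the feeder point, here is a rough sketch of the usual pattern for an inter-cube rule, using hypothetical cube and dimension names. The DB() pull sits in the Summary cube's rule, while the feeder sits in the source cube (Cube A) and points back at the Summary cube, so the rule-derived cells consolidate correctly without feeding everything:

    # Summary cube rule file
    SKIPCHECK;

    ['Current Version', 'Scenario A'] = N:
        DB('Cube A', !Customer, !Year, 'Scenario A', !Measure);

    FEEDERS;

    # Cube A rule file: feed the Summary cells that the rule above populates
    FEEDERS;

    ['Scenario A'] =>
        DB('Summary', !Customer, !Year, 'Current Version', 'Scenario A', !Measure);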

Posted on 2014-3-17 03:17:14
First off, you need to stop using the term "loading", since you aren't actually loading anything; you are referencing the data into the summary cube via a rule. Using the term "loading" is just confusing to us, because loading is what you do to real data via a TI process. What you mean to ask is how long it will take my view to "open" or "calculate", based on the number of cells I am referencing in my summary cube. The answer is: it depends.

The first factor that could affect the timing of data retrievals is the actual data in the detail cubes. Is it raw data, or are they rule-calculated values? Rule-calculated values referenced into a summary cube from a detail cube have to be calculated in the detail cube first, so they will be slower to retrieve than pulling in raw data.

The second factor is whether the referenced values are leaf elements in the detail cube, or consolidations. If you are moving what are consolidated elements in the detail cube into leaf-level elements in the summary cube, then those consolidations have to be evaluated first before being moved. Once again, this is going to be slower than moving in leaf-level data from the detail cube.

The next factor is how many cells are in the view you are opening in the summary cube. If you are trying to open a small view in the summary cube, relatively low in the dimension hierarchies, it is going to be faster than opening a view that references points higher in the hierarchies.

The bottom line is that using inter-cube rule references is always going to be fraught with performance issues. I'm not saying it shouldn't be used, as it is a very powerful feature; just be cognizant that it has the potential to really slow things down.

P.S. The size of the summary cube itself is not the major factor in how long it takes to open a view on it. The major factor is how big the view itself is. Having four different versions in the summary cube is not going to slow down the opening of a view on that cube, unless of course you are asking for values from all four versions at once. If your view is only referencing one version, having the other three in the cube is not going to make much of a difference.
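
To make the leaf-versus-consolidation point concrete, here is a hedged sketch (cube, dimension, and element names are invented). The first rule hands across stored leaf values; the second forces Cube A to roll up a consolidation before each Summary cell can be returned, which is the slower pattern described above:

    # Leaf-to-leaf: Cube A only hands over stored N-level values
    # (assumes the Summary cube shares a Customer dimension with Cube A)
    ['Sales'] = N: DB('Cube A', !Customer, !Month, 'Sales');

    # Leaf-from-consolidation: 'All Customers' is a consolidated element in Cube A,
    # so it has to be rolled up there before this Summary cell can be returned
    ['Total Sales'] = N: DB('Cube A', 'All Customers', !Month, 'Sales');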

Posted on 2014-3-17 03:44:12
Thanks for the info.

To complicate things further, the data in Cubes A, B and C comes from the Actuals cube. If necessary, the user will overwrite the actuals to reflect a more accurate forecast in Cubes A, B and C.

So it is more like the Actuals cube holds data at a lower level (Order Level). That data (at Customer Level) is fed into Cubes A, B and C for forecasting purposes. The user makes the necessary adjustments in Cubes A, B and C. Once the adjustments are done, all the data is fed into the Summary Cube (a rough sketch of this chain follows at the end of this post).

Have you seen a model like this before? You are right that this would be why it takes so long to open the view, as the view is referencing data that comes from another cube.

1) I would like someone to verify my conclusion that, regardless of how much data there is in the Summary cube, the processing time to read data from Cube A, B and C will still be the same?

For the above question, does the amount of raw data in the Summary cube affect the amount of time it takes to open the view in the Summary Cube?
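
For the chain described above, here is a rough sketch of what the two hops could look like if both were rule-based rather than loaded via TI (the post doesn't say which; the Actuals-to-Cube A hop may well be a TI load). All cube, dimension, and element names are made up; the only point is that opening a Summary view would then trigger calculation in Cube A, which in turn reads the Actuals cube:

    # Cube A rule file (hypothetical): forecast defaults to the actuals value
    ['Forecast'] = N: DB('Actuals', !Customer, !Month, 'Amount');

    # Summary cube rule file: pulls the (possibly rule-derived) Cube A values,
    # so a single Summary cell can cost two cross-cube lookups
    ['Current Version', 'Scenario A'] = N:
        DB('Cube A', !Customer, !Month, 'Forecast');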

Posted on 2014-3-17 05:40:53
winsonlee wrote: For the above question, does the amount of raw data in the Summary cube affect the amount of time it takes to open the view in the Summary Cube?

TM1 is just a computer program like any other. The amount of time it takes to do something is completely dependent on how many calculations the CPU has to execute to process your request. The amount of data in any of your cubes, by itself, is not the determining factor of how fast TM1 will perform. The major determining factor is how much you are asking TM1 to do to return a number.

Keep in mind that TM1 has a sparse consolidation algorithm built in (and that you configured the rules, if any, correctly with SKIPCHECK). Superfluous data, meaning parts of the cube you are not querying, will not have an effect on the query time (this isn't entirely true, but for the most part it is), because TM1 can just ignore the data it doesn't need to perform the calculation. As long as you buy into the concept of the sparse consolidation algorithm, everything else is just plain common sense. You don't need to be a TM1 expert to figure this out.