企业绩效管理网

Views: 575 | Replies: 7

Asynchronous processing


Posted on 2014-3-16 04:27:05
Hello,
I have read with interest that it is possible to mimic asynchronous processing in TM1 by executing a cmd file that in turn calls a TM1 process. This afternoon I have been trying this and found that I can get asynchronous activity as long as I don't write to the cube. Am I getting something wrong, or is this what people would expect?

My Process Code:-
--------------------------------------------------
Prolog
--------------------------------------------------
NumericGlobalVariable('nDebug');
nDebug = 0;
StringGlobalVariable('sCubeName');
sCubeName = 'jiSales1';
StringGlobalVariable('sViewName');
sViewName = 'sys-jiSales1-' | psNamePrefix;
StringGlobalVariable('sDimName');
sDimName = 'Customer';
StringGlobalVariable('sSubsetName');
sSubsetName = 'sys-Customer-' | psNamePrefix;

# Destroy the view first, so the subset it references can then be destroyed
if(ViewExists(sCubeName, sViewName) = 1);
ViewDestroy(sCubeName, sViewName);
endif;

if(SubsetExists(sDimName, sSubsetName) = 1);
SubsetDestroy(sDimName, sSubsetName);
endif;
SubsetCreateByMDX(sSubsetName, '{TM1FILTERBYPATTERN( {TM1FILTERBYLEVEL( {TM1SUBSETALL( [Customer] )}, 0)}, "' | psNamePrefix | '*")}');

ViewCreate(sCubeName, sViewName);
ViewSubsetAssign(sCubeName, sViewName, sDimName, sSubsetName);
ViewExtractSkipZeroesSet(sCubeName, sViewName, 0);
ViewExtractSkipRuleValuesSet(sCubeName, sViewName, 0);
ViewExtractSkipCalcsSet(sCubeName, sViewName, 0);

DatasourceCubeview = sViewName;

--------------------------------------------------
Data
--------------------------------------------------
sFileName = 'ji-' | psNamePrefix | '.txt';
ASCIIOutput(sFileName, TimSt(Now, '\h:\i:\s'));

nValue = CellGetN(sCubeName, vsScenario, vsCustomer, vsMeasure) + 1;
CellPutN(nValue, sCubeName, vsScenario, vsCustomer, vsMeasure);
ASCIIOutput(sFileName, vsCustomer, vsScenario, vsMeasure, NumberToStringEx(nValue, '0.00', '.', ','));
ASCIIOutput(sFileName, TimSt(Now, '\h:\i:\s'));

nIndex = 0;
while(nIndex < 1000000);
nIndex = nIndex + 1;
end;

--------------------------------------------------
CMD File
--------------------------------------------------
cd "C:\Program Files\Cognos\TM1\bin"
tm1runti.exe /adminhost tbs0660 /server ji-tm1dev-01 /user johni /pwd piffle101 /process jiMulti-Process psNamePrefix=%1

--------------------------------------------------
Process to call the CMD file
--------------------------------------------------
ExecuteCommand('Documents\bsmi\MDX Samples\Tm1RunTi.bat a', 0);
ExecuteCommand('Documents\bsmi\MDX Samples\Tm1RunTi.bat d', 0);

Posted on 2014-3-16 05:45:48
In 9.5.2 you should have no problem doing simultaneous writes to the same cube, provided you have parallel interaction switched on.

In earlier 9.5 and 9.4, pre parallel interaction, it should also be doable provided the different TIs are writing to different cubes or are using batch update mode.
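For reference, batch update mode in a TI process is a sketch like the following. BatchUpdateStart and BatchUpdateFinish are the standard TI functions; the cube, elements and values here are invented for illustration:

```
# Sketch only - cube and element names are hypothetical
BatchUpdateStart;
# CellPutN calls now accumulate in a private store instead of locking the cube
CellPutN(42, 'jiSales1', 'Actual', 'Customer 001', 'Amount');
# Commit the accumulated updates at the end of the process
BatchUpdateFinish(0);
```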

Posted on 2014-3-16 05:55:05
Hello,
Thanks for your reply. I do have parallel interaction switched on: ParallelInteraction=T.

Here are the outputs when the write operation is included which show synchronous activity:-
[Attachment: Output1.PNG (82.42 KiB), viewed 544 times]


And here are the outputs when the write operation is NOT included which show asynchronous activity:-
[Attachment: Output2.PNG (54.81 KiB), viewed 544 times]

Posted on 2014-3-16 06:59:55
Try replacing your ViewCreate and SubsetCreateByMDX with references to objects that already exist - so you can also get rid of the ViewDestroy and SubsetDestroy. These objects will need to be different for each parameter that you pass in, so keep the concatenation.

Even with PI you can still get locks on objects by creating metadata associated with them.

PI isn't quite the panacea that IBM would like you to think it is.
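A minimal sketch of that suggestion, assuming one pre-built view/subset pair per prefix, named to match the post's convention (everything here is hypothetical):

```
# Sketch: reference objects that already exist, one set per prefix value
sViewName = 'sys-jiSales1-' | psNamePrefix;    # e.g. a pre-built 'sys-jiSales1-a'
sSubsetName = 'sys-Customer-' | psNamePrefix;  # e.g. a pre-built 'sys-Customer-a'
# No SubsetCreateByMDX / ViewCreate here - the objects were built up front,
# so the Prolog takes no metadata locks
DatasourceCubeview = sViewName;
```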

Posted on 2014-3-16 07:00:34
Thanks for the tip Andy,

It seems that I can leave the ViewDestroy in the Epilog but must, as you suggest, reference objects that already exist. To do this it seems I can move the Prolog ViewCreate code into a new process which I call with ExecuteProcess. That all means it's a bit of a mess, looking something like this:

1) TM1 process to shell BAT file
2) BAT file launches TM1 process (P1) with the prefix parameter
3) P1 Executes another TM1 process (P2) with the prefix parameter
4) P2 Destroys views/subsets if they exist then creates views/subsets
5) P1 assigns DatasourceCubeview then performs data operations
6) P1 destroys views/subsets created by P2
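Steps 3 and 5 above might look roughly like this in P1's Prolog. The process name 'jiCreateViews' stands in for P2's real name, and the view name is assumed from the earlier code:

```
# P1 Prolog - sketch only; 'jiCreateViews' is a hypothetical name for P2
ExecuteProcess('jiCreateViews', 'psNamePrefix', psNamePrefix);
# P2 has now (re)built the view/subset, so P1 can use it as a data source
DatasourceCubeview = 'sys-jiSales1-' | psNamePrefix;
```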

What a faff! I now get asynchronous results as shown below:
[Attachment: Output1.PNG (67.21 KiB), viewed 482 times]


I know the example I have used is completely unrealistic, but I have found it useful to determine how parallel interaction is supposed to work. Has anyone found that it speeds up large data write operations in the real world?

Posted on 2014-3-16 08:05:50
Hi AmbPin,

In looking at your post I have one suggestion regarding the way you are approaching your asynchronous processing. Now, I'm on 9.5.1 and this very well may no longer be an issue in 9.5.2, however I thought it was worth mentioning. Specifically, I'm referring to this section in your Data tab:

nIndex = 0;
while(nIndex < 1000000);
nIndex = nIndex + 1;
end;

Now, I'm assuming you are doing this so you don't fire off all of your processes at once and would like a buffer in between calls. I would suggest avoiding this approach. We initially did something very similar to this. However, we found that if users were performing certain operations during these loops we ran into issues. For example, if a user was exporting a small report to Excel, the export function would stay "stuck" in a "Commit" (if you were watching in Top) until all of these loops had completed running. The result of this is none of your asynchronous processes really running until all of the calls finished executing.

Instead, I would suggest creating a BAT file that creates a VBS script that "sleeps", and calling it via an EXECUTECOMMAND instead of constantly running loops within TI. I have one that takes a parameter for the number of seconds to "sleep" if you're interested.
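A minimal sketch of that idea, assuming a small helper script sits next to the model (the file name, path and delay are all invented):

```
# TI side - sketch only; sleep.vbs is a hypothetical helper containing:
#   WScript.Sleep WScript.Arguments(0) * 1000
# The final 1 makes ExecuteCommand wait until the script finishes,
# so the TI process pauses without burning CPU in a While loop
ExecuteCommand('cscript //nologo sleep.vbs 5', 1);
```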

thanks
brad

Posted on 2014-3-16 08:36:23
Hello Brad,

Thank you for your reply.  I know the while loop is bad and would never do that for real; it is simply there because my example had so little data and I wanted to make the process last more than a second.

Have you used parallel processing?  I am really keen to know whether anyone has actually seen a performance increase specifically during data write operations.

Posted on 2014-3-16 09:02:43
AmbPin wrote:Hello Brad,

Thank you for your reply.  I know the while loop is bad and would never do that for real; it is simply there because my example had so little data and I wanted to make the process last more than a second.

Have you used parallel processing?  I am really keen to know whether anyone has actually seen a performance increase specifically during data write operations.
I assume by parallel processing you mean loading to cubes (or the same cube) in parallel with TI?  For user-driven manual data updates there's nothing special that you need to do with parallel interaction.  For the main high-concurrency user input application that my team manages this has been great for us, and PI has significantly improved per-user performance by more or less eliminating locks and wait queues.

For TI-driven loads, yes, there is more to it to allow simultaneous loads, as you need to be careful to ensure there is no locking from metadata actions like the creation of subsets and views.  But if the TIs are loading from external source(s) such as a flat file, then there is no special setup, special care or watchouts; it should "just work".  If you also need to manage clearing out sections of cubes prior to the load, in my experience it is often easier to do that in serial, then switch to parallel for the actual loading.  Does this lead to a performance increase?  If you have 1x 1GB flat file to load vs 10x 100MB files to load in parallel, then the latter is much faster end to end.  Not 10x faster, mind you, as the commit phase can't run in parallel and there seems to be some noticeable overhead from running multiple operations; but if you have a lot of transaction volume to process and need to stay within batch processing time windows then it is definitely worthwhile.
