Hi,
our business scenario seems to use far more database disk space than expected. Can anyone help me figure out what is causing this huge amount of data?
The throughput of our business scenario is a few hundred process instances per week. Each process instance may run for up to 12 weeks until completion and may start up to 9 subprocesses. We make heavy use of the intermediate timer event: it triggers a web service call, which in turn checks for certain conditions. If those conditions are not yet met, the process token returns to the timer event and waits again for a certain amount of time. This repeats until the conditions are finally met (which eventually always happens).
I found a description of how to calculate the required disk space based on the process instance context data. It requires the size of the XSD of the context data objects when they are filled with typical data. How can I determine this size? The XSD at design time is only 9 KB, whereas the example in the description assumes 300 KB while having even more context data elements than my process.
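In case it helps: my current idea for measuring this is to grab the XML payload of one typical instance of each data object (e.g. from a web service trace of a test run) and sum the file sizes. A minimal Java sketch of that idea; the file names are placeholders I made up:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ContextSizeEstimate {
    public static void main(String[] args) throws IOException {
        // XML payloads of each data object, filled with typical runtime
        // data (file names are made up; use whatever your trace produces).
        List<Path> samples = List.of(
                Path.of("orderContext.xml"),
                Path.of("statusCheckResponse.xml"));
        long total = 0;
        for (Path p : samples) {
            long bytes = Files.size(p);
            System.out.printf("%-30s %8.1f KB%n", p, bytes / 1024.0);
            total += bytes;
        }
        System.out.printf("Typical context per process instance: %.1f KB%n",
                total / 1024.0);
    }
}

Is that the size the description means, or does BPM store the context in some fatter representation?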
Which major elements in BPM cause a considerable increase in disk space consumption? I don't think it's only the process instance context data, since at least its history is stored as well. In our business scenario, due to the intensive use of the intermediate timer event, a single process instance can accumulate up to a few thousand history entries, as displayed in NWA 'Manage Processes'. Does this produce a lot of database data?
How can I calculate the potential disk space for my process instances?
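To make my assumptions explicit, here is my own back-of-envelope attempt in Java. Every number is a guess (marked in the comments), so please correct whatever is unrealistic:

public class DiskSpaceEstimate {
    public static void main(String[] args) {
        // All numbers below are assumptions for a rough estimate,
        // not measured values.
        int newInstancesPerWeek = 300;        // "a few hundred" per week
        int lifetimeWeeks = 12;               // max runtime until completion
        int historyEntriesPerInstance = 3000; // what NWA shows for long runners
        int bytesPerHistoryEntry = 500;       // pure guess, incl. index overhead

        // Instances that are live (started but not yet completed) at any time.
        long liveInstances = (long) newInstancesPerWeek * lifetimeWeeks;
        long historyBytes = liveInstances * historyEntriesPerInstance
                * (long) bytesPerHistoryEntry;

        System.out.printf("Live top-level instances: %d%n", liveInstances);
        System.out.printf("History rows alone: ~%.1f GB%n",
                historyBytes / (1024.0 * 1024 * 1024));
        // Subprocesses (up to 9 per instance) and versioned context data
        // would come on top of this.
    }
}

With these guesses I end up at roughly 5 GB for the history alone. Is that the right order of magnitude, and the right way to calculate it?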
And if the database allocates far more disk space than the process instances should need, what else might be consuming it?
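What I plan to do in the meantime is list the largest segments in our schema to see which tables actually hold the data. A sketch of that, assuming an Oracle database and the Oracle JDBC driver on the classpath (the SELECT works just as well in any SQL client); URL, user and password are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class TopSegments {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/NW1"; // placeholder
        try (Connection con = DriverManager.getConnection(url, "system", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT segment_name, ROUND(SUM(bytes) / 1048576) AS mb "
                   + "FROM dba_segments WHERE owner = ? "
                   + "GROUP BY segment_name ORDER BY mb DESC")) {
            // SAPSR3DB is the usual schema owner of an SAP Java stack;
            // adjust if your system uses a different one.
            ps.setString(1, "SAPSR3DB");
            try (ResultSet rs = ps.executeQuery()) {
                // Print only the top 20 space consumers.
                for (int i = 0; i < 20 && rs.next(); i++) {
                    System.out.printf("%-30s %10d MB%n",
                            rs.getString("segment_name"), rs.getLong("mb"));
                }
            }
        }
    }
}

If someone can tell me which BPM runtime tables I should expect at the top of that list, that alone would help me a lot.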
Thanks for any help.
Anja