Estimating Disk and Memory Requirements

This chapter helps you estimate the disk space and memory that your Essbase databases require.

Note: If you are migrating from an earlier version of Essbase, see the Essbase Installation Guide for additional information about estimating space requirements.

This chapter uses a worksheet approach to help you keep track of the many components that you calculate. If you are using the printed version of this book, you can photocopy the worksheets. Otherwise, you can simulate the worksheets on your own paper. Labels, such as DA and MA, help you keep track of the various calculated disk and memory component values.

Understanding How Essbase Stores Data

You need to understand the units of storage that Essbase uses in order to size a database. This discussion assumes that you are familiar with basic Essbase database concepts.

An Essbase database consists of many different components. In addition to an outline file and a data file, Essbase uses several types of files and memory structures to manage data storage, calculation, and retrieval operations.

Table 93 describes the major components that you must consider when you estimate the disk and memory requirements of a database. "Yes" means the type of storage indicated is relevant, "No" means the type of storage is not relevant.


Table 93: Storage Units Relevant to Calculation of Disk and Memory Requirements

Outline (Disk: Yes; Memory: Yes)
    A structure that defines all elements of a database. The number of members in an outline determines the size of the outline.

Data files (Disk: Yes; Memory: Yes)
    Files in which Essbase stores data values in data blocks. Data files are named essxxxxx.pag, where xxxxx is a number; Essbase increments the number, starting with ess00001.pag, on each disk volume. Memory is also affected because Essbase copies the files into memory.

Data blocks (Disk: Yes; Memory: Yes)
    Subdivisions of a data file. Each block is a multidimensional array that represents all cells of all dense dimensions relative to a particular intersection of sparse dimensions.

Index files (Disk: Yes; Memory: Yes)
    Files that Essbase uses to retrieve data blocks from data files. Index files are named essxxxxx.ind, where xxxxx is a number; Essbase increments the number, starting with ess00001.ind, on each disk volume.

Index pages (Disk: Yes; Memory: Yes)
    Subdivisions of an index file that contain index entries pointing to data blocks. The size of an index page is fixed at 8 KB.

Index cache (Disk: No; Memory: Yes)
    A buffer in memory that holds index pages. Essbase allocates memory to the index cache at startup of the database.

Data file cache (Disk: No; Memory: Yes)
    A buffer in memory that holds data files. When direct I/O is used, Essbase allocates memory to the data file cache during data load, calculation, and retrieval operations, as needed. The data file cache is not used with buffered I/O.

Data cache (Disk: No; Memory: Yes)
    A buffer in memory that holds data blocks. Essbase allocates memory to the data cache during data load, calculation, and retrieval operations, as needed.

Calculator cache (Disk: No; Memory: Yes)
    A buffer in memory that Essbase uses to create and track data blocks during calculation operations.



Determining Disk Space Requirements

Essbase uses disk space for its server software and for each database. Before estimating disk storage requirements for a database, you must know how many dimensions the database includes, the sparsity and density of the dimensions, the number of members in each dimension, and how many of the members are stored members.

To calculate the disk space required for a database:

  1. Calculate the Factors To Be Used in Sizing Disk Requirements.
  2. Use the worksheet in Estimating Disk Space Requirements for a Single Database to calculate the space required for each component of a single database. If your server contains more than one database, perform the calculations for each database.
  3. Use the worksheet in Estimating the Total Server Disk Space Requirement to calculate the final estimate for the server.

Note: The database sizing calculations in this chapter assume an ideal scenario with an optimum database design and unlimited disk space. The amount of space required is difficult to determine precisely because most multidimensional applications are sparse.

Factors To Be Used in Sizing Disk Requirements

Before estimating disk space requirements for a database, you must calculate the factors to be used in calculating the estimate. Later in the chapter you will use these values to calculate the components of a database. For each database, you will then add together the sizes of its components.

Table 94 lists the sections that provide instructions to calculate these factors. Go to the section indicated, perform the calculation, then write the calculated value in the Value column.

Table 94: Factors Affecting Disk Space Requirements of a Database

    Database Sizing Factor                 Label    Value
    Potential Number of Data Blocks        DA       ________
    Number of Existing Data Blocks         DB       ________
    Size of Expanded Data Block            DC       ________
    Size of Compressed Data Block          DD       ________



Potential Number of Data Blocks

The potential number of data blocks is the maximum number of data blocks possible in the database.

If the database is already loaded, you can see the potential number of blocks on the Statistics tab of the Database Information dialog box of Application Manager or on the Statistics tab of the Database Properties dialog box of Essbase Administration Services.

If the database is not already loaded, you must calculate the value.

To determine the potential number of data blocks, assume that data values exist for all combinations of stored members.

  1. Using Table 95 as a worksheet, list each sparse dimension and its number of stored members. If there are more than seven sparse dimensions, list the dimensions elsewhere and include all sparse dimensions in the calculation.
     Do not count members that are not stored members, such as members of attribute dimensions, Label Only members, shared members, and Dynamic Calc members (Dynamic Calc and Store members are stored members).
  2. Multiply the number of stored members of the first sparse dimension (line a) by the number of stored members of the second sparse dimension (line b) by the number of stored members of the third sparse dimension (line c), and so on. Write the resulting value in the cell labeled DA in Table 94.

     a * b * c * d * e * f * g (and so on) = potential number of blocks
    

Table 95: List of Sparse Dimensions with Numbers of Stored Members

    Sparse Dimension Name         Number of Stored Members
    a. ______________             ________
    b. ______________             ________
    c. ______________             ________
    d. ______________             ________
    e. ______________             ________
    f. ______________             ________
    g. ______________             ________



Example

The Sample Basic database contains two sparse dimensions with the following numbers of stored members:

    Product: 19 stored members
    Market: 25 stored members

Therefore, there are 19 * 25 = 475 potential data blocks.
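
If you prefer to script this worksheet step, the following minimal Python sketch performs the same multiplication. The dimension names and counts are the Sample Basic values from the example above and are illustrative only.

    import math

    # Stored-member counts of the sparse dimensions (worksheet lines a, b, ...)
    sparse_stored_members = {"Product": 19, "Market": 25}

    # Potential number of data blocks (worksheet value DA)
    potential_blocks = math.prod(sparse_stored_members.values())
    print(potential_blocks)  # 475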

Number of Existing Data Blocks

As compared with the potential number of blocks, the term existing blocks refers to those data blocks that Essbase actually creates. For Essbase to create a block, at least one value must exist for a combination of stored members from sparse dimensions. Because many combinations can be missing, the number of existing data blocks is usually much less than the potential number of data blocks.

If the database is already loaded, you can see the number of existing blocks on the Statistics tab of the Database Information dialog box of Essbase Application Manager or on the Statistics tab of the Database Properties dialog box of Essbase Administration Services. Write the value in the cell labeled DB in Table 94.

If the database is not already loaded, you must estimate a value.

To estimate the number of existing data blocks:

  1. Estimate a database density factor that represents the percentage of sparse-dimension stored-member combinations that have values.
  2. Multiply this percentage by the potential number of data blocks and write the resulting number of existing blocks in the cell labeled DB in Table 94.

     number of existing blocks = estimated density * potential number of blocks
    

Example

For example, assuming 100,000,000 potential data blocks and an estimated density of 15%, the number of existing blocks is 100,000,000 * .15 = 15,000,000, the value used in the disk-space examples later in this chapter.
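
The following Python sketch applies the same formula across several density estimates; the density values shown are hypothetical and only illustrate how strongly the density estimate drives the block count.

    potential_blocks = 100_000_000

    for density in (0.01, 0.05, 0.15):  # hypothetical density estimates
        existing_blocks = int(potential_blocks * density)  # worksheet value DB
        print(f"{density:.0%} density: {existing_blocks:,} existing blocks")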

Size of Expanded Data Block

The potential, expanded (uncompressed) size of each data block is based on the number of cells in a block and the number of bytes used for each cell. The number of cells in a block is based on the number of stored members in the dense dimensions. Essbase uses eight bytes to store each intersecting value in a block.

If the database is already loaded, you can see the size of an expanded data block on the Statistics tab of the Database Information dialog box of Application Manager or on the Statistics tab of the Database Properties dialog box of Essbase Administration Services.

If the database is not already loaded, you must estimate the value.

To determine the size of an expanded data block:

  1. Using Table 96 as a worksheet, enter each dense dimension and its number of stored members. If there are more than seven dense dimensions, list the dimensions elsewhere and include all dense dimensions in the calculation.
     Do not count members that are not stored members, such as Label Only members, shared members, and Dynamic Calc members (Dynamic Calc and Store members are stored members).
  2. Multiply the number of stored members of the first dense dimension (line a) by the number of stored members of the second dense dimension (line b) by the number of stored members of the third dense dimension (line c), and so on, to determine the total number of cells in a block.

     a * b * c * d * e * f * g (and so on) = total number of cells

  3. Multiply the resulting number of cells by 8 bytes to determine the expanded block size. Write the resulting value in the cell labeled DC in Table 94.

     (total number of cells) * 8 bytes per cell = expanded block size
    

Table 96: Determining the Size of a Data Block

    Dense Dimension Name          Number of Stored Members
    a. ______________             ________
    b. ______________             ________
    c. ______________             ________
    d. ______________             ________
    e. ______________             ________
    f. ______________             ________
    g. ______________             ________



Example

The Sample Basic database contains three dense dimensions with the following numbers of stored members:

    Year: 12 stored members
    Measures: 8 stored members
    Scenario: 2 stored members

Perform the following calculations to determine the potential size of a data block in Sample Basic:

12 * 8 * 2 = 192 data cells 
192 data cells * 8 bytes = 1,536 bytes (potential data block size) 

Size of Compressed Data Block

Compression affects the actual disk space used by a data file. The two types of compression, bitmap and run-length encoding (RLE), affect disk space differently. For information about data compression unrelated to estimating size requirements, see Data Compression.

If you are not using compression or if you have enabled RLE compression, skip this calculation and proceed to Compressed Data Files.

Note: Due to sparsity also existing in the block, actual (compressed) block density varies widely from block to block. The calculations in this discussion are only for estimation purposes.

To calculate an average compressed block size when bitmap compression is enabled:

  1. Determine an average block density value.
  2. To determine the compressed block size, perform the following calculation and write the resulting block size in the cell labeled DD in Table 94.

     expanded block size * block density = compressed block size
    

Example

Assume an expanded block size of 1,536 bytes and a block density of 25%:

1,536 bytes * .25 = 384 bytes (compressed block size) 
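
The following Python sketch strings the expanded-block and compressed-block calculations together, using the Sample Basic stored-member counts and the 25% block density assumed above.

    import math

    dense_stored_members = {"Year": 12, "Measures": 8, "Scenario": 2}  # Sample Basic

    cells = math.prod(dense_stored_members.values())             # 192 cells
    expanded_block_size = cells * 8                              # DC: 1,536 bytes (8 bytes per cell)
    block_density = 0.25                                         # estimated average block density
    compressed_block_size = expanded_block_size * block_density  # DD: 384 bytes
    print(cells, expanded_block_size, compressed_block_size)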

Estimating Disk Space Requirements for a Single Database

To estimate the disk-space requirement for a database, make a copy of Table 97 or use a separate sheet of paper as a worksheet for a single database. If multiple databases are on a server, repeat this process for each database. Write the name of the database on the worksheet.

Each row of this worksheet refers to a section that describes how to size that component. Perform each calculation and write the results in the appropriate cell in the Size column. The calculations use the factors that you wrote in Table 94.

Table 97: Worksheet for Estimating Disk Requirements for a Database

Database Name: ______________

    Database Component                                            Size
    Compressed Data Files                                         DE: ________
    Fixed-Size Overhead                                           DF: ________
    Index Files                                                   DG: ________
    Fragmentation Allowance                                       DH: ________
    Outline                                                       DI: ________
    Work Areas (sum of DE through DI)                             DJ: ________
    Linked Reporting Objects Considerations, if needed            DK: ________
    Total disk space required for the database
      (total the size values from DE through DK and
      write the result in Table 99)                               ________



After writing all the sizes in the Size column, add them together to determine the disk space requirement for the database. Add the database name and size to the list in Table 99. Table 99 is a worksheet for determining the disk space requirement for all databases on the server.

Repeat this exercise for each database on the server. After estimating disk space for all databases on the server, proceed to Estimating the Total Server Disk Space Requirement.

The following sections describe the calculations to use to estimate components that affect the disk-space requirements of a database.

Compressed Data Files

The calculation for the space required to store the compressed data files (essxxxxx.pag) uses two factors from Table 94: the size of the compressed data block (DD) and the number of existing data blocks (DB).

To estimate the space required for the compressed data files, multiply the compressed block size by the number of existing data blocks and write the result in the cell labeled DE in Table 97.

Note: If compression is not used, substitute the expanded block size for the compressed block size in this formula.

Example

384 (compressed block size) * 15,000,000 (number of data blocks)
= 5,760,000,000 bytes (size of compressed data files) 

Fixed-Size Overhead

The following subtopics show how to calculate fixed-size overhead. Use one of two calculation methods, depending on whether the database uses bitmap compression, or uses RLE compression or no compression.

Fixed Size Overhead Using Bitmap Compression

Calculations for fixed-size overhead using bitmap compression use the following factors and constants: the expanded block size (DC), the number of existing blocks (DB), a 72-byte block header, and the compression bitmap.

The compression bitmap uses one bit for each cell in a block. Dividing the expanded block size by 8 provides the number of cells, which equals the number of bits in the bitmap. Dividing this value again by 8 determines the number of bytes in the bitmap; therefore, the following procedure divides the expanded block size by 64 to obtain the bitmap portion of the fixed-size overhead for each block.

To calculate the fixed-size overhead when bitmap compression is enabled:

  1. Determine the fixed-size overhead per block:

     (expanded block size in bytes / 64) + 72 = temporary value

  2. Round the temporary value from step 1 up to the nearest multiple of eight:
     a. Divide the value by 8.
     b. Use the whole number only.
     c. If anything is left over, add 1.
     d. Multiply by 8.
     The result is the fixed-size overhead per block.
  3. Determine the fixed-size overhead for the database. Perform the following calculation and write the resulting value in the cell labeled DF in Table 97.

     fixed-size overhead per block * number of existing blocks = fixed-size overhead for the database

Example

Assume bitmap compression and an expanded block size of 4,802 bytes with a total of 15,000,000 existing blocks.

  1. Calculate the temporary value:

     (4,802 / 64) + 72 = 147.03 bytes

  2. Round the temporary value up to the next multiple of eight, as shown in Table 98: 152 bytes per block.
  3. Multiply the overhead per block by the number of blocks:

     152 bytes * 15,000,000 blocks = 2,280,000,000 bytes (database overhead)

Table 98: Fixed Size Overhead Calculation

    Calculation for Rounding Up              Result
    1. Divide the number of bytes by 8       147.03 / 8 = 18.38
    2. Use the whole number only             18
    3. If anything is left over, add 1       18 + 1 = 19
    4. Multiply by 8                         19 * 8 = 152 bytes of overhead per block
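
A minimal Python sketch of the same bitmap-overhead calculation, including the round-up-to-a-multiple-of-eight rule; the 4,802-byte block size and 15,000,000 blocks are the example values above.

    import math

    def bitmap_overhead_per_block(expanded_block_size):
        # (expanded size / 64) + 72, rounded up to the next multiple of 8
        temporary = expanded_block_size / 64 + 72
        return math.ceil(temporary / 8) * 8

    overhead_per_block = bitmap_overhead_per_block(4802)   # 152 bytes
    database_overhead = overhead_per_block * 15_000_000    # DF: 2,280,000,000 bytes
    print(overhead_per_block, database_overhead)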



Fixed Size Overhead Using Run-Length Encoding (RLE) or No Compression

Calculations for fixed-size overhead for databases not using bitmap compression use two values: the 72-byte block header and the number of existing blocks (DB).

To calculate the fixed-size overhead for a database that uses RLE or no compression:

  1. Use the 72-byte header as the fixed-size overhead per block.
  2. Determine the fixed-size overhead for the database. Perform the following calculation and write the resulting value in the cell labeled DF in Table 97.

     72 bytes * number of existing blocks = fixed-size overhead for the database
    

Example

Assume a total of 15,000,000 existing blocks.

72 bytes * 15,000,000 blocks = 1,080,000,000 bytes  

Index Files

The calculation for the space required to store the index files (essxxxxx.ind) uses two factors: the number of existing blocks (DB) and the size of each index entry, 112 bytes per block.

To calculate the total size of the database index, including all index files, perform the following calculation. Write the resulting index size in the cell labeled DG in Table 97.

number of existing blocks * 112 bytes = size of database index 

Example

Assume a database with 15,000,000 blocks.

15,000,000 blocks * 112 = 1,680,000,000 bytes 

Note: If the database is already loaded, select Database > Information in Application Manager and look at the Files tab for the size of the index file. If you are using Essbase Administration Services, click the Storage tab on the Database Properties window.

Fragmentation Allowance

If you are using bitmap or RLE compression, a certain amount of fragmentation occurs. The amount of fragmentation is based on individual database and operating system configurations and cannot be precisely predicted.

As a rough estimate, calculate 20% of the compressed database size (value DE from Table 97) and write the result to the cell labeled DH in the same table.

Example

Assume a compressed database size of 5,760,000,000 bytes.

5,760,000,000 bytes * .2 = 1,152,000,000 bytes 

Outline

The space required by an outline can have two components.

To estimate the size of the outline:

  1. Estimate the main area of the outline by multiplying the number of members by a name-length factor between 350 and 450 bytes.
     If the database includes few aliases or very short aliases and short member names, use a smaller number within this range. If you know that the names or aliases are very long, use a larger number within this range.
     Because the name-length factor is an estimated average, the following formula provides only a rough estimate of the main area of the outline.

     number of members * name-length factor = size of main area of outline

     Note: See Limits for the maximum sizes of member names and aliases.

     For the memory space requirements calculated later in this chapter, use only the size of the main area of the outline.

  2. For disk space requirements, if the outline includes attribute dimensions, calculate the size of the attribute association area for each base dimension: multiply the number of members of the base dimension by the sum of the counts of members of all attribute dimensions associated with that base dimension, and then divide by 8.
     Note: Within the counts of members, do not include Label Only members and shared members.

     (number of base-dimension members * sum of counts of attribute-dimension members) / 8 = size of attribute association area for a base dimension

  3. Sum the attribute association areas of all base dimensions to determine the total attribute association area for the outline.
  4. For the total disk space required for the outline, add together the main outline area and the total attribute association area, and write the result in the cell labeled DI in Table 97.

     main area of outline + total attribute association area = total disk space required for the outline

Example

Assume an outline with 26,000 members, member names and aliases of medium length (a name-length factor of 400 bytes), and attribute dimensions associated with two base dimensions, producing attribute association areas of 201,250 bytes and 3,750 bytes.

  1. Calculate the main area of the outline:

     400 bytes (name-length factor) * 26,000 members = 10,400,000 bytes

  2. Sum the attribute association areas for the total attribute association area for the database:

     201,250 bytes + 3,750 bytes = 205,000 bytes

  3. For a total estimate of outline disk space, add the main area of the outline and the total attribute association area:

     10,400,000 bytes + 205,000 bytes = 10,605,000 bytes (outline disk space requirement)

Note: Do not use this procedure to calculate outline memory space requirements. See The Outline Size Used in Memory.
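
The following Python sketch repeats the disk-space arithmetic of the example; the member count, name-length factor, and association areas are the example values above and should be replaced with your own worksheet figures.

    members = 26_000
    name_length_factor = 400                 # bytes per member, between 350 and 450

    main_area = members * name_length_factor           # 10,400,000 bytes
    attribute_association_area = 201_250 + 3_750       # sum over base dimensions
    outline_disk_space = main_area + attribute_association_area  # DI: 10,605,000 bytes
    print(outline_disk_space)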

Work Areas

Three different processes create temporary work areas on the disk: database restructuring, migration, and database recovery.

To create these temporary work areas, Essbase may require disk space equal to the size of the entire database. Restructuring and migration need additional work space the size of the outline. Because none of these activities occur at the same time, a single allocation can represent all three requirements.

To calculate the size of a work area used for restructuring, migration, and recovery, calculate the sum of the sizes of the following database components from Table 97:

work area = size of compressed data files + fixed-size overhead
+ size of index files + fragmentation allowance + outline size 

Write the result of this calculation to the cell labeled DJ in Table 97.
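
The sketch below pulls the disk components of a single database together, using the hypothetical figures from this chapter's examples; substitute your own values from Tables 94 and 97.

    # Hypothetical worksheet values; substitute your own from Tables 94 and 97.
    existing_blocks       = 15_000_000    # DB
    compressed_block_size = 384           # DD, bytes
    overhead_per_block    = 152           # bitmap-compression overhead per block, bytes
    outline_size          = 10_605_000    # DI, bytes

    compressed_data_files = compressed_block_size * existing_blocks           # DE
    fixed_overhead        = overhead_per_block * existing_blocks              # DF
    index_files           = existing_blocks * 112                             # DG
    fragmentation         = 0.20 * compressed_data_files                      # DH
    work_area             = (compressed_data_files + fixed_overhead + index_files
                             + fragmentation + outline_size)                  # DJ
    total_disk = (compressed_data_files + fixed_overhead + index_files
                  + fragmentation + outline_size + work_area)                 # excludes LROs (DK)
    print(f"{total_disk:,.0f} bytes")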

Linked Reporting Objects Considerations

You can use the Linked Reporting Objects (LROs) feature to associate objects with data cells. The objects can be flat files, HTML files, graphics files, and cell notes. For information about linked reporting objects, see Linking Objects to Essbase Data.

Two aspects of LROs affect disk space: the space required to store the linked objects themselves and the space required for the LRO catalog that Essbase uses to track the objects.

To estimate the disk space requirements for linked reporting objects:

  1. Estimate the size of the objects. If a limit is set, multiply the number of LROs by that limit; otherwise, sum the sizes of all anticipated LROs.
  2. Size the LRO catalog: multiply the total number of LROs by 8192 bytes.
  3. Add together the two areas and write the result in the cell labeled DK in Table 97.

     sum of LRO sizes + size of LRO catalog = LRO disk space requirement
    

Example

Assume the database uses 1500 LROs, composed of 1000 URLs with a maximum size of 512 bytes each and 500 cell notes.

  1. Calculate the object storage area: 1000 * 512 bytes = 512,000 bytes maximum required for the stored URLs.
  2. Calculate the size of the LRO catalog: 1500 total LROs * 8192 bytes = 12,288,000 bytes.
  3. Add together the two areas:

     512,000 bytes + 12,288,000 bytes = 12,800,000 bytes total LRO disk space requirement
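
A minimal Python sketch of the same LRO estimate, using the example counts above; the counts and the 512-byte URL limit are illustrative.

    url_count    = 1000
    url_max_size = 512        # bytes per stored URL
    total_lros   = 1500       # URLs plus cell notes

    object_storage = url_count * url_max_size   # 512,000 bytes
    lro_catalog    = total_lros * 8192          # 12,288,000 bytes
    print(object_storage + lro_catalog)         # DK: 12,800,000 bytes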
    

Estimating the Total Server Disk Space Requirement

The earlier calculations in this chapter estimate the data storage requirement for a single database. Often, more than one database resides on the server.

In addition to the data storage required for each database, the total Essbase data storage requirement on a server includes Essbase software. Allow approximately 80 to 202 MB (84,451,328 to 211,953,664 bytes) for the base installation of Essbase software and sample applications. The allowance varies by platform and file management system. For details, see the Essbase Installation Guide.

To estimate the total server disk space requirement:

  1. In the worksheet in Table 99, list the names and disk space requirements that you calculated for each database.
  2. Sum the database requirements and write the total in the cell labeled DL.
  3. In the cell labeled DM, write the appropriate disk space requirement for the software installed on the server.
  4. For the total server disk space requirement in bytes, sum the values in cells DL and DM. Write this value in the cell labeled DN.
  5. To convert to megabytes (MB), divide the value in cell DN by 1,048,576 bytes. Write this value in the cell labeled DO.

Table 99: Worksheet for Total Server Disk Space Requirement

    List of Databases (from Table 97)                                   Size
    a. ______________                                                   ________
    b. ______________                                                   ________
    c. ______________                                                   ________
    d. ______________                                                   ________
    e. ______________                                                   ________
    f. ______________                                                   ________
    g. ______________                                                   ________
    Sum of database disk sizes (a + b + c + d + e + f + g)              DL: ________
    Essbase server software
      (84,451,328 to 211,953,664 bytes)                                 DM: ________
    Total Essbase server disk requirement in bytes (DL + DM)            DN: ________
    Total Essbase server disk requirement in megabytes (MB)
      (DN divided by 1,048,576 bytes)                                   DO: ________
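
The following Python sketch performs the summation and unit conversion from Table 99; the per-database totals are hypothetical placeholders for values from your copies of Table 97.

    database_sizes_bytes = [21_000_000_000, 5_500_000_000]  # hypothetical totals from Table 97
    software_bytes       = 211_953_664                      # upper end of the software allowance (DM)

    total_bytes = sum(database_sizes_bytes) + software_bytes   # DN
    total_mb    = total_bytes / 1_048_576                      # DO
    print(f"{total_mb:,.1f} MB")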



Estimating Memory Requirements

The minimum memory requirement for running Essbase is 64 MB (128 MB on UNIX systems). Depending on the number of applications and databases on the server and on the database operations performed, you may require more memory.

To estimate the memory required on the server:

  1. Calculate the startup memory requirement for each application.
  2. Using the worksheet in Table 100, estimate the memory requirements for each database, including both the startup requirements and the operational requirements for retrievals and calculations.
  3. Add these requirements together for all databases and applications, as shown in the Worksheet for Total Memory Requirement in Table 103.

Startup Memory Requirement for an Application

Each open application has the following memory requirement at startup:

Multiply the number of applications that will be running simultaneously on the server by the appropriate startup requirement and write the resulting value to the cell labeled ML in Table 103.

Startup Memory Requirement for a Single Database

To estimate the memory requirement for a database, make a copy of Table 100 or use a separate sheet of paper as a worksheet for a single database. If multiple databases are on a server, repeat this process for each database. Write the name of the database on the worksheet.

Each row links to information that describes how to size that component. Perform each calculation and note the results in the appropriate cell in the Size column. Some calculations use the factors that you wrote in Table 101. After filling in all the sizes in the Size column, add them together to determine the memory requirement for that database.

After estimating memory requirements for all databases on the server, proceed to Estimating Total Essbase Memory Requirements.

Table 100: Worksheet for Estimating Memory Requirements for a Database

Database Name: ______________

    Memory Requirement                                                  Size
    Startup requirements per database:
      Database outline (see The Outline Size Used in Memory)            MA: ________
      Index cache (see Sizing the Index Cache)                          MB: ________
      Cache-related overhead (see Cache-Related Overhead)               MC: ________
      Area for data structures (see Memory Area for Data Structures)    MD: ________
    Operational requirements:
      Memory used for data retrievals (see Estimating Additional
        Memory Requirements for Data Retrievals)                        ME: ________
      Memory used for calculations (see Estimating Additional
        Memory Requirements for Calculations)                           MF: ________
    Total of the size values from MA through MF: the estimated
      memory required for the database                                  MG: ________
    MG divided by 1,048,576 bytes: the total database memory
      requirement in megabytes (MB)                                     MH: ________



In Table 103, enter the name of the database and the total memory requirement in megabytes, MH.

Factors To Be Used in Sizing Memory Requirements

Before you start the estimate, calculate factors to be used in calculating the estimate.

Table 101 lists sizing factors with references to sections in this and other chapters that provide information to determine these sizes. Go to the section indicated, perform the calculation, then return to Table 101 and write the size, in bytes, in the Value column of this table.

Later in this chapter, you can refer to Table 101 for values to use in various calculations.

Table 101: Factors Used to Calculate Database Memory Requirements

    Database Sizing Factor                                              Value
    The number of cells in a logical block (see The Number of Cells
      in a Logical Block)                                               MI: ________
    The number of threads allocated to the server through the
      SERVERTHREADS setting (see the Technical Reference in the
      docs directory)                                                   MJ: ________
    Potential stored-block size (see Size of Expanded Data Block)       MK: ________



The calculations in this chapter do not account for other factors that affect how much memory is used; such factors have complex implications, and their effects on memory size cannot be calculated precisely.

The Outline Size Used in Memory

The attribute association area included in disk space calculations is not a sizing factor for memory. Calculate only the main area of the outline.

For memory size requirements, outline size is calculated using two factors: the number of members in the outline and a name-length factor.

To calculate the outline memory requirement, multiply the number of members by a name-length factor between 350 and 450 bytes and write the result to the cell labeled MA in Table 100.

If the database includes few aliases or very short aliases and short member names, use a smaller number within the 350-450 byte range. If you know that the names or aliases are very long, use a larger number within this range.

Because the name-length factor is an estimated average, the following formula provides only a rough estimate of the main area of the outline:

memory size of outline = number of members * name-length factor 

Note: See Limits, for the maximum sizes for member names and aliases.

Example

Assuming the outline has 26,000 members and a median name-length, use the following calculation to estimate the outline size used in memory:

26,000 members * 400 bytes per member = 10,400,000 bytes 

Index Cache

At startup, Essbase sets aside memory for the index cache, the size of which can be user-specified. To determine the size of the index cache, see Sizing the Index Cache and write the size in the cell labeled MB in Table 100.

Cache-Related Overhead

Essbase uses additional memory while it works with the caches.

The calculation for this cache-related overhead uses two factors: the size of the index cache and the number of threads allocated to the Essbase server process (MJ).

To calculate the cache-related overhead at startup:

  1. Calculate half the index cache size, in bytes:

     index cache size * .5 = index cache-related overhead

  2. Calculate the additional cache overhead, in bytes:

     ((number of threads allocated to the Essbase server process * 3) * 256) + 5242880 bytes = additional cache overhead

  3. Sum the index cache-related overhead and the additional cache overhead. Write the result in the cell labeled MC in Table 100.

     cache-related overhead = index cache-related overhead + additional cache overhead
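
A minimal Python sketch of the cache-related overhead formula; the 10 MB index cache and 20 threads are hypothetical values.

    index_cache_size = 10_485_760   # hypothetical 10 MB index cache (cell MB in Table 100)
    server_threads   = 20           # MJ: value of the SERVERTHREADS setting

    index_cache_overhead   = index_cache_size * 0.5
    additional_overhead    = ((server_threads * 3) * 256) + 5_242_880
    cache_related_overhead = index_cache_overhead + additional_overhead   # MC
    print(f"{cache_related_overhead:,.0f} bytes")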
    

The Number of Cells in a Logical Block

The term logical block applies to an expanded block in memory.

To determine the cell count of a logical block, multiply together the member counts of all dense dimensions, including Dynamic Calc and Dynamic Calc And Store members but excluding Label Only and shared members:

  1. Using Table 102 as a worksheet, enter each dense dimension and its number of members, excluding Label Only and shared members. If there are more than seven dense dimensions, list the dimensions elsewhere and include all dense dimensions in the calculation.
  2. Multiply the number of members of the first dense dimension (line a) by the number of members of the second dense dimension (line b) by the number of members of the third dense dimension (line c), and so on, to determine the total number of cells in a logical block. Write the result in the cell labeled MI in Table 101.

     a * b * c * d * e * f * g (and so on) = total number of cells
    

Table 102: Determining the Number of Cells in a Logical Block

    Dense Dimension Name          Number of Members
    a. ______________             ________
    b. ______________             ________
    c. ______________             ________
    d. ______________             ________
    e. ______________             ________
    f. ______________             ________
    g. ______________             ________



Example

Excluding Label Only and shared members, the dense dimensions in Sample Basic contain 17 (Year), 14 (Measures), and 4 (Scenario) members. The calculation for the cell count of a logical block in Sample Basic is:

17 * 14 * 4 = 952 cells 

Memory Area for Data Structures

At application startup time, Essbase sets aside an area of memory based on the following factors: the number of threads, the number of members in the outline, and the number of cells in a logical block.

To calculate the data structure area in memory:

  1. Use the following formula to calculate the size in bytes:

     number of threads * ((number of members in the outline * 26 bytes) + (logical block cell count * 36 bytes))

  2. Write the result in the cell labeled MD in Table 100.

Example

Assuming 20 threads for the Sample Basic database, the startup area in memory required for data structures is calculated as follows:

20 threads * ((79 members * 26 bytes) + (952 cells * 36 bytes)) = 726,520 bytes 
726,520 bytes / 1,048,576 bytes = .7 MB 
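
The following Python sketch reproduces the data-structure calculation with the Sample Basic example values; only the thread count is an assumption of the example.

    threads             = 20     # number of threads assumed in the example
    outline_members     = 79     # members in the Sample Basic outline
    logical_block_cells = 952    # MI: cells in a logical block

    data_structure_area = threads * ((outline_members * 26) + (logical_block_cells * 36))  # MD
    print(data_structure_area, round(data_structure_area / 1_048_576, 1))  # 726520 bytes, ~0.7 MB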

Estimating Additional Memory Requirements for Database Operations

In addition to startup memory requirements, operations such as queries and calculations require additional memory. Because of many variables, the only way to estimate memory requirements of operations is to run sample operations and monitor the amount of memory used during these operations.

Estimating Additional Memory Requirements for Data Retrievals

Essbase processes requests for database information (queries) from a variety of sources. For example, Essbase processes queries from the Spreadsheet Add-in and from Report Writer. Essbase uses additional memory when it retrieves the data for these queries, especially when Essbase must perform dynamic calculations to retrieve the data. This section describes Essbase memory requirements for query processing.

Essbase is a multithreaded application in which queries get assigned to threads. Threads are automatically created when Essbase is started. In general, a thread exists until you shut down OLAP Server (for more information, see Running Essbase Servers, Applications, and Databases).

As Essbase processes queries, it cycles through the available threads. For example, assume 20 threads are available at startup. As each query is processed, Essbase assigns each succeeding query to the next sequential thread. After it has assigned the 20th thread, Essbase cycles back to the beginning, assigning the 21st query to the first thread.

While processing a query, a thread allocates some memory, and then releases most of it when the query is completed. Some of the memory is released to the operating system and some of it is released to the dynamic calculator cache for the database being used. However, the thread holds on to a portion of the memory for possible use in processing subsequent queries. As a result, after a thread has processed its first query, the memory held by the thread is greater than it was when Essbase first started.

Essbase uses the maximum amount of memory for query processing when both of the following conditions are true: every thread has processed at least one query, and the maximum number of simultaneous queries are being processed.

In the example where 20 threads are available at startup, the maximum amount of memory is used for queries when at least 20 queries have been processed and the maximum number of simultaneous queries are in process.

Calculating the Maximum Amount of Additional Memory Required

To estimate query memory requirements by observing actual queries:

  1. Observe the memory used during queries.
  2. Calculate the maximum possible use of memory for query processing by adding together the memory used by queries that will be run simultaneously, and then adding the extra memory that is held by threads that are waiting for queries.

The formula in Estimating the Maximum Memory Usage for A Query Before and After Processing uses the following variables: Total#Threads, Max#ConcQueries, MAXAdditionalMemDuringP, and MAXAdditionalMemAfterP. The following sections describe how to determine values for these variables.

Determining the Total Number of Threads

The potential number of threads available is based on the number of licensed ports that are purchased. The actual number of threads available depends on settings you define for the Agent or the server. Use the number of threads on the system as the value for Total#Threads in later calculations.

Estimating the Maximum Number of Concurrent Queries

Determine the maximum number of concurrent queries and use this value for Max#ConcQueries in later calculations. This value cannot exceed the value for Total#Threads.

Estimating the Maximum Memory Usage for A Query Before and After Processing

The memory usage of individual queries depends on the size of each query and the number of data blocks that Essbase needs to access to process each query. To estimate the memory usage, calculate the additional memory Essbase uses during processing and after processing each query.

Decide on several queries that you expect to use the most memory. Consider queries that must process large numbers of members; for example, queries that perform range or rank processing.

To estimate the memory usage of a query:

  1. Turn off the dynamic calculator cache by setting the ESSBASE.CFG setting DYNCALCACHEMAXSIZE to 0 (zero). Turning off the dynamic calculator cache enables measurement of the memory still held by a thread by ensuring that, after the query is complete, the memory used for blocks during dynamic calculations is released by the ESSSVR process to the operating system. For more information, see the Technical Reference in the docs directory.
  2. Start the Essbase application.
  3. Using memory monitoring tools for the operating system, note the memory used by OLAP Server (the ESSSVR process) before processing the query. Use this value for MemBeforeP.
  4. Run the query.
  5. Using memory monitoring tools for the operating system, note the peak memory usage of OLAP Server (the ESSSVR process) while the query is processed. Use this value for MemDuringP.
  6. Using memory monitoring tools for the operating system, note the memory usage of OLAP Server (the ESSSVR process) after the query is completed. Use this value for MemAfterP.
  7. Calculate the following two values for the query:

     additional memory used during processing = MemDuringP - MemBeforeP
     additional memory used after processing = MemAfterP - MemBeforeP

  8. When you have completed these calculations for all the relevant queries, compare the results to determine the following two values:

     MAXAdditionalMemDuringP: the largest amount of additional memory used during processing of any query
     MAXAdditionalMemAfterP: the largest amount of additional memory still held after processing of any query

  9. Insert the two values from step 8 into the following formula. The amount of additional memory required for data retrievals will not exceed:

     Max#ConcQueries * MAXAdditionalMemDuringP + (Total#Threads - Max#ConcQueries) * MAXAdditionalMemAfterP

     Write the result of this calculation, in bytes, to the cell labeled ME in Table 100.

Because this calculation method assumes that all of the concurrent queries are maximum-sized queries, the result may exceed your actual requirement. It is difficult to estimate the actual types of queries that will be run concurrently.
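
The following Python sketch evaluates the retrieval-memory formula; the thread count, concurrent-query count, and memory measurements are hypothetical placeholders for values observed with your own queries.

    total_threads    = 20          # Total#Threads
    max_conc_queries = 10          # Max#ConcQueries (hypothetical)
    max_mem_during_p = 2_000_000   # MAXAdditionalMemDuringP, bytes (hypothetical)
    max_mem_after_p  = 500_000     # MAXAdditionalMemAfterP, bytes (hypothetical)

    retrieval_memory = (max_conc_queries * max_mem_during_p
                        + (total_threads - max_conc_queries) * max_mem_after_p)  # ME
    print(f"{retrieval_memory:,} bytes")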

To adjust the memory used during queries, you can set values for the retrieval buffer and the retrieval sort buffer. For information, see Setting the Retrieval Buffer Size and Setting the Retrieval Sort Buffer Size.

Estimating Additional Memory Requirements Without Monitoring Actual Queries

If you cannot perform this test with actual queries, you can calculate a very rough estimate of the operational requirement for queries by summing the following values and multiplying the sum by the maximum number of possible concurrent queries: the size of the retrieval buffer, the size of the retrieval sort buffer, and an estimate of the additional memory needed to process a query (for example, memory for blocks used in dynamic calculations).

Example

To estimate the maximum memory needed for concurrent queries, assume the following values: a maximum of 20 concurrent queries, a retrieval buffer of 10,240 bytes (10 KB), a retrieval sort buffer of 10,240 bytes (10 KB), and an estimated 761,600 bytes of additional memory per query.

Estimated memory for retrievals:

20 * (10,240 bytes + 10,240 bytes + 761,600 bytes)
= 15,641,600 bytes 

Estimating Additional Memory Requirements for Calculations

For existing calculation scripts, you can use the memory monitoring tools provided for the operating system on the server to observe memory usage. Run the most complex calculation and take note of the memory usage both before and while running the calculation. Calculate the difference and use that figure as the additional memory requirement for the calculation script.

To understand calculation performance, see Optimizing Calculations.

If you cannot perform a test with a calculation script, you can calculate a very rough estimate for the operational requirement of a calculation by adding together the following values:

For the total calculation requirement, summarize the amount of memory needed for all calculations that will be run simultaneously and write that total to the cell labeled MF in Table 100.

Note: The size and complexity of the calculation scripts affect the amount of memory required. The effects are difficult to estimate.

Estimating Total Essbase Memory Requirements

You can use Table 103 as a worksheet on which to calculate an estimate of the total memory required on the server.

Table 103: Worksheet for Total Server Memory Requirement

    Component                                                 Memory Required, in Megabytes (MB)
    Sum of application startup memory requirements (see
      Startup Memory Requirement for an Application)          ML: ________
    Concurrent databases (in rows a through g, list the
      databases from copies of Table 100 and enter their
      respective memory requirements, MH):
      a. ______________                                       MH: ________
      b. ______________                                       MH: ________
      c. ______________                                       MH: ________
      d. ______________                                       MH: ________
      e. ______________                                       MH: ________
      f. ______________                                       MH: ________
      g. ______________                                       MH: ________
    Operating system memory requirement                       MM: ________
    Total estimated memory requirement for the server         MN: ________



To estimate the total Essbase memory requirement on a server:

  1. Make sure the total startup memory requirement for applications is recorded in the cell labeled ML, as described in section Startup Memory Requirement for an Application.
  2. List the largest set of databases that will run concurrently on the server. In the Memory Required column, for each database note the memory requirement estimated in the database requirements worksheet, Table 100.
  3. Determine the operating system memory requirement and write the value in megabytes to the cell labeled MM in Table 103.
  4. Total all values and write the result in the cell labeled MN.
  5. Compare the value in MN with the total available random-access memory (RAM) on the server.
  6. If cache memory locking is enabled, the total memory requirement should not exceed two-thirds of available RAM; otherwise, system performance can be severely degraded. If cache memory locking is disabled, the total memory requirement should not exceed available RAM.

    If there is insufficient memory available, you can redefine your cache settings and recalculate the memory requirements. This can be an iterative process. For guidelines, see Fine Tuning Cache Settings. In some cases, you may need to purchase additional RAM.



