From Eigenpedia

This document describes Farrago's code cache. This is a singleton pool associated with a DBMS instance; the pool is used to cache and share various objects generated and used in query processing. These objects are referred to as "code" to distinguish this from Fennel's data cache, which caches the contents of stored disk pages.


The code cache is intended to speed up the repeated execution of frequently needed SQL statements. Code caching is mainly of use in SQL environments where the same statements are repeated frequently, e.g.

  • OLTP: many transactional applications rely on a fairly small repertoire of prepared statements (e.g. the query to populate a list box, or the INSERT to add a new record for a given table)
  • OLAP: engines such as Mondrian may issue the same or similar SQL queries repeatedly as subcomponents of multi-dimensional results

Code caching may be of less use in ad-hoc query environments, but some of the caching may still be relevant (e.g. SQL/MED wrappers and servers).

Currently, all caching is in-memory. A two-level cache (just like a web browser with a combined in-memory and on-disk cache of web pages) might be useful in some environments, and wouldn't be terribly hard to implement.

Management Via SQL

An administrator can disable the code cache like this:

alter system set "codeCacheMaxBytes" = min;

The code cache can be set to grow without bound (until the JVM heap is exhausted):

alter system set "codeCacheMaxBytes" = max;

By default, a finite bound of 20000000 bytes (roughly 20MB) is set when a new Farrago database is initialized:

alter system set "codeCacheMaxBytes" = 20000000;

The contents of the cache can be cleared at any time:

call sys_boot.mgmt.flush_code_cache();

Cache Structure

The code cache is implemented as an instance of FarragoObjectCache. This class is similar to the Apache Commons Pool component, but includes some additional support needed by Farrago:

  • Support for sharing of entries by concurrent threads. Entries are pinned while in use (rather than borrowed as in Commons Pool). A pinned entry cannot be discarded from the cache; multiple threads are allowed to pin and share the same entry simultaneously if they specify false for the exclusive parameter to the pin method.
  • Support for resource accounting in terms of memory usage rather than number of objects. The caller is responsible for supplying reliable estimates of per-object memory usage so that the cache size can be capped at a fixed amount of memory (rather than a fixed number of objects).

Each entry in the cache must have a unique key associated with it.
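The pinning semantics described above can be sketched as follows. This is a minimal illustration with hypothetical names (the real FarragoObjectCache API differs); it shows only the core invariants: an entry with a nonzero pin count cannot be evicted, a non-exclusive pin may be shared by several users at once, and an exclusive pin cannot coexist with any other pin.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of FarragoObjectCache-style pin/unpin semantics.
public class PinningCacheSketch {
    static class Entry {
        final Object value;
        int pinCount;            // entries with pinCount > 0 may not be evicted
        boolean pinnedExclusive; // true while held by an exclusive pinner
        Entry(Object value) { this.value = value; }
    }

    private final Map<String, Entry> entries = new HashMap<>();

    // Pin the entry for 'key', constructing it via 'factory' on a miss.
    // Multiple callers may share a pin when exclusive == false.
    public synchronized Object pin(String key, boolean exclusive,
                                   Supplier<Object> factory) {
        Entry e = entries.computeIfAbsent(key, k -> new Entry(factory.get()));
        if (e.pinCount > 0 && (exclusive || e.pinnedExclusive)) {
            throw new IllegalStateException("entry already pinned incompatibly");
        }
        e.pinCount++;
        e.pinnedExclusive = exclusive;
        return e.value;
    }

    public synchronized void unpin(String key) {
        Entry e = entries.get(key);
        if (--e.pinCount == 0) {
            e.pinnedExclusive = false;
        }
    }

    // Eviction must skip pinned entries.
    public synchronized boolean canEvict(String key) {
        Entry e = entries.get(key);
        return e != null && e.pinCount == 0;
    }
}
```

The key point is that eviction consults the pin count, so a shared entry stays resident for as long as any thread is still using it.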

Cache Contents

Here is a list of the object types currently known to be managed in the code cache:

  • Executable statement: This is a top-level encoding of an optimized non-DDL statement (either a query or a DML statement). If the SQL statement requires Java for execution, then the executable statement includes a package of one or more generated Java classes used in the implementation. If the SQL statement requires Fennel for execution, then the executable statement includes an XMI encoding which can be used to construct the corresponding ExecStreamGraph (however, the executable statement does not include an actual ExecStreamGraph object; that is cached separately). An executable statement also includes a small amount of other metadata about the statement, such as the output row type it produces, any dynamic parameters, and the repository MOFIDs of the catalog objects (e.g. tables) accessed by its execution. Like all cached objects, an executable statement is reusable, and it is also sharable; this means that multiple connections can be executing it simultaneously, all sharing the same object from the cache. The cache key for an executable statement is the canonically expanded text of the SQL statement after it has passed through SqlParser, SqlValidator, and unparse. The FarragoSessionPreparingStmt interface method mayCacheImplementation allows a session personality to selectively disable statement caching based on analysis of the statement implementation after preparation (for example, if the optimization resulted in hard-coding of information which may become out of date by the time of next execution).
  • Fennel ExecStreamGraph: This is a complex native-code object used by the Fennel executor to carry out dataflow execution during query processing. An ExecStreamGraph is non-sharable: in other words, if a single connection executes the same query over and over, it can keep reusing a single cached ExecStreamGraph; but if three connections execute the same query simultaneously, then three separate ExecStreamGraph instances have to be constructed, because each one maintains private state during execution. An ExecStreamGraph instance can be reconstructed at any time from the XMI in an executable statement; this is why these two code types can be cached independently. The cache key for an ExecStreamGraph is the XMI string used in construction.
  • Default value: This is the Rex representation for a column's default value. These are stored in the catalog as strings, and then parsed and cached as Rex trees. The cache key for a default value is the MOFID of the expression stored in the catalog. A default value is sharable because the kinds of Rex nodes which can appear in default value expressions are all immutable after construction.
  • SQL/MED wrapper: This is an instance of FarragoMedDataWrapper, typically just a lightweight object implicitly referencing the jar storing the wrapper implementation. A SQL/MED wrapper is reusable but not sharable (but the loaded jar itself is shared, even though the wrapper object instance is not). The cache key for a SQL/MED wrapper is the MOFID of its definition in the catalog.
  • SQL/MED server: This is an instance of FarragoMedDataServer, typically a connection of some kind to an external data server. A SQL/MED server may or may not be sharable; the SQL/MED SPI allows Farrago to interrogate the server to find out. For example, a connection to a file server might be sharable, whereas a connection to a database server might not be. The cache key for a SQL/MED server is the MOFID of its definition in the catalog.

Cache Victimization

By default, Farrago uses a basic LRU victim policy. Once the memory in use exceeds the set limit, objects are evicted in LRU order until memory usage falls below the limit. FarragoObjectCache supports pluggable victimization policies (although this support still needs to be exposed at the FarragoSessionFactory level so that extension projects can supply their own policies). A smarter policy could also use additional information about the cost and benefit of caching each object:

  • memory usage: all else being equal, it's better to evict a single large object rather than a number of small objects
  • construction time: if we kept track of information such as the time required to optimize a SQL statement, we would prefer to evict the ones that were fast to reconstruct; it should be possible for FarragoObjectCache to do this transparently, since the construction happens inside of a callback from the pin method; System.nanoTime provides access to high-resolution timers (although this could be very unreliable in a high-concurrency environment; thread CPU time would be better)
  • execution time: it's better to cache a statement which executes very quickly as opposed to one which runs for a long time (for the one which runs for a long time, the construction time is negligible); tracking this would require help from the caller

It might be an interesting research project to come up with a good weighting of these factors.
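One simple (and entirely hypothetical) way to combine the three factors above into an eviction score: weight an object's memory footprint by how cheap it is to reconstruct relative to its execution time. The weights and formula here are illustrative only, not a tuned policy.

```java
// Hypothetical eviction score for the factors discussed above.
// Higher score = better eviction candidate.
public class VictimScore {
    // memoryBytes:    estimated footprint of the cached object
    // constructNanos: time needed to rebuild it (e.g. optimization time)
    // execNanos:      typical execution time of the statement
    public static double score(long memoryBytes, long constructNanos,
                               long execNanos) {
        // Fraction of total work that reconstruction would represent;
        // near zero for long-running statements, near one for
        // fast statements that were expensive to prepare.
        double reconstructionPenalty =
            (double) constructNanos / (constructNanos + execNanos);
        // Prefer evicting large objects that are cheap to rebuild
        // relative to their execution time.
        return memoryBytes * (1.0 - reconstructionPenalty);
    }
}
```

Under this sketch, a large object beats a small one as a victim (all else equal), and a statement that was slow to optimize but fast to run is protected.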

Object Staleness

It is currently the responsibility of the caller to deal with stale objects; the cache itself has no way of testing an object. For example, FarragoDatabase.prepareStmtImpl decides whether a cached executable statement is stale by checking to make sure that all of the accessed catalog objects still exist. An example of a case where no staleness checking is being done (resulting in a bug) is FRG-161.

It would be nice to follow Apache Commons Pool in making this an explicit part of the cache infrastructure.
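A sketch of what folding staleness testing into the cache might look like, using the catalog-object check from FarragoDatabase.prepareStmtImpl as the example predicate. All names here are hypothetical; the point is only the shape of the hook, in the spirit of Commons Pool's validation callbacks.

```java
import java.util.Set;

// Hypothetical sketch of a staleness hook the cache could invoke on pin.
public class StalenessCheckSketch {
    // The hook: the cache would discard and reconstruct any entry for
    // which this returns true.
    interface StalePredicate<V> {
        boolean isStale(V value);
    }

    // Stand-in for a cached executable statement, which records the
    // MOFIDs of the catalog objects it accesses.
    static class CachedStmt {
        final Set<String> referencedMofIds;
        CachedStmt(Set<String> mofIds) { this.referencedMofIds = mofIds; }
    }

    // Mimics the prepareStmtImpl check: a statement is stale if any
    // catalog object it references no longer exists.
    static StalePredicate<CachedStmt> catalogCheck(Set<String> liveMofIds) {
        return stmt -> !liveMofIds.containsAll(stmt.referencedMofIds);
    }
}
```

With such a hook registered per object type, callers like prepareStmtImpl would no longer need to hand-roll their own checks, and gaps like FRG-161 would be harder to introduce.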

Object Memory Estimates

In general, it is difficult to come up with precise memory accounting for complex objects, so the best we can do is "bogobytes" (similar to the "bogomips" CPU ratings reported by the Linux kernel). In particular, the SQL/MED objects tie down other resources (such as network connections), not just memory.

The approach we take is to estimate hard-to-measure memory usage in terms of an easy-to-measure quantity. For example, a coarse estimate for an ExecStreamGraph assumes its memory usage is proportional to the length of the XMI string used to construct it. (A finer estimate would use a different factor per ExecStream class.) To use this approach, several things are required:

  1. A means of computing actual memory used after constructing a cached object.
  2. A representative set of SQL statements. This is probably personality specific, meaning that eventually we may want to allow different personalities to supply different values for weighting factors.
  3. An estimation model such as linear regression.
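For step 3, the simplest model that fits the table below is a single scale factor per object type, bogobytes ≈ k * easyQuantity, which can be fit by least squares through the origin. This sketch is illustrative; the actual factors were derived from the spreadsheet data mentioned below.

```java
// Fit the scale factor k in (measuredBytes ~= k * easyQuantity) by
// least squares through the origin, given sample pairs.
public class FactorFit {
    public static double fitFactor(double[] easy, double[] measured) {
        double num = 0;
        double den = 0;
        for (int i = 0; i < easy.length; i++) {
            num += easy[i] * measured[i]; // sum of x*y
            den += easy[i] * easy[i];     // sum of x^2
        }
        return num / den;
    }
}
```

Running this over (easy quantity, measured memory) pairs from a representative workload yields the per-type weighting factor; a personality could rerun the fit against its own statement mix.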

The table below shows what we've come up with so far via experimentation with object types currently cached. Data used for determining weighting factors (including input SQL statements) is available in spreadsheet form (Media:CodeCacheMemoryEstimation.xls). For more details, please see the relevant source code.

Each entry below lists the easy-to-measure quantity, the hard-to-measure quantity, the formula for total bogobytes (which accounts for both quantities), and the methodology used for the memory computation.

  • Generated Java class: easy quantity = bytecode size (obtained from Janino); hard quantity = JIT-compiled native code size; formula = 1.75*x, where x is bytecode size in bytes. Methodology: use JRockit tools to monitor the JIT optimizer (this could be very different from HotSpot, but the order of magnitude is probably similar); a conservative estimate requires executing the statement long enough to make sure full JIT optimization kicks in.
  • ExecStreamGraph: easy quantity = XMI string length; hard quantity = C++ heap usage; formula = 1.5*x, where x is the number of bytes (not characters) in the XMI string. Methodology: use the uordblks field returned by mallinfo; it covers both malloc and new, and includes chunk overhead (which it's better to account for anyway). Take the delta between entry to the visit for ProxyCmdCreateExecutionStreamGraph and exit from ProxyCmdPrepareExecutionStreamGraph. Note that memory acquired between ExecStreamGraph open/close does not need accounting here (since it gets released when the graph is returned to the cache).
  • Default value: easy quantity = SQL string length; hard quantity = Java heap usage for the Rex tree; formula = 3*x, where x is the number of bytes in the SQL string. Methodology: SWAG.
  • SQL/MED wrapper: easy quantity = none; hard quantity = Java heap usage plus other wrapper-specific resources; formula = 1000 (hard-coded constant). Methodology: SWAG.
  • SQL/MED server: easy quantity = none; hard quantity = Java heap usage plus other server-specific resources; formula = 10000 (hard-coded constant). Methodology: SWAG.
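Collecting the table's formulas into one place gives a small estimator. The factors (1.75, 1.5, 3, and the SQL/MED constants) are exactly the ones tabulated above; the class and method names here are hypothetical.

```java
// Bogobyte estimates per cached object type, using the factors from
// the table above (hypothetical helper, not Farrago source).
public class BogobyteEstimator {
    // Generated Java class: 1.75 * bytecode size in bytes.
    public static long javaClass(long bytecodeBytes) {
        return Math.round(1.75 * bytecodeBytes);
    }

    // ExecStreamGraph: 1.5 * XMI string size in bytes.
    public static long execStreamGraph(long xmiBytes) {
        return Math.round(1.5 * xmiBytes);
    }

    // Default value: 3 * SQL string size in bytes.
    public static long defaultValue(long sqlBytes) {
        return 3 * sqlBytes;
    }

    // SQL/MED wrapper and server: hard-coded constants (SWAG).
    public static long medWrapper() { return 1000; }
    public static long medServer()  { return 10000; }
}
```

So, for example, a statement compiled to 10000 bytes of bytecode would be charged 17500 bogobytes against the codeCacheMaxBytes limit.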

Random jottings for ways to improve the estimates:

  • For Java heap usage, do something similar to FarragoTestCase.tearDownImpl, but use a better JVM API such as the one exposed to memory profilers (Sun seems to introduce a new technology for this in each release; I can't remember which is the latest) or JMX. This may not be completely reliable due to the whims of the garbage collector; ideally we want to be able to find out exactly how many heap bytes are reachable only via a given root object.
  • One complication is that the XMI string memory reference is shared between cache entries for both executable statements and ExecStreamGraphs. Our formulas attempt to address this by assigning half of the size to each on the assumption that their cache lifetimes will typically be coupled, but this doesn't correctly account for the concurrency case where multiple copies of an ExecStreamGraph are created.