DiCE does not use configuration files, command-line options, or environment variables. Instead, all configuration is done by calling configuration functions on the various API components. Most configuration options need to be set before starting up the system, but some can be adapted dynamically; see below for details. The following is a short overview of the available configuration options; see the corresponding reference pages for the complete set.
DiCE internally uses a thread pool to distribute the load among the processor cores. The size of the thread pool is configurable. Additionally, the number of threads which can be active at any time can be configured, up to the licensed number of cores.
The number of active threads can be changed dynamically at runtime. This can be used to balance the processor load against the needs of the embedding application or of other applications on the same host. If you decrease the number of threads, it may take a while until the limit is enforced, because running operations are not aborted.
The application can use any number of threads to call into the DiCE API. Those threads exist in addition to the threads in the thread pool. The DiCE scheduling system will, however, include them in the count of active threads and limit the use of threads from the thread pool accordingly.
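A configuration sketch for the thread settings might look as follows. The component and method names used here (IScheduling_configuration, set_thread_pool_size, set_active_thread_count) are assumptions for illustration only; the actual names are documented on the reference pages.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical sketch: size the thread pool before startup and adjust the
    // number of active threads at runtime.
    void configure_threads(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::IScheduling_configuration> scheduling(
            neuray->get_api_component<mi::neuraylib::IScheduling_configuration>());

        // Before startup: the size of the thread pool (placeholder method name).
        scheduling->set_thread_pool_size(16);

        // At any time: how many threads may be active concurrently, e.g. to
        // leave cores free for the embedding application (placeholder name).
        scheduling->set_active_thread_count(8);
    }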
You can configure a memory limit, and DiCE will try to keep its memory usage below that limit. To achieve this, it flushes data to disk. Objects can be flushed if the system can guarantee that other hosts in the network still hold them, or if they are the result of a job execution that can be repeated.
DiCE can be configured to use a swap directory to which it can flush the contents of loaded assemblies. Those parts can then be dropped from memory if the memory limit is exceeded, and are automatically reloaded on demand when they are accessed again.
The memory limit can be adapted dynamically. If you decrease it, DiCE makes a best effort to reduce its memory usage to the given amount; actually enforcing the limit may take a while and is not guaranteed to succeed.
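The memory-related settings could be expressed roughly like this; the component name and both method names are placeholders, not the actual DiCE API.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical sketch: configure a memory limit and a swap directory.
    void configure_memory(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::IDatabase_configuration> database(
            neuray->get_api_component<mi::neuraylib::IDatabase_configuration>());

        // Try to stay below roughly 8 GiB; DiCE flushes data to disk or relies
        // on copies on other hosts to meet the limit (placeholder method name).
        database->set_memory_limit(8ull * 1024 * 1024 * 1024);

        // Directory to which the contents of loaded assemblies may be swapped
        // out when the limit is exceeded (placeholder method name).
        database->set_swap_directory("/var/tmp/dice_swap");
    }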
Networking can be done in different ways: UDP multicast is supported, as well as TCP unicast with or without automatic host discovery.
You can decide how to use these networking capabilities in your application. You can realize a conventional master/slave configuration in which all database elements and jobs are created by the same host. In that scenario you would typically write a small stub application which does the configuration, starts the system, and then waits until shutdown is requested. In the background, DiCE handles the reception of data and accepts and processes requests to do work.
But you are not restricted to this scenario. DiCE allows all hosts to edit the database and to initiate jobs, which you can use to implement peer-to-peer applications. It is up to your application to avoid conflicting changes in this case. To help with that, the DiCE API provides means for synchronizing changes to database objects, for example by locking objects.
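As an illustration of such synchronization, a peer-to-peer edit might be bracketed by an object lock, roughly as sketched below. The lock() and unlock() calls are placeholders for whatever synchronization primitives the DiCE API actually provides; only the general pattern is intended.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical sketch: serialize edits to one database element across hosts.
    void edit_shared_element(mi::neuraylib::IDice_transaction* transaction)
    {
        // Acquire an application-level lock on the element so that concurrent
        // edits from other hosts cannot conflict (placeholder call).
        transaction->lock("app::shared_options");

        // ... edit the element here, for example via transaction->edit<...>() ...

        // Release the lock once the change is done (placeholder call).
        transaction->unlock("app::shared_options");
    }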
UDP Multicast Networking:
Using UDP multicast gives the best performance because data is sent once by a host and received by many hosts in parallel. Additionally, there is no need to configure a host list, so it is very easy to dynamically add and remove hosts. On the downside, high-bandwidth UDP multicast transmission is not supported by all network switches and might require changes to the network infrastructure. For the UDP multicast case, a multicast address, a network interface, and a port can be configured. A host list is optional; if given, it acts as a filter which restricts the hosts that can join.
Hosts can be started dynamically and will automatically join without the need for configuration. A callback object can be given to DiCE; it is called whenever hosts join or leave the network.
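Put together, a UDP multicast setup could be sketched like this. The enum value, the method names, and the callback interface are assumptions; in particular the signature of the membership callback is invented for illustration.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical host-tracking callback; interface name and signature are
    // placeholders.
    class My_host_callback
        : public mi::base::Interface_implement<mi::neuraylib::IHost_callback>
    {
    public:
        void membership_callback(mi::Uint32 host_id, bool joined) override
        {
            // React to hosts joining or leaving the network.
        }
    };

    // Hypothetical sketch: configure UDP multicast networking before startup.
    void configure_udp_multicast(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::INetwork_configuration> network(
            neuray->get_api_component<mi::neuraylib::INetwork_configuration>());

        network->set_mode(mi::neuraylib::INetwork_configuration::MODE_UDP);
        network->set_multicast_address("224.1.3.2", 10000);   // group address and port
        network->set_interface_address("192.168.1.10");       // local interface to use

        mi::base::Handle<My_host_callback> callback(new My_host_callback());
        network->register_host_callback(callback.get());
    }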
TCP/IP Networking:
Because multicasting with high bandwidth is not supported on all networks, it is also possible to use a more conventional scheme based on TCP/IP networking, which is supported on virtually all networks. In that case, an address and port to listen on can be configured. A host list is mandatory if the discovery mechanism is not used. Hosts can still be added to and removed from the host list at any time using the DiCE API, provided that the necessary redundancy level can be maintained (see below).
TCP/IP networking can be coupled with a host discovery mechanism, in which case an additional address needs to be given. If it is a multicast address, multicast is used only to discover other hosts dynamically. If it is a unicast address, the host with that address acts as the master during the discovery phase. In both cases, the actual data transmission is done over TCP/IP. Because this mode requires only low-bandwidth multicasting, it is supported by most networks and can be used to simplify the configuration even where high-bandwidth multicast is not supported. Again, a callback object can be given by the application to keep track of joining and leaving hosts.
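A TCP/IP setup with an explicit host list, and optionally with discovery, might be sketched as follows; again, all enum values and method names are placeholders for the actual network configuration calls.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical sketch: configure TCP/IP networking with a host list.
    void configure_tcp(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::INetwork_configuration> network(
            neuray->get_api_component<mi::neuraylib::INetwork_configuration>());

        network->set_mode(mi::neuraylib::INetwork_configuration::MODE_TCP);
        network->set_cluster_address("192.168.1.10:10000");   // address and port to listen on

        // Without discovery the host list is mandatory.
        network->add_configured_host("192.168.1.11:10000");
        network->add_configured_host("192.168.1.12:10000");

        // Alternative: TCP with discovery. A multicast discovery address finds
        // hosts dynamically; a unicast address names the discovery master.
        // network->set_mode(mi::neuraylib::INetwork_configuration::MODE_TCP_WITH_DISCOVERY);
        // network->set_discovery_address("224.1.3.3:11000");
    }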
Failure Recovery:
A redundancy level can be configured, up to a certain maximum. The redundancy level determines how many copies of each database object are kept in the network at a minimum. The DiCE database guarantees that the system continues working without data loss even when hosts fail, provided that the following preconditions are met: the number of hosts failing at the same time must be less than the configured redundancy level, and at least one host must survive. After host failures or the administrative removal of hosts, the database will also reestablish the redundancy level if enough hosts survive.
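Setting the redundancy level might look like this; the method name is a placeholder for the corresponding configuration call.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical sketch: keep at least two copies of each database object.
    void configure_redundancy(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::INetwork_configuration> network(
            neuray->get_api_component<mi::neuraylib::INetwork_configuration>());

        // With redundancy level 2 the cluster tolerates the loss of one host at
        // a time without data loss, as long as at least one host survives.
        network->set_redundancy_level(2);
    }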
Dynamic Scheduling Configuration Changes:
A DiCE instance in a multi-hosted system can dynamically be configured to stop delegating jobs to other hosts, to stop accepting job delegations from other hosts, or to exclude the local host from job execution completely. This can be used to adapt the load on individual hosts to the current usage scenario.
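At runtime such a change might be expressed roughly as below; the two method names are placeholders for the corresponding scheduling configuration calls.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical sketch: take load off the local host without leaving the cluster.
    void reduce_local_load(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::IScheduling_configuration> scheduling(
            neuray->get_api_component<mi::neuraylib::IScheduling_configuration>());

        scheduling->set_accepting_delegations(false);   // stop accepting jobs from other hosts
        scheduling->set_delegating_jobs(true);          // keep delegating own jobs to the cluster
    }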
Administrative HTTP Server:
DiCE has a built-in administrative HTTP server which can be started. This server is not identical to the HTTP server framework which can be used to serve requests from customers; in particular, the administrative HTTP server does not allow the execution of C++ code. It is meant for monitoring the system at runtime. You can configure whether this server is started (by default it is not) and on which port and interface it listens. The administrative HTTP server allows you to inspect aspects of the DiCE database and is thus useful for debugging integrations. It would usually not be enabled in customer builds.
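Enabling the administrative HTTP server might be sketched like this; the component and method names are assumptions made for illustration.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    // Hypothetical sketch: start the administrative HTTP server on the loopback
    // interface only.
    void enable_admin_server(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::IGeneral_configuration> general(
            neuray->get_api_component<mi::neuraylib::IGeneral_configuration>());

        // Disabled by default; listen on 127.0.0.1, port 8080 (placeholder call).
        general->set_admin_http_address("127.0.0.1:8080");
    }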
Logging in DiCE is done through an abstract C++ interface class with one member function, which DiCE calls whenever some part of the system needs to emit a log message. You can provide a log object, that is, an implementation of this abstract interface, and register it with the DiCE API. Only one log object can be registered with a DiCE process at any time.
In a multi-hosted system, all log messages are sent to an elected logging host, which passes the log messages of all hosts to its registered log object. You can influence the election process to favor a certain host.
Each log message comes with an associated log level and a log category, which you can use to decide which messages to report to the user. The member function of the log object registered with the DiCE API can be called at any time by any thread, including application threads that initiated operations in DiCE. The log object must therefore be implemented to handle this concurrency properly.
You can also configure a log verbosity level; DiCE then pre-filters the log messages to avoid the processor load associated with generating messages that are never presented to the user.
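A minimal sketch of a thread-safe log object, its registration, and the verbosity pre-filter is given below. The logger interface, the signature of its message() function, and the configuration calls are assumptions made for illustration; the actual DiCE logging API may differ.

    #include <mi/dice.h>   // assumed umbrella header of the DiCE API

    #include <cstdio>
    #include <mutex>

    // Hypothetical log object. Its message() function may be called concurrently
    // from DiCE worker threads and from application threads, so output is
    // serialized with a mutex. The signature is an assumption.
    class My_logger : public mi::base::Interface_implement<mi::base::ILogger>
    {
    public:
        void message(mi::base::Message_severity level, const char* category,
                     const char* text) override
        {
            std::lock_guard<std::mutex> guard(m_mutex);
            std::fprintf(stderr, "[%d] %s: %s\n",
                         static_cast<int>(level), category, text);
        }
    private:
        std::mutex m_mutex;
    };

    // Hypothetical sketch: register the single log object of this process and
    // pre-filter messages below warning severity.
    void configure_logging(mi::neuraylib::INeuray* neuray)
    {
        mi::base::Handle<mi::neuraylib::ILogging_configuration> logging(
            neuray->get_api_component<mi::neuraylib::ILogging_configuration>());

        mi::base::Handle<My_logger> logger(new My_logger());
        logging->set_receiving_logger(logger.get());

        // Only warnings and errors are generated at all (placeholder name).
        logging->set_log_level(mi::base::MESSAGE_SEVERITY_WARNING);
    }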