Abstract
This document is intended to guide a new ODP implementation developer. Further details about ODP may be found at the ODP homepage.
1. Introduction and Overview
ODP consists of three distinct components:
- An abstract API specification. This is the application writer's view of ODP and defines the syntax and semantics of all ODP routines at a functional level.
- An implementation of the ODP API for a specific target platform, i.e., a mapping of the ODP APIs onto that platform. This is the focus of this document.
- A validation test suite. This is an independent set of routines that, when run against an ODP implementation, verifies that it correctly implements all of the defined ODP APIs at a functional level. The test suite is used by implementers to self-certify their ODP implementation as well as by third parties to verify an implementation's claim to be ODP API compliant.
The structure of these components, especially the API specification, is more fully defined in the ODP User's Guide.
1.1. Organization of this Document
This document is designed to serve two purposes. Its larger purpose is to provide guidance and practical advice for those wishing to implement ODP on their platform. To help with this, as well as to provide deeper insight into how to think about ODP from an implementer's standpoint, this document also discusses in some depth the design and organization of a specific ODP implementation: the odp-linux reference implementation distributed as part of the main ODP git repository. By grounding theory in practice and discussing a particular example implementation, it is hoped that this will provide insight into the trade-offs implementers should consider when deciding how best to implement ODP on their own platforms.
The section The Include Structure discusses the layout of the ODP include tree from an implementer's perspective. Although implementers have wide latitude in how they organize their ODP implementations, it is recommended that this layout be observed by other implementations. Doing so both simplifies code sharing with the odp-linux reference implementation and also ensures ease of upgrading to future ODP API levels, as this is the basic layout that will be observed by future revisions of the API. This layout also facilitates shared library distribution.
The section The Validation Suite then discusses how validation tests are organized and run to provide ODP API conformance testing for ODP implementations. This is something ODP implementations need to consider from the outset. ODP uses a self-certifying model for API compliance; however, this self-certification is performed by the ODP validation test suite (possibly augmented with a vendor's own extensions to this suite). The important point is that because the validation suite is itself free open source code, any potential user of a given ODP implementation is free to perform their own validation of a vendor's ODP implementation using this suite.
Following this basic material, each ODP API area is then reviewed and its implementation considerations are discussed, illustrating these considerations with discussion of how these are done in odp-linux.
2. The Include Structure
The implementer's view of the include source tree allows the common API definitions and documentation to be reused by all the platforms defined in the tree, but leaves the actual definitions to be provided by the specific platform.
./
├── include/
│   ├── odp/
│   │   └── api/
│   │       └── spec/
│   │           └── The Public API specification and its documentation. (1)
│   │
│   └── odp_api.h   This file should be the only file included by any ODP
│                   application. (4)
└── platform/
    └── <implementation name>/
        └── include/
            ├── Internal header files seen only by the implementation.
            └── odp/
                └── api/ (2)
                    ├── In-line function definitions of the public API for this
                    │   platform seen by the application.
                    │
                    └── plat/ (3)
                        └── Platform specific types, enums, etc. as seen by the
                            application but requiring overriding by the
                            implementation.
(1) The specification, defining the ODP application programming interface (API), is held in 'include/odp/api/spec/'. The ODP API is defined by a set of '.h' files including doxygen documentation.
(2) Each public specification file is included by a counterpart in 'platform/<implementation name>/include/odp/api'. The include of the specification API is AFTER the platform specific definitions to allow the platform to provide definitions that match the underlying hardware.
(3) The implementation code may include files from 'platform/<implementation name>/include/odp/api/plat'.
(4) Applications in turn include the 'include/odp_api.h' file, which includes the 'platform/<implementation name>/include/odp/api' files to provide a complete definition of the API.
After ODP installation (make install), the structure becomes as follows:
./
└── include/
    ├── odp/
    │   └── api/            API In-line for this platform.
    │       ├── plat/       API Platform specific types.
    │       └── spec/       The public API specification.
    └── odp_api.h
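To make callout (4) concrete, the following is a minimal sketch of an application built against an installed ODP implementation. Only the generic initialization and termination sequence is shown; everything an application would actually do with the ODP APIs is omitted.

#include <odp_api.h> /* the single public ODP header (callout 4) */

int main(void)
{
    odp_instance_t instance;

    /* Initialize ODP globally, then for this initial thread */
    if (odp_init_global(&instance, NULL, NULL))
        return -1;
    if (odp_init_local(instance, ODP_THREAD_CONTROL)) {
        odp_term_global(instance);
        return -1;
    }

    /* ... use the ODP APIs here ... */

    /* Tear down in the reverse order */
    odp_term_local();
    odp_term_global(instance);
    return 0;
}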
3. ODP library naming recommendations
The ODP project supports two implementations, odp-linux and odp-dpdk. The names of the libraries are libodp-linux and libodp-dpdk respectively. It is recommended that other implementations follow the same scheme (odp-<implementation name>) to make the representation of the ODP implementations uniform in a distribution.
4. The Validation Suite
ODP provides a comprehensive set of API validation tests that are intended to be used by implementers during development and by application developers to verify that a particular implementation meets their requirements.
The list of these tests is expected to grow as ODP grows.
Note that validation tests are not typically written by implementers; however, their structure and operation need to be understood so that implementations can properly run them to validate that they conform to the ODP API specification. The only exception to this is platform-specific tests, as described in Defining platform specific tests. These may be written by platforms to leverage the CUnit framework for their own internal test needs or to extend the platform-agnostic tests with platform-specific logic.
The list of test executables is run by the automake test harness when running make check. Therefore, as required by this harness, each executable should return 0 on success (tests passed), 77 if inconclusive (skipped), or any other value on failure. The automake functionality shows a status line (PASS/FAIL…) for each test executable run.
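As an illustration of this convention, a hypothetical stand-alone test executable could report a skipped run to the harness as sketched below; check_prerequisite() is a made-up placeholder for whatever probing the test needs before it can run.

#include <stdio.h>

/* Hypothetical helper: returns non-zero when the tested feature is usable */
static int check_prerequisite(void)
{
    return 0; /* pretend the prerequisite is missing */
}

int main(void)
{
    if (!check_prerequisite()) {
        printf("prerequisite missing, skipping\n");
        return 77; /* automake: inconclusive/skipped */
    }

    /* ... run the actual tests here ... */

    return 0; /* automake: success, all tests passed */
}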
It is expected that ODP developers will need to run tests as early as possible in the development cycle, before all APIs have been implemented. In addition, although there are no APIs that are formally listed as optional, it is also expected that there may be cases where a subset of APIs remains unimplemented on a particular platform. Moreover, some platforms may require specific initialization/termination code before/after running the standard tests.
To accommodate these platform disparities, the ODP validation suite has been divided into two distinct areas:
- The platform agnostic area,
- A platform dependent area (one per platform).
4.1. Platform agnostic
This grouping defines tests that are expected to be executable and succeed on any platform, though possibly with very different performance, depending on the underlying platform. They are written in plain C code, and may only use functions defined in the standard libC (C11) library (besides the ODP functions being tested, of course). A free C11 draft specification can be found at the open-std.org web site. No other languages (like scripting) are allowed, as their usage would make assumptions about the platform's capabilities.
This area is located at: test/common_plat/
In this directory, tests are grouped by category:
- validation: groups of tests defining ODP compliance
- performance: tests checking system responsiveness
- miscellaneous
Each ODP interface contains modules, where each module groups the set of ODP functions related to the same "topic" for the given interface. Examples of modules for the application interface include "classification" (API functions dealing with ingress packet classification), "time" (functions dealing with time, excluding timers, which have their own module), "timer", … The complete module list can be seen at: ODP Modules
Within the platform agnostic area, the validation tests for a given interface are also grouped by modules, matching the ODP interface modules: for instance, test/common_plat/validation/api mainly contains a list of directories matching each module name (as defined by the doxygen @defgroup or @addtogroup statement present in each API .h file).

Within each of these directories, a library (called libtest<module>.la) and its associated .h file (called <module>.h) define all the test functions for this module, as well as a few other functions to initialize, terminate, and group the tests. An executable called <module>_main* is also built. It is permissible to generate more than one executable to cover the functionality in the test library for the module. These executable(s) shall call all the tests for this module. See Module test and naming convention for more details.
It is important to be aware that the tests defined for a given module (defined in test/common_plat/validation/api/<module>) are focused on testing the ODP functions belonging to this module, but are not limited to using only this module's ODP functions: many modules need some interaction with some other module to be tested. The obvious illustration of this is the "init" module, whose functions are required by all tests of all other modules (as ODP needs to be initialized to test anything else).
There is a Makefile.am located at the top of the platform agnostic area. Its role is limited to the construction of the different test libraries and the <module>_main* executables. No tests are run from this area when make check is performed.
4.1.1. CUnit
Within a given test executable CUnit is used to run the different tests. The usage of CUnit implies the following structure:
- Tests are simple C functions.
- Tests are grouped in arrays called test suites. Each test suite can be associated with suite initialization/termination functions, called by CUnit before and after the whole suite is run.
- An array of test suites (and associated init/term functions) defines the test registry run by the test executable.
Moreover, two extra functions can be used to initialize/terminate the test executable (these are not part of CUnit). A test executable returns success (0) if every test of each suite succeeds.
More details about CUnit can be found in the CUnit User's Guide.
4.1.2. Module test and naming convention
- Tests, i.e. C functions which are used in CUnit test suites, are named: <Module>_test_*, where the suffix identifies the test.
- Test arrays, i.e. arrays of odp_testinfo_t listing the test functions belonging to a suite, are called: <Module>_suite[_*], where the possible suffix can be used if many suites are declared.
- CUnit suite init and termination functions are called: <Module>_suite[_*]_init() and <Module>_suite[_*]_term() respectively, where the possible extra middle pattern can be used if many suites are declared.
- Suite arrays, i.e. arrays of odp_suiteinfo_t used in executables (CUnit registry), are called: <Module>_suites[_*], where the possible suffix identifies the executable using it, if any.
- Main executable function(s) are called: <Module>_main[_*], where the possible suffix identifies the executable using it, if any.
- Init/term functions for the whole executable are called: <Module>_init and <Module>_term.
All the above symbols are part of the generated libtest<Module>.la libraries. The generated main executable(s) (named <module>_main[_*], where the optional suffix is used to distinguish the executables belonging to the same module, if many) simply call(s) the related <Module>_main[_*] from the library.
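As an illustration (not taken from the ODP source tree), the sketch below shows how a hypothetical module called "bar" could follow this convention. It assumes the common test framework header odp_cunit_common.h, which provides the odp_testinfo_t/odp_suiteinfo_t types, the odp_cunit_register() and odp_cunit_run() helpers also used in the examples later in this document, and the CUnit CU_ASSERT macro; the test body itself is a placeholder.

#include <odp_api.h>
#include "odp_cunit_common.h"

/* <Module>_test_*: an individual CUnit test function */
static void bar_test_basic(void)
{
    CU_ASSERT(1 + 1 == 2);
}

/* <Module>_suite_init()/_term(): suite initialization/termination */
static int bar_suite_init(void)
{
    return 0;
}

static int bar_suite_term(void)
{
    return 0;
}

/* <Module>_suite[]: the tests belonging to one suite */
odp_testinfo_t bar_suite[] = {
    ODP_TEST_INFO(bar_test_basic),
    ODP_TEST_INFO_NULL
};

/* <Module>_suites[]: the CUnit registry used by the executable */
odp_suiteinfo_t bar_suites[] = {
    {"Bar basic", bar_suite_init, bar_suite_term, bar_suite},
    ODP_SUITE_INFO_NULL
};

/* <Module>_main(): called by the generated bar_main executable */
int bar_main(void)
{
    if (odp_cunit_register(bar_suites))
        return -1;

    return odp_cunit_run();
}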
4.2. Platform specific
These tests are located under platform/<platform_name>/. There is one such area for each platform implementing ODP. This location will be referred to as <PLATFORM_SPECIFIC> in the rest of this document.
4.2.1. The normal case
If the considered platform needs no platform specific tests, this directory simply needs to contain a single Makefile.am listing each of the executables (named <module>_main) built from the platform agnostic area. The executables are listed in the automake TESTS variable and will therefore be run on "make check".
For the linux-generic platform, most tested modules fall into this category: currently, the test/linux-generic/test/Makefile.am looks roughly as follows:
include $(top_srcdir)/test/Makefile.inc
TESTS_ENVIRONMENT += TEST_DIR=${top_builddir}/test/common_plat/validation
ALL_API_VALIDATION = ${top_builddir}/test/common_plat/validation/api
SUBDIRS =
if test_vald
TESTS = validation/api/pktio/pktio_run.sh \
validation/api/pktio/pktio_run_tap.sh \
$(ALL_API_VALIDATION)/atomic/atomic_main$(EXEEXT) \
$(ALL_API_VALIDATION)/barrier/barrier_main$(EXEEXT) \
$(ALL_API_VALIDATION)/buffer/buffer_main$(EXEEXT) \
$(ALL_API_VALIDATION)/classification/classification_main$(EXEEXT) \
$(ALL_API_VALIDATION)/cpumask/cpumask_main$(EXEEXT) \
$(ALL_API_VALIDATION)/crypto/crypto_main$(EXEEXT) \
$(ALL_API_VALIDATION)/errno/errno_main$(EXEEXT) \
$(ALL_API_VALIDATION)/hash/hash_main$(EXEEXT) \
$(ALL_API_VALIDATION)/init/init_defaults$(EXEEXT) \
$(ALL_API_VALIDATION)/lock/lock_main$(EXEEXT) \
$(ALL_API_VALIDATION)/packet/packet_main$(EXEEXT) \
$(ALL_API_VALIDATION)/pool/pool_main$(EXEEXT) \
$(ALL_API_VALIDATION)/queue/queue_main$(EXEEXT) \
$(ALL_API_VALIDATION)/random/random_main$(EXEEXT) \
$(ALL_API_VALIDATION)/scheduler/scheduler_main$(EXEEXT) \
$(ALL_API_VALIDATION)/std/std_main$(EXEEXT) \
$(ALL_API_VALIDATION)/thread/thread_main$(EXEEXT) \
$(ALL_API_VALIDATION)/time/time_main$(EXEEXT) \
$(ALL_API_VALIDATION)/timer/timer_main$(EXEEXT) \
$(ALL_API_VALIDATION)/traffic_mngr/traffic_mngr_main$(EXEEXT) \
$(ALL_API_VALIDATION)/shmem/shmem_main$(EXEEXT) \
$(ALL_API_VALIDATION)/system/system_main$(EXEEXT) \
With the exception of the pktio module, testing all other modules just involves calling the platform agnostic <module>_main executables (in /test/common_plat/validation/api).
4.2.2. Using other languages
The pktio module, above, is actually tested using a bash script. This script is needed to set up the interfaces used by the tests. The pktio_run script eventually calls the platform agnostic test/common_plat/validation/api/pktio/pktio_main after setting up the interfaces needed by the tests.
Notice that the path to the script, validation/api/pktio/pktio_run.sh, points to a file within the <PLATFORM_SPECIFIC> tree, so it is private to this platform. Any language supported by the tested platform can be used there, as it will not impact other platforms.
The platform "private" executables (such as this script), of course, must also
return one of the return code expected by the automake test harness
(0 for success, 77 for skipped, other values for errors).
4.2.3. Defining test wrappers
The pktio case above actually uses a script as a wrapper around the "standard" (platform independent) test executable. Wrappers can also be defined by using the LOG_COMPILER variable of automake. This is applicable in cases where the same wrapper should be used for more than one test, as the test name is passed as a parameter to the wrapper. A wrapper is just a program expecting one argument: the test name.
Automake also supports the use of different wrappers based on the executable filename suffix. See Parallel-Test-Harness for more information.
To add a wrapper around the executed test, just add the following LOG_COMPILER definition line in the <PLATFORM_SPECIFIC>/Makefile.am:
...
if test_vald
LOG_COMPILER = $(top_srcdir)/platform/linux-generic/test/wrapper-script
TESTS = pktio/pktio_run \
...
Here follows a dummy example of what wrapper-script could be:
#!/bin/bash
# The parameter, $1, is the name of the test executable to run
echo "WRAPPER!!!"
echo "running $1!"
# run the test:
$1
# remember the test result:
res=$?
echo "Do something to clean up the mess here :-)"
# return the test result.
exit $res
Note how the above script stores the return code of the test executable to return it properly to the automake test harness.
4.2.4. Defining platform specific tests
Sometimes, it may be necessary to call platform specific system calls to check some functionality: for instance, testing odp_cpumask_* could involve checking the underlying system CPU mask. On Linux, such a test would require using the CPU_ISSET macro, which is Linux specific. Such a test would be written in <PLATFORM_SPECIFIC>/<test-group>/<interface>/cpumask/… The contents of this directory would be very similar to the contents of the platform agnostic side cpumask tests (including a Makefile.am, …), but the platform specific tests would be written there.
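As a sketch of what such a Linux specific test body could look like (assuming the public odp_cpumask_default_worker(), odp_cpumask_first() and odp_cpumask_next() APIs, the CUnit CU_ASSERT macros, and the common odp_cunit_common.h framework header), a platform side cpumask test might compare the default ODP worker mask against the CPUs the Linux scheduler actually allows:

#define _GNU_SOURCE /* for sched_getaffinity() and CPU_ISSET() */
#include <sched.h>
#include <odp_api.h>
#include "odp_cunit_common.h"

/* Linux specific check: every CPU the implementation places in the default
 * worker mask must also be present in this process' Linux affinity mask. */
static void cpumask_test_linux_affinity(void)
{
    odp_cpumask_t mask;
    cpu_set_t linux_set;
    int cpu;

    CU_ASSERT_FATAL(sched_getaffinity(0, sizeof(linux_set), &linux_set) == 0);

    odp_cpumask_default_worker(&mask, 0); /* 0: request all available CPUs */

    for (cpu = odp_cpumask_first(&mask); cpu >= 0;
         cpu = odp_cpumask_next(&mask, cpu)) {
        if (cpu < CPU_SETSIZE)
            CU_ASSERT(CPU_ISSET(cpu, &linux_set));
    }
}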
<PLATFORM_SPECIFIC>/Makefile.am would then trigger the building of the platform specific tests (by listing their module name in SUBDIRS and therefore calling the appropriate Makefile.am) and then it would call both the platform agnostic executable(s) and the platform specific test executable.
The shm module of the linux-generic ODP API does have a validation test written this way. You can see it at: test/linux-generic/validation/api/shmem
4.2.5. Marking validation tests as inactive
The general policy is that a full run of the validation suite (a make check) must pass at all times. However, a particular platform may have one or more test cases that are known to be unimplemented either during development or permanently, so to avoid these test cases being reported as failures it's useful to be able to skip them. This can be achieved by creating a new test executable (still on the platform side), giving the platform specific initialization code the opportunity to modify the registered tests in order to mark unwanted tests as inactive while leaving the remaining tests active. It's important that the unwanted tests are still registered with the test framework to allow the fact that they're not being tested to be recorded.
The odp_cunit_update() function is intended for this purpose: it is used to modify the properties of previously registered tests, for example to mark them as inactive. Inactive tests are registered with the test framework but aren't executed and will be recorded as inactive in test reports.
In test/common_plat/validation/api/foo/foo.c, define all validation tests for the 'foo' module:
odp_testinfo_t foo_tests[] = {
ODP_TEST_INFO(foo_test_a),
ODP_TEST_INFO(foo_test_b),
ODP_TEST_INFO_NULL
};
odp_suiteinfo_t foo_suites[] = {
{"Foo", foo_suite_init, foo_suite_term, foo_tests},
ODP_SUITE_INFO_NULL
};
In <platform>/validation/api/foo/foo_main.c, register all the tests defined in the foo module, then mark a single specific test case as inactive:
static odp_testinfo_t foo_tests_updates[] = {
ODP_TEST_INFO_INACTIVE(foo_test_b),
ODP_TEST_INFO_NULL
};
static odp_suiteinfo_t foo_suites_updates[] = {
{"Foo", foo_suite_init, foo_suite_term, foo_tests_updates},
ODP_SUITE_INFO_NULL
};
int foo_main(void)
{
int ret = odp_cunit_register(foo_suites);
if (ret == 0)
ret = odp_cunit_update(foo_suites_updates);
if (ret == 0)
ret = odp_cunit_run();
return ret;
}
So foo_test_a will be executed and foo_test_b is inactive.
It’s expected that early in the development cycle of a new implementation the inactive list will be quite long, but it should shrink over time as more parts of the API are implemented.
4.2.6. Conditional Tests
Some tests may require specific conditions to make sense: for instance, on pktio, checking that sending a packet larger than the MTU is rejected only makes sense if packets can indeed, on that ODP implementation, exceed the MTU. A test can be marked conditional as follows:
odp_testinfo_t foo_tests[] = {
...
ODP_TEST_INFO_CONDITIONAL(foo_test_x, foo_check_x),
...
ODP_TEST_INFO_NULL
};
odp_suiteinfo_t foo_suites[] = {
{"Foo", foo_suite_init, foo_suite_term, foo_tests},
ODP_SUITE_INFO_NULL
};
foo_test_x is the usual test function, while foo_check_x is the test precondition, i.e. a function returning a Boolean (int). It is called before the test suite is started. If it returns true, the test (foo_test_x) is run. If the precondition function (foo_check_x above) returns false, the test is not relevant (or impossible to perform) and it will be skipped.
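A precondition function is ordinary C code returning an int used as a Boolean. As a minimal sketch (the capability flag below is hypothetical and would typically be set by the suite initialization function after querying the implementation):

/* Hypothetical flag, filled in by foo_suite_init() */
static int foo_feature_x_supported;

/* Precondition for foo_test_x: return non-zero (true) to run the test,
 * zero (false) to have the framework skip it */
static int foo_check_x(void)
{
    return foo_feature_x_supported;
}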
Note
Conditional tests can be marked as inactive, keeping the precondition function. Both the test and the precondition function will be skipped, but re-activating the test is then just a matter of changing back the macro from ODP_TEST_INFO_INACTIVE to ODP_TEST_INFO_CONDITIONAL:
...
/* active conditional test */
ODP_TEST_INFO_CONDITIONAL(foo_test_x, foo_check_x),
/* inactive conditional test */
ODP_TEST_INFO_INACTIVE(foo_test_y, foo_check_y),
...
4.2.7. Helper usage
The tests (both platform agnostic and platform dependent tests) make use of a set of functions defined in a helper library. The helper library tries to abstract and regroup common actions that applications may perform but which are not part of the ODP API (i.e. mostly OS system calls). Using these functions is recommended, as running the tests on a different OS could (hopefully) be as simple as changing the OS related helper lib.
In the Linux helper, two functions are given to create and join ODP threads:
odph_thread_create()
odph_thread_join()
These two functions abstract what an ODP thread really is and their usage is recommended as they would be implemented in any other OS's helper lib.
Five older functions exist to tie an ODP thread to a specific implementation:
odph_linux_pthread_create()
odph_linux_pthread_join()
odph_linux_process_fork_n()
odph_linux_process_fork()
odph_linux_process_wait_n()
These functions should not be used within ODP examples or tests. Their use in other applications is not recommended.
5. ODP Abstract Types and Implementation Typedefs
ODP APIs are defined to be abstract and operate on abstract types. For example, ODP APIs that perform packet manipulation manipulate objects of type odp_packet_t. Queues are represented by objects of type odp_queue_t, etc.

Since the C compiler cannot compile code that has unresolved abstract types, the first task of each ODP implementation is to decide how it wishes to represent each of these abstract types and to supply appropriate typedef definitions for them to make ODP applications compilable on this platform.
It is recommended that, however a platform wishes to represent ODP abstract types, it do so in a strongly typed manner. Using strong types means that an application that tries to pass a variable of type odp_packet_t to an API that expects an argument of type odp_queue_t, for example, will result in a compilation error rather than some difficult-to-debug runtime failure.
The odp-linux reference implementation defines all ODP abstract types strongly using a set of utility macros contained in platform/linux-generic/include/odp/api/plat/strong_types.h. These macros can be used or modified as desired by other implementations to achieve strong typing of their typedefs.
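The actual macros in strong_types.h are not reproduced here, but the idea behind strong typing can be sketched as follows; every type name below is illustrative only, not part of any ODP header.

#include <stdint.h>

/* Weakly typed handles (NOT recommended): both are plain integers, so the
 * compiler cannot catch a packet handle being passed where a queue handle
 * is expected. */
typedef uint32_t weak_packet_t;
typedef uint32_t weak_queue_t;

/* Strongly typed pointer handles: distinct, intentionally incomplete struct
 * tags. Only the pointer types are used, so no layout is exposed to the
 * application, yet mixing the two types is a compile-time error. */
typedef struct my_packet_s *my_packet_t;
typedef struct my_queue_s  *my_queue_t;

/* Strongly typed index handles: wrapping the index in a one-member struct
 * keeps the cost of an integer while still giving each handle its own type. */
typedef struct { uint32_t index; } my_buffer_t;
typedef struct { uint32_t index; } my_pool_t;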
5.1. Typedef approaches
ODP abstract types serve two distinct purposes that each implementation must consider. First, they shield applications from implementation internals, thus facilitating ODP application portability. Equally important, however, is that implementations choose typedefs and representations that permit the implementation to realize ODP APIs efficiently. This typically means that the handles defined by typedefs are either a pointer to an implementation-defined struct or else an index into an implementation-defined resource table. The two LNG-provided ODP reference implementations illustrate both of these approaches.

The odp-dpdk implementation follows the former approach (pointers) as this offers the highest performance. For example, in odp-dpdk an odp_packet_t is a pointer to an rte_mbuf struct, which is how DPDK represents packets. The odp-linux implementation, by contrast, uses indices, as this permits more robust validation support while still being highly efficient. In general, software-based implementations will typically favor pointers while hardware-based implementations will typically favor indices.
5.2. ABI Considerations
An Application Binary Interface (ABI) is a specification of the representation of an API that guarantees that applications can move between implementations of an API without recompilation. ABIs thus go beyond the basic source-code portability guarantees provided by APIs to permit binary portability as well.
It is important to note that ODP neither defines nor prohibits the specification of ABIs. This is because ODP itself is an Abstract API Specification. As noted earlier, abstract APIs cannot be compiled in the absence of completion by an implementation that instantiates them, so the question of ABIs is really a question of representation agreement between multiple ODP implementations. If two or more ODP implementations agree on things like typedefs, endianness, alignments, etc., then they are defining an ABI which would permit ODP applications compiled to that common set of instantiations to interoperate at a binary as well as source level.
5.2.1. Traditional ABI
ABIs can be defined at two levels. The simplest ABI is within a specific Instruction Set Architecture (ISA). So, for example, an ABI might be defined among ODP implementations for the AArch64 or x86 architectures. This traditional approach is shown here:
In the traditional approach, multiple target platforms agree on a common set of typedefs, etc., so that the resulting output from compilation is directly executable on any platform that subscribes to that ABI. Adding a new platform in this approach simply requires that platform to accept the existing ABI specification. Note that since the output of compilation in a traditional ABI is an ISA-specific binary, applications cannot offer binary compatibility across platforms that use different ISAs.
5.2.2. Bitcode based ABI
An ABI can also be defined at a higher level by moving to a more sophisticated tool chain (such as is possible using LLVM) that implements a split compilation model. In this model, the output from a compiler is not directly executable. Rather, it is a standardized intermediate representation called bitcode that must be further processed to result in an executable image as shown here:
The key to this model is that the platform linking and optimization that is needed to create a platform executable is a system rather than a developer responsibility. The developer’s output is a universal bitcode binary. From here, the library system creates a series of managed binaries that result from performing link-time optimization against a set of platform definitions. When a universal application is to run on a specific target platform, the library system selects the appropriate managed binary for that target platform and loads and runs it.
Adding a new platform in this approach involves adding the definition for that platform to the library system so that a managed binary for it can be created and deployed as needed. This occurs without developer involvement since the bitcode format that is input to this backend process is independent of the specific target platform. Note also that since bitcode is not tied to any ISA, applications using bitcode ABIs are binary portable between platforms that use different ISAs. This occurs without loss of efficiency because the process of creating a managed binary is itself a secondary compilation and optimization step. The difference is that performing this step is a system rather than a developer responsibility.
6. Configuration
Each ODP implementation will choose various sizes, limits, and similar internal parameters that are well matched to its design and platform capabilities. However, it is often useful to expose at least some of these parameters and allow users to select specific values to use either at compile time or runtime. This section discusses options for doing this, using the configuration options offered in the odp-linux reference implementation as an example.
6.1. Static Configuration
Static configuration requires the ODP implementation to be recompiled. The reasons for choosing static configuration vary but can involve both design simplicity (e.g., arrays can be statically configured) and performance considerations (e.g., including optional debug code). Two approaches to static configuration are #define statements and the use of autotools.
6.1.1. #define Statements
Certain implementation limits can best be represented by #define statements that are set at compile time. Examples of this can be seen in the odp-linux reference implementation in the file platform/linux-generic/include/odp_config_internal.h.
/*
* Maximum number of supported CPU identifiers. The maximum supported CPU ID is
* CONFIG_NUM_CPU_IDS - 1.
*/
#define CONFIG_NUM_CPU_IDS 256
/*
* Maximum number of pools
*/
#define CONFIG_POOLS 64
Here two fundamental limits, the number of CPUs supported and the maximum number of pools that can be created via the odp_pool_create() API, are defined. By using #define, the implementation can configure supporting structures (bit strings and arrays) statically, and can also allow static compile-time validation/consistency checks to be done using facilities like ODP_STATIC_ASSERT(). This results in more efficient code since these limits need not be computed at runtime.
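For example, an illustrative (made-up) consistency check could verify at compile time that a bitmap sized in 64-bit words can track every pool permitted by CONFIG_POOLS; only ODP_STATIC_ASSERT() itself is taken from the ODP API, the rest is hypothetical:

#include <stdint.h>
#include <odp_api.h> /* provides ODP_STATIC_ASSERT() */

#define CONFIG_POOLS 64

/* One bit per pool, stored in 64-bit words */
#define POOL_BITMAP_WORDS ((CONFIG_POOLS + 63) / 64)

typedef struct {
    uint64_t pool_used[POOL_BITMAP_WORDS];
} pool_table_t;

/* Fail the build if the bitmap cannot represent every configured pool */
ODP_STATIC_ASSERT(POOL_BITMAP_WORDS * 64 >= CONFIG_POOLS,
                  "pool bitmap too small for CONFIG_POOLS");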
Users are able to change these limits (potentially within documented absolute bounds) by changing the relevant source file and recompiling that ODP implementation.
6.1.2. Use of autotools configure
The ODP reference implementation, like many open source projects, makes use of autotools to simplify project configuration and support for various build targets. These same tools permit compile-time configuration options to be specified without requiring changes to source files.
In addition to the "standard" configure options for specifying prefixes, target install paths, etc., the odp-linux reference implementation supports a large number of static configuration options that control how ODP is built. Use the ./configure --help command for a complete list. Here we discuss just a few for illustrative purposes:
--enable-debug
The ODP API specification simply says that "results are undefined" when invalid parameters are passed to ODP APIs. This is done for performance reasons so that implementations don't need to insert extraneous parameter checking that would impact runtime performance in fast-path operations. While this is a reasonable trade off, it can complicate application debugging. To address this, the ODP implementation makes use of the _ODP_ASSERT() macro that by default disappears at compile time unless the --enable-debug configuration option was specified. Running with a debug build of ODP trades off performance for improved parameter/bounds checking to make application debugging easier.

--enable-user-guides
By default, building ODP only builds the code. When this option is specified, the supporting user documentation (including this file) is also built.

--enable-abi-compat
Enable an ABI compatible ODP build, which permits application binary portability across different ODP implementations targeting the same Instruction Set Architecture (ISA). While this is useful in cloud/host environments, it does involve some performance cost to provide binary compatibility. For embedded use of ODP, disabling ABI compatibility means tighter code can be generated by inlining more of the ODP implementation into the calling application code. When built without ABI compatibility, moving an application to another ODP implementation requires that the application be recompiled. For most embedded uses this is a reasonable trade off in exchange for better application performance on a specific target platform.
6.2. Dynamic Configuration
While compile-time limits have the advantage of simplicity, they are also not very flexible since they require an ODP implementation to be regenerated to change them. The alternative is for implementations to support dynamic configuration that enables ODP to change implementation behavior without source changes or recompilation.
The options for dynamic configuration include: command line parameters, environment variables, and configuration files.
6.2.1. Command line parameters
Applications that accept a command line passed to their main() function can use this to tailor how they use ODP. This may involve self-imposed limits driven by the application, or command line arguments may be passed on to ODP initialization via the odp_init_global() API. The syntax of that API is:
int odp_init_global(odp_instance_t *instance,
const odp_init_t *params,
const odp_platform_init_t *platform_params);
and the odp_init_t struct is used to pass platform-independent parameters that control ODP behavior, while the odp_platform_init_t struct is used to pass platform-specific parameters. The odp-linux reference platform does not make use of these platform-specific parameters; however, the odp-dpdk reference implementation uses them to let applications pass DPDK initialization parameters through to the underlying DPDK instance.
ODP itself uses the odp_init_t parameters to allow applications to supply their own logging and abort functions. These routines are called on behalf of the ODP implementation, thus better allowing ODP to interoperate with application-defined logging and error handling facilities.
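The sketch below shows how an application might install its own logging and abort handlers through these parameters. It assumes the log_fn/abort_fn members of odp_init_t and the odp_log_func_t callback signature (a log level followed by printf-style arguments) as defined by the ODP init API, and uses odp_init_param_init() to start from the implementation's defaults; on older ODP releases the parameter struct may need to be initialized differently.

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <odp_api.h>

/* Application-defined logger installed via odp_init_t::log_fn */
static int app_log(odp_log_level_t level, const char *fmt, ...)
{
    va_list args;
    int ret;

    (void)level; /* a real logger would map ODP levels to its own */

    va_start(args, fmt);
    ret = vfprintf(stderr, fmt, args);
    va_end(args);

    return ret;
}

/* Application-defined abort handler installed via odp_init_t::abort_fn */
static void app_abort(void)
{
    fprintf(stderr, "ODP requested abort\n");
    abort();
}

int main(void)
{
    odp_instance_t instance;
    odp_init_t param;

    odp_init_param_init(&param); /* start from implementation defaults */
    param.log_fn = app_log;
    param.abort_fn = app_abort;

    if (odp_init_global(&instance, &param, NULL))
        return -1;
    if (odp_init_local(instance, ODP_THREAD_CONTROL))
        return -1;

    /* ... application code ... */

    odp_term_local();
    odp_term_global(instance);
    return 0;
}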
6.2.2. Environment variables
Linux environment variables set via the shell provide a convenient means of passing dynamic configuration values. Each ODP implementation defines which environment variables it looks for and how they are used. For example, the odp-dpdk implementation uses the variable ODP_PLATFORM_PARAMS as an alternate means of passing DPDK initialization parameters.
Another important environment variable that ODP uses is ODP_CONFIG_FILE, which is used to specify the file path of a configuration override file, as described in the next section.
6.2.3. Configuration files
The libconfig library provides a standard set of APIs and tools for parsing configuration files. ODP uses this to provide a range of dynamic configuration options that users may wish to specify.
ODP uses a base configuration file that contains system-wide defaults and is located in the config/odp-linux-generic.conf file within the ODP distribution. This specifies a range of overridable configuration options that control things like shared memory usage, queue and scheduler limits and tuning parameters, timer processing options, as well as I/O parameters for various pktio classes.
While users of ODP may modify this base file before building it, users can also supply an override configuration file that sets specific values of interest while leaving other parameters set to their defaults as defined by the base configuration file. As noted earlier, the ODP_CONFIG_FILE environment variable is used to point to the override file to be used.
6.3. Summary
There is a place for both static and dynamic configuration in any ODP implementation. This section described some of the most common configuration mechanisms and discussed how the ODP-supplied reference implementations make use of them. Other ODP implementations are free to copy and/or build on these, or use whatever other mechanisms are native to the platforms supported by those ODP implementations.
7. Glossary
- worker thread: A worker is a type of ODP thread. It will usually be isolated from the scheduling of any host operating system and is intended for fast-path processing with a low and predictable latency. Worker threads will not generally receive interrupts and will run to completion.
- control thread: A control thread is a type of ODP thread. It will be isolated from the host operating system housekeeping tasks but will be scheduled by it and may receive interrupts.
- ODP instantiation process: The process calling odp_init_global(), which is probably the first process started when an ODP application is launched. There is one single such process per ODP instantiation.
- thread: The word thread (without any further specification) refers to an ODP thread.
- ODP thread: An ODP thread is a flow of execution that belongs to ODP: any "flow of execution" (i.e. OS process or OS thread) calling odp_init_global() or odp_init_local() becomes an ODP thread. This definition currently limits the number of ODP instances on a given machine to one. In the future, odp_init_global() will return something like an ODP instance reference and odp_init_local() will take such a reference as a parameter, allowing threads to join any running ODP instance. Note that, in a Linux environment, an ODP thread can be either a Linux process or a Linux thread (i.e. a Linux process calling odp_init_local() will be referred to as an ODP thread, not an ODP process).
- event: An event is a notification that can be placed in a queue.
- queue: A communication channel that holds events.