PMAPI(3)                  Library Functions Manual                  PMAPI(3)

NAME

       PMAPI  - introduction to the Performance Metrics Application Program‐
       ming Interface

C SYNOPSIS

       #include <pcp/pmapi.h>
        ... assorted routines ...
       cc ... -lpcp

DESCRIPTION

       Within the framework of the Performance Co-Pilot (PCP), client
       applications are developed using the Performance Metrics Application
       Programming Interface (PMAPI) that defines a procedural interface
       with services suited to the development of applications with a
       particular interest in performance metrics.

       This description presents an overview of the PMAPI and the context in
       which PMAPI applications are run.  The PMAPI is more fully described
       in the Performance Co-Pilot Programmer's Guide, and the manual pages
       for the individual PMAPI routines.

PERFORMANCE METRICS - NAMES AND IDENTIFIERS

       For a description of the Performance Metrics Name Space (PMNS) and
       associated terms and concepts, see PCPIntro(1).

       Not all PMIDs need be represented in the PMNS of every application.
       For example, an application which monitors disk traffic will likely
       use a name space which references only the PMIDs for I/O statistics.
       Applications which use the PMAPI may have independent versions of a
       PMNS, constructed from an initialization file when the application
       starts; see pmLoadASCIINameSpace(3), pmLoadNameSpace(3), and pmns(5).

       Internally (below the PMAPI) the implementation of the Performance
       Metrics Collection System (PMCS) uses only the PMIDs, and a PMNS
       provides an external mapping from a hierarchic taxonomy of names to
       PMIDs that is convenient in the context of a particular system or
       particular use of the PMAPI.  For the applications programmer, the
       routines pmLookupName(3) and pmNameID(3) translate names in a PMNS
       to PMIDs, and vice versa.  The PMNS may be traversed using
       pmGetChildren(3) and pmTraversePMNS(3).  The pmFetchGroup(3)
       functions combine metric name lookup, fetch, and conversion
       operations.
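
       For example, a client may translate a metric name into its PMID as
       follows (a minimal sketch, assuming a PMAPI context has already been
       established and that the metric kernel.all.load exists in the PMNS):

           #include <pcp/pmapi.h>

           char  *names[] = { "kernel.all.load" };
           pmID   pmids[1];
           int    sts;

           /* translate one PMNS name into its PMID */
           if ((sts = pmLookupName(1, names, pmids)) < 0)
               fprintf(stderr, "pmLookupName: %s\n", pmErrStr(sts));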

PMAPI CONTEXT

       An application using the PMAPI may manipulate several concurrent
       contexts, each associated with a source of performance metrics, e.g.
       pmcd(1) on some host, or a set of archive logs of performance metrics
       as created by pmlogger(1).

       Contexts are identified by a ``handle'', a small integer value that
       is returned when the context is created; see pmNewContext(3) and
       pmDupContext(3).  Some PMAPI functions require an explicit ``handle''
       to identify the correct context, but more commonly the PMAPI function
       is executed in the ``current'' context.  The current context may be
       discovered using pmWhichContext(3) and changed using pmUseContext(3).
       If a PMAPI context has not been explicitly established (or the
       previous current context has been closed using pmDestroyContext(3))
       then the current PMAPI context is undefined.

       In addition to the source of the performance metrics, the context
       also includes the instance profile and collection time (both
       described below) which control how much information is returned
       and when the information was collected.
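
       A typical live-monitoring client establishes a context as follows
       (a minimal sketch; the host name and error handling are
       illustrative only):

           #include <pcp/pmapi.h>

           int ctx;

           /* connect to pmcd(1) on the local host; on success the new
            * context also becomes the current context
            */
           if ((ctx = pmNewContext(PM_CONTEXT_HOST, "localhost")) < 0)
               fprintf(stderr, "pmNewContext: %s\n", pmErrStr(ctx));

           /* ... create or use other contexts, then switch back ... */
           pmUseContext(ctx);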

INSTANCE DOMAINS

       When performance metric values are returned across the PMAPI to a
       requesting application, there may be more than one value for a
       particular metric.  Multiple values, or instances, for a single
       metric are typically the result of instrumentation being implemented
       for each instance of a set of similar components or services in a
       system, e.g.  independent counts for each CPU, or each process, or
       each disk, or each system call type, etc.  This multiplicity of
       values is not enumerated in the name space but rather, when
       performance metrics are delivered across the PMAPI by pmFetch(3), the
       format of the result accommodates values for one or more instances,
       with an instance-value pair encoding the metric value for a
       particular instance.

       The instances are identified by an internal identifier assigned by
       the agent responsible for instantiating the values for the associated
       performance metric.  Each instance identifier has a corresponding
       external instance identifier name (an ASCII string).  The routines
       pmGetInDom(3), pmLookupInDom(3) and pmNameInDom(3) may be used to
       enumerate all instance identifiers, and to translate between internal
       and external instance identifiers.

       All of the instance identifiers for a particular performance metric
       are collectively known as an instance domain.  Multiple performance
       metrics may share the same instance domain.

       If only one instance is ever available for a particular performance
       metric, the instance identifier in the result from pmFetch(3) assumes
       the special value PM_IN_NULL and may be ignored by the application,
       and only one instance-value pair appears in the result for that
       metric.  Under these circumstances, the associated instance domain
       (as returned via pmLookupDesc(3)) is set to PM_INDOM_NULL to indicate
       that values for this metric are singular.

       The difficult issue of transient performance metrics (e.g. per-
       filesystem information, hot-plug replaceable hardware modules, etc.)
       means that repeated requests for the same PMID may return different
       numbers of values, and/or some changes in the particular instance
       identifiers returned.  This means applications need to be aware that
       metric instantiation is guaranteed to be valid at the time of
       collection only.  Similar rules apply to the transient semantics of
       the associated metric values.  In general however, it is expected
       that the bulk of the performance metrics will have instantiation
       semantics that are fixed over the execution life-time of any PMAPI
       client.
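
       The following sketch enumerates an instance domain, assuming desc
       is a pmDesc previously obtained from pmLookupDesc(3) for some
       metric of interest:

           int   *instlist;
           char **namelist;
           int    i, n;

           if (desc.indom != PM_INDOM_NULL &&
               (n = pmGetInDom(desc.indom, &instlist, &namelist)) >= 0) {
               /* one internal-external identifier pair per instance */
               for (i = 0; i < n; i++)
                   printf("[%d] %s\n", instlist[i], namelist[i]);
               /* both arrays are allocated below the PMAPI */
               free(instlist);
               free(namelist);
           }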

THE TYPE OF METRIC VALUES

       The PMAPI supports a wide range of format and type encodings for the
       values of performance metrics, namely signed and unsigned integers,
       floating point numbers, 32-bit and 64-bit encodings of all of the
       above, ASCII strings (C-style, NUL byte terminated), and arbitrary
       aggregates of binary data.

       The type field in the pmDesc structure returned by pmLookupDesc(3)
       identifies the format and type of the values for a particular
       performance metric within a particular PMAPI context.

       Note that the encoding of values for a particular performance metric
       may be different for different PMAPI contexts, due to differences in
       the underlying implementation for different contexts.  However it is
       expected that the vast majority of performance metrics will have
       consistent value encoding across all versions of all implementations,
       and hence across all PMAPI contexts.

       The PMAPI supports routines to automate the handling of the various
       value formats and types, particularly for the common case where
       conversion to a canonical format is desired; see pmExtractValue(3)
       and pmPrintValue(3).
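
       For instance, a value may be converted to a canonical double
       regardless of its native encoding (a minimal sketch, assuming
       result is a pmResult from pmFetch(3) and desc is the corresponding
       pmDesc):

           pmAtomValue atom;
           int         sts;

           /* extract the first value of the first metric as a double,
            * whatever its native encoding (desc.type)
            */
           sts = pmExtractValue(result->vset[0]->valfmt,
                                &result->vset[0]->vlist[0],
                                desc.type, &atom, PM_TYPE_DOUBLE);
           if (sts >= 0)
               printf("value: %g\n", atom.d);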

THE DIMENSIONALITY AND SCALE OF METRIC VALUES

       Independent of how the value is encoded, the value for a performance
       metric is assumed to be drawn from a set of values that can be
       described in terms of their dimensionality and scale by a compact
       encoding as follows.  The dimensionality is defined by a power, or
       index, in each of 3 orthogonal dimensions, namely Space, Time and
       Count (or Events, which are dimensionless).  For example I/O
       throughput might be represented as Space/Time, while the running
       total of system calls is Count, memory allocation is Space and
       average service time is Time/Count.  In each dimension there are a
       number of common scale values that may be used to better encode
       ranges that might otherwise exhaust the precision of a 32-bit value.
       This information is encoded in the pmUnits structure which is
       embedded in the pmDesc structure returned from pmLookupDesc(3).

       The routine pmConvScale(3) is provided to convert values in
       conjunction with the pmUnits structures that define the
       dimensionality and scale of the values for a particular performance
       metric as returned from pmFetch(3), and the desired dimensionality
       and scale of the value the PMAPI client wishes to manipulate.
       Alternatively, the pmFetchGroup(3) functions can perform data format
       and unit conversion operations, specified by textual descriptions of
       the desired units and scales.
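
       As an illustration, a byte-valued metric may be rescaled to
       megabytes (a minimal sketch, assuming atom and desc are as in the
       previous example and the metric has dimension Space):

           pmUnits     mbytes = { 0 };  /* zero dimensions and scales */
           pmAtomValue out;

           mbytes.dimSpace = 1;
           mbytes.scaleSpace = PM_SPACE_MBYTE;
           /* rescale atom from its native units (desc.units) to Mbytes */
           int sts = pmConvScale(PM_TYPE_DOUBLE, &atom, &desc.units,
                                 &out, &mbytes);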

INSTANCE PROFILE

       The set of instances for performance metrics returned from a
       pmFetch(3) call may be filtered or restricted using an instance
       profile.  There is one instance profile for each PMAPI context the
       application creates, and each instance profile may include instances
       from one or more instance domains.

       The routines pmAddProfile(3) and pmDelProfile(3) may be used to
       dynamically adjust the instance profile.
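
       For example, a client may restrict subsequent fetches to a single
       disk instance (a minimal sketch; the instance name sda and the
       instance domain in desc are illustrative assumptions):

           int inst;

           /* map the external instance name to its internal identifier */
           if ((inst = pmLookupInDom(desc.indom, "sda")) >= 0) {
               pmDelProfile(desc.indom, 0, NULL);    /* exclude all */
               pmAddProfile(desc.indom, 1, &inst);   /* include "sda" */
           }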

COLLECTION TIME

       For each set of values for performance metrics returned via
       pmFetch(3) there is an associated ``timestamp'' that serves to
       identify when the performance metric values were collected; for
       metrics being delivered from a real-time source (i.e. pmcd(1) on some
       host) this would typically be not long before they were exported
       across the PMAPI, and for metrics being delivered from a set of
       archive logs, this would be the time when the metrics were written
       into the archive log.

       There is an issue here of exactly when individual metrics may have
       been collected, especially given their origin in potentially
       different Performance Metric Domains, and variability in the metric
       updating frequency at the lowest level of the Performance Metric
       Domain.  The PMCS opts for the pragmatic approach, in which the PMAPI
       implementation undertakes to return all of the metrics with values
       accurate as of the timestamp, to the best of our ability.  The belief
       is that the inaccuracy this introduces is small, and the additional
       burden of accurate individual timestamping for each returned metric
       value is neither warranted nor practical (from an implementation
       viewpoint).

       Of course, in the case of collection of metrics from multiple
       hosts, the PMAPI client must assume the sanity of the timestamps is
       constrained by the extent to which clock synchronization protocols
       are implemented across the network.

       A PMAPI application may call pmSetMode(3) to vary the requested
       collection time, e.g. to rescan performance metrics values from the
       recent past, or to ``fast-forward'' through a set of archive logs.
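
       For example, an archive replay client may step through an archive
       with interpolated values at fixed intervals (a minimal sketch,
       assuming an archive context is current and start holds the time of
       interest, e.g. from pmGetArchiveLabel(3)):

           struct timeval start;
           int            sts;

           /* ... set start, e.g. from the archive label ... */

           /* interpolate values at the requested time, advancing by
            * 10 seconds (10000 msec) on each subsequent pmFetch(3)
            */
           sts = pmSetMode(PM_MODE_INTERP, &start, 10000);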

GENERAL ISSUES OF PMAPI PROGRAMMING STYLE

       Across the PMAPI, all arguments and results involving a ``list of
       something'' are declared to be arrays with an associated argument or
       function value to identify the number of elements in the list.  This
       has been done to avoid both the varargs(3) approach and sentinel-
       terminated lists.

       Where the size of a result is known at the time of a call, it is the
       caller's responsibility to allocate (and possibly free) the storage,
       and the called function will assume the result argument is of an
       appropriate size.  Where a result is of variable size and that size
       cannot be known in advance (e.g. for pmGetChildren(3), pmGetInDom(3),
       pmNameInDom(3), pmNameID(3), pmLookupText(3) and pmFetch(3)) the
       PMAPI implementation uses a range of dynamic allocation schemes in
       the called routine, with the caller responsible for subsequently
       releasing the storage when no longer required.  In some cases this
       simply involves calls to free(3), but in others (most notably for the
       result from pmFetch(3)), special routines (e.g. pmFreeResult(3))
       should be used to release the storage.

       As a general rule, if the called routine returns an error status,
       then no allocation will have been done, and any pointer to a
       variable-sized result is undefined.
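
       The pmFetch(3) convention illustrates both rules (a minimal sketch,
       assuming pmids holds one valid PMID):

           pmResult *result;
           int       sts;

           /* the pmResult is allocated below the PMAPI ... */
           if ((sts = pmFetch(1, pmids, &result)) >= 0) {
               /* ... use result->vset[0] ... */

               /* ... and must be released with the special routine */
               pmFreeResult(result);
           }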

DIAGNOSTICS

       Where error conditions may arise, the functions that comprise the
       PMAPI conform to a single, simple error notification scheme, as
       follows:
       +  the function returns an integer
       +  values >= 0 indicate no error, and perhaps some positive status,
          e.g. the number of things really processed
       +  values < 0 indicate an error, with a global table of error
          conditions and error messages

       The PMAPI routine pmErrStr(3) translates error conditions into error
       messages.  By convention, the small negative values are assumed to be
       negated versions of the Unix error codes as defined in <errno.h> and
       the strings returned are as per strerror(3).  The larger, negative
       error codes are PMAPI error conditions.

       One error, common to all PMAPI routines that interact with pmcd(1)
       on some host, is PM_ERR_IPC, which indicates that the communication
       link to pmcd(1) has been lost.
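
       A typical calling convention is shown below (a minimal sketch;
       names and pmids are as in the earlier name lookup example):

           int sts;

           if ((sts = pmLookupName(1, names, pmids)) < 0)
               /* negative: translate via the global error table */
               fprintf(stderr, "pmLookupName: %s\n", pmErrStr(sts));
           else
               /* non-negative: the number of names translated */
               printf("%d name(s) translated\n", sts);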

MULTI-THREADED APPLICATIONS

       The original design for PCP was based around single-threaded
       applications, or more strictly, applications in which only one thread
       was ever expected to call the PCP libraries.  This restriction has
       been relaxed for libpcp to allow the most common PMAPI routines to be
       safely called from any thread in a multi-threaded application.

       However, the following groups of functions and services in libpcp
       are still restricted to being called from a single thread, and this
       is enforced by returning PM_ERR_THREAD when an attempt to call the
       routines in each group from more than one thread is detected.
       1.  Any use of a PM_CONTEXT_LOCAL context, as the DSO PMDAs that are
           called directly from libpcp may not be thread-safe.
        2.  The interval timer services use global state with semantics
            that demand use from a single thread only, namely
            __pmAFregister(3), __pmAFunregister(3), __pmAFblock(3),
            __pmAFunblock(3) and __pmAFisempty(3).
       3.  The following (undocumented) access control manipulation routines
           that are principally intended for single-threaded applications:
           __pmAccAddOp, __pmAccSaveHosts, __pmAccRestoreHosts,
           __pmAccFreeSavedHosts, __pmAccAddHost, __pmAccAddClient,
           __pmAccDelClient and __pmAccDumpHosts.
       4.  The following (undocumented) routines that identify pmlogger
           control ports and are principally intended for single-threaded
           applications: __pmLogFindPort and __pmLogFindLocalPorts.

PCP ENVIRONMENT

       Most environment variables are described in PCPIntro(1).  In
       addition, environment variables with the prefix PCP_ are used to
       parameterize the file and directory names used by PCP.  On each
       installation, the file /etc/pcp.conf contains the local values for
       these variables.  The $PCP_CONF variable may be used to specify an
       alternative configuration file, as described in pcp.conf(5).  Values
       for these variables may be obtained programmatically using the
       pmGetConfig(3) function.
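
       For example (a minimal sketch; PCP_LOG_DIR is one of the standard
       variables set in /etc/pcp.conf):

           /* the returned string is managed by the library and should
            * not be modified or freed by the caller
            */
           char *logdir = pmGetConfig("PCP_LOG_DIR");
           printf("PCP log directory: %s\n", logdir);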

SEE ALSO

       PCPIntro(1), PCPIntro(3), pmda(3), pmGetConfig(3), pcp.conf(5) and
       pcp.env(5).

COLOPHON

       This page is part of the PCP (Performance Co-Pilot) project.
       Information about the project can be found at ⟨http://www.pcp.io/⟩.
       If you have a bug report for this manual page, send it to
       pcp@oss.sgi.com.  This page was obtained from the project's upstream
       Git repository ⟨git://git.pcp.io/pcp⟩ on 2017-07-05.  If you discover
       any rendering problems in this HTML version of the page, or you
       believe there is a better or more up-to-date source for the page, or
       you have corrections or improvements to the information in this
       COLOPHON (which is not part of the original manual page), send a mail
       to man-pages@man7.org.
Performance Co-Pilot                 PCP                            PMAPI(3)
