Charter of the EPICS V4 Working Group

3rd Charter of the EPICS V4 Working Group, for Apr 2014 to Apr 2015

This version:
charter_20140621.html
Latest version:
charter.html
Previous version:
charter_20130428.html
Editors:
Greg White, SLAC, PSI
Bob Dalesio, Brookhaven Lab

Abstract

EPICS is a set of Open Source software tools for scientific and industrial control systems. This document is a statement of the goals and deliverables of the working group developing the next major version of EPICS, version 4, over the period approximately April 2014 to April 2015.

This year will focus on four major areas: image processing and direct support for areaDetector plugin pipelines; multicast; a PVA gateway; and further simplifying high level scientific applications programming by leveraging structured PVs to enable PVs which are composites of CA PVs. An open question is whether we can also deliver a timing-synchronous data acquisition service and API.

For more information about EPICS, please refer to the home page of the Experimental Physics and Industrial Control System.

Status of this Document

This version is the first editors draft of the 2014 charter. Following initial review by the working group this draft will be presented to the funding laboratories for approval.

The terms MUST, MUST NOT, SHOULD, SHOULD NOT, REQUIRED, and MAY when highlighted (through style sheets, and in uppercase in the source) are used in accordance with RFC 2119 [RFC2119]. The term NOT REQUIRED (not defined in RFC 2119) indicates exemption.


Table of Contents

Introduction and Background

EPICS is a set of Open Source software tools, libraries and applications developed collaboratively and used worldwide to create distributed soft real-time control systems for scientific instruments such as particle accelerators and telescopes, and also for industrial process management.

EPICS Version 4 (V4) is the effort to produce the next major version of EPICS. EPICS Version 4 software modules interoperate with the classic IOC. The EPICS V4 Working Group is a collaborative effort of members invited by Brookhaven Lab, to bring EPICS V4 to its full potential.

This Charter is a statement of the basis for the work of the EPICS V4 working group over the charter lifetime. It includes the intended outcomes for that work, its deliverables and success criteria. It also outlines some administrative matters of the organization of the group, and its working practices.

The EPICS V4 Working Group was chartered 2½ years ago to define and publish an open control protocol for control system endpoints which functionally extends Channel Access, making it suitable for both device control and high level services. It included data types suitable for high level services, such as tables and complete image data, for example from gated photon detectors.

In that time, EPICS V4 successfully brought a communications protocol for controls, "pvAccess", to advanced public review status, and a software framework suitable for high performance distributed control, message passing, and high level software services, to production release status.

Now the working group will reorient towards supporting high performance distributed data processing. The IOC as an interface to hardware will remain largely unaltered, being changed only insofar as needed to support aggregate PV data.

Charter lifetime

This version of the Charter is specifically for the third year of operations of the Working Group, covering approximately the year April 2014 to April 2015.

Mission

This year the Mission of the EPICS V4 working group is to support high performance data processing, particularly of area detector data, and to develop several mechanisms to make it easier for high level applications to manage groups of PVs (aggregates). Additionally, we will start development of a gateway, so that aggregate data and data processing may be housed anywhere in classical EPICS networking infrastructures.

Basis for Work

The EPICS V4 working group is directly supported by Brookhaven Lab (BNL) and collaborating organizations, subject to their following the mission, goals, and expected deliverables of this Charter, and to their following the EPICS V4 Process as the work method. Conformance to and agreement with this Charter is required for participation in the working group.

Basic Scope and Goals

In the timespan of this charter it is still expected that in an EPICS installation, core control and hardware module support would continue to be done with EPICS V3 IOCs. Very little if any direct changes will be made this year to the core processing of the IOC.

Previous work in V4 created an EPICS "middle layer" for "Service Oriented Architecture" based operations. That work is now largely complete.

In this charter period we will enable easier access to associated PVs, so that a client deals with one PV which "expands to" a number of others. For instance, one beam position monitor (BPM) PV may be valued "X,Y,current" to express the horizontal and vertical beam offsets plus the signal strength at the BPM's position.
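As an illustrative sketch of this idea (the structure, field names, and the `ca_get` helper below are hypothetical stand-ins, not the real pvAccess or CA API), such an aggregate BPM PV can be modeled as a composite value assembled from its underlying CA channels:

```python
# Hypothetical model of an aggregate BPM PV that "expands to" three
# underlying CA channels. All names here are illustrative only.
from dataclasses import dataclass

@dataclass
class BpmReading:
    x: float        # horizontal beam offset
    y: float        # vertical beam offset
    current: float  # signal strength at the BPM

def read_bpm_aggregate(ca_get, prefix):
    """Fetch three CA channels and return them as one composite value.

    ca_get is a stand-in for a real channel-access get function;
    prefix is the device name shared by the member channels.
    """
    return BpmReading(
        x=ca_get(prefix + ":X"),
        y=ca_get(prefix + ":Y"),
        current=ca_get(prefix + ":CURRENT"),
    )
```

The point of the sketch is only the shape of the interaction: the client makes one logical request and receives one coherent composite, rather than issuing three independent gets whose values may come from different processing cycles.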

Scope for this year's charter is specifically extended to the IOC level, where we will complete integration with vxWorks and interoperation with the classic EPICS IOC. How much effort should go into vxWorks, and who would it benefit? We need a partner to test the pvDatabase and db* modules on the IOC; otherwise vxWorks support should be best effort.

We intend to improve the EPICS IOC's suitability and performance as a scientific instrument data processing pipeline, specifically for fast detector image data, and to greatly improve facilities for clients to deal with these data.

Note that, as last year, our primary focus is on definition of standard behavior through normative documents of the Working Group, informed by the development of software which implements the behavior. As last year, we shall supply reference implementations for these standards. It is to be expected that in some respects, reference implementations may trail the standard.

Goals

The goals of the Working Group over the charter lifetime are:

  1. Complete development of a high performance instrument data processing pipeline suitable for detector data processing. cf pvDatabase, and associated areaDetector support.
  2. Development of a pvAccess server for the IOC which enables IOC channel data to be collectively requested and returned as a coherent data package to clients. For instance, a client can make one request for the pertinent data of a beam position monitor (BPM), and get all and only the x, y, current, and status at that BPM. cf dbGroup and pvDatabase multi.
  3. Further development of pvAccess, concentrating on high performance and communication mechanisms other than TCP/IP (multicast in particular). cf Multicast, zeroMQ and performance quantification.
  4. Consolidation of pvData zero-copy and block data transfer enhancements. pvData is the open, documented, data type encoder/decoder capable of exchanging both atomic and semi-structured data, suitable as the data exchange format for high end diagnostics and services.
  5. Complete interoperability with existing EPICS v3 infrastructure and security norms. cf dbGroup and gateway
  6. Development of data aggregation services, to enable scientific application programs to deal simply with many channels and channels whose update rates are different to their client consumption rates. cf "gather" functionality, deployed in pvDatabase, with integrated synchronous timing, dbPv output of Normative Types.
  7. High Performance. The above protocols and controller framework must be of high performance. See Performance Goals below.
  8. Improve ease of use and ease of entry. The above protocols and framework must be transparent, documented, and delivered so that it is very easy for a developer to create pvDatabase applications and high level data services which interoperate easily with "V3". In a change from the last charter, we will now concentrate on helping IOC engineers to interface IOC applications to pvDatabase.
  9. Implementability. The above protocols, and the processing of message I/O, must be documented in such a way that outside software developers can easily create an interoperable implementation.
  10. Error handling. All normative software implementations must handle and log all errors in such a way that distributed users can see both the causes and outcomes of all system errors (scientific and control issues, as well as detected program exceptions).
  11. Develop outreach to the EPICS community, such that interested EPICS users know the scope of EPICS V4, the relationship of EPICS V4 to classic EPICS, and the goals for this year (for instance, that the IOC will not be significantly developed, though data interfaces will be). Agree this charter with the funders.

Interoperability with EPICS V3

Interoperability of the new communications protocol pvAccess, with the classic IOC has so far been supported very well.

EPICS V4 endpoints must continue to be interoperable with EPICS v3 IOCs. This interoperability must be both interlayer (classic IOC ⇔ pva client) and intralayer (V3 IOC ⇔ "embedded pvDatabase"). PvAccess must run in version 3 IOCs and serve all v3 type structures.

We will add the ability to get or put a collection of V3 channels (dbGroup) synchronously (so long as they are all in the same lockset) as the result of one V4 channel operation.

This mechanism must satisfy the expected use case of device control including EPICS V4 for the foreseeable future, in which device control will continue to be through the EPICS "V3" IOC (or simply "the IOC", since it is also a V4 IOC), with control wire protocols and client sides in EPICS V4. Presently it is still the position of the WG that device control is not in the scope of the WG.

Interoperability within EPICS V4

Each normative standard must be demonstrated to be interoperable with adjacent layers in the EPICS v4 protocol stack by production of, and testing, at least one implementation, in each of C++ and Java, from device to client (end to end).

Specific requirements for Pipelining

  1. Loss-less, high performance transport of detector and camera images

More detail is needed from David.

Performance Goals

The following minimum performance goals are unchanged from last year's charter. Last year we achieved these goals; this year we intend to certify and document their achievement, particularly for streams over network connections and in the context of image processing and archiving:

We shall publish the performance measurement data and micro-benchmark test framework for pvAccess, including documentation.

Shall we also write and publish, to the EPICS community, a performance report comparing EPICS V3 to EPICS V4 for some common EPICS V3 control and read tasks? What have we learnt from micro-benchmarking?
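The general shape of such a micro-benchmark, independent of pvAccess itself, might look like the following sketch. The operation under test is a trivial stand-in; a real harness would wrap a pvAccess get, put, or monitor round trip, and the function names here are illustrative, not part of any published framework:

```python
# Minimal round-trip latency micro-benchmark skeleton. The operation
# passed in is a stand-in for a real pvAccess client call.
import time

def benchmark(op, iterations=10000):
    """Run op() repeatedly; return a sorted list of per-call latencies (s)."""
    samples = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return samples

def percentile(sorted_samples, p):
    """Return the p-th percentile (0-100) of pre-sorted latency samples."""
    idx = min(len(sorted_samples) - 1, int(len(sorted_samples) * p / 100))
    return sorted_samples[idx]
```

Reporting median and tail percentiles (rather than a single mean) is what makes such a harness useful for comparing, say, V3 and V4 read paths, since transport effects show up mainly in the tail.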

Out of Scope

The following are out of scope for the group specifically:

  1. As last year, the production of any new Input/Output Controller (IOC) framework, suitable for both controls and services, known as "pvIOC", will not be pursued as a software deliverable this year. Rather, we will concentrate on the pvDatabase processor.
  2. Java and C/C++ bindings, and only these, are required for reference implementations of the core protocols (pvAccess and pvData). One or the other language binding is required for service implementations.
  3. Development of specific production-quality services is out of scope. A few reference services may be produced, but they are only for illustration and are offered as a basis for extensions. It is of course expected that individuals in the WG will develop production services; these are just not in the remit of the WG.

Success Criteria

Derived from the goals above, success criteria are at least:

  1. EPICS V4 integration into the EPICS release schedule. That is, merger of EPICS V4 into "EPICS" base
  2. Deployed image processing pipeline
  3. Early prototype pvAccess gateway
  4. Demonstrated high performance
  5. Demonstrate that it is easy to implement an EPICS V4 client, taking data from a number of EPICS V3 records from more than one lock-set, returned as NTMultiChannel
  6. Demonstrated ease of dealing with control system measurement data and errors through use of Normative Types
  7. Demonstrated ease for an end user of getting a list of all PVs and named entities in an EPICS network, through an EPICS client, given a name pattern

Deliverables and Duration

The outputs of the working group will be delivered as a system of documented standards, plus reference implementation source code where relevant. These documents and source codes will all be available from the EPICS V4 web site, http://epics-pvdata.sourceforge.net/.

Deliverables

The group is expected to produce the following normative deliverables (WG to agree):

  1. An image processing pipeline and design framework, particularly oriented towards encapsulating the semantics of asynRecord and asynDriver. At the end of the year this must also be documented as implemented.
  2. A pvAccess gateway. Specifically, in this charter lifetime the gateway need only enable the classic EPICS fanout through a network interconnect of whole PVA objects. The gateway SHOULD also enable streaming. In following charters we MAY additionally address issues of pvRequest specificity (say the user requests only 1 field of a PVA PV) or caching specificity (caching largely static fields), protocol translation (eg PVA <-> HTTP) etc.
  3. Further develop the IOC module that serves V3 channel data to pvAccess clients, now called pvaSrv. Upgrade this module to include at least the following (cf dbGroup):
    1. pvAccess client get, put, or monitor of a single PVA channel which corresponds to a set of CA channels, where the members of the set are defined at runtime prior to the get call
    2. pvAccess client get, put, or monitor of a single PVA channel which corresponds to a set of CA channels, where the members of the set have been defined by a prior configuration
    Precise semantics in the case of locksets, status, and alarms, particularly for I/O operations involving more than one CA channel, must be defined and clearly stated in documentation
  4. Develop a framework to enable definition of get, set and monitor semantics, for collections of PVA or IOC process variables, as a single NTMultiChannel PV. In this way enable a PV to express all of the interesting values of a device, or all of the interesting values of a system of devices - eg all the X,Y,TMIT of all BPMs in an accelerator line. cf pvDatabase multi
  5. Extend the "multi" framework above to integrate with an accelerator timing system, so that all PVs in the system may be collected from the same beam pulse or other event. cf synchronous multi
  6. Document platform readiness. Decide on supported targets, then track and publish the proven host compilation, cross-compilation, and runtime experience for each platform in that list, covering at least the most recent public release tar and the tips of the source code repos (when those are thought to be consistent and should work)
  7. Completion of the easy to use pvAccess API, now called easyPVA, to add multichannel and monitor, for both Java and C++ bindings.
  8. Develop and publish Matlab PVA library
  9. Develop and publish python PVA library
  10. Add measurement/fit errors to Normative Types. This may involve quite extensive redefinition of some types. Complete new NTNDArray type.
  11. Extend the "Getting Started" document to cover how one would start programming using EPICS V4 for services and pvaSrv.
  12. Complete support for all stably defined normative types in pvManager. A better statement of what this means is needed, in a form Gabriele and MS can commit to.
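The "synchronous multi" deliverable above can be sketched as follows. This is a hedged illustration only, assuming each channel sample carries a timing-system pulse identifier; the data shapes and the function name are hypothetical, not the pvDatabase API:

```python
# Sketch of "synchronous multi": collect one sample per channel for the
# same beam pulse. Assumes each sample is tagged with a pulse ID; all
# names and shapes here are illustrative.
def collect_for_pulse(history_by_channel, pulse_id):
    """history_by_channel maps channel name -> list of (pulse_id, value).

    Returns {channel: value} if every channel has a sample for pulse_id,
    else None, since the aggregate is only meaningful when complete.
    """
    result = {}
    for name, samples in history_by_channel.items():
        match = next((v for pid, v in samples if pid == pulse_id), None)
        if match is None:
            return None  # incomplete pulse: at least one channel missing
        result[name] = match
    return result
```

The essential semantics, whatever the implementation, are that clients receive either a complete pulse-coherent snapshot or a clear indication that one is not available, rather than a mix of samples from different pulses.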

Expected Milestones

To be added.

Coordination with Other Bodies

The Working Group should coordinate and align objectives to satisfy the requirements of the following bodies:

BNL NSLS Controls
[TBD:]
SLAC FEL accelerators (LCLS/LCLS-II)
SLAC requires EPICS V4 services framework in general, and for online accelerator modelling in particular.
Control System Studio (CSS)
Periodic joint meetings of the EPICS V4 WG and CSS developers.
"Database" group
Periodic joint meetings of the EPICS V4 WG and the DISCS WG.

Group Participation

Effective participation is expected to consume 20% of one's time, though this may be more during development of a normative product of the group. The Chair shall ensure that the criteria for Good Standing are understood and followed:

Participant Status
Prospective group participants should ask their primary EPICS contact or manager to send an email to one of the EPICS V4 chairs, showing that they understand the participant will be involved with EPICS V4 and the commitment level expected.
Observer Status
Individuals who are not active at the full participation level may join meetings of the group as observers. Email the chairs for details of how to join the meetings.

The group is expected to have at least 4 active participants to be successful.

Good Standing

A participant who is absent from more than three consecutive meetings, or who in the opinion of the chairs is not acting on Action Items or on their responsibilities for group deliverables, will be deemed out of good standing and will no longer participate.

Chair

The chairs of this Group are Greg White (SLAC) and Andrew Johnson (APS).

Meetings

The Group will have distributed meetings of one to two hours every week, and face-to-face meetings every three or four months.

Communication

The Working Group will use the mailing list epics-pvdata-devel@lists.sourceforge.net. All except the most trivial messages should be cc'ed to this mailing list. Its web archive should be used for backward references in communications.

Confidentiality

The proceedings of this Working Group are open to all (not just members of the working group), subject to exceptions made by the Chairs, after consultation with the Working Group. The group's communications on the web archive will be world readable. Posting to epics-pvdata-devel@lists.sourceforge.net will be unrestricted for members, but moderated for non-members.

Patent Policy

This Working Group operates under EPICS Open License.

Glossary

Host Level
Host Level describes the category of computer systems including daemons, services, user level GUIs and other front-end programs, in the computer architecture of a large distributed computer system, such as an accelerator control system. Host level is in distinction to field level. In the client-server paradigm, host level programs are largely clients of field level systems.
Field Level
Field Level describes the category of computer systems including front-end-controllers and network enabled diagnostic equipment in the computer architecture of a large distributed computer system, such as an accelerator control system. In EPICS the field level corresponds to IOCs and instrument hardware. These systems are "in the field". In the client-server paradigm, field level systems are largely servers of host level systems.
V4 IOC
A "classic" IOC that is connected to pvAccess, such as one that includes pvaSrv. Note that a V4 IOC in this definition is very different from what may formerly have been understood as a V4 IOC, namely an instance of an IOC running pvIOC. It is envisaged that an IOC running pvIOC may be called something else now that V4 is being targeted to the classic EPICS IOC runtime.

Greg White, EPICS V4 group co-chair
Last modified: Wed Jul 30 12:58:46 CEST 2014