AGENDA for Thursday, July 12th
EPICS Version 4 and the future direction for V4.
9.15 Functional requirements for scan engines and data rates as services, *Daron Chabot*
[Anyone else from PSI or Diamond?]
10.00 *Bob Dalesio* V3 limitations with respect to scanning.
10.45 *Markus* (for MJ and DM), on PSI thoughts on EPICS and IOC
11.15 Discussion on V3 IOC limitations with respect to scanning and other beamline requirements
[*Bob Dalesio* leads]
1.30 Thoughts on requirements for leveraging the combined V3/V4 system (*Greg*, 30 min)
IOC getting static data, IOC publishing structured data, IOC fast feedback support,
EPICS system instrumentation, Directory Service search and data acq integration, V4 IOC
SNL (I'm lookin' at you Ben!), Web Services
2.00 pm Data Analysis Requirements
[Would prefer to hear this from a PSI or Diamond person – kind of data – results of analysis]
3.00 Visualization requirements and solutions
*Everyone*, - inventory – what exists – what kind of interface? HDF5?
3.30 Discussion to formulate what is sensible for the EPICS embedded IOC in the short and medium term.
Outcomes and recommendations of this discussion would be used in Friday's session to build objectives
for the coming year.
[*Bob Dalesio* leads]
Present: DH, Mark Heron (MH) (subject to availability), BD, GS, MK, Michael Abbott (MA), JR, TK, Daniel Meyer (DM), Markus Janousch (MJ), MS, Daron Chabot (DC), Paul Gibbons, Tom Cobb
PRESENTATION: Functional requirements for scan engines and data rates as services, Daron Chabot, BNL
Example beamline data rates:
CSX: 8 Gbps, 4 TB/day
CHX: 4-40 Gbps, 4 TB/day
IXS: ~10 Mbps, <1 GB/day
The experimental method very often involves "scanning", and that most often involves physical motion.
Motion is of sample, detector, optics etc.
* Continuous (on-the-fly). Involves a single acceleration/deceleration cycle.
Recently, continuous has become much more common. It requires buffering close to hw. Requires trigger distribution.
So, need to focus on continuous scanning requirements [for future controls].
Places lacking in EPICS:
1. Reciprocal space (aka q-space, k-space).
It is a spatial-frequency domain: 1/distance, not 1/time.
Crystallography defines the space in 3 axes: [h k l]. The orientation of [h k l] maps to several physical axes.
There is no direct support of this axis space in EPICS.
Therefore no alarm, archive, direct control in [h k l] space in EPICS.
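As a minimal illustration of the [h k l]-to-physical mapping discussed here, the sketch below maps Miller indices to a reciprocal-space vector for the simplest possible case (a cubic lattice of parameter a; a real diffractometer uses a full UB matrix for arbitrary cells and goniometer orientation, which this deliberately omits):

```python
import numpy as np

def hkl_to_q(hkl, a):
    """Map Miller indices [h k l] to a reciprocal-space vector q (units 1/distance).

    Sketch for a cubic lattice of parameter a (Angstrom) only; arbitrary
    cells/orientations need a UB matrix, not this diagonal scaling.
    """
    b = 2.0 * np.pi / a                     # reciprocal lattice spacing of a cubic cell
    return b * np.asarray(hkl, dtype=float)

# |q| relates to the probed d-spacing: d = 2*pi / |q|
q = hkl_to_q([1, 1, 0], a=4.05)             # a ~= 4.05 Angstrom (e.g. aluminium)
d = 2.0 * np.pi / np.linalg.norm(q)
```

Alarm, archive, and control in [h k l] space would sit on top of exactly this kind of transform, applied per physical axis.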
Many scanning operations are defined in terms of reciprocal space operations, particularly for materials science. Eg X-ray
photon correlation spectroscopy (XPCS).
In XPCS the objective is to study the timescales of material processes. These are derived from scanned "frames", often 20K frames/s. Frame acq is very bursty. You have to aggregate each pixel in a sequence of frames.
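The per-pixel aggregation over a frame sequence that XPCS needs can be sketched as the normalized intensity autocorrelation g2(tau), the standard XPCS quantity from which material timescales are extracted. This is a naive sketch: production XPCS codes use multi-tau algorithms and q-binning, not a direct loop over lags.

```python
import numpy as np

def g2(frames, lags):
    """Per-pixel normalized intensity autocorrelation g2(tau) over a frame series.

    frames: array of shape (T, H, W) of detector frames.
    lags:   iterable of frame offsets (in units of the frame period).
    Returns an array of shape (len(lags), H, W).
    """
    frames = np.asarray(frames, dtype=float)
    mean_sq = frames.mean(axis=0) ** 2                     # <I>^2 per pixel
    out = []
    for tau in lags:
        # <I(t) * I(t + tau)> averaged over time, per pixel
        prod = (frames[:len(frames) - tau] * frames[tau:]).mean(axis=0)
        out.append(prod / mean_sq)
    return np.stack(out)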
A particular difficulty is in the coordination of the frame data acquisition, and the online data analysis.
PG: But can't any process monitor the data acq and start processing at a given frame number?
BD: [missed reply...?]
DC: The point is there is no framework for coordination. There are solutions, but it's not supported.
BD: Yes, our objective is to finish a framework.
PG: Do you want this f/w on the IOC?
BD: In the IOC or even lower, such as in the DAQ. Particularly for short term.
MJ: There are limitations to the extent one can do that in different exp. setups.
BD/MJ: Much of the data is zeros. It would be easier if formally sparsified (get rid of the zeros).
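A minimal sketch of the "get rid of the zeros" idea: keep only the coordinates and values of nonzero pixels (COO-style). A real system would use an established sparse or compressed format rather than this hand-rolled one.

```python
import numpy as np

def sparsify(frame):
    """Store only the nonzero pixels of a detector frame.

    Returns (coords, values, shape): coordinates of nonzero pixels,
    their values, and the original frame shape.
    """
    mask = frame != 0
    coords = np.argwhere(mask)
    values = frame[mask]
    return coords, values, frame.shape

def densify(coords, values, shape):
    """Rebuild the full (mostly-zero) frame from its sparse form."""
    frame = np.zeros(shape, dtype=values.dtype)
    frame[tuple(coords.T)] = values
    return frame
```

For very sparse frames this shrinks the payload to a few bytes per nonzero pixel instead of one element per pixel.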
BD: Essentially, all analysis that is now below the EDB is there because the EDB isn't presently suitable.
MJ: Human interaction is needed to manage the coordination of data acq and analysis.
PG: yes, but there are ways to do that coordination.
Three Biggest Areas of Open Issues:
1. Scanning framework. Start, stop, control of scans.
2. Online data analysis frameworks (ODA)
3. Data Formats (HDF5).
GW: Question about role of EPICS in data processing...
The split between data acquisition and later analysis is no longer feasible because of the data rates
Diamond is doing pre-analysis (visualization)
Access to the data from user's home labs is important
Standardization, formats, metadata etc. is important (collaborations)
MJ: In tomography, they need the whole data set. For visualization you need the whole data set.
PG: That's true.
BD: This suggests the need for a data pipe that can give data to any client.
With a proper architecture, one has more freedom of selecting where the data processing takes place
MJ: One issue is that we store a huge amount of data for experimenters, but there are some groups that
aren't set up to access or process that data (needs lots of computing resources).
NeXus defines semantics over the HDF5 format.
Service for collecting metadata ("gather") would be useful
Metadata stored in a DB and used for data queries
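The "gather" service plus metadata-driven queries could look like the following smallest-possible sketch (table and column names are hypothetical; a real service would sit behind pvAccess, use a persistent RDB, and index NeXus/HDF5 files):

```python
import sqlite3

# In-memory metadata store; a real service would use a persistent RDB.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metadata (dataset TEXT, key TEXT, value TEXT)")

def gather(dataset, items):
    """Record metadata key/value pairs collected for one dataset."""
    db.executemany("INSERT INTO metadata VALUES (?, ?, ?)",
                   [(dataset, k, str(v)) for k, v in items.items()])

def query(key, value):
    """Find datasets whose stored metadata matches key=value."""
    rows = db.execute("SELECT dataset FROM metadata WHERE key=? AND value=?",
                      (key, value))
    return [r[0] for r in rows]

# Hypothetical dataset names and keys, for illustration only.
gather("scan_0001.h5", {"beamline": "CSX", "energy_keV": 0.75})
gather("scan_0002.h5", {"beamline": "CHX", "energy_keV": 8.0})
```

Experimenters could then locate data by beamline, energy, sample, etc. without opening the files themselves.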
PRESENTATION: Thoughts on EPICS V4 on beamlines. Markus Janousch, PSI
Tomography, diffraction, spectroscopy (electron, photon)
Scanning of actuators and reading out detectors. Often simultaneously changing external parameter (temp, pressure,...)
STXM: raster scan of the sample, pixel at a time
acq time a few ms
Synchronous scanning in 1,2,3 dimensions
Data rates from KBytes/sec to 800 MBytes/sec
Metadata of machine, beamline status, and experiment needed
Synchronization in hardware. Eg, the "whole status of the machine on a bunch-to-bunch basis, SwissFEL at 100Hz."
DAQ as dedicated apps or scripts
(scribe disconnected for a while)
EIGER: 50 Gb/s for a module
8 GB RAM on module
9M: 906 Gb/s
Compression on-module (data rate a few GB/sec)
How can EPICS V4 help?
* General scanning framework, like sscan record
* General Data Acq
* Anti-collision management
* Simple DAQ -> Sophisticated and easily configurable state machine! [BEN, you listening to this!]
* Integration of RDB for samples
* Data processing queues
* Meta data of the IOC (status, health, ...)
Address the different programmer levels/skills:
+ Have a very easy and robust get, set, monitor (one usable by scientists)
+ Provide this in several scripting languages, and command line
+ Have an easy interface to the archiver and history
+ Have a tool to easily/quickly plot data and correlate them
Have a plugin architecture and developer tools.
EPICS 4 is EPICS V3 plus a platform for (scientific) services, plus possibly a framework for aggregating data in an embedded IOC (V3).
GW: propose two working groups, one for services, one for "base" EPICS 4
MJ would like:
Collaboration provides reference services, on one platform (Linux)
Client tools and the framework work on many clients
Development tool: make
Error logging: use the pvAccess protocol, etc.
JR: Does EPICS V4 aim to replace 0MQ? Would you use 0MQ for high performance, and V4 for everything else?
MS: Most of the main good things in 0MQ are in pvAccess
PG: What about guaranteed delivery? I found not all messages in V4 were getting through.
MS: In pvAccess you may lose monitors, but you're notified about it.
MS: In pvAccess you monitor signals. If a signal changes a lot, then some notifications may be lost.
JR: What should you use when you want to monitor data, like publish and subscribe?
MS: Let's say you have acquisition, and it may be bursty; then set the queue length appropriately.
JR: Ok, makes sense. That's functionally equivalent to 0mq.
PG: But that's not the case in V3.
MK: In v3 you can't set the queue length for data.
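A minimal sketch of the queue-length setting MS and MK describe, expressed in a client's pvRequest string (the queue size and field list here are arbitrary examples):

```
record[queueSize=100]field(value,timeStamp,alarm)
```

A monitor created with this request buffers up to 100 updates per channel on the server side, and the client is notified (an overrun indication) if updates are still dropped.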
PG: I used caput to put to a V4 IOC, and I lost some data.
MK: Well, you should have seen all of them. I'd have to see it.
AI on JR: Add pattern for caput to a V4 IOC to the Architectures Document.
PRESENTATION: EPICS V4 ideas in support of beamline science. Tom Cobb, Diamond
Beamlines are characterised by:
* Motors, lots of motors
- mainly Delta Tau, with different coordinate systems
* Cameras and detectors
* It takes longer than it should to create support for collections of such systems
V3 Useful features
* rewireable links in records
* scan rate changes in records
* streamDevice - stops you writing boilerplate C code for communicating with simple devices
V3 problem areas:
* 3-fold duplication of driver, databases, screens
* The pain of building an IOC, undocumented template macros, etc.
* Lack of structured data types and metadata
* Can't create and delete records on the fly.
* Can't form expressions in macro substitutions.
* Concurrency across multiple fields and multiple records
* Syncing demand and readback fields. There is no pattern for this; each developer has to decide how.
* Only 16 fields in an mbbo and only 39 chars in a char string!!
* Objects (created at init) with field lists that can grow and shrink.
An example of a motor high-level interface would be a record composed of a conglomeration of V3 [or V4] records.
S: How would the "database" work?
TC describes how a notional next-gen EPICS database would work. Describes a 3-tier system:
soft support/intelligent device (a pump) -> streamDevice -> IP port (low-level I/O)
JR: At Diamond, we mainly add functionality at the asyn [driver] level, rather than at record support level.
TC: Therefore, you really only need one smart record that can be used with asyn.
[need this discussion summarized]
S: Structured Data
TC would like:
Metadata that can be attached to any data [this is provided].
Like to be able to choose which fields and metadata are displayed
Functionality now handled by NTNDArray
Describes the idea of creating device objects from parameterized templates.
PG: On metadata, I think that's a misuse. In the stats example, that's data too. Misuse of the word.
NEW TOPIC: Scanning Limitations. Bob Dalesio, BNL
Large buffer management
Multi-dimensional arrays are not supported (only one-dimensional arrays)
No metadata, limited support for operations on arrays
Handling of large datasets
Multi-core support in database
Timestamping from various I/O
normative types for images, ndim arrays, nodes of HDF5?
records that support operations on arrays, tables, etc.
buffer life cycle from driver through communication
thread assignment to cores
PG: Diamond is defining the scans in Python HL apps
How to provide a scan system? A service in the V4 framework?
Should the collaboration try to provide an IOC that is able to run a scan service?
PG: If yes, should be flexible (with a python interface)
GW: are the requirements general enough to make it worthwhile writing general-purpose services?
MJ: yes; every beamline has a defined (photon) energy, the reciprocal space is well defined, etc.
BD: two areas: data processing (area detector) and DA/experiment control (scan)
DC: Presently the scan record is an interface to a scan engine... [more not recorded]
PG: The service should be implemented in Python.
DC: Only the interface to the scan engine is important; the details of the implementation of the scan engine (e.g. its language) are not.
DC: What is it about the real-time OS that you use that drives it needing to be real-time, for beamline control?
JR: In fact, we started to use Linux instead.
TK: We probably won't use VxWorks for SwissFEL.
BD asks beamline group representatives: do we need to form an "EPICS V4 beamline working group" (to define normative types, ...)?
DC, PG, ...: Yes.
BD: If we agree on normative types this would be a BIG win.
PG: Normative types for images would be really useful.
GW: Should the working group define normative types and then define the scan service?
TC: Do not push it; start with areaDetector.
DC: Grey area: what are plans to support framework to get data from HW into the NT?
BD: Nothing was done in this area. We pull the data to the service. An aggregator can pull data from V3 and provide it in a form of a NT.
Ralph: What about areaDetector plugin for NT.
JR: We've already done this. AD serializes images into an NT and sends it over sockets (pvAccess was not ready then).
PG: GDA receives data/images over pure sockets (for test purposes).
NEW TOPIC: pvIOCCPP thoughts about future development, MK
pvIOCCPP module currently implements/contains:
- implementation of V3 channel provider (no aggregation)
- service support (RPC support)
- obsolete header for easyPVA
What should pvIOCCPP provide? Most of what pvIOCJava provides (3-4mm), in addition to implementing Dirk's goal.
- database (set of PVRecords)
- install (support to dynamically install/remove PVRecords)
- pvCopy (utility to access subset of the fields of the PVStructure)
- pvAccess (implementation of ChannelProvider and Channel that access local PVRecords)
- monitor (monitor support)
- support (record processing, code attached to a field of PVRecord)
- portDriver (V4 equivalent of asynManager)
- caV3 (client/server support for CAV3)
- xml (reading XML to populate database)
- swtshell (test GUI, ugly, used as debugging tool)
Features useful for implementing services: database, install, pvCopy, pvAccess, and monitor. In addition, some subset of "support".
pvIOCJava implements portDriver. Attaches to the fields.
portDriver does not know (define) anything about V4 records.
swtshell - no need to implement it in CPP. swtshell should be taken out of pvIOCJava and redone. Based on SWT.
GW: Marty, do you suggest that next year (after Sep) more work should be spent to fully implement the CPP IOC?
GW/MK: Motivation is to access HW.
GW: Is it important to run on VME/embedded IOC?
MK: Yes. We (MK and MS) hope that V4 base will compile on VME platforms.
GW: Would you still call it pvIOC (when coexisting with V3 IOC)?
MK: "Marketing people" should decide.
GS: What about "V4 engine"?
DC: Naming should be revised (confusing for new people).
BD: Do we have a standard record set?
MK: I've done standard components that build records. I've implemented something similar to ai, etc.
GW: Let's try to make a list of items for next charter!
NEW TOPIC: Beam line directed V4 working group.
[Group went through the basic charter deliverables that might be the remit of a new beamline oriented EPICS V4 working group]
1) heterogeneous data aggregator on EPICS V3 - part of this charter. No normative type. MK: Not hard (when implemented as a channel RPC request).
2) connect ntimage and ndarray into v4 server
3) actions on images (ala areaDetector, such as background subtraction...)
4) review normative types in light of beam line control.
5) fully implemented pvIOCCPP for supporting beam line control
6) prototype V4 record using V3 image-processing record
7) scan service
MK: Re 5, we can directly access V3 record data (C++ pointer) as CPP caV3 does.
MK: Wants to spend time to implement CPP pvCopy ("nobody" wants to implement get/put/... support code).
GW: Re 1, Not much work if done in the V3 IOC.
PG: What about using directory service to build up...?
GW: This can be quickly done in V4 (as a service) in Java. However, we want to address Dirk's requirement - easy to do/deploy for a V3 developer.
TC: Wants it all done in the V3 IOC.
MK: Feasible to do without much work: start a V3 IOC, and within the IOC you also start a pvAccess server using caV3. There is a configuration file that specifies which fields to aggregate into PVRecords.
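The configuration file MK describes might look something like the following (entirely hypothetical element, attribute, and channel names; only the idea of mapping V3 channels into fields of one PVRecord comes from the discussion, and the xml module listed above suggests an XML form):

```
<record name="BL:scanInfo">
  <field name="energy" v3channel="BL:MONO:ENERGY"/>
  <field name="ring"   v3channel="SR:CURRENT"/>
  <field name="image"  v3channel="CAM1:ArrayData"/>
</record>
```

The pvAccess server would then serve BL:scanInfo as a single structured value aggregated from the listed V3 channels.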
MK: Willing to work with Dirk from the start of September (done by end of September)
PG: Interested in points 2, 3, 4