It’s easy to be wise after the fact, and it’s equally easy to look at something designed two years ago and find better ways to do it now.
I am not sure, however, that this excuses the data access mechanisms in Part 19 of the DICOM Standard. Clearly designed with one person or group’s particular implementation in mind, the data transfer interface places heavy implementation requirements on the host so that applications need not parse DICOM themselves. Conversely, the requirements it places on the application make the interface poorly suited to applications that generate large quantities of data. I contend that the first round of an interface should talk existing protocols wherever possible, not invent new and redundant ones until they have been proven necessary.
The ‘Native DICOM Model’ in section A.1 is an XML formulation of a DICOM object, clearly intended for applications that choose not to parse DICOM. This seems a strange goal: DICOM parsing is hardly so unusual a task that there is any shortage of candidate toolkits.
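To be concrete, the Native DICOM Model looks roughly like this – the element and attribute names follow the PS3.19 schema, but the patient and values below are invented – and it takes only a handful of lines of stock XML parsing to read it back:

```python
# A minimal sketch of a PS3.19 Native DICOM Model document and of reading
# it with a stock XML parser. Element/attribute names follow the schema;
# the data itself is invented for illustration.
import xml.etree.ElementTree as ET

NATIVE_MODEL = """<?xml version="1.0" encoding="UTF-8"?>
<NativeDicomModel>
  <DicomAttribute tag="00100010" vr="PN" keyword="PatientName">
    <PersonName number="1">
      <Alphabetic><FamilyName>Doe</FamilyName><GivenName>Jane</GivenName></Alphabetic>
    </PersonName>
  </DicomAttribute>
  <DicomAttribute tag="00080060" vr="CS" keyword="Modality">
    <Value number="1">CT</Value>
  </DicomAttribute>
</NativeDicomModel>"""

root = ET.fromstring(NATIVE_MODEL)

def attribute_value(root, keyword):
    """Return the first Value of the DicomAttribute with the given keyword."""
    for attr in root.iter("DicomAttribute"):
        if attr.get("keyword") == keyword:
            value = attr.find("Value")
            return value.text if value is not None else None
    return None

print(attribute_value(root, "Modality"))  # -> CT
```

Simple enough – but then, so is calling a DICOM toolkit’s equivalent one-liner on the original object.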
Still, this is not as concerning as section A.2, ‘Abstract Multi-Dimensional Image Model’. This section requires that hosting applications be able to interpret DICOM image data – transforming it into real-world values, interpolating, and doing esoteric things such as splitting MR sequences and CT gates, none of which is required anywhere else in the standard – and also convert such data back into DICOM. I understand the motivation for this requirement, since it is clearly intended to centralise this kind of processing, but what it mostly does is put prospective implementers off. Indeed, the very prospect of implementing this transformation is enough to put me off.
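The simplest of these transformations, at least, is well known: the Modality LUT maps stored pixel values linearly to real-world units (Hounsfield units for CT) via RescaleSlope and RescaleIntercept. A sketch of just that step – bearing in mind that a conformant host must also handle full Modality LUT sequences, per-frame functional groups, and the dimension-splitting mentioned above:

```python
# Sketch of the simplest real-world value mapping PS3.19's image model
# demands of the host: output = stored * RescaleSlope + RescaleIntercept.
# (A real implementation must also handle Modality LUT sequences,
# per-frame functional groups, interpolation, and dimension splitting.)

def to_real_world(stored_values, rescale_slope, rescale_intercept):
    """Apply the linear Modality LUT transformation to raw stored values."""
    return [v * rescale_slope + rescale_intercept for v in stored_values]

# A typical CT rescale: slope 1, intercept -1024 yields Hounsfield units.
hu = to_real_world([0, 1024, 2048], 1.0, -1024.0)
print(hu)  # -> [-1024.0, 0.0, 1024.0]
```

The one-liner is not the problem; it is everything beyond it – and the round trip back into DICOM – that makes the requirement so heavy.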
Even after the host has bent over backwards to support the two models, the work is not yet done. It is usually reasonable for a host to be able to supply data on demand for an application; when one considers the likely use cases for this interface (PACS clients, anticipatory processing), one can usually accept the requirement to be able to provide data multiple times. However, the host is also required to provide a generalised query interface over the models, using XPath. This implies that the host has efficient access to every possible model of the data, and fetching data over remote connections is probably not feasible without extensive local caching. Again, I understand the motivation, but understanding it alone is not going to produce an implementation.
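To give a flavour of what that query requirement means in practice, here is an XPath-style query over a Native DICOM Model fragment, using Python’s ElementTree and its limited XPath subset (the document is invented). A conformant host must answer arbitrary such queries over every model it exposes, which is exactly where the caching burden comes from:

```python
# Sketch of the kind of XPath query a PS3.19 host must answer over its
# models. ElementTree supports only a limited XPath subset; a conformant
# host would need a full XPath engine. The document below is invented.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<NativeDicomModel>
  <DicomAttribute tag="00080060" vr="CS" keyword="Modality">
    <Value number="1">MR</Value>
  </DicomAttribute>
  <DicomAttribute tag="00180081" vr="DS" keyword="EchoTime">
    <Value number="1">93</Value>
  </DicomAttribute>
</NativeDicomModel>
""")

# "Find the value of the attribute whose keyword is EchoTime."
results = doc.findall(".//DicomAttribute[@keyword='EchoTime']/Value")
print([v.text for v in results])  # -> ['93']
```

Answering this against a local XML string is trivial; answering it against an arbitrary model of data that lives on a remote PACS is an entirely different proposition.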
Finally, the application gets its data. I clearly have a bias in my interpretation of the use cases, since I generally deal in large datasets (deformable registrations, warped dose volumes, contours), but once the application has generated its data it is required to cache it for as long as the host feels like making it wait. In the general case, this means shifting it to disk. The symmetric DataExchange mechanism that must have appealed to the authors of this interface falls down here, and feels completely unnecessary.
My choice is straightforward: talk DICOM wherever possible. Part 19 already expects that DICOM data can be transferred via HTTP (for which WADO is a candidate); for a first interface, I would contend that a list of URLs, sorted by study and series, would be enough to get data to the application. I see no compelling reason for a notification arrangement for host data provision; the URLs should simply be provided at launch time. The new WADO-RS standard (Supplement 161) can also provide the data in the XML format, and hence give access to the metadata alone; while I wish they’d just used the standard DICOM format, it’s better than the abstract image model!
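WADO-RS resource paths are simple enough that a launch-time list of URLs costs the host almost nothing to provide. A sketch, with an invented base URL and UIDs; the `/studies/{study}/series/{series}` path structure follows the WADO-RS specification:

```python
# Sketch of the WADO-RS resource paths a host could hand an application at
# launch time. Base URL and UIDs are invented; the path structure follows
# WADO-RS. Appending "/metadata" to a study or series path retrieves the
# metadata alone.

def wado_rs_series_url(base, study_uid, series_uid):
    """Build the WADO-RS URL for retrieving one series."""
    return f"{base}/studies/{study_uid}/series/{series_uid}"

base = "https://pacs.example.com/dicom-web"   # hypothetical endpoint
url = wado_rs_series_url(base, "1.2.840.113619.2.1", "1.2.840.113619.2.1.1")
print(url)
# -> https://pacs.example.com/dicom-web/studies/1.2.840.113619.2.1/series/1.2.840.113619.2.1.1
```

A flat, sorted list of such URLs tells the application everything it needs to fetch data on its own schedule, with no callback machinery on either side.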
Similarly, for the application returning data to the host, a lazy fetch mechanism seems unnecessary: HTTP POSTing the data to the host should suffice. I believe there is a storage proposal pending (edit: STOW-RS, Supplement 163); this uses multipart HTTP requests to send multiple data sets at a time (including in the XML format). A prospective new Web Services interface would surely do better to build on WADO-RS et al. than to roll yet another data exchange interface.
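Even the multipart framing such a store requires is a modest amount of code. A sketch of a STOW-RS-style request body carrying two data sets – the payload bytes and endpoint are invented, and a real client would of course stream rather than buffer large volumes:

```python
# Sketch of a STOW-RS-style store: one multipart/related HTTP request
# carrying several DICOM payloads. Payload bytes are invented; a real
# client would stream large data sets rather than buffering them.

def build_multipart_related(parts, boundary):
    """Frame each payload as a multipart/related part with a DICOM type."""
    body = b""
    for payload in parts:
        body += (f"--{boundary}\r\n"
                 "Content-Type: application/dicom\r\n\r\n").encode()
        body += payload + b"\r\n"
    body += f"--{boundary}--\r\n".encode()
    return body

boundary = "DICOM-BOUNDARY"
body = build_multipart_related([b"<dataset one>", b"<dataset two>"], boundary)
content_type = f'multipart/related; type="application/dicom"; boundary={boundary}'
# The application would then POST `body` to the host's studies endpoint
# with this Content-Type header, e.g. via http.client or urllib.request.
print(body.count(b"Content-Type: application/dicom"))  # -> 2
```

One POST, and the application’s responsibility for the data ends there – no caching on the host’s timetable, no symmetric DataExchange machinery.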
Next time: general thoughts and conclusion.