Tuesday, August 28, 2012

Chapter 12: Pixel Data

Frame 0001
I guess one can't escape talking about pixels when dealing with DICOM. After all, imaging is what DICOM is all about, and digital images are built from pixels. So today, to celebrate release 2.0.2.6 (and the x64 version) of the DICOM Toolkit, I'm finally going to touch the heart of every DICOM image: the Pixel Data.

For today's post I've prepared a little C++ test application that really does nothing much other than putting pixels into the pixel data of a DICOM file and saving it. Well, not exactly nothing much, because it creates a huge DICOM file, more than 0.7 GB, compresses it, and never uses more than 20 MB of memory. If you want to know how, read on.


If you haven't done so yet, download the example source code and the latest version of RZDCX and regsvr32 it. For this example it's important to use version 2.0.2.6 or later. You can also download the 200-frame JPEG compressed DICOM file that the test application creates. Just make sure you have enough RAM before double clicking it, because most viewers will take ~770MB to display it.

The first part of the application, up to line 87 (look for the comment "Setting the image pixel group elements"), is rather standard. We set some elements that are mandatory in every DICOM object, like patient name and ID and the UIDs for the study, series and instance, and set the object class to secondary capture.

Now comes the image pixel module with the tags starting with 0028. This group is responsible for describing how to read the pixels. I'm going to go over each one and explain its use and meaning. Here's a dump of this group from the uncompressed file created by this example:

(0028,0002) US 3                                        #   2, 1 SamplesPerPixel
(0028,0004) CS [RGB]                                    #   4, 1 PhotometricInterpretation
(0028,0006) US 0                                        #   2, 1 PlanarConfiguration
(0028,0008) IS [200]                                    #   4, 1 NumberOfFrames
(0028,0010) US 960                                      #   2, 1 Rows
(0028,0011) US 1280                                     #   2, 1 Columns
(0028,0100) US 8                                        #   2, 1 BitsAllocated
(0028,0101) US 8                                        #   2, 1 BitsStored
(0028,0102) US 7                                        #   2, 1 HighBit
(0028,0103) US 0                                        #   2, 1 PixelRepresentation
(0028,1050) DS [128]                                    #   4, 1 WindowCenter
(0028,1051) DS [256]                                    #   4, 1 WindowWidth
(0028,1052) DS [0]                                      #   2, 1 RescaleIntercept
(0028,1053) DS [1]                                      #   2, 1 RescaleSlope
(7fe0,0010) OB 00\00\00\00\00\00\00\00\00\00\00\00\ ... # 500616000, 1 PixelData


And here's what the JPEG compressed dump looks like:

(0028,0002) US 3                                        #   2, 1 SamplesPerPixel
(0028,0004) CS [YBR_FULL_422]                           #  12, 1 PhotometricInterpretation
(0028,0006) US 0                                        #   2, 1 PlanarConfiguration
(0028,0008) IS [200]                                    #   4, 1 NumberOfFrames
(0028,0010) US 960                                      #   2, 1 Rows
(0028,0011) US 1280                                     #   2, 1 Columns
(0028,0100) US 8                                        #   2, 1 BitsAllocated
(0028,0101) US 8                                        #   2, 1 BitsStored
(0028,0102) US 7                                        #   2, 1 HighBit
(0028,0103) US 0                                        #   2, 1 PixelRepresentation
(0028,2110) CS [01]                                     #   2, 1 LossyImageCompression
(0028,2112) DS [18.0721]                                #   8, 1 LossyImageCompressionRatio
(0028,2114) CS [ISO_10918_1]                            #  12, 1 LossyImageCompressionMethod
(7fe0,0010) OB (PixelSequence #=201)                    # u/l, 1 PixelData


You can see that there are differences, because the data in these elements should describe the pixels as they are in the pixel data element. Notice the difference in Photometric Interpretation: in the JPEG compressed file it's YBR_FULL_422, meaning the pixels are in the YCbCr color space. Also notice that the uncompressed file has a simple array of bytes in the pixel data element, while the JPEG compressed one has a sequence of 201 items, each holding a frame, plus one more (the first one) with an offset table pointing to the offset of each frame in the sequence. Looks like Chinese? Let's go over each element and explain them all.
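To make the offset table concrete, here's a rough sketch of how its entries could be computed. This is my own illustration, not RZDCX code; the helper name is mine, and the layout assumptions (an 8-byte item header per frame item, even-padded payloads, offsets measured from the first byte of the first frame item) follow the standard's encapsulated pixel data rules.

```cpp
#include <cstdint>
#include <vector>

// Sketch: build the Basic Offset Table entries for a set of compressed
// frames. Each entry is the byte offset of a frame's item, counted from
// the first byte of the first frame item after the offset table.
std::vector<uint32_t> buildOffsetTable(const std::vector<uint32_t>& frameLengths)
{
    std::vector<uint32_t> offsets;
    uint32_t offset = 0;
    for (uint32_t len : frameLengths)
    {
        offsets.push_back(offset);
        // Each frame item contributes an 8-byte item header (tag FFFE,E000
        // plus a 4-byte length) and its even-padded payload.
        offset += 8 + len + (len % 2);
    }
    return offsets;
}
```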

Rows and Columns

Rows (0028,0010) and Columns (0028,0011) define the size of the image. Rows is the height (i.e. the Y) and Columns is the width (i.e. the X). In our example every frame is 1280 x 960 pixels. We'll see what a frame is in a minute.

Samples Per Pixel

Samples per pixel (0028,0002) defines the number of color channels. In grayscale images like CT and MR it is set to 1 for the single grayscale channel, and for color images, like in our case, it is set to 3 for the three color channels: Red, Green and Blue.

Photometric Interpretation

The photometric interpretation (0028,0004) element is rather unique to DICOM. It defines what every color channel holds. You can think of it as the color space used to encode the image. In our example it is "RGB", meaning the first channel is Red, the second is Green and the third is Blue. In grayscale images (like CT or MR) it is usually "MONOCHROME2", meaning it's grayscale and 0 should be interpreted as Black. In some objects, like some fluoroscopic images, it may be "MONOCHROME1", meaning it's grayscale and 0 should be interpreted as White. Other values may be "YBR_FULL" or "YBR_FULL_422", meaning the color channels are in the YCbCr color space that is used in JPEG.
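Since YBR_FULL comes up whenever JPEG is involved, here's a rough sketch of the full-range YCbCr to RGB conversion. This is my own helper, not an RZDCX call; the coefficients are the standard JPEG full-range ones.

```cpp
#include <algorithm>

// Sketch: convert one YBR_FULL (full-range YCbCr) sample triplet to RGB
// using the standard JPEG full-range coefficients, clamping to 0..255.
struct RGB { unsigned char r, g, b; };

static unsigned char clamp8(double v)
{
    return (unsigned char)std::min(255.0, std::max(0.0, v + 0.5));
}

RGB ybrFullToRgb(unsigned char y, unsigned char cb, unsigned char cr)
{
    RGB p;
    p.r = clamp8(y + 1.402    * (cr - 128));
    p.g = clamp8(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128));
    p.b = clamp8(y + 1.772    * (cb - 128));
    return p;
}
```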

Planar configuration

Planar configuration (0028,0006) defines how the color channels are arranged in the pixel data buffer. It is relevant only when Samples Per Pixel > 1 (i.e. for color images). It can be either 0, meaning the channels are interlaced, which is the common way of serializing color pixels, or 1, meaning they are separated, i.e. first all the reds, then all the greens and then all the blues, like in print. The separated arrangement is rather rare, and when it is used it's usually with RLE compression. The following image shows the two ways. BTW, if this element is missing, the default is interlaced.
Interlaced vs separated Planar Configuration
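If you ever need to normalize a separated frame to the more common interlaced layout, the shuffle looks roughly like this (the helper name and signature are mine, not part of any toolkit):

```cpp
#include <cstddef>
#include <vector>

// Sketch: convert one color frame from Planar Configuration 1
// (separated: RRR...GGG...BBB) to 0 (interlaced: RGBRGB...).
std::vector<unsigned char> separatedToInterlaced(
    const std::vector<unsigned char>& src, size_t pixels, size_t samples)
{
    std::vector<unsigned char> dst(src.size());
    for (size_t p = 0; p < pixels; ++p)
        for (size_t s = 0; s < samples; ++s)
            // Sample s of pixel p sits at plane s in the source buffer.
            dst[p * samples + s] = src[s * pixels + p];
    return dst;
}
```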

Bits Allocated, Bits Stored and High Bit

Luckily, most toolkits, RZDCX among them, take care of extracting and manipulating the pixels for you, but if you ever need to do it yourself, you'll have to do it according to these attributes, and also take little/big endian into consideration.

Bits Allocated (0028,0100) defines how much space, in bits, is allocated in the buffer for every sample. In our case we encode a 24-bit RGB image, the most standard image on earth, so every channel is encoded in 8 bits, i.e. a complete byte, and samples are always aligned with bytes. All DICOM objects (at least the ones I have looked into so far) use complete bytes for bits allocated, so it is either 8, or 16 for grayscale images with more than 256 levels of gray.

Bits Stored (0028,0101) defines how many of the allocated bits are actually used. In our case, as every sample value is between 0 and 255, all 8 bits are used, so bits stored is 8. Returning to CT images, where each sample value is between 0 and 4095, bits stored is 12 (2^12 = 4096). The remaining four bits are not part of the pixel value and should be masked out when reading the pixels. Sometimes these bits are used to store overlay plane data.

High Bit (0028,0102) defines how the stored bits are aligned inside the allocated bits. It is the bit number (the first bit is bit 0) of the last bit used. In practice it is always set to one less than the bits stored, but hypothetically it doesn't have to be that way. In our case the high bit is 7. In CT it is 11. Here's an image from the DICOM standard that shows how pixels are arranged bit-wise.
CT Pixel Data in Memory

Pixel Representation

Pixel Representation (0028,0103) is either unsigned (0) or signed (1). The default is unsigned. There's an anecdotal issue here with the US and SS VR codes and this attribute: when it is set to signed, the group 0028 attributes whose VR can be either (like Smallest and Largest Image Pixel Value) should be encoded as Signed Short (SS), and when it's unsigned they should be Unsigned Short (US) too.
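Putting Bits Stored, High Bit and Pixel Representation together, extracting a sample from a 16-bit word can be sketched like this (my own helper, not part of RZDCX; it assumes the word has already been read with the correct endianness):

```cpp
#include <cstdint>

// Sketch: recover a stored sample from a 16-bit word using Bits Stored,
// High Bit and Pixel Representation as described above.
int32_t extractSample(uint16_t word, int bitsStored, int highBit, bool isSigned)
{
    // Shift so the stored bits are right-aligned, then mask off the rest
    // (e.g. the unused high bits that may hold overlay plane data).
    uint16_t shifted = word >> (highBit + 1 - bitsStored);
    uint16_t mask = (uint16_t)((1u << bitsStored) - 1);
    uint16_t value = shifted & mask;
    if (isSigned && (value & (1u << (bitsStored - 1))))
        return (int32_t)value - (1 << bitsStored);   // sign-extend
    return value;
}
```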

Number of Frames

Number of Frames (0028,0008) defines how many frames are in the image. Usually there's only one and this element is omitted but in DICOM you can create multi-frame image objects and then you have to set this element. In our case we create a multi-frame image with 200 frames.

This concludes all the mandatory elements of the image pixel module but one: the pixel data.

Pixel Data

It's time to set the pixels into the pixel data element (7FE0,0010). You may ask why all the other image pixel module elements are in group 0028 and only the pixel data is not. Though we usually don't ask DICOM "why" questions, this time I would like to ask, because I think there's a good answer. Think about it until we get to the end of this post.

Let's calculate the pixel data length. We have 1280 x 960 pixels in each frame, 200 frames, 3 samples per pixel, and each sample is one byte, so we get:
ROWS * COLUMNS * NUMBER_OF_FRAMES * 
SAMPLES_PER_PIXEL * (BITS_ALLOCATED/8) 
bytes, that's
1280 * 960 * 200 * 3 * (8/8) = 737280000 bytes!
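The same arithmetic as a tiny helper (the function name is mine, not part of RZDCX):

```cpp
#include <cstdint>

// Sketch: pixel data length in bytes for an uncompressed image, computed
// from the image pixel module attributes discussed above.
constexpr uint64_t pixelDataLength(uint64_t rows, uint64_t cols,
                                   uint64_t frames, uint64_t samples,
                                   uint64_t bitsAllocated)
{
    return rows * cols * frames * samples * (bitsAllocated / 8);
}

// Matches the 737280000-byte figure computed in the text.
static_assert(pixelDataLength(960, 1280, 200, 3, 8) == 737280000ULL,
              "size of the example's uncompressed pixel data");
```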

720 MB! That's a lot! You don't expect a piece of software to allocate such a memory space in one chunk and get away with it, do you? I don't. That's why I've added SetValueFromFile to DCXELM and SetJPEGFrames to DCXOBJ.

Let's have a look at the last part of the test application:

            ////////////////////////////////
            // Create dummy pixels frames //
            ////////////////////////////////

            el->Init(rzdcxLib::PixelData);
            el->ValueRepresentation = VR_CODE_OB;

            int frameSize = ROWS*COLUMNS*SAMPLES_PER_PIXEL;
            char *pixels = new char[frameSize];

            // Write 200 frames to a file
            ofstream s("pixel.data", ios_base::binary);
            for (int i=0; i<NUMBER_OF_FRAMES; i++)
            {
                  number2image(i+1); // Let's add some salt to it
                  scaleImageTo(COLUMNS, ROWS, pixels);
                  s.write(pixels, frameSize);
            }
            s.close();

            delete[] pixels;

            int pixelsLength = frameSize*NUMBER_OF_FRAMES;
            el->SetValueFromFile("pixel.data", 0, pixelsLength);
            obj->insertElement(el);

            // Save it as is
            obj->saveFile("color.uncompressed.dcm");

            // Compress it as JPEG Lossless
            obj->SaveAs("Color.jpegLossless.dcm",
                  TS_LOSSLESS_JPEG_DEFAULT, 100, "c:\\tmp");

            // Compress it as JPEG
            obj->SaveAs("Color.jpeg.dcm",
                  TS_JPEG, 100, "c:\\tmp");


First we create the pixel data element and set the VR to OB. OB (Other Byte) means that every value in the data element is a byte, which is what we want in this case. For CT images, where every sample is stored in two bytes, we should use the OW (Other Word) VR. Because we deal with binary data, this hint is required for the toolkit to store the data properly.

In the for loop we write all the frames, one after the other, into the pixel data file. This is where you should copy your image bytes. To add some spice to this example I've burned the frame number into each frame. This is done using the code in DigiTools.h, a little something I've written for this post too. I really work on these posts.

The last step is setting the pixel data file as the pixel data element value. This call doesn't load the data into memory. You are responsible for keeping the pixel data file around for as long as the DCXOBJ instance (obj) lives.

Now we can call saveFile. What's nice is that even here the toolkit doesn't load all the data into memory. Instead it reads small chunks of the pixel data file (16K each, if I remember correctly) and copies them into the DICOM file.

If you run this application and open the Windows task manager, you will notice that throughout the run of this test application it never takes more than 20 MB of RAM.

The SaveAs calls at the end keep the same standard and utilize a temp folder that you provide to do the compression. Every compressed frame is temporarily stored in a file in this folder and then the frames are copied one by one into the DICOM file.

In this example we first compress to JPEG Lossless and then to JPEG. This may take some time to run. On my workstation (Intel Core 2 Duo E6550 @ 2.33GHz with 8GB of RAM) it takes about two minutes, most of which is spent in the two SaveAs calls. The toolkit is responsible for keeping memory resources available, and you are responsible for disposing of the temp folder and its contents after use. I think that's nice. Try doing this with another toolkit.

If you are not keen on memory resources you can call EncodeJpeg or EncodeLosslessJpeg, or simply set the TransferSyntax property of DCXOBJ. This does not require a temp folder but uses a lot of memory. You can also use SetJpegFrames to set the pixel data from JPEG files and SetBMPFrames to set it from bitmap files.

So why is pixel data (7FE0,0010) and not, for example, (0028,9998)? I think it's because it is a very long data element, so we want it to be the last element in the file. As you remember, elements are written in order, from small tag numbers to big tag numbers. With the pixel data as the last element in the file, we can read the whole 'DICOM header' and skip the heavy lifting of the pixel data. For example, say we want to scan a large data set, sort the images according to their 3D volume location, and only then load the pixels into a volume buffer. This way we can stop reading every file after group 0028, saving a lot of disk time and memory. That's a good reason and good software engineering, don't you think?
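To make the idea concrete, here's a rough sketch of such a header skim for an Explicit VR Little Endian Part 10 file. This is my own illustration, not RZDCX code: it assumes a little-endian host, ignores Implicit VR, nested sequences and error handling, and relies on the fact that the File Meta group is always Explicit VR Little Endian.

```cpp
#include <cstdint>
#include <cstring>

// Sketch: walk an Explicit VR Little Endian buffer element by element and
// return the byte offset of the PixelData tag (7FE0,0010), or -1 if it is
// not found. Everything before that offset is the 'DICOM header'.
long findPixelDataOffset(const unsigned char* buf, size_t size)
{
    size_t pos = 132;                        // 128-byte preamble + "DICM"
    while (pos + 8 <= size)
    {
        uint16_t group   = buf[pos]     | (buf[pos + 1] << 8);
        uint16_t element = buf[pos + 2] | (buf[pos + 3] << 8);
        if (group == 0x7FE0 && element == 0x0010)
            return (long)pos;                // header read, pixels skipped
        char vr[3] = { (char)buf[pos + 4], (char)buf[pos + 5], 0 };
        uint32_t len;
        if (!strcmp(vr, "OB") || !strcmp(vr, "OW") || !strcmp(vr, "OF") ||
            !strcmp(vr, "SQ") || !strcmp(vr, "UT") || !strcmp(vr, "UN"))
        {
            // These VRs have 2 reserved bytes followed by a 4-byte length.
            len = buf[pos + 8] | (buf[pos + 9] << 8) |
                  (buf[pos + 10] << 16) | ((uint32_t)buf[pos + 11] << 24);
            pos += 12 + len;
        }
        else
        {
            // All other VRs have a 2-byte length.
            len = buf[pos + 6] | (buf[pos + 7] << 8);
            pos += 8 + len;
        }
    }
    return -1;                               // no pixel data element
}
```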

38 comments:

  1. Footnote: a comment worth noting from Mathieu Malaterre on DICOM's Google group (comp.protocol.dicom):
    General purpose color multi-frame objects should use the Multi-frame True Color SC Image SOP class (1.2.840.10008.5.1.4.1.1.7.4) and not the one used in this example. However, support for the regular SC Image object is wider, and using the new MF color object may result in association negotiation errors. So practically, I would use it as it is in the example.

  2. Hi,
    Please clarify the below query related to Pixel Spacing (0028,0030).
    I would like to know whether different pixel spacing values are possible in one DICOM image.
    For example: (0028,0030) ==> 0.175\0.275

    1. Hi Thiru Kumaran,
      If you use the MULTI-FRAME TRUE COLOR SC IMAGE (1.2.840.10008.5.1.4.1.1.7.4) then you may be able to use the functional groups to set pixel spacing in the Pixel Measures Macro (see the DICOM standard, C.7.6.16.2.1 Pixel Measures Macro), but I didn't find instructions in the standard on whether this macro is allowed in a per-frame functional group or only in the common group for this SOP Class.
      Having said that, I wouldn't recommend using this feature of the standard at all, because I don't think most applications existing today will support it.
      Additionally, I can't imagine a good design reason to create such images. How come pixel spacing changes over a sequence of images grouped together as a single instance?
      Maybe you can share the reason for doing so?
      Regards,
      Roni

  3. Hello Roni,
    With my .dcm file I can read positive and negative pixel values in "osiris" or "xmedcon", but once I use dicomread it changes all the values. I have already tried the following solution:

    X = dicomread('the file name');
    meta = dicominfo('the file name');
    Y = X * meta.RescaleSlope + meta.RescaleIntercept;

    But the Y matrix still has the wrong values for the positive pixels and now zero for the negative ones.
    Ex.:
    OSIRIS MATLAB [X] MATLAB [Y]
    -727 301 0
    -724 302 0
    -722 303 0
    -723 301 0
    89 1117 93
    98 1121 97
    101 1125 101
    101 1127 103
    101 1128 104

    For the 'file name', I have used the file extracted directly from the CD recorded from the CT scan or .dcm file saved on "OSIRIS" or "XMEDCON" and in both cases it generates the same matrix [X].
    You can find, please, those files in the links bellow:

    77815542: file exported directly from CT scan software
    https://sites.google.com/site/dicomvsmatlab/home/arquivos/77815542?attredirects=0&d=1

    77815542.dcm: file exported from OSIRIS
    https://sites.google.com/site/dicomvsmatlab/home/arquivos/77815542.dcm?attredirects=0&d=1

    77815542.mat: workspace from matlab matrix dicomread
    https://sites.google.com/site/dicomvsmatlab/home/arquivos/77815542.mat?attredirects=0&d=1

    And below is the OSIRIS program link:
    http://www.softpedia.com/get/Science-CAD/Osiris-Viewer.shtml

    I would really appreciate any help!
    Regards,
    Susana

    1. Hi Susana,
      OSIRIX is right. It shows the values after Modality LUT transformation.
      If you look at the dump of the file, at group 0028 you will see the following:

      (0028,0002) US 1 # 2, 1 SamplesPerPixel
      (0028,0004) CS [MONOCHROME2] # 12, 1 PhotometricInterpretation
      (0028,0010) US 512 # 2, 1 Rows
      (0028,0011) US 512 # 2, 1 Columns
      (0028,0030) DS [0.1171875\0.1171875] # 20, 2 PixelSpacing
      (0028,0100) US 16 # 2, 1 BitsAllocated
      (0028,0101) US 12 # 2, 1 BitsStored
      (0028,0102) US 11 # 2, 1 HighBit
      (0028,0103) US 0 # 2, 1 PixelRepresentation
      (0028,0106) US 241 # 2, 1 SmallestImagePixelValue
      (0028,0107) US 2745 # 2, 1 LargestImagePixelValue
      (0028,1050) DS [143\300] # 8, 2 WindowCenter
      (0028,1051) DS [892\1500] # 8, 2 WindowWidth
      (0028,1052) DS [-1024] # 6, 1 RescaleIntercept
      (0028,1053) DS [1] # 2, 1 RescaleSlope
      (0028,1055) LO [WINDOW1\WINDOW2] # 16, 2 WindowCenterWidthExplanation

      Rescale Intercept and Rescale Slope are -1024 and 1 resp.
      The data is unsigned as you can see from Pixel Representation.

      When you read it into matlab you get a matrix X of type uint16,
      so when you transform it you still have a uint16 matrix.
      You need to cast it, or to create Y as int16 (or better, int32) of the same size as X (512 x 512), and then apply the transformation. Done this way, you will get values identical to Osirix.

      Roni
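The widening Roni describes can be sketched in C++ terms as well (the helper name is mine, and the slope/intercept values come from the dump above). Doing the arithmetic in uint16 would wrap 301 + (-1024) around to 64813 instead of producing the negative HU value; computing in double and returning a signed type avoids that.

```cpp
#include <cstdint>

// Sketch: apply the Modality LUT (value = stored * RescaleSlope +
// RescaleIntercept) in a wide signed type to avoid unsigned wrap-around.
int32_t storedToHounsfield(uint16_t stored, double slope, double intercept)
{
    return (int32_t)((double)stored * slope + intercept);
}
```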

  4. Hi Roni,

    I am new to DICOM; I work more on the modality side, specifically CT. My question is how PACS handles the differences between vendors, for example:
    - the old CT scanner is sending these values:
    0028 0100 2 | bits_allocated | US | 1 | 0x0010 16
    0028 0101 2 | bits_stored | US | 1 | 0x000c 12
    0028 0102 2 | high_bit | US | 1 | 0x000b 11
    0028 0103 2 | pixel_representation | US | 1 | 0x0000 0

    and the new scanner is sending these values:

    0028 0100 2 | bits_allocated | US | 1 | 0x0010 16
    0028 0101 2 | bits_stored | US | 1 | 0x0010 16
    0028 0102 2 | high_bit | US | 1 | 0x000f 15
    0028 0103 2 | pixel_representation | US | 1 | 0x0001 1

    The images from the new scanner don't look good on the PACS even though they look great on the CT console. Does the PACS admin need to change some configuration on the PACS side to handle the difference in the data?
    Appreciate your help.

    AM

    1. Hi AM,
      16 bits stored with signed pixels is unusual, but the PACS should be able to display it. My guess is that you also use a JPEG lossless compression transfer syntax.
      Many implementations of JPEG lossless, specifically the ones that use the Cornell University code base and not the JIG code, have a bug and fail on 16-bit signed pixel data.
      If this is not the case then it's probably something with reading the 16-bit signed values, because casting to int makes some applications treat it as 17 bits (e.g. dcmtk).
      I recommend changing your implementation to more common pixels, like 16-bit unsigned, if you have to use all 16 bits. It's probably not your bug, but it will spare you from explaining it to customers.
      Roni

    2. Thank you Roni, very much appreciated.

    3. p.s. let me know if it was JPEG Lossless Issue. These bugs were fixed and I might be able to send you a patch for that. Send me an email or fill the contact form on www.roniza.com

  5. I will be working on this issue tomorrow with Agfa rep., on their Impax 6.3.1, they claim that the problem from our CT, but I was able to have our images displayed perfectly using another application on their PACS monitor.
    I will let you know the outcome.
    Thanks for your help.

  6. I have a question

    I need pixel data in DICOM image to implement volume rendering.
    Is there no binary file or sample code which extracts only pixel data as 2D or 3D array?
    Or, how can I handle pixel data in DICOM by using DCMTK?

  7. OFCondition cond = _item->loadAllDataIntoMemory();
    if (cond.bad())
    return DCMTKError(cond);

    DcmElement* elem;
    cond = _item->findAndGetElement(DCM_PixelData, elem);

    if (cond.good())
    {
    if (elem)
    {
    DcmPixelData* pixelData = (DcmPixelData *)elem;

    *pVal = 0;
    Uint16 *data = NULL;

    OFCondition cond = pixelData->getUint16Array(data);
    if (cond.good() && (data))
    {
    *pVal = (int)data;
    return NOERROR;
    }
    }
    }

    1. This comment has been removed by the author.

    2. Hi,

      i gone through the above code which you updated.

      But how i need to display the dicom image in vtk ?

      I Decompressed the Dicom Image using DCMTK & i need to load the image

      using pixel data into VTK.

      Please can you tell me how you have done the rendering in VTK?

    3. Your code sample seems to be off, most likely from copying the < in the for loop.

      // Write 200 frames to a file
      ofstream s("pixel.data", ios_base::binary);
      for (int i=0; i
      {
      number2image(i+1); // Let's add some salt to it
      scaleImageTo(COLUMNS, ROWS, pixels);
      s.write(pixels, frameSize);
      }

      That section. Enjoying your blog though.

  8. Hi;

    What about 32 - bit real(float) images?

  9. Yes, that's possible. The only example I could find, though, is RT Dose, and the type there is integer, not float. Why would you want real pixels?

  10. Good evening, how are you? Could you help me draw a new image, coloring the pixels according to the pixel intensity of a DICOM file read using vtkDICOMImageReader? I can read the matrix using two "for" loops and read the intensity of each pixel, mapping e.g. 1 to 50 = blue, and so on. How can I draw a new image doing this?

    1. Hi Pedro,
      After creating the new image follow the steps in this post code examples to insert the data into a DICOM object.
      Roni

  11. Hi Roni,

    I am a student who is currently doing a project that involves measuring lengths on x-ray images that were exported from the PACS system. I am using the Osirix software to perform the measurements. I found it to be a problem when the measurements were in pix. I need the units to be in centimetres and not pix. The furthest I managed to get to was a recalibrating option that required me to set a value in cm for the length that was being measured (in pix). I do not how to recalibrate it. Alternatively, I thought of just simply using a online conversion calculator to convert from pix to cm. I am only measuring vertical distance. I was hoping that you could help me out with this issue. I am quite illiterate in this to be honest so if you can please reply in lay man terms. Can I just convert the length measurements just by using an online conversion calculator? Alternatively, if you can teach me how to recalibrate the units from cm to pix on Osirix so that all measurements will be in cm on Osirix, that would be an even better solution.

  12. This comment by Norbert Neubauer was deleted by mistake so I repost it:

    Hi Roni,
    Great site. I am new to DICOM so please excuse me if I mix some obvious things. I am about to implement a piece of code to transfer DICOM files via internet.

    When I studied the DICOM format specs I was sure that it is possible that in one dcm file I can have many studies, many series and many images.

    Having that in mind I thought that for huge dcm file it would be wise to extract images out of the dcm consisting of many images, send pure dicom without images and send images sequentially . Then on the other side upon request I would compose dynamically the dcm file with all the images that were completely sent.

    However until now I only managed to find dcm file with framed image or one image in the file at most.

    My questions are (I kindly ask for answers):

    1. Do you think that what I described seems feasible (stripping pixel data, sending pure small dcm without pixel data, and uploading pixel separately)?

    2. And do there really exist dcm files consisting of many pixel data sections composed of many images?

    3. Or maybe there are dcm files consisting of, in fact, other dcm files (or some kind of subset data). So in fact I can have one big dcm and "extract" it into many dcms split e.g. by study id. And in that case, when considering transmission of a big dcm, I should split it into a number of smaller dcm files and do something with them. I am a bit lost here. If you could clarify how it works I would appreciate it very much.

    1. Hi Norbert,
      'Streaming' medical imaging info, including images, has always been a challenge. There are many solutions out there and some of them are quite good.
      First of all, I would like to note that you may not want to transfer the entire DICOM header, especially not over an insecure connection. Instead I recommend transferring only the information you need.
      To your questions:
      1) Yes, very reasonable. As I said, there are many ways to do this, and I recommend that you survey existing solutions. With the evolution of HTML5, JavaScript and node.js there are many open source projects, and very good commercial ones for a very reasonable price, that can save you a lot of work.

      2) Yes. There are DICOM files with many 'frames', as we call them. This is very common in ultrasound, where there's a multi-frame ultrasound SOP class, and there are the new CT and MR multi-frame objects. Note one very important thing: the concept of a STUDY is not directly related to a DICOM file. Every DICOM Instance, probably stored as a DICOM file, belongs to exactly one STUDY and is linked to it by the Study Instance UID. The complete content of a study can span multiple DICOM Instances and, more importantly, can change over time as new Instances are added to the study, for example reports, presentation states, post-processed images and so on.

      3) In your implementation, the system you are developing, you can do whatever you like; you are not bound to DICOM. Remember that DICOM is relevant only when you integrate with other systems. Within your system boundaries you are free as a bird. It's when you want to communicate with other systems that DICOM is your tool, your language to speak to them. In a way, DICOM is to Medical Imaging what Written English is to International Business.

  13. Hi Roni,
    Thank you for your answers.

    I will transfer data over secure connections only, so although I take your arguments into account I am still thinking about transferring the full dicom header. This is because I am considering only stripping the "heavy" part and not going too deep into the dicom data, if possible. Just wanted to confirm that "nested" dicom files exist (i.e. one dcm file can contain many sets of data, especially pixel data, which is heavy).

    I actually need to do 2 things.

    a). Transfer dicom data between some nodes
    b). Let my app be visible as dicom node, ie let pacs to send data to me and let operator of my app to search for some data to be send on another dicom node. It means that I need to implement a dicom listener and dicom sender be able to issue dicom queries. Currently as I do it in Java I tried some basics with dcm4chee-toolkit, but I have not decided yet which libraries will be using by the end of the day.


    Re: 1. I have to write it myself, however with the help of libraries, I don't want to reinvent the wheel

    Re: 2. Do you know of a place, or can you offer an example Dicom file to download, that has some nested data? All I found until now is single dcm files or zipped trees of dcm files. If I understand correctly, one dcm file can have some nested data, and each set of that nested data can contain pixel data space. The idea is to recursively parse that kind of nested dicom, exclude the pixel data, and then place the pixel data back dynamically when the image parts are transferred. I have some doubts, however, because whatever I saw was rather small dicom files (but many). So maybe I should rather focus on transferring each of the dcm files from the set of files and not split the particular dcm file into parts (data + pixel).

    Re: 3. I know I can communicate between my nodes in the way that fits transmission requirements not just DICOM, however as my system must also act as dicom node, I finally have to implement some kind of listener offering buffer to store dicom data and be able to query for the data in another systems acting as dicom node.


    Once again thank you for your answers and if you could answer / comment on current ones I would appreciate that very much.

  14. Hi Roni,

    Like Norbert above, I've also been looking for example Dicom files containing multiple images. I'm looking into adding Dicom support to my online and offline volumetric rendering tool (which currently supports Analyze/Nif-TI1), but I'm having trouble locating example files that cover a good range of the different potential configurations/layouts/data types/compressions/etc.

    It would certainly be useful to have some sort of definitive "standards test" data set. Do you know where I could find, or do you have, something like this?

    Thanks,
    Rich

  15. Hi Rich
    There's test data on IHE's web site and on the NEMA DICOM download FTP site.
    There are also many samples on the OsiriX web site.
    I try to delete any data that I get and not keep copies of files after I finish the task related to them.
    Roni

  16. Hi Roni,
    Thanks for the prompt reply! Yeah, I've found some of my current test data from the NEMA DICOM FTP as well. However, the data sets I've found thus far have been pretty poorly labeled/categorized and scattered in respect to establishing a good data set that ensures I've got all of the core bases covered for the binary format. I was hoping such a data set might already exist for the purpose of covering all real in-use data types and layouts, compression types, etc. But if not, I guess I'll have to continue scavenging. :)

    Thanks again,
    Rich

  17. Hi Roni,
    Thank you for the excellent post. I have a question about MR spectroscopy DICOM files. There are a few attributes specific to these files (e.g. Data Point Rows (0028,9001), Data Point Columns (0028,9002), Data Representation (0028,9108), Spectroscopy Data (5600,0020) and so on). I have a file whose SOP Class UID is MR Storage (1.2.840.10008.5.1.4.1.1.4) and not MR Spectroscopy Storage (1.2.840.10008.5.1.4.1.1.4.2). That's why I can't see the spectroscopy-specific tags (I think so). When I open this file with a dicom viewer I see a set of grayscale squares.

    I want to extract the spectroscopy data (an array of intensities) from this file using Pixel Data. It is necessary to create a spectrum for each voxel. Can I do that, or do I only need the data from the Spectroscopy Data (5600,0020) attribute? The standard says that the Spectroscopy Data attribute points to a data stream of intensities that comprise the spectroscopy data.

    I look forward to hearing from you.

    1. Dear A.
      MR Image (1.2.840.10008.5.1.4.1.1.4) and MR Spectroscopy (1.2.840.10008.5.1.4.1.1.4.2) are two different IOD's (or classes).
      An instance of MR Image does not carry MR Spectroscopy data.
      You have to get an MR Spectroscopy Instance.

    2. I have the software Sivic. Using this software I can create a spectrum from a file with the MR Image SOP Class, but I have no idea how Sivic does it. It means that it's possible to store MR Spectroscopy data in files with the MR Image SOP Class. The file that I have was produced by a GE MR machine (maybe this fact is important).

  18. Hi Roni,

    From the above article we can calculate the pixel data size for uncompressed images.

    But is it possible to calculate the pixel data size of compressed images without reading the actual pixel data?

    Thanks,
    Sreeraj

    1. The information about the pixel size in mm is not affected by the image format in which the pixels are stored. Where in this article do you find such linkage?

  19. I have a serious concern about the MR Spectroscopy SOP Class UID 1.2.840.10008.5.1.4.1.1.4.2. When I look at the rows and columns in the dcm, the values are 1 and 1. Since this is a multiframe image, the reader checks for Bits Allocated, which is absent, and fails. How do we fetch such images? Please assist me in resolving this concern.

    Thank You,
    Kishor.

  20. This comment has been removed by a blog administrator.

  21. While loading a dicom file in my software I get the error "pixel data not found in dicom file". Can you explain this?

  22. Hi all, this article is interesting and I have a question. I have worked with DICOM and PACS; I would like to know if, with these tags, I can create a dicom image?

  24. The links to the example image and C++ code are currently broken. Do you think you can make these available again (perhaps through a github repo?).

    In particular I'm interested in whether all elements need to be of even length. If that is the case, then how is a string like "RGB" supposed to be padded? I'd like to see how you solved this in your code.

    1. The application is available in our RZDCX examples package.
      The post link was fixed as well.
      RGB is encoded with an extra space, like any other odd-length string in DICOM.
