DICOM:Database
From Slicer Wiki
Local databases to organize DICOM header information are often used in medical image applications and workstations. This page is used to organize information and examples.

== History ==

Bill started [http://massmail.spl.harvard.edu/public-archives/slicer-devel/2010/004323.html a thread on the slicer-devel mailing list] about improving DICOM parsing performance. [http://massmail.spl.harvard.edu/public-archives/slicer-devel/2010/004364.html This message] includes a sample SQL database schema as an attachment.

At the [http://www.na-mic.org/Wiki/index.php/Events:CTK-Pre-Hackfest-2010 CTK meeting], Marco Nolden showed that a similar approach has been followed as part of the [http://mitk.org MITK project], using [http://dicom.offis.de DCMTK] to fill an [http://sqlite.org SQLite] database.

== Example Data ==

* [http://github.com/pieper/CTK/blob/master/Libs/DICOM/Core/Resources/dicom-schema.sql MITK DICOM Schema]
* [http://github.com/pieper/CTK/blob/master/Libs/DICOM/Core/Resources/dicom-sample.sql Dump of example database created with the schema above]
* [[file:dicom-database-examples-2010-03-09.zip|Sample DICOM data used to create the database dump]]
* [[Slicer3/strawman DICOM Schema|Lorensen's strawman SQLite DICOM schema]]

== Considerations ==

* It would be ideal if the database schema were standardized and could be used with any DICOM toolkit (GDCM and/or DCMTK).

* The MITK schema is nice because it uses the standard DICOM field names for the columns (for example, PatientsUID, ModalitiesInStudy, etc.).
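As a minimal sketch of this naming convention, using Python's built-in sqlite3 module (the table and columns below are illustrative assumptions, not the actual MITK schema linked above):

```python
import sqlite3

# Hypothetical sketch: a series table whose columns reuse DICOM attribute
# names directly, so queries read like the DICOM data dictionary.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Series (
        SeriesInstanceUID TEXT PRIMARY KEY,
        StudyInstanceUID  TEXT,
        Modality          TEXT,
        SeriesDescription TEXT
    )
""")
conn.execute(
    "INSERT INTO Series VALUES (?, ?, ?, ?)",
    ("1.2.3.4", "1.2.3", "MR", "T1 axial"),  # made-up example values
)
row = conn.execute(
    "SELECT Modality FROM Series WHERE SeriesInstanceUID = '1.2.3.4'"
).fetchone()
print(row[0])  # MR
```

Because the column names match DICOM attribute names, code that builds queries from user-visible DICOM fields needs no mapping layer.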

* Eventually we could create an ITK IO Factory plugin reader that can read a volume when given an SQLite filename and a query string that specifies it, with something like "/tmp/dicom.db:SeriesUID=1.2.3....". If the SQL database kept the width, height, and pixel data offset, then the files could be read quickly without re-parsing.
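The locator format is only a proposal, but splitting it could be as simple as the following sketch (the function name, tuple layout, and example UID are invented for illustration; it assumes the database path itself contains no ":" after the last separator):

```python
# Hypothetical parser for the "dbpath:Field=value" locator proposed above.
def parse_locator(locator):
    """Split 'dbpath:Field=value' into (dbpath, field, value)."""
    db_path, _, query = locator.rpartition(":")
    field, _, value = query.partition("=")
    return db_path, field, value

print(parse_locator("/tmp/dicom.db:SeriesUID=1.2.3.4"))
# ('/tmp/dicom.db', 'SeriesUID', '1.2.3.4')
```

An IO factory could then open the returned path with sqlite3 and use the field/value pair in a WHERE clause to find the series.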

* Marco plans to contribute a cleaned-up version of the MITK code to the [http://github.com/pieper/CTK CTK git repository].

* Bill proposes [[Slicer3/DICOM import mechanism|a background DICOM import mechanism]].

* Jim suggests that the image table hold information about the resolution, pixel size (or field of view), coordinate frame (ImagePositionPatient, ImageOrientationPatient), acquisition time, etc., as well as an offset into the file for the start of the pixel data. The goal should be that once the data has been entered into the database we never have to use a DICOM parser again on that subject, unless we are looking to pull out a very special tag. We should probably put some summary information up at the series level. You can't always do that, so maybe the series table needs a column which indicates whether the whole series is homogeneous.
** Steve agrees with Jim, but suggests that maybe we have multiple databases: a central database with minimal information to sort out patients/studies/series, and then another database file per study that has the detailed information. This per-study database file could be used for fast loading without making the central database file grow too big.
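Jim's offset idea can be sketched as follows; all table and column names are assumptions, and the "DICOM file" here is a stand-in with a fake 128-byte header rather than a real DICOM file:

```python
import os
import sqlite3
import struct
import tempfile

# Sketch: index the geometry and a byte offset to the pixel data once,
# then read pixels later with a plain seek instead of a DICOM parser.

# Create a stand-in "DICOM" file: fake 128-byte header, then 2x2 uint16 pixels.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 128)                       # fake header
    f.write(struct.pack("<4H", 10, 20, 30, 40))  # little-endian pixel values

# Index it once, as a background import step would.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE Images (
    Path TEXT, Rows INTEGER, Columns INTEGER, PixelDataOffset INTEGER)""")
db.execute("INSERT INTO Images VALUES (?, ?, ?, ?)", (path, 2, 2, 128))

# Later: load pixels using only the database, with no DICOM parsing.
p, rows, cols, offset = db.execute(
    "SELECT Path, Rows, Columns, PixelDataOffset FROM Images").fetchone()
with open(p, "rb") as f:
    f.seek(offset)
    pixels = struct.unpack("<%dH" % (rows * cols), f.read(rows * cols * 2))
print(pixels)  # (10, 20, 30, 40)
os.remove(path)
```

A real implementation would also need the transfer syntax (compressed pixel data cannot be read with a bare seek), which is one reason a per-series homogeneity flag, as suggested above, would help.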
Latest revision as of 19:47, 18 March 2010