

Daniel M. Drucker, Ph.D., Director of IT

Computing Facilities

Primary data analysis for most projects is performed on investigators' personal computers (Mac OS X, Linux, and Windows) or on our computational cluster. In addition, the MIC provides support infrastructure for specialized data-processing needs, consisting of a variety of Unix and Linux systems and networked printers. All machines are connected to the hospital-wide Ethernet. Online data storage is provided by a TrueNAS file server (~500 TB, accessible via SMB and NFS), by Pegasus RAIDs directly connected to OsiriX MD DICOM servers (~75 TB total), and by Orthanc and XNAT servers backed by the TrueNAS. Backups are made daily to a secondary offsite TrueNAS system.
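For illustration, the short Python sketch below shows how imaging metadata on a DICOM server such as Orthanc can be queried through its standard REST API; the server address and credentials here are placeholders, not the MIC's actual configuration.

import requests

# Placeholder Orthanc address and credentials -- substitute the real
# server URL and an authorized account for actual use.
ORTHANC_URL = "http://orthanc.example.org:8042"
AUTH = ("username", "password")

# GET /studies returns the identifiers of all studies the server knows about.
study_ids = requests.get(f"{ORTHANC_URL}/studies", auth=AUTH, timeout=30).json()

for study_id in study_ids[:5]:
    # GET /studies/{id} returns the study's main DICOM tags and its series.
    study = requests.get(f"{ORTHANC_URL}/studies/{study_id}", auth=AUTH, timeout=30).json()
    tags = study.get("MainDicomTags", {})
    print(study_id, tags.get("StudyDate"), tags.get("StudyDescription"))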

The MIC computational cluster consists of one head node, a PowerEdge R720 with Intel Xeon E-26xx processors (20 cores), 128 GB RAM, a PERC H810 RAID with 230 TB of storage, and the RHEL HPC operating system. There are five compute nodes, each a PowerEdge R720 with Intel Xeon E-26xx processors (three nodes with 20 cores and two with 32), 256 GB RAM, and the RHEL HPC operating system; two of these have GPU capability. The cluster is managed with Bright Cluster Manager and runs Sun Grid Engine (SGE). Connectivity among cluster nodes is via InfiniBand, and to our local network via a 10 Gb switch. A second cluster, based on SLURM, is in beta and is accessible by invitation.
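As a brief orientation to the SGE scheduler, the Python sketch below writes a minimal submission script and hands it to qsub; the queue name, parallel environment, and resource requests are placeholders and would need to match the cluster's actual configuration.

import subprocess
from pathlib import Path

# Placeholder SGE script: the queue name ("all.q"), parallel environment
# ("smp"), and memory request are illustrative, not the MIC cluster's
# real settings.
job_script = """#!/bin/bash
#$ -N example_job
#$ -q all.q
#$ -pe smp 4
#$ -l h_vmem=8G
#$ -j y
#$ -cwd

echo "Running on $(hostname) with $NSLOTS slots"
"""

script_path = Path("example_job.sh")
script_path.write_text(job_script)

# qsub prints a confirmation such as: Your job 12345 ("example_job") has been submitted
result = subprocess.run(["qsub", str(script_path)], capture_output=True, text=True, check=True)
print(result.stdout.strip())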

The cluster hosts the key software packages used at the MIC for fMRI processing, including FSL, SPM, AFNI, FreeSurfer, and ANTs. These packages support SGE clusters and, for large jobs, achieve a speedup roughly proportional to the number of cores in the cluster. The cluster also hosts the full Human Connectome Project dataset for use in hypothesis generation and testing.
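The per-subject parallelism behind that speedup can be sketched as follows: each subject's FreeSurfer recon-all run is submitted as an independent SGE job, so a batch of subjects finishes in roughly the time of one as long as free slots are available. The subject IDs, paths, and resource requests below are hypothetical.

import subprocess
from pathlib import Path

# Hypothetical study layout -- replace with the project's actual directories.
SUBJECTS_DIR = Path("/data/example_study/freesurfer")
subjects = ["sub-01", "sub-02", "sub-03"]

for subj in subjects:
    # Each recon-all run is its own SGE job, so subjects reconstruct in parallel.
    cmd = (
        f"export SUBJECTS_DIR={SUBJECTS_DIR} && "
        f"recon-all -s {subj} -i /data/example_study/nifti/{subj}_T1w.nii.gz -all"
    )
    subprocess.run(
        ["qsub", "-N", f"recon_{subj}", "-l", "h_vmem=8G", "-j", "y", "-cwd",
         "-b", "y", "bash", "-c", cmd],
        check=True,
    )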

Offsite Clusters

For larger jobs, MIC researchers have access to two offsite clusters: 1) Harvard Medical School's O2 computational cluster, a shared, heterogeneous high-performance computing facility with 350+ compute nodes, 11,000+ compute cores, assorted GPU cards, and more than 50 TB of memory, scheduled with SLURM; and 2) the Partners Healthcare ERISOne cluster, which has over 380 compute nodes, 7,000 CPU cores, 56 TB of RAM, and 5 PB of storage, in addition to specialized parallel-processing resources including GPUs, scheduled with LSF. All fMRI processing tools available on the MIC cluster are installed on both clusters, with the addition of fmriprep.
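As one hedged example of how such a job might look on a SLURM-scheduled system like O2, the Python sketch below writes and submits an sbatch script that runs fmriprep on a single participant; the partition name, paths, resource requests, and fmriprep options are placeholders that should be checked against the site's documentation and the installed fmriprep version.

import subprocess
from pathlib import Path

# Placeholder BIDS dataset, output directory, and participant label.
bids_dir = Path("/n/data/example_study/bids")
out_dir = Path("/n/data/example_study/derivatives")
subject = "01"

# Placeholder SLURM directives: partition, CPU, memory, and walltime requests
# must follow the cluster's own policies.
sbatch_script = f"""#!/bin/bash
#SBATCH --job-name=fmriprep_sub-{subject}
#SBATCH --partition=medium
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=24:00:00

fmriprep {bids_dir} {out_dir} participant \\
    --participant-label {subject} \\
    --nthreads 8 \\
    --fs-license-file $HOME/freesurfer_license.txt
"""

script_path = Path(f"fmriprep_sub-{subject}.sbatch")
script_path.write_text(sbatch_script)
subprocess.run(["sbatch", str(script_path)], check=True)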