Computing hardware running Hadoop/YARN, ordered by age.
year | # | type | CPU | RAM | disk | partitions | network | name(s) | task(s) | access |
2016 | 8 | Dell R730 | 2xE5-2630-v3 | 64GB | 6x4TB HDD raid0 | | FDR infiniband | ctit071..078 | hadoop worker | yarn scheduler |
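The workers above are not logged into directly; jobs reach them through the YARN scheduler. A minimal hedged sketch, assuming a configured Hadoop client (`yarn node -list` is the standard Hadoop CLI; the brace expansion merely reproduces the node names from the table):

```shell
# On a configured Hadoop client, node health would be checked with the
# standard YARN CLI:
#   yarn node -list -all
# Brace expansion reproduces the worker names from the table above:
printf '%s\n' ctit{071..078}
```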
Ordered hardware
year | # | type | CPU | RAM | disk | partitions | network | name(s) | task(s) | access |
2023 | 8 | Dell R7515 | 2xEPYC-7713P | 1024GB | 2x240GB SSD, 2x1.9TB SSD, 8x16TB HDD | | 10Gb Base-T | ctit001..008 | virtual hadoop | docker/kubernetes |
Obsolete hardware
year | # | type | CPU | RAM | disk | partitions | special | name(s) | task(s) | access |
2014 | 12 | Dell R415 | 1xOpteron-4386 | 64GB | 4x4TB HDD raid0 | | | ctit033..044 | hadoop worker | yarn scheduler |
2014 | 1 | Dell R415 | 1xOpteron-4386 | 64GB | 4x4TB HDD raid1 | | | ctit045 | OpenNebula front end and worker | web based (only via https) |
2014 | 1 | Dell R415 | 1xOpteron-4386 | 64GB | 4x4TB HDD raid1 | | | ctit046 | OpenNebula worker | via front end |
2014 | 1 | Dell R415 | 1xOpteron-4386 | 64GB | 4x4TB HDD raid1 | | | ctit047 | primary SLURM/secondary hadoop scheduler | admins only |
2014 | 1 | Dell R415 | 1xOpteron-4386 | 64GB | 4x4TB HDD raid1 | | | ctit048 | secondary SLURM/primary hadoop scheduler | admins only |
2013 | 32 | Dell R415 | 2xOpteron-4386 | 64GB | 1x2.15TB (actually 4) + 2x2TB | 0.15TB RAID1 + 8TB striped | QDR infiniband | ctit001..032 | hadoop worker | yarn scheduler |
2014 | 1 | Mac mini | 1xi7-4578U | 8GB | 256GB SSD | | | fmtmini | OS X build/test environment | FMT members only |
2013 | 1 | Dell R720 | 1xE5- | 32GB | 8x2TB HDD raid6 | | 10GbE | brecklenkamp_old | file server | admins only |
2011 | 2 | supermicro | 1xE3-1220 | 16GB | ? | | | singraven | proxy server | admins only |
2011 | 1 | supermicro | 4xOpteron-6168 | 128GB | ? | | 10GbE | westervlier | node | slurm |
2011 | 2 | supermicro | 1xE3-1220 | 16GB | ? | | | schuilenburg | legacy twickel cluster | admins only |
2009 | 1 | Dell R710 | 2xX5550 | 144GB | ? | | 10GbE | oldemeule | head node | ssh |
2009 | 1 | Dell R710 | 2xX5550 | 72GB | ? | | 10GbE? | wegdam | head node | ssh |
2008 | 15 | Dell R200 | 1xE3110 | 8GB | ? + 4TB | | | farm01..15 | web services | contact Jan Flokstra |
2007 | 1 | TTec 2U | 2xX5355 | 32GB | 4x4TB raid5 | | | vieker | file server | admins only |
2007 | 2 | TTec 2U | 2xE5335 | 64GB | 4x500GB raid5 | | | twickel, data1 | OpenNebula worker | via front end |
2007 | 1 | TTec 2U | 2xX5355 | 64GB | 4x500GB raid5 | | | weldam | OpenNebula worker | via front end |
2007 | 1 | TTec 2U | 2xE5335 | 16GB | 4x500GB raid6 | | | warmelo | legacy twickel cluster | admins only |
On the Hadoop side, we are going to stop the SLURM scheduler.
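A dry-run sketch of what stopping the SLURM scheduler could look like; the node choice (ctit047/ctit048, listed above as the primary/secondary SLURM schedulers), ssh access, and the `slurmctld` systemd unit name are all assumptions about this installation, not the actual documented procedure:

```shell
# Hedged dry-run: print, without executing, the commands that would stop
# and disable the SLURM controller on the scheduler nodes from the table.
# Service name (slurmctld) and ssh access are assumptions.
for node in ctit047 ctit048; do
  echo "would run: ssh $node sudo systemctl disable --now slurmctld"
done
```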
Switching hardware
year | type | description | use |
2014 | ?? | 36 port infiniband switch | HPC interconnect for ctit001..032 |
2009 | Dell M6220 | 20 port Gbit with 4 SFP+ 10Gbit connections, internal to blade chassis | main network for ctit061..070 |
2009 | Dell M2401G | 24 port infiniband switch, internal to blade chassis | HPC interconnect for ctit061..070 |
2007 | Dell 2448 | 48 port Gbit switch | currently not in use |
2007 | Dell 6224 | 24 port Gbit with 4 SFP+ 10Gbit connections, stand alone | currently not in use |