Department of Computer Science

Technical Services and Support

Computing Facilities and Services: Summary

A variety of facilities are described throughout these pages. This page is a user-friendly summary, with pointers to the pages describing specifics. New users should start with the New Users section, which has easy-to-follow documentation.

Here is a summary of our facilities:

Linux Systems

For specific hostnames and locations of the systems listed below, see Configurations of all generally available Linux systems run by LCSR. Unless noted otherwise, our systems run the current Ubuntu LTS (Long Term Support) release.

  • Generally available (within CS) systems (iLab Cluster): These systems are readily accessible to grad students, researchers, undergraduate majors, and undergraduates taking CS courses, ensuring you always have the necessary resources. For historical reasons, this system collection is called the “ilab.”
    • Over 50 desktop systems are available in Hill 248, 252, and 254, as well as in graduate student offices in Hill and CORE. Each has 16 or 32 GB of memory and a CUDA-compatible GPU. See Details and availability of ilab desktops.
    • Note for grad student offices: Most students in TA and RA offices prefer laptops, so by default we provide monitors and other support for laptops. Desktop systems are available if you want them; let us know if you need a different setup.
    • ilab.cs.rutgers.edu. A virtual address pointing to 4 multi-user systems, each with 1 TB of memory, up to 96 cores, and 8 Nvidia A4000 GPUs.
    • ilabU.cs.rutgers.edu. This system tests new LTS versions of the Ubuntu operating system, so it currently runs a newer Ubuntu LTS release than our other systems. iLabU has 256 GB of memory, 48 cores, and 8 Nvidia 2080 Ti GPUs.
    • Data science tools. Spark and map-reduce are available on all of our systems. Jupyter notebooks are supported.
    • Researchers should avoid large CPU usage during peak instructional use on the iLab servers (host names starting with ilab) and the systems in student public areas.
    • Limits: Memory and GPU usage on the iLab cluster are limited. If you’re running long jobs or GPU-intensive jobs, please see Limits Enforced on CS Machines and Scheduler for GPU Jobs.
  • Faculty desktops: these systems in some faculty offices, with up to 32 GB of memory, are not part of the iLab cluster.
  • Faculty servers: 2 Ubuntu Linux VMs, not part of the iLab cluster. They are intended primarily so that faculty can do grading and other tasks on systems to which only faculty have access. Connect to faculty.cs.rutgers.edu.
  • General research systems, also part of the iLab cluster, are available to faculty, grad students, and other researchers: rlab1.cs.rutgers.edu – rlab4.cs.rutgers.edu. These systems are intended for jobs using GPUs and large memory. Each has 512 GB of memory (except rlab1, which has 1.5 TB), 95 cores, and 8 Nvidia 1080 Ti GPUs. Jobs on these systems use the Slurm scheduler to allocate GPUs and memory.
  • Private research systems are available to faculty and their groups. These additional systems, run by researchers, are unavailable to the general community and not part of the iLab cluster. Most of them run Ubuntu Linux, and many of them have GPUs.
  • Web hosting: You can put HTML and other files intended for web access in your public_html directory, located in /common/web/$USER.  They will be visible as people.cs.rutgers.edu/username.  See Publishing web pages or CS Homepage Manager for more info. We also maintain a WordPress system for project web pages.
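Jobs on the rlab systems are allocated GPUs and memory through Slurm, as noted above. A minimal sketch of a GPU batch job follows; the resource values and the workload script (train.py) are placeholders, not site-specific defaults, so check Scheduler for GPU Jobs for the real limits and options.

```shell
# Write a hypothetical Slurm batch script. The --gres/--mem/--time values
# below are illustrative; train.py is a stand-in for your real workload.
cat > gpu_job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --gres=gpu:1        # ask Slurm for one GPU
#SBATCH --mem=32G           # memory is also allocated by the scheduler
#SBATCH --time=04:00:00     # wall-clock limit
nvidia-smi                  # show which GPU was assigned
python3 train.py            # your actual workload
EOF
# Submit and monitor:
#   sbatch gpu_job.sbatch
#   squeue -u $USER
```

Running work through sbatch (rather than directly on the command line) lets the scheduler pack jobs onto GPUs fairly and enforce the memory limits described above.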
Storage
  • General approach: Home directories are enough for many users. They are on a file server that uses SSD. For those who need more space:
    • /common/users can be used. It has larger quotas.
    • Faculty can also request special project directories. These can have quotas of 10s of terabytes if necessary.
    • Note that /common/users and those project directories are on spinning disk. However, the server has 50 disks in mirrored pairs, with metadata on SSD, so it performs well if you can use multiple processes for I/O.
  • If you have a project that needs SSD performance and won’t fit in your home directory, there are several options, but they all assume you keep only the files you’re actively working on on SSD; files must go to /common/users or a project directory for long-term storage. These scratch directories are not backed up, although we’re willing to back up local directories on faculty-owned systems if necessary.
    • All of our systems have local SSD. Generally, the directories are /data/local, but a few have other names. They are all set up so that any user can create a directory. However, these file systems are cleaned out roughly once a semester.
    • If you need fast storage that you can access from more than one system, /filer/tmp1 is available. Like local SSD, this is intended to hold your working data, not to serve as permanent storage. It is cleaned out once a semester.
  • Home Directories: User home directories and shared storage are on two Linux NFS servers. Home directories are in /common/home. They are on SSD storage, with quotas of 50 GB, or 200 GB for faculty and PhD students.
  • /common/users: All generally available systems mount /common/users for those needing more storage. User quotas are 100 GB on this system, or 1 TB for faculty and PhD students. It is intended for uses where large capacity is needed but high performance is not.
  • /common/web: This is used for web pages. See Publishing web pages
  • /research/archive. Long-term storage for data that funding agencies require to be retained. Please let us know if you need this service.
  • We can normally accommodate other needs with special file systems.  
  • Shared Directories: For projects and teams that want to share storage, /common/users/shared is set so that any user can create a directory and set it to be shared by a group. See Making A Directory You Can Share. Note that files in these directories still count against your quota on /common/users.
  • Backups: Home directories and other storage are snapshotted and backed up in a separate building (CBIM or Hill) and monthly at a commercial offsite storage facility.
  • Local storage: Most systems have some local storage, often mounted as /local. This storage is NOT BACKED UP. It is intended for jobs that need large work files. Source files and results should be stored in your home directory or /common/users.
  • Access from systems not run by LCSR: Home directories, /common/users, and other file systems can be mounted on research systems run by faculty, as long as those systems use Kerberos authentication. See Integrating Your Systems With LCSR Kerberos.
  • See Storage Technology Options for more details.
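The shared-directory mechanism described above (Making A Directory You Can Share) boils down to a group-owned directory with the setgid bit set. A minimal sketch follows; it uses a throwaway directory so the commands can be tried anywhere, and the group name is hypothetical — on the real systems you would work under /common/users/shared with an actual Unix group.

```shell
# Sketch of a group-shared directory. BASE stands in for
# /common/users/shared; the chgrp line is commented out because
# "myresearchgroup" is a hypothetical group name.
BASE=$(mktemp -d)
mkdir "$BASE/myproject"
# chgrp myresearchgroup "$BASE/myproject"
chmod 2770 "$BASE/myproject"   # setgid: new files inherit the directory's group;
                               # group members get full access, others get none
ls -ld "$BASE/myproject"
```

Remember that, as noted above, files placed here still count against your /common/users quota.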
Virtual Machines

There are two kinds of virtual machine facilities in the department: virtual machines for instructional and student use, and virtual machines hosting services and research applications. Below are descriptions of the virtual machines LCSR runs.

LCSR runs many virtual machines, both for its internal use and for various instructional and research needs. VMs can run CentOS, Ubuntu, or Windows. Typically, special VMs are requested by faculty or staff, and the users who request VMs act as their own system administrators, though LCSR is willing to do updates for system software.

  • There are two 1 TB servers hosting VMs for instructional and student needs. These are commonly used in courses that have requirements not met by shared instructional systems. 
  • Most commonly, LCSR staff will work with the instructor to configure appropriate software and duplicate a master copy for each student or student team. A web interface allows students to start, stop, and restore their VM to its initial configuration. Grad students may request personal VMs if needed for their projects.
  • Other servers run VMware ESXi. These are used for various services, such as web servers and administrative applications. LCSR infrastructure, such as the Kerberos servers, also runs on these VMs. Faculty may request VMs for their use; they’re commonly used for special-purpose servers and applications supporting research projects. Note: As of 2024, we are moving from VMware to open-source KVM products due to expensive and confusing licenses.
  • There is a WordPress system set up to host websites for research projects. It has a web interface allowing faculty to create and administer their sites using templates that default to Rutgers standards but can be customized. For more information, see Computer Science Web Hosting. (You can also make web pages available by putting them in public_html on our shared computer systems. See Publishing web pages or CS Homepage Manager for more info)
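Publishing a personal page via the public_html mechanism mentioned above amounts to placing world-readable files in your web directory. A minimal sketch follows; it uses a throwaway directory so the commands can be tried anywhere — on the real systems the directory is /common/web/$USER, and the result appears at people.cs.rutgers.edu/username.

```shell
# WEB stands in for /common/web/$USER so this sketch runs anywhere.
WEB=$(mktemp -d)
cat > "$WEB/index.html" <<'EOF'
<!doctype html>
<title>My project</title>
<p>Hello from people.cs.rutgers.edu.</p>
EOF
chmod -R a+rX "$WEB"   # the web server must be able to read the files
```

See Publishing web pages for the details of what the web server will and will not serve.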
Hackerspace

Hackerspace has a collection of special-purpose devices intended for courses and student projects. These include

  • Makerbot 3D printer
  • Systems on a disconnected network for students to try security attacks
  • Arduinos and related hardware
  • Parts for building electronic equipment
  • Small robotics equipment, such as Lego Mindstorms and iRobot Create
  • ARDrone
  • VR equipment

We have a budget to buy special-purpose devices as needed for courses, but we try to keep one or two devices that will be useful for future classes and projects.

Networking

A large data network interconnects all of LCSR’s facilities. The wireline network contains 64 switches, 166 10 Gbps ports, 1316 1 Gbps ports, and 528 100 Mbps ports on 222 VLANs. The core is a mix of 100 and 40 Gbps. This network is used by all systems in computer science, even if LCSR doesn’t run the systems.

LCSR, in cooperation with the University, supports a wireless network covering all department areas.

Outside connectivity is provided via the University’s access to the Internet, Internet2, and various special-purpose networks. LCSR also maintains an extensive security infrastructure over the network, including firewalls and custom intrusion detection software, and provides post-mortem analysis of compromised machines.

Directory Services and Authentication

LCSR maintains a set of 3 servers running Red Hat’s IPA, a combination of LDAP and Kerberos. All systems maintained by LCSR use this for authentication and user information. We encourage faculty to use these services for the systems that they run. With LCSR Kerberos and directory services, systems can access shared file systems, and it provides an easy way to maintain the list of authorized users for a set of systems. See Integrating Your System with LCSR Kerberos.
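Integrating a faculty-run machine typically means pointing its Kerberos configuration at the LCSR realm. The fragment below is only a sketch of the shape of that configuration; the realm and KDC names are placeholders, and the real values (plus the keytab and NFS steps) are in Integrating Your System with LCSR Kerberos.

```ini
# Hypothetical /etc/krb5.conf fragment; EXAMPLE.REALM and
# kdc.example.edu are placeholders, not the real LCSR names.
[libdefaults]
    default_realm = EXAMPLE.REALM

[realms]
    EXAMPLE.REALM = {
        kdc = kdc.example.edu
    }
```

Once configured, `kinit` should obtain a ticket for your NetID, after which the shared file systems described above can be mounted.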

In addition to this system, LCSR has a variety of data about faculty, staff, and students in a set of Oracle databases used for administrative applications. Please contact hedrick@rutgers.edu if you need this data for an application.

Facilities Outside Computer Science

OARC is a University group that provides high-performance computing. Computer science, in general, doesn’t have a conventional HPC cluster; we concentrate on GPUs and more specialized hardware. For large-scale HPC and data science, OARC is the best source. They have a large cluster, Amarel, intended as a “condo” cluster: grants buy nodes and are guaranteed at least as much capacity as they purchased, with the University matching the cost. However, some capacity is available for those who haven’t bought into the system, particularly for coursework and student use.

For more information, see the OARC website.

LCSR Services

In addition to running computing systems, LCSR supports faculty and students in computer science. Some commonly used services are:

  • Planning. This includes helping identify the best resources for instructional and research use and configuring purchasing systems. LCSR encourages faculty to talk with us about courses or projects with special requirements.
  • Hardware installation, network configuration, and support for hardware purchased by faculty and not administered by LCSR.
  • Support. LCSR provides help for users of the facilities it runs. However, it also assists in setting up and solving problems in systems run by researchers.
  • Programming. LCSR can provide staff to do programming for research projects. We have both full-time and student programmers available.

For help with our systems or immediate assistance, visit the LCSR Operator at CoRE 235 or call 848-445-2443. Otherwise, see the CS HelpDesk. Don’t forget to include your NetID along with a description of your problem.

For planning and infrastructure support, contact the LCSR director.