Facilities
Workspace
Our workspace is in the Physics Building, Room F–217, at the University of Puerto Rico – Mayagüez. It is a large room, 34 feet long by 20 feet wide, divided into several distinct areas.
Food & Beverage Amenities: Our office features a kitchen corner equipped with one microwave, two coffee makers, and one small refrigerator.
Working and Research Space: Our office is designed to accommodate both teamwork and individual work. It is furnished with six large tables, 17 chairs, three portable whiteboards, two large wall-mounted whiteboards, and three shelves stocked with books, paper, clips, pens, and other essential items. Additionally, there is one computer (complete with a screen, keyboard, and mouse) and one printer. This area can accommodate up to 12 people.
Rest & Comfort Zone: For breaks and relaxation, we have a cozy area with two comfortable chairs, five pillows, and one small table, suitable for up to four people.
Conference Facilities: Our meeting room is equipped with one large table, 10 chairs, two projectors, one large screen, two speakers, and one camera, accommodating up to 10 people.
Cleaning Supplies & Maintenance: To maintain cleanliness, we are equipped with two brooms, two dustpans, one mop, one trash bin, one sink, two buckets, and one warning cone, along with a supply of disinfectants, hand sanitizer, alcohol, air fresheners, and sponges stored in drawers.
Safety Measures: For safety, we have a fire extinguisher and a sign indicating the emergency exit location.
Voyager
Extensive computing resources are available in our group. In particular, the UPRM Voyager HPC Cluster consists of head and login nodes for cluster management and for compiling applications to be run on the cluster; 32 compute nodes organized into 8 enclosures, each node having 2 CPUs with 24 cores, 128 GB of RAM, 2 TB of local storage, and 2×10 Gbps NICs, for a total of 1920 cores; 6 high-memory nodes organized into 6 enclosures, each with 2 CPUs and 24 cores, 384 GB of RAM, 2 TB of local storage, and 2×10 Gbps NICs; 11 GPU nodes, each with 2 CPUs and 24 cores, 64 GB of RAM, 2 NVIDIA Tesla V100 GPUs, and 4 TB of storage; and an InfiniBand network connecting all compute nodes in the system. The compute nodes are connected to the storage nodes over the SciNet 10 Gbps local network. The nodes in the cluster run CentOS Linux. The cluster is managed with OpenStack, with the following plugins for user account and resource management: Horizon (web-based interface) and Blazar (resource reservation).
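As a quick sanity check on the inventory above, the cluster's published node counts can be tallied programmatically. The sketch below is illustrative only: the table structure and all names (`NODE_TYPES`, `totals`) are hypothetical, and the per-node figures are simply the ones quoted in this section.

```python
# Hypothetical inventory table for the Voyager cluster, taken from the
# specs quoted above: (node_count, ram_gb_per_node, gpus_per_node).
NODE_TYPES = {
    "compute":     (32, 128, 0),
    "high_memory": (6,  384, 0),
    "gpu":         (11, 64,  2),
}

def totals(node_types):
    """Aggregate node, RAM, and GPU counts across all node types."""
    nodes = sum(n for n, _, _ in node_types.values())
    ram   = sum(n * r for n, r, _ in node_types.values())
    gpus  = sum(n * g for n, _, g in node_types.values())
    return {"nodes": nodes, "ram_gb": ram, "gpus": gpus}

if __name__ == "__main__":
    # Sums the 32 compute, 6 high-memory, and 11 GPU nodes listed above.
    print(totals(NODE_TYPES))
```

Running this tallies 49 nodes, 7104 GB of aggregate RAM, and 22 GPUs across the three node types listed.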