SSH Access
To use the cluster, you first need to ask an administrator to create a user account for you. Once you have an account, you can easily connect using `ssh`.
To make `ssh` usage easier, you can copy-paste the following lines into your
`$HOME/.ssh/config` file, replacing `[USER_MONOLITHE]` with your Monolithe
login and `[USER_LIP6]` with your LIP6 login:
```
# the following lines prevent automatic ssh disconnections
Host *
    ServerAliveInterval 120
    ServerAliveCountMax 5
    TCPKeepAlive no
Host proxy.lip6 # ------------------------------------------ proxy (LIP6 proxy)
    User [USER_LIP6]
    HostName ducas.lip6.fr
    # if you are external to the LIP6 you should comment the previous line and
    # uncomment the next line
    # HostName ssh1.mono.proj.lip6.fr
#                                   wired LIP6 net   wireless LIP6 net
Match host *mono.lip6 !localnetwork 132.227.102.0/24,132.227.92.0/24
    ProxyJump proxy.lip6
Host *mono.lip6
    User [USER_MONOLITHE]
    RequestTTY force
    HostName front.mono.proj.lip6.fr
Host front.mono.lip6 # ----------------------------------- Monolithe (frontend)
    HostName front.mono.proj.lip6.fr
Host xu4.mono.lip6 # ---------------------------------------- Odroid-XU4 (node)
    RemoteCommand srun -p xu4 --cpus-per-task=8 --pty bash -i && bash -i
Host rpi3.mono.lip6 # ----------------------------------- Raspberry Pi 3 (node)
    RemoteCommand srun -p rpi3 --cpus-per-task=4 --pty bash -i && bash -i
Host tx2.mono.lip6 # ---------------------------------------- Jetson TX2 (node)
    RemoteCommand srun -p tx2 --cpus-per-task=6 --pty bash -i && bash -i
Host xagx.mono.lip6 # -------------------------------- Jetson AGX Xavier (node)
    RemoteCommand srun -p xagx --cpus-per-task=8 --pty bash -i && bash -i
Host xnano.mono.lip6 # ------------------------------ Jetson Xavier Nano (node)
    RemoteCommand srun -p xnano --cpus-per-task=4 --pty bash -i && bash -i
Host rpi4.mono.lip6 # ----------------------------------- Raspberry Pi 4 (node)
    RemoteCommand srun -p rpi4 --cpus-per-task=4 --pty bash -i && bash -i
Host xnx.mono.lip6 # ---------------------------------- Jetson Xavier NX (node)
    RemoteCommand srun -p xnx --cpus-per-task=6 --pty bash -i && bash -i
Host m1u.mono.lip6 # ------------------------------------------ M1 Ultra (node)
    RemoteCommand srun -p m1u --cpus-per-task=20 --pty bash -i && bash -i
Host vim1s.mono.lip6 # ------------------------------------------- VIM1S (node)
    RemoteCommand srun -p vim1s --cpus-per-task=4 --pty bash -i && bash -i
Host onx.mono.lip6 # ------------------------------------ Jetson Orin NX (node)
    RemoteCommand srun -p onx --cpus-per-task=8 --pty bash -i && bash -i
Host oagx.mono.lip6 # ---------------------------------- Jetson AGX Orin (node)
    RemoteCommand srun -p oagx --cpus-per-task=12 --pty bash -i && bash -i
Host onano.mono.lip6 # -------------------------------- Jetson Orin Nano (node)
    RemoteCommand srun -p onano --cpus-per-task=6 --pty bash -i && bash -i
Host opi5.mono.lip6 # --------------------------------- Orange Pi 5 Plus (node)
    RemoteCommand srun -p opi5 --cpus-per-task=8 --pty bash -i && bash -i
Host rpi5.mono.lip6 # ----------------------------------- Raspberry Pi 5 (node)
    RemoteCommand srun -p rpi5 --cpus-per-task=4 --pty bash -i && bash -i
Host em780.mono.lip6 # ----------------------------------- Mercury EM780 (node)
    RemoteCommand srun -p em780 --cpus-per-task=16 --pty bash -i && bash -i
Host bpif3.mono.lip6 # ------------------------------------ Banana Pi F3 (node)
    RemoteCommand srun -p bpif3 --cpus-per-task=8 --pty bash -i && bash -i
Host x7ti.mono.lip6 # ------------------------------------ AtomMan X7 Ti (node)
    RemoteCommand srun -p x7ti --cpus-per-task=22 --pty bash -i && bash -i
```
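With this configuration, the `Match` block routes any connection to a `*mono.lip6` host through `proxy.lip6` (via `ProxyJump`) whenever you are outside the LIP6 wired and wireless networks, while each node alias uses `RemoteCommand` to open an interactive shell on the corresponding Slurm partition through `srun`.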
Once this is done, you can simply connect to the frontend using:
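```
ssh front.mono.lip6
```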
It is also possible to connect directly to a specific node using its Slurm partition identifier. For instance, to connect directly to the Raspberry Pi 4, you can run:
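```
ssh rpi4.mono.lip6
```

Note that connecting to a node this way allocates it through Slurm, so the shell may not open immediately if the corresponding partition is busy.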