SSH Access

To use the cluster, you must first ask an administrator to create a user account for you. Once you have an account, you can connect easily using ssh.

To make ssh usage easier, you can copy the following lines into your $HOME/.ssh/config file, replacing [USER_LOGIN_NFS_MONOLITHE] with your Monolithe login and [USER_LOGIN_NFS_LIP6] with your LIP6 login:

# The following lines prevent automatic ssh disconnections
Host *
    ServerAliveInterval 120
    ServerAliveCountMax 5 
    TCPKeepAlive no

Host *mono.lip6*
    User              [USER_LOGIN_NFS_MONOLITHE]
    RequestTTY        force
    HostName          monolithe.soc.lip6.fr

Host *mono.lip6.ext
    ProxyJump         [USER_LOGIN_NFS_LIP6]@ducas.lip6.fr:22

# Monolithe (frontend)
# -- inside LIP6
Host              mono.lip6
HostName          monolithe.soc.lip6.fr
# -- outside LIP6
Host              mono.lip6.ext
HostName          monolithe.soc.lip6.fr

# Odroid-XU4 (node)
# -- inside LIP6
Host              xu4.mono.lip6
RemoteCommand     srun -p xu4 --cpus-per-task=8 --pty bash -i && bash -i
# -- outside LIP6
Host              xu4.mono.lip6.ext
RemoteCommand     srun -p xu4 --cpus-per-task=8 --pty bash -i && bash -i

# Raspberry Pi 3 (node)
# -- inside LIP6
Host              rpi3.mono.lip6
RemoteCommand     srun -p rpi3 --cpus-per-task=4 --pty bash -i && bash -i
# -- outside LIP6
Host              rpi3.mono.lip6.ext
RemoteCommand     srun -p rpi3 --cpus-per-task=4 --pty bash -i && bash -i

# Jetson TX2 (node)
# -- inside LIP6
Host              tx2.mono.lip6
RemoteCommand     srun -p tx2 --cpus-per-task=6 --pty bash -i && bash -i
# -- outside LIP6
Host              tx2.mono.lip6.ext
RemoteCommand     srun -p tx2 --cpus-per-task=6 --pty bash -i && bash -i

# Jetson AGX Xavier (node)
# -- inside LIP6
Host              xagx.mono.lip6
RemoteCommand     srun -p xagx --cpus-per-task=8 --pty bash -i && bash -i
# -- outside LIP6
Host              xagx.mono.lip6.ext
RemoteCommand     srun -p xagx --cpus-per-task=8 --pty bash -i && bash -i

# Brubeck (node)
# -- inside LIP6
Host              brub.mono.lip6
RemoteCommand     srun -p brub --cpus-per-task=1 --pty bash -i && bash -i
# -- outside LIP6
Host              brub.mono.lip6.ext
RemoteCommand     srun -p brub --cpus-per-task=1 --pty bash -i && bash -i

# Jetson Xavier Nano (node)
# -- inside LIP6
Host              xnano.mono.lip6
RemoteCommand     srun -p xnano --cpus-per-task=4 --pty bash -i && bash -i
# -- outside LIP6
Host              xnano.mono.lip6.ext
RemoteCommand     srun -p xnano --cpus-per-task=4 --pty bash -i && bash -i

# Raspberry Pi 4 (node)
# -- inside LIP6
Host              rpi4.mono.lip6
RemoteCommand     srun -p rpi4 --cpus-per-task=4 --pty bash -i && bash -i
# -- outside LIP6
Host              rpi4.mono.lip6.ext
RemoteCommand     srun -p rpi4 --cpus-per-task=4 --pty bash -i && bash -i

# Jetson Xavier NX (node)
# -- inside LIP6
Host              xnx.mono.lip6
RemoteCommand     srun -p xnx --cpus-per-task=6 --pty bash -i && bash -i
# -- outside LIP6
Host              xnx.mono.lip6.ext
RemoteCommand     srun -p xnx --cpus-per-task=6 --pty bash -i && bash -i

# M1 Ultra (node)
# -- inside LIP6
Host              m1u.mono.lip6
RemoteCommand     srun -p m1u --cpus-per-task=20 --pty bash -i && bash -i
# -- outside LIP6
Host              m1u.mono.lip6.ext
RemoteCommand     srun -p m1u --cpus-per-task=20 --pty bash -i && bash -i

# Jetson Orin NX (node)
# -- inside LIP6
Host              onx.mono.lip6
RemoteCommand     srun -p onx --cpus-per-task=8 --pty bash -i && bash -i
# -- outside LIP6
Host              onx.mono.lip6.ext
RemoteCommand     srun -p onx --cpus-per-task=8 --pty bash -i && bash -i

# Jetson AGX Orin (node)
# -- inside LIP6
Host              oagx.mono.lip6
RemoteCommand     srun -p oagx --cpus-per-task=12 --pty bash -i && bash -i
# -- outside LIP6
Host              oagx.mono.lip6.ext
RemoteCommand     srun -p oagx --cpus-per-task=12 --pty bash -i && bash -i

# Jetson Orin Nano (node)
# -- inside LIP6
Host              onano.mono.lip6
RemoteCommand     srun -p onano --cpus-per-task=6 --pty bash -i && bash -i
# -- outside LIP6
Host              onano.mono.lip6.ext
RemoteCommand     srun -p onano --cpus-per-task=6 --pty bash -i && bash -i

# Orange Pi 5 Plus (node)
# -- inside LIP6
Host              opi5.mono.lip6
RemoteCommand     srun -p opi5 --cpus-per-task=8 --pty bash -i && bash -i
# -- outside LIP6
Host              opi5.mono.lip6.ext
RemoteCommand     srun -p opi5 --cpus-per-task=8 --pty bash -i && bash -i

# Raspberry Pi 5 (node)
# -- inside LIP6
Host              rpi5.mono.lip6
RemoteCommand     srun -p rpi5 --cpus-per-task=4 --pty bash -i && bash -i
# -- outside LIP6
Host              rpi5.mono.lip6.ext
RemoteCommand     srun -p rpi5 --cpus-per-task=4 --pty bash -i && bash -i

# Minisforum Mercury EM780 (node)
# -- inside LIP6
Host              em780.mono.lip6
RemoteCommand     srun -p em780 --cpus-per-task=16 --pty bash -i && bash -i
# -- outside LIP6
Host              em780.mono.lip6.ext
RemoteCommand     srun -p em780 --cpus-per-task=16 --pty bash -i && bash -i
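
After editing the file, you can check how OpenSSH resolves an alias without actually connecting: the -G flag (available in OpenSSH 6.8 and later) prints the effective configuration for a host. For example, for the rpi4 alias:

```shell
# Print the options OpenSSH would use for this alias (no connection is made)
ssh -G rpi4.mono.lip6 | grep -iE 'hostname|^user |proxyjump|remotecommand'
```

The output should show monolithe.soc.lip6.fr as the HostName, your Monolithe login, and the srun RemoteCommand defined above.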

Once this is done, if you are on the LIP6 network, you can simply connect to the frontend using:

ssh mono.lip6

Or, if you are not on the LIP6 network, you can use:

ssh mono.lip6.ext
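
The .ext aliases rely on the ProxyJump through ducas.lip6.fr defined in the config above. The same hop can be expressed as a one-off command without the config file (substitute your two logins as before):

```shell
# Jump through the LIP6 gateway to reach the Monolithe frontend
ssh -J [USER_LOGIN_NFS_LIP6]@ducas.lip6.fr [USER_LOGIN_NFS_MONOLITHE]@monolithe.soc.lip6.fr
```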

You can also connect directly to a specific node using its Slurm partition identifier. For instance, to connect directly to a Raspberry Pi 4:

ssh rpi4.mono.lip6
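
These node aliases simply wrap a Slurm job request: the ssh connection lands on the frontend, and the RemoteCommand then runs srun to open an interactive shell on a node of the matching partition. Done by hand, the rpi4.mono.lip6 alias corresponds roughly to:

```shell
# 1. Log in to the frontend
ssh mono.lip6
# 2. From the frontend, request an interactive shell on a rpi4 node
srun -p rpi4 --cpus-per-task=4 --pty bash -i
```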

Danger

Direct ssh connections to the nodes with an NFS account are not possible. You should always use the srun command from the frontend. For instance, the following command will always fail:

ssh user-nfs@orangepi5.soc.lip6.fr