Intron and Exon Upgrade Information

The operating system upgrade introduced changes on the server that will require intervention on your end.
This document will be updated as further changes are identified that you may need to make to keep your applications running.


WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!

This situation is expected the first time you log on to the server after the upgrade. Once you apply the fix below, you should not see this message again.
If you are running ssh from a Terminal window on macOS or from the Command Prompt on Windows, use the ssh-keygen command to remove the stored host key:

ssh-keygen -R intron

The example above removes the stored identification information for a server named intron. If you have used multiple versions of the hostname, such as intron.case.edu or intron.cwru.edu, repeat this step for each version of the hostname.
The next time you log in, you will be asked whether you are sure you want to continue connecting. Answer yes, and you should not be prompted again for this server.
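Where several hostname variants need clearing, a short loop saves retyping. The sketch below is demonstrated against a throwaway known_hosts file built from a freshly generated key, so it can be run safely anywhere; on the real server, run the plain `ssh-keygen -R <host>` commands so your own ~/.ssh/known_hosts is edited instead.

```shell
# Demo: build a scratch known_hosts with entries for each hostname
# variant, then clear them all with ssh-keygen -R in a loop.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/hostkey"
for host in intron intron.case.edu intron.cwru.edu; do
    echo "$host $(cat "$tmp/hostkey.pub")" >> "$tmp/known_hosts"
done
# Remove the stored key for every variant (-f targets the demo file;
# drop -f on the real server to edit ~/.ssh/known_hosts)
for host in intron intron.case.edu intron.cwru.edu; do
    ssh-keygen -R "$host" -f "$tmp/known_hosts"
done
wc -l < "$tmp/known_hosts"    # prints 0 once every entry is gone
```

ssh-keygen keeps a backup of the edited file as known_hosts.old, so the removed entries can be recovered if needed.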


Running dexseq_count.py and dexseq_prepare_annotation.py after the upgrade

There are some compatibility issues with the Python scripts included with the DEXSeq R package. A conda environment that resolves them has been created and is available server-wide. As other programs with similar problems are identified, additional conda environments will be added. This example uses DEXSeq.

To check the available conda environments, use "conda env list". This lists the environments installed server-wide as well as your own conda environments:

conda env list
# conda environments:
#
base                  /usr
DEXSeq                /usr/envs/DEXSeq
snakemake             /usr/envs/snakemake

Conda environments that are available server wide will be named after the program they are for.

Therefore, before running such an application, activate the appropriate conda environment; in this example, DEXSeq.

conda activate DEXSeq
(DEXSeq) $

You will know the conda environment is active because its name is displayed before your shell prompt.
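If you run the DEXSeq steps often, it can help to keep them in a small script. The sketch below assumes the standard DEXSeq helper scripts are reachable inside the environment (they ship with the DEXSeq R package, so you may need their full path); annotation.gtf, sample1.sam, and the output file names are placeholders.

```shell
# Save the DEXSeq steps as a reusable script; all file names below
# are placeholders to replace with your own.
cat > run_dexseq.sh <<'EOF'
#!/bin/sh
# If conda is not initialised for non-interactive shells, source its
# profile.d/conda.sh first.
conda activate DEXSeq
python dexseq_prepare_annotation.py annotation.gtf annotation.gff
python dexseq_count.py annotation.gff sample1.sam sample1_counts.txt
conda deactivate
EOF
chmod +x run_dexseq.sh
```

Run it with ./run_dexseq.sh after editing the file names for your data.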


Copying data from RDS to Scratch for cluster computation

The full documentation for CWRU HPC Data transfer protocols can be found here: https://sites.google.com/a/case.edu/hpcc/data-transfer

The instructions for transferring data to scratch are:
The RDS servers are not mounted on the compute nodes. Data from RDS servers needs to be copied to /scratch or another active mount point for use in computations. Doing so programmatically, in a Slurm script, can be accomplished as follows:

# Create temporary scratch space
mkdir -p /scratch/users/<CaseID>

# Copy data from RDS to /scratch. Suggested nodes to use when copying from/to RDS: dtn[1-3], hpctransfer
ssh dtn2 'cp -r /mnt/rds/Genetics02/Genetics/<rds name>/<folder1> /scratch/users/<CaseID>'

# Copy data from /scratch back to RDS
ssh dtn2 'cp -r /scratch/users/<CaseID>/<folder1> /mnt/rds/Genetics02/Genetics/<rds name>/.'
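Taken together, the steps above can be sketched as one Slurm batch script. The #SBATCH directives here are illustrative, and <CaseID>, <rds name>, and <folder1> are placeholders exactly as in the snippets above.

```shell
# Write a sketch of a complete Slurm script combining the steps above.
cat > copy_and_compute.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=rds-copy
#SBATCH --ntasks=1

# Create temporary scratch space
mkdir -p /scratch/users/<CaseID>

# Stage the input data from RDS through a data-transfer node
ssh dtn2 'cp -r /mnt/rds/Genetics02/Genetics/<rds name>/<folder1> /scratch/users/<CaseID>'

# ... run the computation against the /scratch copy here ...

# Copy the results back to RDS
ssh dtn2 'cp -r /scratch/users/<CaseID>/<folder1> /mnt/rds/Genetics02/Genetics/<rds name>/.'
EOF
```

Submit it with sbatch copy_and_compute.slurm once the placeholders are filled in.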