Content from Introducing the Shell


Last updated on 2025-04-06

Overview

Questions

  • What is a command shell and why would I use one?
  • How can I move around in a computer?
  • How can I see what files and directories I have?
  • How can I specify the location of a file or directory on my computer?

Objectives

  • Describe key reasons for learning the shell.
  • Learn how to access a remote machine.
  • Navigate your file system using the command line.
  • Access and read help files for bash programs and use help files to identify useful command options.
  • Demonstrate the use of tab completion, and explain its advantages.

What is a shell and why should I care?


A shell is a computer program that presents a command line interface which allows you to control your computer using commands entered with a keyboard instead of controlling graphical user interfaces (GUIs) with a mouse/keyboard combination.

There are many reasons to learn about the shell.

  • Many bioinformatics tools can only be used through a command line interface, or have extra capabilities in the command line version that are not available in the GUI. This is true, for example, of BLAST, which offers many advanced functions only accessible to users who know how to use a shell.
  • The shell makes your work less boring. In bioinformatics you often need to do the same set of tasks with a large number of files. Learning the shell will allow you to automate those repetitive tasks and leave you free to do more exciting things.
  • The shell makes your work less error-prone. When humans do the same thing a hundred different times (or even ten times), they’re likely to make a mistake. Your computer can do the same thing a thousand times with no mistakes.
  • The shell makes your work more reproducible. When you carry out your work in the command-line (rather than a GUI), your computer keeps a record of every step that you’ve carried out, which you can use to re-do your work when you need to. It also gives you a way to communicate unambiguously what you’ve done, so that others can check your work or apply your process to new data.
  • Many bioinformatic tasks require large amounts of computing power and can’t realistically be run on your own machine. These tasks are best performed using remote computers or cloud computing, which can only be accessed through a shell.

In this lesson you will learn how to use the command line interface to move around in your file system.

How to access the shell


On a Mac or Linux machine, you can access a shell through a program called Terminal, which is already available on your computer. If you’re using Windows, you’ll need to download a separate program to access the shell (see installation instructions here).

In this workshop, we suggest using a remote server so that we can spend most of our time learning the basics of the shell by manipulating some experimental data, instead of dealing with installations. The remote server already includes the required bioinformatics packages, as well as the large datasets that would otherwise take a long time to download to everyone's local computer.

Shell alternatives

If you decide to follow the lesson on your own computer, you won't need the ssh command because you will not be connecting to a remote machine.
If you are working on a remote machine that includes RStudio (which you open in a browser), you can work in the terminal that is included in RStudio.

Ask your instructor for the ip_address and password to log in.

To log in you need the ssh command (ssh stands for Secure Shell), your username, and the address of the machine you are logging into.

BASH

$ ssh dcuser@ec2-18-702-132-236.compute-1.amazonaws.com

Then you will be prompted to type the password. Note that while you are typing a password, no characters will appear on the screen; trust that they are being typed and press Enter.

After logging in, you will see a screen showing something like this:

OUTPUT

Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-48-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Sat Feb  2 00:08:17 UTC 2019

  System load: 0.0                Memory usage: 5%   Processes:       82
  Usage of /:  29.9% of 98.30GB   Swap usage:   0%   Users logged in: 0

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

597 packages can be updated.
444 updates are security updates.

New release '16.04.5 LTS' available.
Run 'do-release-upgrade' to upgrade to it.


Last login: Fri Feb  1 22:34:53 2019 from c-73-116-43-163.hsd1.ca.comcast.net

This provides a lot of information about the remote server that you’re logging in to. We’re not going to use most of this information for our workshop, so you can clear your screen using the clear command.

BASH

$ clear

This will scroll your screen down to give you a fresh screen and will make it easier to read. You haven’t lost any of the information on your screen. If you scroll up, you can see everything that has been output to your screen up until this point.


Navigating your file system


The part of the operating system responsible for managing files and directories is called the file system. It organizes our data into files, which hold information, and directories (also called “folders”), which hold files or other directories.

Several commands are frequently used to create, inspect, rename, and delete files and directories.

Preparation Magic

If you type the command: PS1='\W\$ ' into your shell, followed by pressing the Enter key, your window should look like this:
~$
That shows only the name of the current directory (the last part of the path), which in this case is the home directory. The symbol ~ is an abbreviation for the home directory. This isn’t necessary to follow along (in fact, your prompt may have other helpful information you want to know about). This is up to you!

The dollar sign is a prompt, which shows us that the shell is waiting for input; your shell may use a different character as a prompt and may add information before the prompt. When typing commands, either from these lessons or from other sources, do not type the prompt, only the commands that follow it. In this lesson we will use the dollar sign to indicate the prompt.

BASH

$

Let’s find out where we are by running a command called pwd (which stands for “print working directory”). At any moment, our current working directory is our current default directory, i.e., the directory that the computer assumes we want to run commands in unless we explicitly specify something else. Here, the computer’s response is /home/dcuser, which is the home directory of the dcuser account on our cloud system:

BASH

$ pwd

OUTPUT

/home/dcuser

Let’s look at how our file system is organized. We can see what files and subdirectories are in this directory by running ls, which stands for “listing”:

BASH

$ ls

OUTPUT

dc_workshop  R 

ls prints the names of the files and directories in the current directory in alphabetical order, arranged neatly into columns. We’ll be working within the dc_workshop subdirectory, and creating new subdirectories, throughout this workshop.

The command to change locations in our file system is cd followed by a directory name to change our working directory. cd stands for “change directory”.

Let’s say we want to navigate to the dc_workshop directory we saw above. We can use the following command to get there:

BASH

$ cd dc_workshop

Let’s look at what is in this directory:

BASH

$ ls

OUTPUT

data	mags  taxonomy

We can make the ls output more comprehensible by using the flag -F, which tells ls to add a trailing / to the names of directories, or other symbols to identify the type of elements in the directory:

BASH

$ ls -F

OUTPUT

data/  mags/  taxonomy/

Anything with a “/” after it is a directory. Things with a “*” after them are programs. If there are no decorations, it’s a file.
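A quick way to see these decorations in action is to try ls -F on a scratch directory; all of the names below are invented for illustration and are not part of the workshop data:

```shell
# Make a scratch directory with one subdirectory, one plain file,
# and one executable file, then list them with ls -F.
demo=$(mktemp -d)
cd "$demo"
mkdir results        # a directory: listed with a trailing /
touch notes.txt      # a plain file: listed with no decoration
touch run.sh
chmod +x run.sh      # an executable: listed with a trailing *
ls -F
```

Because the directory is created with mktemp -d, you can experiment freely without touching your own files.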

To understand a little better how to move between folders, let’s see the following image:

Folder organization diagram showing a parent directory called dc_workshop, with three subdirectories called data, mags, and taxonomy. Inside data there is another directory called untrimmed_fastq, and inside taxonomy there is another one called mags_taxonomy.

Here we can see a diagram of how the folders are arranged one inside another. If we want to move from the directory containing dc_workshop down to the untrimmed_fastq folder, the path must name each level in order: cd dc_workshop/data/untrimmed_fastq
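A minimal sketch of this kind of multi-level move, using invented directory names in a scratch location rather than the workshop data:

```shell
# Build a small nested hierarchy and descend three levels with a
# single cd command, naming each level in order.
sandbox=$(mktemp -d)
cd "$sandbox"
mkdir -p dc_demo/data/untrimmed   # hypothetical names for illustration
cd dc_demo/data/untrimmed         # one command, three levels down
pwd                               # ends in .../dc_demo/data/untrimmed
```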

ls has lots of other options. To find out what they are, we can type:

BASH

$ man ls

Some manual files are very long. You can scroll through the file using your keyboard’s down arrow or use the Space key to go forward one page and the b key to go backwards one page. When you are done reading, hit q to quit.

Exercise 1: Extra information with ls -l

Use the -l option for the ls command to display more information for each item in the directory. What is one piece of additional information this long format gives you that you don’t see with the bare ls command?

BASH

$ ls -l

OUTPUT

total 12
drwxr-xr-x 3 dcuser dcuser 4096 Jun  3 17:59 data
drwxrwxr-x 2 dcuser dcuser 4096 Jun  3 18:02 mags
drwxrwxr-x 3 dcuser dcuser 4096 Jun  3 18:25 taxonomy

The additional information given includes the name of the owner of the file, when the file was last modified, and whether the current user has permission to read and write to the file.

No one can possibly learn all of these arguments; that’s what the manual page is for. You can (and should) refer to the manual page or other help files as needed.

Let’s go into the data/untrimmed_fastq directory and see what is in there.

BASH

$ cd data/untrimmed_fastq
$ ls

OUTPUT

JC1A_R1.fastq.gz  JC1A_R2.fastq.gz  JP4D_R1.fastq.gz  JP4D_R2.fastq.gz  TruSeq3-PE.fa

This directory contains a file, TruSeq3-PE.fa, that we will use in a later lesson, and four files with .fastq.gz extensions. FASTQ is a format for storing information about sequencing reads and their quality; the .gz extension means the file has been compressed with gzip. We will be learning more about FASTQ files in a later lesson. These data come in a compressed format, which makes them faster to transfer and lets them take up less space on our computer. Let’s use gunzip to decompress the files so that we can look at the FASTQ format.

BASH

$ gunzip JC1A_R1.fastq.gz  JC1A_R2.fastq.gz  JP4D_R1.fastq.gz  JP4D_R2.fastq.gz
$ ls

OUTPUT

JC1A_R1.fastq  JC1A_R2.fastq  JP4D_R1.fastq  JP4D_R2.fastq  TruSeq3-PE.fa
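If you want to experiment with compression without touching the workshop data, here is a minimal round-trip sketch on a throwaway file (the file name and contents are made up):

```shell
# Round-trip a small text file through gzip and gunzip, and confirm
# the contents survive unchanged.
workdir=$(mktemp -d)
cd "$workdir"
printf 'ACGTACGT\n' > reads.fastq
gzip reads.fastq        # replaces reads.fastq with reads.fastq.gz
ls
gunzip reads.fastq.gz   # restores the original uncompressed file
cat reads.fastq
```

Note that gzip and gunzip replace the file in place: after compression only the .gz file exists, and after decompression only the original.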

Shortcut: Tab Completion

Usually the Tab key is located on the left side of the keyboard, just above the Caps Lock key.

Typing out file or directory names can waste a lot of time and it’s easy to make typing mistakes. Instead we can use tab complete as a shortcut. When you start typing out the name of a directory or file, then hit the Tab key, the shell will try to fill in the rest of the directory or file name.

Return to your home directory:

BASH

$ cd

then enter:

BASH

$ cd dc<tab>

The shell will fill in the rest of the directory name for dc_workshop.

Now change directories to dc_workshop

BASH

$ cd dc_workshop

Using tab complete can be very helpful. However, it will only autocomplete a file or directory name if you’ve typed enough characters to provide a unique identifier for the file or directory you are trying to access.

If we navigate to our data directory and try to access one of our sample files:

BASH

$ cd data/untrimmed_fastq
$ ls JC<tab>

The shell auto-completes your command to JC1A_R, because there are other file names in the directory beginning with this prefix. When you hit Tab again, the shell will list the possible choices.

BASH

$ ls JC1A_R<tab><tab>

OUTPUT

JC1A_R1.fastq  JC1A_R2.fastq

Tab completion can also fill in the names of programs, which can be useful if you remember the beginning of a program name.

BASH

$ pw<tab><tab>

OUTPUT

pwd   pwdx

This displays the name of every program that starts with pw.

Summary


We now know how to move around our file system using the command line. This gives us an advantage over interacting with the file system through a Graphical User Interface (GUI) as it allows us to work on a remote server, carry out the same set of operations on a large number of files quickly, and opens up many opportunities for using bioinformatics software that is only available in command line versions.

In the next few episodes, we’ll be expanding on these skills and seeing how using the command line shell enables us to make our workflow more efficient and reproducible.

Key Points

  • The shell gives you the ability to work more efficiently by using keyboard commands rather than a GUI.
  • Useful commands for navigating your file system include: ls, pwd, and cd.
  • Most commands take options (flags) which begin with a -.
  • Tab completion can reduce errors from mistyping and make work more efficient in the shell.

Content from Navigating Files and Directories


Last updated on 2025-04-06

Overview

Questions

  • How can I perform operations on files outside of my working directory?
  • What are some navigational shortcuts I can use to make my work more efficient?

Objectives

  • Use a single command to navigate multiple steps in your directory structure, including moving backwards (one level up).
  • Perform operations on files in directories outside your working directory.
  • Work with hidden directories and hidden files.
  • Interconvert between absolute and relative paths.
  • Employ navigational shortcuts to move around your file system.

Moving around the file system


We’ve learned how to use pwd to find our current location within our file system. We’ve also learned how to use cd to change locations and ls to list the contents of a directory. Now we’re going to learn some additional commands for moving around within our file system.

Use the commands we’ve learned so far to navigate to the dc_workshop/data/untrimmed_fastq directory, if you’re not already there.

BASH

$ cd
$ cd dc_workshop
$ cd data
$ cd untrimmed_fastq

What if we want to move back up and out of this directory and to our top level directory? Can we type cd dc_workshop? Try it and see what happens.

BASH

$ cd dc_workshop

OUTPUT

-bash: cd: dc_workshop: No such file or directory

Your computer looked for a directory or file called dc_workshop within the directory you were already in. It didn’t know you wanted to look at a directory level above the one you were located in.

We have a special command to tell the computer to move us back or up one directory level.

BASH

$ cd ..

Now we can use pwd to make sure that we are in the directory we intended to navigate to, and ls to check that the contents of the directory are correct.

BASH

$ pwd

OUTPUT

/home/dcuser/dc_workshop/data

From this output, we can see that .. did indeed take us back one level in our file system.

You can chain these together to move several levels:

BASH

$ cd ../../..
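A sketch of this chaining in a scratch hierarchy (directory names invented for illustration):

```shell
# Descend three levels into a scratch hierarchy, then climb all
# three back up with a single chained cd.
top=$(mktemp -d)
mkdir -p "$top/a/b/c"
cd "$top/a/b/c"
cd ../../..   # up three levels in one command
pwd           # back at the top of the scratch hierarchy
```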

Exercise 1: Finding hidden directories

First navigate to the dc_workshop directory. There is a hidden directory within this directory. Explore the options for ls to find out how to see hidden directories. List the contents of the directory and identify the name of the text file in that directory.

Hint: hidden files and folders in Unix start with ., for example .my_hidden_directory

First use the man command to look at the options for ls.

BASH

$ man ls

The -a option is short for “all”; it causes ls to “not ignore entries starting with .”. This is the option we want.

BASH

$ ls -a

OUTPUT

.  ..  data  .hidden  mags  taxonomy

The name of the hidden directory is .hidden. We can navigate to that directory using cd.

BASH

$ cd .hidden

And then list the contents of the directory using ls.

BASH

$ ls

OUTPUT

youfoundit.txt

The name of the text file is youfoundit.txt.

File permissions

Another useful option of the ls command lets us check the permissions on a file. If we are organized and keep a folder with backups of all our files, we can rescue files that we have accidentally deleted. However, just having two copies doesn’t make us safe: we can still accidentally delete or overwrite both. To make sure we can’t accidentally mess up a file, we’re going to change the permissions on the file so that we’re only allowed to read (i.e. view) the file, not write to it (i.e. make new changes).

View the current permissions on a file using the -l (long) flag for the ls command.

BASH

$ ls -l

OUTPUT

total 0
-rw-rw-r-- 1 dcuser dcuser 0 May 27 23:16 youfoundit.txt

The first part of the output for the -l flag gives you information about the file’s current permissions. There are ten slots in the permissions list. The first character in this list is related to file type, not permissions, so we’ll ignore it for now. The next three characters relate to the permissions that the file owner has, the next three relate to the permissions for group members, and the final three characters specify what other users outside of your group can do with the file. We’re going to concentrate on the three positions that deal with your permissions (as the file owner).

File permission parameters: the permission string described in the text (-rw-rw-r--), showing which slots correspond to which users, and a legend for the meaning of the letters.

Here the three positions that relate to the file owner are rw-. The r means that you have permission to read the file, the w indicates that you have permission to write to (i.e. make changes to) the file, and the third position is a -, indicating that you don’t have permission to carry out the ability encoded by that space (this is the space where the x or executable ability is stored; we’ll talk more about this in a later lesson).

Our goal for now is to change permissions on this file so that you no longer have w or write permissions. We can do this using the chmod (change mode) command and subtracting (-) the write permission -w.

BASH

$ chmod -w youfoundit.txt 
$ ls -l 

OUTPUT

total 0
-r--r--r-- 1 dcuser dcuser 0 May 27 23:16 youfoundit.txt
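If you want to double-check the change somewhere safe first, here is a quick sketch on a throwaway file (the filename is made up):

```shell
# Create a file, subtract write permission with chmod -w, and
# inspect the result: the permission string now starts with -r--.
tmp=$(mktemp -d)
touch "$tmp/precious.txt"
chmod -w "$tmp/precious.txt"
ls -l "$tmp/precious.txt"
```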

Absolute vs. relative paths


The cd command takes an argument which is a directory name. Directories can be specified using either a relative path or a full absolute path. The directories on the computer are arranged into a hierarchy. The full path tells you where a directory is in that hierarchy. Navigate to the home directory, then enter the pwd command.

BASH

$ cd  
$ pwd  

You will see:

OUTPUT

/home/dcuser

This is the full name of your home directory. This tells you that you are in a directory called dcuser, which sits inside a directory called home which sits inside the very top directory in the hierarchy. The very top of the hierarchy is a directory called / which is usually referred to as the root directory. So, to summarize: dcuser is a directory in home which is a directory in /.

Now enter the following command:

BASH

$ cd /home/dcuser/dc_workshop/.hidden

This jumps forward multiple levels to the .hidden directory. Now go back to the home directory.

BASH

$ cd 

And then

BASH

$ cd dc_workshop/.hidden

These two commands have the same effect, they both take us to the .hidden directory. The first one uses the absolute path, giving the full address from the home directory. The second uses a relative path, giving only the address from the working directory. A full path always starts with a /. A relative path does not.
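This equivalence can be sketched in a throwaway hierarchy (all names here are invented for illustration; mktemp -d conveniently returns an absolute path):

```shell
# Reach the same directory two ways: once with an absolute path
# (starting with /) and once with a relative path.
base=$(mktemp -d)
mkdir -p "$base/project/.hidden"
cd "$base/project/.hidden"   # absolute path: starts with /
abs=$(pwd -P)
cd "$base"
cd project/.hidden           # relative path: no leading /
rel=$(pwd -P)
[ "$abs" = "$rel" ] && echo "same place"
```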

A relative path is like getting directions from someone on the street. They tell you to “go right at the stop sign, and then turn left on Main Street”. That works great if you’re standing there together, but not so well if you’re trying to tell someone how to get there from another country. A full path is like GPS coordinates. It tells you exactly where something is no matter where you are right now.

You can usually use either a full path or a relative path depending on what is most convenient. If we are in the home directory, it is more convenient to enter the relative path since it involves less typing.

Over time, it will become easier for you to keep a mental note of the structure of the directories that you are using and how to quickly navigate amongst them.

Exercise 2: Relative path resolution

Using the filesystem diagram below, if pwd displays /Users/thing, what will ls ../backup display?

  1. ../backup: No such file or directory
  2. 2012-12-01 2013-01-08 2013-01-27
  3. 2012-12-01/ 2013-01-08/ 2013-01-27/
  4. original pnas_final pnas_sub
Filesystem diagram with folders: Users/thing/backup/2012-12-01, Users/thing/backup/2013-01-08, Users/thing/backup/2013-01-27, Users/backup/original, Users/backup/pnas_final, and Users/backup/pnas_sub
  1. No: there is a directory backup in /Users.
  2. No: this is the content of Users/thing/backup, but with .. we asked for one level further up.
  3. No: see previous explanation. Also, we did not specify -F to display / at the end of the directory names.
  4. Yes: ../backup refers to /Users/backup.

The commands cd and cd ~ are very useful for quickly navigating back to your home directory. We will be using the ~ character in later lessons to specify our home directory.

Key Points

  • The /, ~, and .. characters represent important navigational shortcuts.
  • Hidden files and directories start with . and can be viewed using ls -a.
  • Relative paths specify a location starting from the current location, while absolute paths specify a location from the root of the file system.

Content from Working with Files and Directories


Last updated on 2025-04-06

Overview

Questions

  • How can I view and search file contents?
  • How can I create, copy and delete files and directories?
  • How can I control who has permission to modify a file?
  • How can I repeat recently used commands?

Objectives

  • View, search within, copy, move, and rename files. Create new directories.
  • Use wildcards (*) to perform operations on multiple files.
  • Make a file read-only.
  • Use the history command to view and repeat recently used commands.

Working with Files


Wildcards

Now that we know how to navigate around our directory structure, let’s start working with our sequencing files. We did a sequencing experiment and have four result files, which are stored in our untrimmed_fastq directory.

Navigate to your untrimmed_fastq directory.

BASH

$ cd ~/dc_workshop/data/untrimmed_fastq

We are interested in looking at the FASTQ files in this directory. We can list all files with the .fastq extension using the command:

BASH

$ ls *.fastq

OUTPUT

JC1A_R1.fastq JC1A_R2.fastq JP4D_R1.fastq JP4D_R2.fastq

The * character is a special type of character called a wildcard, which can be used to represent any number of any type of character. Thus, *.fastq matches every file that ends with .fastq.

This command:

BASH

$ ls *R1.fastq

OUTPUT

JC1A_R1.fastq JP4D_R1.fastq

lists only the files that end with R1.fastq.
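You can experiment with wildcards safely in a scratch directory; the file names below are invented for illustration:

```shell
# Wildcards on throwaway files: * matches any run of characters,
# so the pattern selects only the matching names.
wd=$(mktemp -d)
cd "$wd"
touch A_R1.fastq A_R2.fastq B_R1.fastq notes.txt
ls *.fastq     # matches the three .fastq files only
ls *R1.fastq   # matches only the two R1 files
```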

Command History


If you want to repeat a command that you’ve run recently, you can access previous commands using the up arrow on your keyboard to go back to the most recent command. Likewise, the down arrow takes you forward in the command history.

A few more useful shortcuts:

  • Ctrl+C will cancel the command you are writing, and give you a fresh prompt.
  • Ctrl+R will do a reverse-search through your command history. This is very useful.
  • Ctrl+L or the clear command will clear your screen.

You can also review your recent commands with the history command, by entering:

BASH

$ history

to see a numbered list of recent commands. You can reuse one of these commands directly by referring to the number of that command.

For example, if your history looked like this:

OUTPUT

259  ls *
260  ls /usr/bin/*.sh
261  ls *R1*fastq

then you could repeat command #260 by entering:

BASH

$ !260

Type ! (exclamation point) and then the number of the command from your history. You will be glad you learned this when you need to re-run very complicated commands.

Examining Files


We now know how to switch directories, run programs, and look at the contents of directories, but how do we look at the contents of files?

One way to examine a file is to print all of its contents on the screen using the program cat.

BASH

$ cat JC1A_R2.fastq

cat is a terrific program, but, as you just saw if you ran the command, it can be annoying to use when the file is really big (as ours are). You can always use Ctrl+C to stop the command.

The program less is useful in this case. less opens the file as read-only and lets you navigate through it. The navigation commands are identical to those of the man program.

Enter the following command:

BASH

$ less JC1A_R2.fastq

Some navigation commands in less

key     action
Space   to go forward
b       to go backward
g       to go to the beginning
G       to go to the end
q       to quit

less also gives you a way of searching through files. Use the “/” key to begin a search. Enter the word you would like to search for and press enter. The screen will jump to the next location where that word is found.

Shortcut: If you hit “/” then “enter”, less will repeat the previous search. less searches from the current location and works its way forward. Note, if you are at the end of the file and search for the sequence “CAA”, less will not find it. You either need to go to the beginning of the file (by typing g) and search again using / or you can use ? to search backwards in the same way you used / previously.

For instance, let’s search forward for the sequence TTTTT in our file. You can see that we go right to that sequence, what it looks like, and where it is in the file. If you continue to type / and hit return, you will move forward to the next instance of this sequence motif. If you instead type ? and hit return, you will search backwards and move up the file to previous examples of this motif.

Remember, the man program actually uses less internally and therefore uses the same commands, so you can search documentation using “/” as well!

There’s another way that we can look at files, and in this case, just look at part of them. This can be particularly useful if we just want to see the beginning or end of the file, or see how it’s formatted.

The commands are head and tail and they let you look at the beginning and end of a file, respectively.

BASH

$ head JC1A_R2.fastq

OUTPUT

@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:13417:1998 2:N:0:TCGNAG
CGCGATCAGCAGCGGCCCGGAACCGGTCAGCCGCGCCNTGGGGTTCAGCACCGGCNNGGCGAAGGCCGCGATCGCGGCGGCGGCGATCAGGCAGCGCAGCAGCAGGAGCCACCAGGGCGTGCGGTCGGGCGTCCGTTCGGCGTCCTCGCGCCCCAGCAGCAGGCGCACGCCAGGGAATCCGACCCGCCGCCGGCTCGGCCGCGTCNCCCGCNCCCGCCCCCCGAGCACCCGNAGCCNCNCCACCGCCGCCC
+
1>AAADAAFFF1G11AA0000AAFE/AAE0FBAEGGG#B/>EF/EGHHHHHHG?C##???/FE/ECHCE?C<FGGGGCCCGGGG@?AE.BFFEAB-9@@@FFFFFEEEEFBFF--99A-;@B=@A@@?@@>-@@--/B--@--@@-F----;@--:F---9-AB9=-@-9E-99A-;:BF-9-@@-;@-@#############################################################
@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:15782:2187 2:N:0:TCGAAG
CAACCGGCTGATCCTCGACGCCATCGAGGCGACCGGCGCCGGCGCCGACGGGCTGATCACCGCCGCCGAGGTCGTCGCGATCAACGCGGCGATCCGCGGCGACGCGACGCCCCTCGCCGACTTCGTCGACCTGCACGGCGACGACGAGGAGGGCCTCGAGACCGGCTTCCCCCTGATCCAGGGCGACGGCGCCGCGACGCAGCTCGGCGGGTTCCACCCTCCTCACCGGGCCGCCGCCGGCTTCTACCCGA
+
BBBBBBBBDBFFGGFFEEGEFG2FHGFEGCA?EEGCE@EFEEE/EEE@EDCFDCAC2G2CG?CC/CFG?C?DHFCGCGFD-C.0;DFA-AD;AFFFF;DF-BB--@;>9D-@DAD->>=-@-9FFBDCFFFFB?.FE@---;@9@--9@9AD;D?.F.9..AE;C;-;;B.;D##############################################################################
@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:11745:2196 2:N:0:NCGAAG
CGAAAAGCCGCGCGCCGACCTGGGCGTCGAGCGCCGCGCCGCTCCAACGAACGCCAGGCGATCCGAGCGCGGCGGCGATGGCACCCGGATCGAGCCCGGTAAAGTCGGCCCGTAGGTCGAGGCCGCCGCCGCCAGGCGCCACTTCGAGCCGTGGGAGATGCAACGTTAGCGGCGCCGCCCCGTCGGCCGTCTCGAGCAAAATGCGCGTGTCGGTGAGCCGCCGGTGCTCCGGCAACCGCATCCTGCGCCAG

BASH

$ tail JC1A_R2.fastq

OUTPUT

+SRR098026.247 HWUSI-EAS1599_1:2:1:2:1311 length=35
#!##!#################!!!!!!!######
@SRR098026.248 HWUSI-EAS1599_1:2:1:2:118 length=35
GNTGNGGTCATCATACGCGCCCNNNNNNNGGCATG
+SRR098026.248 HWUSI-EAS1599_1:2:1:2:118 length=35
B!;?!A=5922:##########!!!!!!!######
@SRR098026.249 HWUSI-EAS1599_1:2:1:2:1057 length=35
CNCTNTATGCGTACGGCAGTGANNNNNNNGGAGAT
+SRR098026.249 HWUSI-EAS1599_1:2:1:2:1057 length=35
A!@B!BBB@ABAB#########!!!!!!!######

The -n option to either of these commands can be used to print the first or last n lines of a file.

BASH

$ head -n 1 JC1A_R2.fastq

OUTPUT

@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:13417:1998 2:N:0:TCGNAG

BASH

$ tail -n 1 JC1A_R2.fastq

OUTPUT

AAA#>>A#1>AAGGGGGGGG#ABFEFGGHGEFGEGGGEGFHHHGGGGGGGGEEEEEGCG?EGHHHG@CC#??#???FFG############################################################################################################################################################################

Details on the FASTQ format

Since we are learning while using FASTQ files, let’s understand what they are. Although it looks complicated (and it is), it’s easy to understand the FASTQ format with a little decoding. Some rules about the format include:

Line  Description
1     Always begins with ‘@’, followed by information about the read
2     The actual DNA sequence
3     Always begins with ‘+’, and sometimes contains the same information as line 1
4     A string of characters representing the quality scores; must have the same number of characters as line 2
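The four-line structure above can be pulled apart with sed; here is a minimal sketch on a tiny made-up record (the read and quality values are invented for illustration):

```shell
# Write one made-up FASTQ record, then print each of its four lines.
fq=$(mktemp)
printf '@read1 example\nACGTN\n+\n!!#AB\n' > "$fq"
sed -n '1p' "$fq"   # header: begins with @
sed -n '2p' "$fq"   # the DNA sequence
sed -n '3p' "$fq"   # separator: begins with +
sed -n '4p' "$fq"   # quality string, same length as line 2
```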

We can view the first complete read in one of the files in our dataset by using head to look at the first four lines.

BASH

$ head -n 4 JC1A_R2.fastq

OUTPUT

@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:13417:1998 2:N:0:TCGNAG
CGCGATCAGCAGCGGCCCGGAACCGGTCAGCCGCGCCNTGGGGTTCAGCACCGGCNNGGCGAAGGCCGCGATCGCGGCGGCGGCGATCAGGCAGCGCAGCAGCAGGAGCCACCAGGGCGTGCGGTCGGGCGTCCGTTCGGCGTCCTCGCGCCCCAGCAGCAGGCGCACGCCAGGGAATCCGACCCGCCGCCGGCTCGGCCGCGTCNCCCGCNCCCGCCCCCCGAGCACCCGNAGCCNCNCCACCGCCGCCC
+
1>AAADAAFFF1G11AA0000AAFE/AAE0FBAEGGG#B/>EF/EGHHHHHHG?C##???/FE/ECHCE?C<FGGGGCCCGGGG@?AE.BFFEAB-9@@@FFFFFEEEEFBFF--99A-;@B=@A@@?@@>-@@--/B--@--@@-F----;@--:F---9-AB9=-@-9E-99A-;:BF-9-@@-;@-@#############################################################

Most of the nucleotides were identified successfully, although we have some unknown bases (N). This is actually a good read!

Line 4 shows the quality for each nucleotide in the read. Quality is interpreted as the probability of an incorrect base call (e.g. 1 in 10) or, equivalently, the base call accuracy (e.g. 90%). To make it possible to line up each individual nucleotide with its quality score, the numerical score is converted into a code where each individual character represents the numerical quality score for an individual nucleotide. For example, the quality line of the read shown above begins:

OUTPUT

1>AAADAAFFF1G11AA0000AAFE/AAE0FBAEGGG#B

Each of these characters encodes the quality of one nucleotide. The numerical value assigned to each character depends on the sequencing platform that generated the reads. The sequencing machine used to generate our data uses the standard Sanger quality PHRED score encoding, Illumina version 1.8 onwards. Each character is assigned a quality score between 0 and 42 as shown in the chart below.

OUTPUT

Quality encoding: !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJK
                  |         |         |         |         |
Quality score:    0........10........20........30........40..

Each quality score represents the probability that the corresponding nucleotide call is incorrect. This quality score is logarithmic, so a quality score of 10 reflects a base call accuracy of 90%, while a quality score of 20 reflects a base call accuracy of 99%. These probability values are produced by the base calling algorithm and depend on how much signal was captured for the base incorporation.
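The character-to-score arithmetic can be sketched directly in the shell. The encoding described above (Sanger/Illumina 1.8+, often called PHRED+33) offsets each score by 33, the ASCII code of the ! character:

```shell
# Convert a quality character to its PHRED score: the character's
# ASCII code minus 33 (PHRED+33 encoding).
char='#'
ascii=$(printf '%d' "'$char")   # ASCII code of the character (35 for #)
score=$((ascii - 33))
echo "$char has quality score $score"   # quality score 2
```

The leading single quote in the printf format argument is a POSIX idiom for obtaining a character's numeric code.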

Looking back at our read:

OUTPUT

@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:13417:1998 2:N:0:TCGNAG
CGCGATCAGCAGCGGCCCGGAACCGGTCAGCCGCGCCNT
+
1>AAADAAFFF1G11AA0000AAFE/AAE0FBAEGGG#B

We can now see that the quality of each of the Ns is 0 and the quality of the only nucleotide call (C) is also very poor (# = a quality score of 2). This is indeed a very bad read.

Creating, moving, copying, and removing


Now we can move around in the file structure, look at files, and search files. But what if we want to copy files or move them around or get rid of them? Most of the time, you can do these sorts of file manipulations without the command line, but there will be some cases (like when you’re working with a remote computer like we are for this lesson) where it will be impossible. You’ll also find that you may be working with hundreds of files and want to do similar manipulations to all of those files. In cases like this, it’s much faster to do these operations at the command line.

Copying Files

When working with computational data, it’s important to keep a safe copy of that data that can’t be accidentally overwritten or deleted. For this lesson, our raw data is our FASTQ files. We don’t want to accidentally change the original files, so we’ll make a copy of them and change the file permissions so that we can read from, but not write to, the files.

First, let’s make a copy of one of our FASTQ files using the cp command.

Navigate to the /home/dcuser/dc_workshop/data/untrimmed_fastq directory and enter:

BASH

$ cp JC1A_R2.fastq JC1A_R2-copy.fastq
$ ls -F

OUTPUT

JC1A_R1.fastq  JC1A_R2-copy.fastq  JC1A_R2.fastq  JP4D_R1.fastq  JP4D_R2.fastq  TruSeq3-PE.fa

We now have two copies of the JC1A_R2.fastq file, one of them named JC1A_R2-copy.fastq. We’ll move this file to a new directory called backup where we’ll store our backup data files.

Creating Directories

The mkdir command is used to make a directory. Enter mkdir followed by a space, then the directory name you want to create.

BASH

$ mkdir backup

Moving / Renaming

We can now move our backup file to this directory. We can move files around using the command mv.

BASH

$ mv JC1A_R2-copy.fastq backup
$ ls backup

OUTPUT

JC1A_R2-copy.fastq

The mv command is also how you rename files. Let’s rename this file to make it clear that this is a backup.

BASH

$ cd backup
$ mv JC1A_R2-copy.fastq JC1A_R2-backup.fastq
$ ls

OUTPUT

JC1A_R2-backup.fastq

Removing

When we want to remove a file or a directory we use the rm command. By default, rm will not delete directories. You can tell rm to delete a directory using the -r (recursive) option.

Let’s delete the backup directory we just made.

BASH

$ cd ..
$ rm -r backup

This will delete not only the directory, but all files within the directory. If you have write-protected files in the directory, you will be asked whether you want to override your permission settings.

If we try to modify or remove a file without write permissions, we will be asked whether we want to override the file's permissions. For example:

OUTPUT

rm: remove write-protected regular file 'example.fastq'? 

If you enter n (for no), the file will not be deleted. If you enter y, you will delete the file. This gives us an extra measure of security, as there is one more step between us and deleting our data files.
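You can try this safely with a throwaway file. This is a sketch; example.fastq here is an empty file created just for the demonstration:

```shell
touch example.fastq        # create an empty throwaway file
chmod -w example.fastq     # remove write permission
rm example.fastq           # at an interactive terminal, rm now asks:
                           # rm: remove write-protected regular file 'example.fastq'?
```

At the prompt, entering y deletes the file and n keeps it. Note that in a non-interactive context (a script, or with input redirected), rm removes the file without prompting unless you pass the -i option.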

Important: The rm command permanently removes the file. Be careful with this command. It doesn’t just nicely put the files in the Trash. They’re really gone.

Exercise 1: Make backup folder with write-protected permissions

Starting in the /home/dcuser/dc_workshop/data/untrimmed_fastq directory, do the following:

  1. Make sure that you have deleted your backup directory and all files it contains.
  2. Create a copy of each of your FASTQ files. (Note: You’ll need to do this individually for each of the two FASTQ files. We haven’t learned yet how to do this with a wildcard.)
  3. Use a wildcard to move all of your backup files to a new backup directory.
  4. Change the permissions on all of your backup files to be write-protected.
  1. rm -r backup
  2. cp JC1A_R1.fastq JC1A_R1-backup.fastq, cp JC1A_R2.fastq JC1A_R2-backup.fastq, cp JP4D_R1.fastq JP4D_R1-backup.fastq
    and cp JP4D_R2.fastq JP4D_R2-backup.fastq
  3. mkdir backup and mv *-backup.fastq backup
  4. chmod -w backup/*-backup.fastq
    It’s always a good idea to check your work with ls -l backup. You should see something like:

OUTPUT

-r--r--r-- 1 dcuser dcuser  24203913 Jun 17 23:08 JC1A_R1-backup.fastq
-r--r--r-- 1 dcuser dcuser  24917444 Jun 17 23:10 JC1A_R2-backup.fastq
-r--r--r-- 1 dcuser dcuser 186962503 Jun 17 23:10 JP4D_R1-backup.fastq
-r--r--r-- 1 dcuser dcuser 212161034 Jun 17 23:10 JP4D_R2-backup.fastq

Key Points

  • You can view file contents using less, cat, head or tail.
  • The commands cp, mv, and mkdir are useful for manipulating existing files and creating new directories.
  • You can view file permissions using ls -l and change permissions using chmod.
  • The history command and the up arrow on your keyboard can be used to repeat recently used commands.

Content from Redirection


Last updated on 2025-04-06 | Edit this page

Overview

Questions

  • How can I search within files?
  • How can I combine existing commands to do new things?

Objectives

  • Employ the grep command to search for information within files.
  • Print the results of a command to a file.
  • Construct command pipelines with two or more stages.

Searching files


We discussed in a previous episode how to search within a file using less. We can also search within files without even opening them, using grep. grep is a command-line utility for searching plain-text files for lines matching a specific set of characters (sometimes called a string) or a particular pattern (which can be specified using something called regular expressions). We’re not going to work with regular expressions in this lesson, and are instead going to specify the strings we are searching for. Let’s give it a try!

Nucleotide abbreviations

The four nucleotides that appear in DNA are abbreviated A, C, T and G. Unknown nucleotides are represented with the letter N. An N appearing in a sequencing file represents a position where the sequencing machine was not able to confidently determine the nucleotide in that position. You can think of an N as being aNy nucleotide at that position in the DNA sequence.

We’ll search for strings inside of our fastq files. Let’s first make sure we are in the correct directory.

BASH

$ cd ~/dc_workshop/data/untrimmed_fastq
$ ls  

OUTPUT

JC1A_R1.fastq   JC1A_R2.fastq     JP4D_R1.fastq     JP4D_R2.fastq  TruSeq3-PE.fa

Suppose we want to see how many reads in our file have really bad segments containing 10 consecutive unknown nucleotides (Ns).

Determining quality

In this lesson, we’re going to be manually searching for strings of Ns within our sequence results to illustrate some principles of file searching. It can be really useful to do this type of searching to get a feel for the quality of your sequencing results, however, in your research you will most likely use a bioinformatics tool that has a built-in program for filtering out low-quality reads. You’ll learn how to use one such tool in a later lesson.

Let’s search for the string NNNNNNNNNN in the JC1A_R2.fastq file.

BASH

$ grep NNNNNNNNNN JC1A_R2.fastq

This command returns a lot of output to the terminal. Every single line in the JC1A_R2.fastq file that contains at least 10 consecutive Ns is printed to the terminal, regardless of how long or short the file is. We may be interested not only in the actual sequence which contains this string, but in the name (or identifier) of that sequence. We discussed in a previous lesson that the identifier line immediately precedes the nucleotide sequence for each read in a FASTQ file. We may also want to inspect the quality scores associated with each of these reads. To get all of this information, we will return the line immediately before each match and the two lines immediately after each match.

We can use the -B argument for grep to return a specific number of lines before each match. The -A argument returns a specific number of lines after each matching line. Here we want the line before and the two lines after each matching line, so we add -B1 -A2 to our grep command.

BASH

$ grep -B1 -A2 NNNNNNNNNN JC1A_R2.fastq

One of the sets of lines returned by this command is:

OUTPUT

@MISEQ-LAB244-W7:91:000000000-A5C7L:1:2111:5300:24013 2:N:0:TCGAAG
NNNNNNNNNNNCNANNANNNNNCGCCGGTGTTCTNCTGGGGNACGGANACCGAGTAGATCGGAACAGCGTCGTGGAGNGAAAGAGTGTAGATCCCGGTGGGCGGCGTATCATTAAAAAAAAAACCTGCTGGTCCTTGTCTC
+
AAA11BB3333BGG1GGEC1E?0E0B0BFDGFHD2FBH110A1BEE?A/BAFBDGH///>FEGGG><@/#//?#?/#//????########################################################################################################################################################################

Exercise 1: Using grep

  1. Search for the sequence GATCGAGAGGGGATAGGCG in the JC1A_R2.fastq file. Have your search return all matching lines and the name (or identifier) for each sequence that contains a match.

  2. Search for the sequence AAGTT in all FASTQ files. Have your search return all matching lines and the name (or identifier) for each sequence that contains a match.

1. To search for the GATCGAGAGGGGATAGGCG sequence in the file JC1A_R2.fastq:

BASH

$ grep -B1 GATCGAGAGGGGATAGGCG JC1A_R2.fastq

The output shows all of the lines that contain the sequence GATCGAGAGGGGATAGGCG. Because the -B1 flag is used, it also shows the line preceding each occurrence. In a FASTQ file the identifier of each sequence is the line above the sequence itself, so in this example you can see both the names and the sequences that match your query.

2. To search for a sequence in all of the FASTQ files you can use the asterisk (*) wildcard before the file extension .fastq:

BASH

$ grep -B1 AAGTT *.fastq

In this case, the lines with the sequence AAGTT are shown for all of the files that end with .fastq in the current directory. When grep searches multiple files, each line of output is prefixed with the name of the file it came from, followed by a colon, so you can tell which file each line belongs to.

Redirecting output


grep allowed us to identify sequences in our FASTQ files that match a particular pattern. All of these sequences were printed to our terminal screen, but in order to work with these sequences and perform other operations on them, we will need to capture that output in some way.

We can do this with something called “redirection”. The idea is that we are taking what would ordinarily be printed to the terminal screen and redirecting it to another location. In our case, we want to print this information to a file so that we can look at it later and use other commands to analyze this data.

The command for redirecting output to a file is >.

Let’s try out this command and copy all the records (including all four lines of each record) in our FASTQ files that contain ‘NNNNNNNNNN’ to another file called bad_reads.txt.

BASH

$ grep -B1 -A2 NNNNNNNNNN JC1A_R2.fastq > bad_reads.txt

The prompt should sit there a little bit, and then it should look like nothing happened. But type ls. You should see a new file called bad_reads.txt.

We can check the number of lines in our new file using a command called wc. wc stands for word count. This command counts the number of words, lines, and characters in a file.

BASH

$ wc bad_reads.txt

OUTPUT

  402   489 50076 bad_reads.txt

This will tell us the number of lines, words and characters in the file. If we want only the number of lines, we can use the -l flag for lines.

BASH

$ wc -l bad_reads.txt

OUTPUT

402 bad_reads.txt

Because we asked grep for all four lines of each FASTQ record, we need to divide the number of output lines by four to estimate the number of sequences that match our search pattern.
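The division can be done with the shell's arithmetic expansion. This is a sketch, assuming bad_reads.txt was created by the grep command above; note that grep with -B/-A also inserts "--" separator lines between non-adjacent groups of matches, so the result is an approximation:

```shell
lines=$(wc -l < bad_reads.txt)   # number of lines in the file
echo $((lines / 4))              # approximate number of FASTQ records
```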

Exercise 2: Using wc

How many sequences in JC1A_R2.fastq contain at least 3 consecutive Ns?

BASH

$ grep NNN JC1A_R2.fastq > bad_reads.txt
$ wc -l bad_reads.txt

OUTPUT

596 bad_reads.txt

We might want to search multiple FASTQ files for sequences that match our search pattern. However, we need to be careful, because each time we use the > command to redirect output to a file, the new output will replace the output that was already present in the file. This is called “overwriting” and, just like you don’t want to overwrite your video recording of your kid’s first birthday party, you also want to avoid overwriting your data files.

BASH

$ grep -B1 -A2 NNNNNNNNNN JC1A_R1.fastq > bad_reads.txt
$ wc -l bad_reads.txt

OUTPUT

24 bad_reads.txt

The old bad_reads.txt file, which held the 402 lines of bad-quality reads from JC1A_R2.fastq, has been erased. In its place is a new bad_reads.txt containing the 24 lines of bad reads from JC1A_R1.fastq. We can avoid overwriting our files by using the command >>. >> is known as the “append redirect” and will append new output to the end of a file, rather than overwriting it.

BASH

$ grep -B1 -A2 NNNNNNNNNN JC1A_R2.fastq > bad_reads.txt
$ wc -l bad_reads.txt

OUTPUT

402 bad_reads.txt

BASH

$ grep -B1 -A2 NNNNNNNNNN JC1A_R1.fastq >> bad_reads.txt
$ wc -l bad_reads.txt

OUTPUT

426 bad_reads.txt

The output of our second call to wc shows that we have not overwritten our original data. The final count of 426 lines is the sum of the 402 lines from the JC1A_R2.fastq file and the 24 lines from the JC1A_R1.fastq file. We can also do this for more files with a single line of code by using a wildcard.

BASH

$ rm bad_reads.txt

BASH

$ grep -B1 -A2 NNNNNNNNNN *.fastq >> bad_reads.txt
$ wc -l bad_reads.txt

OUTPUT

427 bad_reads.txt

Since we might have multiple different criteria we want to search for, creating a new output file each time has the potential to clutter up our workspace. So far we also haven't been interested in the actual contents of those files, only in the number of reads that we've found. We created the files to store the reads and then counted the lines in the file to see how many reads matched our criteria. There's a way to do this, however, that doesn't require us to create these intermediate files - the pipe (|).

This is probably not a key on your keyboard you use very much, so let's all take a minute to find it. What | does is take the output that would otherwise scroll by on the terminal and use it as the input to another command. When our output was scrolling by, we might have wished we could slow it down and look at it, like we can with less. It turns out that we can! We can redirect the output from our grep call through the less command.

BASH

$ grep -B1 -A2 NNNNNNNNNN JC1A_R2.fastq | less

We can now see the output from our grep call within the less interface. We can use the up and down arrows to scroll through the output and use q to exit less.

Redirecting output is often not intuitive, and can take some time to get used to. Once you’re comfortable with redirection, however, you’ll be able to combine any number of commands to do all sorts of exciting things with your data!

None of the command line programs we’ve been learning do anything all that impressive on their own, but when you start chaining them together, you can do some really powerful things very efficiently.
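For example, the bad_reads.txt intermediate file from earlier becomes unnecessary once we pipe grep straight into wc. This is a sketch, assuming the JC1A_R2.fastq file from this lesson is in the current directory:

```shell
# Count the matching lines (and their context lines) without writing them to a file first:
grep -B1 -A2 NNNNNNNNNN JC1A_R2.fastq | wc -l
```

The count is produced in one step, with nothing left on disk to clean up afterwards.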

Writing for loops


Loops are key to productivity improvements through automation as they allow us to execute commands repeatedly. Similar to wildcards and tab completion, using loops also reduces the amount of typing (and typing mistakes). Loops are helpful when performing operations on groups of sequencing files, such as unzipping or trimming multiple files. We will use loops for these purposes in subsequent analyses, but will cover the basics of them for now.

When the shell sees the keyword for, it knows to repeat a command (or group of commands) once for each item in a list. Each time the loop runs (called an iteration), an item in the list is assigned in sequence to the variable, and the commands inside the loop are executed, before moving on to the next item in the list. Inside the loop, we call for the variable’s value by putting $ in front of it. The $ tells the shell interpreter to treat the variable as a variable name and substitute its value in its place, rather than treat it as text or an external command. In shell programming, this is usually called “expanding” the variable.

BASH

$ cd ../untrimmed_fastq/

Let’s write a for loop to show us the first two lines of the FASTQ files we downloaded earlier. You will notice that the shell prompt changes from $ to > and back again as we type in our loop. The second prompt, >, is different to remind us that we haven’t finished typing a complete command yet. A semicolon, ;, can be used to separate two commands written on a single line.

BASH

$ for filename in *.fastq
> do
> head -n 2 ${filename} >> seq_info.txt
> done

To see the contents of the file we just made, we can use the cat command.

BASH

$ cat seq_info.txt

OUTPUT

@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:13417:1998 1:N:0:TCGNAG
CTACGGCGCCATCGGCGNCCCCGGACGGTAGGAGACGGCGATGCTGGCCCTCGGCGCGGTCGCGTTCCTGAACCCCTGGCTGCTGGCGGCGCTCGCGGCGCTGCCGGTGCTCTGGGTGCTGCTGCGGGCGACGCCGCCGAGCCCGCGGCGGGTCGGATTCCCCGGCGTGCGCCCCCCGCTCCGGCTCGAGGACGCCGCACCGACGCCCCACCCCCCCCCCCGGTGGCTCCTCCTGCCGCCCTGCCTGATCC
@MISEQ-LAB244-W7:91:000000000-A5C7L:1:1101:13417:1998 2:N:0:TCGNAG
CGCGATCAGCAGCGGCCCGGAACCGGTCAGCCGCGCCNTGGGGTTCAGCACCGGCNNGGCGAAGGCCGCGATCGCGGCGGCGGCGATCAGGCAGCGCAGCAGCAGGAGCCACCAGGGCGTGCGGTCGGGCGTCCGTTCGGCGTCCTCGCGCCCCAGCAGCAGGCGCACGCCAGGGAATCCGACCCGCCGCCGGCTCGGCCGCGTCNCCCGCNCCCGCCCCCCGAGCACCCGNAGCCNCNCCACCGCCGCCC
@MISEQ-LAB244-W7:156:000000000-A80CV:1:1101:12622:2006 1:N:0:CTCAGA
CCCGTTCCTCGGGCGTGCAGTCGGGCTTGCGGTCTGCCATGTCGTGTTCGGCGTCGGTGGTGCCGATCAGGGTGAAATCCGTCTCGTAGGGGATCGCGAAGATGATCCGCCCGTCCGTGCCCTGAAAGAAATAGCACTTGTCAGATCGGAAGAGCACACGTCTGAACTCCAGTCACCTCAGAATCTCGTATGCCGTCTTCTGCTTGAAAAAAAAAAAAGCAAACCTCTCACTCCCTCTACTCTACTCCCTT
@MISEQ-LAB244-W7:156:000000000-A80CV:1:1101:12622:2006 2:N:0:CTCAGA
GACAAGTGCTATTTCTTTCAGGGCACGGACGGGCGGATCATCTTCGCGATCCCCTACGAGACGGATTTCACCCTGATCGGCACCACCGACGCCGAACACGACATGGCAGACCGCAAGCCCGACTGCACGCCCGAGGAACGGGAGATCGGAAGAGCGTCGTGTAGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATTAAAAAAAAAAAGCGATCAACTCGACCGACCTGTCTTATTATATCTCACGTAA

The for loop begins with the formula for <variable> in <group to iterate over>. In this case, the word filename is designated as the variable to be used over each iteration. In our case, JC1A_R1.fastq, JC1A_R2.fastq, JP4D_R1.fastq, and JP4D_R2.fastq will each be substituted for filename because they fit the pattern of ending with .fastq in the directory we’ve specified. The next line of the for loop is do. The line after that is the code that we want to execute: we are telling the loop to print the first two lines of each file we iterate over and append that information to a file. Finally, the word done ends the loop.

Note that we are using >> to append the text to our seq_info.txt file. If we used >, the seq_info.txt file would be rewritten every time the loop iterates, so it would only have text from the last variable used. Instead, >> adds to the end of the file.
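The difference between the two redirects inside a loop is easy to demonstrate with throwaway files. This sketch uses echo instead of head; overwrite.txt and append.txt are names invented just for the demonstration:

```shell
rm -f overwrite.txt append.txt               # start from a clean slate
# With > the file is truncated on every iteration, so only the last value survives:
for i in 1 2 3; do echo $i > overwrite.txt; done
cat overwrite.txt                            # only the last value, 3
# With >> each iteration appends, so all three values are kept:
for i in 1 2 3; do echo $i >> append.txt; done
cat append.txt                               # 1, 2, 3 on separate lines
```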

Using Basename in for loops


Basename is a function in UNIX that is helpful for removing a uniform part of a name from a list of files. In this case, we will use basename to remove the .fastq extension from the files that we’ve been working with.

BASH

$ basename JC1A_R2.fastq .fastq

We see that this returns just the file prefix, JC1A_R2, which no longer has the .fastq file extension on it.

OUTPUT

JC1A_R2

If we try the same thing but use .fasta as the file extension instead, the extension is not removed. This is because basename only removes the suffix when it exactly matches the end of the file name.

BASH

$ basename JC1A_R2.fastq .fasta

OUTPUT

JC1A_R2.fastq

Basename is really powerful when used in a for loop. It allows us to access just the file prefix, which you can use to name things. Let’s try this.

Inside our for loop, we create a new variable called name. We call the basename function inside the parentheses $( ), give it our variable from the for loop, in this case ${filename}, and finally state that .fastq should be removed from the file name. It’s important to note that we’re not changing the actual files; we’re creating a new variable called name. The line echo ${name} will print the value of name to the terminal each time the for loop runs. Because we are iterating over four files, we expect to see four lines of output.

BASH

$ for filename in *.fastq
> do
> name=$(basename ${filename} .fastq)
> echo ${name}
> done

OUTPUT

JC1A_R1
JC1A_R2
JP4D_R1
JP4D_R2

Exercise 3: Using basename

Print the file prefix of all of the .txt files in our current directory.

BASH

$ for filename in *.txt
> do
> name=$(basename ${filename} .txt)
> echo ${name}
> done

One way this is really useful is for renaming files. Let’s rename all of our .txt files using mv so that they include the year, which will document when we created them.

BASH

$ for filename in *.txt
> do
> name=$(basename ${filename} .txt)
> mv ${filename}  ${name}_2019.txt
> done

Key Points

  • grep is a powerful search tool with many options for customization.
  • >, >>, and | are different ways of redirecting output.
  • command > file redirects a command’s output to a file.
  • command >> file redirects a command’s output to a file without overwriting the existing contents of the file.
  • command_1 | command_2 redirects the output of the first command as input to the second command.
  • for loops are used for iteration
  • basename gets rid of repetitive parts of names

Content from Writing Scripts and Working with Data


Last updated on 2025-04-06 | Edit this page

Overview

Questions

  • How can we automate a commonly used set of commands?
  • How can we transfer files between local and remote computers?

Objectives

  • Use the nano text editor to modify text files.
  • Write a basic shell script.
  • Use the bash command to execute a shell script.
  • Use chmod to make a script an executable program.

Writing files


We have been able to do much work with existing files, but what if we want to write our own files? We are not going to type in a FASTA file, but we will see as we go through other tutorials; there are many reasons we will want to write a file or edit an existing file.

We will use a text editor called Nano to add text to files. We are going to create a file to take notes about what we have been doing with the data files in ~/dc_workshop/data/untrimmed_fastq.

Taking notes is good practice when working in bioinformatics. We can create a file called README.txt that describes the data files in the directory or documents how the files in that directory were generated. As the name suggests, it is a file that others should read to understand the information in that directory.

Let’s change our working directory to ~/dc_workshop/data/untrimmed_fastq using cd, then run nano to create a file called README.txt:

BASH

$ cd ~/dc_workshop/data/untrimmed_fastq
$ nano README.txt

You should see something like this:

nano screen with the name of the file in the top bar, a blank screen to write in the middle, and a bottom bar with the shortcuts for the available nano instructions. Figure 1. GNU Nano Text Editor Menu.

The text at the bottom of the screen shows the keyboard shortcuts for performing various tasks in nano. We will talk more about how to interpret this information soon.

Which Editor?

When we say, “nano is a text editor,” we really do mean “text”: it can only work with plain character data, not tables, images, or any other human-friendly media. We use it in examples because it is one of the least complex text editors. However, because of this trait, it may not be powerful enough or flexible enough for the work you need to do after this workshop. On Unix systems (such as Linux and Mac OS X), many programmers use Emacs or Vim (both of which require more time to learn), or a graphical editor such as Gedit. On Windows, you may wish to use Notepad++. Windows also has a built-in editor called notepad that can be run from the command line in the same way as nano for the purposes of this lesson.

No matter what editor you use, you need to know where it searches for and saves files. If you start it from the shell, it will (probably) use your current working directory as its default location. If you use your computer’s start menu, it may want to save files in your desktop or documents directory instead. You can change this by navigating to another directory the first time you “Save As…”

Let us type in a few lines of text, describing the files in this directory or what you have been doing with them. The same screen as before, but now with text in the middle part. Figure 2. An example README file written in nano.

Once we are happy with our text, we can press Ctrl-O (press the Ctrl or Control key and, while holding it down, press the O key) to write our data to disk. You will be asked what file we want to save this to: Press Return to accept the suggested default of README.txt.

Once our file is saved, we can use Ctrl-X to quit the editor and return to the shell.

Control, Ctrl, or ^ Key

The Control key is also called the “Ctrl” key. There are various ways in which using the Control key may be described. For example, you may see an instruction to press the Ctrl key and, while holding it down, press the X key, described as any of:

  • Control-X
  • Control+X
  • Ctrl-X
  • Ctrl+X
  • ^X
  • C-x

In nano, along the bottom of the screen, you will see ^G Get Help ^O WriteOut. This means that you can use Ctrl-G to get help and Ctrl-O to save your file.

Now you have written a file. You can look at it with less or cat, or open it up again and edit it with nano.

Exercise 1: Edit a file with nano

Open README.txt, add the date to the top of the file, and save the file.

Use nano README.txt to open the file. Add today's date at the top, then use Ctrl-X to exit, y to confirm saving, and Return to accept the suggested file name.

Writing scripts


A really powerful thing about the command line is that you can write scripts. Scripts let you save commands to run them and also lets you put multiple commands together. Though writing scripts may require an additional time investment initially, this can save you time as you run them repeatedly. Scripts can also address the challenge of reproducibility: if you need to repeat analysis, you retain a record of your command history within the script.

One thing we will commonly want to do with sequencing results is pull out bad reads and write them to a file to see if we can figure out what is going on with them. We are going to look for reads with long sequences of N’s like we did before, but now we are going to write a script, so we can run it each time we get new sequences rather than type the code in by hand each time.

Bad reads have a lot of N’s, so we are going to look for NNNNNNNNNN with grep. We want the whole FASTQ record, so we are also going to get the one line above the sequence and the two lines below. We also want to look at all the files that end with .fastq, so we will use the * wildcard.

BASH

grep -B1 -A2 NNNNNNNNNN *.fastq > scripted_bad_reads.txt

We are going to create a new file to put this command in. We will call it bad-reads-script.sh. The sh is not required, but using that extension tells us it is a shell script.

BASH

$ nano bad-reads-script.sh

Type your grep command into the file and save it as before. Be careful not to add the $ at the beginning of the line.
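The saved file might contain just the single grep line. An optional first line, #!/bin/bash (called a "shebang"), can be added to record which interpreter the script is written for; it is not required when you run the script with bash explicitly:

```shell
#!/bin/bash
# bad-reads-script.sh: save all FASTQ records containing 10 consecutive Ns
grep -B1 -A2 NNNNNNNNNN *.fastq > scripted_bad_reads.txt
```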

Now comes the neat part. We can run this script. Type:

BASH

$ bash bad-reads-script.sh

It will look like nothing happened, but now if you look at scripted_bad_reads.txt, you can see that there are now reads in the file.

Exercise 2: Edit a script

We want the script to tell us when it is done.

BASH

1. Open `bad-reads-script.sh` and add the line `echo "Script finished!"` after the `grep` command and save the file.  
2. Run the updated script.

Making the script into a program


We had to type bash because we needed to tell the computer what program to use to run this script. Instead, we can turn this script into its own program. We need to tell it that it is a program by making it executable. We can do this by changing the file permissions. We talked about permissions in an earlier episode.

First, let us look at the current permissions.

BASH

$ ls -l bad-reads-script.sh

OUTPUT

-rw-rw-r-- 1 dcuser dcuser 0 Oct 25 21:46 bad-reads-script.sh

We see that it says -rw-rw-r--. This combination shows that the file can be read by any user and written to by the file owner (you) and members of the owner's group. We want to change these permissions so the file can be executed as a program. We use the command chmod as we did earlier when we removed write permissions. Here we are adding (+) executable permissions (+x).

BASH

$ chmod +x bad-reads-script.sh

Now let us look at the permissions again.

BASH

$ ls -l bad-reads-script.sh

OUTPUT

-rwxrwxr-x 1 dcuser dcuser 0 Oct 25 21:46 bad-reads-script.sh

Now we see that it says -rwxrwxr-x. The x’s tell us we can now run it as a program. So, let us try it! We will need to put ./ at the beginning, so the computer knows to look here in this directory for the program.

BASH

$ ./bad-reads-script.sh

The script should run the same way as before, but now we have created our own computer program!

It is good practice to keep any large files compressed while you are not using them. This saves storage space, which you will appreciate more and more as your analysis grows. Since we will not use the FASTQ files for now, let us compress them, then run ls -lh to confirm that they are compressed.

BASH

$ gzip ~/dc_workshop/data/untrimmed_fastq/*.fastq
$ ls -lh  ~/dc_workshop/data/untrimmed_fastq/*.fastq.gz

OUTPUT

total 428M
-rw-r--r-- 1 dcuser dcuser  24M Nov 26 12:36 JC1A_R1.fastq.gz
-rw-r--r-- 1 dcuser dcuser  24M Nov 26 12:37 JC1A_R2.fastq.gz
-rw-r--r-- 1 dcuser dcuser 179M Nov 26 12:44 JP4D_R1.fastq.gz
-rw-r--r-- 1 dcuser dcuser 203M Nov 26 12:51 JP4D_R2.fastq.gz

Moving and downloading data


So far, we have worked with pre-loaded data on the instance in the cloud. Usually, however, most analyses begin with moving data onto the instance. Below we will show you some commands to download data onto your instance or to move data between your computer and the cloud.

Getting data from the cloud

Two programs will download data from a remote server to your local (or remote) machine: wget and curl. They were designed to do slightly different tasks by default, so you will need to give the programs somewhat different options to get the same behavior, but they are mostly interchangeable.

  • wget is short for “world wide web get”, and its basic function is to download files from a web address.

  • cURL is a pun. It is supposed to be read as “see URL”, so its primary function is to display webpages or data at a web address.

Which one you need to use mainly depends on your operating system, as most computers will only have one or the other installed by default.

Let us say you want to download some data from Ensembl. We will download a tiny tab-delimited file that tells us what data is available on the Ensembl bacteria server. Before starting our download, we need to know whether we are using curl or wget.

To see which program you have, type:

BASH

$ which curl
$ which wget

which is a BASH program that searches the directories where programs are installed and tells you which folder the program you asked for is in. If it cannot find the program, it returns nothing, i.e., it gives you no results.

On Mac OSX, you will likely get the following output:

BASH

$ which curl

OUTPUT

/usr/bin/curl

BASH

$ which wget

OUTPUT

$

This output means that you have curl installed but not wget.

Once you know whether you have curl or wget use one of the following commands to download the file:

BASH

$ cd
$ wget ftp://ftp.ensemblgenomes.org/pub/release-37/bacteria/species_EnsemblBacteria.txt

or

BASH

$ cd
$ curl -O ftp://ftp.ensemblgenomes.org/pub/release-37/bacteria/species_EnsemblBacteria.txt

Since we wanted to download the file rather than view it, we used wget without any modifiers. With curl, however, we had to use the -O flag, which simultaneously tells curl to download the page instead of displaying it and to save the file using the same name it had on the server: species_EnsemblBacteria.txt.

It’s important to note that both curl and wget download to the computer that the command line belongs to. So, if you are logged into AWS on the command line and execute the curl command above in the AWS terminal, the file will be downloaded to your AWS machine, not your local one.

Moving files between your laptop and your instance

What if the data you need is on your local computer, but you need to get it into the cloud? There are several ways to do this. While following this lesson, you may be using the RStudio interface containing a terminal, some other terminal, or your own local computer. Depending on your setup, there are several ways to transfer the files. Here we describe how to use the RStudio interface to transfer files.

Transferring files scenarios

  1. If you are working on your local computer, there is no need to transfer files because you already have them locally.
    In that case, you only need to know the directory you are working in.
  2. If you are working on a remote machine such as an AWS instance, you can use the scp command. In that case, it is always easier to start the transfer locally. If you are typing into a terminal, the terminal should not be logged into your instance. It should show your local computer. If you are using a transfer program, it needs to be installed on your local machine, not your instance.
  3. If you are using the RStudio server from the AWS instance, you can transfer files between your local and your remote machine using the graphic interface of RStudio.

Downloading files in RStudio

We will follow the next five steps to download files with the RStudio interface.

  1. First, we select the file to download from the bottom right panel.

Download data with RStudio.

  2. Then, we choose “More” to display more actions for the selected file.

Download data with RStudio.

  3. Within the “More” menu, the “Export” button should become available.

Download data with RStudio.

  4. A pop-up window should be displayed on your screen where you can select the “Download” option.

Download data with RStudio.

  5. Your file should now be downloaded to your local computer.

Upload files to AWS in RStudio

Now that we have learned how to download files from the RStudio interface, we will learn the opposite action: uploading files from your local computer to your remote AWS machine.

  1. Choose the option ‘Upload’ in your RStudio interface.

Upload data with RStudio.

  2. After a pop-up window is displayed on your screen, select “Select file”.

Upload data with RStudio.

  3. A new window opens on your computer where you should choose the file to upload. Choose the file and click “Open”.

Upload data with RStudio.

  4. Finally, if the file is correct, click “OK”, and the upload will start.

  5. Now, you can view the new file in your RStudio interface.

Upload data with RStudio.

Transferring data between your local and virtual machine with scp

scp stands for ‘secure copy protocol’ and is a widely used UNIX tool for moving files between computers. The simplest way to use scp is to run it in your local terminal and use it to copy a single file. While scp <local-file-path> <AWS-instance-path> will upload a local file to your AWS instance, scp <AWS-instance-path> <local-file-path> will move a file from your remote AWS instance to your local computer. The general form of the scp command is the following:

BASH

$ scp <file you want to move, local or remote> <path to where I want to move it, local or remote>

Exercise 3: Downloading data with scp

Let us download the text file ~/data/untrimmed_fastq/scripted_bad_reads.txt from the remote machine to your local computer. Which of the following commands would download the file?
A)

BASH

$ scp local_file.txt dcuser@ip.address:/home/dcuser/

B)

BASH

$ scp dcuser@ip.address:/home/dcuser/dc_workshop/data/untrimmed_fastq/scripted_bad_reads.txt ~/Downloads

  A. False. This command would upload the file local_file.txt to the dcuser home directory on your AWS remote machine.
  B. True. This option downloads the scripted_bad_reads.txt file to your local ~/Downloads directory (make sure you substitute dcuser@ip.address with your remote login credentials).

Key Points

  • Scripts are a collection of commands executed together.
  • Scripts are executable text files.
  • Nano is a text editor.
  • In a terminal, scp transfers information to and from virtual and local computers.
  • The RStudio remote interface allows the transfer of information between virtual and local computers.

Content from Project Organization


Last updated on 2025-04-06 | Edit this page

Overview

Questions

  • How can I organize my file system for a new bioinformatics project?
  • How can I document my work?

Objectives

  • Create a file system for a bioinformatics project.
  • Explain what types of files should go in your docs, data, and results directories.
  • Use the history command and a text editor like nano to document your work on your project.

Getting your project started


Project organization is one of the most essential parts of a sequencing project, and yet, it is often overlooked amidst the excitement of getting a first look at new data. Of course, while it is best to get yourself organized before you even begin your analyses, it is never too late to start.

You should approach your sequencing project similarly to how you do a biological experiment, and this ideally begins with experimental design. We’re going to assume that you’ve already designed a beautiful sequencing experiment to address your biological question, collected appropriate samples, and have enough statistical power to answer the questions you’re interested in asking. These steps are all crucial but beyond the scope of our course. For all of those steps (collecting specimens, extracting DNA, prepping your samples) you’ve likely kept a lab notebook that details how and why you did each step. However, the process of documentation doesn’t stop at the sequencer!

Genomics projects can quickly accumulate hundreds of files across tens of folders. Every computational analysis you perform throughout your project will create many files, which can become especially problematic when you inevitably want to rerun some of those analyses. For instance, you might be months into your project when you suddenly need to recall the PCR conditions you used to create your sequencing library.

Other questions might arise along the way:

  • What were your best alignment results?
  • Which folder were they in: Analysis1, AnalysisRedone, or AnalysisRedone2?
  • Which quality cutoff did you use?
  • What version of a given program did you implement your analysis in?

Good documentation is vital in avoiding this issue, and luckily enough, recording your computational experiments is even easier than recording lab data. Copy/Paste will become your best friend, sensible file names will make your analysis understandable by you and your collaborators, and writing the methods section for your next paper will be easy! Remember that in any project of yours, it’s worthwhile to consider a future version of yourself as an entirely separate collaborator. The better your documentation is, the more this ‘collaborator’ will feel indebted to you!

With this in mind, let’s look at the best practices for documenting your genomics project. Your future self will thank you.

In this exercise, we will set up a file system for the project we will be working on during this workshop.

We will start by creating a directory that we can use for the rest of the workshop. First, navigate to your home directory. Then, confirm that you are in the correct directory using the pwd command.

BASH

$ cd ~
$ pwd

You should see the output:

OUTPUT

/home/dcuser  

Tip

If you aren’t in your home directory, the easiest way to get there is to enter the command cd, which always returns you to home.

Exercise 1: Making an organized file system

Use the mkdir command to make the following directories:

  • workshop
  • workshop/docs
  • workshop/data
  • workshop/results

BASH

$ mkdir workshop
$ mkdir workshop/docs
$ mkdir workshop/data
$ mkdir workshop/results
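The four mkdir commands above can also be collapsed into a single line. This is a sketch using bash brace expansion together with mkdir's -p option, which creates any missing parent directories:

```shell
# bash expands workshop/{docs,data,results} into the three subdirectory
# paths; -p creates the workshop parent directory as needed.
mkdir -p workshop/{docs,data,results}
```

Note that brace expansion is a bash feature and may not work in other shells.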

Use ls -R to verify that you have created these directories. The -R option for ls stands for recursive. This option causes ls to list the contents of the directory and of each subdirectory within it.

BASH

$ ls -R workshop

You should see the following output:

OUTPUT

workshop/:
data  docs  results

workshop/data:

workshop/docs:

workshop/results: 

Organizing your files


Before beginning any analysis, it’s important to save a copy of your raw data. The raw data should never be changed. Regardless of how sure you are that you want to carry out a particular data cleaning step, there’s always the chance that you’ll change your mind later or that there will be an error in carrying out the data cleaning and you’ll need to go back a step in the process. Having a raw copy of your data that you never modify guarantees that you will always be able to start over if something goes wrong with your analysis. When starting any analysis, you can make a copy of your raw data file and do your manipulations on that file, rather than the raw version. We learned in a previous episode how to prevent overwriting our raw data files by setting restrictive file permissions.

You can store any results that are generated from your analysis in the results folder. This guarantees that you won’t confuse results files and data files in six months or two years when you are looking back through your files in preparation for publishing your study.

The docs folder is the place to store any written analysis of your results, notes about how your analyses were carried out, and documents related to your eventual publication.

Documenting your activity on the project


When carrying out wet-lab analyses, most scientists work from a written protocol and keep a hard copy of written notes in their lab notebook, including any things they did differently from the written protocol. This detailed record-keeping process is just as important when doing computational analyses. Luckily, it’s even easier to record the steps you’ve carried out computationally than it is when working at the bench.

The history command is a convenient way to document all the commands you have used while analyzing and manipulating your project files. Let’s document the work we have done on our project so far.

View the commands that you have used so far during this session using history:

BASH

$ history

The history likely contains many more commands than you have used for the current project. Let’s view the last several commands that focus on just what we need for this project.

View the last n lines of your history (where n = approximately the last few lines you think relevant). For our example, we will use the last 7:

BASH

$ history | tail -n 7

Exercise 2: Creating a record of the used commands

Using your knowledge of the shell, use the append redirect >> to create a file called workshop_log_XXXX_XX_XX.sh (use the four-digit year, two-digit month, and two-digit day, e.g., workshop_log_2021_03_25.sh).

BASH

$ history | tail -n 8 >> workshop_log_2021_03_25.sh

Note that we used the last 8 lines here as an example; the number of lines you need may vary.

You may have noticed that your history contains the history command itself. To remove this redundancy from our log, let’s use the nano text editor to fix the file:

BASH

$ nano workshop_log_2021_03_25.sh

(Remember to replace the 2021_03_25 with your workshop date.)

From the nano screen, you can use your cursor to navigate, type, and delete any redundant lines.

Add a date line and comment to the line where you have created the directory, for example:

BASH

# 2021_03_25  
# Created sample directories for the Data Carpentry workshop  

bash treats the # character as a comment character. Any text on a line after a # is ignored by bash when evaluating the text as code.
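As a quick illustration of how bash handles comments (safe to run anywhere):

```shell
# This whole line is a comment and does nothing.
echo "this line runs"   # everything after the '#' is ignored by bash
```

Running this prints only "this line runs"; neither comment has any effect.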

Next, remove any lines of the history that are not relevant by navigating to those lines and using your delete key. Save your file and close nano.

Your file should look something like this:

OUTPUT

# 2021_03_25
# Created sample directories for the Data Carpentry workshop

mkdir workshop
mkdir workshop/docs
mkdir workshop/data
mkdir workshop/results

If you keep this file up to date, you can use it to re-do your work on your project if something happens to your results files. To demonstrate how this works, first delete your workshop directory and all of its subdirectories. Look at your directory contents to verify the directory is gone.

BASH

$ rm -r workshop
$ ls

OUTPUT

dc_workshop   R   workshop_log_2021_03_25.sh

Then run your workshop log file as a bash script. You should see the workshop directory and all of its subdirectories reappear.

BASH

$ bash workshop_log_2021_03_25.sh
$ ls

OUTPUT

dc_workshop   R   workshop   workshop_log_2021_03_25.sh

It’s important that we keep our workshop log file outside of our workshop directory if we want to use it to recreate our work. It’s also important for us to keep it up to date by regularly updating with the commands that we used to generate our results files.

Congratulations! You’ve finished your introduction to using the shell for metagenomics projects. You now know how to navigate your file system, create, copy, move, and remove files and directories, and automate repetitive tasks using scripts and wildcards. With this solid foundation, you’re ready to move on to apply all of these new skills to carrying out more sophisticated bioinformatics analysis work. Don’t worry if everything doesn’t feel perfectly comfortable yet. We’re going to have many more opportunities for practice as we move forward on our bioinformatics journey!

References


A Quick Guide to Organizing Computational Biology Projects

Key Points

  • Spend the time to organize your file system when you start a new project. Your future self will thank you!
  • Always save a write-protected copy of your raw data.