5 changes: 4 additions & 1 deletion .travis.yml
Original file line number Diff line number Diff line change
@@ -26,10 +26,13 @@ install:
- sudo apt-get install npm && npm install -g markdownlint-cli # For documentation linting

script:
# Test the main code in this repo
- pytest --cov=nf_core .
- markdownlint . -c ${TRAVIS_BUILD_DIR}/.github/markdownlint.yml
# Test the pipeline template code
- nf-core create -n testpipeline -d "This pipeline is for testing" -a "Testing McTestface"
- nf-core lint nf-core-testpipeline
- markdownlint . -c ${TRAVIS_BUILD_DIR}/.github/markdownlint.yml
- markdownlint nf-core-testpipeline -c nf-core-testpipeline/.github/markdownlint.yml

after_success:
- codecov --rcfile=${TRAVIS_BUILD_DIR}/.github/coveragerc
7 changes: 5 additions & 2 deletions CHANGELOG.md
@@ -2,8 +2,11 @@

## v1.6dev

_..nothing yet.._
#### Syncing Functionality
#### Template pipeline
* Fixed markdown linting
* Tools CI testing now runs markdown lint on compiled template pipeline

#### Tools helper code
* Drop [nf-core/rnaseq](https://github.com/nf-core/rnaseq) from `blacklist.json` to make template sync available

## [v1.5](https://github.com/nf-core/tools/releases/tag/1.5) - 2019-03-13 Iron Shark
18 changes: 9 additions & 9 deletions README.md
@@ -50,7 +50,7 @@ The command `nf-core list` shows all available nf-core pipelines along with thei

An example of the output from the command is as follows:

```bash
```txt
$ nf-core list

,--./,-.
@@ -75,7 +75,7 @@ nf-core/vipr dev - - No

To narrow down the list, supply one or more additional keywords to filter the pipelines based on matches in titles, descriptions and topics:

```bash
```txt
nf-core list rna rna-seq

,--./,-.
@@ -103,7 +103,7 @@ To make this process easier and ensure accurate retrieval of correctly versioned

By default, the pipeline will just download the pipeline code. If you specify the flag `--singularity`, it will also download any singularity image files that are required.

```bash
```txt
$ nf-core download methylseq --singularity

,--./,-.
@@ -124,7 +124,7 @@ INFO: Downloading 1 singularity container
nf-core-methylseq-1.0.simg [762.28MB] [####################################] 780573/780572
```

```bash
```txt
$ tree -L 2 nf-core-methylseq-1.0/

nf-core-methylseq-1.0/
@@ -150,7 +150,7 @@ nf-core-methylseq-1.0/
## Pipeline software licences
Sometimes it's useful to see the software licences of the tools used in a pipeline. You can use the `licences` subcommand to fetch and print the software licence from each conda / PyPI package used in an nf-core pipeline.

```bash
```txt
$ nf-core licences rnaseq

,--./,-.
@@ -191,7 +191,7 @@ With a given pipeline name, description and author, it makes a starter pipeline
After creating the files, the command initialises the folder as a git repository and makes an initial commit. This first "vanilla" commit, which is identical to the output from the templating tool, is important, as it allows us to keep your pipeline in sync with the base template in the future.
See the [nf-core syncing docs](http://nf-co.re/sync) for more information.

```bash
```txt
$ nf-core create

,--./,-.
@@ -217,7 +217,7 @@ INFO: Done. Remember to add a remote and push to GitHub:
Once you have run the command, create a new empty repository on GitHub under your username (not the `nf-core` organisation, yet).
On your computer, add this repository as a git remote and push to it:

```bash
```txt
git remote add origin https://github.com/ewels/nf-core-nextbigthing.git
git push --set-upstream origin master
```
@@ -235,7 +235,7 @@ This is the same test that is used on the automated continuous integration tests

For example, the current version looks something like this:

```bash
```txt
$ cd path/to/my_pipeline
$ nf-core lint .

@@ -268,7 +268,7 @@ The command uses results from the linting process, so will only work with workfl

Usage is `nf-core bump-version <pipeline_dir> <new_version>`, eg:

```bash
```txt
$ cd path/to/my_pipeline
$ nf-core bump-version . 1.0

@@ -4,3 +4,4 @@ data/
results/
.DS_Store
tests/test_data
*.pyc
@@ -8,6 +8,8 @@
*/

params {
config_profile_name = 'Test profile'
config_profile_description = 'Minimal test dataset to check pipeline function'
// Limit resources so that this can run on Travis
max_cpus = 2
max_memory = 6.GB
@@ -39,7 +39,8 @@ Multiple reference index types are held together with consistent structure for m
We have put a copy of iGenomes up onto AWS S3 hosting and this pipeline is configured to use this by default.
The hosting fees for AWS iGenomes are currently kindly funded by a grant from Amazon.
The pipeline will automatically download the required reference files when you run the pipeline.
For more information about the AWS iGenomes, see https://ewels.github.io/AWS-iGenomes/

For more information about AWS iGenomes, see [https://ewels.github.io/AWS-iGenomes/](https://ewels.github.io/AWS-iGenomes/).

Downloading the files takes time and bandwidth, so we recommend making a local copy of the iGenomes resource.
Once downloaded, you can customise the variable `params.igenomes_base` in your custom configuration file to point to the reference location.
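As a minimal sketch of that customisation (the local path here is a hypothetical example, not one from the template):

```groovy
// custom.config -- hypothetical local mirror of the iGenomes resource
params {
  // Point the pipeline at the local copy instead of the default AWS S3 hosting
  igenomes_base = '/data/references/igenomes'
}
```

Such a file can then be passed to Nextflow at runtime with `-c custom.config`.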
@@ -78,7 +78,9 @@ Be warned of two important points about this default configuration:
### Docker
First, install docker on your system: [Docker Installation Instructions](https://docs.docker.com/engine/installation/)

Then, running the pipeline with the option `-profile docker` tells Nextflow to enable Docker for this run. An image containing all of the software requirements will be automatically fetched and used from dockerhub (https://hub.docker.com/r/{{ cookiecutter.name_docker }}).
Then, running the pipeline with the option `-profile docker` tells Nextflow to enable Docker for this run.
An image containing all of the software requirements will be automatically fetched and used from dockerhub
([https://hub.docker.com/r/{{ cookiecutter.name_docker }}](https://hub.docker.com/r/{{ cookiecutter.name_docker }})).
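As a rough sketch of what a `docker` profile typically enables under the hood (an illustrative configuration, not necessarily the template's exact one):

```groovy
// Hypothetical docker profile scope in nextflow.config
profiles {
  docker {
    // Tell Nextflow to execute each process inside a Docker container
    docker.enabled = true
    // Run containers as the host user so output files are not owned by root
    docker.runOptions = '-u $(id -u):$(id -g)'
  }
}
```

Running the pipeline with `-profile docker` activates this scope.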

### Singularity
If you're not able to use Docker then [Singularity](http://singularity.lbl.gov/) is a great alternative.
@@ -38,4 +38,4 @@ The pipeline has special steps which allow the software versions used to be repo
* `Project_multiqc_data/`
* Directory containing parsed statistics from the different tools used in the pipeline

For more information about how to use MultiQC reports, see http://multiqc.info
For more information about how to use MultiQC reports, see [http://multiqc.info](http://multiqc.info)
@@ -126,6 +126,7 @@ if(params.readPaths){
// Header log info
log.info nfcoreHeader()
def summary = [:]
if(workflow.revision) summary['Pipeline Release'] = workflow.revision
summary['Run Name'] = custom_runName ?: workflow.runName
// TODO nf-core: Report custom parameters here
summary['Reads'] = params.reads
@@ -360,7 +361,7 @@ workflow.onComplete {
c_green = params.monochrome_logs ? '' : "\033[0;32m";
c_red = params.monochrome_logs ? '' : "\033[0;31m";
if(workflow.success){
log.info "${c_purple}[{{ cookiecutter.name }}]${c_green} Pipeline complete${c_reset}"
log.info "${c_purple}[{{ cookiecutter.name }}]${c_green} Pipeline completed successfully${c_reset}"
} else {
checkHostname()
log.info "${c_purple}[{{ cookiecutter.name }}]${c_red} Pipeline completed with errors${c_reset}"
@@ -16,15 +16,14 @@ params {

// Boilerplate options
name = false
multiqc_config = "$baseDir/conf/multiqc_config.yaml"
multiqc_config = "$baseDir/assets/multiqc_config.yaml"
email = false
maxMultiqcEmailFileSize = 25.MB
plaintext_email = false
monochrome_logs = false
help = false
igenomes_base = "./iGenomes"
tracedir = "${params.outdir}/pipeline_info"
clusterOptions = false
awsqueue = false
awsregion = 'eu-west-1'
igenomesIgnore = false