I have an example script defined as:

#!/usr/bin/env python

def test(path):
    print(path)

test(snakemake.input[0])
config.yml looks like this:
executor: slurm
jobs: 100
samples: "config/samples.csv"
and a Snakefile like this:
import os
import glob
import pandas as pd

configfile: "config/config.yml"

rule get_samples:
    resources:
        mem_mb = 512
    threads: 1
    input: config["samples"]
    output: "out/sampleFASTQs.csv"
    shell: "scripts/getFASTQs.py"
When I run

snakemake --cores 1

I get the following error:
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job stats:
job count min threads max threads
----------- ------- ------------- -------------
get_samples 1 1 1
total 1 1 1
Select jobs to execute...
[Wed May 15 05:36:05 2024]
rule get_samples:
input: config/samples.csv
output: out/sampleFASTQs.csv
jobid: 0
reason: Missing output files: out/sampleFASTQs.csv
resources: tmpdir=/tmp, mem_mb=512, mem_mib=489
Traceback (most recent call last):
File "[redacted]/tools/snakemake/scrnaseq/workflow/scripts/getFASTQs.py", line 6, in <module>
test(snakemake.input[0])
NameError: name 'snakemake' is not defined
[Wed May 15 05:36:05 2024]
Error in rule get_samples:
jobid: 0
input: config/samples.csv
output: out/sampleFASTQs.csv
shell:
scripts/getFASTQs.py
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: .snakemake/log/2024-05-15T053604.329295.snakemake.log
I can hardly find this error mentioned anywhere, and I don't understand why the snakemake object isn't automatically defined in the Python environment.
If you use the run directive (instead of shell), the code block inherits everything defined in the Snakefile. If you use the script directive, the script inherits the snakemake object. With shell, the script is launched as an ordinary external command, so snakemake is never injected into it — hence the NameError. More information can be found in the docs.
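A minimal sketch of the corrected rule, keeping the resources, input, and output from the question but swapping shell for script so that Snakemake itself executes the Python file and injects the snakemake object:

```
rule get_samples:
    resources:
        mem_mb = 512
    threads: 1
    input: config["samples"]
    output: "out/sampleFASTQs.csv"
    # script (not shell) makes the `snakemake` object available
    # inside getFASTQs.py, so snakemake.input[0] resolves.
    script: "scripts/getFASTQs.py"
```

Note that script paths are interpreted relative to the Snakefile, so "scripts/getFASTQs.py" assumes the script lives next to the workflow as in the traceback above; adjust the path to your layout. The #!/usr/bin/env python shebang is also unnecessary with script, since Snakemake runs the file itself.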