slurm submission log: 2024-05-30 23:53:13.187539
created following sbatch script:

###############################

#!/bin/bash

#SBATCH --account=nlp
#SBATCH --cpus-per-task=16
#SBATCH --dependency=afterok:7673212
#SBATCH --gres=gpu:1
#SBATCH --job-name=tthrush-job-3043295
#SBATCH --mem=60G
#SBATCH --nodelist=sphinx1
#SBATCH --open-mode=append
#SBATCH --output=/juice5/scr5/tthrush/pretraining-coreset-selection/llm_pretraining/paper_writeup_tests/ordinal_ph_proj/llms/pythia-70m_xnli_es_1/eval_job_output.txt
#SBATCH --partition=sphinx
#SBATCH --time=14-0

# activate your desired anaconda environment
. /nlp/scr/tthrush/miniconda3/envs/pretraining-coreset-selection/etc/profile.d/conda.sh ; conda activate pretraining-coreset-selection

# cd to working directory
cd .

# launch commands
srun --unbuffered run_as_child_processes 'lm_eval --model hf --model_args pretrained=/juice5/scr5/tthrush/pretraining-coreset-selection/llm_pretraining/paper_writeup_tests/ordinal_ph_proj/llms/pythia-70m_xnli_es_1,revision=main,dtype=float16,trust_remote_code=True --tasks piqa,arc_easy,xnli_en,xnli_fr,xnli_de,xnli_es,sciq,lambada --device cuda --output_path /juice5/scr5/tthrush/pretraining-coreset-selection/llm_pretraining/paper_writeup_tests/ordinal_ph_proj/llms/pythia-70m_xnli_es_1/perf'

###############################
submission to slurm complete!

###############################
slurm submission output

Submitted batch job 7673213

###############################