
This list is provided to help users of the TORQUE resource manager share questions, suggestions, issues and ideas.
21 Jun 05:17 2016
PBS_Server 6.0.1 segfault by 'qdel'
sudo <sudo at sstc.co.jp>
03:17:39 GMT
I'm trying to use Torque 6.0.1 with NUMA support.
Most Torque functions run as expected.
The only thing I'm confused about is that pbs_server segfaults on a user's qdel.
The admin (root) cannot qdel the jobs either (pbs_server also segfaults).
Only 'qdel -p <jid>' can kill the jobs (and only as root).
I recreated the database with torque.setup, but pbs_server still segfaults on 'qdel'.
Have any similar issues been reported for Torque v6.0.1 before?
I'd appreciate any comments/suggestions/workarounds.
------------------------------------------------------------------------------
Here is my experience and environment.
- A user's qdel makes pbs_server segfault.
[]$ qstat -a
(header and the Job ID/Username/Queue columns were lost in the archive; recovered fields, snapshot at 11:12:01 JST)
Jobname    SessID    Time       S
himeNO7    13059     01:00:00   C
himeNO7    14544     01:00:00   R
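For anyone hitting the same crash, a core file plus a backtrace usually makes a report like this actionable. A minimal sketch (the binary path and core location are illustrative defaults, not taken from the post):

ulimit -c unlimited                 # allow pbs_server to dump core in this shell
/usr/local/sbin/pbs_server          # start the server, then reproduce the crash with qdel
gdb /usr/local/sbin/pbs_server ./core* -batch -ex bt -ex quit   # print the stack from the core (adjust the path to wherever the core lands)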
16 Jun 17:59 2016
Boost assertion error when starting pbs_server
Skip Montanaro <skip at >
15:59:34 GMT
I've been away from Torque and Maui for about five years. I was recently asked to get going with them again for a new user. It appears that in the intervening time the Boost libraries have insinuated themselves into Torque. I am building and running on a couple openSuSE 12.2 systems, and installed boost-devel (1.49) to build Torque. I also removed everything from PATH other than the minimal required, to keep from polluting the build with locally installed stuff. From config.log:
PATH: /usr/X11R6/bin
PATH: /usr/bin
PATH: /bin
PATH: /usr/sbin
When I try to start pbs_server, I get a Boost assertion error:
% sudo /opt/local/sbin/pbs_server
skipm's password:
pbs_server: /usr/include/boost/unordered/detail/table.hpp:387: size_t boost::unordered::detail::table<Types>::min_buckets_for_size(size_t) const [with Types = boost::unordered::detail::map<std::allocator<std::pair<const std::basic_string<char, std::char_traits<char>, std::allocator<char> >, int> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, boost::hash<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >]: Assertion `this->mlf_ != 0' failed.
Not being a Boost user, I haven't the slightest idea what that means. I ran things from within gdb and got this backtrace:
#0  0x00007ffff552ad25 in raise () from /lib64/libc.so.6
#1  0x00007ffff552c1a8 in abort () from /lib64/libc.so.6
#2  0x00007ffff5523c22 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007ffff5523cd2 in __assert_fail () from /lib64/libc.so.6
#4  0x6f92 in boost::unordered::detail::table<boost::unordered::detail::map<std::allocator<std::pair<std::string const, int> >, std::string, int, boost::hash<std::string>, std::equal_to<std::string> > >::min_buckets_for_size (this=0x7ffff7dda368 <cache+72>, size=1)
    at /usr/include/boost/unordered/detail/table.hpp:387
#5  0x6405 in boost::unordered::detail::table<boost::unordered::detail::map<std::allocator<std::pair<std::string const, int> >, std::string, int, boost::hash<std::string>, std::equal_to<std::string> > >::reserve_for_insert (this=0x7ffff7dda368 <cache+72>, size=1) at /usr/include/boost/unordered/detail/table.hpp:643
#6  0x54dd in boost::unordered::detail::table_impl<boost::unordered::detail::map<std::allocator<std::pair<std::string const, int> >, std::string, int, boost::hash<std::string>, std::equal_to<std::string> > >::operator[] (this=0x7ffff7dda368 <cache+72>, k=...) at /usr/include/boost/unordered/detail/unique.hpp:351
#7  0x4731 in boost::unordered::unordered_map<std::string, int, boost::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, int> > >::operator[] (
    this=0x7ffff7dda368 <cache+72>, k=...) at /usr/include/boost/unordered/unordered_map.hpp:1192
#8  0x00007ffff7510d8d in container::item_container<int>::find (this=0x7ffff7dda320 <cache>, id=...)
    at ../../../src/include/container.hpp:389
#9  0x00007ffff751082d in addrcache::getFromCache (this=0x7ffff7dda320 <cache>,
    hostName=0xb53b20 <server_host> "blade") at ../Libnet/net_cache.c:354
#10 0x00007ffff750f6f3 in get_cached_fullhostname (hostname=0xb53b20 <server_host> "blade", sai=0x0)
    at ../Libnet/net_cache.c:464
#11 0x00007ffff7505eb6 in get_fullhostname (shortname=0xb53b20 <server_host> "blade",
    namebuf=0xb53b20 <server_host> "blade", bufsize=1024, EMsg=0x7fffffffe610 "")
    at ../Libnet/get_hostname.c:153
#12 0x33b2 in main (argc=1, argv=0x7fffffffeb28) at pbsd_main.c:1670
I see that in get_fullhostname it shows just "blade". For some reason, many years ago, our admins concluded that fully qualified hostnames were a bad idea. When I built Torque, I had configured --with-server-home=/opt/local/torque, so I created a server_name file in that directory with the true fully qualified domain name, but it still craps out with the same backtrace.
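A few quick checks on the name-resolution path that get_fullhostname walks, as a sketch (/opt/local/torque is the --with-server-home value mentioned above):

cat /opt/local/torque/server_name                      # should contain the server's fully qualified name
getent hosts "$(cat /opt/local/torque/server_name)"    # the name must resolve on this host
hostname -f                                            # what the node itself reports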
Setting up and starting the trqauthd.service seems to yield similar results:
# systemctl status trqauthd.service
trqauthd.service - TORQUE trqauthd daemon
   Loaded: loaded (/usr/lib/systemd/system/trqauthd.service; enabled)
   Active: failed (Result: core-dump) since Thu, 16 Jun :01 -0500; 1min 4s ago
  Process: 18810 ExecStart=/opt/local/sbin/trqauthd (code=dumped, signal=ABRT)
   CGroup: name=systemd:/system/trqauthd.service
Jun 16 10:57:01 blade trqauthd[18810]: trqauthd: /usr/include/boost/unorder...d.
Any suggestions about how to further debug this error would be appreciated.
Skip Montanaro
_______________________________________________
torqueusers mailing list
torqueusers at supercluster.org
13 Jun 19:32 2016
Getting "#PBS -M" user supplied e-mail to an epilogue script?
Nicholas Lindberg <nlindberg at mkei.org>
17:32:37 GMT
I'm trying to do something that I feel should be fairly easy, but can't figure out a straightforward answer to, which is this: how do I retrieve a user's supplied e-mail address (given with "#PBS -M")
from within an epilogue script?
As far as I know, the only parameters passed to the epilogue script are the ones below (which don't include the user-supplied e-mail). Does this mean that I have to do something like have a prologue script (or
some kind of submit filter) write out an environment variable containing the user-supplied e-mail address after doing some perl/sed magic to grep it out, and store it in the environment,
so that I can then reference said environment variable inside my epilogue?
Seems like way too much work, but it also seems like it's the only way. If somebody has found another way, I'm all ears.
PARAMETERS PASSED TO EPILOGUE (example from docs):
echo "Epilogue Args:"
echo "Job ID: $1"
echo "User ID: $2"
echo "Group ID: $3"
echo "Job Name: $4"
echo "Session ID: $5"
echo "Resource List: $6"
echo "Resources Used: $7"
echo "Queue Name: $8"
echo "Account String: $9"
Epilogue Args:
Job ID: 13724.node01
User ID: user1
Group ID: user1
Job Name: script.sh
Session ID: 28244
Resource List: neednodes=node01,nodes=1,walltime=00:01:00
Resources Used: cput=00:00:00,mem=0kb,vmem=0kb,walltime=00:00:07
Queue Name: batch
Account String:
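For what it's worth, one workaround is to read the job's Mail_Users attribute from qstat -f inside the epilogue. A sketch, not an official Torque feature: it assumes the server still knows the job at epilogue time (e.g. keep_completed is set), and "mail_to" is just an illustrative variable name:

#!/bin/sh
jobid="$1"                                   # the epilogue's first argument is the job id
mail_to=$(qstat -f "$jobid" 2>/dev/null \
          | sed -n 's/^[[:space:]]*Mail_Users = //p')
echo "Job $jobid finished; user-supplied address: ${mail_to:-<none>}"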
Nick Lindberg
Director of Engineering
Milwaukee Institute
414-269-8332 (O)
608-215-3508 (M)
_______________________________________________
torqueusers mailing list
torqueusers at supercluster.org
13 Jun 15:03 2016
jobstart vs prologue
Stijn De Weirdt <stijn.deweirdt at ugent.be>
13:03:30 GMT
Does anyone know (or can point me to the documentation on) whether the pbs_mom
node_check_interval value 'jobstart' runs the health check before or after the
prologue (and similarly for 'jobend' vs the epilogue)?
many thanks,
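For context, the mom_priv/config fragment this question is about looks roughly like the following sketch; the health-check script path is illustrative, and the jobstart/jobend tokens are the ones being asked about:

$node_check_script    /var/spool/torque/mom_priv/healthcheck.sh
$node_check_interval  10,jobstart,jobend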
8 Jun 20:23 2016
PBS Equivalent to SGE's -sync?
Gabriel A. Devenyi <gdevenyi at >
18:23:35 GMT
Is there an equivalent method of blocking a qsub until the submitted job is complete, similar to functionality of SGE's -sync?
-sync y causes qsub to wait for the job to complete before exiting. If the job completes successfully, qsub's exit code will be that of the completed job. If the job fails to complete successfully, qsub will print out an error message indicating why the job failed and will have an exit code of 1. If qsub is interrupted, e.g. with CTRL-C, before the job completes, the job will be canceled.
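One common workaround is a polling wrapper around qsub and qstat. A rough sketch; wait_for_job, the 10-second interval, and the reliance on keep_completed for the exit code are illustrative assumptions, not Torque features:

wait_for_job() {
    jobid="$1"
    while qstat "$jobid" >/dev/null 2>&1 &&
          [ "$(qstat -f "$jobid" | sed -n 's/^[[:space:]]*job_state = //p')" != "C" ]; do
        sleep 10                      # poll until the job leaves the queue or reaches state C
    done
}

jobid=$(echo "sleep 30" | qsub)       # illustrative payload
wait_for_job "$jobid"
# exit_status is only visible if keep_completed keeps the finished job around
status=$(qstat -f "$jobid" 2>/dev/null | sed -n 's/^[[:space:]]*exit_status = //p')
exit "${status:-1}"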
--
Gabriel A. Devenyi B.Eng. Ph.D.
Research Computing Associate
Computational Brain Anatomy Laboratory
Cerebral Imaging Center
Douglas Mental Health University Institute
Affiliate, Department of Psychiatry
McGill University
t: 514.761.
e:
_______________________________________________
torqueusers mailing list
torqueusers at supercluster.org
7 Jun 06:53 2016
pbsnodes reports incorrect total_sockets/total_numa_nodes/total_cores/total_threads for some numanodes
Go Yoshimura <go-yoshimura at sstc.co.jp>
04:53:37 GMT
I have a question about Torque v6.0.1.
I have an 8-node cluster with 2 sockets / 28 cores per node.
I built Torque v6.0.1 with NUMA support, and all cluster nodes
already have hwloc 1.9 installed.
Most of the cluster nodes behave just fine, but pbsnodes reports
strange total_cores/total_threads values on a few nodes in this cluster.
I tried recreating the pbs_server database, restarting pbs_mom, and rebooting the entire cluster;
none of it helped.
Could you suggest what is wrong on this node (nc02 below)?
Where should I check the pbs_server/pbs_mom configuration?
Here is some information.
===========================================================================
0) torque configure
It was created by torque configure 6.0.1, which was
generated by GNU Autoconf 2.69.
Invocation command line was
$ ./configure --enable-numa-support --enable-cpuset --enable-cgroups
(e.g. ac_cv_env_HWLOC_LIBS_value='-L/usr/local/lib -lhwloc',
HWLOC_LIBS='-L/usr/local/lib -lhwloc', ... from config.log)
1) HWLOC information on the cluster (nc01~nc08)
# pdsh -w nc0[1-8] hwloc-info --version | sort
nc01: hwloc-info 1.9
nc02: hwloc-info 1.9
nc03: hwloc-info 1.9
nc04: hwloc-info 1.9
nc05: hwloc-info 1.9
nc06: hwloc-info 1.9
nc07: hwloc-info 1.9
nc08: hwloc-info 1.9
# pdsh -w nc0[1-8] "hwloc-info | grep PU" | sort
nc01: depth 8: 28 PU (type #6)
nc02: depth 8: 28 PU (type #6)
nc03: depth 8: 28 PU (type #6)
nc04: depth 8: 28 PU (type #6)
nc05: depth 8: 28 PU (type #6)
nc06: depth 8: 28 PU (type #6)
nc07: depth 8: 28 PU (type #6)
nc08: depth 8: 28 PU (type #6)
2) PBS_SERVER: server_priv/nodes file
[root at fs9 ~]# cat /var/spool/torque/server_priv/nodes
nc01 np=28 num_node_boards=2
nc02 np=28 num_node_boards=2
nc03 np=28 num_node_boards=2
nc04 np=28 num_node_boards=2
nc05 np=28 num_node_boards=2
nc06 np=28 num_node_boards=2
nc07 np=28 num_node_boards=2
nc08 np=28 num_node_boards=2
3) PBS_MOM: mom.layout on the cluster nodes (all the same)
# cat /var/spool/torque/mom_priv/mom.layout
nodes=0 cpus=0-13 mems=0
nodes=1 cpus=14-27 mems=1
4) Strange node 'nc02-0'
It seems the MOM recognizes ncpus=14 for both nc02-0 and nc02-1, which is good.
But it reports "total_sockets = 1, total_cores = 4" on nc02-0.
They are "total_sockets = 2, total_cores = 28" on the other NUMA nodes.
[root at fs9 ~]# pbsnodes nc02
nc02-0
state = free
power_state = Running
ntype = cluster
status = rectime=,macaddr=1c:b7:2c:14:56:c7,cpuclock=OnDemand:1200MHz,varattr=,jobs=,state=free,netload=,gres=,loadave=0.00,ncpus=14,physmem=kb,availmem=kb,totmem=
kb,idletime=0,nusers=0,nsessions=0,uname=Linux nc02 2.6.32-573.el6.x86_64 #1 SMP Thu Jul
23 15:44:03 UTC ,opsys=linux
mom_service_port = 15002
mom_manager_port = 15003
total_sockets = 1
total_numa_nodes = 1
total_cores = 4
total_threads = 4
dedicated_sockets = 0
dedicated_numa_nodes = 0
dedicated_cores = 0
dedicated_threads = 0
nc02-1
state = free
power_state = Running
ntype = cluster
status = rectime=,macaddr=1c:b7:2c:14:56:c7,cpuclock=OnDemand:1200MHz,varattr=,jobs=,state=free,netload=,gres=,loadave=0.00,ncpus=14,physmem=kb,availmem=kb,totmem=
kb,idletime=0,nusers=0,nsessions=0,uname=Linux nc02 2.6.32-573.el6.x86_64 #1 SMP Thu Jul
23 15:44:03 UTC ,opsys=linux
mom_service_port = 15002
mom_manager_port = 15003
total_sockets = 2
total_numa_nodes = 2
total_cores = 28
total_threads = 28
dedicated_sockets = 0
dedicated_numa_nodes = 0
dedicated_cores = 0
dedicated_threads = 14
5) Healthy node 'nc08' (for example)
# pbsnodes nc08
nc08-0
state = free
power_state = Running
ntype = cluster
status = rectime=,macaddr=14:dd:a9:24:2f:8d,cpuclock=OnDemand:1200MHz,varattr=,jobs=,state=free,netload=,gres=,loadave=0.00,ncpus=14,physmem=kb,availmem=kb,totmem=
kb,idletime=0,nusers=0,nsessions=0,uname=Linux nc08 2.6.32-573.el6.x86_64 #1 SMP Thu Jul
23 15:44:03 UTC ,opsys=linux
mom_service_port = 15002
mom_manager_port = 15003
total_sockets = 2
total_numa_nodes = 2
total_cores = 28
total_threads = 28
dedicated_sockets = 0
dedicated_numa_nodes = 0
dedicated_cores = 0
dedicated_threads = 0
nc08-1
state = free
power_state = Running
ntype = cluster
status = rectime=,macaddr=14:dd:a9:24:2f:8d,cpuclock=OnDemand:1200MHz,varattr=,jobs=,state=free,netload=,gres=,loadave=0.00,ncpus=14,physmem=kb,availmem=kb,totmem=
kb,idletime=0,nusers=0,nsessions=0,uname=Linux nc08 2.6.32-573.el6.x86_64 #1 SMP Thu Jul
23 15:44:03 UTC ,opsys=linux
mom_service_port = 15002
mom_manager_port = 15003
total_sockets = 2
total_numa_nodes = 2
total_cores = 28
total_threads = 28
dedicated_sockets = 0
dedicated_numa_nodes = 0
dedicated_cores = 0
dedicated_threads = 0
- cgconfig is 'on' on cluster nodes.
- trqauth is 'on'
- pbs_mom is 'on'
- CentOS6.7
- Kernel Linux nc02 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC
x86_64 x86_64 GNU/Linux
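One quick way to cross-check hwloc's view of each node against what pbsnodes reports, as a sketch (the pdsh invocation and nc0[1-8] node names mirror the listing above):

pdsh -w nc0[1-8] "hwloc-info | grep -E 'Socket|Core|PU'" | sort
for n in nc01 nc02 nc03 nc04 nc05 nc06 nc07 nc08; do
    echo "== $n =="
    pbsnodes "$n" | grep -E 'total_(sockets|numa_nodes|cores|threads)'
done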
Go Yoshimura <go-yoshimura at sstc.co.jp>
Scalable Systems Co., Ltd.
Osaka Office
HONMACHI-COLLABO Bldg. 4F, 4-4-2 Kita-kyuhoji-machi, Chuo-ku, Osaka 541-0057 Japan
Tel: 81-6-
Tokyo Kojimachi Office
BUREX Kojimachi 11F, 3-5-2 Kojimachi, Chiyoda-ku, Tokyo 102-0083 Japan
Tel: 81-3- Fax: 81-3-
6 Jun 09:46 2016
recommended torque version for CentOS6.7
Go Yoshimura <go-yoshimura at sstc.co.jp>
07:46:17 GMT
Hi everyone!
- We are going to install Torque on CentOS 6.7.
- There are many versions of Torque; the latest is 6.0.1.
- Which version is recommended for CentOS 6.7?
- We want to enable NUMA support.
- For 6.0.1 we tried both
./configure --enable-numa-support --enable-cpuset --enable-cgroups
./configure --enable-numa-support
- Both of the above fail in configure because the system hwloc (1.5) is too old.
- "--enable-cgroups" is a new feature, but even with only "--enable-numa-support",
hwloc is required.
- The required version is hwloc 1.9 or later, as shown in ((6.0.1 failure)) below.
- hwloc 1.9 is much newer than what CentOS 6.7 or CentOS 7.2 provides.
- We can configure torque-5.1.3, as shown in ((5.1.3 success)) below.
torqueReleaseNotes5.1.3.pdf mentions that
RHEL 7.x / CentOS 7.x are newly supported.
((6.0.1 failure))
checking for HWLOC... no
configure: error: cpuset support requires the hwloc development package
cgroup support requires the hwloc development package
Requested 'hwloc >= 1.9' but version of hwloc is 1.5
This can be solved by configuring with --with-hwloc-path=<path>. This path
should be the path to the directory containing the lib/ and include/ directories
for your version of hwloc.
hwloc can be loaded by running the hwloc_install.sh script in the
contrib directory within this Torque distribution.
Another option is adding the directory containing 'hwloc.pc'
to the PKG_CONFIG_PATH environment variable.
If you have done these and still get this
error, try running ./autogen.sh and
then configuring again.
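For readers in the same spot, a sketch of building a private hwloc 1.9.x and pointing Torque's configure at it, per the --with-hwloc-path hint in the error above (the download URL, hwloc 1.9.1 version, and /opt/hwloc-1.9 prefix are illustrative):

wget https://www.open-mpi.org/software/hwloc/v1.9/downloads/hwloc-1.9.1.tar.gz
tar xzf hwloc-1.9.1.tar.gz
cd hwloc-1.9.1
./configure --prefix=/opt/hwloc-1.9
make && make install                # as root, or under a user-writable prefix
cd ../torque-6.0.1
./configure --enable-numa-support --enable-cpuset --enable-cgroups \
            --with-hwloc-path=/opt/hwloc-1.9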
((5.1.3 success))
Building components: server=yes mom=yes clients=yes
                     gui=no drmaa=no pam=no
PBS Machine type    :
Remote copy         : /usr/bin/scp -rpB
                    : /var/spool/torque
Default server      : cent6-07
Unix Domain sockets :
Linux cpusets       : disabled
                    : disabled
Authentication      : classic (pbs_iff)
Ready for 'make'.
Go Yoshimura <go-yoshimura at sstc.co.jp>
Scalable Systems Co., Ltd.
Osaka Office
HONMACHI-COLLABO Bldg. 4F, 4-4-2 Kita-kyuhoji-machi, Chuo-ku, Osaka 541-0057 Japan
Tel: 81-6-
Tokyo Kojimachi Office
BUREX Kojimachi 11F, 3-5-2 Kojimachi, Chiyoda-ku, Tokyo 102-0083 Japan
Tel: 81-3- Fax: 81-3-
2 Jun 17:00 2016
Unexpected array dependency behaviour
Christopher Wirth <Christopher.Wirth at cruk.manchester.ac.uk>
15:00:11 GMT
Hi everyone,
I'm having some problems with jobs using array dependencies, especially the
-W depend=afternotokarray:...
dependency. When I run a normal (non-array) job, dependencies work as expected:
-W depend=afterok:${jobID}
= job runs if dependency finishes ok, but is removed from the queue if dependency finishes not ok.
-W depend=afternotok:${jobID}
= job runs if dependency finishes not ok, but is removed from the queue if dependency finishes ok.
Array jobs are appended with []. Adding this in, and using array dependencies, I get unexpected functionality:
-W depend=afterokarray:${jobID}[]
= job runs if all elements of dependency finish ok, but stays in the queue in a perpetual hold state otherwise
-W depend=afternotokarray:${jobID}[]
= seemingly a perpetual hold, regardless of whether or not the dependency finishes ok.
Just in case I was getting the format for array dependencies wrong, I’ve tried several other formats. The
following were all tested based on being dependent on an array of short, simple jobs that basically just
sleep for a few seconds to make sure there is no race condition.
In our system, all job IDs are appended with '.headnode01' - this appears after []. Here, ${jobID} is the
numbers ONLY. If I included anything else, it is specified explicitly. These are the things that go wrong
for the various different formats I’ve tried after -W depend=...
afternotokarray:${jobID}[].headnode01
afternotokarray:${jobID}[]
afternotokarray:${jobID}[][]
afternotokarray:${jobID}[][1]
afternotokarray:${jobID}[][${dependencyArrayLength}]
afternotokarray:${jobID}[][${lastDependencyArrayIndex}]
afternotok:${jobID}[].headnode01
afternotok:${jobID}[]
afternotok:${jobID}[][]
afternotok:${jobID}[][1]
afternotok:${jobID}[][${dependencyArrayLength}]
afternotok:${jobID}[][${lastDependencyArrayIndex}]
= perpetual hold
afternotokarray:${jobID}
afternotokarray:${jobID}[1]
afternotokarray:${jobID}[].headnode01[1]
afternotokarray:${jobID}[].headnode01[]
afternotok:${jobID}
= qsub: submit error (Invalid Job Dependency)
afternotokarray:${jobID}[][0]
afternotokarray:${jobID}[][*]
= job runs immediately upon submission, without waiting to see whether dependency completes ok
Can anyone shed any light on this? Is it a bug, or just me doing something wrong!? I greatly appreciate any
help any of you can give.
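A small reproduction harness along the lines of what's described above, as a sketch (the sleep payloads and the 1-4 array range are illustrative):

dep=$(echo "sleep 5" | qsub -t 1-4)        # returns something like 1234[].headnode01
jobID=$(echo "$dep" | cut -d'[' -f1)       # numbers only, as described above
echo "sleep 1" | qsub -W depend=afterokarray:"${jobID}[]"
qstat -t                                   # watch whether the dependent job is released or stays on hold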
________________________________
This email is confidential and intended solely for the use of the person(s) ('the intended recipient') to
whom it was addressed. Any views or opinions presented are solely those of the author and do not
necessarily represent those of the Cancer Research UK Manchester Institute or the University of
Manchester. It may contain information that is privileged & confidential within the meaning of
applicable law. Accordingly any dissemination, distribution, copying, or other use of this message, or
any of its contents, by any person other than the intended recipient may constitute a breach of civil or
criminal law and is strictly prohibited. If you are NOT the intended recipient please contact the sender
and dispose of this e-mail as soon as possible.
30 May 12:07 2016
PBS_Server LOG_ERROR No such file or directory (2) in recov_attr, read2
Vincent Lefort <vincent.lefort at iter.org>
10:07:46 GMT
I try to start the Torque server and I get this message:
05/30/:57;0001;PBS_SSPBS_SLOG_ERROR::No such file
or directory (2) in recov_attr, read2
05/30/:57;0001;PBS_SSPBS_SLOG_ERROR::que_recov,
recov_attr[common] failed
The pbs_server daemon appears to be running:
00:00:40 /usr/local/sbin/pbs_server
-d /var/torque
but jobs are stuck.
Do you have any idea about the problem?
Thank you very much for the help.
30 May 11:05 2016
PBS_SLOG_ERROR::No such file or directory (2) in recov_attr, read2
Vincent Lefort <vincent.lefort at iter.org>
09:05:47 GMT
I try to restart the Torque daemon and I get this message:
05/30/:57;0001;PBS_SSPBS_SLOG_ERROR::No such file
or directory (2) in recov_attr, read2
05/30/:57;0001;PBS_SSPBS_SLOG_ERROR::que_recov,
recov_attr[common] failed
Torque appears to be launched:
00:00:04 /usr/local/sbin/pbs_server
-d /var/torque
but my job is still stuck and won't run.
Any idea about this error?
Thank you everyone for the help.
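When recov_attr / que_recov errors block startup, one recovery path people resort to is rebuilding the server database. A sketch, not an official fix: it resets server and queue definitions, so back everything up first; /var/torque matches the -d flag shown above:

cp -a /var/torque/server_priv /var/torque/server_priv.bak   # keep the old state around
pbs_server -d /var/torque -t create                         # recreate a clean serverdb
# then re-enter server/queue settings with qmgr, or restore selected files from the backup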
25 May 00:21 2016
Sending jobs to another cluster queue
Richard Young <Richard.Young at usq.edu.au>
22:21:20 GMT
I am wondering if somebody has come across this problem before and can supply some help. I have two clusters,
one running PBSPro and the other running Torque/Maui. I have set up a queue on the PBSPro cluster to send
jobs to the Torque/Maui cluster; however, the jobs just sit in the queue waiting to run.
If I run the command "echo hostname | qsub -q default@hpc-sunadmin-prd-t1" on the PBSPro cluster it comes
back with this error:
pbs_iff: error returned: 15031
pbs_iff: error returned: 15031
No Permission.
qsub: cannot connect to server hpc-sunadmin-prd-t1 (errno=15007)
Whereas if I run the same command on the Torque/Maui cluster it returns the correct job number. On the
Torque/Maui cluster I have added the PBSPro login and admin nodes to both the acl_hosts and submit_hosts
options, and this has made no difference. I have also tried adding acl_user entries, but this again has made
no difference.
Has anybody set up this type of system before and can supply some insight into fixing the problem?
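For reference, the Torque-side settings the post refers to look roughly like the following sketch; "pbspro-login" stands in for the PBS Pro login/admin host names, which are not given here:

qmgr -c "set server acl_host_enable = True"
qmgr -c "set server acl_hosts += pbspro-login"
qmgr -c "set server submit_hosts += pbspro-login"
qmgr -c "set server allow_node_submit = True"

Whether the PBS Pro client's qsub can authenticate against a Torque server at all is a separate question; the pbs_iff errors above suggest the auth handshake is failing before any ACLs are consulted.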
---------------------------------------------------------------------
Richard A. Young
ICT Services
HPC Systems Engineer
University of Southern Queensland
Toowoomba, Queensland 4350
Email: Richard.Young at usq.edu.au
Phone: (07)
---------------------------------------------------------------------
_____________________________________________________________
This email (including any attached files) is confidential and is for the intended recipient(s) only. If
you received this email by mistake, please, as a courtesy, tell the sender, then delete this email.
The views and opinions are the originator's and do not necessarily reflect those of the University of
Southern Queensland. Although all reasonable precautions were taken to ensure that this email
contained no viruses at the time it was sent we accept no liability for any losses arising from its receipt.
The University of Southern Queensland is a registered provider of education with the Australian Government.
(CRICOS Institution Code QLD 00244B / NSW 02225M, TEQSA PRV12081 )