yum update throws "Could not create lock at /var/run/yum.pid" on CentOS 6.5

Problem description (1 vote, 4 answers)

I have deployed a fresh CentOS 6.5 instance on my VMServer, with the Development Tools, X11 and a few other package groups installed. On the first day everything seemed fine. Later I could no longer update or install any package with yum; it throws an error as shown below:

[root@localDev ~]# yum update
Loaded plugins: fastestmirror, refresh-packagekit, security
Cannot open logfile /var/log/yum.log
Could not create lock at /var/run/yum.pid: [Errno 30] Read-only file system: '/var/run/yum.pid'
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory :  20 M RSS (315 MB VSZ)
    Started: Wed Jul 20 22:01:54 2016 - 00:03 ago
    State  : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory :  20 M RSS (315 MB VSZ)
    Started: Wed Jul 20 22:01:54 2016 - 00:05 ago
    State  : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory :  20 M RSS (315 MB VSZ)
    Started: Wed Jul 20 22:01:54 2016 - 00:07 ago
    State  : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory :  20 M RSS (315 MB VSZ)
    Started: Wed Jul 20 22:01:54 2016 - 00:09 ago
    State  : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory :  20 M RSS (315 MB VSZ)
    Started: Wed Jul 20 22:01:54 2016 - 00:11 ago
    State  : Running, pid: 10750
^C

Exiting on user cancel.
[root@localDev ~]#

Yet no process with the mentioned pid 10750 shows up in the output of the ps command:

[root@localDev ~]# ps -eaf
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Jul19 ?        00:00:00 /sbin/init
root         2     0  0 Jul19 ?        00:00:00 [kthreadd]
root         3     2  0 Jul19 ?        00:00:00 [migration/0]
root         4     2  0 Jul19 ?        00:00:00 [ksoftirqd/0]
root         5     2  0 Jul19 ?        00:00:00 [migration/0]
root         6     2  0 Jul19 ?        00:00:00 [watchdog/0]
root         7     2  0 Jul19 ?        00:00:00 [migration/1]
root         8     2  0 Jul19 ?        00:00:00 [migration/1]
root         9     2  0 Jul19 ?        00:00:00 [ksoftirqd/1]
root        10     2  0 Jul19 ?        00:00:00 [watchdog/1]
root        11     2  0 Jul19 ?        00:00:02 [events/0]
root        12     2  0 Jul19 ?        00:01:03 [events/1]
root        13     2  0 Jul19 ?        00:00:00 [cgroup]
root        14     2  0 Jul19 ?        00:00:00 [khelper]
root        15     2  0 Jul19 ?        00:00:00 [netns]
root        16     2  0 Jul19 ?        00:00:00 [async/mgr]
root        17     2  0 Jul19 ?        00:00:00 [pm]
root        18     2  0 Jul19 ?        00:00:00 [sync_supers]
root        19     2  0 Jul19 ?        00:00:00 [bdi-default]
root        20     2  0 Jul19 ?        00:00:00 [kintegrityd/0]
root        21     2  0 Jul19 ?        00:00:00 [kintegrityd/1]
root        22     2  0 Jul19 ?        00:00:00 [kblockd/0]
root        23     2  0 Jul19 ?        00:00:00 [kblockd/1]
root        24     2  0 Jul19 ?        00:00:00 [kacpid]
root        25     2  0 Jul19 ?        00:00:00 [kacpi_notify]
root        26     2  0 Jul19 ?        00:00:00 [kacpi_hotplug]
root        27     2  0 Jul19 ?        00:00:00 [ata_aux]
root        28     2  0 Jul19 ?        00:00:00 [ata_sff/0]
root        29     2  0 Jul19 ?        00:00:00 [ata_sff/1]
root        30     2  0 Jul19 ?        00:00:00 [ksuspend_usbd]
root        31     2  0 Jul19 ?        00:00:00 [khubd]
root        32     2  0 Jul19 ?        00:00:00 [kseriod]
root        33     2  0 Jul19 ?        00:00:00 [md/0]
root        34     2  0 Jul19 ?        00:00:00 [md/1]
root        35     2  0 Jul19 ?        00:00:00 [md_misc/0]
root        36     2  0 Jul19 ?        00:00:00 [md_misc/1]
root        37     2  0 Jul19 ?        00:00:00 [linkwatch]
root        38     2  0 Jul19 ?        00:00:00 [khungtaskd]
root        39     2  0 Jul19 ?        00:00:00 [kswapd0]
root        40     2  0 Jul19 ?        00:00:00 [ksmd]
root        41     2  0 Jul19 ?        00:00:00 [khugepaged]
root        42     2  0 Jul19 ?        00:00:00 [aio/0]
root        43     2  0 Jul19 ?        00:00:00 [aio/1]
root        44     2  0 Jul19 ?        00:00:00 [crypto/0]
root        45     2  0 Jul19 ?        00:00:00 [crypto/1]
root        50     2  0 Jul19 ?        00:00:00 [kthrotld/0]
root        51     2  0 Jul19 ?        00:00:00 [kthrotld/1]
root        52     2  0 Jul19 ?        00:00:00 [pciehpd]
root        54     2  0 Jul19 ?        00:00:00 [kpsmoused]
root        55     2  0 Jul19 ?        00:00:00 [usbhid_resumer]
root        85     2  0 Jul19 ?        00:00:00 [kstriped]
root       162     2  0 Jul19 ?        00:00:00 [scsi_eh_0]
root       163     2  0 Jul19 ?        00:00:00 [scsi_eh_1]
root       169     2  0 Jul19 ?        00:00:02 [mpt_poll_0]
root       170     2  0 Jul19 ?        00:00:00 [mpt/0]
root       187     2  0 Jul19 ?        00:00:37 [scsi_eh_2]
root       291     2  0 Jul19 ?        00:00:00 [jbd2/sda2-8]
root       292     2  0 Jul19 ?        00:00:00 [ext4-dio-unwrit]
root       381     1  0 Jul19 ?        00:00:00 /sbin/udevd -d
root       564     2  0 Jul19 ?        00:00:02 [vmmemctl]
root       713     2  0 Jul19 ?        00:00:00 [jbd2/sda1-8]
root       714     2  0 Jul19 ?        00:00:00 [ext4-dio-unwrit]
root       715     2  0 Jul19 ?        00:00:00 [jbd2/sda3-8]
root       716     2  0 Jul19 ?        00:00:00 [ext4-dio-unwrit]
root       717     2  0 Jul19 ?        00:00:00 [jbd2/sda6-8]
root       718     2  0 Jul19 ?        00:00:00 [ext4-dio-unwrit]
root       761     2  0 Jul19 ?        00:00:00 [kauditd]
root       995     1  0 Jul19 ?        00:00:00 auditd
root      1020     1  0 Jul19 ?        00:00:01 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root      1050     1  0 Jul19 ?        00:00:14 irqbalance --pid=/var/run/irqbalance.pid
rpc       1064     1  0 Jul19 ?        00:00:00 rpcbind
rpcuser   1082     1  0 Jul19 ?        00:00:00 rpc.statd
dbus      1192     1  0 Jul19 ?        00:00:00 dbus-daemon --system
root      1208     1  0 Jul19 ?        00:00:00 cupsd -C /etc/cups/cupsd.conf
root      1233     1  0 Jul19 ?        00:00:00 /usr/sbin/acpid
68        1242     1  0 Jul19 ?        00:00:00 hald
root      1243  1242  0 Jul19 ?        00:00:00 hald-runner
root      1282  1243  0 Jul19 ?        00:00:00 hald-addon-input: Listening on /dev/input/event0 /dev/input/event2
68        1290  1243  0 Jul19 ?        00:00:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
root      1310     1  0 Jul19 ?        00:00:00 automount --pid-file /var/run/autofs.pid
root      1343     1  0 Jul19 ?        00:00:00 /usr/sbin/sshd
postgres  1377     1  0 Jul19 ?        00:00:00 /usr/pgsql-9.2/bin/postmaster -p 5432 -D /var/lib/pgsql/9.2/data
postgres  1379  1377  0 Jul19 ?        00:00:00 postgres: logger process
postgres  1381  1377  0 Jul19 ?        00:00:00 postgres: checkpointer process
postgres  1382  1377  0 Jul19 ?        00:00:00 postgres: writer process
postgres  1383  1377  0 Jul19 ?        00:00:01 postgres: wal writer process
postgres  1384  1377  0 Jul19 ?        00:01:13 postgres: autovacuum launcher process
postgres  1385  1377  0 Jul19 ?        00:00:01 postgres: stats collector process
root      1463     1  0 Jul19 ?        00:00:00 /usr/libexec/postfix/master
postfix   1472  1463  0 Jul19 ?        00:00:00 qmgr -l -t fifo -u
root      1487     1  0 Jul19 ?        00:00:00 /usr/sbin/abrtd
root      1506     1  0 Jul19 ?        00:00:00 /usr/sbin/atd
root      1545     1  0 Jul19 ?        00:00:00 /usr/sbin/certmonger -S -p /var/run/certmonger.pid
root      1558     1  0 Jul19 tty1     00:00:00 /sbin/mingetty /dev/tty1
root      1560     1  0 Jul19 tty2     00:00:00 /sbin/mingetty /dev/tty2
root      1562     1  0 Jul19 tty3     00:00:00 /sbin/mingetty /dev/tty3
root      1564     1  0 Jul19 tty4     00:00:00 /sbin/mingetty /dev/tty4
root      1566     1  0 Jul19 tty5     00:00:00 /sbin/mingetty /dev/tty5
root      1568     1  0 Jul19 tty6     00:00:00 /sbin/mingetty /dev/tty6
root      1569   381  0 Jul19 ?        00:00:00 /sbin/udevd -d
root      1570   381  0 Jul19 ?        00:00:00 /sbin/udevd -d
root     10436  1343  0 19:28 ?        00:00:00 sshd: root@pts/0
root     10438  1343  0 19:28 ?        00:00:00 sshd: root@notty
root     10440 10438  0 19:28 ?        00:00:00 /usr/libexec/openssh/sftp-server
root     10449 10436  0 19:28 pts/0    00:00:00 -bash
postfix  10670  1463  0 21:15 ?        00:00:00 pickup -l -t fifo -u
root     10756     2  0 22:04 ?        00:00:00 [flush-8:0]
root     10765 10449  0 22:09 pts/0    00:00:00 ps -eaf
[root@localDev ~]#

After some googling I found that the root partition is mounted ro in this setup. Trying to remount the root partition "/" read-write with the command mount -o remount,rw / results in another error message:

[root@localDev ~]# mount -o remount,rw /
mount: cannot remount block device /dev/sda2 read-write, is write-protected
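
One way to check why the kernel is keeping /dev/sda2 write-protected would be to look for filesystem or disk errors in the kernel log; the commands below are only a rough illustration and the exact messages will vary from system to system:

# dmesg | grep -iE 'ext4|read-only|i/o error'
# tail -n 100 /var/log/messages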

Below is the output of the command cat /proc/mounts:

[root@localDev ~]# cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=1952148k,nr_inodes=488037,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/sda2 / ext4 ro,relatime,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/sda3 /home ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/sda6 /tmp ext4 rw,relatime,barrier=1,data=ordered 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
/etc/auto.misc /misc autofs rw,relatime,fd=7,pgrp=1310,timeout=300,minproto=5,maxproto=5,indirect 0 0
-hosts /net autofs rw,relatime,fd=13,pgrp=1310,timeout=300,minproto=5,maxproto=5,indirect 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0

What is wrong with this setup? With my limited debugging knowledge I tried modifying the mount configuration during boot, but that attempt failed as well. Please suggest a way to resolve this issue.

Thanks in advance...

linux yum mount centos6.5
4 Answers
0 votes

»Another app is currently holding the yum lock; waiting for it to exit... The other application is: yum«

CentOS 6.5 is an old release (1 December 2013). The current update level is 6.8.

You can either wait for the "check for updates" process to finish the roughly 1000 pending updates, or kill that process.
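
If you prefer not to wait, a quick way to locate the process holding the lock might be something like the following; the pid you pass to kill is whatever the first command reports on your machine:

# ps -ef | grep -E 'yum|PackageKit' | grep -v grep
# kill <pid-from-the-output-above>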


0 votes

Try executing the following commands:

[root@xyz ~]# ps -ef | grep yum
root      4511  4383 24 15:21 ?        00:00:39 /usr/bin/python /usr/share/PackageKit/helpers/yum/yumBackend.py get-updates none
root      4558  4524  0 15:24 pts/1    00:00:00 grep yum
[root@xyz ~]# kill 4511 

Now execute # yum update
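
As a side note, the process in that listing is PackageKit's yum backend. If it keeps re-acquiring the lock, the refresh-packagekit plugin (visible in the "Loaded plugins" line of the yum output above) can be disabled. The config path below is the usual location on CentOS 6; verify it exists on your system before editing it:

# sed -i 's/enabled=1/enabled=0/' /etc/yum/pluginconf.d/refresh-packagekit.conf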


0 votes

This happens because another application is holding the lock; to resolve it, we should kill the currently running process. To find the ID of the currently running process:

$ ps aux | grep yum

root  2640     1  0 Nov09 ?        00:00:00 /usr/bin/python -tt /usr/sbin/yum-updatesd
root 13974  6577  0 10:27 pts/1    00:00:00 grep yum
root 17552  2640  0 09:16 ?        00:00:00 /usr/bin/python -tt /usr/libexec/yum-updatesd-helper --check --dbus

Kill the process:

$ kill process_id

Kill all the running processes:

kill 2640
kill 17552

Check again whether any other yum process is still running. If so, kill that one too.

Now update:

$ yum update -y


0 votes

Option #1: kill the process

kill -9 processid

Option #2: kill all yum processes

killall -9 yum

Option #3: delete the yum.pid file

rm -f /var/run/yum.pid

yum -y update
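
Putting these options together, a typical sequence when the pid file is stale might look like the sketch below: first confirm that no yum process is actually running, then remove the pid file and retry. Note that on the asker's system /var/run sits on the read-only root filesystem, so removing the pid file will itself fail until "/" is remounted read-write:

# ps -ef | grep yum | grep -v grep
# rm -f /var/run/yum.pid
# yum -y update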
