Mounting an additional disk at /home/kafka_data on an existing disk

Problem description
Typed the command lsblk to check the space that was added:
NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                               8:0    0   40G  0 disk
├─sda1                            8:1    0    1G  0 part /extra-storage
└─sda2                            8:2    0   39G  0 part
  ├─cl_stag--elk--testing1-root 253:0    0 35.1G  0 lvm  /
  └─cl_stag--elk--testing1-swap 253:1    0    4G  0 lvm  [SWAP]
sdb                               8:16   0   20G  0 disk                     <- the 20 GB disk that was added
sr0                              11:0    1 1024M  0 rom
[root@stag-elk-testing1 extra-storage]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xcc8319d1.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): 2048
Last sector, +sectors or +size{K,M,G,T,P} (2048-41943039, default 41943039):   <- pressed Enter to accept the default; if fdisk then asks about removing an old filesystem signature, choose to remove it

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
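For reference, the same single full-disk partition can also be created non-interactively, for example with parted (a sketch, not part of the original session; double-check the device name before running it):

parted -s /dev/sdb mklabel msdos                    # write an empty DOS partition table
parted -s /dev/sdb mkpart primary ext4 1MiB 100%    # one primary partition spanning the whole disk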

[root@stag-elk-testing1 extra-storage]# fdisk -l   # gives a clear picture of the partitions and the space allotted to them
Disk /dev/sda: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disklabel type: dos
Disk identifier: 0xa263bb60

Device     Boot   Start      End  Sectors Size Id Type
/dev/sda1  *       2048  2099199  2097152   1G 83 Linux
/dev/sda2       2099200 83886079 81786880  39G 8e Linux LVM




Disk /dev/mapper/cl_stag--elk--testing1-root: 35.1 GiB, 37652267008 bytes, 73539584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes


Disk /dev/mapper/cl_stag--elk--testing1-swap: 4 GiB, 4219469824 bytes, 8241152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes


Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disklabel type: dos
Disk identifier: 0xcc8319d1

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 41943039 41940992  20G 83 Linux
[root@stag-elk-testing1 extra-storage]# mkfs.ext4 /dev/sdb1   # formatting the new partition
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done
Creating filesystem with 5242624 4k blocks and 1310720 inodes
Filesystem UUID: 940eadc0-dbc0-4519-8bc2-c5d4d23c4823
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
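At this point the new filesystem and its UUID can be double-checked (a small verification sketch, not part of the original output):

lsblk -f /dev/sdb1   # shows the ext4 filesystem and its UUID
blkid /dev/sdb1      # same information, handy if an /etc/fstab entry is added later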

I created a partition, sdb1 above, with 30 GB on it. I want to mount it at /home/kafka_data, which already holds 20 GB of data. The problem is that when I mount that space at /home/kafka_data, all the existing data is replaced/hidden and my Kafka stops working. In the output of df -h there are now two entries: the old one with 20 GB and the new /home/kafka_data with 30 GB of space. But the older data is hidden and /home/kafka_data appears empty. I want an extra 30 GB on top of the existing 20 GB at /home/kafka_data, because my logs are large.

I backed up all the data in /home/kafka_data and in the consumer_offsets/topics directories, and after mounting the 30 GB on /home/kafka_data I copied it back into kafka_data, so that Kafka could run as it did before with the 20 GB. But my Kafka does not start, and I cannot understand why it will not run after the mount. I have three Kafka nodes and did this on only one of them.

Tags: linux, disk
2 Answers

0 votes

"I need an extra 30 GB on top of the existing 20 GB at /home/kafka_data, because my logs are large."

Get a bigger hard drive, or use a cloud-hosted Kafka service instead of your own hardware. You cannot "combine" storage by mounting a partition over an existing home directory; that is not how partitions work. Yes, mounting a partition will effectively hide the files that already exist at that path on the OS partition. To extend capacity across multiple drives you need something like a ZFS pool, not an EXT4 partition.
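Purely for illustration, a minimal sketch of what such a pooled setup could look like with ZFS on Linux, assuming the zfs packages are installed; the pool name kafka_pool, the mountpoint /srv/kafka_pool, and the extra device /dev/sdc1 are invented for this example, and creating a pool wipes whatever is currently on the devices:

zpool create -m /srv/kafka_pool kafka_pool /dev/sdb1   # single-device pool mounted at /srv/kafka_pool
zpool add kafka_pool /dev/sdc1                         # later: grow the pool by adding another device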

Kafka supports log.dirs as a comma-separated list of paths; you do not need to use a single directory, and especially not a home directory.
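
A rough sketch of that approach (the mount point, Kafka user, and server.properties path below are assumptions, not taken from the question): mount the new filesystem at its own directory and list it as an additional log dir, then restart the broker.

mkdir -p /data/kafka_extra                       # dedicated mount point for the new filesystem
mount /dev/sdb1 /data/kafka_extra
chown -R kafka:kafka /data/kafka_extra           # assuming the broker runs as user kafka

# in server.properties (e.g. /opt/kafka/config/server.properties):
# log.dirs=/home/kafka_data,/data/kafka_extra

Kafka will place new partitions on whichever log dir currently holds the fewest, and the existing data in /home/kafka_data stays where it is.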


0 votes

 df -h
Filesystem                               Size  Used Avail Use% Mounted on
devtmpfs                                 1.9G     0  1.9G   0% /dev
tmpfs                                    2.0G     0  2.0G   0% /dev/shm
tmpfs                                    2.0G   17M  1.9G   1% /run
tmpfs                                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/cl_stag--elk--testing1-root   36G  2.0G   34G   6% /
/dev/sda1                                976M  120M  789M  14% /boot
tmpfs                                    390M     0  390M   0% /run/user/0

Adding 5 GB from sdb1 to root:

pvcreate /dev/sdb1                                      # initialize the partition as an LVM physical volume
vgextend cl_stag-elk-testing1 /dev/sdb1                 # add it to the existing volume group
lvextend -l +100%FREE /dev/cl_stag-elk-testing1/root    # grow the root logical volume into all free space
sudo xfs_growfs /                                       # grow the XFS filesystem to fill the enlarged volume
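
A quick way to confirm each step took effect (standard LVM reporting commands, shown here as a sketch rather than taken from the original session):

pvs                        # /dev/sdb1 should now appear as a physical volume
vgs cl_stag-elk-testing1   # volume group size and remaining free extents
lvs                        # the root logical volume should show its new size
df -h /                    # filesystem size should match after xfs_growfs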

Reducing 5 GB from root:
 df -h
Filesystem                               Size  Used Avail Use% Mounted on
devtmpfs                                 1.9G     0  1.9G   0% /dev
tmpfs                                    2.0G     0  2.0G   0% /dev/shm
tmpfs                                    2.0G   17M  1.9G   1% /run
tmpfs                                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/cl_stag--elk--testing1-root   41G  2.0G   39G   5% /
/dev/sda1                                976M  120M  789M  14% /boot
tmpfs                                    390M     0  390M   0% /run/user/0

lvreduce -L -5G /dev/cl_stag-elk-testing1/root
sudo xfs_growfs /
meta-data=/dev/mapper/cl_stag--elk--testing1-root isize=512    agcount=5, agsize=2298112 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=10502144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=4488, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size 9191424 too small, old size is 10502144

Which step is missing?
