Big Data Common Commands Reference
Continuously updated…
Linux
Common commands
- Directory tree
[root@hd101 /]# find . -print 2>/dev/null|awk '!/\.$/ {for (i=1;i<NF;i++){d=length($i);if ( d < 5 && i != 1 )d=5;printf("%"d"s","|")}print "---"$NF}' FS='/'
- Batch text replacement
Note: in every file named "target.log" under "./", replace "srcTxt" with "tarTxt"
[root@hd101 /]# grep '=srcTxt' -rl --include="target.log" ./ | xargs sed -i "s/=srcTxt/=tarTxt/g"
- Find files containing a given string
Note: walk "./" and list every file named "target.log" that contains the text "txt"
[root@hd101 /]# grep 'txt' -rl --include="target.log" ./ | xargs grep 'txt'
- Check this host's public IP
[root@hd101 /]# curl ifconfig.co
[root@hd101 /]# curl cip.cc
- Change the file encoding in Vim
Run inside Vim: :set fileencoding=utf-8
- Clean up files older than 30 days
Note: run "rm -rf" on every file under "." matching "*" whose mtime is older than 30 days
[root@hd101 /]# find . -mtime +30 -name "*" -exec rm -rf {} \;
- Check the disk space used by files
Note: for a full "du" reference see https://blog.csdn.net/WFSLIFE/article/details/124539448?spm=1001.2014.3001.5501
[root@hd101 /]# du -ah --max-depth=1 | sort -nr
- List the IPs connecting to a port
Note: list the IPs accessing port "9092" on this host
[root@hd101 /]# netstat -ntu | grep tcp | grep 9092 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c
- Temporarily stop the firewall (it starts again automatically after a reboot)
Note: to start / restart it again: systemctl start firewalld; systemctl restart firewalld
[root@hd101 /]# systemctl stop firewalld
- Check the firewall status
[root@hd101 /]# firewall-cmd --state
[root@hd101 /]# firewall-cmd --list-all
[root@hd101 /]# systemctl status firewalld
- Disable firewall autostart at boot
[root@hd101 /]# systemctl disable firewalld
- Enable firewall autostart at boot
[root@hd101 /]# systemctl enable firewalld
- System messages (connection tracking)
[root@hd101 /]# conntrack
[root@hd101 /]# dmesg -T | grep conntrack
- Sync the hardware clock to the system time
[root@hd101 /]# hwclock --systohc
- Speed up a RAID rebuild (remember to change it back afterwards)
[root@hd101 /]# sysctl dev.raid.speed_limit_min
[root@hd101 /]# sysctl -w dev.raid.speed_limit_min=200000
- Data transfer with nc + tar
Receiver (absolute path):
[root@hd101 /]# nc -l 9999 | tar -C /home/gpadmin/nctest/ -xPf -
Sender (relative path):
[root@hd101 /]# tar -cf - ./* | nc gp-segment-08-1 9999
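If the link is slow, the same pipeline can compress on the fly. A minimal variant of the commands above (same host, port and paths as the example; only tar's gzip flag is added):
# receiver: listen, then decompress and unpack
nc -l 9999 | tar -C /home/gpadmin/nctest/ -xzPf -
# sender: pack, compress and stream
tar -czf - ./* | nc gp-segment-08-1 9999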
Configuration – raising the open-file limit
- Session-level limit
Note: sets the maximum number of files all processes of the current user in the current shell may open to 1000
[root@hd101 /]# ulimit -n 1000
- User-level limit
[root@hd101 /]# sudo vim /etc/security/limits.conf
Append the following to the end of the file:
* soft nofile 1024000
* hard nofile 1024000
* soft nproc unlimited
* hard nproc unlimited
- The * in the first column means the entry applies to all users
- soft nofile : the soft limit (a warning threshold; it can be exceeded, but a warning is issued)
- hard nofile : the hard limit (a strict limit that cannot be exceeded)
- The soft limit must be less than or equal to the hard limit
- nofile : limits the number of files all processes of a user may open
- nproc : limits the number of processes a user may create
- Takes effect after re-login / reboot (a quick verification is sketched below)
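A minimal check, assuming a bash shell, to confirm after logging back in that the new limits were picked up:
ulimit -Sn   # soft nofile limit
ulimit -Hn   # hard nofile limit
ulimit -Su   # soft nproc limit (max user processes)
ulimit -Hu   # hard nproc limit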
- System-level limit
[root@hd101 /]# sudo cat /proc/sys/fs/file-max
1024000
Summary:
- /proc/sys/fs/file-max does not constrain /etc/security/limits.conf
- Only root may modify /etc/security/limits.conf
- For non-root users, /etc/security/limits.conf caps ulimit -n; it does not cap root
- A non-root user can only lower ulimit -n; root has no such restriction
- Any change made with ulimit -n only affects the current session; after logging out and back in, ulimit -n is again determined by limits.conf
- If limits.conf sets nothing, the default is 1024
- The maximum number of files all processes of the current user can open in the current session is determined by ulimit -n
Configuration – enabling coredumps
- Enable permanently
[root@hd101 /]# sudo vim /etc/security/limits.conf
Uncomment the line "soft core 0" and change the 0 to unlimited or to a concrete value, e.g.
* soft core 204800
* hard core 204800
- Verify whether it is enabled:
[root@hd101 /]# ulimit -c
0 : not enabled. 204800 : enabled, with a maximum core file size of 204800 blocks.
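A session-only sketch (bash assumed) for enabling coredumps temporarily and checking where the kernel will write them:
ulimit -c                           # 0 means core dumps are disabled in this shell
ulimit -c unlimited                 # enable for the current session only; lost on logout
cat /proc/sys/kernel/core_pattern   # pattern/location the kernel uses for core files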
Configuration – memory lock (memlock)
[root@hd101 /]# sudo vim /etc/security/limits.conf
* soft memlock unlimited
* hard memlock unlimited
Server hardware information
- Server model
[root@hd101 /]# dmidecode | grep "System Information" -A8
System Information
	Manufacturer: VMware, Inc.
	Product Name: VMware Virtual Platform
	Version: None
	Serial Number: VMware-42 3a 5c b1 9a c2 7c 65-b9 57 19 fe 7a 49 2c e6
	UUID: b15c3a42-c29a-657c-b957-19fe7a492ce6
	Wake-up Type: Power Switch
	SKU Number: Not Specified
	Family: Not Specified
- Motherboard model
[root@etl /]# dmidecode | grep "Base Board Information" -A10
Base Board Information
	Manufacturer: Intel Corporation
	Product Name: 440BX Desktop Reference Platform
	Version: None
	Serial Number: None
	Asset Tag: Not Specified
	Features: None
	Location In Chassis: Not Specified
	Chassis Handle: 0x0000
	Type: Unknown
	Contained Object Handles: 0
- BIOS information
[root@etl /]# dmidecode -t bios # dmidecode 3.2 Getting SMBIOS data from sysfs. SMBIOS 2.7 present. Handle 0x0000, DMI type 0, 24 bytes BIOS Information Vendor: Phoenix Technologies LTD Version: 6.00 Release Date: 12/12/2018 Address: 0xEA490 Runtime Size: 88944 bytes ROM Size: 64 kB Characteristics: ISA is supported PCI is supported PC Card (PCMCIA) is supported PNP is supported APM is supported BIOS is upgradeable BIOS shadowing is allowed ESCD support is available Boot from CD is supported Selectable boot is supported EDD is supported Print screen service is supported (int 5h) 8042 keyboard services are supported (int 9h) Serial services are supported (int 14h) Printer services are supported (int 17h) CGA/mono video services are supported (int 10h) ACPI is supported Smart battery is supported BIOS boot specification is supported Function key-initiated network boot is supported Targeted content distribution is supported BIOS Revision: 4.6 Firmware Revision: 0.0
- Memory slots and modules
[root@etl ~]# dmidecode -t memory | grep "Memory Controller Information" -A35 Memory Controller Information Error Detecting Method: None Error Correcting Capabilities: None Supported Interleave: One-way Interleave Current Interleave: One-way Interleave Maximum Memory Module Size: 32768 MB Maximum Total Memory Size: 491520 MB Supported Speeds: 70 ns 60 ns Supported Memory Types: FPM EDO DIMM SDRAM Memory Module Voltage: 3.3 V Associated Memory Slots: 15 0x0006 0x0007 0x0008 0x0009 0x000A 0x000B 0x000C 0x000D 0x000E 0x000F 0x0010 0x0011 0x0012 0x0013 0x0014 Enabled Error Correcting Capabilities: None [root@etl ~]# dmidecode -t memory | grep "Physical Memory Array" -A6 Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: None Maximum Capacity: 33 GB Error Information Handle: Not Provided Number Of Devices: 64
- PCI information, i.e. all hardware slots on the motherboard
[root@etl ~]# lspci 00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01) 00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01) 00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08) 00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01) 00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08) 00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10) 00:0f.0 VGA compatible controller: VMware SVGA II Adapter 00:11.0 PCI bridge: VMware PCI bridge (rev 02) 00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01) 00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01) 00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01) 00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01) 00:15.4 PCI bridge: VMware PCI Express Root Port (rev 01) 00:15.5 PCI bridge: VMware PCI Express Root Port (rev 01) 00:15.6 PCI bridge: VMware PCI Express Root Port (rev 01) 00:15.7 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.0 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.1 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.2 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.3 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.4 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.5 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.6 PCI bridge: VMware PCI Express Root Port (rev 01) 00:16.7 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.0 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.1 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.2 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.3 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.4 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.5 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.6 PCI bridge: VMware PCI Express Root Port (rev 01) 00:17.7 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.0 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.1 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.2 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.3 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.4 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.5 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.6 PCI bridge: VMware PCI Express Root Port (rev 01) 00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01) 02:01.0 SATA controller: VMware SATA AHCI controller 03:00.0 Serial Attached SCSI controller: VMware PVSCSI SCSI Controller (rev 02) 0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
CPU hardware information
- CPU model
[root@etl ~]# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 16 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
- Number of physical CPUs (distinct physical ids)
[root@etl ~]# grep 'physical id' /proc/cpuinfo | sort | uniq | wc -l 16
- Number of logical processors (processor entries)
[root@etl ~]# cat /proc/cpuinfo | grep 'processor' | wc -l 16
- Number of cores per physical CPU
[root@etl ~]# cat /proc/cpuinfo | grep "cores" | uniq cpu cores : 1
- CPU clock frequency
[root@etl ~]# cat /proc/cpuinfo | grep MHz | uniq cpu MHz : 2199.998
- General CPU information (lscpu)
[root@etl ~]# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Thread(s) per core: 1 Core(s) per socket: 1 座: 16 NUMA 节点: 2 厂商 ID: GenuineIntel CPU 系列: 6 型号: 79 型号名称: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz 步进: 0 CPU MHz: 2199.998 BogoMIPS: 4399.99 超管理器厂商: VMware 虚拟化类型: 完全 L1d 缓存: 32K L1i 缓存: 32K L2 缓存: 256K L3 缓存: 30720K NUMA 节点0 CPU: 0-7 NUMA 节点1 CPU: 8-15 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
- CPU word size (operating mode)
[root@etl ~]# getconf LONG_BIT 64
- Check whether the CPU supports 64-bit
Note: a result greater than 0 means 64-bit is supported; the "lm" flag stands for long mode, i.e. 64-bit capability
[root@etl ~]# cat /proc/cpuinfo | grep flags | grep ' lm ' | wc -l
16
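The same test wrapped in a one-line conditional; a minimal sketch (grep -q only returns the match status without printing):
if grep -q ' lm ' /proc/cpuinfo; then echo "CPU supports 64-bit (long mode)"; else echo "32-bit only"; fi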
Memory information
- Memory usage via /proc/meminfo
Note: this dynamically updated virtual file is effectively the combined data source behind many other memory-related tools (e.g. free / ps / top)
[root@etl ~]# cat /proc/meminfo MemTotal: 32778484 kB MemFree: 4500892 kB MemAvailable: 10443612 kB Buffers: 28 kB Cached: 5103804 kB SwapCached: 50972 kB Active: 20923752 kB Inactive: 4685580 kB Active(anon): 17482576 kB Inactive(anon): 3263584 kB Active(file): 3441176 kB Inactive(file): 1421996 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 16777212 kB SwapFree: 16536572 kB Dirty: 924 kB Writeback: 0 kB AnonPages: 20453628 kB Mapped: 423420 kB Shmem: 240660 kB Slab: 2115364 kB SReclaimable: 1508012 kB SUnreclaim: 607352 kB KernelStack: 29808 kB PageTables: 139368 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 33166452 kB Committed_AS: 83019168 kB VmallocTotal: 34359738367 kB VmallocUsed: 213096 kB VmallocChunk: 34342297596 kB Percpu: 12544 kB HardwareCorrupted: 0 kB AnonHugePages: 196608 kB CmaTotal: 0 kB CmaFree: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 180096 kB DirectMap2M: 8208384 kB DirectMap1G: 27262976 kB
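Because /proc/meminfo is plain text, the figures above are easy to consume in scripts. A small sketch that derives a used-memory percentage from the MemTotal and MemAvailable fields shown above:
awk '/^MemTotal:/{t=$2} /^MemAvailable:/{a=$2} END{printf "memory used: %.1f%%\n", (t-a)*100/t}' /proc/meminfo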
- Memory usage (free)
[root@etl ~]# free -h total used free shared buff/cache available Mem: 31G 20G 4.2G 235M 6.3G 9.8G Swap: 15G 235M 15G
Disk information
- List attached disks
[root@etl ~]# fdisk -l |grep "磁盘 " 磁盘 /dev/sda:107.4 GB, 107374182400 字节,209715200 个扇区 磁盘 /dev/sdb:536.9 GB, 536870912000 字节,1048576000 个扇区 磁盘 /dev/mapper/centos-root:89.7 GB, 89653248000 字节,175104000 个扇区 磁盘 /dev/mapper/centos-swap:17.2 GB, 17179869184 字节,33554432 个扇区 磁盘 /dev/mapper/zdata-zdata:536.9 GB, 536866717696 字节,1048567808 个扇区
- Disk and partition layout
Note: NAME device name, MAJ:MIN major:minor device numbers, RM removable device, SIZE size of the device, RO read-only device, TYPE device type, MOUNTPOINT where the device is mounted
[root@etl ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 100G 0 disk ├─sda1 8:1 0 512M 0 part /boot └─sda2 8:2 0 99.5G 0 part ├─centos-root 253:0 0 83.5G 0 lvm / └─centos-swap 253:1 0 16G 0 lvm [SWAP] sdb 8:16 0 500G 0 disk └─zdata-zdata 253:2 0 500G 0 lvm /zdata sr0 11:0 1 4.5G 0 rom
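If only some of the columns described above are needed, lsblk can be limited to them with its -o option; for example:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# add -d to list whole disks only, without their partitions
lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT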
- Detailed disk and partition information
[root@etl ~]# fdisk -l 磁盘 /dev/sda:107.4 GB, 107374182400 字节,209715200 个扇区 Units = 扇区 of 1 * 512 = 512 bytes 扇区大小(逻辑/物理):512 字节 / 512 字节 I/O 大小(最小/最佳):512 字节 / 512 字节 磁盘标签类型:dos 磁盘标识符:0x000c217c 设备 Boot Start End Blocks Id System /dev/sda1 * 2048 1050623 524288 83 Linux /dev/sda2 1050624 209715199 104332288 8e Linux LVM 磁盘 /dev/sdb:536.9 GB, 536870912000 字节,1048576000 个扇区 Units = 扇区 of 1 * 512 = 512 bytes 扇区大小(逻辑/物理):512 字节 / 512 字节 I/O 大小(最小/最佳):512 字节 / 512 字节 磁盘 /dev/mapper/centos-root:89.7 GB, 89653248000 字节,175104000 个扇区 Units = 扇区 of 1 * 512 = 512 bytes 扇区大小(逻辑/物理):512 字节 / 512 字节 I/O 大小(最小/最佳):512 字节 / 512 字节 磁盘 /dev/mapper/centos-swap:17.2 GB, 17179869184 字节,33554432 个扇区 Units = 扇区 of 1 * 512 = 512 bytes 扇区大小(逻辑/物理):512 字节 / 512 字节 I/O 大小(最小/最佳):512 字节 / 512 字节 磁盘 /dev/mapper/zdata-zdata:536.9 GB, 536866717696 字节,1048567808 个扇区 Units = 扇区 of 1 * 512 = 512 bytes 扇区大小(逻辑/物理):512 字节 / 512 字节 I/O 大小(最小/最佳):512 字节 / 512 字节
- List mounted filesystems
[root@etl ~]# mount | column -t sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=16372276k,nr_inodes=4093069,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
- Filesystem usage
[root@etl ~]# df -hT 文件系统 类型 容量 已用 可用 已用% 挂载点 devtmpfs devtmpfs 16G 0 16G 0% /dev tmpfs tmpfs 16G 0 16G 0% /dev/shm tmpfs tmpfs 16G 99M 16G 1% /run tmpfs tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/mapper/centos-root xfs 84G 6.4G 78G 8% / /dev/sda1 xfs 509M 178M 332M 35% /boot /dev/mapper/zdata-zdata xfs 500G 18G 483G 4% /zdata tmpfs tmpfs 3.2G 16K 3.2G 1% /run/user/42 tmpfs tmpfs 3.2G 0 3.2G 0% /run/user/0 overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/ff22a546ad3793ab7cc9f9750439b32cf63eff8d3feca927b846769af50b0945/merged overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/a267f23c1cda8ac0641e8d7d3965a18a81546ce5531374f2ca583f6b0d8dbce0/merged overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/db59d03119d2deea4a0e17ec1a29460f5feba7a56bb29dd2389dcaadf56437a7/merged overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/86aa5c1b06af3b55c0a427254a8323c43d99ac2b8679a342350dfcb134160cea/merged overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/db2b2b68e08aae188f7735cab2c598fad416995223e05d0840d0e516075fe4a9/merged overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/85ee61c95098c9239cb1c504c9617ff323324f6d6d5b83e06aa50f5eb2948bab/merged overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/86d0abfa455157e156fdc149d63d9ac63affcf6c732f918cc8da84b78b8269e0/merged overlay overlay 500G 18G 483G 4% /zdata/docker/overlay2/57d0a2c759d8ca7acc1954404f28f5457979b6e2e9b569cd981e6b83bc7f24c3/merged
- Disk health check (SMART)
[root@Master ~]# smartctl -a /dev/sda smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-642.el6.x86_64] (local build) Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net Vendor: HP Product: LOGICAL VOLUME Revision: 3.56 User Capacity: 299,966,445,568 bytes [299 GB] Logical block size: 512 bytes Logical Unit id: 0x600508b1001cc8a1b9ec4dacc5ab35dc Serial number: PDNNK0BRH9U0AG Device type: disk Local Time is: Mon Feb 5 13:13:33 2018 CST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Error Counter logging not supported Device does not support Self Test logging
NIC information
- NIC hardware information
[root@etl ~]# lspci | grep -i 'eth' 0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
- List all network interfaces
[root@etl ~]# ifconfig -a
- Details of a specific interface, e.g. parameters and counters for ens192
[root@etl ~]# ethtool ens192 Settings for ens192: Supported ports: [ TP ] Supported link modes: 1000baseT/Full 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: No Supported FEC modes: Not reported Advertised link modes: Not reported Advertised pause frame use: No Advertised auto-negotiation: No Advertised FEC modes: Not reported Speed: 10000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: internal Auto-negotiation: off MDI-X: Unknown Supports Wake-on: uag Wake-on: d Link detected: yes
- Link status of all NICs
[root@etl ~]# for i in `seq 0 9`;do ethtool eth${i} | egrep 'eth|Link';done
Server performance
- System load
Note: the output shows, in order: current time, how long the system has been up, the number of logged-in users, and the load averages over the last 1, 5 and 15 minutes.
[root@hd101 /]# uptime
10:11:35 up 51 days, 14:54, 10 users, load average: 6.80, 6.50, 6.59
- Show the kernel ring buffer
[root@hd101 /]# dmesg -T | tail [二 5月 31 15:12:39 2022] device vethfa1c9b6 entered promiscuous mode [二 5月 31 15:12:39 2022] IPv6: ADDRCONF(NETDEV_UP): vethfa1c9b6: link is not ready [二 5月 31 15:12:39 2022] docker0: port 3(vethfa1c9b6) entered blocking state [二 5月 31 15:12:39 2022] docker0: port 3(vethfa1c9b6) entered forwarding state [二 5月 31 15:12:39 2022] docker0: port 3(vethfa1c9b6) entered disabled state [二 5月 31 15:12:40 2022] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [二 5月 31 15:12:40 2022] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [二 5月 31 15:12:40 2022] IPv6: ADDRCONF(NETDEV_CHANGE): vethfa1c9b6: link becomes ready [二 5月 31 15:12:40 2022] docker0: port 3(vethfa1c9b6) entered blocking state [二 5月 31 15:12:40 2022] docker0: port 3(vethfa1c9b6) entered forwarding state
System resource usage
- Memory usage
Note: free reports in KB by default; use free -h for human-readable output
[root@hd101 /]# free
total used free shared buff/cache available
Mem: 32779888 3935664 7546548 200588 21297676 28240784
Swap: 5242876 0 5242876
Mem row: total = used + free; buffers and cached are counted as used but are in fact reclaimable, available memory.
Swap row: usage of the swap area.
- Top 5 processes by memory usage
Note: ps auxw shows resource usage for all processes
[root@hd101 /]# ps auxw | head -1;ps auxw | sort -rn -k4 | head -5
head -1 prints the first line, i.e. the header row;
sort -r sorts in reverse order, -n sorts numerically, and -k4 sorts by the 4th column (%MEM).
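The head/sort recipe above can be wrapped in a small reusable shell function; psmem is an illustrative name, not a standard command:
psmem() { ps auxw | head -1; ps auxw | sort -rn -k4 | head -n "${1:-5}"; }
# usage: top 10 processes by %MEM
psmem 10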
- Top 3 processes by CPU usage
[root@hd101 /]# ps auxw | head -1;ps auxw | sort -rn -k3 | head -3
- Overall system load (top)
[root@hd101 /]# top
Note: press h inside top for the help page; top -d 2 refreshes every 2 seconds; top -p 12345 -p 6789 shows only the processes with PIDs 12345 and 6789 (refreshing every 5 seconds by default).
Line 1: system time + uptime + number of users + 1/5/15-minute load averages
Line 2: total processes (total) + running (running) + sleeping (sleeping) + stopped (stopped) + zombie (zombie)
Line 3: user-space CPU share (us) + kernel-space CPU share (sy) + idle CPU share (id)
PID: process ID; USER: user name; PR: priority; NI: nice value (negative = higher priority, positive = lower); VIRT: virtual memory; RES: resident memory; SHR: shared memory; S: process state (D uninterruptible sleep, R running, S sleeping, T traced/stopped, Z zombie)
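For scripting or one-off snapshots, the summary lines described above can also be captured non-interactively with top's batch mode:
# print a single iteration and keep only the summary header lines
top -b -n 1 | head -n 5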
- Real-time network usage per process
Note: has to be installed manually.
[root@hd101 /]# nethogs
Install on CentOS: yum install nethogs -y
Install on Ubuntu: sudo apt install nethogs -y
Zookeeper
- Start ZooKeeper
[root@hd101 zookeeper]# bin/zkServer.sh start
- Check status
[root@hd101 zookeeper]# bin/zkServer.sh status
- Start the client
[root@hd101 zookeeper]# bin/zkCli.sh
- Show all commands
[zk: localhost:2181(CONNECTED) 0] help
- List the children of the current znode
[zk: localhost:2181(CONNECTED) 0] ls /
- Show detailed data of the current node
[zk: localhost:2181(CONNECTED) 0] ls2 /
- Create a node
[zk: localhost:2181(CONNECTED) 0] create /sanguo "zhugeliang"
- Get a node's value
[zk: localhost:2181(CONNECTED) 0] get /sanguo
- Create an ephemeral node
[zk: localhost:2181(CONNECTED) 0] create -e /sanguo/wuguo "zhouyu"
- Quit the client
[zk: localhost:2181(CONNECTED) 0] quit
Kafka
- Start Kafka
[root@hd101 kafka]# bin/kafka-server-start.sh -daemon config/server.properties
- Stop Kafka
[root@hd101 kafka]# bin/kafka-server-stop.sh stop
- List all topics on the cluster
[root@hd101 kafka]# bin/kafka-topics.sh --zookeeper [zkNode1:2181,zkNode2:2181] --list
- Create a topic
[root@hd101 kafka]# bin/kafka-topics.sh --zookeeper [zkNode1:2181,zkNode2:2181] --create --replication-factor 3 --partitions 1 --topic [your_topic_name]
- Delete a topic
[root@hd101 kafka]# bin/kafka-topics.sh --zookeeper [zkNode1:2181,zkNode2:2181] --delete --topic [your_topic_name]
- Produce messages from the console
[root@hd101 kafka]# bin/kafka-console-producer.sh --broker-list [kfkNode1:9092,kfkNode2:9092] --topic [your_topic_name]
- Consume messages from the console
[root@hd101 kafka]# bin/kafka-console-consumer.sh --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --topic [your_topic_name]
[root@hd101 kafka]# bin/kafka-console-consumer.sh --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --topic [your_topic_name] --from-beginning
- Describe a topic
[root@hd101 kafka]# bin/kafka-topics.sh --zookeeper [zkNode1:2181,zkNode2:2181] --describe --topic [your_topic_name]
- Change the number of partitions of a topic
[root@hd101 kafka]# bin/kafka-topics.sh --zookeeper [zkNode1:2181,zkNode2:2181] --alter --topic [your_topic_name] --partitions 6
- Read offsets
Versions before 0.11.0.0:
[root@hd101 kafka]# bin/kafka-console-consumer.sh --topic [your_topic_name] --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter" --consumer.config config/consumer.properties --from-beginning
Version 0.11.0.0 and later:
[root@hd101 kafka]# bin/kafka-console-consumer.sh --topic [your_topic_name] --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" --consumer.config config/consumer.properties --from-beginning
- Reset a consumer group's offsets to a point in time (shut the consumers down gracefully first)
[root@hd101 kafka]# bin/kafka-consumer-groups.sh --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --group <consumer_group> --reset-offsets --all-topics --execute --to-datetime 2022-01-12T00:30:00.000+08:00
- Describe a consumer group
[root@hd101 kafka]# bin/kafka-consumer-groups.sh --describe --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --group [your_group_name]
- Reset a consumer group to the latest offsets (shut the consumers down gracefully first)
[root@hd101 kafka]# bin/kafka-consumer-groups.sh --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --group [your_group_name] --reset-offsets --all-topics --to-latest --execute
- Reset a group's offsets to a specific offset (shut the consumers down gracefully first)
[root@hd101 kafka]# bin/kafka-consumer-groups.sh --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --group [your_group_name] --reset-offsets --all-topics --execute --to-offset 10000000
- Delete a consumer group
[root@hd101 kafka]# bin/kafka-consumer-groups.sh --bootstrap-server [kfkNode1:9092,kfkNode2:9092] --group [your_group_name] --delete
- Check the latest offsets of a topic's partitions
Per partition:
[root@hd101 kafka]# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list [kfkNode1:9092,kfkNode2:9092] --topic [your_topic_name] --time -1
Specific partition:
[root@hd101 kafka]# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list [kfkNode1:9092,kfkNode2:9092] --topic [your_topic_name] --time -1 --partitions 0
Sum over all partitions:
[root@hd101 kafka]# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list [kfkNode1:9092,kfkNode2:9092] --topic [your_topic_name] --time -1 | awk -F ":" '{sum1+=$NF} END {print sum1}'
- Rebalance after commissioning new broker nodes
- Generate a reassignment plan
Create topics-to-move.json in advance; e.g. to rebalance the topic first:
[root@hd101 kafka]# vim topics-to-move.json
{
  "topics": [
    {"topic": "first"}
  ],
  "version": 1
}
[root@hd101 kafka]# bin/kafka-reassign-partitions.sh --bootstrap-server hd101:9092,hd102:9092 --topics-to-move-json-file topics-to-move.json --broker-list "0,1,2,3" --generate
- Create the replica placement plan
Create increase-replication-factor.json in advance and paste in the generated plan:
[root@hd101 kafka]# vim increase-replication-factor.json
{"version":1,"partitions":[{"topic":"first","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"first","partition":1,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"first","partition":2,"replicas":[2,3,0],"log_dirs":["any","any","any"]},{"topic":"first","partition":3,"replicas":[3,0,1],"log_dirs":["any","any","any"]},{"topic":"first","partition":4,"replicas":[0,2,3],"log_dirs":["any","any","any"]},{"topic":"first","partition":5,"replicas":[1,3,0],"log_dirs":["any","any","any"]}]}
[root@hd101 kafka]# bin/kafka-reassign-partitions.sh --bootstrap-server hd101:9092,hd102:9092 --reassignment-json-file increase-replication-factor.json --execute
- Verify the replica placement plan
[root@hd101 kafka]# bin/kafka-reassign-partitions.sh --bootstrap-server hd101:9092,hd102:9092 --reassignment-json-file increase-replication-factor.json --verify
Greenplum
- Greenplum backup
[root@hd101 greenplum]# gpbackup --dbname pasobi --backup-dir /exts/gpbackups/pasobi --compression-type zstd --compression-level 9 --jobs 2
- Refresh the PostgREST schema cache
[root@hd101 greenplum]# killall -SIGUSR1 postgrest
Java
- Run a class from a jar
[root@hd101 /]# /data/soft/jdk1.8/bin/java -cp template.jar com.hd.service.UpgradeMain
Airflow
- If Airflow runs in Docker, enter the container first and switch to the airflow user
[root@hd101 /]# docker exec -it airflow2 bash
[root@hd101 /]# su - airflow
- List all DAGs
[root@hd101 /]# airflow dags list
- Trigger the DAG with dag_id=template
[root@hd101 /]# airflow dags trigger template
- Show a DAG's task dependencies (tree view)
[root@hd101 /]# airflow tasks list template --tree
Nginx
Installing Nginx on Linux
- Install the build dependencies
[root@hd101 /]# yum -y install openssl openssl-devel pcre pcre-devel zlib zlib-devel gcc gcc-c++
- Download the source package
[root@hd101 /]# cd /opt/software
[root@hd101 software]# wget http://nginx.org/download/nginx-1.13.7.tar.gz
[root@hd101 software]# tar -zxvf nginx-1.13.7.tar.gz
- Build and install Nginx
[root@hd101 /]# cd /opt/software/nginx-1.13.7
[root@hd101 nginx-1.13.7]# ./configure --prefix=/opt/module/nginx
[root@hd101 nginx-1.13.7]# make && make install
Common commands
- Start
Note: when starting as a non-root user, first grant the binary the right to bind privileged ports, i.e. let this user's application use ports below 1024:
[root@hd101 /]# sudo setcap cap_net_bind_service=+eip /opt/module/nginx/sbin/nginx
[root@hd101 /]# cd /opt/module/nginx
[root@hd101 nginx]# sbin/nginx
- Check the Nginx processes
[root@hd101 /]# ps -ef | grep nginx
- Stop
[root@hd101 /]# cd /opt/module/nginx
[root@hd101 nginx]# sbin/nginx -s quit
or
[root@hd101 nginx]# sbin/nginx -s stop
- Reload
[root@hd101 /]# cd /opt/module/nginx
[root@hd101 nginx]# sbin/nginx -s reload
- Enable start at boot
[root@hd101 /]# vim /etc/rc.local
Append the following line at the bottom:
/opt/module/nginx/sbin/nginx
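On systemd-based systems such as CentOS 7 (an assumption about the target host), rc.local is only executed at boot if the script is executable, so the following is usually needed as well:
chmod +x /etc/rc.d/rc.local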
- Test the configuration
[root@hd101 /]# /opt/module/nginx/sbin/nginx -c /opt/module/nginx/conf/nginx.conf -t
- Web UI
http://hd101:80/
HDFS
- Help
[root@hd101 /]# hdfs dfs -help
- List the current directory
[root@hd101 /]# hdfs dfs -ls /
- Upload a file
[root@hd101 /]# hdfs dfs -put /local/path /hdfs/path
- Move a local file into HDFS
[root@hd101 /]# hdfs dfs -moveFromLocal a.txt /aa.txt
- Download a file to the local filesystem
[root@hd101 /]# hdfs dfs -get /hdfs/path /local/path
- Merge and download
[root@hd101 /]# hdfs dfs -getmerge /hdfs/source/dir /local/merged/file
- Create a directory
[root@hd101 /]# hdfs dfs -mkdir /hello
- Create nested directories
[root@hd101 /]# hdfs dfs -mkdir -p /hello/world
- Move an HDFS file
[root@hd101 /]# hdfs dfs -mv /hdfs/path /hdfs/path
- Copy an HDFS file
[root@hd101 /]# hdfs dfs -cp /hdfs/path /hdfs/path
- Delete an HDFS file
[root@hd101 /]# hdfs dfs -rm /aa.txt
- Delete an HDFS directory
[root@hd101 /]# hdfs dfs -rm -r /hello
- View a file in HDFS
[root@hd101 /]# hdfs dfs -cat /file
[root@hd101 /]# hdfs dfs -tail -f /file
- Count the files in a directory
[root@hd101 /]# hdfs dfs -count /directory
- Check total HDFS capacity
[root@hd101 /]# hdfs dfs -df /
[root@hd101 /]# hdfs dfs -df -h /
- Change the replication factor
[root@hd101 /]# hdfs dfs -setrep 1 /a.txt
Elasticsearch
- Cluster information
[root@hd101 /]# curl http://hd103:9200/
- Node information
[root@hd101 /]# curl http://hd103:9200/_cat/nodes?v
- Cluster health
[root@hd101 /]# curl -XGET 'http://hd103:9200/_cluster/health?pretty=true'
- Index status
[root@hd101 /]# curl -XGET 'http://hd103:9200/_cat/indices?v&pretty'
health status index     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
red    open   .kibana_1 eZ7hC4Z_SGSD-gpSfqI-Vg   1   0
- Delete a problematic index
[root@hd101 /]# curl -XDELETE http://hd103:9200/.kibana_1
Redis
Docker-based installation
- Pull the image
[root@hd101 /]# docker pull redis
- Start the container
[root@hd101 /]# docker run -p 16379:6379 --name redis_etl -d redis redis-server --requirepass hbaCYQhCH7ABQX
- Enter the container
[root@hd101 /]# docker exec -it redis_etl bash
- Start the Redis client
root@853cd82306e2:/data# redis-cli
- Show the current password
127.0.0.1:6379> config get requirepass
- Authenticate
127.0.0.1:6379> auth hbaCYQhCH7ABQX
- Set the password
127.0.0.1:6379> config set requirepass hbaCYQhCH7ABQX
- Exit the Redis client
127.0.0.1:6379> exit
Oracle
SQL commands
- Show currently executing statements & build kill-session statements
select 'alter system kill session '''||b.sid||','||b.serial#||''';' killer, b.sid oracleid, b.username oracle用户, b.serial#, spid 操作系统id, paddr, sql_text 正在执行的sql, b.machine 计算机名 from v$process a, v$session b, v$sqlarea c where a.addr = b.paddr and b.sql_hash_value = c.hash_value;
- Show sessions holding table locks & build kill-session statements
select 'alter system kill session '''||s.sid||','||s.serial#||''';' killer, l.session_id sid, s.serial#, l.locked_mode, l.oracle_username, s.user#, l.os_user_name, s.machine, s.terminal, a.sql_text, a.action from v$sqlarea a, v$session s, v$locked_object l where l.session_id = s.sid and s.prev_sql_addr = a.address order by sid, s.serial#;
- Most CPU-intensive SQL
select * from (select sql_text, buffer_gets, disk_reads, sorts, cpu_time / 1000000 cpu_sec, executions, rows_processed from v$sqlstats order by cpu_time DESC) where rownum < 11;
select * from (select a.sid session_id, a.sql_id, a.status, a.cpu_time / 1000000 cpu_sec, a.buffer_gets, a.disk_reads, b.sql_text sql_text from v$sql_monitor a, v$sql b where a.sql_id = b.sql_id order by a.cpu_time desc) where rownum <= 20;
- SQL execution statistics
select t.inst_id, t.sql_id, t.last_active_time,
       t.sql_profile, --如果该字段有值,就是按固化走执行计划
       t.plan_hash_value, t.sql_fulltext, t.child_number 执行计划版本号,
       trunc((t.cpu_time/t.executions/1000000),4) 每次cpu时间,
       trunc((t.elapsed_time-t.cpu_time)/t.executions/1000000,4) "每次等待时间",
       t.executions 总执行次数,
       --trunc(t.executions/((t.last_active_time-to_date(t.last_load_time,'yyyy/mm/dd hh24:mi:ss'))*86400)) 平均每秒执行次数,
       round(t.rows_processed/t.executions,2) 平均返回行数,
       trunc(t.elapsed_time / t.executions / 1000000,4) "每次执行(秒)",
       trunc((t.buffer_gets / t.executions/1000000),4) 每次逻辑读,
       trunc((t.disk_reads / t.executions/1000000),4) 每次物理读,
       trunc((t.cluster_wait_time/t.executions/1000000),4) 每次集群等待,
       trunc((t.user_io_wait_time/t.executions/1000000),4) 每次io等待,
       trunc((t.application_wait_time/t.executions/1000000),4) 每次应用等待,
       trunc((t.concurrency_wait_time/t.executions/1000000),4) 每次并发等待,
       t.first_load_time 首次硬解析时间, t.last_load_time 上次硬解析时间,
       t.module, t.action, t.parsing_schema_name,
       trunc(t.elapsed_time/1000000,4) "执行时间(秒)",
       trunc(t.cpu_time/1000000,4) cpu时间,
       t.parse_calls 总解析次数, t.loads 硬解析次数,
       t.buffer_gets, t.cluster_wait_time, t.user_io_wait_time,
       t.application_wait_time, t.concurrency_wait_time, t.plan_hash_value
  from gv$sql t
 where t.executions > 0
   --and t.sql_id='cjrpgh8gqybs0'
 order by t.cpu_time desc
- Most buffer gets
select * from ( select sql_fulltext sql, buffer_gets, executions, buffer_gets/executions "gets/exec", hash_value,address,last_active_time from v$sqlarea where buffer_gets > 10000 order by buffer_gets desc) where rownum <= 10 ;
- Most physical reads
select * from ( select sql_fulltext sql, disk_reads, executions, disk_reads/executions "reads/exec", hash_value,address,last_active_time from v$sqlarea where disk_reads > 1000 order by disk_reads desc) where rownum <= 10 ;
- Most executions
select * from ( select substr(sql_text,1,40) sql,sql_fulltext, executions, rows_processed, rows_processed/executions "rows/exec", hash_value,address,last_active_time from v$sqlarea where executions > 100 order by executions desc) where rownum <= 10 ;
- Most shared memory
select * from ( select substr(sql_text,1,40) sql, sharable_mem, executions, hash_value,address,last_active_time from v$sqlarea where sharable_mem > 1048576 order by sharable_mem desc) where rownum <= 10 ;
- Rebuild invalid indexes
select 'alter index '||index_name||' rebuild online;' from user_indexes where status <> 'VALID' and index_name not like '%$$';
- Tablespaces of the current user
select * from user_tablespaces;
- Tablespace usage
select a.tablespace_name, round(a.bytes*1.0/(1024*1024*1024),4) "total(gb)", round(b.bytes*1.0/(1024*1024*1024),4) "used(gb)", round(c.bytes*1.0/(1024*1024*1024),4) "free(gb)", round((b.bytes * 100) / a.bytes,4) "used(%)", round((c.bytes * 100) / a.bytes,4) "free(%)" from sys.sm$ts_avail a,sys.sm$ts_used b,sys.sm$ts_free c where a.tablespace_name = b.tablespace_name and a.tablespace_name = c.tablespace_name;
- Tablespace usage per table
select segment_type,tablespace_name,segment_name, round(bytes*1.0/(1024*1024*1024),4) "gb" from user_segments where segment_type = 'TABLE' order by bytes desc;
Python
Installing/uninstalling Miniconda on Linux
- Download the latest Miniconda
[root@hd101 /]# wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
- Install via bash
[root@hd101 /]# sh Miniconda3-latest-Linux-x86_64.sh
- After installation, close and reopen the terminal, then run the following to verify the install
[root@hd101 /]# conda -V
- Uninstall Miniconda
Delete the installed directories:
[root@hd101 /]# rm -rf /opt/module/miniconda/
[root@hd101 /]# rm -rf /opt/module/anaconda/
Open ~/.bashrc and remove the Miniconda environment variables:
export PATH=" /opt/module/anaconda/bin:$PATH"
export PATH=" /opt/module/miniconda3/bin:$PATH"
Miniconda commands
- Check the version
[root@hd101 /]# conda -V
- Check for and apply conda updates
[root@hd101 /]# conda update conda
- List installed packages
[root@hd101 /]# conda list
- List existing virtual environments
[root@hd101 /]# conda env list
[root@hd101 /]# conda info -e
[root@hd101 /]# conda info --envs
- Create a Python environment
[root@hd101 /]# conda create -n NewEnvName python=3.8
- Activate a Python environment
[root@hd101 /]# conda activate NewEnvName
- Deactivate the environment
[root@hd101 /]# conda deactivate
- Remove a Python environment
[root@hd101 /]# conda remove -n NewEnvName --all
- Install a package into an environment
[root@hd101 /]# conda install -n NewEnvName [package]
- Remove a package from an environment
[root@hd101 /]# conda remove -n NewEnvName [package]
Deploying a Python project
After development, a Python project has to be deployed to production or another environment. To install the project's dependencies quickly, the usual approach is:
[root@hd101 /]# pip freeze > requirements.txt
This exports every package installed with pip, not just the ones the current project needs, which is clearly unnecessary.
The pipreqs package solves this.
- Install pipreqs
[root@hd101 /]# pip install pipreqs
- Run it from the project root
[root@hd101 /]# cd project_name
[root@hd101 /]# pipreqs . --encoding=utf8 --force
- "." : write the generated requirements file into the current directory
- "--encoding=utf8" : write the file as UTF-8, otherwise an error is raised
- "--force" : overwrite requirements.txt if it already exists in the target directory
- In the new environment, run the following in the project root to install all dependencies
[root@hd101 /]# pip install -r requirements.txt
pip mirror sources
Common mirrors in China
Tsinghua University: https://pypi.tuna.tsinghua.edu.cn/simple
University of Science and Technology of China: https://pypi.mirrors.ustc.edu.cn/simple/
Aliyun: http://mirrors.aliyun.com/pypi/simple/
Douban: http://pypi.douban.com/simple/
- Use once (for a single install)
[root@hd101 /]# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pyzipper
- Switch the mirror permanently
Set the mirror globally:
[root@hd101 /]# pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
RFC 2822 / datetime conversion
import pytz
from email.utils import parsedate_to_datetime, format_datetime
shanghai = pytz.timezone('Asia/Shanghai')
# RFC 2822 -> datetime
datetime = parsedate_to_datetime('Mon, 14 Nov 2022 11:25:38 +0800').astimezone(shanghai)
print('datetime:', type(datetime), '\t', datetime)
# datetime -> RFC 2822
rfc_date_str = format_datetime(datetime)
print('rfc_date_str :', type(rfc_date_str), '\t', rfc_date_str)
StreamSets
Deployment and installation
StreamSets official site
StreamSets 3.14.0 installer package (extraction code: yyds)
- Upload the installer package and unpack it
[root@hd101 /]# unzip streamsets-datacollector-3.14.0.zip -d /opt/module/streamsets
- Edit the configuration file sdc-env.sh
[root@hd101 /]# vim /opt/module/streamsets/libexec/sdc-env.sh
- Edit the configuration file sdc.properties
[root@hd101 /]# vim /opt/module/streamsets/etc/sdc.properties
- Start
[root@hd101 /]# nohup /opt/module/streamsets/bin/streamsets dc &
Default login account: admin/admin
- Stop
[root@hd101 /]# ps -ef | grep streamsets | grep -v grep | grep -v /bin/bash | awk '{print$2}' | xargs --no-run-if-empty kill
- Start/stop script (requires the STREAMSETS_HOME environment variable)
#!/bin/sh
if [ $# = 1 ]
then
  if [ $1 = "help" ]
  then
    echo "control the streamsets process, eg:sh streamsets.sh start|stop|restart|status"
  elif [ $1 = "start" ]
  then
    pid=`ps -ef | grep streamsets | grep -v grep | grep -v /bin/bash | grep -v streamsets.sh | awk '{print$2}'`
    if [ "$pid" != "" ]
    then
      echo "streamsets has already running pid is "$pid
    else
      nohup $STREAMSETS_HOME/bin/streamsets dc > $STREAMSETS_HOME/nohup.out 2>&1 &
      echo "streamsets has been start"
    fi
  elif [ $1 = "stop" ]
  then
    pid=`ps -ef | grep streamsets | grep -v grep | grep -v /bin/bash | grep -v streamsets.sh | awk '{print$2}'`
    if [ "$pid" != "" ]
    then
      kill -9 $pid
      echo "streamsets has been stop"
    else
      echo "streamsets has not running"
    fi
  elif [ $1 = "restart" ]
  then
    pid=`ps -ef | grep streamsets | grep -v grep | grep -v /bin/bash | grep -v streamsets.sh | awk '{print$2}'`
    if [ "$pid" != "" ]
    then
      kill -9 $pid
      echo "streamsets has been stop"
      nohup $STREAMSETS_HOME/bin/streamsets dc > $STREAMSETS_HOME/nohup.out 2>&1 &
      echo "streamsets has been start"
    else
      nohup $STREAMSETS_HOME/bin/streamsets dc > $STREAMSETS_HOME/nohup.out 2>&1 &
      echo "streamsets has been start"
    fi
  elif [ $1 = "status" ]
  then
    pid=`ps -ef | grep streamsets | grep -v grep | grep -v /bin/bash | grep -v streamsets.sh | awk '{print$2}'`
    if [ "$pid" != "" ]
    then
      echo "streamsets is running pid is "$pid
    else
      echo "streamsets is not running"
    fi
  else
    echo "WRONG ARGS,USAGE:"
    echo " eg: sh streamsets.sh help"
  fi
else
  echo "WRONG ARGS,USAGE:"
  echo " eg: sh streamsets.sh help"
fi
Git
Common commands
- Install via yum
[root@hd101 /]# yum install -y git
- Sparse checkout
[root@hd101 /]# cd /path/to/my/workspace
[root@hd101 /]# git init
[root@hd101 /]# git config core.sparsecheckout true
[root@hd101 /]# git config --global credential.helper store
[root@hd101 /]# git remote add -f origin https://gitlab.hd123.com/datacenter/remoteRep.git
[root@hd101 /]# echo "fileX" >> .git/info/sparse-checkout
[root@hd101 /]# git pull origin master
1. Enter the workspace directory
2. Create an empty repository
3. Enable sparse-checkout mode in the config
4. Store credentials in the config so pulls do not prompt for a password
5. Fetch all objects from the remote repository without checking them out, and record the remote git server URL in .git/config
6. Define the files/directories to sparse-checkout, e.g. fileX (adding more paths later is sketched below)
7. Check out the master branch; only the defined fileX path is checked out
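To sparse-checkout additional paths later, the pattern file can be extended and re-applied; a minimal sketch, where dirY is a hypothetical path not taken from the setup above:
echo "dirY/" >> .git/info/sparse-checkout   # add another path to the sparse patterns
git read-tree -mu HEAD                      # re-apply the patterns to the working tree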
- Discard local changes and check out
[root@hd101 /]# git fetch --all
[root@hd101 /]# git reset --hard origin/master
[root@hd101 /]# git pull origin master
1. Download objects and refs from the remote repository
2. Reset the current HEAD to the specified state (origin/master)
3. Pull the master branch