Installing CentOS for Production Business Servers



Make sure virtualization is enabled and a network cable is connected, and prepare a CentOS 7.2 image (the minimal ISO is recommended). Then proceed as follows:

1. Installing the CentOS system (CentOS 7.2 as the example)

1.1 Name the network interfaces with the "eth" prefix

On CentOS 7, network interface names default to the "ens" prefix. To keep interface names uniform, we change them to start with "eth".

1. At the boot menu, press "Tab" and append "net.ifnames=0 biosdevname=0" to the kernel line (see the installer screenshot).
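The Tab-key edit only affects the installer's own boot. For the names to survive on the installed system, the same parameters must also end up in GRUB_CMDLINE_LINUX, followed by regenerating grub.cfg. A minimal sketch, demonstrated against a temporary copy so nothing on a live system is touched:

```shell
# Sketch: append net.ifnames=0 biosdevname=0 to GRUB_CMDLINE_LINUX.
# Demonstrated on a temp copy; on the installed system edit /etc/default/grub
# and then regenerate the config with: grub2-mkconfig -o /boot/grub2/grub.cfg
grub=$(mktemp)
printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > "$grub"
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 net.ifnames=0 biosdevname=0"/' "$grub"
cat "$grub"
```

(The Anaconda installer normally carries the boot parameters into the installed GRUB config by itself; the sketch is for checking or fixing it afterwards.)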


1.2 Choose the English interface and add the Chinese language pack

Installing the English version of the system tends to be more stable. The Chinese language pack is added because, in production, logs sometimes show garbled characters or menus render oddly; these problems come from the Chinese language pack being missing.


1.3 Change the time zone to "Shanghai"


1.4 Choose the "Minimal Install" option


1.5 Create partitions

  • Note: in production, running out of memory is handled by adding RAM. A "SWAP" partition on a mechanical disk simulates memory poorly (an SSD fares somewhat better), so SWAP is of little use here and we do not create one.


1.6 Confirm that the interface names start with "eth"


1.7 Start the installation


1.8 Set the root password and create a regular user


1.9 Wait for the installation to finish, then reboot as prompted


1.10 Log in as root with the password to enter the system


Note: for any installer screen not covered above, keep the default settings.

2. Optimizing the CentOS configuration

2.1 Edit the network configuration file, install the "net-tools" package, and check the network

1. In the eth0 configuration file, set "ONBOOT" to "yes" so that eth0 comes up automatically at boot
[root@centos7 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes
[root@centos7 ~]# 

2. Restart the network service
[root@centos7 ~]# systemctl restart network
[root@centos7 ~]# 

3. Install the "net-tools" package
[root@centos7 ~]# yum -y install net-tools

2.2 Check the IP address and connect with a remote terminal

Check the IP address
[root@centos7 ~]# ifconfig
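Note that a minimal install already ships iproute2, so the address can also be read with `ip` even before net-tools is installed; a small sketch:

```shell
# List IPv4 addresses per interface using iproute2 (no net-tools needed)
ip -4 -o addr show | awk '{print $2, $4}'
```

`ifconfig` output is easier to read by eye; the `ip` form is the one guaranteed to exist on a fresh minimal system.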

2.3 Permanently disable the firewall, SELinux, and NetworkManager services

1. Permanently disable the NetworkManager service
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager

2. Permanently disable SELinux
[root@localhost ~]# vi /etc/sysconfig/selinux
Change "SELINUX=enforcing" to "SELINUX=disabled"
[root@localhost ~]# 
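The same edit can be scripted with sed instead of vi. A sketch, run here against a temp copy (on a real CentOS 7 host the canonical file is /etc/selinux/config, to which /etc/sysconfig/selinux is a symlink):

```shell
# Flip SELINUX from enforcing to disabled non-interactively.
# Demonstrated on a temp copy; point sed at /etc/selinux/config on a real host.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```

The change takes effect after the reboot in step 4; until then, `setenforce 0` would only switch to permissive mode for the running session.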

3. Permanently disable the firewall service
The firewall is occasionally needed (e.g. for web services), so install it first, then disable it permanently, and re-enable it when required.

(1) Install the firewall
[root@localhost ~]# yum -y install firewalld

(2) Permanently disable the firewall
[root@localhost ~]# systemctl disable firewalld

4. Reboot the system
[root@localhost ~]# reboot

2.4 Change the yum repository

The default yum repository points at the official CentOS servers, which are slow to reach, so it should be replaced. Edit "/etc/yum.repos.d/CentOS-Base.repo" to use your company's repository, or a commonly used public mirror (such as Aliyun).
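One common approach (sketched here on a temp copy, and assuming the mirror follows the standard CentOS directory layout, as mirrors.aliyun.com does) is to comment out the stock mirrorlist and add a baseurl pointing at the faster mirror:

```shell
# Point the Base repo at a mirror: comment mirrorlist=, add a baseurl=.
# Shown on a temp copy of a minimal stanza; the real file is
# /etc/yum.repos.d/CentOS-Base.repo (back it up first).
repo=$(mktemp)
cat > "$repo" <<'EOF'
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
EOF
sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
       -e '/^#mirrorlist=/a baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/' "$repo"
grep '^baseurl=' "$repo"
```

After editing the real file, refresh the metadata with `yum clean all && yum makecache`.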

2.5 Install commonly used base packages

[root@localhost ~]# yum install vim iotop bc gcc gcc-c++ glibc glibc-devel pcre \
> pcre-devel openssl openssl-devel zip unzip zlib-devel net-tools \
> lrzsz tree ntpdate telnet lsof tcpdump wget libevent libevent-devel \
> systemd-devel bash-completion traceroute -y

Important:
The EPEL repository is deliberately not installed here. If this machine will be used for OpenStack, do not add EPEL, because it conflicts with the OpenStack repositories;
if the machine is not for OpenStack, EPEL may be installed.

2.6 Tune kernel parameters

1. Check the tuning files
[root@localhost ~]# ll
-rw-r--r--  1 root root 2895 Jun 18 18:51 limits.conf
-rw-r--r--  1 root root 2232 Jun 18 18:51 sysctl.conf
[root@localhost ~]#

2. Replace the system's kernel configuration files with the tuned ones
[root@localhost ~]# mv sysctl.conf /etc/sysctl.conf 
mv: overwrite ‘/etc/sysctl.conf’? y
[root@localhost ~]#
[root@localhost ~]# mv limits.conf /etc/security/limits.conf 
mv: overwrite ‘/etc/security/limits.conf’? y
[root@localhost ~]#

3. Review the tuned "sysctl.conf"

[root@localhost ~]# cat /etc/sysctl.conf 
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# TCP kernel paramater
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1

# socket buffer
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 20480
net.core.optmem_max = 81920

# TCP conn
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15

# tcp conn reuse
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 1

net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1

# keepalive conn
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001    65000

# swap
vm.overcommit_memory = 0
vm.swappiness = 10

#net.ipv4.conf.eth1.rp_filter = 0
#net.ipv4.conf.lo.arp_ignore = 1
#net.ipv4.conf.lo.arp_announce = 2
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2

[root@localhost ~]#
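After the file is in place, `sysctl -p /etc/sysctl.conf` (as root) loads it; individual values can then be spot-checked without root, since every sysctl key is mirrored under /proc/sys with dots replaced by slashes. One caveat worth flagging: `tcp_tw_recycle` is known to break clients behind NAT and was removed entirely in kernel 4.12, so consider leaving it at 0.

```shell
# Spot-check kernel parameters via /proc/sys (no root, no extra tools).
# Loading the file itself needs root: sysctl -p /etc/sysctl.conf
cat /proc/sys/vm/swappiness           # expect 10 once the tuned file is loaded
cat /proc/sys/net/ipv4/tcp_syncookies
```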

4. Review the tuned "limits.conf"
[root@localhost ~]# cat /etc/security/limits.conf 
# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means for example that setting a limit for wildcard domain here
#can be overriden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overriden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - a user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open file descriptors
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

# End of file

*                soft    core            unlimited
*                hard    core            unlimited
*                soft    nproc           1000000
*                hard    nproc           1000000
*                soft    nofile          1000000
*                hard    nofile          1000000
*                soft    memlock         32000
*                hard    memlock         32000
*                soft    msgqueue        8192000
*                hard    msgqueue        8192000
[root@localhost ~]#
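Because limits.conf is applied by PAM at login, the new limits appear in a fresh login session (or after the reboot below), not in the current shell. They can be checked with the shell builtin:

```shell
# Show the limits the current session actually got
ulimit -n   # open files (nofile)
ulimit -u   # max user processes (nproc)
```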

2.7 Change the hostname

[root@localhost ~]# vim /etc/hostname
centos7
[root@localhost ~]# 
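On CentOS 7 the same change can be made without editing the file: `hostnamectl set-hostname centos7` (as root) rewrites /etc/hostname and applies the name immediately. Either way, the active name can be confirmed with:

```shell
# Confirm the currently active hostname
hostname
```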

Reboot the system to apply all of the above: [root@localhost ~]# reboot

At this point, the system is ready for production use.

Extensions

I. Synchronize the time

II. Switch to a static IP address

III. For NIC bonding with a bridge, install the "bridge-utils" package

The bridge can be managed on the fly with the "brctl" command (which requires the "bridge-utils" package); a permanent bond-plus-bridge setup is configured through the interface configuration files.

Reference: Linux dual-NIC bonding and bridging

Linux supports seven bonding modes for dual NICs. Most enterprises now attach servers with two uplinks, which adds bandwidth and provides redundancy at the same time, so the benefits are considerable. Enterprises generally use the bonding modes built into Linux; NIC vendors also ship Windows management software for bonding (Windows has no built-in NIC bonding and needs third-party support). Of the seven modes, 0, 1, and 6 are the most commonly used:

Prerequisite for the bridged-bonding setup below: install the "bridge-utils" package
[root@centos7 ~]# yum -y install bridge-utils

1: Bonding example: create the bond first, then bridge the bonded interface:

1.1: First group: bind eth1 and eth5 into bond0:

1.1.1: Create the bond0 configuration file; steps and contents as follows:

[root@linux-host1 ~]# cd /etc/sysconfig/network-scripts/
[root@linux-host1 network-scripts]# cp ifcfg-eth0   ifcfg-bond0
[root@linux-host1 network-scripts]# cat ifcfg-bond0 # contents as follows:
BOOTPROTO=static
NAME=bond0
DEVICE=bond0
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100" # bonding mode 1 (active-backup) and link-monitor interval in ms
BRIDGE=br0 # attach to bridge br0

1.1.2: Configure br0:

TYPE=Bridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=X.X.X.X
NETMASK=255.255.255.0
GATEWAY=X.X.X.X

1.1.3: Configure eth1:

[root@linux-host1 network-scripts]# vim ifcfg-eth1
BOOTPROTO=static
NAME=eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes

1.1.4: Configure eth5:

[root@linux-host1 network-scripts]# cp ifcfg-eth1  ifcfg-eth5
[root@linux-host1 network-scripts]# vim ifcfg-eth5
BOOTPROTO=static
NAME=eth5
DEVICE=eth5
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes
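Since the two slave files differ only in NAME and DEVICE, they can also be generated in one loop instead of copy-and-edit. A sketch that writes into a temp directory (the real target is /etc/sysconfig/network-scripts):

```shell
# Generate identical bond0 slave configs for eth1 and eth5 in one pass.
# Written to a temp dir here; use /etc/sysconfig/network-scripts for real.
dir=$(mktemp -d)
for nic in eth1 eth5; do
  cat > "$dir/ifcfg-$nic" <<EOF
BOOTPROTO=static
NAME=$nic
DEVICE=$nic
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes
EOF
done
grep '^DEVICE=' "$dir"/ifcfg-eth*
```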

1.1.5: Restart the network service:

[root@linux-host1 network-scripts]# systemctl  restart network

1.1.6: Verify that the network works:

[root@linux-host1 network-scripts]# ping www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=128 time=6.17 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=128 time=10.3 ms
64 bytes from 61.135.169.125: icmp_seq=3 ttl=128 time=5.36 ms
64 bytes from 61.135.169.125: icmp_seq=4 ttl=128 time=6.74 ms
64 bytes from 61.135.169.125: icmp_seq=5 ttl=128 time=5.71 ms

1.1.7: You can check which NIC the bond is currently running on:

[root@linux-host1 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1 # the slave currently carrying traffic
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 18:66:da:f3:34:e5
Slave queue ID: 0

Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:99:ba:d1
Slave queue ID: 0
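For scripting (e.g. a monitoring check), the active slave can be extracted from that status file with awk; sketched here against an inline sample rather than a live /proc/net/bonding/bond0:

```shell
# Extract the currently active slave from bonding status output.
# Uses an inline sample; on a real host read /proc/net/bonding/bond0.
active=$(awk -F': ' '/^Currently Active Slave/ {print $2}' <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1
MII Status: up
EOF
)
echo "$active"
```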

1.2: Second group: bind eth2 and eth6 into bond1:

1.2.1: Create the bond1 configuration file:

[root@linux-host1 network-scripts]# cp ifcfg-bond0  ifcfg-bond1
[root@linux-host1 network-scripts]# vim ifcfg-bond1
BOOTPROTO=static
NAME=bond1
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br1

1.2.2: Configure br1:

TYPE=Bridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br1
DEVICE=br1
ONBOOT=yes
IPADDR=X.X.X.X
NETMASK=255.255.255.0
GATEWAY=X.X.X.X
DNS1=X.X.X.X

1.2.3: Configure eth2:

[root@linux-host1 network-scripts]# vim ifcfg-eth2
BOOTPROTO=static
NAME=eth2
DEVICE=eth2
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond1
USERCTL=no
SLAVE=yes

1.2.4: Configure eth6:

[root@linux-host1 network-scripts]# vim ifcfg-eth6
BOOTPROTO=static
NAME=eth6
DEVICE=eth6
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond1
USERCTL=no
SLAVE=yes

1.2.5: Restart the network service:

[root@linux-host1 network-scripts]# systemctl  restart network

1.2.6: Test that the internal network works:

[root@linux-host1 network-scripts]# ping 192.168.20.12
PING 192.168.20.12 (192.168.20.12) 56(84) bytes of data.
64 bytes from 192.168.20.12: icmp_seq=1 ttl=64 time=1.86 ms
64 bytes from 192.168.20.12: icmp_seq=2 ttl=64 time=0.570 ms
64 bytes from 192.168.20.12: icmp_seq=3 ttl=64 time=0.410 ms

1.3: Enable at boot:

[root@linux-host1 network-scripts]# vim /etc/rc.d/rc.local
ifenslave bond0 eth1 eth5
ifenslave bond1 eth2 eth6
[root@linux-host1 network-scripts]# chmod  a+x /etc/rc.d/rc.local

1.4: Reboot the system and verify the network.