1. Deployment plan

IP             Minimum spec   Role         Services
192.168.6.10   -              virtual IP   -
192.168.6.11   2C/1.7G        master01     keepalived, haproxy
192.168.6.12   2C/1.7G        master02     keepalived, haproxy
192.168.6.13   2C/1.7G        master03     keepalived, haproxy
192.168.6.14   -              server01     work node
192.168.6.15   -              server02     work node

2. Deploy haproxy

Location: master01, master02, master03

Install:

[root@master01 haproxy]# yum install -y haproxy

Important: because haproxy runs on the master nodes, its listen port is changed to 9443 to avoid conflicting with the kube-apiserver on 6443. (On Alibaba Cloud you can simply use an SLB for load balancing and high availability instead.)

Configuration file:

[root@master01 haproxy]# cat haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the listen and backend sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend kubernetes-apiserver
    mode tcp
    bind *:9443                   # listen on port 9443
    # bind *:443 ssl              # To be completed....
    acl url_static path_beg  -i /static /images /javascript /stylesheets
    acl url_static path_end  -i .jpg .gif .png .css .js
    default_backend kubernetes-apiserver

backend kubernetes-apiserver
    mode    tcp                   # tcp mode
    balance roundrobin            # round-robin load-balancing algorithm
    # k8s-apiservers backend: the apiservers, port 6443
    server master-192.168.6.11 192.168.6.11:6443 check
    server master-192.168.6.12 192.168.6.12:6443 check
    server master-192.168.6.13 192.168.6.13:6443 check

Start haproxy and check that it came up:

[root@master03 haproxy]# systemctl enable haproxy && systemctl start haproxy
[root@master03 haproxy]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   861/sshd
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   952/master
tcp        0      0 0.0.0.0:9443       0.0.0.0:*          LISTEN   3745/haproxy
tcp6       0      0 :::22              :::*               LISTEN   861/sshd
tcp6       0      0 ::1:25             :::*               LISTEN   952/master

4. Deploy keepalived

Location:
master01, master02, master03

Install:

[root@master01 haproxy]# yum install -y keepalived

Configuration file:

global_defs {
    script_user root
    enable_script_security
    notification_email {
        xxxx@qq.com
    }
}

vrrp_script chk_haproxy {
    script "/bin/bash -c 'if [[ $(netstat -nlp | grep 9443) ]]; then exit 0; else exit 1; fi'"   # haproxy health check
    interval 2      # run the check every 2 seconds
    weight 11       # weight change on check result
}

vrrp_instance VI_1 {
    interface ens32
    state MASTER                # set to BACKUP on the backup nodes
    virtual_router_id 51        # must be identical on all nodes: they form one virtual router group
    priority 100                # initial priority; use a smaller value on the BACKUP nodes
    nopreempt                   # must be configured, otherwise failover will not happen when the master node goes down
    unicast_peer {
    }
    virtual_ipaddress {
        192.168.6.10            # the VIP
    }
    authentication {
        auth_type PASS
        auth_pass password123
    }
    track_script {
        chk_haproxy
    }
}

Notes:

(1) The master and backup nodes differ in two settings:
state MASTER      # set to BACKUP on the backup nodes
priority 100      # initial priority; use a smaller value on the BACKUP nodes

(2) Watch out for firewall rules causing keepalived split-brain.
Option 1: flush iptables:
iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X
Option 2: open the needed traffic:
iptables -A INPUT -s 192.168.6.0/24 -d 224.0.0.18 -j ACCEPT   # allow the multicast address
iptables -A INPUT -s 192.168.6.0/24 -p vrrp -j ACCEPT         # allow VRRP (Virtual Router Redundancy Protocol) traffic

Start keepalived and check that it came up:

[root@master01 keepalived]# systemctl enable keepalived && systemctl start keepalived
[root@master01 keepalived]# ps aux | grep keepalived
root  28985  0.0  0.1 120792 1416 ?     Ss  10:58  0:00 /usr/sbin/keepalived -D
root  28986  0.0  0.3 127532 3312 ?     S   10:58  0:00 /usr/sbin/keepalived -D
root  28987  0.1  0.3 131836 3144 ?     S   10:58  0:01 /usr/sbin/keepalived -D
root  30987  0.0  0.0 112824  992 pts/1 S   11:11  0:00 grep --color=auto keepalived
[root@master01 keepalived]# ip a | grep 192.168.6.10
    inet 192.168.6.10/32 scope global ens32

5. Pre-install tuning for the k8s components

(1) Add hosts entries:

cat >> /etc/hosts <<EOF
192.168.6.11 master01
192.168.6.12 master02
192.168.6.13 master03
192.168.6.14 server01
192.168.6.15 server02
EOF

(2) Adjust the firewall, selinux and NetworkManager

Location: all nodes

iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X
systemctl stop NetworkManager && systemctl disable NetworkManager
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

(3) Disable swap

Location: all nodes

# temporarily
swapoff -a
# permanently
sed -i.bak '/swap/s/^/#/' /etc/fstab

(4) Set the kernel parameter bridge-nf-call-iptables=1

Location: all nodes

# check the br_netfilter module
lsmod | grep br_netfilter
# load it for the current boot
modprobe br_netfilter
# load it permanently
cat > /etc/rc.sysinit <<'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x $file ] && $file
done
EOF
cat > /etc/sysconfig/modules/br_netfilter.modules <<EOF
modprobe br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/br_netfilter.modules

(5) Set the kernel parameters

Location: all nodes

cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
kernel.sysrq = 1
net.ipv4.neigh.default.gc_stale_time = 120
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_slow_start_after_idle = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
EOF

# load the configuration so it takes effect
sysctl -p /etc/sysctl.d/k8s.conf

(6) Enable ipvs for kube-proxy

Location: all nodes

# install the ipvs userspace tools
yum -y install ipset ipvsadm

# enable ipvs support in the kernel
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

# verify that ipvs support is enabled
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

6. Install and tune docker

Location: all nodes

(1) Set up the yum repos

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
wget https://download.docker.com/linux/centos/docker-ce.repo -P /etc/yum.repos.d/

(2) Install docker

# list the available versions
yum list docker-ce --showduplicates | sort -r

# install the dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# install docker (20.10.12 is the version seen later in the kubeadm preflight output)
yum install -y docker-ce-3:20.10.12

(3) Command completion

yum -y install bash-completion
source /etc/profile.d/bash_completion.sh

(4) Registry mirror

Because the k8s bootstrap pulls images from abroad, the Aliyun registry mirror can be used to speed this up:

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF

(5) Change the cgroup driver

Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:

cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

(6) Start the service

systemctl daemon-reload
systemctl enable docker
systemctl restart docker

7. Install k8s

(1) Check the versions

[root@master01 ~]# yum list kubelet --showduplicates | sort -r | grep 1.20
Repository base is listed more than once in the configuration
kubelet.x86_64    1.20.9-0     kubernetes
kubelet.x86_64    1.20.8-0     kubernetes
kubelet.x86_64    1.20.7-0     kubernetes
kubelet.x86_64    1.20.6-0     kubernetes
kubelet.x86_64    1.20.5-0     kubernetes
kubelet.x86_64    1.20.4-0     kubernetes
kubelet.x86_64    1.20.2-0     kubernetes
kubelet.x86_64    1.20.15-0    kubernetes
kubelet.x86_64    1.20.14-0    kubernetes
kubelet.x86_64    1.20.13-0    kubernetes
kubelet.x86_64    1.20.12-0    kubernetes
kubelet.x86_64    1.20.11-0    kubernetes
kubelet.x86_64    1.20.1-0     kubernetes
kubelet.x86_64    1.20.10-0    kubernetes
kubelet.x86_64    1.20.0-0     kubernetes

(2) Install the desired version

Location: all nodes

yum install -y kubelet-1.20.10 kubeadm-1.20.10 kubectl-1.20.10

Notes:
kubelet   runs on every node of the cluster; starts Pods, containers and other objects
kubeadm   initializes and bootstraps the cluster
kubectl   the CLI for talking to the cluster: deploy and manage applications, inspect all kinds of resources, create/delete/update components

(3) Start kubelet on boot

Location: all nodes

systemctl enable kubelet && systemctl start kubelet

# set up command completion
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile

(4) Pre-pull the bootstrap images

Location: all nodes

cat image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.20.10
images=$(kubeadm config images list --kubernetes-version=$version | awk -F'/' '{print $2}')
for imagename in ${images[@]} ; do
    docker pull $url/$imagename
    docker tag  $url/$imagename k8s.gcr.io/$imagename
    docker rmi -f $url/$imagename
done

sh -x image.sh
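The awk stage in image.sh above is what strips the registry prefix from each image reference; it can be tried in isolation. A minimal sketch (the `strip_registry` name is hypothetical, introduced here only for illustration):

```shell
#!/bin/bash
# Hypothetical helper, same extraction as image.sh above: keep only the
# name:tag part of each image reference read from stdin, dropping the
# registry prefix (assumes single-level names, as in the 1.20 image list).
strip_registry() {
  awk -F'/' '{print $2}'
}

# Example input shaped like `kubeadm config images list` output:
printf 'k8s.gcr.io/kube-proxy:v1.20.10\nk8s.gcr.io/pause:3.2\n' | strip_registry
```

Note that the `-F'/'` split keeps only the second path segment, so image names with nested paths (e.g. `coredns/coredns` in newer releases) would need a different split.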
# check the downloaded images
docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.10   945c9bce487a   6 months ago    99.7MB
k8s.gcr.io/kube-apiserver            v1.20.10   644cadd07add   6 months ago    122MB
k8s.gcr.io/kube-controller-manager   v1.20.10   2f450864515d   6 months ago    116MB
k8s.gcr.io/kube-scheduler            v1.20.10   4c9be8dc650b   6 months ago    47.3MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   17 months ago   253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   20 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   2 years ago     683kB

(5.1) Initialize the cluster from a config file (recommended: this is the way to permanently set the kube-proxy mode to ipvs)

# generate the initial config file
[root@master01 k8s]# kubeadm config print init-defaults > kubeadm-init.yaml

# the edited config file
[root@master01 k8s]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.6.11        # change to the IP of one of the master nodes
  bindPort: 6443                        # 6443 (the default)
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule                  # adjust as needed; in production the masters usually run no business workloads
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: 192.168.6.10:9443   # new entry: the SLB or virtual IP address and port
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io               # can be switched to a domestic mirror such as registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.10               # set to the version being installed, v1.20.10 here
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.10.0.0/16             # adjust as needed
  podSubnet: 10.144.0.0/16                # new entry: the pod IP range, adjust as needed
scheduler: {}
---
# everything below this line is new: set the kube-proxy mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
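Before running kubeadm init it is cheap to confirm that the appended KubeProxyConfiguration block survived the edit. A minimal sketch, assuming GNU grep and the layout shown above (`mode:` within two lines of `kind:`); the `has_ipvs_mode` helper is hypothetical:

```shell
#!/bin/bash
# Hypothetical check: does a kubeadm config file set kube-proxy to ipvs?
# Relies on GNU grep's -A option and on "mode:" sitting within two lines
# of "kind: KubeProxyConfiguration", as in the file above.
has_ipvs_mode() {
  grep -A2 'kind: KubeProxyConfiguration' "$1" | grep -q 'mode: ipvs'
}

# Example against a throwaway file:
y=$(mktemp)
printf 'apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: ipvs\n' > "$y"
has_ipvs_mode "$y" && echo "ipvs mode set"
rm -f "$y"
```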
Initialize:

[root@master01 k8s]# kubeadm init --config ./kubeadm-init.yaml --upload-certs
[init] Using Kubernetes version: v1.20.10
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.10.0.1 192.168.6.11 192.168.6.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.6.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.6.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.048763 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
1fbd8452dd60d3803b395f08fdcd8b9c88e2b72e8451963cdfa1975229006a43
[mark-control-plane] Marking the node master01 as control-plane by adding the labels "node-role.kubernetes.io/master" and "node-role.kubernetes.io/control-plane (deprecated)"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.6.10:9443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:fcc54a23d35c8eb3baf59290a3178f0656b573c3d0553fd3f9085ae1c9648bab \
    --control-plane --certificate-key 1fbd8452dd60d3803b395f08fdcd8b9c88e2b72e8451963cdfa1975229006a43

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.6.10:9443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:fcc54a23d35c8eb3baf59290a3178f0656b573c3d0553fd3f9085ae1c9648bab

(5.2) Initialize the cluster from the command line (not recommended: this way the kube-proxy mode cannot be permanently set to ipvs, only changed temporarily afterwards)

Location: any one master node

kubeadm init --kubernetes-version=1.20.10 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.144.0.0/16 --control-plane-endpoint=192.168.6.10:9443 --upload-certs

Note: for a single-master setup, initialize without the following flags:
--control-plane-endpoint=192.168.6.10 --upload-certs
Output:

[root@master01 ~]# kubeadm init --kubernetes-version=1.20.10 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.144.0.0/16 --control-plane-endpoint=192.168.6.10:9443 --upload-certs
[init] Using Kubernetes version: v1.20.10
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.10.0.1 192.168.6.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.6.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.6.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.013101 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
8a6b2052e2d717628c1d9a8ff9404da9748194848acee56162d604fa40fb2f3a
[mark-control-plane] Marking the node master01 as control-plane by adding the labels "node-role.kubernetes.io/master" and "node-role.kubernetes.io/control-plane (deprecated)"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lry74b.lek8xpwhfofkslxm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm \
    --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129 \
    --control-plane --certificate-key 8a6b2052e2d717628c1d9a8ff9404da9748194848acee56162d604fa40fb2f3a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm \
    --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129

(6) Add the nodes

Location: the master node where the initialization ran

Set up passwordless ssh login:

[root@master01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:AXI6qrQrDrOB9ApII4MqogKl5678m0CVpCpQXOa4f4Q root@master02
The key's randomart image is:
+---[RSA 2048]----+
(randomart omitted)
+----[SHA256]-----+
[root@master01 ~]# for i in 12 13 14 15 16; do ssh-copy-id root@192.168.6.$i; done
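The ssh-copy-id loop above expands host suffixes inline; the same expansion can be factored into a tiny function if it is needed in more than one place. A minimal sketch (the `node_ips` helper is hypothetical, for illustration only):

```shell
#!/bin/bash
# Hypothetical helper: expand last-octet suffixes into the full
# 192.168.6.x addresses used by the ssh-copy-id loop above.
node_ips() {
  local i
  for i in "$@"; do
    echo "192.168.6.$i"
  done
}

# Example reuse:
#   for ip in $(node_ips 12 13 14 15 16); do ssh-copy-id root@$ip; done
node_ips 12 13 14 15 16
```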
Sync the certificates from the node where the initialization ran to the other master nodes.

Sync script:

[root@master01 ~]# cat scp.sh
CONTROL_PLANE_IPS=$1
USER=root
dir=/etc/kubernetes/pki
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:$dir
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:${dir}/etcd
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:${dir}/etcd
done

Sync the certificates to the other master nodes:

[root@master01 ~]# for i in 12 13; do sh scp.sh 192.168.6.$i; done

Add the master nodes

Location: the other master nodes

kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm \
    --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129 \
    --control-plane --certificate-key 8a6b2052e2d717628c1d9a8ff9404da9748194848acee56162d604fa40fb2f3a

Add the worker nodes

Location: the server nodes

kubeadm join 192.168.6.10:9443 --token lry74b.lek8xpwhfofkslxm \
    --discovery-token-ca-cert-hash sha256:6a79bad34c640800f3defc739049a6382994c65077b971d93f15870aeeee8129

Check the result:

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES                  AGE     VERSION
master01   NotReady   control-plane,master   124m    v1.20.10
master02   NotReady   control-plane,master   10m     v1.20.10
master03   NotReady   control-plane,master   10m     v1.20.10
server01   NotReady   <none>                 3m40s   v1.20.10
server02   NotReady   <none>                 79s     v1.20.10

The nodes show NotReady because no network add-on has been installed yet.

8. Install the calico network plugin
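Node readiness has to be re-checked after each join, and again after the network plugin is installed, so the check is worth scripting. A minimal sketch over `kubectl get nodes` output (the `count_not_ready` helper is hypothetical):

```shell
#!/bin/bash
# Hypothetical helper: count nodes whose STATUS column is not "Ready"
# in `kubectl get nodes` output read from stdin.
count_not_ready() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
}

# On a live cluster:  kubectl get nodes | count_not_ready
```

A result of 0 means every node has joined and the network add-on is working.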
Background: the CoreDNS containers failed to start; their events show:

Normal   Scheduled               3m17s  default-scheduler  Successfully assigned kube-system/coredns-74ff55c5b-7jzlr to master01
Warning  FailedCreatePodSandBox  3m14s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "a1a7b574f6073e766d7744ed29da60f9b3fe8dbb8cda2d83d89e720f6a78760e" network for pod "coredns-74ff55c5b-7jzlr": networkPlugin cni failed to set up pod "coredns-74ff55c5b-7jzlr_kube-system" network: error getting ClusterInformation: Get https://[10.10.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), failed to clean up sandbox container "a1a7b574f6073e766d7744ed29da60f9b3fe8dbb8cda2d83d89e720f6a78760e" network for pod "coredns-74ff55c5b-7jzlr": networkPlugin cni failed to teardown pod "coredns-74ff55c5b-7jzlr_kube-system" network: error getting ClusterInformation: Get https://[10.10.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]

(1) Download calico.yaml and the images

Release download page: https://github.com/projectcalico/calico/releases
Find the release-<version>.tgz archive for the chosen release.

[root@master01 calico]# wget https://github.com/projectcalico/calico/releases/download/v3.20.3/release-v3.20.3.tgz
[root@master01 calico]# tar xzvf release-v3.20.3.tgz
[root@master01 calico]# ls release-v3.20.3
bin  images  k8s-manifests  README

Location of the yaml file after extraction:

[root@master01 calico]# cat release-v3.20.3/k8s-manifests/calico.yaml | grep image
          image: docker.io/calico/cni:v3.20.3
          image: docker.io/calico/cni:v3.20.3
          image: docker.io/calico/pod2daemon-flexvol:v3.20.3
          image: docker.io/calico/node:v3.20.3
          image: docker.io/calico/kube-controllers:v3.20.3

Find the image tarballs referenced by calico.yaml inside the extracted archive:

[root@master01 calico]# ls release-v3.20.3/images | egrep 'cni|pod2daemon-flexvol|node|kube-controllers'
calico-cni.tar
calico-kube-controllers.tar
calico-node.tar
calico-pod2daemon-flexvol.tar
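Before pushing the archives to the other nodes, it is worth confirming that all four tarballs named by calico.yaml are actually present. A minimal sketch (the `check_calico_tars` helper is hypothetical; the file names are the ones listed above):

```shell
#!/bin/bash
# Hypothetical helper: verify the four calico image tarballs exist in a
# directory (names taken from the `ls release-v3.20.3/images` output above).
check_calico_tars() {
  local dir=$1 f rc=0
  for f in calico-cni.tar calico-kube-controllers.tar \
           calico-node.tar calico-pod2daemon-flexvol.tar; do
    [ -f "$dir/$f" ] || { echo "missing: $f"; rc=1; }
  done
  return $rc
}

# Example against a throwaway directory:
d=$(mktemp -d)
touch "$d"/calico-{cni,kube-controllers,node,pod2daemon-flexvol}.tar
check_calico_tars "$d" && echo "all image tarballs present"
rm -rf "$d"
```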
Sync the images to all the nodes:

[root@master01 calico]# for i in `ls release-v3.20.3/images | egrep 'cni|pod2daemon-flexvol|node|kube-controllers'`; do
    for j in 12 13 14 15; do
        scp release-v3.20.3/images/$i root@192.168.6.$j:/root/
    done
done

Load the images on every node:

# on master01
for i in `ls release-v3.20.3/images | egrep 'cni|pod2daemon-flexvol|node|kube-controllers'`; do
    docker load -i release-v3.20.3/images/$i
done
# on the other nodes
for i in `ls /root | egrep 'cni|pod2daemon-flexvol|node|kube-controllers'`; do
    docker load -i /root/$i
done

(2) Create the calico resources

[root@master01 calico]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

(3) Check again: everything is now Ready

[root@master01 calico]# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   22h   v1.20.10
master02   Ready    control-plane,master   20h   v1.20.10
master03   Ready    control-plane,master   20h   v1.20.10
server01   Ready    <none>                 20h   v1.20.10
server02   Ready    <none>                 20h   v1.20.10
[root@master01 calico]# kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-9bcc567d6-4kq2g   1/1     Running   0          3m21s
kube-system   calico-node-25h7v                         1/1     Running   0          3m20s
kube-system   calico-node-drtxw                         1/1     Running   0          3m20s
kube-system   calico-node-nknc5                         1/1     Running   0          3m20s
kube-system   calico-node-svk87                         1/1     Running   0          3m21s
kube-system   calico-node-xvc29                         1/1     Running   0          3m20s
kube-system   coredns-74ff55c5b-7jzlr                   1/1     Running   0          5m45s
kube-system   coredns-74ff55c5b-jdwwh                   1/1     Running   0          5m45s
kube-system   etcd-master01                             1/1     Running   0          5m52s
kube-system   etcd-master02                             1/1     Running   0          4m51s
kube-system   etcd-master03                             1/1     Running   0          4m29s
kube-system   kube-apiserver-master01                   1/1     Running   0          5m52s
kube-system   kube-apiserver-master02                   1/1     Running   0          4m52s
kube-system   kube-apiserver-master03                   1/1     Running   0          3m24s
kube-system   kube-controller-manager-master01          1/1     Running   2          5m52s
kube-system   kube-controller-manager-master02          1/1     Running   0          4m52s
kube-system   kube-controller-manager-master03          1/1     Running   0          3m23s
kube-system   kube-proxy-ct6bh                          1/1     Running   0          3m49s
kube-system   kube-proxy-hk5rd                          1/1     Running   0          4m1s
kube-system   kube-proxy-s5swf                          1/1     Running   0          4m52s
kube-system   kube-proxy-sjzl8                          1/1     Running   0          3m53s
kube-system   kube-proxy-xglsf                          1/1     Running   0          5m45s
kube-system   kube-scheduler-master01                   1/1     Running   2          5m52s
kube-system   kube-scheduler-master02                   1/1     Running   0          4m51s
kube-system   kube-scheduler-master03                   1/1     Running   0          3m13s

Appendix 1: adjusting kube-proxy

If the cluster was initialized from the command line, change the default kube-proxy mode to ipvs:

kubectl edit configmap kube-proxy -n kube-system
# change mode: "" to mode: "ipvs"

Then restart kube-proxy:

kubectl get pod -A | grep kube-proxy | awk '{print $2}' | xargs kubectl delete pod -n kube-system
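The restart one-liner above selects the pod names with grep and awk; that extraction step can be exercised offline. A minimal sketch (the `kube_proxy_pods` helper is hypothetical) over `kubectl get pod -A` output on stdin:

```shell
#!/bin/bash
# Hypothetical helper: print the NAME column (2nd field) of every
# kube-proxy row in `kubectl get pod -A` output read from stdin --
# the same filter the restart one-liner above pipes into xargs.
kube_proxy_pods() {
  grep kube-proxy | awk '{print $2}'
}

# On a live cluster:
#   kubectl get pod -A | kube_proxy_pods | xargs kubectl delete pod -n kube-system
```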