---
title: Aolai (EulixOS) Big Data HBase Group Optimization Log - Day 2
date: 2025-05-07 11:41:04 +08:00
filename: 2025-05-07-EulixOS-HBase-Day2
categories:
  - Study
  - HBase
tags:
  - BigData
  - EulixOS
  - DataBase
dir: Study/HBase
share: true
---
## Configuring the proxy

A quick sketch of my environment: qemu runs inside Docker, which in turn runs inside WSL. At the Docker container layer the proxy is provided by WSL (I use mirrored networking mode, so WSL effectively shares the host's ports; see my [blog post](https://delusion.uno/posts/Mirroed-WSL-Docker-Proxy/) for details). I originally assumed the qemu guest would have to reach the proxy through the layer directly above it, i.e. the Docker container, but after some experimenting I found that the guest can ping the host/WSL IP directly, i.e. the address my router assigned to the host/WSL (192.168.x.x). So I simply copied the proxy snippet from the Docker container into the guest's `~/.bashrc`, and it actually worked. My proxy section looks like this.

```shell
proxy_ip="192.168.6.115"
proxy_port="7890"

alias proxy="
    export http_proxy=http://$proxy_ip:$proxy_port;
    export https_proxy=http://$proxy_ip:$proxy_port;
    export all_proxy=http://$proxy_ip:$proxy_port;
    export HTTP_PROXY=http://$proxy_ip:$proxy_port;
    export HTTPS_PROXY=http://$proxy_ip:$proxy_port;
    export ALL_PROXY=http://$proxy_ip:$proxy_port;"
alias unproxy="
    unset http_proxy;
    unset https_proxy;
    unset all_proxy;
    unset HTTP_PROXY;
    unset HTTPS_PROXY;
    unset ALL_PROXY;"
proxy
```

Next, run `curl ipinfo.io` to verify that traffic goes through the proxy; it returns:

```shell
root@localhost [04:10:53] [~]
-> # curl ipinfo.io
{
  "ip": "185.244.208.192",
  "city": "Hong Kong",
  "region": "Hong Kong",
  "country": "HK",
  "loc": "22.2783,114.1747",
  "org": "AS199524 G-Core Labs S.A.",
  "postal": "999077",
  "timezone": "Asia/Hong_Kong",
  "readme": "https://ipinfo.io/missingauth"
}#
```

## Configuring zsh

Because I connect with kitty's `kitten ssh`, bash shows a lot of odd behaviour under it, mainly stray control characters that stick around on screen; in vim, for example, it ends up looking like the screenshot below. This can of course be avoided by falling back to plain ssh.

So I chose to install zsh. It is not a required step, and even without this odd bug I would recommend zsh anyway: the prompt looks better, can show a lot of information, and there are plenty of useful plugins.

### Installing zsh

```shell
dnf install zsh
```

### Switching the default shell

Strictly speaking this step can be skipped, since the oh-my-zsh installer will set it later, but I ran into some odd behaviour here, so I am writing it down.

```shell
# install the package that provides the chsh command
sudo dnf install util-linux-user

chsh -s $(which zsh)
```
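
To confirm the change was actually recorded, the login-shell field in `/etc/passwd` can be checked directly (a quick sanity check; the account here is root, and the zsh path may be `/bin/zsh` or `/usr/bin/zsh` depending on the package):

```shell
grep '^root:' /etc/passwd
# the last field of the output is the login shell and should now point at zsh
```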

Restarting the shell should drop you into zsh, but here something odd happens:

```shell
[root@localhost]~# # 233
zsh: command not found: #
[root@localhost]~# echo $SHELL
/bin/bash
```

At first glance the default shell still appears to be bash rather than zsh; the output `/bin/bash` from `echo $SHELL` says as much.

Yet the error `zsh: command not found: #` after the first command `# 233` shows that the interpreter currently running is zsh (bash treats `#` as a comment interactively, while zsh by default does not unless `interactivecomments` is set). An interesting situation: `$SHELL` is set at login and simply inherited, so it reflects the recorded login shell rather than the interpreter that is actually executing.

To confirm which shell is really in use, run:

```shell
ps -p $$
```

This prints the name of the current shell process. The result checked out: it is zsh.

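For reference, the output looks roughly like this (PID and TTY are illustrative; only the CMD column matters):

```text
  PID TTY          TIME CMD
 2345 pts/0    00:00:00 zsh
```
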
### Installing oh-my-zsh

Install oh-my-zsh from the Gitee mirror:

```shell
sh -c "$(curl -fsSL https://gitee.com/mirrors/oh-my-zsh/raw/master/tools/install.sh)"
```

During the run it asks whether to change the current default shell; answer `y`.

Once it finishes, both zsh and oh-my-zsh work nicely.


### Installing oh-my-zsh plugins and a theme

Clone the plugin repositories; note that this step needs to go through the proxy.

```shell
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
```

Edit the configuration file:

```shell
vi ~/.zshrc
```

```text
plugins=(git dnf z web-search zsh-syntax-highlighting zsh-autosuggestions)

ZSH_THEME="crcandy"
```

Once these edits are done, our comfortable zsh setup is complete! Remember to copy the proxy configuration from the old `~/.bashrc` into `~/.zshrc`.


## Deploying HBase inside qemu

### Copying the script

Copy the deployment script into the qemu guest and run it; note that the run needs the proxy. During the run I hit:

```shell
Error parsing proxy URL socks5://192.168.6.115:7890: Unsupported scheme.
```
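
This looks like the downloader (presumably wget) being handed a `socks5://` proxy URL, a scheme it cannot parse. A hedged workaround, assuming the proxy at 192.168.6.115:7890 also accepts plain HTTP CONNECT (as the aliases above already assume), is to re-export the variables with the `http://` scheme before re-running the script:

```shell
export http_proxy=http://192.168.6.115:7890
export https_proxy=http://192.168.6.115:7890
export all_proxy=http://192.168.6.115:7890
export HTTP_PROXY=$http_proxy HTTPS_PROXY=$https_proxy ALL_PROXY=$all_proxy
```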

### Out-of-memory errors while running the script

The symptom: the terminal running the compilation shows
```shell
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [third_party/abseil-cpp/absl/flags/CMakeFiles/flags_usage_internal.dir/build.make:76: third_party/abseil-cpp/absl/flags/CMakeFiles/flags_usage_internal.dir/internal/usage.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:2849: third_party/abseil-cpp/absl/flags/CMakeFiles/flags_usage_internal.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 61%] Building CXX object CMakeFiles/libprotobuf-lite.dir/src/google/protobuf/io/coded_stream.cc.o
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [CMakeFiles/libprotobuf.dir/build.make:356: CMakeFiles/libprotobuf.dir/src/google/protobuf/descriptor.pb.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 61%] Building CXX object CMakeFiles/libprotobuf-lite.dir/src/google/protobuf/io/zero_copy_stream.cc.o
c++: fatal error: Killed signal terminated program cc1plus
```

while another terminal shows
```shell
root@localhost [21:58:57] [~]
-> # [ 1364.294650][T17314] Out of memory: Killed process 17151 (cc1plus) total-vm:244616kB, anon-rss:194012kB, file-rss:5072kB, shmem-rss:0kB, UID:0 pgtables:472kB oom_score_adj:0
[ 1393.317504][T18497] Out of memory: Killed process 17332 (cc1plus) total-vm:259756kB, anon-rss:200312kB, file-rss:1460kB, shmem-rss:0kB, UID:0 pgtables:492kB oom_score_adj:0
[ 1417.950128][T18591] Out of memory: Killed process 17329 (cc1plus) total-vm:262140kB, anon-rss:202724kB, file-rss:2424kB, shmem-rss:0kB, UID:0 pgtables:492kB oom_score_adj:0
[ 1452.754503][T17330] Out of memory: Killed process 17324 (cc1plus) total-vm:243340kB, anon-rss:198084kB, file-rss:1564kB, shmem-rss:0kB, UID:0 pgtables:476kB oom_score_adj:0
```
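
These are OOM kills: the qemu guest does not have enough memory for a parallel C++ build of protobuf/abseil, so the kernel keeps killing `cc1plus`. Two common mitigations, sketched here as assumptions rather than steps verified in this log (the swap size and build invocation are illustrative):

```shell
# 1) rerun the failing compile step with less parallelism, so fewer cc1plus processes coexist
make -j1

# 2) or give the guest some swap to absorb the peak
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```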


## Deploying HBase on the ARM server

### Build error

When deploying HBase from source on the ARM platform, `mvn install` fails with:

```shell
[INFO] Apache HBase - Archetypes .......................... SKIPPED
[INFO] Apache HBase - Exemplar for hbase-client archetype . SKIPPED
[INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SKIPPED
[INFO] Apache HBase - Archetype builder ................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 27:04 min
[INFO] Finished at: 2025-05-07T20:07:30+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.1.0:test (secondPartTestsExecution) on project hbase-server: There are test failures.
[ERROR]
[ERROR] Please refer to /home/wangchenyu/HBase/hbase-2.5.11/hbase-server/target/surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :hbase-server
```

The fix:

```shell
mvn install -DskipTests -rf :hbase-server
```

This resumes the build from the failed module while skipping the tests; after running it, the build succeeds.
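
To see which tests actually failed before deciding to skip them, the surefire reports mentioned in the error are plain-text files; something along these lines (a rough sketch) lists the ones that failed:

```shell
grep -rl "FAILURE!" hbase-server/target/surefire-reports/ | head
```
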
### SLF4J multiple-bindings warning

```text
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/wangchenyu/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.17.2/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
running master, logging to /home/wangchenyu/HBase/hbase-2.5.11/bin/../logs/hbase-wangchenyu-master-localhost.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.4.0/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/wangchenyu/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.17.2/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
```

This is only a warning about duplicate SLF4J bindings on the classpath; it does not block startup and is not the key issue, so I am ignoring it for now.

### Intermittent ZooKeeper disconnections

After starting HBase with `./bin/start-hbase.sh` and opening `./bin/hbase shell`, the connection to ZooKeeper keeps dropping. If I keep pressing Enter the shell does respond now and then, so my guess is that ZooKeeper gets taken down shortly after it comes up, keeps reconnecting, and then goes down again after a short while.

```shell
2025-05-08 00:32:31,348 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2025-05-08 00:32:31,348 WARN zookeeper.ClientCnxn: Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
2025-05-08 00:32:32,449 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181.
2025-05-08 00:32:32,449 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2025-05-08 00:32:32,449 WARN zookeeper.ClientCnxn: Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
2025-05-08 00:32:33,550 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181.
2025-05-08 00:32:33,550 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2025-05-08 00:32:33,551 WARN zookeeper.ClientCnxn: Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)

hbase:002:0>
hbase:003:0> 2025-05-08 00:32:34,651 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181.
2025-05-08 00:32:34,651 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2025-05-08 00:32:34,652 WARN zookeeper.ClientCnxn: Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)

hbase:004:0>
hbase:005:0>
hbase:006:0>
hbase:007:0>
hbase:008:0>
hbase:009:0> 2025-05-08 00:32:35,753 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181.
2025-05-08 00:32:35,753 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2025-05-08 00:32:35,753 WARN zookeeper.ClientCnxn: Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException.
java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)

hbase:010:0>
```

Note that while I was repeatedly pressing Enter, the prompt did appear briefly, so I really was inside the shell for short moments. It feels as if ZooKeeper keeps getting killed by something and then restarted, and in those gaps I can briefly use the shell.
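
Before the attempts below, a few checks that would help narrow this down (a sketch; in a standalone deployment ZooKeeper, the master and the region server all run inside the single HMaster JVM):

```shell
jps                                      # is the HMaster process still alive?
ss -ltnp | grep 2181                     # is anything listening on the ZooKeeper client port?
tail -n 50 logs/hbase-*-master-*.log     # why did the master / embedded ZooKeeper go down?
```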

#### Attempted fix 1

Start ZooKeeper manually:

```shell
bin/hbase-daemon.sh start zookeeper
```

Trying to connect again now succeeds:

```shell
./bin/hbase shell
```

But a new problem appears:

```shell
hbase:001:0> list
TABLE

ERROR: KeeperErrorCode = NoNode for /hbase/master

For usage try 'help "list"'

Took 0.1152 seconds
```

I found the reason: starting ZooKeeper by hand is a step for distributed deployments; in standalone mode there should be no need to start ZooKeeper explicitly. So the underlying problem is in fact not solved.
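
A quick way to see which daemons are actually up is `jps`, which ships with the JDK. In standalone mode ZooKeeper, the master and the region server all live inside a single HMaster JVM, so an output listing only the manually started ZooKeeper process and no HMaster would be consistent with the `NoNode for /hbase/master` error above:

```shell
jps
```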

#### Attempted fix 2

Change the JDK version.

Switched from JDK 17 to JDK 11; this did not solve it.
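
For reference, on dnf-based systems the JDK switch is usually done with `alternatives`, and HBase can also be pointed at the chosen JDK explicitly; the JAVA_HOME path below is an assumption and needs to match the actual install location:

```shell
alternatives --config java        # pick the JDK 11 entry interactively

# in conf/hbase-env.sh:
# export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
```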

#### Attempted fix 3

Pin down the network settings.

Edit `conf/hbase-site.xml` under the HBase directory and specify the ZooKeeper parameters explicitly:

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>127.0.0.1</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
  <description>ZooKeeper client connection port</description>
</property>
```

This did not solve the problem.

#### Attempted fix 4

Edit `conf/hbase-site.xml` again, this time specifying where HBase stores its data. With this in place it finally runs successfully.

```xml
<property>
  <name>hbase.cluster.distributed</name>
  <value>false</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/wangchenyu/HBase/hbase-data/tmp</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>file:///home/wangchenyu/HBase/hbase-data/hbase</value>
</property>
<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
</configuration>
```

The `hbase.rootdir` property must be specified.

> In standalone mode, if you do not explicitly point it at the local filesystem (via the `file://` prefix), HBase will by default try to use HDFS as its storage layer, which leads to connection errors because standalone deployments usually have no HDFS configured.
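
As a final check, a minimal smoke test (assuming the daemons are restarted after the configuration change):

```shell
./bin/stop-hbase.sh
./bin/start-hbase.sh
./bin/hbase shell
# inside the shell, `status` and `list` should now return without the NoNode error
```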
|
---

References used for the standalone deployment phase:

https://www.cnblogs.com/h--d/p/11580398.html

https://www.cnblogs.com/xxxchik/p/16417292.html

https://blog.51cto.com/u_16099192/8703059