
MySQL master-slave replication is the cornerstone of high-availability, high-performance MySQL architectures. In essence, it keeps data consistent across multiple nodes by shipping and replaying logs. Its core application scenarios fall into four areas:
Before diving into the mechanics, it is essential to distinguish two easily confused core logs; this is a prerequisite for understanding replication:
| Log type | Layer | Nature | Core role | Lifecycle |
|---|---|---|---|---|
| binlog (binary log) | MySQL Server layer | Logical log | Records all data-modifying operations; the data carrier for replication | Appended as a series of files; retention configurable (e.g. `binlog_expire_logs_seconds`) |
| redo log | InnoDB engine layer | Physical log | Guarantees transaction durability; committed transactions survive a crash | Fixed size, written in a circular fashion |
The data carrier for replication is the binlog. Because it lives in the Server layer, it is storage-engine agnostic: both InnoDB and MyISAM tables can be replicated through it.
The binlog supports three formats: STATEMENT (logs the SQL text), ROW (logs row images), and MIXED (switches between the two automatically). MySQL 8.0 defaults to ROW.
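Why ROW is the safer default can be shown with a toy Python sketch (a model, not MySQL internals): a nondeterministic expression such as `NOW()` re-executed on the replica under statement-based replication can diverge, while shipping the row image cannot.

```python
# Toy model of replicating one "UPDATE t SET ts = NOW()" event.
# STATEMENT format re-executes the SQL on the replica; ROW format ships
# the after-image of the modified row verbatim.

def now(clock):
    return clock  # stand-in for NOW(): whatever the local clock says

def statement_replication(master_clock, replica_clock):
    master_row = {"id": 1, "ts": now(master_clock)}    # executed on master
    replica_row = {"id": 1, "ts": now(replica_clock)}  # re-executed on replica
    return master_row, replica_row

def row_replication(master_clock, replica_clock):
    master_row = {"id": 1, "ts": now(master_clock)}
    replica_row = dict(master_row)  # binlog carries the row image itself
    return master_row, replica_row

m, r = statement_replication(100, 103)
assert m != r   # replica diverged: ts=100 on master, ts=103 on replica
m, r = row_replication(100, 103)
assert m == r   # ROW format keeps both sides identical
```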
At its core, MySQL replication is a single-master, multi-replica streaming log pipeline driven by three cooperating threads: the master's Binlog Dump thread, and the replica's IO thread and SQL thread.
The full data path consists of 8 core steps, covering everything from transaction commit on the master to the data being applied on the replica:
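The three-thread pipeline above can be sketched as a toy Python model (queues standing in for binlog, network, and relay log; this is an illustration, not MySQL internals):

```python
from collections import deque

# Toy pipeline: master commit -> binlog -> (Dump thread / IO thread) ->
# relay log -> SQL thread replay on the replica.

binlog, relay_log, replica_data = deque(), deque(), []

def master_commit(event, binlog):
    binlog.append(event)                 # master appends event to its binlog

def dump_and_io_threads(binlog, relay_log):
    while binlog:                        # Dump thread streams events out;
        relay_log.append(binlog.popleft())  # IO thread writes the relay log

def sql_thread(relay_log, replica_data):
    while relay_log:                     # SQL thread replays relay events
        replica_data.append(relay_log.popleft())

for ev in ["INSERT 1", "INSERT 2", "UPDATE 1"]:
    master_commit(ev, binlog)
dump_and_io_threads(binlog, relay_log)
sql_thread(relay_log, replica_data)
assert replica_data == ["INSERT 1", "INSERT 2", "UPDATE 1"]
```

Events arrive on the replica in binlog order; replication lag is simply these queues filling faster than the SQL-thread stage drains them.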

GTID (Global Transaction Identifier), introduced in MySQL 5.6, is a key improvement to replication: it eliminates the pain points of traditional file-name-plus-offset positioning and is standard in modern production deployments.
A GTID has the form `source_id:transaction_id`, where `source_id` is the instance's `server_uuid` (globally unique) and `transaction_id` is a monotonically increasing per-instance transaction number.

MySQL offers three replication modes: asynchronous replication, semi-synchronous replication, and Group Replication (MGR). They differ in consistency guarantees, commit path, and performance overhead, and suit different workloads.
Asynchronous replication is MySQL's default and most basic mode.
After the master finishes a client transaction and flushes the binlog, it immediately acknowledges the commit to the client, without waiting for any replica's IO thread to receive the binlog or return an ACK. Master and replicas are fully decoupled; the master has no visibility into replica sync state.
Master my.cnf core configuration:

```ini
[mysqld]
server-id=1
log-bin=mysql-bin
binlog_format=ROW
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_expire_logs_seconds=604800
```
Replica my.cnf core configuration:

```ini
[mysqld]
server-id=2
relay_log=relay-bin
read_only=ON
super_read_only=ON
gtid_mode=ON
enforce_gtid_consistency=ON
log_slave_updates=OFF
```
SQL to set up the replication link on the replica:

```sql
CHANGE MASTER TO
  MASTER_HOST='192.168.1.100',
  MASTER_PORT=3306,
  MASTER_USER='repl',
  MASTER_PASSWORD='Repl@123456',
  MASTER_AUTO_POSITION=1;  -- GTID-based automatic positioning
START SLAVE;
```
Semi-synchronous replication strengthens the durability guarantees of asynchronous replication. It was introduced in MySQL 5.5 and upgraded to lossless semi-sync in 5.7, where AFTER_SYNC became the default wait point; note that semi-sync ships as a plugin and is not enabled by default, even in MySQL 8.0.
When a transaction commits, the master does not acknowledge the client immediately after flushing the binlog. Instead it waits until at least N replicas' IO threads have received the binlog, written it to their relay logs, and returned an ACK, and only then reports success to the client.
The difference between lossless and traditional semi-sync is when the master waits for the ACK:
- Lossless (AFTER_SYNC, the default): wait for the replica ACK before the engine-layer commit. If the master dies, any transaction already acknowledged to the client is guaranteed to exist on at least one replica; no data loss.
- Traditional (AFTER_COMMIT): wait for the ACK after the engine-layer commit. If the master dies after committing but before the ACK arrives, a committed (and already visible) transaction may be missing from every replica, so data can be lost on failover.
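The two wait points can be contrasted with a toy event-timeline sketch in Python (a simplified model, not MySQL internals): the dangerous case is a master crash after the binlog flush but before the replica ACK arrives.

```python
# Toy timeline of one transaction under the two semi-sync wait points.
def timeline(wait_point, crash_before_ack):
    events = ["binlog_flush"]
    if wait_point == "AFTER_COMMIT":
        events.append("engine_commit")   # commit first, then wait for ACK
    if not crash_before_ack:
        events.append("slave_ack")
        if wait_point == "AFTER_SYNC":
            events.append("engine_commit")  # commit only after the ACK
        events.append("client_ok")
    return events

# Crash before the ACK: AFTER_COMMIT leaves a committed transaction that
# no replica ever received; AFTER_SYNC never committed it, so a failover
# to the replica loses nothing that was acknowledged.
assert "engine_commit" in timeline("AFTER_COMMIT", crash_before_ack=True)
assert "engine_commit" not in timeline("AFTER_SYNC", crash_before_ack=True)
```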
Master plugin installation and configuration:

```sql
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_wait_for_slave_count = 1;
SET GLOBAL rpl_semi_sync_master_wait_point = 'AFTER_SYNC';
SET GLOBAL rpl_semi_sync_master_timeout = 1000;
```
Replica plugin installation and configuration:

```sql
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;  -- restart so the IO thread registers as semi-sync
```
Key parameters:
- `rpl_semi_sync_master_wait_for_slave_count`: how many replica ACKs to wait for; 1 is the usual production setting
- `rpl_semi_sync_master_timeout`: timeout in milliseconds; on timeout the master degrades to asynchronous replication rather than blocking writes indefinitely

Group Replication (MGR), introduced in MySQL 5.7.17, is a distributed strongly consistent replication scheme built on a Paxos-based group communication protocol, designed to eliminate the split-brain and data-inconsistency problems of classic master-slave replication.
An MGR group consists of multiple MySQL nodes that reach consensus on every transaction: a transaction commits only after a majority of members (N/2+1) certify it. MGR supports single-primary mode (one writable node, the rest read-only) and multi-primary mode (all nodes writable), with built-in failure detection, automatic primary election, split-brain protection, and consistency checking.
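The majority rule (N/2+1) also determines how many failures a group tolerates, which is worth computing when sizing a cluster:

```python
# Majority quorum arithmetic for a consensus group of n members.
def quorum(n):
    return n // 2 + 1          # smallest majority

def tolerated_failures(n):
    return n - quorum(n)       # members that can fail with quorum intact

assert quorum(3) == 2 and tolerated_failures(3) == 1
assert quorum(5) == 3 and tolerated_failures(5) == 2
# An even-sized group buys no extra fault tolerance over n-1 members:
assert tolerated_failures(4) == tolerated_failures(3) == 1
```

This is why production MGR groups are usually 3 or 5 nodes: going from 3 to 4 raises the quorum without raising the number of tolerated failures.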
| Dimension | Asynchronous | Lossless semi-sync | Group Replication (MGR) |
|---|---|---|---|
| Consistency guarantee | Weak; a master crash can lose data | Strong; acknowledged transactions are not lost | Strongest; backed by distributed consensus |
| Commit path | Return after local binlog flush | Return after at least 1 replica ACK | Return after majority certification |
| Performance overhead | Minimal | Low; roughly one extra network RTT | Medium-high; consensus adds overhead |
| Failover | Manual, complex | Needs external tooling (e.g. MHA/Keepalived) | Native and automatic |
| Split-brain protection | None; avoided by procedure | None; needs external tooling | Native |
| Multi-primary writes | Not supported | Not supported | Supported |
| Operational cost | Very low | Low | Medium-high |
Replication lag is the most common production problem: the replica replays changes more slowly than the master writes them, so replica data falls behind. This hurts read-your-writes consistency under read/write splitting and inflates failover RTO.
Many developers rely solely on Seconds_Behind_Master to judge lag, but that value has well-known blind spots; treat it as a hint, never the sole signal.
```sql
SHOW SLAVE STATUS\G
```

Key fields:
- Slave_IO_Running / Slave_SQL_Running: both must be Yes, otherwise replication is broken
- Master_Log_File / Read_Master_Log_Pos: the master binlog position the replica's IO thread has read up to
- Relay_Master_Log_File / Exec_Master_Log_Pos: the master binlog position the SQL thread has finished replaying
- Seconds_Behind_Master: the time gap, in seconds, between the transaction being replayed and the master's newest one; misleading in the presence of long transactions, clock skew, or a broken link

Localizing the bottleneck:
- Read_Master_Log_Pos far behind the master's current binlog position: the lag is in the IO thread, i.e. the log-transfer leg between master and replica
- Read_Master_Log_Pos caught up but far ahead of Exec_Master_Log_Pos: the lag is in the SQL thread, i.e. replay on the replica; roughly 90% of lag cases fall here

```sql
-- On the master: latest executed GTIDs
SELECT @@global.gtid_executed;
-- On the replica: GTIDs already replayed
SELECT @@global.gtid_executed;
```
The difference between the two GTID sets gives an exact count of outstanding transactions, without Seconds_Behind_Master's false readings.
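The set difference is plain interval arithmetic, sketched here in Python (a simplified parser; the UUID below is made up, and real `gtid_executed` values can also contain multiple UUIDs and gaps):

```python
# Toy GTID-set arithmetic: replica lag in transactions is
# |master_executed - replica_executed|. A set looks like "uuid:1-100"
# or "uuid:1-5:7-9" (comma-separated for multiple UUIDs).

def parse_gtid_set(s):
    txns = set()
    for part in s.split(","):
        uuid, _, intervals = part.strip().partition(":")
        for iv in intervals.split(":"):
            lo, _, hi = iv.partition("-")
            hi = hi or lo                      # "uuid:4" means the single txn 4
            txns.update((uuid, n) for n in range(int(lo), int(hi) + 1))
    return txns

master  = parse_gtid_set("3e11fa47-0000:1-100")
replica = parse_gtid_set("3e11fa47-0000:1-97")
assert len(master - replica) == 3  # replica is exactly 3 transactions behind
```

In practice you would run `SELECT GTID_SUBTRACT(@master, @replica)` on the server instead; the sketch just shows what that subtraction computes.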
```sql
SELECT * FROM performance_schema.replication_applier_status_by_worker;
```

This shows each parallel-replication worker thread's state and pending transactions, pinpointing exactly which worker is the replay bottleneck.
IO-thread lag means the master's binlog cannot reach the replica fast enough. Main things to check:
- sync_binlog=1 guarantees binlog durability at the cost of extra fsync IO; non-critical systems may relax it

SQL-thread lag is where most replication lag lives: the replica replays the binlog more slowly than the master writes it. Check the following, ordered by how often they occur in practice:
- Tables without a primary key: with ROW-format events, the replica cannot use an index to locate the rows touched by an UPDATE/DELETE and falls back to full table scans, matching every column row by row; the larger the table, the slower the replay.

SQL to find InnoDB tables lacking a primary key:

```sql
SELECT t.table_schema, t.table_name, t.engine
FROM information_schema.tables t
WHERE t.table_schema NOT IN ('sys','mysql','information_schema','performance_schema')
  AND t.engine = 'InnoDB'
  AND NOT EXISTS (
    SELECT 1 FROM information_schema.key_column_usage k
    WHERE k.constraint_name = 'PRIMARY'
      AND k.table_schema = t.table_schema
      AND k.table_name   = t.table_name
  );
```
```sql
-- On the master: transactions running longer than 10 seconds
-- (user/host/db come from processlist; innodb_trx has no such columns)
SELECT t.trx_id, t.trx_started, t.trx_rows_modified, p.user, p.host, p.db
FROM information_schema.innodb_trx t
JOIN information_schema.processlist p ON p.id = t.trx_mysql_thread_id
WHERE TIMESTAMPDIFF(SECOND, t.trx_started, NOW()) > 10;

-- On the replica: statements currently being replayed
SELECT * FROM performance_schema.events_statements_current
WHERE thread_id IN (
  SELECT thread_id FROM performance_schema.replication_applier_status_by_worker
);
```
```sql
-- Inspect the replica's parallel-replication settings
SHOW VARIABLES LIKE 'slave_parallel%';
SHOW VARIABLES LIKE 'binlog_transaction_dependency_tracking';
```
- Writes on the replica: without super_read_only=ON, operators or application code can write to the replica, diverging the data; replay then fails to match rows, replication breaks, and lag accumulates.

The goals of lag optimization are to shrink the binlog the master produces, raise replay parallelism on the replica, and cut the resource cost of replay. Work through four layers: architecture, master, replica, and SQL.
```ini
[mysqld]
slave_parallel_type=LOGICAL_CLOCK
slave_parallel_workers=8
binlog_transaction_dependency_tracking=WRITESET   # set on the source side
transaction_write_set_extraction=XXHASH64         # set on the source side
slave_preserve_commit_order=ON
```
Notes:
- `slave_parallel_workers`: number of parallel applier threads; 50%-70% of the replica's CPU cores is a reasonable starting point, capped around 32
- `binlog_transaction_dependency_tracking=WRITESET`: enables row-level dependency tracking, so transactions whose modified rows do not conflict can replay in parallel even if they did not commit together on the source
- `slave_preserve_commit_order=ON`: keeps the replica's commit order identical to the master's, avoiding visible inconsistencies
```sql
-- Before: one transaction deleting millions of rows stalls replica replay
DELETE FROM order_info WHERE create_time < '2024-01-01';
-- After: batch the delete (WHILE/REPEAT is only valid inside a stored program)
DELIMITER //
CREATE PROCEDURE purge_old_orders()
BEGIN
  REPEAT
    DELETE FROM order_info WHERE create_time < '2024-01-01' LIMIT 1000;
    SET @deleted = ROW_COUNT();  -- capture before the next statement resets it
    DO SLEEP(0.1);               -- throttle so the replica can keep up
  UNTIL @deleted = 0 END REPEAT;
END //
DELIMITER ;
CALL purge_old_orders();
```
- Use gh-ost or pt-online-schema-change for online, lock-free DDL
- Use ROW-format binlog to guarantee master-replica consistency
- Consider binlog_transaction_compression=ON (MySQL 8.0.20+) to shrink binlog transfer
- Size innodb_buffer_pool_size to 50%-70% of physical memory so replay hits cache instead of disk

Replica my.cnf tuning:

```ini
[mysqld]
read_only=ON
super_read_only=ON
innodb_flush_log_at_trx_commit=2
sync_binlog=1000
innodb_flush_method=O_DIRECT
log_slave_updates=OFF
slow_query_log=OFF
```
Notes:
- `super_read_only=ON`: blocks writes even from SUPER users, eliminating accidental replica writes and the inconsistencies they cause
- `innodb_flush_log_at_trx_commit=2`: the replica does not need the master's "double 1" durability settings; relaxing them cuts fsync load and speeds up replay
- `log_slave_updates=OFF`: skip writing a binlog on the replica (keep it ON for cascading replication), saving IO
- Avoid no-op writes: UPDATE user SET name='test' WHERE id=1 generates a binlog event even when the data does not change, adding replication pressure.

Case study: a table without an explicit primary key.

```sql
CREATE TABLE user_no_pk (
  name        VARCHAR(50) NOT NULL,
  age         INT NOT NULL,
  create_time DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- Insert 1,000,000 test rows (raise the CTE recursion limit first;
-- the default cte_max_recursion_depth is only 1000)
SET SESSION cte_max_recursion_depth = 1000000;
INSERT INTO user_no_pk (name, age)
WITH RECURSIVE t AS (
  SELECT 1 AS n UNION ALL SELECT n + 1 FROM t WHERE n < 1000000
)
SELECT CONCAT('user', n), n % 100 FROM t;

UPDATE user_no_pk SET age=20 WHERE name='user999999';
```
On the master this UPDATE took 0.01 seconds, but the replica needed over 10 seconds to replay it, and lag spiked. The root cause: under ROW format, with no primary key the replica must scan all 1,000,000 rows, comparing every column, to locate the row to update.
Adding an explicit primary key dropped replay time to 0.01 seconds and eliminated the lag:

```sql
ALTER TABLE user_no_pk ADD COLUMN id BIGINT PRIMARY KEY AUTO_INCREMENT FIRST;
```
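The asymmetry comes down to lookup cost, which a toy Python cost model makes concrete (comparisons counted on a sorted list standing in for a B+-tree; real InnoDB pages change the constants, not the shape):

```python
# Cost of locating one row among 1,000,000 to apply a ROW-format event.
rows = list(range(1_000_000))

def scan_cost(rows, target):
    """No usable index: compare row by row until the match."""
    for i, value in enumerate(rows):
        if value == target:
            return i + 1

def btree_cost(rows, target):
    """Primary key: binary-search-style descent, counting comparisons."""
    lo, hi, steps = 0, len(rows), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if rows[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

assert scan_cost(rows, 999_999) == 1_000_000  # worst case: touch every row
assert btree_cost(rows, 999_999) <= 20        # ~log2(1e6) comparisons
```

A million comparisons versus about twenty, per replayed row event, is exactly the 0.01s-versus-10s gap observed above.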
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.2.5</version>
<relativePath/>
</parent>
<groupId>com.jam.demo</groupId>
<artifactId>mysql-rw-split-demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>mysql-rw-split-demo</name>
<properties>
<java.version>17</java.version>
<mybatis-plus.version>3.5.7</mybatis-plus.version>
<guava.version>33.1.0-jre</guava.version>
<fastjson2.version>2.0.49</fastjson2.version>
<springdoc.version>2.5.0</springdoc.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>${mybatis-plus.version}</version>
</dependency>
<dependency>
<groupId>com.mysql</groupId>
<artifactId>mysql-connector-j</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
<version>${springdoc.version}</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.18.32</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>${guava.version}</version>
</dependency>
<dependency>
<groupId>com.alibaba.fastjson2</groupId>
<artifactId>fastjson2</artifactId>
<version>${fastjson2.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Druid pool is instantiated directly in DataSourceConfig but was missing from the dependency list -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
<version>1.2.20</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</project>
```java
package com.jam.demo.enums;

import lombok.Getter;

@Getter
public enum DataSourceType {

    MASTER("master", "master - writes"),
    SLAVE("slave", "replica - reads");

    private final String code;
    private final String desc;

    DataSourceType(String code, String desc) {
        this.code = code;
        this.desc = desc;
    }
}
```
```java
package com.jam.demo.config;

import com.jam.demo.enums.DataSourceType;
import org.springframework.util.ObjectUtils;

public class DynamicDataSourceContextHolder {

    private static final ThreadLocal<DataSourceType> CONTEXT_HOLDER = new ThreadLocal<>();

    private DynamicDataSourceContextHolder() {
    }

    /**
     * Set the data source type for the current thread.
     * @param dataSourceType data source type
     */
    public static void setDataSourceType(DataSourceType dataSourceType) {
        if (ObjectUtils.isEmpty(dataSourceType)) {
            throw new IllegalArgumentException("dataSourceType must not be null");
        }
        CONTEXT_HOLDER.set(dataSourceType);
    }

    /**
     * Get the current data source type, defaulting to MASTER.
     * @return data source type
     */
    public static DataSourceType getDataSourceType() {
        return ObjectUtils.isEmpty(CONTEXT_HOLDER.get()) ? DataSourceType.MASTER : CONTEXT_HOLDER.get();
    }

    /**
     * Clear the thread-local so pooled threads never carry a stale routing key.
     */
    public static void clearDataSourceType() {
        CONTEXT_HOLDER.remove();
    }
}
```
```java
package com.jam.demo.config;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class DynamicRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // Spring resolves the actual DataSource by this key on every connection borrow
        return DynamicDataSourceContextHolder.getDataSourceType().getCode();
    }
}
```
```java
package com.jam.demo.config;

import com.alibaba.druid.pool.DruidDataSource;
import com.baomidou.mybatisplus.core.MybatisConfiguration;
import com.baomidou.mybatisplus.core.config.GlobalConfig;
import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
import com.jam.demo.enums.DataSourceType;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;

@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource masterDataSource() {
        return new DruidDataSource();
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.slave")
    public DataSource slaveDataSource() {
        return new DruidDataSource();
    }

    @Bean
    @Primary
    public DataSource dynamicDataSource() {
        Map<Object, Object> targetDataSources = new HashMap<>(2);
        targetDataSources.put(DataSourceType.MASTER.getCode(), masterDataSource());
        targetDataSources.put(DataSourceType.SLAVE.getCode(), slaveDataSource());
        DynamicRoutingDataSource dynamicDataSource = new DynamicRoutingDataSource();
        dynamicDataSource.setTargetDataSources(targetDataSources);
        dynamicDataSource.setDefaultTargetDataSource(masterDataSource());
        return dynamicDataSource;
    }

    @Bean
    public MybatisSqlSessionFactoryBean sqlSessionFactory() throws Exception {
        MybatisSqlSessionFactoryBean sessionFactory = new MybatisSqlSessionFactoryBean();
        sessionFactory.setDataSource(dynamicDataSource());
        sessionFactory.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources("classpath:mapper/*.xml"));
        MybatisConfiguration configuration = new MybatisConfiguration();
        configuration.setMapUnderscoreToCamelCase(true);
        configuration.setCacheEnabled(false);
        sessionFactory.setConfiguration(configuration);
        GlobalConfig globalConfig = new GlobalConfig();
        globalConfig.setBanner(false);
        sessionFactory.setGlobalConfig(globalConfig);
        return sessionFactory;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dynamicDataSource());
    }

    @Bean
    public TransactionTemplate transactionTemplate(PlatformTransactionManager transactionManager) {
        return new TransactionTemplate(transactionManager);
    }
}
```
```java
package com.jam.demo.annotation;

import com.jam.demo.enums.DataSourceType;

import java.lang.annotation.*;

@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface DataSourceSwitch {
    DataSourceType value() default DataSourceType.MASTER;
}
```
```java
package com.jam.demo.aspect;

import com.jam.demo.annotation.DataSourceSwitch;
import com.jam.demo.config.DynamicDataSourceContextHolder;
import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

import java.lang.reflect.Method;

@Slf4j
@Aspect
@Component
@Order(1)
public class DataSourceSwitchAspect {

    @Pointcut("@annotation(com.jam.demo.annotation.DataSourceSwitch)")
    public void dataSourceSwitchPointCut() {
    }

    @Around("dataSourceSwitchPointCut()")
    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
        MethodSignature signature = (MethodSignature) joinPoint.getSignature();
        Method method = signature.getMethod();
        DataSourceSwitch annotation = method.getAnnotation(DataSourceSwitch.class);
        if (annotation != null) {
            DynamicDataSourceContextHolder.setDataSourceType(annotation.value());
        }
        try {
            return joinPoint.proceed();
        } finally {
            // Always clear in finally, even when the target method throws
            DynamicDataSourceContextHolder.clearDataSourceType();
        }
    }
}
```
```java
package com.jam.demo.entity;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import io.swagger.v3.oas.annotations.media.Schema;
import lombok.Data;

import java.time.LocalDateTime;

@Data
@TableName("user_info")
@Schema(description = "User info entity")
public class UserInfo {

    @TableId(type = IdType.AUTO)
    @Schema(description = "User ID", example = "1")
    private Long id;

    @Schema(description = "Username", example = "testUser")
    private String username;

    @Schema(description = "Age", example = "20")
    private Integer age;

    @Schema(description = "Creation time")
    private LocalDateTime createTime;
}
```
```java
package com.jam.demo.mapper;

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.jam.demo.entity.UserInfo;
import org.apache.ibatis.annotations.Mapper;

@Mapper
public interface UserInfoMapper extends BaseMapper<UserInfo> {
}
```
```java
package com.jam.demo.service;

import com.baomidou.mybatisplus.extension.service.IService;
import com.jam.demo.entity.UserInfo;

public interface UserInfoService extends IService<UserInfo> {

    boolean createUser(UserInfo userInfo);

    UserInfo getUserById(Long id);
}
```
```java
package com.jam.demo.service.impl;

import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl;
import com.jam.demo.annotation.DataSourceSwitch;
import com.jam.demo.entity.UserInfo;
import com.jam.demo.enums.DataSourceType;
import com.jam.demo.mapper.UserInfoMapper;
import com.jam.demo.service.UserInfoService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import org.springframework.transaction.support.TransactionTemplate;
import org.springframework.util.ObjectUtils;

/**
 * User info service implementation.
 * @author ken
 */
@Slf4j
@Service
public class UserInfoServiceImpl extends ServiceImpl<UserInfoMapper, UserInfo> implements UserInfoService {

    private final TransactionTemplate transactionTemplate;

    public UserInfoServiceImpl(TransactionTemplate transactionTemplate) {
        this.transactionTemplate = transactionTemplate;
    }

    @Override
    @DataSourceSwitch(DataSourceType.MASTER)
    public boolean createUser(UserInfo userInfo) {
        if (ObjectUtils.isEmpty(userInfo)) {
            throw new IllegalArgumentException("userInfo must not be null");
        }
        Boolean result = transactionTemplate.execute(status -> {
            try {
                return this.save(userInfo);
            } catch (Exception e) {
                status.setRollbackOnly();
                log.error("failed to create user", e);
                return false;
            }
        });
        return Boolean.TRUE.equals(result);
    }

    @Override
    @DataSourceSwitch(DataSourceType.SLAVE)
    public UserInfo getUserById(Long id) {
        if (ObjectUtils.isEmpty(id) || id <= 0) {
            throw new IllegalArgumentException("invalid user id");
        }
        return this.getById(id);
    }
}
```
```java
package com.jam.demo.controller;

import com.jam.demo.entity.UserInfo;
import com.jam.demo.service.UserInfoService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@Slf4j
@RestController
@RequestMapping("/user")
@Tag(name = "User management", description = "User info CRUD endpoints")
public class UserInfoController {

    private final UserInfoService userInfoService;

    public UserInfoController(UserInfoService userInfoService) {
        this.userInfoService = userInfoService;
    }

    @PostMapping
    @Operation(summary = "Create user", description = "Insert a user; routed to the master")
    public ResponseEntity<Boolean> createUser(@RequestBody UserInfo userInfo) {
        return ResponseEntity.ok(userInfoService.createUser(userInfo));
    }

    @GetMapping("/{id}")
    @Operation(summary = "Query user", description = "Look up a user by ID; routed to the replica")
    public ResponseEntity<UserInfo> getUserById(
            @Parameter(description = "User ID", required = true) @PathVariable Long id) {
        return ResponseEntity.ok(userInfoService.getUserById(id));
    }
}
```
```yaml
spring:
  application:
    name: mysql-rw-split-demo
  datasource:
    master:
      driver-class-name: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://192.168.1.100:3306/demo_db?useUnicode=true&characterEncoding=utf8&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
      username: root
      password: Root@123456
      initial-size: 5
      max-active: 20
      min-idle: 5
      max-wait: 60000
    slave:
      driver-class-name: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://192.168.1.101:3306/demo_db?useUnicode=true&characterEncoding=utf8&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
      username: root
      password: Root@123456
      initial-size: 5
      max-active: 20
      min-idle: 5
      max-wait: 60000

springdoc:
  swagger-ui:
    path: /swagger-ui.html
    enabled: true
  api-docs:
    enabled: true

mybatis-plus:
  mapper-locations: classpath:mapper/*.xml
  type-aliases-package: com.jam.demo.entity
  configuration:
    map-underscore-to-camel-case: true
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
```
The essence of MySQL replication is binlog shipping and replay; understanding the full pipeline is the prerequisite for diagnosing every replication problem. Choosing among the three replication modes is a trade-off between consistency and performance: prefer lossless semi-sync for critical business data, and MGR where financial-grade consistency is required.
The root tension behind replication lag is the gap between the master's parallel writes and the replica's largely serial replay. The cure is to generate less unnecessary binlog at the source, maximize replay parallelism on the replica, and cut the resource cost of each replay step. In production, roughly 90% of lag cases are resolved by three measures: enforce a primary key on every table, split large transactions, and enable parallel replication.