
Give MySQL 100 Million Rows, and Push the Join Query Down to 4.2s

There's a Trap Here

The first thing that flashed through Tudou's mind was paginated queries, i.e. LIMIT-based paging. But if the interviewer saves this question for the final round, he clearly wants you to jump into a pit; once he has you tangled up in it, he can squeeze your salary down a bit more.

Breaking Through

In the Alibaba Java Development Manual, Tudou found this passage:

[Screenshot: the relevant passage from the Alibaba Java Development Manual]
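Judging from the context, the passage is the manual's well-known advice on deep pagination: with a huge OFFSET, MySQL reads and throws away every skipped row, so you should locate the page boundary through an index first and only then fetch the rows. A minimal sketch of the idea against the student table used later in this post (the manual's exact wording is not reproduced here; this is just an illustration):

-- naive deep page: the server reads 10,000,010 rows and discards 10,000,000
select * from student order by s_id limit 10000000, 10;

-- delayed association: walk the primary-key index to find the 10 target ids,
-- then join back for the full rows
select s.*
from student s
join (select s_id from student order by s_id limit 10000000, 10) t
  on s.s_id = t.s_id;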

Hands-On Time

Test environment:

VMware 10 + CentOS 7.4 + MySQL 5.7.22
CentOS 7 VM: 4.5 GB RAM, 4 cores, 50 GB disk.
MySQL configured with 2 GB; note in particular that the disk is an SSD.
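"MySQL configured with 2 GB" most likely refers to the InnoDB buffer pool; a minimal my.cnf sketch of such a setup (the exact setting is my assumption, not stated in the original):

[mysqld]
# cache for InnoDB data and index pages: the "2 GB" in the spec above
innodb_buffer_pool_size = 2G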


Setup Overview

Here Tudou will briefly introduce the test environment; veterans who already know all this can skip this section.

Most of our computers run Windows, which comes in versions such as Windows Server 2003, Windows 7, and Windows 10.
There are also operating systems ordinary users rarely touch, such as Linux, DOS, macOS, and Unix.


Among these, Linux distributions fall into two major families: Debian and Red Hat.

The Debian family mainly includes Debian, Ubuntu, Mint, and their derivatives;

the Red Hat family mainly includes Red Hat, Fedora, and CentOS.


Now some of you will ask: my computer runs Windows, do I really have to install CentOS on it? This is where VMware 10 comes in: create a virtual machine with VMware 10, install CentOS inside it, and the basic preparation is done.


For downloading CentOS, Tudou thinks this blog post covers it well:

https://blog.csdn.net/qq_39135287/article/details/83993574
https://vault.centos.org/

And this is the VMware 10 download:

VMware Workstation for Windows

For the installation walkthrough, Tudou found a decent blog post, so there's no need to repeat it here:

https://blog.csdn.net/weixin_45551608/article/details/115347481

As for installing MySQL itself, Tudou will skip that; if you need the installer, or run into problems with any of the three installs above, message Tudou directly.

Now, back to the main topic.

Getting Under 4.2s

First!!! This experiment needs at least 150 million rows to be worth doing.

Create 150 million rows with application code?



Even a full day of running code wouldn't be enough. So Tudou has written two database scripts that generate the 150 million rows quickly, one for MySQL and one for Oracle. Copy-paste warriors, help yourselves; all Tudou asks is a tap of the like button from your cute little hands.

1. MySQL

use stu;

-- student: 50 million rows
drop table if exists student;
create table student (
  s_id int(11) not null auto_increment,
  sno int(11),
  sname varchar(50),
  sage int(11),
  ssex varchar(8),
  father_id int(11),
  mather_id int(11),
  note varchar(500),
  primary key (s_id),
  unique key uk_sno (sno)
) engine=innodb default charset=utf8mb4;
truncate table student;

delimiter $$
drop function if exists insert_student_data $$
create function insert_student_data() returns int deterministic
begin
  declare i int;
  set i = 1;
  while i <= 50000000 do
    insert into student values (i, i, concat('name', i), i,
      case when floor(rand() * 10) % 2 = 0 then 'f' else 'm' end,
      floor(rand() * 100000), floor(rand() * 1000000), concat('note', i));
    set i = i + 1;
  end while;
  return 1;
end $$
delimiter ;
select insert_student_data();
select count(*) from student;

-- course: 1,000 rows
use stu;
drop table if exists course;
create table course (
  c_id int(11) not null auto_increment,
  cname varchar(50),
  note varchar(500),
  primary key (c_id)
) engine=innodb default charset=utf8mb4;
truncate table course;

delimiter $$
drop function if exists insert_course_data $$
create function insert_course_data() returns int deterministic
begin
  declare i int;
  set i = 1;
  while i <= 1000 do
    insert into course values (i, concat('course', i), concat('note', i));
    set i = i + 1;
  end while;
  return 1;
end $$
delimiter ;
select insert_course_data();
select count(*) from course;

-- sc: the enrollment relation; the generator is called twice,
-- giving 2 records per student and 100 million rows in total
use stu;
drop table if exists sc;
create table sc (
  s_id int(11),
  c_id int(11),
  t_id int(11),
  score int(11)
) engine=innodb default charset=utf8mb4;
truncate table sc;

delimiter $$
drop function if exists insert_sc_data $$
create function insert_sc_data() returns int deterministic
begin
  declare i int;
  set i = 1;
  while i <= 50000000 do
    insert into sc values (i, floor(rand() * 1000), floor(rand() * 10000000), floor(rand() * 750));
    set i = i + 1;
  end while;
  return 1;
end $$
delimiter ;
select insert_sc_data();
commit;
select insert_sc_data();
commit;
create index idx_s_id on sc(s_id);
create index idx_t_id on sc(t_id);
create index idx_c_id on sc(c_id);
select count(*) from sc;

-- teacher: 10 million rows
use stu;
drop table if exists teacher;
create table teacher (
  t_id int(11) not null auto_increment,
  tname varchar(50),
  note varchar(500),
  primary key (t_id)
) engine=innodb default charset=utf8mb4;
truncate table teacher;

delimiter $$
drop function if exists insert_teacher_data $$
create function insert_teacher_data() returns int deterministic
begin
  declare i int;
  set i = 1;
  while i <= 10000000 do
    insert into teacher values (i, concat('tname', i), concat('note', i));
    set i = i + 1;
  end while;
  return 1;
end $$
delimiter ;
select insert_teacher_data();
commit;
select count(*) from teacher;
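One practical note on running this script: each pass of the while loop is a single-row INSERT, and with autocommit on that means one commit (and one log flush) per row, 50 million times over. A sketch of how one might cut that cost (these are standard MySQL settings, not something the original post specifies):

-- batch an entire generator call into one transaction
set autocommit = 0;
select insert_student_data();
commit;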

2. Oracle


-- tablespaces and the scott user
create tablespace scott_data datafile '/home/oracle/oracle_space/sitpay1/scott_data.dbf' size 1024m autoextend on;
create tablespace scott_index datafile '/home/oracle/oracle_space/sitpay1/scott_index.dbf' size 64m autoextend on;
create temporary tablespace scott_temp tempfile '/home/oracle/oracle_space/sitpay1/scott_temp.dbf' size 64m autoextend on;
drop user scott cascade;
create user scott identified by tiger default tablespace scott_data temporary tablespace scott_temp;
grant resource, connect, dba to scott;

-- student: 50 batches of 1 million rows
drop table student;
create table student (
  s_id number(11),
  sno number(11),
  sname varchar2(50),
  sage number(11),
  ssex varchar2(8),
  father_id number(11),
  mather_id number(11),
  note varchar2(500)
) nologging;
truncate table student;
create or replace procedure insert_student_data
is
  q number(11);
begin
  q := 0;
  for i in 1..50 loop
    insert /*+append*/ into student
      select rownum + q as s_id, rownum + q as sno,
             concat('student', rownum + q) as sname,
             floor(dbms_random.value(1, 100)) as sage, 'f' as ssex,
             rownum + q as father_id, rownum + q as mather_id,
             concat('note', rownum + q) as note
      from dual connect by level <= 1000000;
    q := q + 1000000;
    commit;
  end loop;
end insert_student_data;
/
call insert_student_data();
alter table student add constraint pk_student primary key (s_id);
commit;
select count(*) from student;

-- course: 1,000 rows
create table course (
  c_id number(11) primary key,
  cname varchar2(50),
  note varchar2(500)
);
truncate table course;
create or replace procedure insert_course_data
is
begin
  for i in 1..1000 loop
    insert /*+append*/ into course values (i, concat('name', i), concat('note', i));
  end loop;
end insert_course_data;
/
call insert_course_data();
commit;
select count(*) from course;

-- sc: 50 batches of 1 million enrollment rows
create table sc (
  s_id number(11),
  c_id number(11),
  t_id number(11),
  score number(11)
) nologging;
truncate table sc;
create or replace procedure insert_sc_data
is
  q number(11);
begin
  q := 0;
  for i in 1..50 loop
    insert /*+append*/ into sc
      select rownum + q as s_id,
             floor(dbms_random.value(0, 1000)) as c_id,
             floor(dbms_random.value(0, 10000000)) as t_id,
             floor(dbms_random.value(0, 750)) as score
      from dual connect by level <= 1000000;
    q := q + 1000000;
    commit;
  end loop;
end insert_sc_data;
/
call insert_sc_data();
create index idx_s_id on sc(s_id);
create index idx_t_id on sc(t_id);
create index idx_c_id on sc(c_id);
select count(*) from sc;

-- teacher: 10 batches of 1 million rows
create table teacher (
  t_id number(11),
  tname varchar2(50),
  note varchar2(500)
) nologging;
truncate table teacher;
create or replace procedure insert_teacher_data
is
  q number(11);
begin
  q := 0;
  for i in 1..10 loop
    insert /*+append*/ into teacher
      select rownum + q as t_id,
             concat('teacher', rownum + q) as tname,
             concat('note', rownum + q) as note
      from dual connect by level <= 1000000;
    q := q + 1000000;
    commit;
  end loop;
end insert_teacher_data;
/
call insert_teacher_data();
alter table teacher add constraint pk_teacher primary key (t_id);
select count(*) from teacher;
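Since this script creates tablespaces and the scott user, it has to start under a DBA account. Assuming it's saved as gen_data.sql (a filename chosen here purely for illustration), it can be run from SQL*Plus like so:

sqlplus / as sysdba
SQL> @/home/oracle/gen_data.sql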
The overall schema is: a student table, a teacher table, a course table, and an sc relation table.
The test query: find the name and score of the highest-scoring student in the courses taught by 'tname333'.
The SQL query is:


select student.sname, course.cname, score
from student, sc, course, teacher
where student.s_id = sc.s_id
  and sc.c_id = course.c_id
  and sc.t_id = teacher.t_id
  and teacher.tname = 'tname333'
  and sc.score = (select max(score) from sc where sc.t_id = teacher.t_id);

Tudou's take: it's an implicit inner join over 4 tables plus one subquery. Compared with queries that casually join a dozen-plus tables, this is quite basic. Tudou decomposes it into 3 SQL statements:


-- step 1: the max score among courses taught by tname333
select max(score)
from sc, teacher
where sc.t_id = teacher.t_id
  and teacher.tname = 'tname333';

-- step 2: the enrollment rows hitting that max (665 is the value step 1 returned)
select sc.t_id, sc.s_id, score
from sc, teacher
where sc.t_id = teacher.t_id
  and teacher.tname = 'tname333'
  and score = 665;

-- step 3: resolve names and course titles for the s_id values step 2 returned
select student.sname, course.cname, score
from student, sc, course
where student.s_id = sc.s_id
  and sc.s_id in (2078635200, 564600020, 45693210050)
  and course.c_id = sc.c_id;
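Whether this decomposition pays off depends on the optimizer actually using idx_t_id; tname itself is unindexed, so one would expect a full scan of teacher plus ref lookups into sc. A quick way to check the plan (standard MySQL syntax, not from the original post):

explain
select max(score)
from sc, teacher
where sc.t_id = teacher.t_id
  and teacher.tname = 'tname333';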

Then Tudou built a first, smaller data set: 10 million score records (each student taking 2 courses), 5 million students, 1 million teachers (so roughly 5 students per teacher), and 1,000 courses.
The sc table was tested both with and without indexes; see the sketch after this paragraph for toggling them.
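For the no-index runs, the three secondary indexes on sc can be dropped and rebuilt, each in a single ALTER (index names taken from the script above):

-- strip the secondary indexes before the "no index" measurements
alter table sc
  drop index idx_s_id,
  drop index idx_t_id,
  drop index idx_c_id;

-- rebuild them before the indexed measurements
alter table sc
  add index idx_s_id (s_id),
  add index idx_t_id (t_id),
  add index idx_c_id (c_id);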

Next, I'll build 100 million enrollment records (each student again taking 2 courses), 50 million students, 10 million teachers, and 1,000 courses, and run the statements above against them. Finally, I'll run the same statements on the Oracle database.


Let's look at contestant number one, MySQL:

[Screenshot: MySQL execution timings]

0.09 seconds per statement, i.e. 90-millisecond speed, with the whole decomposed query finishing in 4.2 seconds. Push past 150 million rows, though, and it simply died.

Next, let's watch Oracle's star performance:

[Screenshot: Oracle execution timings]

MySQL ran a whole night without producing a result. Oracle, with no indexes and a join-heavy query, spat out the answer in 26 seconds. Clearly, MySQL's join implementation has room to improve.
(I just knew you'd like, follow, and tap "Looking".)

This account will keep posting about internet technology and related stories, and will also share some accounting knowledge and anecdotes Tudou finds valuable (Tudou majored in accounting).

If this article helps you, tap "Looking" or share it. Thanks to all you handsome guys and lovely ladies for supporting Tudou.