
List Pagination Query Optimization and Cache Optimization


Pagination is a basic feature and an indispensable part of any CRUD workload. So what should we pay attention to when building a paginated query API?



01
Pagination Query Optimization
Description:
     User table: user(uid, name, age, create_time, update_time)

As the table above suggests, when the user count is small it hardly matters how you query or whether an index exists. Once the data grows to tens of millions or hundreds of millions of rows, the advantage of an index becomes obvious.
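
For reference, here is a table definition consistent with the EXPLAIN output below (a BIGINT primary key on uid matches key_len = 8). The post only lists the column names, so the types and engine here are assumptions:

-- Assumed DDL: the column names come from the post, the types and engine are illustrative only
CREATE TABLE user (
    uid         BIGINT       NOT NULL AUTO_INCREMENT,
    name        VARCHAR(64)  NOT NULL DEFAULT '',
    age         TINYINT      NOT NULL DEFAULT 0,
    create_time DATETIME     NOT NULL,
    update_time DATETIME     NOT NULL,
    PRIMARY KEY (uid)
) ENGINE = InnoDB;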

A typical pagination SQL looks like this:

select uid,name,age,create_time,update_time
from user
order by uid asc
limit 0,20

// Elapsed: 20 rows in set (0.00 sec)
// explain
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------+
|  1 | SIMPLE      | user  | NULL       | index | NULL          | PRIMARY | 8       | NULL |   20 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------+


When we page to page 500:

select uid,name,age,create_time,update_time
from user
order by uid asc
limit 10000,20

// Elapsed: 20 rows in set (0.02 sec)
// explain
+----+-------------+-------+------------+-------+---------------+---------+---------+------+-------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows  | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+-------+----------+-------+
|  1 | SIMPLE      | user  | NULL       | index | NULL          | PRIMARY | 8       | NULL | 10020 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+-------+----------+-------+


When we page to page 5000:

select uid,name,age,create_time,update_time
from user
order by uid asc
limit 100000,20

// Elapsed: 20 rows in set (0.08 sec)
// explain
+----+-------------+-------+------------+-------+---------------+---------+---------+------+--------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows   | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+--------+----------+-------+
|  1 | SIMPLE      | user  | NULL       | index | NULL          | PRIMARY | 8       | NULL | 100020 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+--------+----------+-------+


When we page to page 50000:

select uid,name,age,create_time,update_time
from user
order by uid asc
limit 1000000,20

// Elapsed: 20 rows in set (0.78 sec)
// explain
+----+-------------+-------+------------+-------+---------------+---------+---------+------+---------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows    | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+---------+----------+-------+
|  1 | SIMPLE      | user  | NULL       | index | NULL          | PRIMARY | 8       | NULL | 1000020 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+---------+----------+-------+


From the operations and analysis above we can draw the following conclusions:
Paging query          Elapsed     Rows scanned
limit 0,20            0.00 sec    20
limit 10000,20        0.02 sec    10020
limit 100000,20       0.08 sec    100020
limit 1000000,20      0.78 sec    1000020


As the offset grows, both the number of rows the query touches and the elapsed time increase sharply. We can therefore optimize by switching to a cursor-based query:



For example, suppose 20 rows have already been returned. We take the last of those 20 rows from the previous page as the uid cursor (different business tables can use different cursor columns, chosen according to the actual scenario) and query from there. Each request then touches only 20 rows, no matter how deep the page is, which improves the overall pagination efficiency.

select uid,name,age,create_time,update_time
from user
where uid > 20
order by uid asc
limit 0,20
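
On the application side this usually means a mapper method that takes the cursor and the page size, and a caller that feeds the last uid of each returned page back in as the next cursor. The sketch below is an assumption rather than code from the post: the names UserCursorMapper and listAfterCursor are illustrative, the User entity is the one used in section 02, and a MyBatis setup with map-underscore-to-camel-case enabled is presumed.

import java.util.List;

import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Hypothetical mapper backing the cursor query shown above
@Mapper
public interface UserCursorMapper {

    // Only the next `limit` rows after the cursor are read; the offset stays at 0
    @Select("select uid,name,age,create_time,update_time from user "
            + "where uid > #{cursor} order by uid asc limit #{limit}")
    List<User> listAfterCursor(@Param("cursor") long cursor, @Param("limit") int limit);
}

A caller then walks the table page by page (again illustrative):

long cursor = 0L;   // first page: start before the smallest uid
int pageSize = 20;
List<User> page;
do {
    page = userCursorMapper.listAfterCursor(cursor, pageSize);
    // ... process the page ...
    if (!page.isEmpty()) {
        // the last uid of this page becomes the cursor for the next request
        cursor = page.get(page.size() - 1).getUid();
    }
} while (page.size() == pageSize);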



02
Pagination Query Caching
Description:
     User table: user(uid, name, age, create_time, update_time)

The first section optimized the SQL at the data layer; how do we optimize the cache for the list? A Redis zset (sorted set) is a natural fit here: store each row with the cursor value as its score, and the list can then be stored and queried in either ascending or descending score order.


First, the key zset helper methods:

@Autowired
private RedisTemplate<String, String> redisTemplate;

// Batch-insert zset entries: each value is the JSON-serialized object, each score is the uid
public void addSortedList(String key, Set<ZSetOperations.TypedTuple<String>> typedTuple) {
    if (ObjectUtils.isEmpty(typedTuple)) {
        return;
    }
    redisTemplate.opsForZSet().add(key, typedTuple);
    redisTemplate.expire(key, defaultExpiryTime, TimeUnit.SECONDS);
}

// Fetch entries in ascending score order,
// limited to scores within the inclusive range [min, max]
public Set<String> rangeByScore(String key, double min, double max, long offset, long count) {
    return redisTemplate.opsForZSet().rangeByScore(key, min, max, offset, count);
}

// Fetch entries in descending score order,
// limited to scores within the inclusive range [min, max]
public Set<String> reverseRangeByScore(String key, double min, double max, long offset, long count) {
    return redisTemplate.opsForZSet().reverseRangeByScore(key, min, max, offset, count);
}

// Remove entries whose score falls within [min, max];
// to remove a single score, pass min == max
public void removeRangeByScore(String key, double min, double max) {
    redisTemplate.opsForZSet().removeRangeByScore(key, min, max);
}

Next, how the cache-reading logic looks when querying the user list: fetch 20 users in descending uid order.

// If the client does not pass a uid, it defaults to Long.MAX_VALUE
List<User> userList = new ArrayList<>(32);
int limit = 0;
if (Long.MAX_VALUE == userVO.getUid()) {
    limit = userVO.getLimit();
} else {
    // the cursor row itself falls inside the score range, so fetch one extra row
    limit = userVO.getLimit() + 1;
}
Set<String> userRedis = redisService.reverseRangeByScore("passport:user",
        0, userVO.getUid(), 0, limit);
if (!ObjectUtils.isEmpty(userRedis)) {
    for (String userStr : userRedis) {
        User user = JSON.parseObject(userStr, User.class);
        // skip the cursor row, which was already returned on the previous page
        if (!ObjectUtils.isEmpty(user) && !user.getUid().equals(userVO.getUid())) {
            userList.add(user);
        }
    }
}

// Work out how many rows still have to come from the DB, and from which cursor
long cursor = userVO.getUid();
// If the cache already holds enough rows, return them; otherwise fall back to the DB
if (userVO.getLimit().equals(userList.size())) {
    return userList;
} else if (ObjectUtils.isEmpty(userList)) {
    limit = userVO.getLimit();
} else if (userVO.getLimit() > userList.size()) {
    limit = userVO.getLimit() - userList.size();
    User last = userList.get(userList.size() - 1);
    cursor = last.getUid();
}

List<User> userDBList = slaveMapper.getUserList(userVO.getUid(), cursor, limit);
if (!ObjectUtils.isEmpty(userDBList)) {
    userList.addAll(userDBList);
    // Write the rows fetched from the DB back into the zset cache
    Set<ZSetOperations.TypedTuple<String>> zSetValue = new HashSet<>(32);
    for (User u : userDBList) {
        zSetValue.add(new DefaultTypedTuple<>(JSON.toJSONString(u), Double.valueOf(u.getUid())));
    }
    redisService.addSortedList("passport:user", zSetValue);
}
return userList;
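
The snippet above falls back to slaveMapper.getUserList(uid, cursor, limit), which the post does not show. One possible shape, purely as an assumption, is sketched below; the WHERE clause mirrors the cache semantics above (rows strictly below the cursor, newest first), and the first parameter is kept only as the client-supplied upper bound:

import java.util.List;

import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Hypothetical definition of the slaveMapper used above
@Mapper
public interface UserSlaveMapper {

    // uid: upper bound requested by the client; cursor: where the cached portion stopped
    @Select("select uid,name,age,create_time,update_time from user "
            + "where uid <= #{uid} and uid < #{cursor} "
            + "order by uid desc limit #{limit}")
    List<User> getUserList(@Param("uid") long uid,
                           @Param("cursor") long cursor,
                           @Param("limit") int limit);
}

Rows fetched this way are written back into the passport:user zset above, so subsequent requests for the same range can be served from Redis.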


Note: for Redis key management, it is recommended to put the keys in a static constants class and generate them according to a fixed rule, which makes later maintenance much easier. A common key pattern is shown here (a sketch of such a class follows below):

system:module:[primary key]

passport:user:1

The [primary key] segment is appended when the key refers to one specific primary-key value.
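
A minimal sketch of such a constants class, assuming the passport:user key used above (the class and method names are illustrative, not from the post):

// Central place for Redis key construction; keeps the system:module:[primary key] rule in one spot
public final class RedisKeys {

    private RedisKeys() {
    }

    /** system:module - the zset holding the paged user list */
    public static final String PASSPORT_USER = "passport:user";

    /** system:module:[primary key] - the key for one specific user, e.g. "passport:user:1" */
    public static String passportUser(long uid) {
        return PASSPORT_USER + ":" + uid;
    }
}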






Appendix:
The shared pagination request-parameter entity and the unified response entity are shown below; each business module's VO class can extend the request entity:


Pagination request entity (QueryVO):

public class QueryVO implements Serializable {
private static final long serialVersionUID = 1L;
/** Default paging - maximum page size */ private static final int MAX_PAGE_SIZE = 10000;
/** IDs for batch operations */ private List<String> ids;
/** Current page number */ private Integer curPage;
/** Page size */ private Integer pageSize;
/** Sort column names */ private List<String> columns;
/** Ascending or not; defaults to descending */ private Boolean asc;
/** Start time */ private Long startTime;
/** End time */ private Long endTime;
public Integer getCurPage() { if (null == curPage || curPage < 1) { curPage = 1; } return curPage; }
public void setCurPage(Integer curPage) { if (null == curPage || curPage < 1) { curPage = 1; } this.curPage = curPage; }
public Integer getPageSize() { if (null == pageSize || pageSize < 0 || pageSize > MAX_PAGE_SIZE) { pageSize = 10; } return pageSize; }
public void setPageSize(Integer pageSize) { this.pageSize = pageSize; }
public List<String> getColumns() { return columns; }
public void setColumns(List<String> columns) { this.columns = columns; }
public Boolean getAsc() { return asc; }
public void setAsc(Boolean asc) { this.asc = asc; }
public Long getStartTime() { return startTime; }
public void setStartTime(Long startTime) { this.startTime = startTime; }
public Long getEndTime() { return endTime; }
public void setEndTime(Long endTime) { this.endTime = endTime; }
public List<String> getIds() { return ids; }
public void setIds(List<String> ids) { this.ids = ids; }
@Override
public String toString() {
    return super.toString() + "QueryVO{" +
            "ids=" + ids +
            ", curPage=" + curPage +
            ", pageSize=" + pageSize +
            ", columns=" + columns +
            ", asc=" + asc +
            ", startTime=" + startTime +
            ", endTime=" + endTime +
            '}';
}
}
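
The cache-reading code in section 02 calls userVO.getUid() and userVO.getLimit(), which are not fields of QueryVO itself. A business VO extending QueryVO along the following lines is assumed there (the field names match that usage, the defaults are illustrative):

public class UserVO extends QueryVO {

    /** Cursor uid; Long.MAX_VALUE means no cursor was passed, i.e. start from the newest user */
    private Long uid = Long.MAX_VALUE;

    /** Number of rows requested for this page */
    private Integer limit = 20;

    public Long getUid() { return uid; }
    public void setUid(Long uid) { this.uid = uid; }
    public Integer getLimit() { return limit; }
    public void setLimit(Integer limit) { this.limit = limit; }
}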


Pagination response entity (ResponsePage):
public class ResponsePage<T> {

/** The list of objects returned by the query */ private List<T> list;
/** Page the returned data belongs to */ private int page;
/** Rows per page */ private int limit;
/** Number of rows returned in this response */ private int count;
/** Estimated total number of rows on the server */ private long total;
/** Sort field */ private String orderField;


public ResponsePage() { }
public ResponsePage(List<T> list, int page, int limit, int count, long total) {
    this.list = list;
    this.page = page;
    this.limit = limit;
    this.count = count;
    this.total = total;
}
public List<T> getList() { return list; }
public void setList(List<T> list) { this.list = list; }
public int getPage() { return page; }
public void setPage(int page) { this.page = page; }
public int getLimit() { return limit; }
public void setLimit(int limit) { this.limit = limit; }
public int getCount() { return count; }
public void setCount(int count) { this.count = count; }
public long getTotal() { return total; }
public void setTotal(long total) { this.total = total; }
public String getOrderField() { return orderField; }
public void setOrderField(String orderField) { this.orderField = orderField; }
@Override
public String toString() {
    return "ResponsePage{" +
            "list=" + list +
            ", page=" + page +
            ", limit=" + limit +
            ", count=" + count +
            ", total=" + total +
            ", orderField='" + orderField + '\'' +
            '}';
}
}
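
For completeness, one way the two entities can be wired around the cursor-paged query from section 02; the listUsers method, userService, and estimateUserTotal() below are hypothetical glue, not code from the post:

// Illustrative only: wrapping the cache-then-DB read shown earlier in the unified response entity
public ResponsePage<User> listUsers(UserVO userVO) {
    List<User> userList = userService.getUserList(userVO);
    ResponsePage<User> page = new ResponsePage<>(
            userList,
            userVO.getCurPage(),      // informational under cursor paging
            userVO.getLimit(),        // requested page size
            userList.size(),          // rows actually returned
            estimateUserTotal());     // server-side estimate for the total field
    page.setOrderField("uid");
    return page;
}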