Differences Between varchar(50) and varchar(500) in MySQL
問(wèn)題
我們?cè)谠O(shè)計(jì)表結(jié)構(gòu)的時(shí)候,設(shè)計(jì)規(guī)范里面有一條如下規(guī)則:對(duì)于可變長(zhǎng)度的字段,在滿(mǎn)足條件的前提下,盡可能使用較短的變長(zhǎng)字段長(zhǎng)度。為什么這么規(guī)定,主要基于兩個(gè)方面
基于存儲(chǔ)空間的考慮
基于性能的考慮
網(wǎng)上說(shuō)Varchar(50)和varchar(500)存儲(chǔ)空間上是一樣的,真的是這樣嗎?基于性能考慮,是因?yàn)檫^(guò)長(zhǎng)的字段會(huì)影響到查詢(xún)性能?
本文我將帶著這兩個(gè)問(wèn)題探討驗(yàn)證一下:
驗(yàn)證存儲(chǔ)空間的區(qū)別
1、準(zhǔn)備兩張表
CREATE TABLE `category_info_varchar_50` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `name` varchar(50) NOT NULL COMMENT 'category name',
  `is_show` tinyint(4) NOT NULL DEFAULT '0' COMMENT 'visible: 0 disabled, 1 enabled',
  `sort` int(11) NOT NULL DEFAULT '0' COMMENT 'sort order',
  `deleted` tinyint(1) DEFAULT '0' COMMENT 'deleted flag',
  `create_time` datetime NOT NULL COMMENT 'creation time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`) USING BTREE,
  KEY `idx_name` (`name`) USING BTREE COMMENT 'name index'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='category';

CREATE TABLE `category_info_varchar_500` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `name` varchar(500) NOT NULL COMMENT 'category name',
  `is_show` tinyint(4) NOT NULL DEFAULT '0' COMMENT 'visible: 0 disabled, 1 enabled',
  `sort` int(11) NOT NULL DEFAULT '0' COMMENT 'sort order',
  `deleted` tinyint(1) DEFAULT '0' COMMENT 'deleted flag',
  `create_time` datetime NOT NULL COMMENT 'creation time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`) USING BTREE,
  KEY `idx_name` (`name`) USING BTREE COMMENT 'name index'
) ENGINE=InnoDB AUTO_INCREMENT=288135 DEFAULT CHARSET=utf8mb4 COMMENT='category';
2、準(zhǔn)備數(shù)據(jù)
給每張表插入相同的數(shù)據(jù),為了凸顯不同,插入100萬(wàn)條數(shù)據(jù)
DELIMITER $$
CREATE PROCEDURE batchInsertData(IN total INT)
BEGIN
  DECLARE start_idx INT DEFAULT 1;
  DECLARE end_idx INT;
  DECLARE batch_size INT DEFAULT 500;
  DECLARE insert_values TEXT;
  SET end_idx = LEAST(total, start_idx + batch_size - 1);
  WHILE start_idx <= total DO
    SET insert_values = '';
    WHILE start_idx <= end_idx DO
      SET insert_values = CONCAT(insert_values, CONCAT('(\'name', start_idx, '\', 0, 0, 0, NOW(), NOW()),'));
      SET start_idx = start_idx + 1;
    END WHILE;
    SET insert_values = LEFT(insert_values, LENGTH(insert_values) - 1); -- Remove the trailing comma
    SET @sql = CONCAT('INSERT INTO category_info_varchar_50 (name, is_show, sort, deleted, create_time, update_time) VALUES ', insert_values, ';');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    SET @sql = CONCAT('INSERT INTO category_info_varchar_500 (name, is_show, sort, deleted, create_time, update_time) VALUES ', insert_values, ';');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    SET end_idx = LEAST(total, start_idx + batch_size - 1);
  END WHILE;
END$$
DELIMITER ;

CALL batchInsertData(1000000);
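The procedure's batching logic can be sketched in Python for clarity (the helper name `build_batches` is mine, not part of the article): split 1..total into batches of 500 and build one multi-row INSERT statement per batch, so the client issues 2,000 statements instead of 1,000,000.

```python
def build_batches(total: int, batch_size: int = 500) -> list[str]:
    """Mirror of the stored procedure's loop: one multi-row
    VALUES clause per batch of up to batch_size rows."""
    batches = []
    start = 1
    while start <= total:
        end = min(total, start + batch_size - 1)
        # Build the "(name, 0, 0, 0, NOW(), NOW())" tuples for this batch
        values = ",".join(
            f"('name{i}', 0, 0, 0, NOW(), NOW())" for i in range(start, end + 1)
        )
        batches.append(
            "INSERT INTO category_info_varchar_50 "
            "(name, is_show, sort, deleted, create_time, update_time) "
            f"VALUES {values};"
        )
        start = end + 1
    return batches

stmts = build_batches(1200)
print(len(stmts))  # → 3 (batches of 500, 500, and 200 rows)
```

Batching like this amortizes the per-statement round-trip and parse cost, which is why the procedure prepares one concatenated statement per 500 rows rather than inserting row by row.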
3、驗(yàn)證存儲(chǔ)空間
查詢(xún)第一張表SQL
SELECT
  table_schema AS "database",
  table_name AS "table",
  table_rows AS "rows",
  TRUNCATE(data_length / 1024 / 1024, 2) AS "data size (MB)",
  TRUNCATE(index_length / 1024 / 1024, 2) AS "index size (MB)"
FROM information_schema.TABLES
WHERE table_schema = 'test_mysql_field'
  AND TABLE_NAME = 'category_info_varchar_50'
ORDER BY data_length DESC, index_length DESC;
查詢(xún)結(jié)果
查詢(xún)第二張表SQL
SELECT
  table_schema AS "database",
  table_name AS "table",
  table_rows AS "rows",
  TRUNCATE(data_length / 1024 / 1024, 2) AS "data size (MB)",
  TRUNCATE(index_length / 1024 / 1024, 2) AS "index size (MB)"
FROM information_schema.TABLES
WHERE table_schema = 'test_mysql_field'
  AND TABLE_NAME = 'category_info_varchar_500'
ORDER BY data_length DESC, index_length DESC;
查詢(xún)結(jié)果
4、結(jié)論
兩張表在占用空間上確實(shí)是一樣的,并無(wú)差別。
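A small Python sketch of InnoDB's VARCHAR byte accounting helps explain why (the function name is mine): a VARCHAR value is stored as its actual encoded bytes plus a 1- or 2-byte length prefix, so identical content costs identical payload bytes regardless of the declared length. The one per-row difference is the length prefix, which becomes 2 bytes once the column's maximum possible byte length (characters × 4 for utf8mb4) exceeds 255 — far too small to register in the MB-level statistics above.

```python
def varchar_stored_bytes(value: str, declared_len: int, bytes_per_char: int = 4) -> int:
    """Approximate on-disk bytes for one VARCHAR value in InnoDB:
    actual encoded string + a 1- or 2-byte length prefix.
    The prefix is 2 bytes only when the column's MAXIMUM byte length
    (declared_len * bytes_per_char, 4 for utf8mb4) exceeds 255."""
    max_bytes = declared_len * bytes_per_char
    prefix = 1 if max_bytes <= 255 else 2
    return prefix + len(value.encode("utf8"))

print(varchar_stored_bytes("name100000", 50))   # → 11 (1-byte prefix + 10 bytes)
print(varchar_stored_bytes("name100000", 500))  # → 12 (2-byte prefix + 10 bytes)
```

So "same storage" holds for the payload; the declared length only decides the prefix width, not how much of the string is stored.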
驗(yàn)證性能區(qū)別
1、驗(yàn)證索引覆蓋查詢(xún)
select name from category_info_varchar_50 where name = 'name100000';   -- 0.012s
select name from category_info_varchar_500 where name = 'name100000';  -- 0.012s
select name from category_info_varchar_50 order by name;               -- 0.370s
select name from category_info_varchar_500 order by name;              -- 0.379s
通過(guò)索引覆蓋查詢(xún)性能差別不大
2、驗(yàn)證索引查詢(xún)
select * from category_info_varchar_50 where name = 'name100000';   -- 0.012s
select * from category_info_varchar_500 where name = 'name100000';  -- 0.012s

select * from category_info_varchar_50 where name in (
  'name100','name1000','name100000','name10000','name1100000',
  'name200','name2000','name200000','name20000','name2200000',
  'name300','name3000','name300000','name30000','name3300000',
  'name400','name4000','name400000','name40000','name4400000',
  'name500','name5000','name500000','name50000','name5500000',
  'name600','name6000','name600000','name60000','name6600000',
  'name700','name7000','name700000','name70000','name7700000',
  'name800','name8000','name800000','name80000','name6600000',
  'name900','name9000','name900000','name90000','name9900000'
);
-- 0.011s–0.014s; with ORDER BY name added: 0.012s–0.015s

select * from category_info_varchar_500 where name in (
  'name100','name1000','name100000','name10000','name1100000',
  'name200','name2000','name200000','name20000','name2200000',
  'name300','name3000','name300000','name30000','name3300000',
  'name400','name4000','name400000','name40000','name4400000',
  'name500','name5000','name500000','name50000','name5500000',
  'name600','name6000','name600000','name60000','name6600000',
  'name700','name7000','name700000','name70000','name7700000',
  'name800','name8000','name800000','name80000','name6600000',
  'name900','name9000','name900000','name90000','name9900000'
);
-- 0.012s–0.014s; with ORDER BY name added: 0.014s–0.017s
Index range queries perform essentially the same; once ORDER BY is added, a small performance gap appears.
3. Full-table scans and sorting
Full scan without sorting:
Full scan with sorting:
select * from category_info_varchar_50 order by name;   -- 1.498s
select * from category_info_varchar_500 order by name;  -- 4.875s
結(jié)論:
全表掃描無(wú)排序情況下,兩者性能無(wú)差異,在全表有排序的情況下, 兩種性能差異巨大;
Analyzing the cause
Execution analysis of the full-table sort on varchar(50)
I found that 86% of the time was spent on data transfer. Next, look at the status section, focusing on Created_tmp_files and Sort_merge_passes:
Created_tmp_files is 3
Sort_merge_passes is 95
Execution analysis of the full-table sort on varchar(500)
An extra temporary-table sort appears.
Created_tmp_files is 4
Sort_merge_passes is 645
關(guān)于sort_merge_passes, Mysql給出了如下描述:
Number of merge passes that the sort algorithm has had to do. If this value is large, you may want to increase the value of the sort_buffer_size.
其實(shí)sort_merge_passes對(duì)應(yīng)的就是MySQL做歸并排序的次數(shù),也就是說(shuō),如果sort_merge_passes值比較大,說(shuō)明sort_buffer和要排序的數(shù)據(jù)差距越大,我們可以通過(guò)增大sort_buffer_size或者讓填入sort_buffer_size的鍵值對(duì)更小來(lái)緩解sort_merge_passes歸并排序的次數(shù)。
最終結(jié)論
至此,我們不難發(fā)現(xiàn),當(dāng)我們最該字段進(jìn)行排序操作的時(shí)候,Mysql會(huì)根據(jù)該字段的設(shè)計(jì)的長(zhǎng)度進(jìn)行內(nèi)存預(yù)估,如果設(shè)計(jì)過(guò)大的可變長(zhǎng)度,會(huì)導(dǎo)致內(nèi)存預(yù)估的值超出sort_buffer_size的大小,導(dǎo)致mysql采用磁盤(pán)臨時(shí)文件排序,最終影響查詢(xún)性能。
相關(guān)文章
sql format()函數(shù)的用法及簡(jiǎn)單實(shí)例
下面小編就為大家?guī)?lái)一篇sql format函數(shù)()的用法及簡(jiǎn)單實(shí)例。小編覺(jué)得挺不錯(cuò)的,現(xiàn)在就分享給大家,也給大家做個(gè)參考。一起跟隨小編過(guò)來(lái)看看吧2016-05-05Mysql5.7定時(shí)備份的實(shí)現(xiàn)
這篇文章主要介紹了Mysql5.7定時(shí)備份的實(shí)現(xiàn),文中通過(guò)示例代碼介紹的非常詳細(xì),對(duì)大家的學(xué)習(xí)或者工作具有一定的參考學(xué)習(xí)價(jià)值,需要的朋友們下面隨著小編來(lái)一起學(xué)習(xí)學(xué)習(xí)吧2020-11-11如何測(cè)試mysql觸發(fā)器和存儲(chǔ)過(guò)程
本文將詳細(xì)介紹怎樣mysql觸發(fā)器和存儲(chǔ)過(guò)程,需要了解的朋友可以詳細(xì)參考下2012-11-11Mysql 5.7.18 解壓版下載安裝及啟動(dòng)mysql服務(wù)的圖文詳解
這篇文章主要介紹了Mysql 5.7.18 解壓版下載安裝及啟動(dòng)mysql服務(wù)的圖文詳解,非常不錯(cuò),具有參考借鑒價(jià)值,需要的朋友可以參考下2017-05-05mysql5.7.42到mysql8.2.0的升級(jí)(rpm方式)
隨著數(shù)據(jù)量的增長(zhǎng)和業(yè)務(wù)需求的變更,我們可能需要升級(jí)MySQL,本文主要介紹了mysql5.7.42到mysql8.2.0的升級(jí)(rpm方式),具有一定的參考價(jià)值,感興趣的可以了解一下2024-03-03mysql報(bào)錯(cuò):MySQL server version for the right syntax to use nea
這篇文章主要介紹了mysql報(bào)錯(cuò):MySQL server version for the right syntax to use near type=InnoDB的解決方法,涉及MySQL語(yǔ)句的使用技巧,需要的朋友可以參考下2016-01-01很全面的MySQL處理重復(fù)數(shù)據(jù)代碼
這篇文章主要為大家詳細(xì)介紹了MySQL處理重復(fù)數(shù)據(jù)的實(shí)現(xiàn)代碼,如何防止數(shù)據(jù)表出現(xiàn)重復(fù)數(shù)據(jù)及如何刪除數(shù)據(jù)表中的重復(fù)數(shù)據(jù),感興趣的小伙伴們可以參考一下2016-05-05mysql觸發(fā)器實(shí)時(shí)檢測(cè)一條語(yǔ)句進(jìn)行備份刪除思路詳解
遇到過(guò)這樣一個(gè)需求,在一張表里會(huì)不時(shí)出現(xiàn) “違規(guī)” 字樣的字段,需要在出現(xiàn)這個(gè)字段的時(shí)候,把整行的數(shù)據(jù)刪掉,針對(duì)這個(gè)需求我們?cè)撊绾尾僮髂?,下面跟隨小編看下mysql觸發(fā)器實(shí)時(shí)檢測(cè)一條語(yǔ)句進(jìn)行備份刪除的解決思路,一起看看吧2021-09-09