
In-Depth Analysis of Redis Expired Keys and Memory Eviction Policies

 Updated: 2022-11-28 11:14:56   Author: 基層搬磚的Panda
Because Redis keeps its data in memory, and memory is a scarce and valuable resource, we usually set a time to live on data that is rarely used or only used once, so that memory is used more efficiently. This article takes a close look at expired keys and memory eviction policies in Redis; readers who need this material can use it as a reference.

The content below is based on Redis 6.2.6.

1. How Redis organizes its databases

The Redis server keeps all of its databases in the db array of the redisServer structure (src/server.h). Each entry of the db array is a redisDb structure (src/server.h), and each redisDb represents one database. By default Redis creates 16 databases.

1.1 The redisServer structure

struct redisServer {
    /* General */
    pid_t pid;                  /* Main process pid. */
    pthread_t main_thread_id;         /* Main thread id */
    ...
    redisDb *db;   // array of Redis databases
    ...
    int dbnum;     // number of configured databases
    ...
};

1.2 The redisDb structure

typedef struct redisDb {
    dict *dict;                 /* The keyspace for this DB */ // the keyspace: holds all key-value pairs in this database
    dict *expires;              /* Timeout of keys with a timeout set */
    dict *blocking_keys;        /* Keys with clients waiting for data (BLPOP)*/
    dict *ready_keys;           /* Blocked keys that received a PUSH */
    dict *watched_keys;         /* WATCHED keys for MULTI/EXEC CAS */
    int id;                     /* Database ID */
    long long avg_ttl;          /* Average TTL, just for stats */
    unsigned long expires_cursor; /* Cursor of the active expire cycle. */
    list *defrag_later;         /* List of key names to attempt to defrag one by one, gradually. */
} redisDb;

What each field means:

  • dict holds all of the key-value pairs in the database, and is therefore called the keyspace. Its keys are the database keys, each of which is a string object; its values are the database values, and each value can be any one of the five Redis object types.
  • expires, also called the expires dictionary, stores the expiration time of every key that has one. Its keys are pointers to key objects in the keyspace; its values are long long UNIX timestamps in milliseconds.
  • blocking_keys is rarely used; only blocking commands such as BLPOP and BRPOP create entries here.
  • ready_keys works together with blocking_keys: when one client blocks on BLPOP and another client pushes to the same key, Redis checks whether the key is present in blocking_keys and, if so, moves it into ready_keys so that the blocked client receives the data; see the sketch after this list.
  • watched_keys implements the WATCH command (the CAS check for MULTI/EXEC). It is rarely used in production because it can hurt Redis performance.
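A minimal two-terminal sketch of the BLPOP/PUSH interplay described above (the key name mylist and the value hello are made up for illustration):

Client A (blocks until data arrives or the 30-second timeout expires):

127.0.0.1:6379[1]> blpop mylist 30
1) "mylist"
2) "hello"
(2.35s)

Client B (run in a second terminal while client A is blocked):

127.0.0.1:6379[1]> lpush mylist hello
(integer) 1

As soon as client B pushes, the server finds mylist in blocking_keys, moves it into ready_keys, and client A's BLPOP returns.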

1.3 redisDb initialization

// src/server.c
void initServer(void) {
    int j;
    // ...
    server.db = zmalloc(sizeof(redisDb)*server.dbnum);
    // ...
    /* Create the Redis databases, and initialize other internal state. */
    for (j = 0; j < server.dbnum; j++) {
        server.db[j].dict = dictCreate(&dbDictType,NULL);
        server.db[j].expires = dictCreate(&dbExpiresDictType,NULL);
        server.db[j].expires_cursor = 0;
        server.db[j].blocking_keys = dictCreate(&keylistDictType,NULL);
        server.db[j].ready_keys = dictCreate(&objectKeyPointerValueDictType,NULL);
        server.db[j].watched_keys = dictCreate(&keylistDictType,NULL);
        server.db[j].id = j;
        server.db[j].avg_ttl = 0;
        server.db[j].defrag_later = listCreate();
        listSetFreeMethod(server.db[j].defrag_later,(void (*)(void*))sdsfree);
    }
   //...
}

2. Expired keys

2.1 Setting a key's expiration time

Redis clients can use the EXPIRE or PEXPIRE command to set a time to live (TTL) on a key; after the specified number of seconds or milliseconds has elapsed, the server automatically deletes the key whose remaining lifetime has reached zero. The TTL command returns a key's remaining time to live in seconds, and PTTL returns it in milliseconds.
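A minimal redis-cli session showing EXPIRE together with TTL/PTTL (the key name and value are made up for illustration):

127.0.0.1:6379[1]> set cache:token "abc123"
OK
127.0.0.1:6379[1]> expire cache:token 60
(integer) 1
127.0.0.1:6379[1]> ttl cache:token
(integer) 60
127.0.0.1:6379[1]> pttl cache:token
(integer) 59983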

You can also use SETEX to set a key and its expiration time in a single command:
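For example (key name made up for illustration; SETEX takes the TTL in seconds):

127.0.0.1:6379[1]> setex session:1001 120 "panda"
OK
127.0.0.1:6379[1]> ttl session:1001
(integer) 120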

If a key has no expiration time set, or its expiration time was removed with the PERSIST command, then TTL returns -1 for that key.
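A short session illustrating this (hypothetical key; note that TTL returns -2 for a key that does not exist at all):

127.0.0.1:6379[1]> set greeting "hello"
OK
127.0.0.1:6379[1]> ttl greeting
(integer) -1
127.0.0.1:6379[1]> expire greeting 100
(integer) 1
127.0.0.1:6379[1]> persist greeting
(integer) 1
127.0.0.1:6379[1]> ttl greeting
(integer) -1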

2.2 Determining whether a key is expired

As mentioned when we looked at the redisDb structure, the expires dictionary in redisDb stores the expiration time of every key that has one. To determine whether a key is expired:

  • Check whether the key exists in the expires dictionary; if it does, fetch its expiration time.
  • Compare it with the current UNIX timestamp: if the expiration time is less than the current timestamp, the key is expired; otherwise it is still valid.

2.3 Deletion strategies for expired keys

Lazy deletion: leave expired keys alone; every time a key is fetched from the keyspace, first check whether it has expired. If it has, delete it; otherwise return it normally.

This strategy is CPU-friendly but not memory-friendly: a key that has expired but is never accessed again will never be deleted, and its memory is never reclaimed.

Periodic deletion: every so often, scan the databases and delete the expired keys found there. How many databases to scan and how many expired keys to delete is controlled by the algorithm.

The Redis server uses a combination of these two strategies, which strikes a good balance between CPU usage and memory usage.

2.3.1 Implementation of lazy deletion

Lazy deletion is implemented by the expireIfNeeded function. Before executing any read or write command, Redis calls expireIfNeeded to check the key. If the key has expired, expireIfNeeded deletes the key-value pair; if not, it does nothing.

// db.c
int expireIfNeeded(redisDb *db, robj *key) {
    // If the key has not expired, do nothing and return 0
    if (!keyIsExpired(db,key)) return 0;
    /* If we are running in the context of a slave, instead of
     * evicting the expired key from the database, we return ASAP:
     * the slave key expiration is controlled by the master that will
     * send us synthesized DEL operations for expired keys.
     *
     * Still we try to return the right information to the caller,
     * that is, 0 if we think the key should be still valid, 1 if
     * we think the key is expired at this time. */
    if (server.masterhost != NULL) return 1;
    /* If clients are paused, we keep the current dataset constant,
     * but return to the client what we believe is the right state. Typically,
     * at the end of the pause we will properly expire the key OR we will
     * have failed over and the new primary will send us the expire. */
    if (checkClientPauseTimeoutAndReturnIfPaused()) return 1;
    /* Delete the expired key */
    deleteExpiredKeyAndPropagate(db,key);
    return 1;
}
/* Check if the key is expired. */
int keyIsExpired(redisDb *db, robj *key) {
    mstime_t when = getExpire(db,key);
    mstime_t now;
    // The key has no expiration time set
    if (when < 0) return 0; /* No expire for this key */
    /* Don't expire anything while loading. It will be done later. */
    // While the server is loading data, do not expire anything
    if (server.loading) return 0;
    // Determine the reference time 'now'
    /* If we are in the context of a Lua script, we pretend that time is
     * blocked to when the Lua script started. This way a key can expire
     * only the first time it is accessed and not in the middle of the
     * script execution, making propagation to slaves / AOF consistent.
     * See issue #1525 on Github for more information. */
    if (server.lua_caller) {
        now = server.lua_time_snapshot;
    }
    /* If we are in the middle of a command execution, we still want to use
     * a reference time that does not change: in that case we just use the
     * cached time, that we update before each call in the call() function.
     * This way we avoid that commands such as RPOPLPUSH or similar, that
     * may re-open the same key multiple times, can invalidate an already
     * open object in a next call, if the next call will see the key expired,
     * while the first did not. */
    else if (server.fixed_time_expire > 0) {
        now = server.mstime;
    }
    /* For the other cases, we want to use the most fresh time we have. */
    else {
        now = mstime();
    }
    /* The key expired if the current (virtual or real) time is greater
     * than the expire time of the key. */
    // If the current time is greater than the expire time, the key is expired: return 1 (true)
    return now > when;
}
/* Return the expire time of the specified key, or -1 if no expire
 * is associated with this key (i.e. the key is non volatile) */
// Look up the key's expiration time in the expires dictionary
long long getExpire(redisDb *db, robj *key) {
    dictEntry *de;
    /* No expire? return ASAP */
    // dictSize = ht[0].used + ht[1].used of the expires dict
    // If the key is not present in the expires dictionary, return -1 immediately
    if (dictSize(db->expires) == 0 ||
       (de = dictFind(db->expires,key->ptr)) == NULL) return -1;
    /* The entry was found in the expire dict, this means it should also
     * be present in the main dict (safety check). */
    serverAssertWithInfo(NULL,key,dictFind(db->dict,key->ptr) != NULL);
    // Otherwise return the key's expiration time as a UNIX timestamp in milliseconds
    return dictGetSignedIntegerVal(de);
}

2.3.2 Implementation of periodic deletion

Periodic deletion is implemented by the activeExpireCycle function (src/expire.c).

#define ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP 20 /* Keys for each DB loop. */ // sample at most 20 keys per database per loop by default
#define ACTIVE_EXPIRE_CYCLE_FAST_DURATION 1000 /* Microseconds. */        // a fast expire cycle may run for at most 1000 microseconds
#define ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC 25 /* Max % of CPU to use. */  // a slow cycle may use at most 25% of CPU time
#define ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE 10 /* % of stale keys after which
                                                   we do extra efforts. */
void activeExpireCycle(int type) {
    /* Adjust the running parameters according to the configured expire
     * effort. The default effort is 1, and the maximum configurable effort
     * is 10. */
    unsigned long
    effort = server.active_expire_effort-1, /* Rescale from 0 to 9. */
    config_keys_per_loop = ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP +
                           ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP/4*effort,
    config_cycle_fast_duration = ACTIVE_EXPIRE_CYCLE_FAST_DURATION +
                                 ACTIVE_EXPIRE_CYCLE_FAST_DURATION/4*effort,
    config_cycle_slow_time_perc = ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC +
                                  2*effort,
    config_cycle_acceptable_stale = ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE-
                                    effort;
    /* This function has some global state in order to continue the work
     * incrementally across calls. */
    static unsigned int current_db = 0; /* Next DB to test. */
    static int timelimit_exit = 0;      /* Time limit hit in previous call? */
    static long long last_fast_cycle = 0; /* When last fast cycle ran. */
    int j, iteration = 0;
    int dbs_per_call = CRON_DBS_PER_CALL;  // by default, check up to 16 databases per call
    long long start = ustime(), timelimit, elapsed;
    /* When clients are paused the dataset should be static not just from the
     * POV of clients not being able to write, but also from the POV of
     * expires and evictions of keys not being performed. */
    if (checkClientPauseTimeoutAndReturnIfPaused()) return;
    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
        /* Don't start a fast cycle if the previous cycle did not exit
         * for time limit, unless the percentage of estimated stale keys is
         * too high. Also never repeat a fast cycle for the same period
         * as the fast cycle total duration itself. */
        if (!timelimit_exit &&
            server.stat_expired_stale_perc < config_cycle_acceptable_stale)
            return;
        if (start < last_fast_cycle + (long long)config_cycle_fast_duration*2)
            return;
        last_fast_cycle = start;
    }
    /* We usually should test CRON_DBS_PER_CALL per iteration, with
     * two exceptions:
     *
     * 1) Don't test more DBs than we have.
     * 2) If last time we hit the time limit, we want to scan all DBs
     * in this iteration, as there is work to do in some DB and we don't want
     * expired keys to use memory for too much time. */
    if (dbs_per_call > server.dbnum || timelimit_exit)
        dbs_per_call = server.dbnum;
    /* We can use at max 'config_cycle_slow_time_perc' percentage of CPU
     * time per iteration. Since this function gets called with a frequency of
     * server.hz times per second, the following is the max amount of
     * microseconds we can spend in this function. */
    timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;
    timelimit_exit = 0;
    if (timelimit <= 0) timelimit = 1;
    if (type == ACTIVE_EXPIRE_CYCLE_FAST)
        timelimit = config_cycle_fast_duration; /* in microseconds. */
    /* Accumulate some global stats as we expire keys, to have some idea
     * about the number of keys that are already logically expired, but still
     * existing inside the database. */
    long total_sampled = 0;
    long total_expired = 0;
    // Iterate over the databases
    for (j = 0; j < dbs_per_call && timelimit_exit == 0; j++) {
        /* Expired and checked in a single loop. */
        unsigned long expired, sampled;
        // Pick the database to process in this iteration
        redisDb *db = server.db+(current_db % server.dbnum);
        /* Increment the DB now so we are sure if we run out of time
         * in the current DB we'll restart from the next. This allows to
         * distribute the time evenly across DBs. */
        current_db++;
        /* Continue to expire if at the end of the cycle there are still
         * a big percentage of keys to expire, compared to the number of keys
         * we scanned. The percentage, stored in config_cycle_acceptable_stale
         * is not fixed, but depends on the Redis configured "expire effort". */
        do {
            unsigned long num, slots;
            long long now, ttl_sum;
            int ttl_samples;
            iteration++;
            /* If there is nothing to expire try next DB ASAP. */
            // If this database's expires dictionary is empty, move on to the next database
            if ((num = dictSize(db->expires)) == 0) {
                db->avg_ttl = 0;
                break;
            }
            slots = dictSlots(db->expires);
            now = mstime();
            /* When there are less than 1% filled slots, sampling the key
             * space is expensive, so stop here waiting for better times...
             * The dictionary will be resized asap. */
            if (slots > DICT_HT_INITIAL_SIZE &&
                (num*100/slots < 1)) break;
            /* The main collection cycle. Sample random keys among keys
             * with an expire set, checking for expired ones. */
            expired = 0;
            sampled = 0;
            ttl_sum = 0;
            ttl_samples = 0;
            if (num > config_keys_per_loop)
                num = config_keys_per_loop;
            /* Here we access the low level representation of the hash table
             * for speed concerns: this makes this code coupled with dict.c,
             * but it hardly changed in ten years.
             *
             * Note that certain places of the hash table may be empty,
             * so we want also a stop condition about the number of
             * buckets that we scanned. However scanning for free buckets
             * is very fast: we are in the cache line scanning a sequential
             * array of NULL pointers, so we can scan a lot more buckets
             * than keys in the same time. */
            long max_buckets = num*20;
            long checked_buckets = 0;
            while (sampled < num && checked_buckets < max_buckets) {
                for (int table = 0; table < 2; table++) {
                    if (table == 1 && !dictIsRehashing(db->expires)) break;
                    unsigned long idx = db->expires_cursor;
                    idx &= db->expires->ht[table].sizemask;
                    dictEntry *de = db->expires->ht[table].table[idx];
                    long long ttl;
                    /* Scan the current bucket of the current table. */
                    checked_buckets++;
                    while(de) {
                        /* Get the next entry now since this entry may get
                         * deleted. */
                        dictEntry *e = de;
                        de = de->next;
                        ttl = dictGetSignedIntegerVal(e)-now;
                        if (activeExpireCycleTryExpire(db,e,now)) expired++;
                        if (ttl > 0) {
                            /* We want the average TTL of keys yet
                             * not expired. */
                            ttl_sum += ttl;
                            ttl_samples++;
                        }
                        sampled++;
                    }
                }
                db->expires_cursor++;
            }
            total_expired += expired;
            total_sampled += sampled;
            /* Update the average TTL stats for this database. */
            if (ttl_samples) {
                long long avg_ttl = ttl_sum/ttl_samples;
                /* Do a simple running average with a few samples.
                 * We just use the current estimate with a weight of 2%
                 * and the previous estimate with a weight of 98%. */
                if (db->avg_ttl == 0) db->avg_ttl = avg_ttl;
                db->avg_ttl = (db->avg_ttl/50)*49 + (avg_ttl/50);
            }
            /* We can't block forever here even if there are many keys to
             * expire. So after a given amount of milliseconds return to the
             * caller waiting for the other active expire cycle. */
            if ((iteration & 0xf) == 0) { /* check once every 16 iterations. */
                elapsed = ustime()-start;
                if (elapsed > timelimit) {
                    timelimit_exit = 1;
                    server.stat_expired_time_cap_reached_count++;
                    break;
                }
            }
            /* We don't repeat the cycle for the current database if there are
             * an acceptable amount of stale keys (logically expired but yet
             * not reclaimed). */
        } while (sampled == 0 ||
                 (expired*100/sampled) > config_cycle_acceptable_stale);
    }
    elapsed = ustime()-start;
    server.stat_expire_cycle_time_used += elapsed;
    latencyAddSampleIfNeeded("expire-cycle",elapsed/1000);
    /* Update our estimate of keys existing but yet to be expired.
     * Running average with this sample accounting for 5%. */
    double current_perc;
    if (total_sampled) {
        current_perc = (double)total_expired/total_sampled;
    } else
        current_perc = 0;
    server.stat_expired_stale_perc = (current_perc*0.05)+
                                     (server.stat_expired_stale_perc*0.95);
}

3. Redis memory eviction policies

Why does Redis need memory eviction policies? Because Redis is an in-memory database and memory cannot grow without bound: once usage reaches the configured limit, some data has to be evicted to make room for new data.

The relevant configuration parameter is maxmemory. A common rule of thumb is to set it to about half of the machine's RAM; for example, on a server with 96 GB of RAM you might set it to 48 GB.
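As a quick sketch, maxmemory can be inspected and changed at runtime with CONFIG GET / CONFIG SET (the 48gb value here is only an example; it can also be set in redis.conf):

127.0.0.1:6379> config set maxmemory 48gb
OK
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "51539607552"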

3.1 Eviction policies for keys with an expiration time

If your workload sets expiration times with EXPIRE, the following policies restrict eviction to keys that have a TTL:

  • volatile-lru (Least Recently Used): evict the least recently used keys among those that have a TTL
  • volatile-lfu (Least Frequently Used): evict the least frequently used keys among those that have a TTL
  • volatile-ttl (Time To Live): evict the keys that are closest to expiring
  • volatile-random: evict keys at random among those that have a TTL

3.2 Eviction policies covering all keys

  • allkeys-lru: evict the least recently used keys, considering all keys
  • allkeys-lfu: evict the least frequently used keys, considering all keys
  • allkeys-random: evict keys at random, considering all keys
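The policy itself is selected with the maxmemory-policy parameter, either in redis.conf or at runtime; a minimal example:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"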

3.3 The no-eviction policy

Redis also has a policy that forbids eviction entirely: noeviction, which is the default. Under this policy, once Redis reaches the configured maxmemory limit, subsequent commands that write to Redis fail with an error.
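A sketch of what a client sees under noeviction once the limit has been reached (the exact wording of the error may differ slightly between versions):

127.0.0.1:6379> config set maxmemory-policy noeviction
OK
127.0.0.1:6379> set bigkey "value"
(error) OOM command not allowed when used memory > 'maxmemory'.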

4. Walkthrough: adding, updating, reading, and deleting keys

4.1 Adding key-value pairs

Example: starting from an empty Redis database, run the following commands in turn:

127.0.0.1:6379[1]> keys *
(empty array)  // the database currently contains no keys
127.0.0.1:6379[1]> set msg "hello world"
OK
127.0.0.1:6379[1]>

127.0.0.1:6379[1]> hmset student name panda age 20 addr beijing
OK
127.0.0.1:6379[1]>

127.0.0.1:6379[1]> rpush teacher Darren Mark King
(integer) 3
127.0.0.1:6379[1]> 

4.2 Updating key-value pairs

127.0.0.1:6379[1]> set msg "redis"
OK
127.0.0.1:6379[1]> get msg
"redis"
127.0.0.1:6379[1]> hset student sex male
(integer) 1
127.0.0.1:6379[1]>

4.3 Reading the value of a key

127.0.0.1:6379[1]> get msg
"redis"
127.0.0.1:6379[1]> hmget student name age addr sex
1) "panda"
2) "20"
3) "beijing"
4) "male"
127.0.0.1:6379[1]>

4.4 Deleting key-value pairs

127.0.0.1:6379[1]> keys *
1) "msg"
2) "student"
3) "teacher"
127.0.0.1:6379[1]> del student
(integer) 1
127.0.0.1:6379[1]> keys *
1) "msg"
2) "teacher"
127.0.0.1:6379[1]>

This concludes the in-depth look at Redis expired keys and memory eviction policies.
