
Importing CSV Data into MySQL with Python Multiprocessing

Updated: 2017-02-26 08:54:28   Author: 李林克斯
This article shares the approach and the full code for importing CSV file data into MySQL with multiple processes in Python; readers with a similar need can use it as a reference.

A while ago I helped a colleague import CSV data into MySQL. There were two large CSV files: one of 3 GB with 21 million records, the other of 7 GB with 35 million records. At that scale a plain single-process/single-threaded import takes far too long, so in the end I did it with multiple processes. I won't walk through the whole exercise; here are the key points:

  1. Insert in batches instead of row by row
  2. To speed up the load, do not build indexes beforehand; add them after the data is in (a sketch follows this list)
  3. Producer-consumer model: the main process reads the file, several worker processes perform the inserts
  4. Limit the number of workers to avoid putting too much pressure on MySQL
  5. Handle the exceptions caused by dirty data
  6. The raw data is GBK-encoded, so it also has to be converted to UTF-8
  7. Wrap the whole thing into a command-line tool with click
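
Point 2 is worth a note: the script below assumes the target table has no secondary indexes during the load, and that they are created once all rows are in. Here is a minimal sketch of that follow-up step, using the same SQLAlchemy 1.x engine.execute() style as the script; the table and column names are placeholders, not part of the original code:

import sqlalchemy

engine = sqlalchemy.create_engine('mysql://root@localhost:3306/example?charset=utf8')

def add_indexes_after_load(table, indexed_cols):
  # Building each index once, after the bulk load, is far cheaper than
  # maintaining it across tens of millions of individual inserts.
  for col in indexed_cols:
    sql = 'ALTER TABLE `{table}` ADD INDEX `idx_{col}` (`{col}`)'.format(
        table=table, col=col)
    engine.execute(sql)

# e.g. add_indexes_after_load('example_table', ['order_date'])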

The full implementation is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import codecs
import csv
import logging
import multiprocessing
import os
import warnings

import click
import MySQLdb
import sqlalchemy

warnings.filterwarnings('ignore', category=MySQLdb.Warning)

# Number of records per batch insert
BATCH = 5000

DB_URI = 'mysql://root@localhost:3306/example?charset=utf8'

engine = sqlalchemy.create_engine(DB_URI)


def get_table_cols(table):
  sql = 'SELECT * FROM `{table}` LIMIT 0'.format(table=table)
  res = engine.execute(sql)
  return res.keys()


def insert_many(table, cols, rows, cursor):
  sql = 'INSERT INTO `{table}` ({cols}) VALUES ({marks})'.format(
      table=table,
      cols=', '.join(cols),
      marks=', '.join(['%s'] * len(cols)))
  # Passing multiple parameter tuples makes SQLAlchemy issue an executemany-style batch insert
  cursor.execute(sql, *rows)
  logging.info('process %s inserted %s rows into table %s', os.getpid(), len(rows), table)


def insert_worker(table, cols, queue):
  rows = []
  # Each worker process creates its own engine object
  # (the variable is named `cursor`, but it is a SQLAlchemy engine)
  cursor = sqlalchemy.create_engine(DB_URI)
  while True:
    row = queue.get()
    if row is None:
      if rows:
        insert_many(table, cols, rows, cursor)
      break

    rows.append(row)
    if len(rows) == BATCH:
      insert_many(table, cols, rows, cursor)
      rows = []


def insert_parallel(table, reader, w=10):
  cols = get_table_cols(table)

  # Data queue: the main process reads the file and puts rows in,
  # worker processes pull rows out and insert them.
  # Keep the queue size bounded: queue.put() blocks when it is full, so slow
  # consumers throttle the reader instead of letting rows pile up in memory.
  queue = multiprocessing.Queue(maxsize=w*BATCH*2)
  workers = []
  for i in range(w):
    p = multiprocessing.Process(target=insert_worker, args=(table, cols, queue))
    p.start()
    workers.append(p)
    logging.info('starting # %s worker process, pid: %s...', i + 1, p.pid)

  dirty_data_file = './{}_dirty_rows.csv'.format(table)
  xf = open(dirty_data_file, 'w')
  writer = csv.writer(xf, delimiter=reader.dialect.delimiter)

  for line in reader:
    # Log and skip dirty rows whose field count does not match the table columns
    if len(line) != len(cols):
      writer.writerow(line)
      continue

    # Replace the literal string 'NULL' with None so MySQL stores a real NULL
    clean_line = [None if x == 'NULL' else x for x in line]

    # Put the row on the queue
    queue.put(tuple(clean_line))
    if reader.line_num % 500000 == 0:
      logging.info('put %s tasks into queue.', reader.line_num)

  xf.close()

  # Send each worker the end-of-tasks signal
  logging.info('send close signal to worker processes')
  for i in range(w):
    queue.put(None)

  for p in workers:
    p.join()


def convert_file_to_utf8(f, rv_file=None):
  if not rv_file:
    name, ext = os.path.splitext(f)
    if isinstance(name, unicode):
      name = name.encode('utf8')
    rv_file = '{}_utf8{}'.format(name, ext)
  logging.info('start to process file %s', f)
  with open(f) as infd:
    with open(rv_file, 'w') as outfd:
      lines = []
      loop = 0
      chunck = 200000  # number of lines buffered before each write
      # Pass the header line through, stripping a possible UTF-8 BOM
      first_line = infd.readline().strip(codecs.BOM_UTF8).strip() + '\n'
      lines.append(first_line)
      for line in infd:
        # Decode from GB18030 (a superset of GBK) and re-encode as UTF-8
        clean_line = line.decode('gb18030').encode('utf8')
        clean_line = clean_line.rstrip() + '\n'
        lines.append(clean_line)
        if len(lines) == chunck:
          outfd.writelines(lines)
          lines = []
          loop += 1
          logging.info('processed %s lines.', loop * chunck)

      outfd.writelines(lines)
      logging.info('processed %s lines.', loop * chunck + len(lines))


@click.group()
def cli():
  logging.basicConfig(level=logging.INFO,
            format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')


@cli.command('gbk_to_utf8')
@click.argument('f')
def convert_gbk_to_utf8(f):
  convert_file_to_utf8(f)


@cli.command('load')
@click.option('-t', '--table', required=True, help='table name')
@click.option('-i', '--filename', required=True, help='input file')
@click.option('-w', '--workers', default=10, help='number of worker processes, default 10')
def load_fac_day_pro_nos_sal_table(table, filename, workers):
  with open(filename) as fd:
    fd.readline()  # skip header
    reader = csv.reader(fd)
    insert_parallel(table, reader, w=workers)


if __name__ == '__main__':
  cli()
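
With that in place, the import is a two-step run: convert the GBK file to UTF-8 first, then load the converted file. A usage sketch (the script name and file names here are placeholders):

python csv2mysql.py gbk_to_utf8 data.csv          # writes data_utf8.csv
python csv2mysql.py load -t example_table -i data_utf8.csv -w 10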

That is everything this article has to share; I hope you find it useful.
