LLM Quantization: Activation-aware Weight Quantization (AWQ)

1. Background

The plan is to load a large language model or code model through FastChat. A 7B-parameter model loads without problems;
the next step is to try loading quantized models at the 13B or 33B scale.

FastChat supports two kinds of quantized models, AWQ (llm-awq) and GPTQ; this post starts with AWQ (llm-awq).
https://github.com/lm-sys/FastChat/blob/main/docs/awq.md

There is another implementation of AWQ quantization, AutoAWQ, which has already been integrated into transformers, so that version of AWQ is the recommended one.
Reference: transformers/src/transformers/integrations/awq.py at main · huggingface/transformers (github.com)
This post also covers the AutoAWQ quantization method.


2. Loading Models

qwen1.5

llm-awq does not support the qwen2 model class (the model is actually Qwen1.5):

python3 -m fastchat.serve.cli \
    --model-path /data/shuzhang/models/qwen/Qwen1.5-14B-Chat-AWQ \
    --awq-wbits 4 \
    --awq-groupsize 128

File "/home/jinxiao/code/llm-deploy/llm-awq/awq/quantize/quantizer.py", line 132, in real_quantize_model_weight
    layers = get_blocks(model)
  File "/home/jinxiao/code/llm-deploy/llm-awq/awq/quantize/pre_quant.py", line 43, in get_blocks
    raise NotImplementedError(type(model))
NotImplementedError: <class 'transformers.models.qwen2.modeling_qwen2.Qwen2ForCausalLM'>

With AutoAWQ the model can be launched directly (script below); it goes through the standard transformers loading path, provided that autoawq is installed (pip install autoawq).
Also, do not run it from inside the llm-awq source directory, otherwise it fails with ModuleNotFoundError: No module named 'awq.modules'.
In fact, if transformers can load the AWQ model directly without llm-awq, it means the checkpoint was quantized with AutoAWQ in the first place.
Note: the launch script below omits the --awq-wbits 4 --awq-groupsize 128 flags, so fastchat falls back to loading the pretrained model through the transformers library.

python3 -m fastchat.serve.cli \
    --model-path /data/shuzhang/models/qwen/Qwen1.5-14B-Chat-AWQ
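
One way to tell whether a checkpoint was quantized with AutoAWQ (a small sketch; it reuses the Qwen checkpoint path above) is to inspect the quantization_config entry in its config.json, which is what transformers uses to pick the AWQ loading path:

import json

# Sketch: AutoAWQ-produced checkpoints typically carry a `quantization_config`
# block with quant_method "awq" in config.json; llm-awq .pt checkpoints do not.
config_path = "/data/shuzhang/models/qwen/Qwen1.5-14B-Chat-AWQ/config.json"
with open(config_path) as f:
    config = json.load(f)

print(config.get("quantization_config"))
# expected to look roughly like:
# {"quant_method": "awq", "bits": 4, "group_size": 128, "version": "gemm", ...}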

deepseek

The DeepSeek models use the Llama architecture, so llm-awq supports them in principle.
However, loading fails with a puzzling error, most likely a problem with the quantized checkpoint itself.

Quantizing the model locally is not an option either: GPU memory is insufficient, and a single 24GB 3090 runs out of memory (OOM).
llm-awq also cannot spread the quantization across two cards, which is a letdown.

$ python3 -m fastchat.serve.cli \
>     --model-path /data/shuzhang/models/deepseek/deepseek-coder-33B-instruct-AWQ \
>     --awq-wbits 4 \
>     --awq-groupsize 128

Loading AWQ quantized model...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
real weight quantization...(init only): 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 62/62 [00:02<00:00, 28.97it/s]

[Warning] The awq quantized checkpoint seems to be in v1 format.
If the model cannot be loaded successfully, please use the latest awq library to re-quantized the model, or repack the current checkpoint with tinychat/offline-weight-repacker.py

Loading checkpoint:   0%|                                                                                                                                   | 0/1 [00:11<?, ?it/s]
Traceback (most recent call last):
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/serve/cli.py", line 304, in <module>
    main(args)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/serve/cli.py", line 227, in main
    chat_loop(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/serve/inference.py", line 361, in chat_loop
    model, tokenizer = load_model(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 294, in load_model
    model, tokenizer = load_awq_quantized(model_path, awq_config, device)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/modules/awq.py", line 65, in load_awq_quantized
    model = load_quant.load_awq_model(
  File "/home/jinxiao/code/llm-deploy/llm-awq/tinychat/utils/load_quant.py", line 82, in load_awq_model
    model = load_checkpoint_and_dispatch(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/accelerate/big_modeling.py", line 589, in load_checkpoint_and_dispatch
    load_checkpoint_in_model(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1645, in load_checkpoint_in_model
    model.load_state_dict(checkpoint, strict=False)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
      size mismatch for model.layers.34.mlp.up_proj.qweight: copying a param with shape torch.Size([7168, 2400]) from checkpoint, the shape in current model is torch.Size([4800, 7168]).
      size mismatch for model.layers.34.mlp.down_proj.qweight: copying a param with shape torch.Size([19200, 896]) from checkpoint, the shape in current model is torch.Size([1792, 19200]).
      size mismatch for model.layers.34.mlp.down_proj.scales: copying a param with shape torch.Size([150, 7168]) from checkpoint, the shape in current model is torch.Size([152, 7168]).
      ...

3. llm-awq Quantization Process

The current release of llm-awq supports:

  • AWQ search for accurate quantization.
  • Pre-computed AWQ model zoo for LLMs (LLaMA, Llama2, OPT, CodeLlama, StarCoder, Vicuna, VILA, LLaVA; load to generate quantized weights).
  • Memory-efficient 4-bit Linear in PyTorch.
  • Efficient CUDA kernel implementation for fast inference (support context and decoding stage).
  • Examples on 4-bit inference of an instruction-tuned model (Vicuna) and multi-modal LM (VILA).

The llm-awq quantization procedure is recorded below so it can be retried later when enough GPU memory is available. (This run did not succeed due to insufficient GPU memory.)

Environment setup

mit-han-lab/llm-awq: AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (github.com)

Quantization steps

llm-awq/scripts/llama2_example.sh at main · mit-han-lab/llm-awq (github.com)

MODEL_NAME=deepseek-coder-6.7b-instruct
MODEL_PATH=/home/shuzhang/ai/deepseek/$MODEL_NAME

CACHE_PATH=/data/models/llm-awq
AWQ_CACHE=$CACHE_PATH/awq_cache
QUANT_CACHE=$CACHE_PATH/quant_cache


# run AWQ search (optional; we provided the pre-computed results)
python -m awq.entry --model_path $MODEL_PATH \
    --w_bit 4 --q_group_size 128 \
    --run_awq --dump_awq $AWQ_CACHE/$MODEL_NAME-w4-g128.pt 

# evaluate the AWQ quantize model (simulated pseudo quantization)
python -m awq.entry --model_path $MODEL_PATH \
    --tasks wikitext \
    --w_bit 4 --q_group_size 128 \
    --load_awq $AWQ_CACHE/$MODEL_NAME-w4-g128.pt \
    --q_backend fake

# generate real quantized weights (w4)
python -m awq.entry --model_path $MODEL_PATH \
    --w_bit 4 --q_group_size 128 \
    --load_awq $AWQ_CACHE/$MODEL_NAME-w4-g128.pt \
    --q_backend real --dump_quant $QUANT_CACHE/$MODEL_NAME-w4-g128-awq.pt

# load and evaluate the real quantized model (smaller gpu memory usage)
python -m awq.entry --model_path $MODEL_PATH \
    --tasks wikitext \
    --w_bit 4 --q_group_size 128 \
    --load_quant $QUANT_CACHE/$MODEL_NAME-w4-g128-awq.pt
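
As a rough illustration of what --w_bit 4 --q_group_size 128 mean (a conceptual sketch only, not the llm-awq implementation, and it ignores the activation-aware scale search that gives AWQ its name): every group of 128 input channels in a weight matrix shares one scale and zero point, and the weights in that group are rounded to 4-bit integers.

import torch

def pseudo_quantize_groupwise(w: torch.Tensor, n_bit: int = 4, group_size: int = 128):
    # w: [out_features, in_features]; each group of `group_size` input channels
    # shares one (scale, zero_point) pair, i.e. the w4 / g128 setting above.
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    w = w.reshape(out_features, in_features // group_size, group_size)

    w_max = w.amax(dim=-1, keepdim=True)
    w_min = w.amin(dim=-1, keepdim=True)
    scales = (w_max - w_min).clamp(min=1e-5) / (2 ** n_bit - 1)   # per-group scale
    zeros = (-w_min / scales).round()                             # asymmetric zero point

    q = torch.clamp(torch.round(w / scales + zeros), 0, 2 ** n_bit - 1)  # 4-bit integers
    w_dq = (q - zeros) * scales                                   # dequantize ("fake" quant)
    return w_dq.reshape(out_features, in_features)

# quantization error on a random weight matrix
w = torch.randn(512, 512)
print((w - pseudo_quantize_groupwise(w)).abs().mean())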

Problems encountered

Problem 1

Traceback (most recent call last):
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jinxiao/code/llm-deploy/llm-awq/awq/entry.py", line 15, in <module>
    from awq.quantize.pre_quant import run_awq, apply_awq
ModuleNotFoundError: No module named 'awq.quantize.pre_quant'

Solution

  • Create the file /home/jinxiao/code/llm-deploy/llm-awq/awq/__init__.py (the missing package __init__ file).

Problem 2

File "/home/jinxiao/code/llm-deploy/llm-awq/awq/utils/calib_data.py", line 7, in get_calib_dataset
    dataset = load_dataset("mit-han-lab/pile-val-backup", split="validation")

Solution

4. AutoAWQ Quantization Process

https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#examples

  • The quantization script is shown below; it takes roughly 20 minutes on two 24GB 3090 cards.
  • After quantization, loading and testing through fastchat also works fine, with lower GPU memory usage and faster inference (see the loading sketch after the script).

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = '/data/shuzhang/models/deepseek/deepseek-coder-6.7b-instruct'
quant_path = 'deepseek-coder-6.7b-instruct-AWQ'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
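
After quantization, the checkpoint can be loaded either through fastchat (as in section 2) or directly with transformers. A minimal loading sketch, assuming autoawq is installed and quant_path is the directory saved by the script above:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: transformers loads the AutoAWQ checkpoint through its AWQ integration
# (requires `pip install autoawq`); quant_path matches the script above.
quant_path = 'deepseek-coder-6.7b-instruct-AWQ'
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")

prompt = "write a quick sort algorithm in python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))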

Problem 1

  • Loading the calibration dataset fails because of missing network access; with a proxy it should work.
dataset = load_dataset("mit-han-lab/pile-val-backup", split="validation")

Solution
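
A minimal sketch of the workaround, assuming the failure is only due to blocked network access (the proxy address below is a placeholder to replace with your own):

import os

# Sketch: route the HuggingFace dataset download through a proxy before loading
# the calibration set; the proxy address is a placeholder.
os.environ["http_proxy"] = "http://127.0.0.1:7890"
os.environ["https_proxy"] = "http://127.0.0.1:7890"

from datasets import load_dataset
dataset = load_dataset("mit-han-lab/pile-val-backup", split="validation")
print(len(dataset))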

Problem 2

File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1067, in _update_causal_mask
    if hasattr(self.layers[0].self_attn, "past_key_value"):  # static cache
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'Catcher' object has no attribute 'self_attn'

Solution

  • File: .../miniconda3/envs/llm_new/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py
    The Catcher module that AutoAWQ inserts as the first decoder layer during calibration has no self_attn attribute, so this check raises. Change
if hasattr(self.layers[0].self_attn, "past_key_value"):  # static cache
=> to
if False:

5. Summary

Since no llm-awq quantized model was successfully loaded, and quantizing with llm-awq did not succeed either,
it remains unclear how fast llm-awq quantized models run or how much GPU memory they use.
If their resource usage and inference speed turn out to be similar to AutoAWQ, AutoAWQ is the more recommended option.
