
Community Contribution | Framework Usage and Pitfalls in RLHF Practice (TRL, LMFlow)

1 Introduction

I have seen articles summarizing experience with the common RLHF frameworks, but none covering TRL, the library maintained by Hugging Face itself. Since I have been working with TRL quite a bit recently, I wanted to write up the pitfalls I hit while using it, and also introduce our end-to-end framework, LMFlow.

Figure: LMFlow framework diagram.

We use a concrete example to show how to do RLHF in both frameworks, and record the main pitfalls we hit during training along the way. The example covers the complete pipeline: SFT, reward modeling, and RLHF, where the RLHF stage aligns the model with either the RAFT algorithm (Reward rAnked FineTuning) or TRL-PPO. For convenience, we have already provided a GPT-Neo-2.7B-based reward model in a Hugging Face repo, so you can also skip the reward-modeling step at first.
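To preview the TRL side, here is a minimal sketch of a single TRL-PPO update, adapted from TRL's quickstart of that era (roughly trl 0.4). It is only a sketch: gpt2 stands in for the actual SFT model, and the constant reward stands in for the reward model's score.

import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer
from trl.core import respond_to_batch

# Policy model and a frozen reference copy (used for the KL penalty).
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

query_tensor = tokenizer.encode("###Human: Hello! ###Assistant:", return_tensors="pt")
response_tensor = respond_to_batch(model, query_tensor)

ppo_config = PPOConfig(batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(ppo_config, model, ref_model, tokenizer)
reward = [torch.tensor(1.0)]  # placeholder: in the real pipeline this is the reward model's score
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)

The frozen reference model is what keeps the policy close to the SFT model via the KL penalty during PPO updates.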



This example is built on LLaMA, whose license permits non-commercial use only; to use the LLaMA-7B model, you need to fill out the aforementioned request form. The test environment is 8 x A100 (40G).

1.1 Environment Setup

LMFlow's installation already bundles TRL, so we only need to install LMFlow following the official instructions.

git clone https://github.com/OptimalScale/LMFlow.git
cd LMFlow
conda create -n lmflow python=3.9 -y
conda activate lmflow
conda install mpi4py
pip install -e .

The installation above automatically pulls in PyTorch and the other dependencies. Beyond that, we manually install one extra package, matplotlib:

pip install matplotlib

1.2 Dataset Description

We use the Dahoas/full-hh-rlhf dataset as our example. Each sample consists of a prompt and two responses from the assistant; the response labeled "chosen" is the one humans preferred over the one labeled "rejected". The dataset contains 112K training samples and 12.5K test samples. Here is an example sample:

" Human: What kind of noises did dinosaurs make? Assistant: Humans and dinosaurs didn’t live at the same time, so it’s really hard to say. The best place to find out what noises dinosaurs made would be Human: yes they did Assistant: to guess, and that would probably require lots of reading and a certain amount of imagination, so we’re not really prepared to do that. Human: you cant read Assistant: Chosen response: "You can read?"Rejected response: "there’s a lot of stuff humans don’t know"

For ease of training, we reconstruct the prompt by prefixing each turn with "###", so the model knows where a reply should start. The reformatted sample looks like this (a code sketch of the conversion follows the example):

"###Human: What kind of noises did dinosaurs make? ###Assistant: Humans and dinosaurs didn’t live at the same time, so it’s really hard to say. The best place to find out what noises dinosaurs made would be ###Human: yes they did ###Assistant: to guess, and that would probably require lots of reading and a certain amount of imagination, so we’re not really prepared to do that. ###Human: you cant read ###Assistant: Chosen response: "You can read?"Rejected response: "there’s a lot of stuff humans don’t know"

We have prepared all the datasets needed for this walkthrough under ./data/hh_rlhf; fetch them by running the following command from the LMFlow directory:

cd data && ./download.sh hh_rlhf && cd -
2 Preparation Before RLHF

In this section we first need to train the SFT model and the reward model; we use LMFlow for both.

2.1 SFT

Here is an example from the dataset /home/usrname/LMFlow/data/hh_rlhf/sft/hh_rlhf_sft.json. We only use the preferred (chosen) responses, which gives us 112K training samples (a sketch of how such a file can be assembled follows the example):

{"type": "text_only", "instances": [{"text": "###Human: Should you buy a case to protect your cell phone?###Assistant: It depends on your circumstances.  If you carry your phone in a pocket or a purse then you probably want a case.  But if you only need a phone for quick interactions, a case may actually cause more harm than good.  What do you need the phone for?  Are you a parent, or do you work from home?###Human: What harm could it do?###Assistant: A phone case can damage the screen, for one thing.  It can also get you in trouble if you have your phone turned off for some reason.  Then you will turn it back on and it won’t do anything.  If you can afford to replace it, then you need a case to protect it.  The problem is that most people aren’t able to afford to replace their phones all the time.###Human: Thanks for letting me know.###Assistant: You’re welcome."}, {"text": "###Human: I"m trying to learn about the salam witch trials###Assistant: If you’re looking for books about witchcraft trials, I can recommend some books for you. But you probably mean you’re looking for more historical information about the Salem witch trials in 1692, and specifically about the salam witch trials in 1692?###Human: What are some good books to learn about the salam witch trials###Assistant: What would you like to learn?  If you’re interested in history, one of the best books is The Witch Persecutions in Colonial America: A History.  If you’re interested in witchcraft as a cultural phenomenon, you might enjoy two excellent books: Religion and the Decline of Magic: Studies in Popular Beliefs in Sixteenth- and Seventeenth-Century England by Keith Thomas and Magic, Witchcraft, and the Otherworld: An Anthropology of Superstition by Jack Goody.  If you’re interested in history specifically as it relates to religion, you might enjoy The Popish Plot, or Prelates" Plot: A History of the Popish Plot in England, by K. J. Everett."}]}

You can edit ./scripts/run_finetune.sh to modify its parameters. We use GPT-Neo-2.7B as the example here; you should replace it with the path of the llama-7b model you obtained:

--model_name_or_path: EleutherAI/gpt-neo-2.7B

--dataset_path: ${project_dir}/data/hh_rlhf/sft

--output_dir: the path where you want to store the SFT model

--num_train_epochs: 1

--learning_rate: 2e-5

--per_device_train_batch_size: adjust according to your GPU resources.

exp_id: hh_rlhf_llama_sft


Then run the following command to perform SFT:

./scripts/run_finetune.sh

You can also train with LoRA via the following command, but you still need to set model_name_or_path and the dataset path by editing run_finetune_with_lora.sh (see the sketch after the command for what LoRA does under the hood):

./scripts/run_finetune_with_lora.sh
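For readers unfamiliar with LoRA: the script wraps the base model with low-rank adapters so that only a small fraction of the parameters are trained. This is a hypothetical peft sketch, not LMFlow's actual script; the hyperparameters are illustrative:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights require gradients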

In the loss curve below, we set the number of epochs to 4 but stopped early and took the checkpoint at the end of epoch 1 as the SFT model. We also set the logging step to 20, which is why the curve looks relatively smooth.

Figure: SFT model training curve (this example shows the first 1.6 epochs of training).

In my run, the resulting SFT model is stored at /home/usrname/LMFlow/output_models/hh_rlhf_llama_sft/checkpoint-1271.

2.2 Reward Modeling

We first follow the procedure from the InstructGPT paper; the remainder of this walkthrough is available in the full article: https://zhuanlan.zhihu.com/p/629920420.
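For reference, the heart of the InstructGPT reward-modeling recipe is a pairwise ranking loss over (chosen, rejected) response pairs. A minimal sketch (not LMFlow's actual implementation):

import torch
import torch.nn.functional as F

def pairwise_loss(reward_chosen, reward_rejected):
    # InstructGPT-style objective: -log sigmoid(r_chosen - r_rejected).
    # Minimizing it pushes the chosen response's scalar reward above the rejected one's.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Example: scalar rewards for a batch of two preference pairs.
loss = pairwise_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.7, 0.9]))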

Meanwhile, please check out our LMFlow framework for more fun with LLMs: OptimalScale/LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Model for All. (github.com)
