Writing Beam Search by Hand

2024-03-08 22:44
Tags: code, beamsearch

This article walks through a hand-written beam search implementation and then maps it onto the transformers generate() code, in the hope of providing a useful reference for developers tackling the same problem. Let's work through it together!

I. Contents

  1. Hand-written beam search
  2. transformers generate() walkthrough

II. Implementation

  1. Hand-written beam search

The implementation below is self-contained: pred() stands in for a real model and just returns random logits, so the script runs as-is.

import torch
import torch.nn.functional as F

def pred(input):
    # Stand-in for a real model: random logits over a vocab of size 10.
    batch, seq_len = input.shape
    generate = torch.randn(size=(batch, 1, 10))
    return generate

def beam_search(input_ids, max_length, num_beams):
    batch = input_ids.shape[0]
    # Expand the input: repeat each sample num_beams times.
    expand_size = num_beams
    expanded_return_idx = (
        torch.arange(input_ids.shape[0]).view(-1, 1).repeat(1, expand_size).view(-1).to(input_ids.device)
    )
    input_ids = input_ids.index_select(0, expanded_return_idx)
    print(input_ids)
    batch_beam_size, cur_len = input_ids.shape
    # Mask all but the first beam so that identical initial beams do not fill topk at step 0.
    beam_scores = torch.zeros(size=(batch, num_beams), dtype=torch.float, device=input_ids.device)
    beam_scores[:, 1:] = -1e9
    beam_scores = beam_scores.view(size=(batch * num_beams,))

    while cur_len < max_length:
        logits = pred(input_ids)                # (batch*num_beams, seq_len, vocab)
        next_token_logits = logits[:, -1, :]    # output at the current step
        # Normalize.
        next_token_scores = F.log_softmax(next_token_logits, dim=-1)  # (batch*num_beams, vocab_size)
        # Accumulate: current log-prob + log-prob of the path so far.
        next_token_scores = next_token_scores + beam_scores[:, None].expand_as(next_token_scores)
        # Reshape for beam search.
        vocab_size = next_token_scores.shape[-1]
        next_token_scores = next_token_scores.view(batch, num_beams * vocab_size)
        # Scores and flat candidate ids at the current step.
        next_token_scores, next_tokens = torch.topk(next_token_scores, num_beams, dim=1, largest=True, sorted=True)
        next_indices = next_tokens // vocab_size  # which beam each candidate extends
        next_tokens = next_tokens % vocab_size    # token id within the vocab

        # Core of beam search: reorder the beams.
        def process(input_ids, next_scores, next_tokens, next_indices):
            batch_size = batch
            next_beam_scores = torch.zeros((batch_size, num_beams), dtype=next_scores.dtype)
            next_beam_tokens = torch.zeros((batch_size, num_beams), dtype=next_tokens.dtype)
            next_beam_indices = torch.zeros((batch_size, num_beams), dtype=next_indices.dtype)
            for batch_idx in range(batch_size):
                beam_idx = 0
                for beam_token_rank, (next_token, next_score, next_index) in enumerate(
                    zip(next_tokens[batch_idx], next_scores[batch_idx], next_indices[batch_idx])
                ):
                    batch_beam_idx = batch_idx * num_beams + next_index
                    next_beam_scores[batch_idx, beam_idx] = next_score        # score of this path
                    next_beam_tokens[batch_idx, beam_idx] = next_token        # token chosen at this step
                    next_beam_indices[batch_idx, beam_idx] = batch_beam_idx   # row of the originating beam
                    beam_idx += 1
            return next_beam_scores.view(-1), next_beam_tokens.view(-1), next_beam_indices.view(-1)

        beam_scores, beam_next_tokens, beam_idx = process(input_ids, next_token_scores, next_tokens, next_indices)
        # Update the input: gather the surviving beams, append the chosen tokens.  # (batch*beam, seq_len)
        input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
        cur_len = cur_len + 1
    # Output.
    return input_ids, beam_scores

if __name__ == '__main__':
    input_ids = torch.randint(0, 100, size=(3, 1))
    print(input_ids)
    input_ids, beam_scores = beam_search(input_ids, max_length=10, num_beams=3)
    print(input_ids)
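To make the reshape-then-topk step concrete, here is a tiny worked sketch (not from the original article; batch of 1, 2 beams, vocab of 5, random logits) showing how a flat candidate index splits back into a beam id and a token id:

import torch
import torch.nn.functional as F

beam_scores = torch.tensor([0.0, -1e9])   # beam 1 masked, as in beam_search() above
logits = torch.randn(2, 5)                # (num_beams, vocab_size) logits for one sample
scores = F.log_softmax(logits, dim=-1) + beam_scores[:, None]

flat = scores.view(1, 2 * 5)              # flatten beams x vocab into one candidate axis
top_scores, top_ids = torch.topk(flat, k=2, dim=1)

beam_ids = top_ids // 5                   # which beam each winner extends
token_ids = top_ids % 5                   # which token each winner appends
print(top_scores, beam_ids, token_ids)

Because the two beams are flattened into a single axis of 2 * 5 = 10 candidates, one topk call compares continuations across beams, which is exactly what the view(batch, num_beams * vocab_size) line above enables.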

Reference: the transformers generate() implementation.

  2. transformers generate() walkthrough
@torch.no_grad()
def generate(  # model entry point
    self,
    inputs: Optional[torch.Tensor] = None,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
    synced_gpus: Optional[bool] = None,
    assistant_model: Optional["PreTrainedModel"] = None,
    streamer: Optional["BaseStreamer"] = None,
    negative_prompt_ids: Optional[torch.Tensor] = None,
    negative_prompt_attention_mask: Optional[torch.Tensor] = None,
    **kwargs,
) -> Union[GenerateOutput, torch.LongTensor]:
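Most of these arguments are optional because decoding settings can also travel through a GenerationConfig object. A small sketch of how such a config might be built (the field values here are illustrative assumptions, not library defaults):

from transformers import GenerationConfig

gen_cfg = GenerationConfig(
    num_beams=3,              # num_beams > 1 switches generate() into beam search mode
    max_length=32,
    early_stopping=True,
    num_return_sequences=3,
    length_penalty=1.0,
)
# outputs = model.generate(**inputs, generation_config=gen_cfg)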
    ...
    # 10. go into different generation modes
    # dispatch to the decoding routine for the selected generation mode
    if generation_mode == GenerationMode.ASSISTED_GENERATION:
        ...
    # beam search is used as the example here
    elif generation_mode == GenerationMode.BEAM_SEARCH:  # the beam search branch
        # 11. prepare beam search scorer  (parameter initialization)
        beam_scorer = BeamSearchScorer(
            batch_size=batch_size,
            num_beams=generation_config.num_beams,
            device=inputs_tensor.device,
            length_penalty=generation_config.length_penalty,
            do_early_stopping=generation_config.early_stopping,
            num_beam_hyps_to_keep=generation_config.num_return_sequences,
            max_length=generation_config.max_length,
        )
        # 12. interleave input_ids with `num_beams` additional sequences per batch
        input_ids, model_kwargs = self._expand_inputs_for_generation(
            input_ids=input_ids,
            expand_size=generation_config.num_beams,
            is_encoder_decoder=self.config.is_encoder_decoder,
            **model_kwargs,
        )
        # 13. run beam search  (the core decoding step)
        result = self.beam_search(
            input_ids,
            beam_scorer,
            logits_processor=prepared_logits_processor,
            stopping_criteria=prepared_stopping_criteria,
            pad_token_id=generation_config.pad_token_id,
            eos_token_id=generation_config.eos_token_id,
            output_scores=generation_config.output_scores,
            output_logits=generation_config.output_logits,
            return_dict_in_generate=generation_config.return_dict_in_generate,
            synced_gpus=synced_gpus,
            sequential=generation_config.low_memory,
            **model_kwargs,
        )
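In practice this branch is reached simply by passing num_beams > 1 to generate(). A minimal usage sketch, assuming the Hugging Face transformers package and the public t5-small checkpoint (any seq2seq model works):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")
# num_beams > 1 routes generate() into the GenerationMode.BEAM_SEARCH branch above.
outputs = model.generate(**inputs, num_beams=3, num_return_sequences=3, max_length=32)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))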
def beam_search(
    self, input_ids, encoder_output, attention_mask, num_beams, max_length,
    pad_token_id: int, eos_token_id: int,
):
    batch_size = self.beam_scorer.batch_size      # batch size before expansion
    num_beams = self.beam_scorer.num_beams
    batch_beam_size, cur_len = input_ids.shape    # batch size after expansion
    assert (
        num_beams * batch_size == batch_beam_size
    ), f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}."
    beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
    beam_scores[:, 1:] = -1e9
    beam_scores = beam_scores.view((batch_size * num_beams,))
    next_tokens = torch.zeros((batch_size, num_beams), dtype=torch.long, device=input_ids.device)
    next_indices = torch.zeros((batch_size, num_beams), dtype=torch.long, device=input_ids.device)
    past: List[torch.Tensor] = []
    while cur_len < max_length:
        # Run the decoder for one step.
        logits, past = self._decoder_forward(input_ids, encoder_output, attention_mask, past)
        next_token_logits = logits[:, -1, :]      # output at the current step
        # adjust tokens for Bart, *e.g.* special handling near cur_len == 1 and max_length
        next_token_logits = self.adjust_logits_during_generation(
            next_token_logits, cur_len=cur_len, max_length=max_length
        )
        # Normalize.
        next_token_scores = F.log_softmax(next_token_logits, dim=-1)  # (batch_size * num_beams, vocab_size)
        # pre-process distribution
        next_token_scores = self.logits_processor(input_ids, next_token_scores)
        # Accumulate: current log-prob + log-prob of the path so far.
        next_token_scores = next_token_scores + beam_scores[:, None].expand_as(next_token_scores)
        # reshape for beam search
        vocab_size = next_token_scores.shape[-1]
        next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)
        # Keep the top 2 * num_beams candidates so finished (EOS) beams can be replaced.
        next_token_scores, next_tokens = torch.topk(
            next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True
        )
        next_indices = next_tokens // vocab_size
        next_tokens = next_tokens % vocab_size
        # Pick the surviving paths, their scores and ids (the step where beam search variants differ).
        beam_scores, beam_next_tokens, beam_idx = self.beam_scorer.process(
            input_ids,
            next_token_scores,
            next_tokens,
            next_indices,
            pad_token_id=pad_token_id,
            eos_token_id=eos_token_id,
        )
        # Update the input: gather the chosen beams, append the chosen tokens.  # (batch*beam, seq_len)
        input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
        cur_len = cur_len + 1
        if len(past) > 0:
            past = self._reorder_cache(past, beam_idx)
        if self.beam_scorer.is_done():
            break
    # Select the best hypotheses and standardize the output.
    sequences, sequence_scores = self.beam_scorer.finalize(
        input_ids,
        beam_scores,
        next_tokens,
        next_indices,
        pad_token_id=pad_token_id,
        eos_token_id=eos_token_id,
    )
    return sequences
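The finalize() call at the end ranks finished hypotheses by a length-penalized score. transformers keeps them in an internal BeamHypotheses container; the sketch below is a simplified approximation of that idea (the class name matches transformers, but the add() body here is my paraphrase, not the library's exact code):

import torch

class BeamHypotheses:
    # Simplified container for finished hypotheses; the real class also tracks
    # a worst_score to support early stopping.
    def __init__(self, num_beams: int, length_penalty: float):
        self.num_beams = num_beams
        self.length_penalty = length_penalty
        self.beams = []  # list of (penalized score, token sequence)

    def add(self, hyp: torch.Tensor, sum_logprobs: float):
        # Divide by length**length_penalty so long sequences are not unfairly
        # punished for accumulating more negative log-probability.
        score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty)
        self.beams.append((score, hyp))
        if len(self.beams) > self.num_beams:
            self.beams.sort(key=lambda x: x[0], reverse=True)
            self.beams.pop()  # drop the worst hypothesis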

That wraps up this article on writing beam search by hand. Hopefully it proves helpful to fellow programmers!



