Hand-Written Beam Search Code

2024-03-08 22:44
Tags: code, beamsearch

This article walks through a hand-written implementation of beam search, then reads through how the transformers generate() method implements it. I hope it serves as a useful reference for developers who need one; let's work through it together.

I. Contents

  1. Hand-written beam search
  2. A walkthrough of transformers generate()

II. Implementation

  1. Hand-written beam search
import torch
import torch.nn.functional as F


def pred(input):
    # Stand-in for a real model: returns random logits over a vocabulary of 10
    batch, seq_len = input.shape
    generate = torch.randn(size=(batch, 1, 10))
    return generate


def beam_search(input_ids, max_length, num_beams):
    batch = input_ids.shape[0]
    # expand the inputs: repeat each batch row num_beams times
    expand_size = num_beams
    expanded_return_idx = (
        torch.arange(input_ids.shape[0]).view(-1, 1).repeat(1, expand_size).view(-1).to(input_ids.device)
    )
    input_ids = input_ids.index_select(0, expanded_return_idx)
    print(input_ids)
    batch_beam_size, cur_len = input_ids.shape
    # all beams except the first start at -1e9 so the first step picks distinct tokens
    beam_scores = torch.zeros(size=(batch, num_beams), dtype=torch.float, device=input_ids.device)
    beam_scores[:, 1:] = -1e9
    beam_scores = beam_scores.view(size=(batch * num_beams,))
    next_tokens = torch.zeros(size=(batch, num_beams), dtype=torch.long, device=input_ids.device)
    next_indices = torch.zeros(size=(batch, num_beams), dtype=torch.long, device=input_ids.device)

    # the core of beam search: re-rank the top candidates into beams
    def process(input_ids, next_scores, next_tokens, next_indices):
        next_beam_scores = torch.zeros((batch, num_beams), dtype=next_scores.dtype)
        next_beam_tokens = torch.zeros((batch, num_beams), dtype=next_tokens.dtype)
        next_beam_indices = torch.zeros((batch, num_beams), dtype=next_indices.dtype)
        for batch_idx in range(batch):
            beam_idx = 0
            for beam_token_rank, (next_token, next_score, next_index) in enumerate(
                zip(next_tokens[batch_idx], next_scores[batch_idx], next_indices[batch_idx])
            ):
                batch_beam_idx = batch_idx * num_beams + next_index
                next_beam_scores[batch_idx, beam_idx] = next_score        # score of this path
                next_beam_tokens[batch_idx, beam_idx] = next_token        # token chosen at this step
                next_beam_indices[batch_idx, beam_idx] = batch_beam_idx   # which previous row this path extends
                beam_idx += 1
        return next_beam_scores.view(-1), next_beam_tokens.view(-1), next_beam_indices.view(-1)

    while cur_len < max_length:
        logits = pred(input_ids)              # (batch * num_beams, seq_len, vocab)
        next_token_logits = logits[:, -1, :]  # logits at the current step
        # normalize to log-probabilities
        next_token_scores = F.log_softmax(next_token_logits, dim=-1)  # (batch_size * num_beams, vocab_size)
        # accumulate: current log-probability + running beam score
        next_token_scores = next_token_scores + beam_scores[:, None].expand_as(next_token_scores)
        # reshape for beam search
        vocab_size = next_token_scores.shape[-1]
        next_token_scores = next_token_scores.view(batch, num_beams * vocab_size)
        # top-k scores and flat candidate ids at the current step
        next_token_scores, next_tokens = torch.topk(next_token_scores, num_beams, dim=1, largest=True, sorted=True)
        next_indices = next_tokens // vocab_size  # which beam each candidate came from
        next_tokens = next_tokens % vocab_size    # which vocabulary token it appends
        beam_scores, beam_next_tokens, beam_idx = process(input_ids, next_token_scores, next_tokens, next_indices)
        # update the inputs: gather the rows given by beam_idx and append the chosen tokens  # (batch*beam, seq_len)
        input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
        cur_len = cur_len + 1
    # return the expanded sequences and their scores
    return input_ids, beam_scores


if __name__ == '__main__':
    input_ids = torch.randint(0, 100, size=(3, 1))
    print(input_ids)
    input_ids, beam_scores = beam_search(input_ids, max_length=10, num_beams=3)
    print(input_ids)
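Two indexing tricks above are easy to gloss over: the input expansion via index_select, and the view-then-topk over the flattened (num_beams * vocab_size) scores, whose flat indices are decomposed back into a beam id and a token id with // and %. A tiny standalone illustration with made-up numbers (num_beams = 2, vocab_size = 5):

import torch

# 1) Expanding the batch: each input row is repeated num_beams times.
input_ids = torch.tensor([[7], [42]])  # batch = 2
num_beams = 2
idx = torch.arange(2).view(-1, 1).repeat(1, num_beams).view(-1)  # tensor([0, 0, 1, 1])
print(input_ids.index_select(0, idx))  # rows: [7], [7], [42], [42]

# 2) Flattened top-k: the per-beam scores are viewed as one row of
#    num_beams * vocab_size = 10 candidates, then decomposed after topk.
scores = torch.tensor([[0.10, 0.30, 0.20, 0.05, 0.35,    # beam 0
                        0.15, 0.25, 0.40, 0.10, 0.60]])  # beam 1
top_scores, flat_ids = torch.topk(scores, k=num_beams, dim=1)
print(flat_ids // 5)  # beam each candidate extends: tensor([[1, 1]])
print(flat_ids % 5)   # token each candidate appends: tensor([[4, 2]])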

Reference: the transformers generate() implementation.

  2. A walkthrough of transformers generate()
@torch.no_grad()
def generate(          # model entry point
    self,
    inputs: Optional[torch.Tensor] = None,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None,
    synced_gpus: Optional[bool] = None,
    assistant_model: Optional["PreTrainedModel"] = None,
    streamer: Optional["BaseStreamer"] = None,
    negative_prompt_ids: Optional[torch.Tensor] = None,
    negative_prompt_attention_mask: Optional[torch.Tensor] = None,
    **kwargs,
) -> Union[GenerateOutput, torch.LongTensor]:
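Before diving into the branches below, here is a minimal sketch of how a caller typically reaches this entry point; the checkpoint name is only an example, and defaults vary across transformers versions:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")       # example checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: hello world", return_tensors="pt")
# num_beams > 1 with do_sample=False is what routes generate() into the beam search branch
outputs = model.generate(**inputs, num_beams=3, num_return_sequences=3, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))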
# 10. go into different generation modes
# dispatch to a decoding strategy according to the generation mode
if generation_mode == GenerationMode.ASSISTED_GENERATION:
    ...
# taking beam search as the example
elif generation_mode == GenerationMode.BEAM_SEARCH:  # the beam search algorithm
    # 11. prepare beam search scorer    # initialize the scorer's parameters
    beam_scorer = BeamSearchScorer(
        batch_size=batch_size,
        num_beams=generation_config.num_beams,
        device=inputs_tensor.device,
        length_penalty=generation_config.length_penalty,
        do_early_stopping=generation_config.early_stopping,
        num_beam_hyps_to_keep=generation_config.num_return_sequences,
        max_length=generation_config.max_length,
    )
    # 12. interleave input_ids with `num_beams` additional sequences per batch
    # expand the inputs
    input_ids, model_kwargs = self._expand_inputs_for_generation(
        input_ids=input_ids,
        expand_size=generation_config.num_beams,
        is_encoder_decoder=self.config.is_encoder_decoder,
        **model_kwargs,
    )
    # 13. run beam search: the core decoding call
    result = self.beam_search(
        input_ids,
        beam_scorer,
        logits_processor=prepared_logits_processor,
        stopping_criteria=prepared_stopping_criteria,
        pad_token_id=generation_config.pad_token_id,
        eos_token_id=generation_config.eos_token_id,
        output_scores=generation_config.output_scores,
        output_logits=generation_config.output_logits,
        return_dict_in_generate=generation_config.return_dict_in_generate,
        synced_gpus=synced_gpus,
        sequential=generation_config.low_memory,
        **model_kwargs,
    )
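For reference, the fields read above map directly onto GenerationConfig. Below is a hedged sketch of an equivalent explicit configuration; the field names are real GenerationConfig attributes, but the exact dispatch logic varies across transformers versions:

from transformers import GenerationConfig

gen_config = GenerationConfig(
    num_beams=3,             # > 1 beam with do_sample=False selects GenerationMode.BEAM_SEARCH
    do_sample=False,
    early_stopping=True,     # stop once enough finished hypotheses exist
    length_penalty=1.0,      # > 0.0 promotes longer sequences, < 0.0 shorter ones (applied in finalize)
    num_return_sequences=3,  # must be <= num_beams for beam search
    max_length=50,
)
# outputs = model.generate(**inputs, generation_config=gen_config)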
def beam_search(
    self, input_ids, encoder_output, attention_mask, num_beams, max_length, pad_token_id: int, eos_token_id: int
):
    batch_size = self.beam_scorer.batch_size    # batch size before expansion
    num_beams = self.beam_scorer.num_beams
    batch_beam_size, cur_len = input_ids.shape  # batch size after expansion
    assert (
        num_beams * batch_size == batch_beam_size
    ), f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}."
    beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
    beam_scores[:, 1:] = -1e9
    beam_scores = beam_scores.view((batch_size * num_beams,))
    next_tokens = torch.zeros((batch_size, num_beams), dtype=torch.long, device=input_ids.device)
    next_indices = torch.zeros((batch_size, num_beams), dtype=torch.long, device=input_ids.device)
    past: List[torch.Tensor] = []
    while cur_len < max_length:
        # run the decoder for one step
        logits, past = self._decoder_forward(input_ids, encoder_output, attention_mask, past)
        next_token_logits = logits[:, -1, :]  # logits at the current step
        # adjust tokens for Bart, *e.g.* force tokens at cur_len=1 and max_length
        next_token_logits = self.adjust_logits_during_generation(
            next_token_logits, cur_len=cur_len, max_length=max_length
        )
        # normalize to log-probabilities
        next_token_scores = F.log_softmax(next_token_logits, dim=-1)  # (batch_size * num_beams, vocab_size)
        # pre-process distribution
        next_token_scores = self.logits_processor(input_ids, next_token_scores)
        # accumulate: current log-probability + running beam score
        next_token_scores = next_token_scores + beam_scores[:, None].expand_as(next_token_scores)
        # reshape for beam search
        vocab_size = next_token_scores.shape[-1]
        next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)
        # keep the top 2 * num_beams candidate paths
        next_token_scores, next_tokens = torch.topk(
            next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True
        )
        next_indices = next_tokens // vocab_size
        next_tokens = next_tokens % vocab_size
        # select the surviving paths, their scores and ids: the core step where beam search variants differ
        beam_scores, beam_next_tokens, beam_idx = self.beam_scorer.process(
            input_ids,
            next_token_scores,
            next_tokens,
            next_indices,
            pad_token_id=pad_token_id,
            eos_token_id=eos_token_id,
        )
        # update the inputs: gather the rows given by beam_idx and append the chosen tokens  # (batch*beam, seq_len)
        input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
        cur_len = cur_len + 1
        if len(past) > 0:
            past = self._reorder_cache(past, beam_idx)
        if self.beam_scorer.is_done():
            break
    # choose the best hypotheses and normalize the output
    sequences, sequence_scores = self.beam_scorer.finalize(
        input_ids,
        beam_scores,
        next_tokens,
        next_indices,
        pad_token_id=pad_token_id,
        eos_token_id=eos_token_id,
    )
    return sequences
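Two details of the loop above are worth spelling out. First, topk keeps 2 * num_beams candidates instead of num_beams: when a candidate ends in eos_token_id, the scorer sets it aside as a finished hypothesis, and the spare candidates keep num_beams beams alive. Second, when finalize() ranks finished hypotheses it applies a length penalty; a minimal sketch of that normalization, mirroring in simplified form what transformers' BeamHypotheses.add does:

def length_normalized_score(sum_logprobs: float, hyp_length: int, length_penalty: float) -> float:
    # Final ranking score: cumulative log-probability divided by length ** length_penalty.
    # Log-probabilities are negative, so a larger divisor pulls the score toward zero,
    # which is why length_penalty > 0.0 promotes longer hypotheses.
    return sum_logprobs / (hyp_length ** length_penalty)

# Two finished hypotheses with the same cumulative log-probability:
print(length_normalized_score(-6.0, 4, 1.0))  # -1.5
print(length_normalized_score(-6.0, 8, 1.0))  # -0.75, so the longer one ranks higher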

That concludes this walkthrough of hand-written beam search code. I hope it proves helpful to fellow developers!



