Importance Sampling, with MATLAB and Python Implementations

2023-10-15 14:18

This post introduces importance sampling together with MATLAB and Python implementations, in the hope that it serves as a useful reference for developers working on this problem. Follow along with us below!
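As a quick refresher before the code (standard material, stated here for convenience): importance sampling estimates an expectation under a target density $p$ by sampling from a proposal $q$ and reweighting. Both implementations below use the self-normalized form

$$\mathbb{E}_p[f(X)] = \int f(x)\,\frac{p(x)}{q(x)}\,q(x)\,dx \;\approx\; \sum_{i=1}^{N}\bar{w}^{(i)} f\big(x^{(i)}\big), \qquad x^{(i)}\sim q,$$

where $w^{(i)} = p(x^{(i)})/q(x^{(i)})$ are the unnormalized weights and $\bar{w}^{(i)} = w^{(i)}\big/\sum_{j} w^{(j)}$ the normalized ones.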

1 Practice
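The MATLAB demo below works on a linear Gaussian state-space model (with $\phi=0.95$ and $\sigma_v=\sigma_w=1$, as set in the code):

$$x_1\sim\mathcal{N}(0,1),\qquad x_k=\phi\,x_{k-1}+v_k,\quad v_k\sim\mathcal{N}(0,\sigma_v^2),\qquad y_k=x_k+w_k,\quad w_k\sim\mathcal{N}(0,\sigma_w^2).$$

For this model the exact filtering mean and variance are available from the Kalman filter, so sequential importance sampling (SIS) with the prior proposal and with the locally optimal proposal can both be checked against the exact answer.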

MATLAB:

% Demo: Kalman filter + Sequential Importance Sampling for a linear Gaussian model,
% using the prior proposal and the locally optimal proposal
T=100;
sv=1;
sw=sqrt(1);
phi=0.95;

% simulate data according to the model
x=zeros(1,T);
y=zeros(1,T);
x(1)=randn(1);
for k=2:T
    x(1,k)=phi*x(1,k-1)+sv.*randn(1);
end
y=x+sw.*randn(1,T);
plot(y)
% load simulatedstatesobs data corresponding to the lecture notes

% number of samples/particles
N=1000;

% exact inference using the Kalman filter
mp=zeros(1,T);
mf=zeros(1,T);
vp=zeros(1,T);
vf=zeros(1,T);
my=zeros(1,T);
vy=zeros(1,T);
loglike=0;
mp(1,1)=0;
vp(1,1)=1;
my(1,1)=mp(1,1);
vy(1,1)=vp(1,1)+sw^2;
loglike=-0.5*log(2*pi*vy(1,1))-0.5*(y(1,1)-my(1,1))^2/vy(1,1);
for k=1:T
    % update
    vf(1,k)=sw^2*vp(1,k)/(vp(1,k)+sw^2);
    mf(1,k)=vf(1,k)*(mp(1,k)/vp(1,k)+y(1,k)/sw^2);
    % predict
    if (k<T)
        mp(1,k+1)=phi*mf(1,k);
        vp(1,k+1)=phi^2*vf(1,k)+sv^2;
        my(1,k+1)=mp(1,k+1);
        vy(1,k+1)=vp(1,k+1)+sw^2;
        loglike=loglike-0.5*log(2*pi*vy(1,k+1))-0.5*(y(1,k+1)-my(1,k+1))^2/vy(1,k+1);
    end
end

% prior proposal
xs1=zeros(T,N);
lw1=zeros(T,N);
w1=zeros(T,N);
wnorm1=zeros(T,N);
ess1=zeros(1,T);
xmean1=zeros(1,T);
xvar1=zeros(1,T);
varlogw1=zeros(1,T);

% locally optimal proposal
xs2=zeros(T,N);
lw2=zeros(T,N);
w2=zeros(T,N);
wnorm2=zeros(T,N);
ess2=zeros(1,T);
xmean2=zeros(1,T);
xvar2=zeros(1,T);
varlogw2=zeros(1,T);

% SIS using the prior proposal
for k=1:T
    if (k==1)
        % sample particles from the initial distribution
        xs1(k,:)=randn(1,N);
        % compute log weights for numerical stability
        lw1(k,:)=-0.5*log(2*pi*sw^2).*ones(1,N)-0.5*(y(1,k)*ones(1,N)-xs1(k,:)).^2./sw^2;
    else
        % propagate particles according to the prior
        xs1(k,:)=phi.*xs1(k-1,:)+sv.*randn(1,N);
        % compute log weights for numerical stability
        lw1(k,:)=lw1(k-1,:)-0.5*log(2*pi*sw^2).*ones(1,N)-0.5*(y(1,k)*ones(1,N)-xs1(k,:)).^2./sw^2;
    end
    varlogw1(1,k)=var(lw1(k,:));
    lmax=max(lw1(k,:));
    w1(k,:)=exp(lw1(k,:)-lmax);  % correct only up to a multiplicative factor for unnormalized weights
    wnorm1(k,:)=w1(k,:)./sum(w1(k,:));
    % effective sample size
    ess1(1,k)=1/sum(wnorm1(k,:).^2);
    % compute estimate of E(Xk|y1:k)
    xmean1(1,k)=sum(wnorm1(k,:).*xs1(k,:));
    % compute estimate of Var(Xk|y1:k)
    xvar1(1,k)=sum(wnorm1(k,:).*xs1(k,:).^2)-xmean1(1,k)^2;
end

% SIS using the locally optimal proposal
% variance of the locally optimal proposal
ss2=sw^2*sv^2/(sv^2+sw^2);
ss=sqrt(ss2);
for k=1:T
    if (k==1)
        % sample particles from the initial proposal, i.e. p(x1|y1)
        xs2(k,:)=ss2*y(1,k)*ones(1,N)/sw^2+ss.*randn(1,N);
        % compute log weights for numerical stability
        lw2(k,:)=-0.5*log(2*pi*(sw^2+sv^2)).*ones(1,N)-0.5*(y(1,k)*ones(1,N)).^2./(sw^2+sv^2);
    else
        % propagate particles according to p(xk|xk-1,yk)
        xs2(k,:)=ss2.*(phi.*xs2(k-1,:)./sv^2+y(1,k)/sw^2)+ss.*randn(1,N);
        lw2(k,:)=lw2(k-1,:)-0.5*log(2*pi*(sw^2+sv^2)).*ones(1,N)-0.5*(y(1,k)*ones(1,N)-phi.*xs2(k-1,:)).^2./(sw^2+sv^2);
    end
    varlogw2(1,k)=var(lw2(k,:));
    lmax=max(lw2(k,:));
    w2(k,:)=exp(lw2(k,:)-lmax);  % correct only up to a multiplicative factor for unnormalized weights
    wnorm2(k,:)=w2(k,:)./sum(w2(k,:));
    % effective sample size
    ess2(1,k)=1/sum(wnorm2(k,:).^2);
    % compute estimate of E(Xk|y1:k)
    xmean2(1,k)=sum(wnorm2(k,:).*xs2(k,:));
    % compute estimate of Var(Xk|y1:k)
    xvar2(1,k)=sum(wnorm2(k,:).*xs2(k,:).^2)-xmean2(1,k)^2;
end

% display ESS
figure(1)
plot(ess1,'r')
hold on
plot(ess2,'b')
print essprioroptimalhighsw -depsc
axis([1 T 0 N])

% display variance of the log weights
figure(2)
subplot(2,1,1)
plot(varlogw1,'r');
subplot(2,1,2)
plot(varlogw2,'b');
print varlogweightprioroptimalhighsw -depsc

% display absolute error of the exact conditional mean versus SIS
figure(3)
plot(abs(mf-xmean1),'r');
hold on
plot(abs(mf-xmean2),'b');
axis([1 T 0 5])
print errorxmeanprioroptkalman -depsc

% display exact variance versus SIS
figure(4)
plot(xvar1,'r');
hold on
plot(xvar2,'b');
plot(vf,'c');
axis([1 T 0 3])
print vmeanprioroptkalman -depsc
hold off
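For reference, the `ss2`, `xs2`, and `lw2` lines above implement the standard locally optimal proposal for this linear Gaussian model (a known closed-form result, written out here for clarity): $p(x_k\mid x_{k-1},y_k)$ is Gaussian,

$$p(x_k\mid x_{k-1},y_k)=\mathcal{N}\!\Big(x_k;\ \sigma^2\big(\tfrac{\phi x_{k-1}}{\sigma_v^2}+\tfrac{y_k}{\sigma_w^2}\big),\ \sigma^2\Big),\qquad \sigma^2=\frac{\sigma_v^2\,\sigma_w^2}{\sigma_v^2+\sigma_w^2},$$

and the incremental importance weight is $p(y_k\mid x_{k-1})=\mathcal{N}(y_k;\ \phi x_{k-1},\ \sigma_v^2+\sigma_w^2)$, which is exactly what `lw2` accumulates.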

Python:

# -*- coding: utf-8 -*-
"""
Created on Mon Apr 22 21:19:55 2019

@author: win10
"""
import numpy as np
from scipy import stats
from numpy.random import randn, random
import matplotlib.pyplot as plt


def gaussian_particles(mean, std, N):
    # initialize N particles from a Gaussian around the initial state
    particles = np.empty((N, 1))
    particles[:, 0] = mean + (randn(N) * std)
    return particles


def predict(particles, d, std, dt=1.):
    # propagate each particle through the (very simple) state update model,
    # adding process noise; the original computed the increment without
    # applying it, so the particles are now moved in place
    N = len(particles)
    particles[:, 0] += (d * dt) + (randn(N) * std)
    return particles


def update(particles, weights, z, R):
    # reweight particles by the measurement likelihood; in-place operations
    # (*=, /=) are used so the caller's weight array is actually modified
    weights.fill(1.)
    weights *= stats.norm(z, R).pdf(particles[:, 0])
    weights += 1.e-100            # avoid round-off to zero
    weights /= np.sum(weights)    # normalize


def simple_resample(particles, weights):
    N = len(particles)
    cumulative_sum = np.cumsum(weights)
    cumulative_sum[-1] = 1.       # avoid round-off error
    indexes = np.searchsorted(cumulative_sum, random(N))
    # resample according to indexes
    particles[:] = particles[indexes]
    weights.fill(1.0 / N)


def neff(weights):
    # effective sample size
    return 1. / np.sum(np.square(weights))


def estimate(particles, weights):
    '''returns the mean and variance of the weighted particles.'''
    mean = np.average(particles, weights=weights, axis=0)
    var = np.average((particles - mean)**2, weights=weights, axis=0)
    return mean, var


if __name__ == '__main__':
    N = 100       # number of particles
    x_0 = 0.1     # initial state
    x_N = 0.001   # noise std in the system
    x_R = 0.001   # noise std in the measurement
    T = 10        # time steps
    d = 0.001     # very simple state update model

    # initial particles, Gaussian distributed
    particles = gaussian_particles(x_0, x_N, N)
    weights = np.zeros(N)

    true_state = [0.1]
    for t in range(T):
        # the true state drifts deterministically; the sensor adds noise
        true_state.append(true_state[t] + 0.001)
        z = true_state[t] + (randn(N) * x_R)
        # predict particles
        predict(particles, d, x_N)
        # incorporate measurements and update our belief (posterior)
        update(particles, weights, z=z, R=x_R)
        # resample if the number of effective particles drops below N/2
        if neff(weights) < N / 2:
            simple_resample(particles, weights)
        mu, var = estimate(particles, weights)

    plt.plot(np.arange(len(true_state)), true_state)
    plt.show()
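One detail worth isolating: the MATLAB listing exponentiates log-weights only after subtracting their maximum, since unnormalized weights are only defined up to a constant factor and direct exponentiation would underflow. A minimal NumPy sketch of that normalization and the resulting effective sample size (the function names here are mine, not taken from either listing):

import numpy as np

def normalize_log_weights(logw):
    # subtract the max log-weight before exponentiating to avoid underflow;
    # the constant factor cancels in the normalization
    w = np.exp(logw - np.max(logw))
    return w / np.sum(w)

def effective_sample_size(wnorm):
    # ESS = 1/sum(w_i^2): N for uniform weights, near 1 when degenerate
    return 1.0 / np.sum(wnorm ** 2)

# example: the ESS collapses as the spread of the log-weights grows
wn = normalize_log_weights(np.random.randn(1000) * 5.0)
print(effective_sample_size(wn))

This is the quantity `ess1`/`ess2` track in the MATLAB demo and `neff` computes in the Python script; resampling when it drops below N/2 is what keeps the particle set from degenerating.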

References:
1 Gaussian Process with Particle Filter - Part 2 (Python source code);
2 Oxford course page (with MATLAB code);
3 Douban: a summary of particle filtering

That concludes this article on importance sampling and its MATLAB/Python implementations. We hope it proves helpful to fellow developers!


