Classifying the MNIST Dataset with a Single-Layer LSTM Network

2024-01-07 03:18

This post walks through classifying the MNIST dataset with a single-layer LSTM network. Each 28×28 image is treated as a sequence of 28 timesteps, one 28-dimensional row vector per step, so the recurrent network reads the image row by row and predicts one of the 10 digit classes from the hidden state at the final step.


Experiment code (using the TensorFlow 1.x framework):

# -*- coding: utf-8 -*-
import tensorflow as tf

# Import the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/data/", one_hot=True)

n_input = 28    # MNIST data input (img shape: 28*28)
n_steps = 28    # timesteps
n_hidden = 128  # number of features in the hidden layer
n_classes = 10  # MNIST classes (digits 0-9, 10 classes in total)

tf.reset_default_graph()

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Split the (batch, 28, 28) input into a list of 28 tensors of shape (batch, 28)
x1 = tf.unstack(x, n_steps, 1)

# 1 BasicLSTMCell
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x1, dtype=tf.float32)

# 2 LSTMCell
#lstm_cell = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)
#outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x1, dtype=tf.float32)

# 3 GRU
#gru = tf.contrib.rnn.GRUCell(n_hidden)
#outputs, states = tf.contrib.rnn.static_rnn(gru, x1, dtype=tf.float32)

# 4 Dynamic RNN
#outputs, _ = tf.nn.dynamic_rnn(gru, x, dtype=tf.float32)
# Transpose to time-major so that outputs[-1] is the last timestep
#outputs = tf.transpose(outputs, [1, 0, 2])

# Feed the output of the last timestep into a fully connected layer (logits)
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)

learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Launch the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 1
    # Keep training until we reach the maximum number of iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape each 784-dim image to 28 sequences of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate accuracy on the current batch
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate loss on the current batch
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " +
                  "{:.5f}".format(acc))
        step += 1
    print(" Finished!")

    # Calculate accuracy for the first 128 MNIST test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
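The four numbered blocks in the listing are interchangeable choices of recurrent cell: block 1 (active) uses BasicLSTMCell, block 2 the fuller LSTMCell, block 3 a GRUCell, and block 4 builds the graph with tf.nn.dynamic_rnn instead of static_rnn (reusing the gru cell from block 3). Only one block should be uncommented at a time; the dynamic_rnn variant keeps x as a single (batch, steps, input) tensor, so its output must be transposed to time-major before outputs[-1] can select the last timestep.

Note also that the script scores only the first 128 test images. A minimal sketch of a less noisy evaluation over all 10,000 test images (the test_data_full/test_label_full names are mine, not the book's; the lines belong inside the same session block):

    # Evaluate on the full 10,000-image test set instead of the first 128,
    # for a less noisy accuracy estimate. Reuses x, y, accuracy, n_steps,
    # n_input, and mnist from the listing; must run inside the same
    # "with tf.Session() as sess:" block, after training finishes.
    test_data_full = mnist.test.images.reshape((-1, n_steps, n_input))
    test_label_full = mnist.test.labels
    print("Full Testing Accuracy:",
          sess.run(accuracy, feed_dict={x: test_data_full, y: test_label_full}))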

Experiment results:
Iter 1280, Minibatch Loss= 2.098885, Training Accuracy= 0.30469
Iter 2560, Minibatch Loss= 1.772232, Training Accuracy= 0.38281
Iter 3840, Minibatch Loss= 1.404505, Training Accuracy= 0.52344
Iter 5120, Minibatch Loss= 1.321466, Training Accuracy= 0.57031
Iter 6400, Minibatch Loss= 1.020606, Training Accuracy= 0.65625
Iter 7680, Minibatch Loss= 0.767583, Training Accuracy= 0.76562
Iter 8960, Minibatch Loss= 0.945606, Training Accuracy= 0.66406
Iter 10240, Minibatch Loss= 0.643211, Training Accuracy= 0.78906
Iter 11520, Minibatch Loss= 0.737389, Training Accuracy= 0.76562
Iter 12800, Minibatch Loss= 0.589967, Training Accuracy= 0.83594
Iter 14080, Minibatch Loss= 0.432091, Training Accuracy= 0.89062
Iter 15360, Minibatch Loss= 0.375092, Training Accuracy= 0.90625
Iter 16640, Minibatch Loss= 0.509971, Training Accuracy= 0.82031
Iter 17920, Minibatch Loss= 0.431015, Training Accuracy= 0.85156
Iter 19200, Minibatch Loss= 0.420453, Training Accuracy= 0.85156
Iter 20480, Minibatch Loss= 0.338827, Training Accuracy= 0.88281
Iter 21760, Minibatch Loss= 0.427024, Training Accuracy= 0.86719
Iter 23040, Minibatch Loss= 0.419629, Training Accuracy= 0.87500
Iter 24320, Minibatch Loss= 0.343750, Training Accuracy= 0.90625
Iter 25600, Minibatch Loss= 0.232130, Training Accuracy= 0.92188
Iter 26880, Minibatch Loss= 0.491618, Training Accuracy= 0.89062
Iter 28160, Minibatch Loss= 0.226970, Training Accuracy= 0.92188
Iter 29440, Minibatch Loss= 0.287028, Training Accuracy= 0.91406
Iter 30720, Minibatch Loss= 0.348053, Training Accuracy= 0.90625
Iter 32000, Minibatch Loss= 0.232494, Training Accuracy= 0.92969
Iter 33280, Minibatch Loss= 0.294077, Training Accuracy= 0.89062
Iter 34560, Minibatch Loss= 0.269400, Training Accuracy= 0.90625
Iter 35840, Minibatch Loss= 0.257503, Training Accuracy= 0.92969
Iter 37120, Minibatch Loss= 0.176288, Training Accuracy= 0.95312
Iter 38400, Minibatch Loss= 0.263634, Training Accuracy= 0.89844
Iter 39680, Minibatch Loss= 0.350406, Training Accuracy= 0.89062
Iter 40960, Minibatch Loss= 0.175449, Training Accuracy= 0.94531
Iter 42240, Minibatch Loss= 0.311644, Training Accuracy= 0.89844
Iter 43520, Minibatch Loss= 0.202412, Training Accuracy= 0.92188
Iter 44800, Minibatch Loss= 0.238732, Training Accuracy= 0.92188
Iter 46080, Minibatch Loss= 0.262362, Training Accuracy= 0.91406
Iter 47360, Minibatch Loss= 0.277031, Training Accuracy= 0.92188
Iter 48640, Minibatch Loss= 0.167007, Training Accuracy= 0.93750
Iter 49920, Minibatch Loss= 0.208343, Training Accuracy= 0.95312
Iter 51200, Minibatch Loss= 0.237634, Training Accuracy= 0.91406
Iter 52480, Minibatch Loss= 0.133993, Training Accuracy= 0.96094
Iter 53760, Minibatch Loss= 0.255377, Training Accuracy= 0.92188
Iter 55040, Minibatch Loss= 0.204812, Training Accuracy= 0.92969
Iter 56320, Minibatch Loss= 0.183624, Training Accuracy= 0.92969
Iter 57600, Minibatch Loss= 0.131443, Training Accuracy= 0.96094
Iter 58880, Minibatch Loss= 0.096448, Training Accuracy= 0.97656
Iter 60160, Minibatch Loss= 0.163977, Training Accuracy= 0.96875
Iter 61440, Minibatch Loss= 0.185323, Training Accuracy= 0.95312
Iter 62720, Minibatch Loss= 0.107512, Training Accuracy= 0.97656
Iter 64000, Minibatch Loss= 0.174152, Training Accuracy= 0.95312
Iter 65280, Minibatch Loss= 0.173235, Training Accuracy= 0.95312
Iter 66560, Minibatch Loss= 0.115825, Training Accuracy= 0.96875
Iter 67840, Minibatch Loss= 0.190322, Training Accuracy= 0.92969
Iter 69120, Minibatch Loss= 0.073072, Training Accuracy= 0.97656
Iter 70400, Minibatch Loss= 0.161416, Training Accuracy= 0.93750
Iter 71680, Minibatch Loss= 0.148715, Training Accuracy= 0.95312
Iter 72960, Minibatch Loss= 0.174622, Training Accuracy= 0.95312
Iter 74240, Minibatch Loss= 0.100780, Training Accuracy= 0.97656
Iter 75520, Minibatch Loss= 0.177840, Training Accuracy= 0.96094
Iter 76800, Minibatch Loss= 0.119568, Training Accuracy= 0.96094
Iter 78080, Minibatch Loss= 0.116565, Training Accuracy= 0.96094
Iter 79360, Minibatch Loss= 0.124705, Training Accuracy= 0.96094
Iter 80640, Minibatch Loss= 0.068246, Training Accuracy= 0.97656
Iter 81920, Minibatch Loss= 0.152009, Training Accuracy= 0.97656
Iter 83200, Minibatch Loss= 0.150834, Training Accuracy= 0.96094
Iter 84480, Minibatch Loss= 0.082806, Training Accuracy= 0.98438
Iter 85760, Minibatch Loss= 0.239210, Training Accuracy= 0.94531
Iter 87040, Minibatch Loss= 0.194339, Training Accuracy= 0.94531
Iter 88320, Minibatch Loss= 0.141747, Training Accuracy= 0.96094
Iter 89600, Minibatch Loss= 0.110870, Training Accuracy= 0.97656
Iter 90880, Minibatch Loss= 0.066232, Training Accuracy= 0.98438
Iter 92160, Minibatch Loss= 0.085497, Training Accuracy= 0.96875
Iter 93440, Minibatch Loss= 0.141791, Training Accuracy= 0.96094
Iter 94720, Minibatch Loss= 0.143089, Training Accuracy= 0.93750
Iter 96000, Minibatch Loss= 0.234196, Training Accuracy= 0.93750
Iter 97280, Minibatch Loss= 0.143507, Training Accuracy= 0.94531
Iter 98560, Minibatch Loss= 0.069923, Training Accuracy= 0.96875
Iter 99840, Minibatch Loss= 0.079662, Training Accuracy= 0.98438
Finished!
Testing Accuracy: 0.976562
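The tf.contrib modules used above were removed after TensorFlow 1.x, so the listing no longer runs on current TensorFlow. As a rough modern equivalent, here is a minimal sketch of the same single-layer, 128-unit LSTM classifier in TF 2.x Keras; the hyperparameters mirror the listing, but this port is my own assumption, not code from the book:

import tensorflow as tf

# Load MNIST: x arrays are (N, 28, 28) uint8 images, y arrays are integer labels
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0  # 28 timesteps of 28 features each
x_test = x_test.astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),   # each image row is one timestep
    tf.keras.layers.LSTM(128),               # single LSTM layer, 128 hidden units
    tf.keras.layers.Dense(10),               # logits for the 10 digit classes
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, batch_size=128, epochs=3)
model.evaluate(x_test, y_test)  # evaluates on the full 10,000-image test set

Unlike the TF 1.x script, this version feeds integer labels to a sparse cross-entropy loss, so no one-hot encoding is needed.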


Reference: 《深度学习之TensorFlow》 (Deep Learning with TensorFlow), edited by Li Jinhong (李金洪).
