TensorFlow | Implementing MNIST Handwritten Digit Recognition with TensorFlow

2024-06-05 19:48

This article walks through implementing MNIST handwritten digit recognition with TensorFlow; hopefully it provides a useful reference for developers working on this problem. Let's go through it step by step.

github:https://github.com/MichaelBeechan
CSDN:https://blog.csdn.net/u011344545

Code: https://github.com/MichaelBeechan/Learning_TensorFlow-Kaggle_MNIST — forks and stars welcome

Learning_TensorFlow-Kaggle_MNIST

A step-by-step introduction to TensorFlow and neural networks through a project: MNIST handwritten digit recognition.

**TF_Variable: Getting Started with TensorFlow**

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: TensorFlow variable initialization
https://baike.baidu.com/item/TensorFlow/18828108?fr=aladdin
"""
import tensorflow as tf
import numpy as np

# Define variables
w = tf.Variable([[0.5, 1.0]])
x = tf.Variable([[2.0], [1.0]])
# Matrix multiplication
y = tf.matmul(w, x)
print(y)  # prints the Tensor object, not its value

# Random-number ops
norm = tf.random_normal([2, 3], mean=-1, stddev=4)
c = tf.constant([[1, 2], [3, 4], [5, 6]])
shuff = tf.random_shuffle(c)  # shuffle the rows of c

sess = tf.Session()
print(sess.run(norm))
print(sess.run(shuff))

# Convert NumPy data to a type TensorFlow can use
a = np.zeros((3, 3))
ta = tf.convert_to_tensor(a)
print(sess.run(ta))

# Create a variable and assign to it in a for loop
num = tf.Variable(0, name="count")
new_value = tf.add(num, 10)
op = tf.assign(num, new_value)
print(op)

# Initialize global variables
init_op = tf.global_variables_initializer()
# Define and run a session
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(num))
    for i in range(5):
        sess.run(op)
        print(sess.run(num))

# Set a placeholder's value via feed:
# a placeholder is declared without a value and fed at computation time
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
value_new = tf.multiply(input1, input2)
with tf.Session() as sess:
    print(sess.run(value_new, feed_dict={input1: 23.0, input2: 11.0}))
```
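To sanity-check what the graph above computes, the same matrix product can be reproduced with plain NumPy (a hypothetical check, not part of the original script):

```python
import numpy as np

# Reproduce the graph's matrix product: (1x2) @ (2x1) -> (1x1)
w = np.array([[0.5, 1.0]])
x = np.array([[2.0], [1.0]])
y = w @ x
print(y)  # [[2.]] since 0.5*2.0 + 1.0*1.0 = 2.0
```

This is exactly the value `sess.run(y)` would return for the TensorFlow graph after initializing the variables.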

**Kaggle_mnist**

A single-layer network that uses softmax as the activation function, cross-entropy as the loss function, and gradient descent as the optimizer.
Accuracy: around 88%.
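Before reading the TensorFlow code, it may help to see what softmax and cross-entropy actually compute. The following NumPy sketch (a hypothetical illustration, not part of the original project) mirrors the two formulas on a single 3-class example:

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, one_hot):
    # mean negative log-probability assigned to the true class
    return -np.mean(np.sum(one_hot * np.log(probs + 1e-12), axis=1))

logits = np.array([[2.0, 1.0, 0.1]])   # raw scores for 3 classes
target = np.array([[1.0, 0.0, 0.0]])   # one-hot: true class is 0
probs = softmax(logits)
loss = cross_entropy(probs, target)
```

Gradient descent then nudges the weights to increase the probability of the true class, which drives this loss toward zero.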

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: Kaggle MNIST handwritten digit recognition (Digit Recognizer)
http://wiki.jikexueyuan.com/project/tensorflow-zh/tutorials/mnist_beginners.html
"""
"""
1. Data preparation
2. Model design
3. Implementation
Each image is a 28*28 = 784 two-dimensional array, so the training and test
data can be reshaped into [42000, 784] and [28000, 784] arrays respectively.
Model:
1) the simplest possible single-layer neural network
2) softmax as the activation function
3) cross-entropy as the loss function
4) gradient descent as the optimizer
"""
# ~88.45% recognition accuracy
import pandas as pd
import numpy as np
import tensorflow as tf

# Load the data
train = pd.read_csv("train.csv")
images = train.iloc[:, 1:].values
# labels_flat = train[[0]].values.ravel()
labels_flat = train.iloc[:, 0].values.ravel()

# Preprocess the inputs
images = images.astype(np.float)
images = np.multiply(images, 1.0 / 255.0)
print("Input data shape: (%g, %g)" % images.shape)
images_size = images.shape[1]
images_width = images_height = np.ceil(np.sqrt(images_size)).astype(np.uint8)
print("Image width = {0}\nImage height = {1}".format(images_width, images_height))
x = tf.placeholder('float', shape=[None, images_size])

# Preprocess the labels
labels_count = np.unique(labels_flat).shape[0]
print('Number of classes = {0}'.format(labels_count))
y = tf.placeholder('float', shape=[None, labels_count])

# One-hot encoding for discrete labels
# (scikit-learn ships a ready-made OneHotEncoder() for this)
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    # .flat returns a 1-D iterator over the array
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
print('Label shape: ({0[0]}, {0[1]})'.format(labels.shape))

# Split into training and validation sets
VALIDATION_SIZE = 2000
validation_images = images[:VALIDATION_SIZE]
validation_labels = labels[:VALIDATION_SIZE]
train_images = images[VALIDATION_SIZE:]
train_labels = labels[VALIDATION_SIZE:]
batch_size = 100
n_batch = len(train_images) // batch_size

# Build the network
weight = tf.Variable(tf.zeros([784, 10]))
biases = tf.Variable(tf.zeros([10]))
result = tf.matmul(x, weight) + biases
prediction = tf.nn.softmax(result)
# Softmax cross-entropy on the raw logits (the original called
# sigmoid_cross_entropy_with_logits on the softmax output, which is incorrect)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=result))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
init = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(50):
        for batch in range(n_batch):
            batch_x = train_images[batch * batch_size:(batch + 1) * batch_size]
            batch_y = train_labels[batch * batch_size:(batch + 1) * batch_size]
            sess.run(train_step, feed_dict={x: batch_x, y: batch_y})
        accuracy_n = sess.run(accuracy, feed_dict={x: validation_images, y: validation_labels})
        print("Epoch " + str(epoch + 1) + ", accuracy: " + str(accuracy_n))
```
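The `dense_to_one_hot` function above relies on NumPy's `.flat` indexing trick. The same encoding can be cross-checked against `np.eye` on a tiny example (a hypothetical check, not in the original script):

```python
import numpy as np

def dense_to_one_hot(labels_dense, num_classes):
    # identical logic to the function in the script above
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    # write a 1 at flat position (row * num_classes + label) for each row
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = np.array([3, 0, 2])
one_hot = dense_to_one_hot(labels, 4)
# np.eye gives the same encoding in one line: row i of eye(4) is the
# one-hot vector for class i
assert (one_hot == np.eye(4)[labels]).all()
```

The flat-index version avoids fancy indexing and works on any 1-D integer label array.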

**CNN_mnist**

A convolutional neural network: conv layer 1 + pooling layer 1 + conv layer 2 + pooling layer 2 + fully connected layer + dropout layer + output layer.
Accuracy: 0.984 after 20 training epochs.
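The `7 * 7 * 64` flattening size used in the code below follows directly from this architecture. A quick sanity check of the shape arithmetic (assuming SAME padding and 2x2 pooling, as the code uses):

```python
# 28x28 input; SAME-padded convolutions keep the spatial size,
# and each 2x2 max pool halves it
size = 28
size //= 2          # after pool 1: 14x14
size //= 2          # after pool 2: 7x7
channels = 64       # feature maps produced by the second conv layer
flat = size * size * channels
print(flat)  # 3136 = 7 * 7 * 64, the fully connected layer's input width
```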

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: MNIST Digit Recognizer CNN
https://www.zhihu.com/question/52668301
"""
# conv1 + pool1 + conv2 + pool2 + fully connected + dropout + output layer
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plot
from tensorflow.examples.tutorials.mnist import input_data
import pandas as pd

# Load the data
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Extract and preprocess the pixels; astype() converts the data type
x_train = train.iloc[:, 1:].values
x_train = x_train.astype(np.float)
x_train = np.multiply(x_train, 1.0 / 255.0)

# Get image width and height
image_size = x_train.shape[1]
images_width = images_height = np.ceil(np.sqrt(image_size)).astype(np.uint8)
print('Sample shape: (%g, %g)' % x_train.shape)
print('Image dimension: {0}'.format(image_size))
print('Image width: {0}\nheight: {1}'.format(images_width, images_height))

# Get the labels
labels_flat = train.iloc[:, 0].values.ravel()
# For a 1-D array or list, np.unique removes duplicate elements and
# returns a new sorted array of the distinct values
labels_count = np.unique(labels_flat).shape[0]

# One-hot encoding
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

# One-hot encode the labels
labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
print('Labels ({0[0]}, {0[1]})'.format(labels.shape))
print('Label example: [{0}] --> {1}'.format(25, labels[25]))

# Split the training data into train and validation sets
VALIDATION_SIZE = 2000
train_images = x_train[VALIDATION_SIZE:]
train_labels = labels[VALIDATION_SIZE:]
validation_images = x_train[:VALIDATION_SIZE]
validation_labels = labels[:VALIDATION_SIZE]

# Set the batch size and get the total number of batches
batch_size = 100
n_batch = len(train_images) // batch_size

# Define placeholders: (data) x: 784, (labels) y: 10
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Initialize weights from a truncated normal distribution: values more than
# two standard deviations from the mean are redrawn
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

# Initialize biases to a small nonzero constant
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Wrap TensorFlow's 2-D convolution
def conv2D(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Wrap TensorFlow's pooling layer
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# Reshape the input into a 4-D tensor; dims 2 and 3 are width and height,
# dim 4 is the number of color channels
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Compute 32 features over 3*3 patches
w_conv1 = weight_variable([3, 3, 1, 32])
b_conv1 = bias_variable([32])
# 28*28 images, convolution stride 1, 2*2 max pooling:
# after the first pool [28/2, 28/2] = [14, 14], after the second [14/2, 14/2] = [7, 7]
h_conv1 = tf.nn.relu(conv2D(x_image, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# On top of that, generate 64 features
w_conv2 = weight_variable([6, 6, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2D(h_pool1, w_conv2) + b_conv2)
# 2*2 max pool --> [7, 7]
h_pool2 = max_pool_2x2(h_conv2)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])

# Fully connected layer with 1024 neurons
w_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

# Dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# 1024-D to 10-D output
w_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, w_fc2) + b_fc2

# Loss function: softmax cross-entropy
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=y_conv))
# Optimizer
train_step_1 = tf.train.AdadeltaOptimizer(learning_rate=0.1).minimize(loss)

# Compute accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_conv, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Saver for checkpointing the model
global_step = tf.Variable(0, name='global_step', trainable=False)
saver = tf.train.Saver()

# Initialize variables
init = tf.global_variables_initializer()

# Train
with tf.Session() as sess:
    sess.run(init)
    # saver.restore(sess, "model.ckpt-12")
    # 20 iterations
    for epoch in range(20):
        for batch in range(n_batch):
            # fetch one batch of data per step
            batch_x = train_images[batch * batch_size:(batch + 1) * batch_size]
            batch_y = train_labels[batch * batch_size:(batch + 1) * batch_size]
            # the key step: run the training op
            sess.run(train_step_1, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
        # compute validation accuracy once per epoch
        accuracy_n = sess.run(accuracy, feed_dict={x: validation_images, y: validation_labels, keep_prob: 1.0})
        print("The " + str(epoch + 1) + "th epoch, accuracy is " + str(accuracy_n))
        # save the trained model
        # global_step.assign(epoch).eval()
        # saver.save(sess, "model.ckpt", global_step=global_step)
```
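To make the pooling step concrete, here is a small NumPy illustration of 2x2 max pooling (a hypothetical helper, independent of the `tf.nn.max_pool` op used above):

```python
import numpy as np

def max_pool_2x2(x):
    # x: (H, W) with even H and W, as with 28x28 MNIST feature maps;
    # split into non-overlapping 2x2 blocks and take each block's max
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(16).reshape(4, 4)
pooled = max_pool_2x2(img)
print(pooled)  # [[ 5  7]
               #  [13 15]]
```

Each output value keeps only the strongest activation in its 2x2 window, which is what halves the spatial size from 28 to 14 and then to 7 in the network above.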

Next step: improve the model to push accuracy higher, using self-normalizing neural networks.

That concludes this article on implementing MNIST handwritten digit recognition with TensorFlow. I hope it proves helpful to fellow developers!


