A TensorFlow Prediction Model for the Titanic Shipwreck Data

2024-06-14 08:38

This article walks through building a survival prediction model for the Kaggle Titanic dataset with TensorFlow. Hopefully it provides a useful reference for developers tackling a similar problem; follow along with the steps below.

First, download the data from Kaggle:
https://www.kaggle.com/c/titanic/data

import pandas as pd
import numpy as np
import os, sys

os.getcwd()
data = pd.read_csv('./tt/train.csv')
data.columns
# Keep only the label plus the columns used as features
data = data[['Survived', 'Pclass', 'Sex', 'Age', 'SibSp',
             'Parch', 'Fare', 'Cabin', 'Embarked']]
# Fill missing ages with the mean age
data['Age'] = data['Age'].fillna(data['Age'].mean())
# Encode the Cabin strings as integer codes
data['Cabin'] = pd.factorize(data.Cabin)[0]
data.fillna(0, inplace=True)
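Note that pd.factorize maps each distinct label to an integer and encodes missing values as -1, so missing cabins get their own code rather than being dropped. A quick sketch with made-up cabin values:

```python
import pandas as pd

# pd.factorize returns (codes, uniques); NaN/None is encoded as -1
s = pd.Series(['C85', None, 'C85', 'E46'])
codes, uniques = pd.factorize(s)
print(list(codes))    # [0, -1, 0, 1]
print(list(uniques))  # ['C85', 'E46']
```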
# One-hot encode passenger class
data['p1'] = np.array(data['Pclass'] == 1).astype(np.int32)
data['p2'] = np.array(data['Pclass'] == 2).astype(np.int32)
data['p3'] = np.array(data['Pclass'] == 3).astype(np.int32)
del data['Pclass']

data.Embarked.unique()

# One-hot encode the port of embarkation
data['e1'] = np.array(data['Embarked'] == 'S').astype(np.int32)
data['e2'] = np.array(data['Embarked'] == 'C').astype(np.int32)
data['e3'] = np.array(data['Embarked'] == 'Q').astype(np.int32)
del data['Embarked']

# Encode sex as 1 (male) / 0 (female)
data['Sex'] = [1 if x == 'male' else 0 for x in data.Sex]

data.values.dtype
# .values so the training loop below can index with a permutation array
data_train = data[['Sex', 'Age', 'SibSp',
                   'Parch', 'Fare', 'Cabin', 'p1', 'p2', 'p3', 'e1', 'e2', 'e3']].values
data_target = data['Survived'].values.reshape(len(data), 1)

np.shape(data_train),np.shape(data_target)
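As an aside, the manual one-hot columns above can also be produced with pandas' get_dummies; a minimal sketch (the sample values are illustrative only):

```python
import pandas as pd

# A few illustrative Pclass values
df = pd.DataFrame({'Pclass': [1, 3, 2, 3]})
# One column per class, named p_1 / p_2 / p_3
onehot = pd.get_dummies(df['Pclass'], prefix='p', dtype='int32')
print(onehot.columns.tolist())  # ['p_1', 'p_2', 'p_3']
print(onehot.iloc[0].tolist())  # [1, 0, 0]
```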

import tensorflow as tf

# TF 1.x-style graph API
x = tf.placeholder("float", shape=[None, 12])
y = tf.placeholder("float", shape=[None, 1])

# Logistic regression: one linear layer followed by a sigmoid threshold
weight = tf.Variable(tf.random_normal([12, 1]))
bias = tf.Variable(tf.random_normal([1]))
output = tf.matmul(x, weight) + bias
pred = tf.cast(tf.sigmoid(output) > 0.5, tf.float32)

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=output))

train_step = tf.train.GradientDescentOptimizer(0.0003).minimize(loss)

accuracy = tf.reduce_mean(tf.cast(tf.equal(pred, y), tf.float32))
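For intuition: sigmoid_cross_entropy_with_logits computes -y*log(σ(z)) - (1-y)*log(1-σ(z)) but in a numerically stable rearrangement, max(z, 0) - z*y + log(1 + exp(-|z|)). A quick numpy check that the two formulations agree on some sample logits:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.5, 3.0])   # sample logits
y = np.array([0.0, 1.0, 1.0])    # sample labels

# Naive cross-entropy (can overflow for large |z|)
naive = -y * np.log(sigmoid(z)) - (1 - y) * np.log(1 - sigmoid(z))
# Stable rearrangement used by the TF op
stable = np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

print(np.allclose(naive, stable))  # True
```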

data_test = pd.read_csv('./tt/test.csv')
data_test.columns

# Apply the same preprocessing as the training set
data_test = data_test[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch',
                       'Fare', 'Cabin', 'Embarked']].copy()
data_test['Age'] = data_test['Age'].fillna(data_test['Age'].mean())
data_test['Cabin'] = pd.factorize(data_test.Cabin)[0]
data_test.fillna(0, inplace=True)
data_test['Sex'] = [1 if x == 'male' else 0 for x in data_test.Sex]
data_test['p1'] = np.array(data_test['Pclass'] == 1).astype(np.int32)
data_test['p2'] = np.array(data_test['Pclass'] == 2).astype(np.int32)
data_test['p3'] = np.array(data_test['Pclass'] == 3).astype(np.int32)
data_test['e1'] = np.array(data_test['Embarked'] == 'S').astype(np.int32)
data_test['e2'] = np.array(data_test['Embarked'] == 'C').astype(np.int32)
data_test['e3'] = np.array(data_test['Embarked'] == 'Q').astype(np.int32)
del data_test['Pclass']
del data_test['Embarked']

# Labels for the 418 test rows come from the gender submission file
test_lable = pd.read_csv('./tt/gender.csv')
test_lable = np.reshape(test_lable.Survived.values.astype(np.float32), (418, 1))

sess = tf.Session()
sess.run(tf.global_variables_initializer())
loss_train = []
train_acc = []
test_acc = []

for i in range(25000):
    # Shuffle the training data each epoch
    index = np.random.permutation(len(data_target))
    data_train = data_train[index]
    data_target = data_target[index]
    # Mini-batches of 100
    for n in range(len(data_target) // 100 + 1):
        batch_xs = data_train[n*100:n*100 + 100]
        batch_ys = data_target[n*100:n*100 + 100]
        sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
    if i % 1000 == 0:
        loss_temp = sess.run(loss, feed_dict={x: batch_xs, y: batch_ys})
        loss_train.append(loss_temp)
        train_acc_temp = sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys})
        train_acc.append(train_acc_temp)
        # Also track accuracy on the held-out test set, plotted below
        test_acc_temp = sess.run(accuracy, feed_dict={x: data_test, y: test_lable})
        test_acc.append(test_acc_temp)
        print(loss_temp, train_acc_temp, test_acc_temp)
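The shuffle-and-slice batching used in the loop can be checked in isolation; here is a small numpy sketch with a batch size of 2 for readability (the data is made up):

```python
import numpy as np

X = np.arange(10).reshape(5, 2)    # 5 samples, 2 features
y = np.arange(5).reshape(5, 1)
batch_size = 2

rng = np.random.default_rng(0)
index = rng.permutation(len(y))
X, y = X[index], y[index]          # shuffle features and labels together

batch_sizes = []
for n in range(len(y) // batch_size + 1):
    batch_xs = X[n*batch_size:n*batch_size + batch_size]
    batch_sizes.append(len(batch_xs))

# The final slice holds the remainder; slicing past the end is safe in numpy
print(batch_sizes)  # [2, 2, 1]
```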

import matplotlib.pyplot as plt

plt.plot(loss_train, 'k-')
plt.title('train loss')
plt.show()

plt.plot(train_acc, 'b--', label='train_acc')
plt.plot(test_acc, 'r--', label='test_acc')
plt.title('acc')
plt.legend()
plt.show()

That concludes this walkthrough of a TensorFlow prediction model for the Titanic shipwreck data; hopefully it is helpful to other developers.
