Logistic Regression and Softmax Implementation -- TensorFlow Part -- Pan Deng's Machine Learning Notes
Python version: 3.6; TensorFlow version: 1.15.0; Editor: PyCharm
- Task:
Classify the MNIST handwritten-digit dataset. MNIST is the standard entry-level dataset for classification, and we will use it often.
Import the required libraries and the dataset
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# under TF 1.15 the tutorials package lives in tensorflow_core
from tensorflow_core.examples.tutorials.mnist import input_data
from tensorflow.python.framework import ops

ops.reset_default_graph()
# create the session
sess = tf.Session()
mnist = input_data.read_data_sets(r'C:\Users\潘登\PycharmProjects\神经网络\MNIST_data_bak', one_hot=True)
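A quick look at what the reader returns helps before building the model (shapes follow from the standard MNIST train/test split; the example label shown in the comment is illustrative):

print(mnist.train.images.shape)  # (55000, 784): each image is a flattened 28x28 vector
print(mnist.train.labels.shape)  # (55000, 10): one-hot labels because one_hot=True
print(mnist.test.images.shape)   # (10000, 784)
print(mnist.train.labels[0])     # e.g. [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.] -> digit 7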
Declare the learning rate, batch size, placeholders, and model variables
learning_rate = 0.01
batch_size = 500
x_data = tf.placeholder(shape=[None, 784], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 10], dtype=tf.float32)  # width 10 because the labels are one-hot encoded
W = tf.Variable(tf.zeros(shape=[784, 10]))  # initialize W to zeros (safe here: softmax regression is convex)
b = tf.Variable(tf.zeros(shape=[10]))       # initialize b to zeros
The Softmax model

# keep model_output as raw logits; tf.nn.softmax_cross_entropy_with_logits below applies
# the softmax itself, so wrapping this in tf.nn.softmax would apply softmax twice
model_output = tf.add(tf.matmul(x_data, W), b)
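For reference, with logits $z = xW + b$ the model's predicted probabilities and the per-example loss used below are:

$$\hat{y}_j = \mathrm{softmax}(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{10} e^{z_k}}, \qquad \text{cross\_entropy} = -\sum_{j=1}^{10} y_j \log \hat{y}_j$$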
Declare the loss function (cross entropy), initialize the variables, and declare the optimizer

# the fused op expects raw logits and returns one loss per example; average over the batch
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_target, logits=model_output))
init = tf.global_variables_initializer()
my_opt = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_step = my_opt.minimize(cross_entropy)
sess.run(init)
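To see what the fused loss op is doing, here is a NumPy-only sanity check (toy numbers, not part of the original post): a softmax over the logits followed by cross entropy against a one-hot label.

z = np.array([2.0, 1.0, 0.1])       # toy logits for 3 classes
y_true = np.array([1.0, 0.0, 0.0])  # one-hot true label
p = np.exp(z - z.max())             # subtract the max for numerical stability
p /= p.sum()                        # softmax probabilities, sum to 1
loss = -np.sum(y_true * np.log(p))  # cross entropy; the TF op returns this per example
print(p, loss)                      # loss is about 0.417 here

Doing the softmax inside the loss op lets TensorFlow use a numerically stable formulation, which is another reason to feed it raw logits.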
Save and visualize the computation graph

writer = tf.summary.FileWriter(r"C:\Users\潘登\PycharmProjects\神经网络\Tensorflow深度学习与实战\graph_softmax", sess.graph)  # save the computation graph

Note: how to view the computation graph was covered in "Linear Regression Implementation -- TensorFlow Part"; in short, point tensorboard --logdir at the directory above and open the URL it prints.
Train the model

# Loop 1000 times, training on a randomly drawn batch at each step;
# record the loss at every step and print it every 100 steps for later visualization
loss_vec = []
for i in range(1000):
    rand_x, rand_y = mnist.train.next_batch(batch_size)
    # objective: minimize the loss on this batch
    sess.run(train_step, feed_dict={x_data: rand_x,
                                    y_target: rand_y})
    # record the current batch loss
    temp_loss = sess.run(cross_entropy, feed_dict={x_data: rand_x,
                                                   y_target: rand_y})
    loss_vec.append(temp_loss)
    # print every 100 steps
    if (i + 1) % 100 == 0:
        print('Step:', i + 1)
        print('Loss:', temp_loss)
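The post only tracks the training loss; a quick extra check (not in the original) is to measure accuracy on the test set, since the argmax of the logits is the predicted digit:

# a prediction is correct when the argmax of the logits matches the argmax of the one-hot label
correct = tf.equal(tf.argmax(model_output, 1), tf.argmax(y_target, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
print('Test accuracy:', sess.run(accuracy, feed_dict={x_data: mnist.test.images,
                                                      y_target: mnist.test.labels}))

Softmax regression typically lands somewhere around 92% on MNIST.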
Inspect the training results
plt.figure(2)
plt.plot(loss_vec, 'k--')
plt.title('cross_entropy per Generation')
plt.xlabel('Generation')
plt.ylabel('cross_entropy')
plt.show()
Pick a few samples from the test set for a spot check

# pick a few samples from the test set for a spot check
images = mnist.test.images
labels = mnist.test.labels
# convert the trained model's logits into class probabilities
y = tf.nn.softmax(model_output)
# plot the images
plt.figure(figsize=(12, 8))
plt.rcParams['font.sans-serif'] = ['SimHei']  # a font that can render Chinese characters
plt.rcParams['axes.unicode_minus'] = False    # keep the minus sign rendering correctly
for test_num in range(1877, 1883):
    image = images[test_num]
    label = labels[test_num]
    a = np.array(image).reshape(1, 784)
    result = sess.run(y, feed_dict={x_data: a})
    plt.subplot(2, 3, test_num - 1876)
    plt.imshow(image.reshape(28, 28))
    # np.argmax recovers the digit index from the one-hot label / probability vector
    plt.title('True digit: %d\nPredicted: %d' % (np.argmax(label), np.argmax(result)))
plt.show()
The resulting figure shows the six test images, each titled with its true digit and the model's prediction.
Complete code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# under TF 1.15 the tutorials package lives in tensorflow_core
from tensorflow_core.examples.tutorials.mnist import input_data
from tensorflow.python.framework import ops

ops.reset_default_graph()
# create the session
sess = tf.Session()
mnist = input_data.read_data_sets(r'C:\Users\潘登\PycharmProjects\神经网络\MNIST_data_bak', one_hot=True)
# Declare the learning rate, batch size, placeholders, and model variables
learning_rate = 0.01
batch_size = 500
x_data = tf.placeholder(shape=[None, 784], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 10], dtype=tf.float32)  # width 10 because the labels are one-hot encoded
W = tf.Variable(tf.zeros(shape=[784, 10]))  # initialize W to zeros
b = tf.Variable(tf.zeros(shape=[10]))       # initialize b to zeros
# The softmax model: keep raw logits here; the softmax is applied inside the loss op
model_output = tf.add(tf.matmul(x_data, W), b)
# Declare the loss (cross entropy averaged over the batch), initialize variables, declare the optimizer
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_target, logits=model_output))
init = tf.global_variables_initializer()
my_opt = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_step = my_opt.minimize(cross_entropy)
sess.run(init)
writer = tf.summary.FileWriter(r"C:\Users\潘登\PycharmProjects\神经网络\Tensorflow深度学习与实战\graph_softmax", sess.graph)  # save the computation graph
# Loop 1000 times, training on a randomly drawn batch at each step;
# record the loss at every step and print it every 100 steps for later visualization
loss_vec = []
for i in range(1000):
    rand_x, rand_y = mnist.train.next_batch(batch_size)
    # objective: minimize the loss on this batch
    sess.run(train_step, feed_dict={x_data: rand_x,
                                    y_target: rand_y})
    # record the current batch loss
    temp_loss = sess.run(cross_entropy, feed_dict={x_data: rand_x,
                                                   y_target: rand_y})
    loss_vec.append(temp_loss)
    # print every 100 steps
    if (i + 1) % 100 == 0:
        print('Step:', i + 1)
        print('Loss:', temp_loss)
# plot the loss curve
plt.figure(2)
plt.plot(loss_vec, 'k--')
plt.title('cross_entropy per Generation')
plt.xlabel('Generation')
plt.ylabel('cross_entropy')
plt.show()
# pick a few samples from the test set for a spot check
images = mnist.test.images
labels = mnist.test.labels
# convert the trained model's logits into class probabilities
y = tf.nn.softmax(model_output)
# plot the images
plt.figure(figsize=(12, 8))
plt.rcParams['font.sans-serif'] = ['SimHei']  # a font that can render Chinese characters
plt.rcParams['axes.unicode_minus'] = False    # keep the minus sign rendering correctly
for test_num in range(1877, 1883):
    image = images[test_num]
    label = labels[test_num]
    a = np.array(image).reshape(1, 784)
    result = sess.run(y, feed_dict={x_data: a})
    plt.subplot(2, 3, test_num - 1876)
    plt.imshow(image.reshape(28, 28))
    # np.argmax recovers the digit index from the one-hot label / probability vector
    plt.title('True digit: %d\nPredicted: %d' % (np.argmax(label), np.argmax(result)))
plt.show()
That's all for the Logistic Regression and Softmax implementation -- TensorFlow part. On to the next chapter! pd's Machine Learning