A Detailed Guide to Implementing Linear Regression with TensorFlow 2


Contents

  • Overview
  • MSE
  • Linear Regression
    • Formula
    • Gradient Descent
  • Implementing Linear Regression
    • Computing the MSE
    • Gradient Descent
    • Training Loop
    • Main Function
    • Complete Code

Overview

Linear Regression uses regression analysis to determine the quantitative dependency between two or more interrelated variables.


Readers who are not yet familiar with linear regression may want to read this article first:

python深度总结线性回归 (an in-depth summary of linear regression in Python)

MSE

The Mean Square Error (MSE) measures the average squared deviation between predictions and true values. Formula:

MSE = \frac{1}{N} \sum_{i=1}^{N} \left( y_{predict}^{(i)} - y_{real}^{(i)} \right)^2

y_predict: the predicted value

y_real: the true value
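A quick numeric check of the formula in NumPy (the two arrays below are made-up example values):


import numpy as np

y_predict = np.array([2.9, 4.1, 5.2])  # hypothetical predicted values
y_real = np.array([3.0, 4.0, 5.0])     # hypothetical true values

MSE = np.mean((y_predict - y_real) ** 2)  # average of the squared errors
print(MSE)  # 0.02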

Linear Regression

Formula

y = wx + b

w: weight, the weight coefficient

b: bias, the bias term

x: the feature value

y: the predicted value

Gradient Descent

Gradient Descent is an optimization algorithm. The parameters are moved in the direction opposite to the gradient in order to minimize the loss function.


Update rules:

w' = w - lr \cdot \frac{\partial\, loss}{\partial w}

w: weight, the current weight parameter

w': the updated weight

lr: learning rate

dloss/dw: the derivative of the loss function with respect to w

b' = b - lr \cdot \frac{\partial\, loss}{\partial b}

b: bias, the current bias parameter

b': the updated bias

lr: learning rate

dloss/db: the derivative of the loss function with respect to b
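Substituting the MSE loss into these rules and applying the chain rule gives the two gradients; these are exactly the per-point terms that the implementation below accumulates:

\frac{\partial\, loss}{\partial w} = \frac{2}{N} \sum_{i=1}^{N} x_i \left( (w x_i + b) - y_i \right)

\frac{\partial\, loss}{\partial b} = \frac{2}{N} \sum_{i=1}^{N} \left( (w x_i + b) - y_i \right)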

Implementing Linear Regression

Computing the MSE


def calculate_MSE(w, b, points):
    """
    Compute the Mean Square Error (MSE).
    :param w: weight
    :param b: bias
    :param points: the data (first column x, second column y)
    :return: the MSE (Mean Square Error)
    """

    total_error = 0  # accumulated squared error, initialized to 0

    # Iterate over the data
    for i in range(len(points)):
        # Extract x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Accumulate the squared error
        total_error += (y - (w * x + b)) ** 2

    # Average the total error to obtain the MSE
    return total_error / len(points)
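A quick sanity check of calculate_MSE (the toy DataFrame here is invented for illustration): with data lying almost exactly on y = 2x, the parameters w = 2, b = 0 should give a near-zero error.


import pandas as pd

toy = pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [2.1, 3.9, 6.0]})  # roughly y = 2x
print(calculate_MSE(2.0, 0.0, toy))  # ≈ 0.0067, close to 0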

Gradient Descent


def step_gradient(index, w_current, b_current, points, learning_rate=0.0001):
    """
    Perform one gradient descent step and update the parameters.
    :param index: current iteration number
    :param w_current: current weight
    :param b_current: current bias
    :param points: the data
    :param learning_rate: lr, the learning rate (default: 0.0001)
    :return: list containing the updated parameters
    """

    b_gradient = 0  # gradient of the loss w.r.t. b, initialized to 0
    w_gradient = 0  # gradient of the loss w.r.t. w, initialized to 0
    N = len(points)  # number of data points

    # Iterate over the data
    for i in range(len(points)):
        # Extract x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Gradient w.r.t. w: dloss/dw = (2/N) x (wx + b - y)
        w_gradient += (2 / N) * x * ((w_current * x + b_current) - y)

        # Gradient w.r.t. b: dloss/db = (2/N) (wx + b - y)
        b_gradient += (2 / N) * ((w_current * x + b_current) - y)

    # Update w and b: step against the gradient, scaled by the learning rate
    w_new = w_current - (learning_rate * w_gradient)
    b_new = b_current - (learning_rate * b_gradient)

    # Print a progress report every 10 iterations
    if index % 10 == 0:
        print("This is the {}th iterations w = {}, b = {}, error = {}"
              .format(index, w_new, b_new,
                      calculate_MSE(w_new, b_new, points)))

    # Return the updated weight and bias
    return [w_new, b_new]
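For larger datasets, the per-row loop can be replaced by vectorized NumPy operations. The sketch below is not part of the original article; it is an equivalent step that should produce the same numbers up to floating-point rounding:


import numpy as np

def step_gradient_vectorized(w_current, b_current, points, learning_rate=0.0001):
    # Extract both columns as NumPy arrays
    x = points.iloc[:, 0].to_numpy()
    y = points.iloc[:, 1].to_numpy()

    error = (w_current * x + b_current) - y  # residuals for every point at once

    # Same gradients as the loop above, computed in one shot
    w_gradient = (2 / len(points)) * np.sum(x * error)
    b_gradient = (2 / len(points)) * np.sum(error)

    return [w_current - learning_rate * w_gradient,
            b_current - learning_rate * b_gradient]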

Training Loop


def runner(w_start, b_start, points, learning_rate, num_iterations):
    """
    Run the training loop.
    :param w_start: initial weight
    :param b_start: initial bias
    :param points: the data
    :param learning_rate: learning rate
    :param num_iterations: number of iterations
    :return: the trained weight and bias
    """

    # w_end and b_end hold the parameters as they are trained
    w_end = w_start
    b_end = b_start

    # Update the parameters once per iteration
    for i in range(1, num_iterations + 1):
        w_end, b_end = step_gradient(i, w_end, b_end, points, learning_rate)

    # Return the trained w and b
    return [w_end, b_end]

Main Function


def run():
    """
    Main function.
    :return: None
    """

    # Read the data
    data = pd.read_csv("data.csv")

    # Define the hyperparameters
    learning_rate = 0.00001  # learning rate
    w_initial = 0  # initial weight
    b_initial = 0  # initial bias
    num_iterations = 500  # number of iterations (matches the output shown below)

    # Print the initial error
    print("Starting gradient descent at w = {}, b = {}, error = {}"
          .format(w_initial, b_initial, calculate_MSE(w_initial, b_initial, data)))
    print("Running...")

    # Train the parameters
    w_end, b_end = runner(w_initial, b_initial, data, learning_rate, num_iterations)

    # Print the error after training
    print("\nAfter {} iterations w = {}, b = {}, error = {}"
          .format(num_iterations, w_end, b_end, calculate_MSE(w_end, b_end, data)))
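The code assumes data.csv is a two-column file with the feature x in the first column and the target y in the second. The original dataset is not provided with the article; a hypothetical stand-in with a similar slope (the final trained w is about 1.48) can be generated like this:


import numpy as np
import pandas as pd

# Hypothetical stand-in for data.csv: 100 points scattered around y = 1.5x
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=100)          # random feature values
y = 1.5 * x + rng.normal(0, 10, size=100)  # linear trend plus noise
pd.DataFrame({"x": x, "y": y}).to_csv("data.csv", index=False)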

Complete Code


import pandas as pd
import tensorflow as tf  # note: unused by this pure-pandas version; see the TensorFlow sketch after the code


def run():
    """
    Main function.
    :return: None
    """

    # Read the data
    data = pd.read_csv("data.csv")

    # Define the hyperparameters
    learning_rate = 0.00001  # learning rate
    w_initial = 0  # initial weight
    b_initial = 0  # initial bias
    num_iterations = 500  # number of iterations (matches the output shown below)

    # Print the initial error
    print("Starting gradient descent at w = {}, b = {}, error = {}"
          .format(w_initial, b_initial, calculate_MSE(w_initial, b_initial, data)))
    print("Running...")

    # Train the parameters
    w_end, b_end = runner(w_initial, b_initial, data, learning_rate, num_iterations)

    # Print the error after training
    print("\nAfter {} iterations w = {}, b = {}, error = {}"
          .format(num_iterations, w_end, b_end, calculate_MSE(w_end, b_end, data)))


def calculate_MSE(w, b, points):
    """
    Compute the Mean Square Error (MSE).
    :param w: weight
    :param b: bias
    :param points: the data (first column x, second column y)
    :return: the MSE (Mean Square Error)
    """

    total_error = 0  # accumulated squared error, initialized to 0

    # Iterate over the data
    for i in range(len(points)):
        # Extract x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Accumulate the squared error
        total_error += (y - (w * x + b)) ** 2

    # Average the total error to obtain the MSE
    return total_error / len(points)


def step_gradient(index, w_current, b_current, points, learning_rate=0.0001):
    """
    Perform one gradient descent step and update the parameters.
    :param index: current iteration number
    :param w_current: current weight
    :param b_current: current bias
    :param points: the data
    :param learning_rate: lr, the learning rate (default: 0.0001)
    :return: list containing the updated parameters
    """

    b_gradient = 0  # gradient of the loss w.r.t. b, initialized to 0
    w_gradient = 0  # gradient of the loss w.r.t. w, initialized to 0
    N = len(points)  # number of data points

    # Iterate over the data
    for i in range(len(points)):
        # Extract x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Gradient w.r.t. w: dloss/dw = (2/N) x (wx + b - y)
        w_gradient += (2 / N) * x * ((w_current * x + b_current) - y)

        # Gradient w.r.t. b: dloss/db = (2/N) (wx + b - y)
        b_gradient += (2 / N) * ((w_current * x + b_current) - y)

    # Update w and b: step against the gradient, scaled by the learning rate
    w_new = w_current - (learning_rate * w_gradient)
    b_new = b_current - (learning_rate * b_gradient)

    # Print a progress report every 10 iterations
    if index % 10 == 0:
        print("This is the {}th iterations w = {}, b = {}, error = {}"
              .format(index, w_new, b_new,
                      calculate_MSE(w_new, b_new, points)))

    # Return the updated weight and bias
    return [w_new, b_new]


def runner(w_start, b_start, points, learning_rate, num_iterations):
    """
    Run the training loop.
    :param w_start: initial weight
    :param b_start: initial bias
    :param points: the data
    :param learning_rate: learning rate
    :param num_iterations: number of iterations
    :return: the trained weight and bias
    """

    # w_end and b_end hold the parameters as they are trained
    w_end = w_start
    b_end = b_start

    # Update the parameters once per iteration
    for i in range(1, num_iterations + 1):
        w_end, b_end = step_gradient(i, w_end, b_end, points, learning_rate)

    # Return the trained w and b
    return [w_end, b_end]


if __name__ == "__main__":  # only run when executed directly as a script
    # Call the main function
    run()
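The implementation above only uses pandas; TensorFlow itself never appears in the training loop. For completeness, here is a minimal sketch of the same model trained with TensorFlow 2's automatic differentiation. The function name run_tf and its defaults are my own additions; only tf.Variable, tf.GradientTape, and standard tensor ops are used:


import pandas as pd
import tensorflow as tf


def run_tf(csv_path="data.csv", learning_rate=0.00001, num_iterations=500):
    # Load the same two-column data
    data = pd.read_csv(csv_path)
    x = tf.constant(data.iloc[:, 0], dtype=tf.float32)
    y = tf.constant(data.iloc[:, 1], dtype=tf.float32)

    # Trainable parameters, initialized to 0 like the manual version
    w = tf.Variable(0.0)
    b = tf.Variable(0.0)

    for i in range(1, num_iterations + 1):
        with tf.GradientTape() as tape:
            y_predict = w * x + b                        # linear model
            loss = tf.reduce_mean((y - y_predict) ** 2)  # MSE

        # TensorFlow computes dloss/dw and dloss/db for us
        dw, db = tape.gradient(loss, [w, b])
        w.assign_sub(learning_rate * dw)  # w' = w - lr * dloss/dw
        b.assign_sub(learning_rate * db)  # b' = b - lr * dloss/db

        # Progress report every 10 iterations, matching the manual loop
        if i % 10 == 0:
            print("iteration {}: w = {}, b = {}, error = {}"
                  .format(i, w.numpy(), b.numpy(), loss.numpy()))

    return w.numpy(), b.numpy()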

Output:

Starting gradient descent at w = 0, b = 0, error = 5611.166153823905
Running...
This is the 10th iterations w = 0.5954939346814911, b = 0.011748797759247776, error = 2077.4540105037636
This is the 20th iterations w = 0.9515563561471605, b = 0.018802975867006404, error = 814.0851271130122
This is the 30th iterations w = 1.1644557718428263, b = 0.023050105300353223, error = 362.4068500146176
This is the 40th iterations w = 1.291753898278705, b = 0.02561881917471017, error = 200.92329896151622
This is the 50th iterations w = 1.3678685455519075, b = 0.027183959773995233, error = 143.18984477036037
This is the 60th iterations w = 1.4133791147591803, b = 0.02814903475888354, error = 122.54901023376003
This is the 70th iterations w = 1.4405906232245687, b = 0.028755312994862656, error = 115.16948797045545
This is the 80th iterations w = 1.4568605956220553, b = 0.029147056093611835, error = 112.53113537539161
This is the 90th iterations w = 1.4665883081088924, b = 0.029410522232548166, error = 111.58784050644537
This is the 100th iterations w = 1.4724042147529013, b = 0.029597287663210802, error = 111.25056079777497
This is the 110th iterations w = 1.475881139890538, b = 0.029738191313600983, error = 111.12994295811941
This is the 120th iterations w = 1.477959520545057, b = 0.02985167266801462, error = 111.08678583026905
This is the 130th iterations w = 1.479201671130221, b = 0.029948757225817496, error = 111.07132237076124
This is the 140th iterations w = 1.4799438156483897, b = 0.03003603745100295, error = 111.06575992136905
This is the 150th iterations w = 1.480386992125614, b = 0.030117455167888288, error = 111.06373727064113
This is the 160th iterations w = 1.4806514069946144, b = 0.030195367306897165, error = 111.0629801653088
This is the 170th iterations w = 1.4808089351476725, b = 0.030271183144693698, error = 111.06267551686379
This is the 180th iterations w = 1.4809025526554018, b = 0.030345745328433527, error = 111.0625326308038
This is the 190th iterations w = 1.4809579561496398, b = 0.030419557701150367, error = 111.0624475783524
This is the 200th iterations w = 1.480990510387525, b = 0.030492921525124016, error = 111.06238320300855
This is the 210th iterations w = 1.4810094024003952, b = 0.030566016933760057, error = 111.06232622062124
This is the 220th iterations w = 1.4810201253791957, b = 0.030638951634017437, error = 111.0622718818556
This is the 230th iterations w = 1.4810259638611891, b = 0.030711790026994222, error = 111.06221848873447
This is the 240th iterations w = 1.481028881765914, b = 0.030784570619965538, error = 111.06216543419914
This is the 250th iterations w = 1.4810300533774932, b = 0.030857316437543122, error = 111.06211250121454
This is the 260th iterations w = 1.4810301808342632, b = 0.03093004124680784, error = 111.06205961218657
This is the 270th iterations w = 1.4810296839649824, b = 0.031002753279495907, error = 111.06200673937376
This is the 280th iterations w = 1.4810288137973704, b = 0.031075457457601333, error = 111.06195387285815
This is the 290th iterations w = 1.48102772042814, b = 0.031148156724127858, error = 111.06190100909376
This is the 300th iterations w = 1.4810264936044433, b = 0.03122085283878386, error = 111.06184814681296
This is the 310th iterations w = 1.4810251869886903, b = 0.0312935468537513, error = 111.06179528556238
This is the 320th iterations w = 1.4810238326671836, b = 0.031366239398161695, error = 111.0617424251801
This is the 330th iterations w = 1.4810224498252484, b = 0.031438930848192506, error = 111.06168956560795
This is the 340th iterations w = 1.481021049934344, b = 0.03151162142877266, error = 111.06163670682551
This is the 350th iterations w = 1.4810196398535866, b = 0.03158431127439525, error = 111.06158384882504
This is the 360th iterations w = 1.4810182236842395, b = 0.03165700046547913, error = 111.0615309916041
This is the 370th iterations w = 1.4810168038785667, b = 0.031729689050110664, error = 111.06147813516172
This is the 380th iterations w = 1.4810153819028469, b = 0.03180237705704362, error = 111.06142527949757
This is the 390th iterations w = 1.48101395863381, b = 0.03187506450347233, error = 111.06137242461139
This is the 400th iterations w = 1.48101253459568, b = 0.03194775139967933, error = 111.06131957050317
This is the 410th iterations w = 1.4810111101019028, b = 0.03202043775181446, error = 111.06126671717288
This is the 420th iterations w = 1.4810096853398989, b = 0.032093123563556446, error = 111.06121386462064
This is the 430th iterations w = 1.4810082604217312, b = 0.032165808837106485, error = 111.06116101284626
This is the 440th iterations w = 1.481006835414406, b = 0.03223849357378233, error = 111.06110816184975
This is the 450th iterations w = 1.4810054103579875, b = 0.03231117777437349, error = 111.06105531163115
This is the 460th iterations w = 1.4810039852764323, b = 0.0323838614393536, error = 111.06100246219052
This is the 470th iterations w = 1.4810025601840635, b = 0.032456544569007456, error = 111.0609496135277
This is the 480th iterations w = 1.4810011350894463, b = 0.03252922716350693, error = 111.06089676564281
This is the 490th iterations w = 1.4809997099977015, b = 0.032601909222956374, error = 111.06084391853577
This is the 500th iterations w = 1.4809982849118903, b = 0.032674590747419754, error = 111.0607910722065

After 500 iterations w = 1.4809982849118903, b = 0.032674590747419754, error = 111.0607910722065

This concludes this detailed walkthrough of implementing linear regression with TensorFlow 2.
