PyTorch Tensor Basics and Operations

Gain a deep understanding of PyTorch's core concept, the tensor, including creation, operations, conversion, and mathematical computation, and master the basics of working with tensors.

1. Tensor Concepts

A tensor (Tensor) is the most basic data structure in PyTorch. It is similar to a NumPy ndarray, but adds GPU acceleration and automatic differentiation. A tensor can be seen as the generalization of scalars, vectors, and matrices to higher dimensions:

  • Scalar (0-D tensor): a single number, e.g. 5
  • Vector (1-D tensor): a one-dimensional array of numbers, e.g. [1, 2, 3]
  • Matrix (2-D tensor): a two-dimensional array of numbers, e.g. [[1, 2], [3, 4]]
  • Higher-dimensional tensor: an array of three or more dimensions

2. Creating Tensors

PyTorch provides many ways to create tensors: from Python lists, from NumPy arrays, or with built-in factory functions.

2.1 Creating from Python Lists

import torch

# Create a scalar
s = torch.tensor(5)
print("Scalar:", s)
print("Scalar ndim:", s.ndim)
print("Scalar shape:", s.shape)

# Create a vector
v = torch.tensor([1, 2, 3])
print("\nVector:", v)
print("Vector ndim:", v.ndim)
print("Vector shape:", v.shape)

# Create a matrix
m = torch.tensor([[1, 2], [3, 4]])
print("\nMatrix:", m)
print("Matrix ndim:", m.ndim)
print("Matrix shape:", m.shape)

# Create a 3-D tensor
t = torch.tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
print("\n3-D tensor:", t)
print("3-D tensor ndim:", t.ndim)
print("3-D tensor shape:", t.shape)

2.2 Creating from NumPy Arrays

import numpy as np
import torch

# Create a NumPy array
np_array = np.array([[1, 2], [3, 4]])
print("NumPy array:", np_array)

# Create a tensor from the NumPy array
tensor_from_np = torch.from_numpy(np_array)
print("\nTensor from NumPy:", tensor_from_np)
print("Tensor shape:", tensor_from_np.shape)

# Create a NumPy array from the tensor
np_array_from_tensor = tensor_from_np.numpy()
print("\nNumPy array from tensor:", np_array_from_tensor)
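Note that `torch.from_numpy` and `.numpy()` do not copy data: the tensor and the array share the same underlying memory, so a write on one side is visible on the other. A minimal sketch verifying this (variable names are illustrative):

```python
import numpy as np
import torch

np_array = np.array([[1, 2], [3, 4]])
shared = torch.from_numpy(np_array)

# Writing through the NumPy side is visible from the tensor
np_array[0, 0] = 100
print(shared[0, 0])   # tensor(100)

# For an independent copy, clone the tensor
# (or use torch.tensor(np_array), which always copies)
independent = torch.from_numpy(np_array).clone()
np_array[0, 0] = -1
print(independent[0, 0])   # still tensor(100)
```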

2.3 Creating with Built-in Functions

import torch

# Tensor of all zeros
zeros = torch.zeros((2, 3))
print("All-zeros tensor:", zeros)

# Tensor of all ones
ones = torch.ones((2, 3))
print("\nAll-ones tensor:", ones)

# Identity matrix
eye = torch.eye(3)
print("\nIdentity matrix:", eye)

# Random tensor (uniform distribution)
rand = torch.rand((2, 3))
print("\nUniform random tensor:", rand)

# Random tensor (normal distribution)
randn = torch.randn((2, 3))
print("\nNormal random tensor:", randn)

# Tensor over a given range
arange = torch.arange(0, 10, step=2)
print("\nRange tensor:", arange)

# Linearly spaced tensor
linspace = torch.linspace(0, 1, steps=5)
print("\nLinearly spaced tensor:", linspace)
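Beyond the constructors above, PyTorch also offers `torch.full` for constant-filled tensors and `*_like` variants that reuse an existing tensor's shape and dtype. A brief sketch:

```python
import torch

# torch.full fills a new tensor with a constant value
full = torch.full((2, 3), 7.0)
print(full)

# *_like variants reuse the shape (and dtype) of an existing tensor
template = torch.zeros((2, 3))
ones_like = torch.ones_like(template)
rand_like = torch.rand_like(template)
print(ones_like.shape, rand_like.shape)
```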

3. Basic Tensor Operations

PyTorch tensors support many basic operations, including arithmetic, indexing and slicing, and shape manipulation.

3.1 Arithmetic Operations

import torch

# Create two tensors
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# Addition
print("Addition:", a + b)
print("Addition:", torch.add(a, b))

# Subtraction
print("\nSubtraction:", a - b)
print("Subtraction:", torch.sub(a, b))

# Multiplication (element-wise)
print("\nMultiplication:", a * b)
print("Multiplication:", torch.mul(a, b))

# Division (element-wise)
print("\nDivision:", a / b)
print("Division:", torch.div(a, b))

# Matrix multiplication
c = torch.tensor([[1, 2], [3, 4]])
d = torch.tensor([[5, 6], [7, 8]])
print("\nMatrix multiplication:", torch.matmul(c, d))
print("Matrix multiplication:", c @ d)
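Each of these operations also has an in-place variant, named with a trailing underscore, that modifies the tensor directly instead of allocating a new result. A minimal sketch:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# In-place variants end with an underscore and write the result into `a`
a.add_(b)     # a becomes [5, 7, 9]
print(a)

a.mul_(2)     # a becomes [10, 14, 18]
print(a)
```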

3.2 Indexing and Slicing

import torch

# Create a tensor
t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print("Original tensor:", t)

# Access a single element
print("\nSingle element:", t[0, 0])

# Slicing
print("\nSlicing - first row:", t[0, :])
print("Slicing - first column:", t[:, 0])
print("Slicing - submatrix:", t[0:2, 0:2])

# Advanced (fancy) indexing
print("\nAdvanced indexing:", t[[0, 2], [1, 2]])

# Boolean mask indexing
mask = t > 5
print("\nMask:", mask)
print("Mask indexing:", t[mask])
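Boolean masks can do more than extract elements: `torch.where` selects element-wise between two values, and `nonzero` recovers the indices where a mask holds. A short sketch:

```python
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# torch.where keeps t where the condition holds, otherwise substitutes 0
clipped = torch.where(t > 5, t, torch.tensor(0))
print(clipped)

# nonzero returns the (row, column) indices of elements matching a mask
indices = (t > 5).nonzero()
print(indices)
```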

3.3 Shape Operations

import torch

# Create a tensor
t = torch.tensor([[1, 2, 3], [4, 5, 6]])
print("Original tensor:", t)
print("Original shape:", t.shape)

# reshape - change the shape
reshaped = t.reshape(3, 2)
print("\nAfter reshape:", reshaped)
print("Shape after reshape:", reshaped.shape)

# view - change the view (shares memory with the original)
viewed = t.view(3, 2)
print("\nAfter view:", viewed)
print("Shape after view:", viewed.shape)

# squeeze - remove dimensions of size 1
squeezed = torch.tensor([[1, 2, 3]]).squeeze()
print("\nAfter squeeze:", squeezed)
print("Shape after squeeze:", squeezed.shape)

# unsqueeze - insert a dimension of size 1
unsqueezed = squeezed.unsqueeze(0)
print("\nAfter unsqueeze:", unsqueezed)
print("Shape after unsqueeze:", unsqueezed.shape)

# transpose - swap two dimensions
transposed = t.transpose(0, 1)
print("\nAfter transpose:", transposed)
print("Shape after transpose:", transposed.shape)

# permute - reorder all dimensions
permuted = t.permute(1, 0)
print("\nAfter permute:", permuted)
print("Shape after permute:", permuted.shape)
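The difference between `view` and `reshape` matters in practice: `view` always returns a window over the same storage and requires contiguous memory, while `reshape` falls back to copying when needed. A minimal sketch:

```python
import torch

t = torch.tensor([[1, 2, 3], [4, 5, 6]])

# view shares storage: writing through the view changes the original
v = t.view(3, 2)
v[0, 0] = 100
print(t[0, 0])   # tensor(100)

# After transpose the tensor is non-contiguous, so view would fail;
# reshape (or .contiguous().view(...)) copies when necessary
tt = t.transpose(0, 1)
r = tt.reshape(6)
print(r)
```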

3.4 Broadcasting

PyTorch's broadcasting mechanism allows arithmetic between tensors of different shapes, following rules similar to NumPy broadcasting.

import torch

# Create two tensors of different shapes
a = torch.tensor([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
b = torch.tensor([10, 20, 30])  # shape (3,)

# Broadcast addition
result = a + b
print("Broadcast addition result:", result)
print("Result shape:", result.shape)
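Broadcasting aligns shapes from the trailing dimension, so a `(2,)` tensor cannot broadcast against `(2, 3)` along the rows directly; `unsqueeze` turns it into a `(2, 1)` column that does. A sketch of this common pattern:

```python
import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)

# unsqueeze makes a (2, 1) column, which broadcasts across columns
col = torch.tensor([10, 20]).unsqueeze(1)  # shape (2, 1)
result = a + col
print(result)   # each row is shifted by its own offset
```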

4. Tensor Data Types and Devices

PyTorch tensors support many data types and can be moved between devices (CPU/GPU).

4.1 Data Types

import torch

# Create tensors of different data types
t_float32 = torch.tensor([1, 2, 3], dtype=torch.float32)
print("float32 tensor:", t_float32)
print("dtype:", t_float32.dtype)

t_int64 = torch.tensor([1, 2, 3], dtype=torch.int64)
print("\nint64 tensor:", t_int64)
print("dtype:", t_int64.dtype)

# Type conversion
t_converted = t_float32.to(torch.int64)
print("\nConverted tensor:", t_converted)
print("dtype after conversion:", t_converted.dtype)
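Besides `.to(dtype)`, tensors have shortcut conversion methods for the most common dtypes. A brief sketch:

```python
import torch

t = torch.tensor([1.7, 2.3, 3.9])

# Shortcut conversion methods
print(t.int())      # truncates toward zero -> int32
print(t.long())     # int64
print(t.double())   # float64

# Note: dividing integer tensors with / promotes to the default float dtype
i = torch.tensor([1, 2, 3])
print((i / 2).dtype)
```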

4.2 Device Management

import torch

# Check whether a GPU is available
print("GPU available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())

# Create a tensor on the CPU
t_cpu = torch.tensor([1, 2, 3])
print("\nCPU tensor device:", t_cpu.device)

# If a GPU is available, move the tensor to it
if torch.cuda.is_available():
    t_gpu = t_cpu.to('cuda')
    print("GPU tensor device:", t_gpu.device)

    # Move the tensor back to the CPU
    t_back_to_cpu = t_gpu.to('cpu')
    print("Device after moving back to CPU:", t_back_to_cpu.device)

Tip

In PyTorch, the `.to()` method converts tensors between devices and data types. When working with large models and datasets, using the GPU appropriately can speed up computation significantly.
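A common pattern is to pick the device once and pass it everywhere; `.to()` can also change device and dtype in a single call. A minimal sketch:

```python
import torch

# Select the device once, then move tensors (and models) with .to()
device = 'cuda' if torch.cuda.is_available() else 'cpu'

t = torch.tensor([1.0, 2.0, 3.0])
# One call changes both device and dtype
t2 = t.to(device=device, dtype=torch.float16)
print(t2.device, t2.dtype)
```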

Hands-on Exercises

Exercise 1: Creating and Manipulating Tensors

Create a random tensor of shape (3, 4), then perform the following operations:

  1. Compute the tensor's mean and standard deviation
  2. Reshape the tensor to (4, 3)
  3. Extract the elements greater than 0.5
  4. Add (with broadcasting) an all-ones tensor of shape (4,)
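One possible solution sketch (variable names are illustrative):

```python
import torch

t = torch.rand((3, 4))

# 1. Mean and standard deviation
print(t.mean(), t.std())

# 2. Reshape to (4, 3)
reshaped = t.reshape(4, 3)
print(reshaped.shape)

# 3. Elements greater than 0.5 (boolean mask indexing)
print(t[t > 0.5])

# 4. Broadcast addition with a (4,) all-ones tensor
result = t + torch.ones(4)
print(result.shape)
```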

Exercise 2: Matrix Operations

Create two random tensors of shapes (2, 3) and (3, 4), perform a matrix multiplication, and then compute the transpose of the result.
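One possible solution sketch:

```python
import torch

a = torch.rand((2, 3))
b = torch.rand((3, 4))

product = a @ b          # matrix multiplication -> shape (2, 4)
transposed = product.T   # transpose -> shape (4, 2)
print(product.shape, transposed.shape)
```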

Exercise 3: Device Management

Create a large tensor (shape (1000, 1000)) and measure the time needed to perform a matrix multiplication on the CPU and, if available, on the GPU.
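One possible solution sketch; note that CUDA kernels run asynchronously, so `torch.cuda.synchronize()` is needed for a fair GPU measurement:

```python
import time
import torch

t = torch.rand((1000, 1000))

# CPU timing
start = time.perf_counter()
_ = t @ t
cpu_time = time.perf_counter() - start
print(f"CPU: {cpu_time:.4f}s")

# GPU timing (if available); synchronize so the async kernel is included
if torch.cuda.is_available():
    t_gpu = t.to('cuda')
    torch.cuda.synchronize()
    start = time.perf_counter()
    _ = t_gpu @ t_gpu
    torch.cuda.synchronize()
    gpu_time = time.perf_counter() - start
    print(f"GPU: {gpu_time:.4f}s")
```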

5. Summary

This tutorial introduced the basic concepts and operations of PyTorch tensors, including:

  • Ways to create tensors (from Python lists, from NumPy arrays, with built-in functions)
  • Basic tensor operations (arithmetic, indexing and slicing, shape manipulation)
  • The broadcasting mechanism
  • Tensor data types and device management

Tensors are PyTorch's core data structure, and mastering basic tensor operations is the foundation of deep learning with PyTorch. In later tutorials, we will use tensors to build and train neural network models.