Machine Learning Basics

Understand the basic concepts, types, workflow, algorithms, and applications of machine learning.

1. What is Machine Learning?

Machine learning (ML) is a branch of artificial intelligence that enables computers to learn from data without being explicitly programmed. Its core idea is to let computers improve their own performance automatically by learning patterns in data.

Tip

The difference between machine learning and traditional programming: in traditional programming, humans write the rules and the computer executes them; in machine learning, the computer learns the rules from data and then executes them.

1.1 Definition of Machine Learning

According to Tom Mitchell's definition: "A computer program is said to learn from experience E with respect to some task T and performance measure P, if its performance at T, as measured by P, improves with experience E."

  • Task T: e.g., classification, regression, clustering.
  • Performance P: e.g., accuracy, precision, recall.
  • Experience E: e.g., training data.

1.2 Why Machine Learning Matters

The importance of machine learning shows in several ways:

  • Handling complex problems: for complex problems that are hard to solve with traditional programming, machine learning offers a new approach.
  • Automatic adaptation: machine learning models can adapt automatically to new data and environments.
  • Discovering hidden patterns: machine learning can uncover patterns in large amounts of data that humans would struggle to notice.
  • Improving efficiency: automating decisions and predictions improves productivity.
  • Driving innovation: it drives innovation in healthcare, finance, transportation, and other fields.

2. Types of Machine Learning

Based on the learning style and the type of data, machine learning can be divided into the following types:

2.1 Supervised Learning

Supervised learning is a type of machine learning that learns from labeled data, i.e., data that comes with the correct answers.

  • Classification: predict discrete class labels, e.g., spam detection, image classification.
  • Regression: predict continuous values, e.g., house price prediction, stock price prediction.

Common supervised learning algorithms include: linear regression, logistic regression, decision trees, random forests, support vector machines, and neural networks.
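The split between labeled training and prediction can be made concrete with a short sketch (assuming scikit-learn is installed; the dataset here is synthetic, not from the text):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small synthetic labeled dataset: 200 samples, 2 classes
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Supervised learning: fit on labeled training data, then predict new labels
clf = LogisticRegression().fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```

The same pattern (fit on labeled data, then predict) applies to all the supervised algorithms listed above.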

2.2 Unsupervised Learning

Unsupervised learning is a type of machine learning that learns from unlabeled data, i.e., data without correct answers.

  • Clustering: group similar data points, e.g., customer segmentation, anomaly detection.
  • Dimensionality reduction: reduce the number of dimensions of the data, e.g., principal component analysis (PCA), t-SNE.
  • Association rule learning: discover association rules in data, e.g., market basket analysis.

Common unsupervised learning algorithms include: k-means clustering, hierarchical clustering, DBSCAN, PCA, and association rules.
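As a minimal sketch of clustering without labels (assuming scikit-learn; the two-blob data is invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two well-separated blobs around (0, 0) and (5, 5)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

# k-means recovers the grouping without ever seeing labels
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster centers:\n", km.cluster_centers_.round(2))
```

The fitted cluster centers land near the two blob centers, even though no label was ever provided.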

2.3 Semi-Supervised Learning

Semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data. It sits between supervised and unsupervised learning.

Semi-supervised learning suits scenarios where labeled data is expensive to obtain, such as medical image analysis and natural language processing.

2.4 Reinforcement Learning

Reinforcement learning is a type of machine learning that learns by interacting with an environment and receiving feedback. An agent takes actions in the environment, receives rewards or penalties, and thereby learns an optimal policy.

  • State: the current state of the environment.
  • Action: the action the agent takes.
  • Reward: the environment's feedback for an action.
  • Policy: the mapping from states to actions.

Common reinforcement learning algorithms include: Q-learning, SARSA, policy gradients, and deep Q-networks (DQN).
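To make the agent/state/action/reward loop concrete, here is a hedged sketch of tabular Q-learning on a hypothetical 5-state corridor (the environment is invented for illustration): the agent starts at state 0 and earns a reward of 1 for reaching state 4.

```python
import numpy as np

# Hypothetical corridor: states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))       # the state-action value table
alpha, gamma, epsilon = 0.5, 0.5, 0.5     # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != 4:
        # Epsilon-greedy policy: explore randomly, otherwise act greedily
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The greedy policy learned from the rewards should move right everywhere
print("Greedy action per state:", Q[:4].argmax(axis=1))
```

After training, acting greedily with respect to the learned Q-table always moves the agent toward the rewarding state.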

3. The Basic Machine Learning Workflow

A machine learning project typically involves the following steps:

3.1 Problem Definition

Clarify the problem type and the goal, for example:

  • Is this a classification problem or a regression problem?
  • What is our goal? Prediction, clustering, or something else?
  • Which performance metrics do we need to evaluate the model?

3.2 Data Collection

Collect data relevant to the problem. Data can come from:

  • Public datasets
  • Data you collect yourself
  • Third-party data providers
  • Sensors or log data

3.3 Data Preprocessing

Data preprocessing is a key step in the machine learning workflow, including:

  • Data cleaning: handle missing values, outliers, and duplicates.
  • Feature selection: choose features relevant to the problem.
  • Feature extraction: extract meaningful features from raw data.
  • Feature transformation: e.g., standardization, normalization, encoding.
  • Data splitting: split the data into training, validation, and test sets.
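The splitting and scaling steps above can be sketched as follows (assuming scikit-learn; the tiny array is a made-up stand-in for real data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.arange(20, dtype=float).reshape(10, 2)   # 10 samples, 2 features
y = np.arange(10)

# Split into training / validation / test sets (60% / 20% / 20%)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Standardize: fit the scaler on the training set only, then apply it everywhere,
# so no information from the validation or test data leaks into preprocessing
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)
print("Split sizes:", len(X_train), len(X_val), len(X_test))
```

Fitting the scaler only on the training split is the standard way to avoid data leakage.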

3.4 Model Selection

Choose a suitable model based on the problem type and the characteristics of the data:

  • For linear problems, consider linear regression or logistic regression.
  • For nonlinear problems, consider decision trees, random forests, or neural networks.
  • For high-dimensional data, consider support vector machines or deep learning.

3.5 Model Training

Train the model on the training data:

  • Set the model's parameters (hyperparameters).
  • Use an optimization algorithm to minimize the loss function.
  • Monitor the training process to prevent overfitting.

3.6 Model Evaluation

Evaluate model performance on the validation set:

  • Compute performance metrics such as accuracy, precision, recall, F1 score, and mean squared error.
  • Analyze the model's strengths and weaknesses.

3.7 Model Tuning

Adjust the model based on the evaluation results:

  • Tune hyperparameters such as the learning rate, regularization strength, and tree depth.
  • Try different feature combinations.
  • Try different model architectures.
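Hyperparameter tuning of this kind is often automated with a grid search; a minimal sketch (assuming scikit-learn, tuning a decision tree's depth on the iris dataset):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Try several tree depths, scoring each candidate with 5-fold cross-validation
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [1, 2, 3, 5, None]},
    cv=5,
)
grid.fit(X, y)
print("Best max_depth:", grid.best_params_["max_depth"])
print(f"Best cross-validated accuracy: {grid.best_score_:.3f}")
```

Cross-validation inside the search keeps the test set untouched for the final evaluation.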

3.8 Model Testing

Evaluate the final model on the test set to ensure it performs well on unseen data.

3.9 Model Deployment

Deploy the trained model to a production environment for real-world use.

4. Feature Engineering

Feature engineering is a crucial step in machine learning: extracting meaningful features from raw data to improve model performance.

4.1 Feature Types

  • Numerical features: continuous values, e.g., age, income.
  • Categorical features: discrete categories, e.g., gender, occupation.
  • Temporal features: time-related features, e.g., dates, timestamps.
  • Text features: text data, e.g., reviews, news articles.
  • Image features: features of image data.

4.2 Feature Processing Techniques

  • Missing value handling: deletion, or imputation (mean, median, mode, etc.).
  • Outlier handling: deletion, transformation, binning, etc.
  • Feature scaling: standardization, normalization, regularization, etc.
  • Categorical encoding: one-hot encoding, label encoding, target encoding, etc.
  • Feature combination: create new combined features.
  • Feature selection: filter, wrapper, and embedded methods.
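As one example of these techniques, categorical encoding can be sketched with scikit-learn's `OneHotEncoder` (the color column is invented for illustration):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# One categorical feature with three distinct categories
colors = np.array([["red"], ["green"], ["blue"], ["green"]])

# One-hot encoding turns each category into its own 0/1 column;
# the learned categories are stored in sorted order
enc = OneHotEncoder()
onehot = enc.fit_transform(colors).toarray()
print("Categories:", list(enc.categories_[0]))
print(onehot)
```

Each row of the result has exactly one 1, marking that sample's category.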

5. Model Evaluation Metrics

Different types of machine learning tasks require different evaluation metrics:

5.1 Classification Metrics

  • Accuracy: the proportion of correctly predicted samples among all samples.
  • Precision: the proportion of samples predicted positive that are actually positive.
  • Recall: the proportion of actual positives that are correctly predicted.
  • F1 score: the harmonic mean of precision and recall.
  • Confusion matrix: a comparison of predicted versus actual results.
  • ROC curve: describes classifier performance at different thresholds.
  • AUC: the area under the ROC curve, measuring the model's ability to separate positives from negatives.
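Precision, recall, and F1 follow directly from the confusion-matrix counts; a small worked sketch on made-up predictions:

```python
# Toy labels and predictions (1 = positive class); the values are invented
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Confusion-matrix counts
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)                          # of predicted positives, how many are real
recall = tp / (tp + fn)                             # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(precision, recall, f1)  # 0.75 0.75 0.75
```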

5.2 Regression Metrics

  • Mean squared error (MSE): the average of the squared differences between predicted and actual values.
  • Root mean squared error (RMSE): the square root of MSE.
  • Mean absolute error (MAE): the average of the absolute differences between predicted and actual values.
  • R² score: the proportion of variance in the target variable explained by the model.
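These regression metrics are easy to compute by hand; a small worked sketch on invented numbers:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])   # invented actual values
y_pred = np.array([2.5, 5.0, 7.5, 10.0])  # invented predictions

err = y_pred - y_true
mse = np.mean(err ** 2)   # mean squared error
rmse = np.sqrt(mse)       # root mean squared error
mae = np.mean(np.abs(err))  # mean absolute error
# R²: 1 minus the ratio of residual variance to total variance
r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(mse, mae, r2)  # 0.375 0.5 0.925
```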

5.3 Clustering Metrics

  • Silhouette coefficient: a measure of clustering quality.
  • Davies-Bouldin index: a measure of cluster separation and compactness.
  • Adjusted Rand index: the agreement between clustering results and true labels.

6. Common Machine Learning Algorithms

6.1 Linear Regression

Linear regression is a supervised learning algorithm for predicting continuous values. It assumes a linear relationship between the features and the target.

6.2 Logistic Regression

Logistic regression is a supervised learning algorithm for classification. It uses the sigmoid function to map the output of a linear model into the interval [0, 1].
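The sigmoid squashing can be sketched in a few lines (a minimal illustration, not scikit-learn's internal implementation):

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued score into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# The linear score w·x + b becomes a probability of the positive class
print(sigmoid(0.0))   # 0.5 -> exactly on the decision boundary
print(sigmoid(4.0))   # close to 1 -> confident positive prediction
print(sigmoid(-4.0))  # close to 0 -> confident negative prediction
```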

6.3 Decision Trees

A decision tree is a supervised learning algorithm based on a tree structure. It classifies data or performs regression through a sequence of decision rules.

6.4 Random Forests

A random forest is an ensemble learning algorithm composed of many decision trees, improving predictions by voting or averaging.

6.5 Support Vector Machines

A support vector machine (SVM) is a supervised learning algorithm for classification and regression. It separates classes by finding an optimal hyperplane.

6.6 k-Nearest Neighbors

The k-nearest neighbors algorithm (k-NN) is a distance-based supervised learning algorithm. It predicts the class of a new sample from the classes of its k nearest samples.
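The k-NN rule is simple enough to sketch from scratch (a minimal illustration on made-up points, not a production implementation):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict the label of x by majority vote among its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]              # indices of the k closest points
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

# Two made-up clusters: class 0 near the origin, class 1 near (5, 5)
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5])))  # 1
```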

6.7 Naive Bayes

Naive Bayes is a supervised learning algorithm based on Bayes' theorem. It assumes that the features are mutually independent.

6.8 Neural Networks

A neural network is a supervised learning algorithm inspired by the structure of the human brain. It consists of many neurons and learns through forward and backward propagation.

7. Code Example: Linear Regression

Below is a simple example of linear regression using Python and the scikit-learn library:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Generate synthetic data
np.random.seed(42)
x = np.random.rand(100, 1) * 10
y = 2 * x + 1 + np.random.randn(100, 1) * 2

# Split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

# Create the linear regression model
model = LinearRegression()

# Train the model
model.fit(x_train, y_train)

# Predict on the test set
y_pred = model.predict(x_test)

# Compute evaluation metrics
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_test, y_pred)

print(f"Mean squared error (MSE): {mse:.4f}")
print(f"Root mean squared error (RMSE): {rmse:.4f}")
print(f"R² score: {r2:.4f}")
print(f"Model coefficient: {model.coef_[0][0]:.4f}")
print(f"Model intercept: {model.intercept_[0]:.4f}")

# Visualize the results
plt.figure(figsize=(10, 6))
plt.scatter(x_train, y_train, color='blue', label='training data')
plt.scatter(x_test, y_test, color='green', label='test data')
plt.plot(x, model.predict(x), color='red', linewidth=2, label='regression line')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Linear regression example')
plt.legend()
plt.show()

8. Hands-On Example: Iris Classification

Below is an example of classifying the iris dataset using Python and the scikit-learn library:

8.1 Loading the Data

from sklearn.datasets import load_iris

# Load the iris dataset
iris = load_iris()

# Inspect the dataset
print("Feature names:", iris.feature_names)
print("Target classes:", iris.target_names)
print("Data shape:", iris.data.shape)
print("Target shape:", iris.target.shape)

8.2 Data Preprocessing and Model Training

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42
)

# Create a random forest classifier
model = RandomForestClassifier(n_estimators=100, random_state=42)

# Train the model
model.fit(x_train, y_train)

# Predict on the test set
y_pred = model.predict(x_test)

# Compute accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.4f}")

# Generate a classification report
print("\nClassification report:")
print(classification_report(y_test, y_pred, target_names=iris.target_names))

# Generate the confusion matrix
print("\nConfusion matrix:")
print(confusion_matrix(y_test, y_pred))

# Inspect feature importances
feature_importance = model.feature_importances_
print("\nFeature importances:")
for feature, importance in zip(iris.feature_names, feature_importance):
    print(f"{feature}: {importance:.4f}")

9. Interactive Exercises

Exercise 1: Linear Regression Practice

  1. Copy the linear regression code above.
  2. Modify the code to generate data with a different random seed.
  3. Try different test set proportions (e.g., 0.3, 0.4).
  4. Compare model performance across the different test set proportions.
  5. Think about how you could improve model performance.

Exercise 2: Comparing Classification Algorithms

  1. Use the iris dataset.
  2. Try different classification algorithms, such as logistic regression, decision trees, and support vector machines.
  3. Compare the accuracy, precision, recall, and F1 score of each algorithm.
  4. Analyze the pros and cons of each algorithm.
  5. Think about which algorithm to choose under which circumstances.

Exercise 3: Feature Engineering Practice

  1. Choose a public dataset (e.g., the Boston housing dataset or the Titanic dataset).
  2. Preprocess the data, including handling missing values and outliers.
  3. Try different feature engineering methods, such as feature selection, feature combination, and feature transformation.
  4. Train a model on the processed data and compare the effect of each feature engineering method.
  5. Summarize the impact of feature engineering on model performance.