Introduction to Applied Linear Algebra

Author: Stephen Boyd

Publisher: Cambridge University Press

Publication date: 2018-6-7

Rating: 8.8

ISBN: 9781316518960

Category: Recommended Industry Books

About the Book

Synopsis

This book is meant to provide an introduction to vectors, matrices, and least squares methods, basic topics in applied linear algebra. Our goal is to give the beginning student, with little or no prior exposure to linear algebra, a good grounding in the basic ideas, as well as an appreciation for how they are used in many applications, including data fitting, machine learning and artificial intelligence, tomography, navigation, image processing, finance, and automatic control systems.

The background required of the reader is familiarity with basic mathematical notation. We use calculus in just a few places, but it does not play a critical role and is not a strict prerequisite. Even though the book covers many topics that are traditionally taught as part of probability and statistics, such as fitting mathematical models to data, no knowledge of or background in probability and statistics is needed.

The book covers less mathematics than a typical text on applied linear algebra. We use only one theoretical concept from linear algebra, linear independence, and only one computational tool, the QR factorization; our approach to most applications relies on only one method, least squares (or some extension). In this sense we aim for intellectual economy: With just a few basic mathematical ideas, concepts, and methods, we cover many applications. The mathematics we do present, however, is complete, in that we carefully justify every mathematical statement. In contrast to most introductory linear algebra texts, however, we describe many applications, including some that are typically considered advanced topics, like document classification, control, state estimation, and portfolio optimization.
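As an illustration of this economy, here is a minimal NumPy sketch of the book's one computational tool in action: computing a least squares approximate solution via the QR factorization. The matrix and right-hand side are made up for illustration, and NumPy itself is not assumed by the book.

```python
import numpy as np

# Least squares via the QR factorization: factor A = QR with Q'Q = I and
# R upper triangular; the approximate solution of Ax = b is x = R^{-1} Q'b.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # tall matrix with independent columns
b = np.array([1.0, 2.0, 4.0])

Q, R = np.linalg.qr(A)              # reduced QR factorization
x = np.linalg.solve(R, Q.T @ b)     # solve the triangular system R x = Q'b

print(x)                            # least squares approximate solution
print(np.linalg.norm(A @ x - b))    # residual norm ||Ax - b||
```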

The book does not require any knowledge of computer programming, and can be used as a conventional textbook, by reading the chapters and working the exercises that do not involve numerical computation. This approach, however, misses out on one of the most compelling reasons to learn the material: You can use the ideas and methods described in this book to do practical things like build a prediction model from data, enhance images, or optimize an investment portfolio. The growing power of computers, together with the development of high-level computer languages and packages that support vector and matrix computation, has made it easy to use the methods described in this book for real applications. For this reason we hope that every student of this book will complement their study with computer programming exercises and projects, including some that involve real data. This book includes some generic exercises that require computation; additional ones, and the associated data files and language-specific resources, are available online.
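For example, here is a tiny sketch of the first of those practical tasks, building a prediction model from data by least squares. The data points below are invented purely for illustration.

```python
import numpy as np

# Fit a straight-line model yhat = theta[0] + theta[1]*x to data
# by least squares.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

A = np.column_stack([np.ones_like(x), x])      # feature matrix [1, x]
theta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least squares coefficients

print(theta)      # fitted intercept and slope
print(A @ theta)  # the model's predictions at the data points
```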

If you read the whole book, work some of the exercises, and carry out computer exercises to implement or use the ideas and methods, you will learn a lot. While there will still be much for you to learn, you will have seen many of the basic ideas behind modern data science and other application areas. We hope you will be empowered to use the methods for your own applications.

The book is divided into three parts. Part I introduces the reader to vectors, and various vector operations and functions like addition, inner product, distance, and angle. We also describe how vectors are used in applications to represent word counts in a document, time series, attributes of a patient, sales of a product, an audio track, an image, or a portfolio of investments. Part II does the same for matrices, culminating with matrix inverses and methods for solving linear equations. Part III, on least squares, is the payoff, at least in terms of the applications. We show how the simple and natural idea of approximately solving a set of over-determined equations, and a few extensions of this basic idea, can be used to solve many practical problems.
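To make the part I operations concrete, here is a small sketch of the vector operations named above, computed on two arbitrarily chosen vectors.

```python
import numpy as np

# Basic vector operations of part I: addition, inner product,
# distance, and angle.
a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, 1.0])

print(a + b)                        # vector addition
print(a @ b)                        # inner product a'b
print(np.linalg.norm(a - b))        # distance ||a - b||

# angle between a and b: arccos of the normalized inner product
theta = np.arccos((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(theta)                        # angle in radians
```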

The whole book can be covered in a 15-week (semester) course; a 10-week (quarter) course can cover most of the material, by skipping a few applications and perhaps the last two chapters on nonlinear least squares. The book can also be used for self-study, complemented with material available online. By design, the pace of the book accelerates a bit, with many details and simple examples in parts I and II, and more advanced examples and applications in part III. A course for students with little or no background in linear algebra can focus on parts I and II, and cover just a few of the more advanced applications in part III. A more advanced course on applied linear algebra can quickly cover parts I and II as review, and then focus on the applications in part III, as well as additional topics.

We are grateful to many of our colleagues, teaching assistants, and students for helpful suggestions and discussions during the development of this book and the associated courses. We especially thank our colleagues Trevor Hastie, Rob Tibshirani, and Sanjay Lall, as well as Nick Boyd, for discussions about data fitting and classification, and Jenny Hong, Ahmed Bou-Rabee, Keegan Go, David Zeng, and Jaehyun Park, Stanford undergraduates who helped create and teach the course EE103. We thank David Tse, Alex Lemon, Neal Parikh, and Julie Lancashire for carefully reading drafts of this book and making many good suggestions.

Table of Contents

I Vectors
1 Vectors
1.1 Vectors
1.2 Vector addition
1.3 Scalar-vector multiplication
1.4 Inner product
1.5 Complexity of vector computations
Exercises
2 Linear functions
2.1 Linear functions
2.2 Taylor approximation
2.3 Regression model
Exercises
3 Norm and distance
3.1 Norm
3.2 Distance
3.3 Standard deviation
3.4 Angle
3.5 Complexity
Exercises
4 Clustering
4.1 Clustering
4.2 A clustering objective
4.3 The k-means algorithm
4.4 Examples
4.5 Applications
Exercises
5 Linear independence
5.1 Linear dependence
5.2 Basis
5.3 Orthonormal vectors
5.4 Gram–Schmidt algorithm
Exercises
II Matrices
6 Matrices
6.1 Matrices
6.2 Zero and identity matrices
6.3 Transpose, addition, and norm
6.4 Matrix-vector multiplication
6.5 Complexity
Exercises
7 Matrix examples
7.1 Geometric transformations
7.2 Selectors
7.3 Incidence matrix
7.4 Convolution
Exercises
8 Linear equations
8.1 Linear and affine functions
8.2 Linear function models
8.3 Systems of linear equations
Exercises
9 Linear dynamical systems
9.1 Linear dynamical systems
9.2 Population dynamics
9.3 Epidemic dynamics
9.4 Motion of a mass
9.5 Supply chain dynamics
Exercises
10 Matrix multiplication
10.1 Matrix-matrix multiplication
10.2 Composition of linear functions
10.3 Matrix power
10.4 QR factorization
Exercises
11 Matrix inverses
11.1 Left and right inverses
11.2 Inverse
11.3 Solving linear equations
11.4 Examples
11.5 Pseudo-inverse
Exercises
III Least squares
12 Least squares
12.1 Least squares problem
12.2 Solution
12.3 Solving least squares problems
12.4 Examples
Exercises
13 Least squares data fitting
13.1 Least squares data fitting
13.2 Validation
13.3 Feature engineering
Exercises
14 Least squares classification
14.1 Classification
14.2 Least squares classifier
14.3 Multi-class classifiers
Exercises
15 Multi-objective least squares
15.1 Multi-objective least squares
15.2 Control
15.3 Estimation and inversion
15.4 Regularized data fitting
15.5 Complexity
Exercises
16 Constrained least squares
16.1 Constrained least squares problem
16.2 Solution
16.3 Solving constrained least squares problems
Exercises
17 Constrained least squares applications
17.1 Portfolio optimization
17.2 Linear quadratic control
17.3 Linear quadratic state estimation
Exercises
18 Nonlinear least squares
18.1 Nonlinear equations and least squares
18.2 Gauss–Newton algorithm
18.3 Levenberg–Marquardt algorithm
18.4 Nonlinear model fitting
18.5 Nonlinear least squares classification
Exercises
19 Constrained nonlinear least squares
19.1 Constrained nonlinear least squares
19.2 Penalty algorithm
19.3 Augmented Lagrangian algorithm
19.4 Nonlinear control
Exercises
Appendices
A Notation
B Complexity
C Derivatives and optimization
C.1 Derivatives
C.2 Optimization
C.3 Lagrange multipliers
D Further study
Index

About the Authors

Stephen P. Boyd is the Samsung Professor of Engineering, and Professor of Electrical Engineering at Stanford University with courtesy appointments in the Department of Computer Science, and the Department of Management Science and Engineering. He is the co-author of Convex Optimization, written with Lieven Vandenberghe and published by Cambridge University Press in 2004.

Lieven Vandenberghe ...
