> A friend said: "It should be easy to understand, so that most people can follow it and have fun. Just like comparing Yang Zhenning and Hawking: Hawking's talks are entertaining, draw a large audience, and made him famous. For example, relate high-performance computing to space, the universe, and biomedicine. With examples like these, it will be read far more often."

### Preface
Why are vector and matrix operations such an important set of basic functions in computer software? Let's start with the scalar. A scalar describes a thing with a single value: one apple, a kilogram of apples, 365 days in a year. For calculations like these, an abacus or a calculator will do; master this and you can run a snack shop. More often, though, one quantity is not enough to describe a thing, and several characteristic quantities are needed. A force in physics has both magnitude and direction; a color needs three quantities (red, green, and blue); a line segment or a surface lives in two-dimensional space, a three-dimensional structure in three-dimensional space; a day of stock-market trading data has many fields. So, through centuries of effort, mathematicians built one of the most powerful mathematical concepts: the "vector". You could say that being a shop owner is not enough; only by understanding vectors can you understand what algorithms are, and you will see the world anew. Moreover, linear algebra is not only a foundation of mathematics but a foundation of all of science, because almost all human knowledge builds on linearity. What does that mean? Linear algebra underlies all the computer software you see for video, audio, and artificial intelligence. Vectors and vector operations run through the whole of linear algebra. Linear algebra is about how to solve linear problems; how to linearize complex problems is the subject of other disciplines, such as calculus and signals and systems.
![](././../../images_dir/1638153625/1.png)
### Main Body
Today I'm going to talk about the fifth of the most popular computer codes: BLAS, short for Basic Linear Algebra Subprograms, a collection of basic operator functions common to linear algebra computation.
![](././../../images_dir/1638153741/2.png)
**A brief history of BLAS**

**(1979)**
Scientific computation usually relies on vectors and matrices to express relatively simple mathematical operations that are nonetheless very computationally intensive. In the 1970s, however, the scientific community lacked a common set of computational tools for these operations, so programmers in science spent their time writing code for basic mathematical operations instead of concentrating on scientific problems. The programming world needed a standard, and in 1979 the Basic Linear Algebra Subprograms (BLAS) [6] emerged. The standard continued to develop until 1990, defining a set of basic routines first for vector and later for matrix mathematics. Jack Dongarra, a computer scientist at the University of Tennessee who was part of the BLAS development team, says that BLAS in effect reduces complex matrix and vector operations to computational units as basic as addition and subtraction.
![](././../../images_dir/1638153931/3.png)
The Cray-1 supercomputer: before the BLAS programming tools were introduced in 1979, researchers working on machines such as the Cray-1 at Lawrence Livermore National Laboratory in California had no linear algebra standard. Credit: Science History Images/Alamy
Robert van de Geijn, a computer scientist at the University of Texas at Austin, says that BLAS "is probably the most important interface defined in the field of scientific computing". Besides providing standard names for commonly used functions, researchers can be sure that BLAS-based code runs the same way on any computer. The standard also lets computer manufacturers continually optimise their BLAS implementations to run fast on their hardware. "For more than 40 years, BLAS has been at the heart of the scientific computing technology stack, allowing scientific computing software to keep evolving," says Dongarra.
**(20 July 2013)**
【OpenBLAS wiki】 OpenBLAS is an optimized Basic Linear Algebra Subprograms (BLAS) library based on the BSD-licensed GotoBLAS2 1.13. BLAS (Basic Linear Algebra Subprograms) is an application programming interface (API) standard that specifies numerical routines for basic linear algebra operations (such as vector or matrix multiplication); OpenBLAS is one concrete implementation of that standard. The project was initiated by Zhang Xianyi, who released the first version, OpenBLAS 0.2.7, on July 20, 2013. As the initiator and main maintainer of the OpenBLAS open-source project, Zhang Xianyi won the second prize of the 2016 CCF Science and Technology Award [2]. So far, OpenBLAS has published more than 30 releases and gathered more than 3,700 stars and thousands of forks. A few milestones:
+ July 20, 2013: OpenBLAS 0.2.7
+ August 1, 2013: OpenBLAS 0.2.8
+ ......
+ September 3, 2016: OpenBLAS 0.2.19, multi-core ......
+ August 19, 2019: OpenBLAS 0.3.7
+ December 13, 2020: OpenBLAS 0.3.13
+ March 18, 2021: OpenBLAS 0.3.14
![](././../../images_dir/1638154206/4.png)
*I have to say this photo was deliberately chosen. I searched the Internet for it and found it rather charming: a little scientist. Out of curiosity, is he propping himself up against the wall with his left hand?*
https://github.com/xianyi/OpenBLAS
![](././../../images_dir/1638154334/5.png)
Since the article needed illustrations, I searched for pictures related to "BLAS". It turns out that "Blas" is also the name of a handsome man. I digress, but since he is so handsome, it at least shows how hard I tried to research the topic; with the pictures below, I hope to earn myself an extra chicken drumstick.
![](././../../images_dir/1638154390/6.png)
**Three levels of functions in the BLAS library**
+ Level 1: functions that operate on a single vector or on a pair of vectors. Level 1 functions appeared in the original BLAS library, published in 1979.
+ Level 2: functions for matrix-vector computations, which also cover the solution of triangular systems of linear equations. Level 2 functions were published in 1988.
+ Level 3: functions for matrix-matrix operations. Level 3 functions were published in 1990.
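As a rough sketch, here is what one routine from each level might look like in plain Python. These are simplified stand-ins for the real BLAS routines (`saxpy`, `sgemv`, `sgemm`, and friends), which additionally take strides, leading dimensions, and transpose flags:

```python
# Toy versions of one routine from each BLAS level.
# Real BLAS routines (saxpy, sgemv, sgemm, ...) also take strides,
# leading dimensions and transpose flags; these are simplified.

def axpy(alpha, x, y):
    """Level 1 (vector-vector): compute alpha*x + y."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def gemv(alpha, a, x):
    """Level 2 (matrix-vector): compute alpha * (A @ x)."""
    return [alpha * sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def gemm(a, b):
    """Level 3 (matrix-matrix): compute A @ B."""
    bt = list(zip(*b))  # columns of B
    return [[sum(aik * bkj for aik, bkj in zip(row, col)) for col in bt]
            for row in a]

print(axpy(2.0, [1, 2], [10, 20]))                # [12.0, 24.0]
print(gemv(1.0, [[1, 0], [0, 2]], [3, 4]))        # [3.0, 8.0]
print(gemm([[1, 2], [3, 4]], [[1, 0], [0, 1]]))   # [[1, 2], [3, 4]]
```

Optimised BLAS libraries put most of their engineering effort into Level 3, because matrix-matrix multiplication does the most work per byte of data moved.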
> Since the above involves vectors and matrices, I was curious whether there was an accessible article on these strange terms "matrix, vector, tensor", but I couldn't find one. I did find some diagrams at https://easyai.tech/ai-definition/vector, so I'll start with those.

**Relationships between scalars, vectors, matrices and tensors**
These four concepts are of increasing dimensionality, and they are easier to understand through the metaphor of points, lines, surfaces, and bodies:
+ points: scalars
+ lines: vectors
+ surfaces: matrices
+ bodies: tensors
![](././../../images_dir/1638154732/7.png)
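The point/line/surface/body metaphor maps onto how many indices you need to pick out a single number. A throwaway Python sketch, with nested lists standing in for real array types:

```python
# Scalar, vector, matrix and 3rd-order tensor as nested lists:
# each extra level of nesting is one more dimension (one more index).

scalar = 3.0                                  # a point: 0 indices
vector = [1.0, 2.0, 3.0]                      # a line:  1 index
matrix = [[1.0, 2.0], [3.0, 4.0]]             # a grid:  2 indices
tensor = [[[1.0, 2.0], [3.0, 4.0]],
          [[5.0, 6.0], [7.0, 8.0]]]           # a stack of grids: 3 indices

def ndim(x):
    """Nesting depth, i.e. how many indices an element needs."""
    depth = 0
    while isinstance(x, list):
        depth += 1
        x = x[0]
    return depth

print([ndim(obj) for obj in (scalar, vector, matrix, tensor)])  # [0, 1, 2, 3]
```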
**Scalar Quantity**
1 apple, 2 apples, 3 apples
**Vectors**
Vectors (also known as Euclidean vectors or geometric vectors) are quantities that have both magnitude and direction.
![](././../../images_dir/1638154858/8.png) ![](././../../images_dir/1638154864/9.png)
A vector can have n feature dimensions, so it can look like this: v = {a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, ..., an}. ("Hey~~~, are you padding the word count and charging for it?") Vectors were first used in physics, and in physics and engineering they are more often called vectors in the mechanical sense. Many physical quantities are vectors, such as the displacement of an object, or the force a ball exerts when it hits a wall. Linear algebra later abstracted the geometric vector into the more general concept of a vector.
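To make "magnitude and direction" concrete: the magnitude of a vector is its Euclidean length, and dividing by that length leaves a unit vector that carries only the direction. A minimal sketch:

```python
import math

def norm(v):
    """Magnitude: the Euclidean length of the vector."""
    return math.sqrt(sum(vi * vi for vi in v))

def direction(v):
    """Direction: the unit vector pointing the same way as v."""
    n = norm(v)
    return [vi / n for vi in v]

v = [3.0, 4.0]
print(norm(v))       # 5.0
print(direction(v))  # [0.6, 0.8]
```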
**Matrix**
How can I explain this one? Well, a bunch of apples, each with a detailed feature description!
![](././../../images_dir/1638154970/10.png)
**Tensor**
And what can I say about this one? Just continue the pattern: a stack of matrices.
**Operations**
There are so many beautiful transformations in the world; if you program and visualise them, you get diagrams like the one below. In the eyes of a mathematician, the whole world can be expressed mathematically!
> An aside: who invented the "vector"? Eight brilliant mathematicians and some 2,000 years in the making! The parallelogram law for "velocity" was recorded in the Aristotelian book "Mechanics" and proved by Heron three centuries later. Physics divides the elements of mechanics into two categories: quantities such as force, which have both magnitude and direction, are called vectors, while quantities such as mass, which have only magnitude and no direction, are called scalars. Isaac Newton (1643-1727), the great 17th-century English mathematician, precisely stated and proved the "parallelogram law" of forces in his famous "Mathematical Principles of Natural Philosophy", giving the method for decomposing and composing forces, which played a major role in building his whole system of mechanics. In the 18th century, complex numbers drew the attention of Bernoulli, Euler, and others; the most famous result is Euler's formula, which extends the domain of the exponential function to the complex numbers and links the trigonometric and exponential functions, earning it the title of "the bridge of mathematics". ......
> Have I finally found a solid excuse for failing the course?
![](././../../images_dir/1638155273/11.png)
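One of those beautiful transformations, rotating a point around the origin, is just a 2x2 matrix applied to a vector. A minimal sketch, not tied to any particular library:

```python
import math

def rotate(point, theta):
    """Rotate a 2-D point by theta radians: multiply the column vector
    (x, y) by the rotation matrix [[cos t, -sin t], [sin t, cos t]]."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# A quarter turn takes (1, 0) to (0, 1), up to floating-point noise.
x, y = rotate((1.0, 0.0), math.pi / 2)
print(round(x, 9), round(y, 9))  # 0.0 1.0
```

Scaling, shearing, and reflection work the same way; composing transformations is just multiplying their matrices, which is exactly the kind of workload BLAS accelerates.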
How do you actually compute all these odd things: vectors, matrices, trigonometric and inverse trigonometric functions, integrals and derivatives, exponentials and logarithms? That takes software implementing the corresponding algorithms (which is hard; I nearly failed the course, after all). Hence BLAS, LIBM/VML, FFT, and then collections of mathematical libraries such as Intel MKL (Math Kernel Library), AMD ACML (AMD Core Math Library), APL (Arm Performance Libraries), and PerfXAPI-MPL (Math Performance Libraries), home-grown intellectual property of PerfXLab.
**A master's student builds mathematical software that earns a billion dollars a year! Do you know who it is?**
The 1970s saw computers in the United States develop on a fast track, and it was in this era that Jack Little grew up. After graduating from MIT in 1978 with a degree in electrical engineering and computer science, Little went on to earn a master's degree in electrical engineering from Stanford University in 1980. During his studies he began to ponder a question: how could computation on computers be made faster and simpler? To realise this vision he threw himself into research, and in 1984 Little founded his own company, hoping to gather more like-minded people around a common goal. He never thought of giving up; he worked day and night in his rented house on programs and code, and his persistence eventually brought him two partners, who gradually got the company running. After much trial and error they developed MATLAB, the company's first and most successful piece of mathematical software, which became well known throughout science and technology. After that, the company went on a "frenzy" of upgrades and expansion. As new employees joined, MathWorks moved from its original rented house to a larger one in Massachusetts, though many employees still had to work remotely from different locations. The company then moved to the top floor of a building in South Natick and eventually "took over" the entire building. Today, the company has spread to other parts of the world. What is the core problem that MATLAB solves? Exactly the one above: "how to calculate all these weird vectors, matrices, trigonometric and inverse trigonometric functions, integrals and derivatives, exponentials and logarithms, and all sorts of other calculations and equations".
**Did you know all of this?**
**LAPACK**: a linear algebra library, also published by Netlib and written in Fortran, with BLAS as its underlying layer. LAPACK provides a rich set of routines for problems such as solving systems of linear equations, least-squares solutions of linear systems, computing eigenvectors, Householder transformations for computing matrix QR decompositions, and singular value decomposition. Its routines build on BLAS for efficiency. Netlib implements this set of specifications, and the resulting library is called LAPACK.

**MKL**: Intel MKL is built with the Intel C++ and Fortran compilers and threaded using OpenMP*. The library's algorithms distribute data and tasks evenly, making full use of multiple cores and processors. Linux and Windows are supported. At its base it has:
+ BLAS: all matrix-matrix operations (Level 3) are threaded for both dense and sparse BLAS. Many vector-vector (Level 1) and matrix-vector (Level 2) operations are threaded for dense matrices in 64-bit programs on the Intel 64 architecture. For sparse matrices, all Level 2 operations are threaded except the triangular sparse matrix solver.
+ ...the rest is omitted so as not to upstage our protagonist, BLAS...

**Eigen**: Eigen is a C++ library for linear algebra, matrix, and vector operations, containing many algorithms. Eigen ships as source code and can be used by including only the Eigen header files. The reason is that Eigen is implemented with templates; since template functions are not compiled separately, it is available only as source code, not as a dynamic library.
Underlying back ends of Eigen:
+ BLAS/LAPACK: any F77-based BLAS or LAPACK library can serve as the base layer (EIGEN_USE_BLAS, EIGEN_USE_LAPACKE)
+ MKL: MKL is supported as the underlying layer (EIGEN_USE_MKL_ALL)
+ CUDA: Eigen can be used inside CUDA kernels
+ OpenMP: multi-threaded optimisation

Relationships:
+ BLAS/LAPACK in the narrow sense can be understood as APIs for linear algebra libraries
+ Netlib implements Fortran/C versions of BLAS/LAPACK: CBLAS/CLAPACK
+ The open-source community and commercial companies have produced targeted optimisations of the BLAS API (ATLAS, OpenBLAS) and the LAPACK API (MKL, ACML, CUBLAS)
+ Besides implementing their own linear algebra routines, Eigen and Armadillo can sit on the BLAS/LAPACK back ends above to accelerate operations

Comparison:
+ Ease of interface: Eigen > Armadillo > MKL/OpenBLAS
+ Speed: MKL ≈ OpenBLAS > Eigen (with MKL) > Eigen > Armadillo
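To give a feel for the kind of work LAPACK does, here is a toy Gaussian elimination with partial pivoting for solving Ax = b in pure Python. It is only a stand-in for LAPACK's `*gesv` routines, which are blocked, BLAS-backed, and far more careful numerically:

```python
def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (a toy version of what LAPACK's *gesv routines do)."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting: swap in the row with the largest |pivot|.
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - s) / m[i][i]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4.
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```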
> The chain of knowledge "basic mathematics -> applied mathematics -> computer science -> concrete, visual examples" is rather long, and I can hardly cover it all in limited time. Can any of you help? Please contact me.
> Why do I want to do this?
>+ When I was studying linear algebra and calculus, I said, "I'm not going to be a mathematician, so I don't see the point of studying this." Only later, when I studied the principles of automatic control, did I realise how powerful mathematics is, for instance as the basis of a PID algorithm.
>+ On the other hand, China has a large number of engineers who are skilled at calling various libraries to build an app or a system, while completely overlooking the existence and value of the excellent basic software underneath (the very libraries, frameworks, and algorithms being called). We are used to thinking about innovation as "standing on the shoulders of giants", but high-quality technological innovation usually comes from digging down into the underlying technology.

---
![](././../../images_dir/1632828056/1.png)