Software 2.0
https://medium.com/@karpathy/software-2-0-a64152b37c35
This article is quite interesting. Software 1.0 is constructed from complex, explicit rules, while Software 2.0 is constructed from goals and constraints. That distinction closely parallels the difference between:
- imperative and declarative programming languages;
- traditional machine learning and deep learning.
The article analyzes the characteristics of "Software 2.0" and walks through its benefits and limitations.
Software 2.0 is built on neural networks, and the author is an expert in deep learning, so it is no surprise the article is this insightful.
(Figure: the difference between Software 1.0 and 2.0; Software 2.0 is implemented with deep learning.)
I sometimes see people refer to neural networks as just “another tool in your machine learning toolbox”. They have some pros and cons, they work here or there, and sometimes you can use them to win Kaggle competitions. Unfortunately, this interpretation completely misses the forest for the trees. Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0.
The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer is identifying a specific point in program space with some desirable behavior.
In contrast, Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input output pairs of examples) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints. In the case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent.
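To make "specify constraints, then search" concrete, here is a minimal sketch in plain NumPy (my own illustration, not code from the article): the "program" is just two weights, the dataset of input-output pairs is the constraint, and stochastic gradient descent performs the search. All names and numbers are made up.

```python
import numpy as np

# "Constraints": a dataset of input-output pairs (noisy samples of y = 2x + 1).
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=100)
ys = 2.0 * xs + 1.0 + 0.01 * rng.normal(size=100)

w, b = 0.0, 0.0          # the "program": two weights, initialized arbitrarily
lr = 0.1                 # learning rate (an assumed hyperparameter)

for step in range(500):
    i = rng.integers(len(xs))          # stochastic: one random example per step
    err = (w * xs[i] + b) - ys[i]      # how far the program is from the constraint
    w -= lr * err * xs[i]              # gradient of 0.5 * err**2 w.r.t. w
    b -= lr * err                      # gradient of 0.5 * err**2 w.r.t. b

print(w, b)  # approaches 2.0, 1.0: the search found a program satisfying the constraints
```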
It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks.
Software 2.0 will not replace 1.0 outright (much of 2.0's functionality is itself implemented in 1.0 code), but it is taking over many areas that used to be written in 1.0, for example:
- Visual Recognition
- Speech Recognition
- Speech Synthesis
- Machine Translation
- Robotics
- Games
The benefits of Software 2.0
- Computationally homogeneous. A typical neural network is, to the first order, made up of a sandwich of only two operations: matrix multiplication and thresholding at zero (ReLU). Compare that with the instruction set of classical software, which is significantly more heterogeneous and complex. Because you only have to provide Software 1.0 implementation for a small number of the core computational primitives (e.g. matrix multiply), it is much easier to make various correctness/performance guarantees. (Computational homogeneity: a typical NN is computed with nothing but matrix multiplies and ReLUs, far more uniform than the 1.0 instruction set; see the first sketch after this list.)
- Simple to bake into silicon. As a corollary, since the instruction set of a neural network is relatively small, it is significantly easier to implement these networks much closer to silicon, e.g. with custom ASICs, neuromorphic chips, and so on. The world will change when low-powered intelligence becomes pervasive around us. E.g., small, inexpensive chips could come with a pretrained ConvNet, a speech recognizer, and a WaveNet speech synthesis network all integrated in a small protobrain that you can attach to anything. (Easy to bake into chips, so this software can run pervasively and at low power.)
- Constant running time. Every iteration of a typical neural net forward pass takes exactly the same amount of FLOPS. There is zero variability based on the different execution paths your code could take through some sprawling C++ code base. Of course, you could have dynamic compute graphs but the execution flow is normally still significantly constrained. This way we are also almost guaranteed to never find ourselves in unintended infinite loops. (Constant running time: for a NN, the FLOPs per forward pass are fixed; the first sketch after this list illustrates this as well.)
- Constant memory use. Related to the above, there is no dynamically allocated memory anywhere so there is also little possibility of swapping to disk, or memory leaks that you have to hunt down in your code. (By the same token, memory use is also constant.)
- It is highly portable. A sequence of matrix multiplies is significantly easier to run on arbitrary computational configurations compared to classical binaries or scripts.
- It is very agile. If you had a C++ code and someone wanted you to make it twice as fast (at cost of performance if needed), it would be highly non-trivial to tune the system for the new spec. However, in Software 2.0 we can take our network, remove half of the channels, retrain, and there — it runs exactly at twice the speed and works a bit worse. It’s magic. Conversely, if you happen to get more data/compute, you can immediately make your program work better just by adding more channels and retraining. (The 2.0 stack is extremely flexible; for example, removing half of the channels and retraining makes the program run about twice as fast, at a small cost in accuracy.)
- Modules can meld into an optimal whole. Our software is often decomposed into modules that communicate through public functions, APIs, or endpoints. However, if two Software 2.0 modules that were originally trained separately interact, we can easily backpropagate through the whole. Think about how amazing it could be if your web browser could automatically re-design the low-level system instructions 10 stacks down to achieve a higher efficiency in loading web pages. With 2.0, this is the default behavior. (NN modules are architecturally composable: each part can be trained separately, then combined and retrained end to end; see the second sketch after this list. More generally, Software 2.0 is not that easy to master, but it is very easy to use.)
- It is better than you. Finally, and most importantly, a neural network is a better piece of code than anything you or I can come up with in a large fraction of valuable verticals, which currently at the very least involve anything to do with images/video, sound/speech, and text. (In the verticals listed above where 2.0 is replacing 1.0, it already outperforms the traditional approaches.)
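First sketch, for the homogeneity and constant-cost points: a minimal NumPy forward pass that really is just a sandwich of matrix multiplies and ReLUs. Shapes and values are arbitrary, chosen only for illustration.

```python
import numpy as np

# Fixed weights of a tiny two-layer network (shapes chosen arbitrarily).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(784, 256))
W2 = rng.normal(size=(256, 10))

def forward(x):
    # The entire "instruction set": matrix multiply + thresholding at zero.
    h = np.maximum(x @ W1, 0.0)   # ReLU
    return h @ W2

x = rng.normal(size=(1, 784))
logits = forward(x)
# Every call performs the same multiply-adds (784*256 + 256*10 per row),
# with no data-dependent branches, so runtime and memory are constant.
```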
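Second sketch, for the "modules meld" point: two hypothetical modules, imagined as having been trained separately, are composed and fine-tuned end to end, with gradients flowing across the module boundary. This uses PyTorch for autograd; the module shapes and data are made up.

```python
import torch
import torch.nn as nn

# Two hypothetical Software 2.0 "modules", imagined as trained separately.
encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
classifier = nn.Linear(16, 4)

# Compose them; because both are differentiable, the joint system can be
# optimized as a whole rather than through a frozen API boundary.
pipeline = nn.Sequential(encoder, classifier)
opt = torch.optim.SGD(pipeline.parameters(), lr=0.01)

x = torch.randn(8, 32)             # a dummy batch
y = torch.randint(0, 4, (8,))      # dummy labels

loss = nn.functional.cross_entropy(pipeline(x), y)
opt.zero_grad()
loss.backward()   # backpropagates through the classifier AND the encoder
opt.step()        # both modules adapt toward the shared objective
```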
The limitations of Software 2.0
- At the end of the optimization we’re left with large networks that work well, but it’s very hard to tell how. Across many application areas, we’ll be left with a choice of using a 90% accurate model we understand, or a 99% accurate model we don’t. (NNs perform very well in many applications, but we cannot explain how they work, so we often have to choose between a 90%-accurate model that is interpretable and a 99%-accurate one that is not.)
- The 2.0 stack can fail in unintuitive and embarrassing ways, or worse, they can “silently fail”, e.g., by silently adopting biases in their training data, which are very difficult to properly analyze and examine when their sizes are easily in the millions in most cases. (NNs readily absorb the distribution of their input data; for instance, they may quietly learn the biases present in the training set.)
- Finally, we’re still discovering some of the peculiar properties of this stack. For instance, the existence of adversarial examples and attacks highlights the unintuitive nature of this stack. (We now know that inputs can be deliberately constructed to fool or attack NNs; see the sketch below.)
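As a concrete illustration of adversarial examples, here is a minimal fast gradient sign method (FGSM) sketch in PyTorch; the stand-in model and the perturbation budget eps are my own assumptions, not from the article.

```python
import torch
import torch.nn as nn

# A stand-in classifier (hypothetical; any differentiable model works).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

x = torch.rand(1, 784)     # a "clean" input
y = torch.tensor([3])      # its true label
x.requires_grad_(True)

# Take the gradient of the loss with respect to the INPUT, not the weights.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.05                 # perturbation budget (assumed)
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
# x_adv is nearly indistinguishable from x, yet often flips the prediction.
```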