Repost | Tutorial | Editor: 我只采一朵 | 2017-06-02 10:05:41 | 160 reads
Overview: this complete guide to deep learning brings together the best self-study resources currently available online, and it is updated from time to time. Well worth bookmarking!
As a branch of machine learning, deep learning is one of the hottest and fastest-developing AI technologies of recent years, and its learning resources, including free public tutorials and tools, are extremely abundant, which in turn leaves IT professionals spoiled for choice. This complete guide to deep learning, compiled by Yerevann, brings together the best self-study resources currently available online and is updated from time to time. It is well worth bookmarking. Below is the guide as compiled and translated by IT經理網:
Mathematics: you need ordinary university-level mathematics, for example the mathematical concepts mentioned in several chapters of the book Deep Learning:
Programming: you need to know how to program in order to develop and test deep learning models. For machine learning we recommend starting with Python, along with the scientific-computing libraries NumPy and SciPy. Resource links follow (the stars used throughout this article indicate difficulty):
– covers the various commonly used libraries in some detail, and also touches on a few deeper technical topics ★★
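For readers new to NumPy, the sketch below shows the vectorized array style these resources assume; the numbers themselves are arbitrary.

```python
import numpy as np

# Vectorized arithmetic: operate on whole arrays at once, no Python loop.
x = np.linspace(0.0, 1.0, 5)             # [0.0, 0.25, 0.5, 0.75, 1.0]
w = np.full(5, 2.0)                      # five weights, all 2.0
y = w * x + 1.0                          # elementwise multiply-add

# The dot product, the core operation behind most ML models.
s = np.dot(w, x)                         # 2*(0 + 0.25 + 0.5 + 0.75 + 1) = 5.0
```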
If you have the basic skills above, we suggest choosing one of the following four introductory online courses, or combining several of them (stars indicate difficulty):
A popular deep learning video course on YouTube. It was recorded in 2013, but the content is not dated; it explains the mathematics behind neural networks in great detail. ★★
(Convolutional Neural Networks for Visual Recognition) taught by Professor Fei-Fei Li, who has since moved to Google, together with Andrej Karpathy and Justin Johnson. It focuses on image processing while still covering most of the important concepts in deep learning. ★★
Michael Nielsen's online book is currently the most accessible introduction to neural networks. It does not cover every important topic, but it offers many clear, intuitive explanations and provides implementation code for some of the basic concepts. ★
Co-written by Ian Goodfellow, Yoshua Bengio and Aaron Courville, this is currently the most comprehensive resource in deep learning, covering far more ground than any other course. ★★★
Machine Learning Basics
Machine learning is the science, and also the art, of teaching computers with data. It is a relatively mature field at the intersection of computer science and mathematics, and deep learning is only a small, newly emerging part of it, so understanding the concepts and tools of machine learning matters a great deal for learning deep learning well. Some important machine learning resources follow (the English course descriptions below are kept as-is):
– decision trees ★
– the most popular course on Coursera ★★
Larochelle’s course doesn’t have separate introductory lectures for general machine learning, but all required concepts are defined and explained whenever needed.
Programming resources for machine learning: most of the popular machine learning algorithms are implemented in the Python library Scikit-learn. Implementing an algorithm from scratch is a great way to understand how it really works. Relevant resources:
covers linear regression, k-nearest-neighbors and support vector machines. First it shows how to use them from scikit-learn, then implements the algorithms from scratch. ★
Andrew Ng’s course on Coursera has many assignments in Octave language. The same algorithms can be implemented in Python. ★★
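To give a flavor of what "implementing the algorithms from scratch" means, here is a minimal pure-Python k-nearest-neighbors classifier; the function name and toy data are our own illustration, and scikit-learn's KNeighborsClassifier does the same job with much faster neighbor search.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors."""
    # Distance from every training point to the query, smallest first.
    dists = sorted((math.dist(p, query), y) for p, y in zip(train, labels))
    top = [y for _, y in dists[:k]]           # labels of the k nearest
    return Counter(top).most_common(1)[0][0]  # majority vote

# Toy data: class 'a' near the origin, class 'b' near (5, 5).
points = [(0, 0), (0, 1), (5, 5), (6, 5)]
classes = ['a', 'a', 'b', 'b']
```

For example, `knn_predict(points, classes, (5, 6))` returns `'b'`, the class of the cluster nearest the query.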
Neural Network Basics
Neural networks are powerful machine learning algorithms, and they are the foundation of deep learning:
– shows how simple neural networks can do linear regression ★
explains why it is important to implement backpropagation once from scratch ★★
Hands-On Neural Network Tutorials
– Jupyter notebook available ★
Andrej Karpathy implements backpropagation in Javascript in his . ★
in Python ★
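To make "implementing backpropagation once from scratch" concrete, here is a minimal sketch with a single tanh hidden unit trained on one toy example; the data, initialization and learning rate are all illustrative.

```python
import math

# The smallest network on which backpropagation (the chain rule applied
# layer by layer) is visible: one tanh hidden unit feeding a linear output.
w1, w2 = 0.5, 0.5          # hidden-layer weight, output-layer weight
lr = 0.1                   # learning rate
x, y = 1.0, 0.5            # the one training example

for _ in range(200):
    h = math.tanh(w1 * x)                  # forward pass: hidden activation
    y_hat = w2 * h                         # forward pass: prediction
    err = y_hat - y                        # dL/dy_hat for L = (y_hat-y)^2 / 2
    grad_w2 = err * h                      # chain rule: dL/dw2
    grad_w1 = err * w2 * (1 - h * h) * x   # chain rule through tanh'
    w1 -= lr * grad_w1                     # gradient-descent updates
    w2 -= lr * grad_w2
```

After training, `w2 * math.tanh(w1 * x)` is very close to the target 0.5; writing these few lines yourself is exactly the exercise recommended above.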
Improving Neural Network Learning
Training neural networks is far from easy. Often the machine does not learn at all (underfitting); at other times it memorizes the training data verbatim and cannot generalize to new data (overfitting). There are many ways to tackle both problems.
Recommended tutorials:
– visualizes the performance of different optimization algorithms ★
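The kind of optimizer comparison visualized above can be sketched in a few lines: plain gradient descent versus momentum on a toy 1-D quadratic. All hyperparameters here are illustrative, not recommendations.

```python
# Minimize f(w) = (w - 3)^2, whose minimum is at w = 3.
def grad(w):
    return 2.0 * (w - 3.0)          # derivative of (w - 3)^2

def sgd(w, lr=0.1, steps=50):
    for _ in range(steps):
        w -= lr * grad(w)           # step straight down the gradient
    return w

def momentum(w, lr=0.1, beta=0.9, steps=300):
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w)  # velocity remembers past gradients
        w += v
    return w
```

Both optimizers reach the minimum on this easy problem; on badly conditioned or noisy losses, the accumulated velocity is what lets momentum keep moving where plain gradient descent stalls or zig-zags.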
Popular Mainstream Frameworks
Many of today's frameworks are optimized for the latest computer hardware, and most provide a Python interface (except Torch, which requires Lua). Once you understand how the basic deep learning algorithms are implemented, it is time to pick a framework and get to work (see also the related CTOCIO article):
provides low-level primitives for constructing all kinds of neural networks. It is maintained by . See also: – Jupyter notebook available ★
is another low-level framework. Its architecture is similar to Theano. It is maintained by the Google Brain team.
is a popular framework that uses Lua language. The main disadvantage is that Lua’s community is not as large as Python’s. Torch is mostly maintained by Facebook and Twitter.
There are also higher-level frameworks that run on top of these:
is a higher level framework built on top of Theano. It provides simple functions to create large networks with few lines of code.
is a higher level framework that works on top of either Theano or TensorFlow.
If you find it hard to choose among the frameworks, the comparison in the Stanford course may help. ★★
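To give a feel for what the low-level frameworks (Theano, TensorFlow) actually do, here is a deliberately tiny pure-Python toy of a computation graph with reverse-mode autodiff; this is not any framework's real API.

```python
# Build a graph of operations, run it forward, then run gradients backward.
class Node:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value        # result of the forward computation
        self.parents = parents    # nodes this one was computed from
        self.grad_fns = grad_fns  # local derivative w.r.t. each parent
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b),
                (lambda g: g, lambda g: g))

def mul(a, b):
    return Node(a.value * b.value, (a, b),
                (lambda g: g * b.value, lambda g: g * a.value))

def backward(out):
    # Push gradients from the output toward the leaves. This simple version
    # is correct for tree-shaped graphs; real frameworks visit nodes in
    # topological order to handle shared subexpressions.
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, fn in zip(node.parents, node.grad_fns):
            parent.grad += fn(node.grad)
            stack.append(parent)

# y = x * w + b, then ask for dy/dx, dy/dw, dy/db.
x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = add(mul(x, w), b)
backward(y)
```

After `backward(y)`, each leaf holds its gradient (`x.grad == 3.0`, the value of `w`). Higher-level libraries like Lasagne and Keras simply assemble large graphs of this kind for you.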
Convolutional Neural Networks
Convolutional networks (CNNs) are a particular kind of neural network that uses a few clever tricks to learn much faster and better. CNNs revolutionized computer vision and are also widely applied to speech recognition, text classification and other tasks.
Recommended tutorials:
includes upconvolutions ★★
– shows how convolutional filters (also known as image kernels) transform the image ★
– live visualization of a convolutional network right in the browser ★
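The filter transformations demonstrated above boil down to a small loop. Here is a from-scratch "valid" 2-D convolution (strictly, cross-correlation, as in most deep learning libraries), applied to an illustrative vertical-edge kernel:

```python
def conv2d(image, kernel):
    """Slide `kernel` over a grayscale `image`, summing elementwise products.
    A from-scratch sketch of the filtering step a CNN layer performs."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):           # one output per kernel position
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out

# A dark-to-bright vertical edge, and a kernel that responds to it.
image = [[0, 0, 1, 1]] * 4
vertical_edge = [[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]]
response = conv2d(image, vertical_edge)   # strong response at the edge
```

A CNN layer learns the kernel values by backpropagation instead of hand-picking them as we did here.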
Implementing and Applying Convolutional Networks
All the major frameworks support convolutional networks, and code written with the higher-level libraries is usually more readable.
– a blog post by one of the best performers of Diabetic retinopathy detection contest in Kaggle. Includes a good example of data augmentation. ★★
– the authors used different ConvNets for localization and classification. ★★
for image classification on CIFAR-10 dataset ★★
– implements famous VGGNet network with batch normalization layers in Torch ★
– Residual networks perform very well on image classification tasks. Two researchers from Facebook and CornellTech implemented these networks in Torch ★★★
– lots of practical tips on using convolutional networks including data augmentation, transfer learning, fast implementations of convolution operation ★★
Recurrent Neural Networks
Recurrent networks (RNNs) are designed for problems involving sequence data (text, stock prices, genomes, sensor readings, and so on). They are typically applied to sentence classification (e.g. sentiment analysis) and speech recognition, and also work for text generation and even image generation.
Tutorials:
– describes how RNNs can generate text, math papers and C++ code ★
Hugo Larochelle’s course doesn’t cover recurrent neural networks (although it covers many topics that RNNs are used for). We suggest watching by Nando de Freitas to fill the gap ★★
Michael Nielsen’s book stops at convolutional networks. In the section there is just a brief review of simple recurrent networks and LSTMs. ★
from Stanford’s CS224d (2016) by Richard Socher ★★
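The recurrence at the heart of every RNN tutorial above can be written down in a few lines. Below is a scalar sketch; real layers use weight matrices and vectors, but the update rule is identical, and the weights here are purely illustrative.

```python
import math

def rnn_forward(inputs, w_x, w_h, w_y):
    """Run a one-unit recurrent network over a sequence of numbers.

    h_t = tanh(w_x * x_t + w_h * h_{t-1}),   y_t = w_y * h_t
    """
    h = 0.0                # hidden state: the network's memory
    outputs = []
    for x in inputs:
        h = math.tanh(w_x * x + w_h * h)
        outputs.append(w_y * h)
    return outputs

# The second input is 0, yet the second output is not: the hidden state
# still remembers the first input.
outs = rnn_forward([1.0, 0.0], w_x=1.0, w_h=1.0, w_y=2.0)
```

This carried-over hidden state is what lets RNNs model context in text or time series, and its repeated squashing through tanh is also the source of the vanishing-gradient problem that LSTMs address.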
Implementing and Applying Recurrent Networks
in Lasagne ★
using Lasagne ★
for language modeling ★★
Autoencoders
Autoencoders are neural networks designed for unsupervised learning, that is, for data without labels. They can be used for dimensionality reduction, for pre-training other neural networks, and for data generation. The resources below also cover a variant that combines autoencoders with probabilistic graphical models; the math behind it is introduced in the next section, "Probabilistic Graphical Models".
Recommended tutorials:
– this video also touches an exciting topic of generative adversarial networks. ★★
Implementing Autoencoders
Most autoencoders are quite easy to implement, but we still suggest starting with a simple one. Resources:
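A simple autoencoder really is just an encode → decode → reconstruction-loss loop. As a concrete starting point, here is a linear autoencoder that squeezes 2-D points through a 1-D bottleneck, trained by plain gradient descent; the data, initialization and learning rate are all illustrative.

```python
# Toy data lying on a line: perfectly compressible to one dimension.
data = [(1.0, 1.0), (-1.0, -1.0), (0.5, 0.5), (-0.5, -0.5)]
e1 = e2 = d1 = d2 = 0.3   # encoder weights (e) and decoder weights (d)
lr = 0.05

for _ in range(300):
    ge1 = ge2 = gd1 = gd2 = 0.0
    for x, y in data:
        z = e1 * x + e2 * y                # encode: 2 numbers -> 1 number
        rx, ry = d1 * z - x, d2 * z - y    # decode and take residuals
        gd1 += 2 * rx * z                  # gradients of squared error
        gd2 += 2 * ry * z
        common = 2 * (rx * d1 + ry * d2)   # shared chain-rule factor
        ge1 += common * x
        ge2 += common * y
    e1 -= lr * ge1; e2 -= lr * ge2         # gradient-descent updates
    d1 -= lr * gd1; d2 -= lr * gd2

z = e1 * 1.0 + e2 * 1.0                    # reconstruct the point (1, 1)
recon = (d1 * z, d2 * z)
```

After training, `recon` is very close to the original point: the network has discovered the one direction that explains the data, which is exactly what a linear autoencoder shares with PCA.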
Probabilistic Graphical Models
Probabilistic graphical models (PGMs) sit at the intersection of statistics and machine learning. There are a great many books and courses on PGMs; the resources collected below focus on their use in deep learning. Hugo Larochelle's course introduces a few very famous models, while the book Deep Learning devotes four full chapters to the topic and surveys more than a dozen models in its final chapter. This area demands a fair amount of mathematics:
– first steps towards probabilistic models ★★★
– includes Boltzmann machines (RBM, DBN, …), variational autoencoders, generative adversarial networks, autoregressive models etc. ★★★
– a blog post on variational autoencoders, generative adversarial networks and their improvements by OpenAI. ★★★
attempts to organize lots of architectures using a single scheme. ★★
Implementing Probabilistic Graphical Models
The higher-level frameworks (Lasagne, Keras) do not support PGMs, but plenty of code is available for Theano, TensorFlow and Torch.
– uses a combination of variational autoencoders and generative adversarial networks. ★★★
– another application of generative adversarial networks. ★★★
– Torch implementation of Generative Adversarial Networks ★★
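To show the kind of computation this Theano/Torch code performs, here is one Gibbs-sampling step in a tiny restricted Boltzmann machine (RBM) written in pure Python; the sizes, weights and random seed are all illustrative, and real implementations do the same with matrices on the GPU.

```python
import math
import random

random.seed(0)                                 # illustrative, for repeatability

W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]     # 3 visible x 2 hidden weights
b_h = [0.0, 0.0]                               # hidden biases
b_v = [0.0, 0.0, 0.0]                          # visible biases

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v):
    """P(h_j = 1 | v) = sigmoid(b_h[j] + sum_i v_i * W[i][j])"""
    probs = [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(b_h))]
    return [1 if random.random() < p else 0 for p in probs], probs

def sample_visible(h):
    """P(v_i = 1 | h) = sigmoid(b_v[i] + sum_j h_j * W[i][j])"""
    probs = [sigmoid(b_v[i] + sum(h[j] * W[i][j] for j in range(len(h))))
             for i in range(len(b_v))]
    return [1 if random.random() < p else 0 for p in probs], probs

v0 = [1, 0, 1]
h, h_probs = sample_hidden(v0)      # up pass: visible -> hidden
v1, v_probs = sample_visible(h)     # down pass: one full Gibbs step
```

Contrastive divergence, the standard RBM training procedure, compares statistics of `v0` with those of the reconstruction `v1` to update `W`.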
Top Papers, Videos and Forums
A big list of important deep learning papers.
Provides a nice interface for browsing papers on arXiv.
Contains many videos on advanced deep learning topics.
A very active subreddit. Almost every important new paper is discussed there.
Reposted from: 36大數據