Complete PowerPoint (2017): ImageNet Classification with Deep Convolutional Neural Networks, 35 slides
Note: this file is delivered as a PowerPoint file.
This presentation on ImageNet Classification with Deep Convolutional Neural Networks (2017) contains 35 slides, comes fully formatted in PowerPoint, and is ready to present or print.
With this PowerPoint you can deliver a polished presentation that holds your audience's attention. There is no need to worry about the content: the material on the slides is simple and easy to follow, and we guarantee the quality of this file.
Note: if the text below looks jumbled, that is an artifact of copying it out of the file; the original PowerPoint itself contains no such problems.
A sample of the slide contents:
Slide 4: 2 – Introduction
Current approaches to object recognition make essential use of machine learning methods. Until recently, datasets of labeled images were relatively small — on the order of tens of thousands of images (e.g., NORB [16], Caltech-101/256 [8, 9], and CIFAR-10/100 [12]). But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is necessary to use much larger training sets.
Slide 5: 2 – Introduction
The new larger datasets include LabelMe [23], which consists of hundreds of thousands of fully-segmented images, and ImageNet [6], which consists of over 15 million labeled high-resolution images in over 22,000 categories. To learn about thousands of objects from millions of images, we need a model with a large learning capacity, such as a CNN. Despite the attractive qualities of CNNs, and despite the relative efficiency of their local architecture, they have still been prohibitively expensive to apply at large scale to high-resolution images; this is why GPUs are used.
Slide 6: 3 – The Architecture
Slide 7: 3 – The Architecture
The architecture of our network is summarized in Figure 2. It contains eight learned layers — five convolutional and three fully-connected. Below, we describe some of the novel or unusual features of our network's architecture. Sections 3.1–3.4 are sorted according to our estimation of their importance, with the most important first.
Slide 8: 3 – The Architecture
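To make the eight-layer structure concrete, here is a minimal single-GPU sketch in PyTorch. The kernel counts (96, 256, 384, 384, 256) and the 4096-unit fully-connected layers follow the paper, but the class name is hypothetical, the two-GPU split described on the later slides is ignored, and the pooling/normalization placement is simplified.

```python
import torch
import torch.nn as nn

# Sketch of the eight learned layers: five convolutional, three fully-connected.
# Expects 227x227 input (the paper states 224x224, but 227 makes the
# spatial arithmetic come out to 6x6 before the classifier).
class AlexNetSketch(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),    # conv1
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),  # conv2
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(), # conv3
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(), # conv4
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), # conv5
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),  # fc6
            nn.Linear(4096, 4096), nn.ReLU(),         # fc7
            nn.Linear(4096, num_classes),             # fc8
        )

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Usage: logits = AlexNetSketch()(torch.randn(1, 3, 227, 227))  # shape (1, 1000)
```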
Slide 9: 3.1 – ReLU Nonlinearity
The standard way to model a neuron's output f as a function of its input x is with f(x) = tanh(x) or f(x) = (1 + e^(-x))^(-1). These saturating nonlinearities are much slower to train with gradient descent than the non-saturating f(x) = max(0, x), and deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units.
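A small illustration of why saturation slows training, not from the slides themselves (the values in the comments are what this snippet actually prints):

```python
import torch

# tanh saturates: for inputs far from zero its gradient is nearly zero.
x = torch.tensor([-3.0, 3.0], requires_grad=True)
torch.tanh(x).sum().backward()
print(x.grad)   # tensor([0.0099, 0.0099]) - vanishing gradients

# ReLU, f(x) = max(0, x), keeps a gradient of exactly 1 for any positive input.
y = torch.tensor([-3.0, 3.0], requires_grad=True)
torch.relu(y).sum().backward()
print(y.grad)   # tensor([0., 1.]) - no saturation on the positive side
```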
Slide 10: 3.1 – ReLU Nonlinearity
Figure 1: A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line). The learning rates for each network were chosen independently to make training as fast as possible. No regularization of any kind was employed. The magnitude of the effect demonstrated here varies with network architecture, but networks with ReLUs consistently learn several times faster than equivalents with saturating neurons.
Slide 11: 3.2 – Training on Multiple GPUs
A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. Current GPUs are particularly well-suited to cross-GPU parallelization, as they are able to read from and write to one another's memory directly, without going through host machine memory.
The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers. This means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2. However, kernels in layer 4 take input only from those kernel maps in layer 3 which reside on the same GPU.
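A sketch of this scheme in PyTorch, assuming a machine with two CUDA devices; the class name, channel counts, and the single communication point are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class TwoTowerSketch(nn.Module):
    """Half of the kernels live on each GPU; activations cross GPUs
    only at the one layer chosen as a communication point."""
    def __init__(self):
        super().__init__()
        # Layers 1-2: each GPU computes only its own half of the kernel maps.
        self.tower0 = nn.Sequential(
            nn.Conv2d(3, 48, 11, stride=4), nn.ReLU(),
            nn.Conv2d(48, 128, 5, padding=2), nn.ReLU(),
        ).to('cuda:0')
        self.tower1 = nn.Sequential(
            nn.Conv2d(3, 48, 11, stride=4), nn.ReLU(),
            nn.Conv2d(48, 128, 5, padding=2), nn.ReLU(),
        ).to('cuda:1')
        # Layer 3 takes input from ALL kernel maps of layer 2 (128 + 128),
        # so the two halves must be brought together here.
        self.layer3 = nn.Conv2d(256, 192, 3, padding=1).to('cuda:0')

    def forward(self, x):
        a = self.tower0(x.to('cuda:0'))
        b = self.tower1(x.to('cuda:1'))
        merged = torch.cat([a, b.to('cuda:0')], dim=1)  # the only cross-GPU copy
        return torch.relu(self.layer3(merged))
```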
Slide 12: 3.3 – Local Response Normalization
ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. If at least some training examples produce a positive input to a ReLU, learning will happen in that neuron.
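The slide breaks off here; in the paper, the normalized response is b^i_(x,y) = a^i_(x,y) / (k + α · Σ_j (a^j_(x,y))²)^β, where the sum runs over n adjacent kernel maps at the same spatial position, with k = 2, n = 5, α = 1e-4, and β = 0.75. A minimal sketch using PyTorch's built-in layer (the input shape below is illustrative):

```python
import torch
import torch.nn as nn

# Local response normalization with the paper's hyperparameters.
# Caveat: PyTorch's LocalResponseNorm scales alpha by 1/size internally,
# so reproducing the paper's convention exactly may require alpha=5e-4.
lrn = nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0)

activations = torch.randn(1, 96, 55, 55)  # e.g., output of a 96-kernel conv layer
normalized = lrn(activations)
print(normalized.shape)  # torch.Size([1, 96, 55, 55])
```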