Complete Word file: A Buffer-Sizing Algorithm for Networks on Chip Using TDMA and Credit-Based End-to-End Flow Control
Note: a PowerPoint file with presentation slides is included with this product's Word file as a bonus.
This document is a translation of an authoritative English reference paper, prepared by specialists in the field and delivered as a Microsoft Word file. The content is clear and well organized, and we guarantee its quality.
The Word file is cleanly typed, fully editable, and carefully formatted; the accompanying PowerPoint file comes with an attractive template and ready-made presentation settings.
Note: any garbling you may notice in the text below is an artifact of copying the content out of the file; the original Word file contains no such garbling.
Number of pages in this file: 25
Excerpt from the translation:
Excerpt from the English paper. English title: A Buffer-Sizing Algorithm for Networks on Chip Using TDMA and Credit-Based End-to-End Flow Control
Abstract
When designing a system-on-chip (SoC) using a network-on-chip (NoC), silicon area and power consumption are two key elements to optimize. A dominant part of the NoC area and power consumption is due to the buffers in the network interfaces (NIs) needed to decouple computation from communication. Such decoupling prevents stalling of IP blocks due to the communication interconnect. The size of these buffers is especially important in real-time systems, as there they must be large enough to obtain predictable performance. To ensure that buffers do not overflow, end-to-end flow control is needed. One form of end-to-end flow control used in NoCs is credit-based flow control. This form places additional requirements on the buffer sizes, because the flow-control delays need to be taken into account. In this work, we present an algorithm to find the minimal decoupling buffer sizes for a NoC using TDMA and credit-based end-to-end flow control, subject to the performance constraints of the applications running on the SoC. Our experiments show that our method results in an 84% reduction of the total NoC buffer area when compared to state-of-the-art buffer-sizing methods. Moreover, our method has a low run-time complexity, producing results in the order of minutes for our experiments and enabling quick design cycles for large SoC designs. Finally, our method can take into account multiple use-cases running on the same SoC.
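As a rough rule of thumb (a back-of-the-envelope illustration of this requirement, not a formula taken from the paper), a connection that must sustain a rate of ρ words per cycle, with credits taking t_RT cycles to make the round trip back to the producer, needs a decoupling buffer of at least

\[
B \;\ge\; \left\lceil \rho \cdot t_{\mathrm{RT}} \right\rceil ,
\qquad
t_{\mathrm{RT}} = t_{\mathrm{data}} + t_{\mathrm{credit}} ,
\]

where t_data is the producer-to-consumer delivery latency and t_credit the credit-return latency. With a smaller buffer, the producer runs out of credits before the first one can return, and the sustained rate drops below ρ. All symbols here are illustrative.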
1 Introduction
To effectively tackle the increasing design complexity of SoCs, the computation architecture needs to be decoupled from the communication architecture [16]. With such decoupling, the computation and communication architectures can be designed independently, speeding up the entire design process and hence reducing the time-to-market of SoCs. NoCs can offer this decoupling by placing decoupling buffers between the computational blocks and the communication blocks, thereby hiding the differences in operating speed and burstiness between the cores and the NoC. This allows the cores to execute their transactions without noticing the presence or impact of the interconnect; for example, they will not stall if the NoC is busy with another core.

Finding the minimum size of the NoC decoupling buffers for the set of applications that run on the SoC is an important problem for two reasons. First, the decoupling buffers take up a significant share of the NoC area and power consumption, so finding the minimum buffering requirements is key to an efficient NoC implementation. Second, for predictable system behavior, we need to compute the minimum buffering that still satisfies the application requirements. Moreover, some NoCs employ credit-based end-to-end flow-control mechanisms to provide guaranteed system operation and to remove message-dependent deadlocks in the system [1]. In this case, additional buffering is required to hide the end-to-end latency of the flow-control mechanism and to sustain full-throughput operation. If the buffers are too small, throughput and latency suffer and no end-to-end guarantees can be given.

In this paper we address the problem of computing the minimum size of the decoupling buffers of the NoC. We present an application-specific design method for determining the minimal buffer sizes for the Guaranteed Throughput (GT) connections of the Æthereal NoC architecture [15]. We model the application traffic behavior and the network behavior to determine exact bounds for buffer sizing. Our method also accounts for the buffering requirements introduced by credit-based end-to-end flow control. We apply the method to several SoC designs, which show that it leads to a large reduction in the total NoC buffer area (84% on average) and power consumption when compared to an analytical method. The method has a low run-time complexity and is therefore applicable to complex SoC designs as well. It can handle designs with multiple use-cases by taking, for each buffer, the maximum required size over all use-cases. Finally, the method is integrated into our fully automatic design flow, enabling fast design cycles for a SoC design. Although the algorithmic method is presented for the Æthereal architecture, it can be applied to any NoC for which the behavior of both the IP cores and the network is periodic, such as aSOC [5] and Nostrum [6].

Traditionally, simulation (or trace) based approaches such as [12] are used to compute the buffering requirements of systems. While they provide an optimal bound for the given trace, there is no guarantee that the derived buffer sizes will satisfy different traces; hence, they cannot be used to build predictable systems. Analytical methods for sizing buffers based on jitter-constrained periodic behavior are known, such as those presented in [2, 3]. These methods are usually too pessimistic and can result in larger buffers than the design requires; we quantify this in Section 5.
Stochastic approaches based on queuing theory are shown in [7]. Such stochastic models can only approximate the actual traffic characteristics of the application, and hence system behavior cannot be guaranteed. A general mathematical theory, network calculus [8], has been established to model network behavior; it allows computing bounds on delays and backlogs in networks. The foundations of our algorithmic approach to buffer sizing are based on the models of network calculus. Synchronous Data Flow (SDF) graphs for modeling signal-processing and multimedia applications have been presented by several researchers [9]. Using SDF models to minimize the buffering requirements of processors is presented in [10], and the use of SDF models for NoCs in [11]. SDF models, however, assume uniform data production and consumption when computing buffering requirements. In NoCs that provide throughput guarantees, the TDMA slots allocated to a traffic stream need not be uniformly spread over time; SDF models therefore cannot capture the network at this level of detail, and their results are less optimal. The sketches below illustrate both points.
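To see why slot placement matters, consider a toy worst-case backlog computation (our own illustration, not the paper's algorithm): a producer writes into its NI buffer at a constant rate, while the network drains the buffer only in the TDMA slots allocated to the stream. The slot tables, rates, and function name below are made-up examples.

```python
# Toy worst-case backlog for a TDMA slot table (illustrative only).

def max_backlog(slot_table, rate, periods=4):
    """Peak buffer occupancy when the producer writes `rate` words per
    slot tick and the NoC drains one word per allocated slot."""
    backlog = peak = 0.0
    for t in range(periods * len(slot_table)):
        backlog += rate                        # producer writes each tick
        peak = max(peak, backlog)              # occupancy before the drain
        if slot_table[t % len(slot_table)]:    # stream owns this TDMA slot
            backlog = max(0.0, backlog - 1.0)  # network drains one word
    return peak

# Same 4-out-of-16 slot allocation (rate 0.25), different placement:
uniform   = [1, 0, 0, 0] * 4             # slots evenly spread
clustered = [1, 1, 1, 1] + [0] * 12      # slots back-to-back
print(max_backlog(uniform,   rate=4 / 16))   # ~1.0 word of buffering
print(max_backlog(clustered, rate=4 / 16))   # ~3.25 words, same rate
```

Both tables grant the same long-term rate, yet the clustered table needs roughly three times the buffering; a model that assumes uniform service, as SDF-based sizing does, cannot distinguish the two cases.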
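The flow-control requirement from the introduction can likewise be made concrete with a small discrete-time simulation (again our own sketch; the delays, names, and one-word-per-cycle model are assumptions, not the paper's model). The producer may only send when it holds a credit, i.e. when a consumer-side buffer slot is guaranteed free, and each drained word returns a credit. Full throughput is reached only once the buffer covers the whole credit round trip.

```python
# Toy credit-based end-to-end flow control (illustrative only).

def throughput(buffer_size, cycles=10_000, fwd_delay=4, credit_delay=4):
    """Fraction of cycles in which a word is delivered, for a producer
    that sends one word per cycle whenever it holds a credit."""
    credits = buffer_size      # one credit per free consumer-buffer slot
    data_in_flight = []        # arrival cycles of words on the link
    credits_in_flight = []     # arrival cycles of returning credits
    occupancy = delivered = 0
    for t in range(cycles):
        if data_in_flight and data_in_flight[0] <= t:   # word arrives
            data_in_flight.pop(0)
            occupancy += 1
        if occupancy:                                   # consumer drains
            occupancy -= 1
            delivered += 1
            credits_in_flight.append(t + credit_delay)  # credit goes back
        if credits_in_flight and credits_in_flight[0] <= t:
            credits_in_flight.pop(0)
            credits += 1                                # credit returns
        if credits:                                     # producer sends
            credits -= 1
            data_in_flight.append(t + fwd_delay)
    return delivered / cycles

for b in (2, 4, 8, 12):
    print(b, round(throughput(b), 3))
```

With the assumed 8-cycle credit loop, throughput grows as roughly buffer_size / 8 and saturates at eight words, matching the B ≥ ρ · t_RT rule of thumb stated after the abstract.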