M. Rowley, V. Corces
Results for "Architecture"
Showing 20 of ~2,880,047 results · from DOAJ, Semantic Scholar, CrossRef, arXiv
C. Cachin
Ejaz Ahmed, Michael Jones, Tim K. Marks
Jesse R. Dixon, Inkyung Jung, Siddarth Selvaraj et al.
Higher-order chromatin structure is emerging as an important regulator of gene expression. Although dynamic chromatin structures have been identified in the genome, the full scope of chromatin dynamics during mammalian development and lineage specification remains to be determined. By mapping genome-wide chromatin interactions in human embryonic stem (ES) cells and four human ES-cell-derived lineages, we uncover extensive chromatin reorganization during lineage specification. We observe that although self-associating chromatin domains are stable during differentiation, chromatin interactions both within and between domains change in a striking manner, altering 36% of active and inactive chromosomal compartments throughout the genome. By integrating chromatin interaction maps with haplotype-resolved epigenome and transcriptome data sets, we find widespread allelic bias in gene expression correlated with allele-biased chromatin states of linked promoters and distal enhancers. Our results therefore provide a global view of chromatin dynamics and a resource for studying long-range control of gene expression in distinct human cell lineages.
Jeffrey C. Miller, Siyuan Tan, Guijuan Qiao et al.
J. Hennessy, D. Patterson
Innovations like domain-specific hardware, enhanced security, open instruction sets, and agile chip development will lead the way.
W. Vollmer, D. Blanot, M. de Pedro
J. Contreras-Castillo, S. Zeadally, J. Guerrero-Ibáñez
Today, vehicles are increasingly being connected to the Internet of Things, which enables them to provide ubiquitous access to information for drivers and passengers while on the move. However, as the number of connected vehicles keeps increasing, new requirements of vehicular networks (such as seamless, secure, robust, and scalable information exchange among vehicles, humans, and roadside infrastructure) are emerging. In this context, the original concept of vehicular ad hoc networks is being transformed into a new concept called the Internet of Vehicles (IoV). We discuss the benefits of IoV along with recent industry standards developed to promote its implementation. We further present recently proposed communication protocols that enable the seamless integration and operation of the IoV. Finally, we present future research directions for IoV that require further consideration from the vehicular research community.
G. Wiederhold
D. Perry, A. Wolf
Philippe B Kruchten
C. Cremer
M. Lades, J. Vorbrüggen, J. Buhmann et al.
B. Wang, Steven M. L. Smith, Jiayang Li
Han Cai, Tianyao Chen, Weinan Zhang et al.
Techniques for automatically designing deep neural network architectures, such as reinforcement-learning-based approaches, have recently shown promising results. However, their success relies on vast computational resources (e.g., hundreds of GPUs), making them difficult to use widely. A notable limitation is that they still design and train each network from scratch while exploring the architecture space, which is highly inefficient. In this paper, we propose a new framework for efficient architecture search that explores the architecture space starting from the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose actions grow the network depth or layer width with function-preserving transformations. As such, previously validated networks can be reused for further exploration, saving a large amount of computational cost. We apply our method to explore the architecture space of plain convolutional neural networks (no skip connections, branching, etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip connections achieves a 4.23% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters.
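The key idea in the abstract above is a function-preserving transformation: a trained network is widened (or deepened) so that the larger network computes exactly the same function and can continue training from the reused weights. The following is a minimal sketch of a Net2Net-style widening of a single hidden layer; the matrix shapes and the one-hidden-layer ReLU network are illustrative assumptions, not the paper's exact setup.

```python
import random

def net2wider(W1, b1, W2, new_width, seed=0):
    """Function-preserving widening of one hidden layer (Net2Net-style sketch).
    W1: d_in x h (list of rows), b1: length-h bias, W2: h x d_out."""
    rnd = random.Random(seed)
    h = len(b1)
    # Keep all original units, then replicate randomly chosen ones.
    mapping = list(range(h)) + [rnd.randrange(h) for _ in range(new_width - h)]
    counts = [mapping.count(j) for j in range(h)]  # replication factor per unit
    W1n = [[row[m] for m in mapping] for row in W1]
    b1n = [b1[m] for m in mapping]
    # Divide each replicated unit's outgoing weights by its replication count
    # so the network's output is unchanged.
    W2n = [[w / counts[m] for w in W2[m]] for m in mapping]
    return W1n, b1n, W2n

def forward(x, W1, b1, W2):
    """One hidden ReLU layer followed by a linear output layer."""
    hid = [max(0.0, sum(xi * W1[i][j] for i, xi in enumerate(x)) + b1[j])
           for j in range(len(b1))]
    return [sum(hj * W2[j][k] for j, hj in enumerate(hid))
            for k in range(len(W2[0]))]
```

Because the widened network's outputs match the original's exactly, the meta-controller can grow a validated network and keep training it instead of starting from scratch.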
Ibrar Yaqoob, E. Ahmed, I. A. T. Hashem et al.
D. Budgen
Yu Weng, Tianbao Zhou, Yujie Li et al.
Neural architecture search (NAS) has made significant progress in improving the accuracy of image classification. Recently, some works have attempted to extend NAS to image segmentation, showing preliminary feasibility. However, all of them focus on searching architectures for semantic segmentation in natural scenes. In this paper, we design three types of primitive operation sets on the search space to automatically find two cell architectures, DownSC and UpSC, for semantic image segmentation, especially medical image segmentation. Inspired by the U-net architecture and its variants, which have been successfully applied to various medical image segmentation tasks, we propose NAS-Unet, which stacks the same number of DownSC and UpSC cells on a U-like backbone network. The architectures of DownSC and UpSC are updated simultaneously by a differentiable architecture search strategy during the search stage. We demonstrate the good segmentation results of the proposed method on the Promise12, Chaos, and ultrasound nerve datasets, which were collected by magnetic resonance imaging, computed tomography, and ultrasound, respectively. Without any pretraining, our architecture, searched on PASCAL VOC2012, attains better performance with far fewer parameters (about 0.8M) than U-net and one of its variants when evaluated on the above three types of medical image datasets.
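The U-like backbone described above can be sketched as a stack of N down cells followed by N up cells, with a skip connection joining each down level to the matching up level. In this sketch DownSC and UpSC are placeholder callables, not the searched cell architectures, and features are stand-ins for tensors.

```python
# Minimal sketch of a U-like backbone: N down cells, then N up cells,
# with a skip connection linking each down level to its matching up level.
def unet_like(x, down_cells, up_cells):
    assert len(down_cells) == len(up_cells)
    skips = []
    for cell in down_cells:
        skips.append(x)        # save the feature entering each down level
        x = cell(x)            # downsample / transform (placeholder cell)
    for cell, skip in zip(up_cells, reversed(skips)):
        x = cell(x + skip)     # fuse with the matching-level skip feature
    return x
```

Real cells would also change spatial resolution and channel counts; the skeleton only shows how equal numbers of DownSC and UpSC cells compose into the U shape.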
Colin White, Willie Neiswanger, Yash Savani
Over the past half-decade, many methods have been considered for neural architecture search (NAS). Bayesian optimization (BO), which has long had success in hyperparameter optimization, has recently emerged as a very promising strategy for NAS when it is coupled with a neural predictor. Recent work has proposed different instantiations of this framework, for example, using Bayesian neural networks or graph convolutional networks as the predictive model within BO. However, the analyses in these papers often focus on the full-fledged NAS algorithm, so it is difficult to tell which individual components of the framework lead to the best performance. In this work, we give a thorough analysis of the "BO + neural predictor framework" by identifying five main components: the architecture encoding, neural predictor, uncertainty calibration method, acquisition function, and acquisition function optimization. We test several different methods for each component and also develop a novel path-based encoding scheme for neural architectures, which we show theoretically and empirically scales better than other encodings. Using all of our analyses, we develop a final algorithm called BANANAS, which achieves state-of-the-art performance on NAS search spaces. We adhere to the NAS research checklist (Lindauer and Hutter 2019) to facilitate best practices, and our code is available at https://github.com/naszilla/naszilla.
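The path-based encoding mentioned above represents an architecture by which input-to-output operation sequences (paths) it contains. The sketch below is a hedged illustration of that idea: the operation set `OPS` and the maximum path length are assumptions for the example, not the paper's exact search space.

```python
from itertools import product

OPS = ("conv3x3", "conv1x1", "maxpool")  # assumed operation set

def path_index(max_len):
    """Enumerate every possible op sequence up to max_len; this fixes the
    coordinate order of the encoding vector."""
    paths = []
    for n in range(1, max_len + 1):
        paths.extend(product(OPS, repeat=n))
    return paths

def path_encode(arch_paths, max_len=3):
    """Binary encoding: component is 1 iff the architecture contains that
    path. Such vectors are the features fed to the neural predictor."""
    present = set(arch_paths)
    return [1 if p in present else 0 for p in path_index(max_len)]
```

With three operations and paths up to length 3, the vector has 3 + 9 + 27 = 39 coordinates; inside the BO loop, these encodings are what the neural predictor and acquisition function operate on.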
Mingyu Yan, Lei Deng, Xing Hu et al.
Inspired by the great success of neural networks, graph convolutional neural networks (GCNs) have been proposed to analyze graph data. GCNs mainly comprise two phases with distinct execution patterns. The Aggregation phase behaves like graph processing, showing a dynamic and irregular execution pattern. The Combination phase acts more like a neural network, presenting a static and regular execution pattern. The hybrid execution patterns of GCNs require a design that alleviates irregularity and exploits regularity. Moreover, to achieve higher performance and energy efficiency, the design needs to leverage the high intra-vertex parallelism in the Aggregation phase, the highly reusable inter-vertex data in the Combination phase, and the opportunity for phase-by-phase fusion introduced by the new features of GCNs. However, existing architectures fail to address these demands. In this work, we first characterize the hybrid execution patterns of GCNs on an Intel Xeon CPU. Guided by this characterization, we design a GCN accelerator, HyGCN, that uses a hybrid architecture to execute GCNs efficiently. Specifically, we first build a new programming model to exploit fine-grained parallelism in our hardware design. Second, we propose a hardware design with two efficient processing engines that alleviate the irregularity of the Aggregation phase and leverage the regularity of the Combination phase; these engines exploit various forms of parallelism and reuse highly reusable data efficiently. Third, we optimize the overall system via an inter-engine pipeline for inter-phase fusion and priority-based off-chip memory access coordination to improve off-chip bandwidth utilization. Compared to state-of-the-art software frameworks running on an Intel Xeon CPU and an NVIDIA V100 GPU, our work achieves on average a 1509× speedup with 2500× energy reduction and a 6.5× speedup with 10× energy reduction, respectively.
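The two GCN phases contrasted above can be made concrete with a small sketch: Aggregation gathers neighbor features through irregular, graph-dependent accesses, while Combination applies one shared dense layer to every vertex, a regular, matmul-like computation with highly reusable weights. The sum-aggregator and ReLU below are illustrative choices, not HyGCN's hardware datapath.

```python
def aggregate(adj, feats):
    """Aggregation phase: sum each vertex's own and neighbor features.
    adj: {vertex: [neighbors]}, feats: {vertex: feature list}.
    Access pattern is irregular -- it follows the graph structure."""
    out = {}
    for v, nbrs in adj.items():
        acc = list(feats[v])
        for u in nbrs:
            for k, x in enumerate(feats[u]):
                acc[k] += x
        out[v] = acc
    return out

def combine(aggregated, W):
    """Combination phase: apply a shared weight matrix W (h_in x h_out)
    plus ReLU to every vertex -- regular and reuse-friendly."""
    out = {}
    for v, h in aggregated.items():
        out[v] = [max(0.0, sum(h[i] * W[i][j] for i in range(len(h))))
                  for j in range(len(W[0]))]
    return out
```

This irregular/regular split is exactly why the accelerator pairs two specialized engines and pipelines them for inter-phase fusion.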
Page 7 of 144003