Emerging Topics in Computing

Forward-looking technologies are not necessarily adopted by industry this quickly.

Companies like to hire a "ready person", shortcutting the hardest and most uncertain part of R&D. That mindset is a bit short-sighted. Each company naturally differs in which forward-looking research areas and directions it is willing to invest in, depending on its needs; but simply picking up off-the-shelf results or ready-trained talent is the wrong attitude. Industry and academia should collaborate.

Related product and research areas:

  • Enterprise Computing Systems
  • Computational Networks
  • Hardware and Embedded System Security
  • Educational Computing
  • High Performance Computing
  • Next Generation Wireless Computing Systems

Performance engineers are scarce, yet badly needed. Cloud and mobile both require an understanding of system computing performance. I recall how badly game companies used to suffer when developing 3D games; performance is quite simply the foundation of every industry's development. To develop cloud computing, big data, mobile computing, and the Internet of Things, you must master the optimization of system computation.

Special Issue on Big Data Benchmarks, Performance Optimization, and Emerging Hardware

Big data are emerging as a strategic property of nations and organizations. There are driving needs to generate value from big data. However, the sheer volume of big data requires significant storage and computational resources.

It is expected that systems with unprecedented scales can resolve the problems caused by varieties of big data with daunting volumes. Nevertheless, without big data benchmarks, it is very difficult for big data owners to decide which system is best for meeting their specific requirements. They also face challenges in optimizing the systems for specific or even comprehensive workloads.
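To make the decision problem concrete, the comparison a benchmark enables can be sketched with a toy timing harness. This is a hypothetical, heavily simplified stand-in for a real big-data benchmark suite (which would also fix the dataset, control the environment, and report throughput), not something from the call itself:

```python
import time

def benchmark(workload, *args, repeats=3):
    """Run `workload` several times and return the best wall-clock time."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical workload standing in for a candidate system under test:
t_sort = benchmark(sorted, list(range(100_000, 0, -1)))
print(f"best of {3} runs: {t_sort:.4f}s")
```

Running the same harness over the same workload on two candidate systems is, in miniature, the choice a standardized benchmark lets a data owner make.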

Meanwhile, researchers are also working on innovative data management systems, hardware architectures, and operating systems to improve performance in dealing with big data.

The focus of this special issue will be on architecture and system support for big data systems.

Special Issue on Methods and Techniques for Processing Streaming Big Data in Datacentre Clouds

The Internet of Things (IoT) is part of the Future Internet and comprises many billions of Internet-connected Objects (ICOs), or 'things'.

ICOs can include sensors, RFIDs, social media, actuators (such as machines/equipment fitted with sensors), as well as lab instruments (e.g., a high-energy physics synchrotron) and smart consumer appliances (smart TVs, smartphones, etc.).

The IoT vision has recently given rise to IoT big data applications that are capable of producing billions of data streams and tens of years of historical data to support timely decision making.

Some of the emerging IoT big data applications, e.g. smart energy grids, syndromic bio-surveillance, environmental monitoring, emergency situation awareness, digital agriculture, and smart manufacturing, need to process and manage massive, streaming, and multi-dimensional (from multiple sources) data from geographically distributed data sources.
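As a rough illustration of the kind of streaming computation such applications perform, the sketch below (hypothetical, not from the call) computes a sliding-window average over a stream of sensor readings — one of the basic operators in stream-processing engines:

```python
from collections import deque

def sliding_window_avg(stream, window=5):
    """Yield the average of the last `window` readings for each new reading."""
    buf = deque(maxlen=window)       # old readings fall out automatically
    for reading in stream:
        buf.append(reading)
        yield sum(buf) / len(buf)

# Hypothetical temperature readings from one sensor:
readings = [20.0, 21.0, 35.0, 20.5, 20.0, 19.5]
smoothed = list(sliding_window_avg(readings, window=3))
```

A real engine evaluates operators like this continuously and in parallel over unbounded streams from many geographically distributed sources; the per-window logic, however, is exactly this simple.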

Despite recent technological advances in data-intensive computing paradigms (e.g., the MapReduce paradigm, workflow technologies, stream processing engines, distributed machine learning frameworks) and datacentre clouds, large-scale, reliable, system-level software for IoT big data applications is yet to become commonplace.
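For readers unfamiliar with the MapReduce paradigm mentioned above, here is a minimal in-process sketch of the pattern (illustrative only; real engines such as Hadoop distribute the map and reduce phases across a cluster):

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Apply `mapper` to each record to get (key, value) pairs,
    group the values by key, then fold each group with `reducer`."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

# Word count, the canonical example:
docs = ["big data", "big streams"]
counts = map_reduce(docs,
                    mapper=lambda doc: [(w, 1) for w in doc.split()],
                    reducer=sum)
# counts == {"big": 2, "data": 1, "streams": 1}
```

The limitations listed further below stem from scaling this pattern: the shuffle (the grouping step) assumes data and compute sit in one well-connected cluster.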

As new diverse IoT applications begin to emerge, there is a need for optimized techniques to distribute processing of the streaming data produced by such applications across multiple datacentres that combine multiple, independent, and geographically distributed software and hardware resources.

However, the capability of existing data-intensive computing paradigms is limited in many important aspects such as:

(i) they can only process data on compute and storage resources within a centralised local area network, e.g., a single cluster within a datacentre, which leads to unsatisfactory Quality of Service (QoS) in terms of timeliness of decision making, resource availability, data availability, etc., as application demands increase;

(ii) they do not provide mechanisms to seamlessly integrate data spread across multiple distributed heterogeneous data sources (ICOs);

(iii) they lack support for rapid formulation of intuitive queries over streaming data based on general-purpose concepts, vocabularies, and data discovery; and

(iv) they do not provide any decision-making support for selecting optimal data mining and machine learning algorithms, data application programming frameworks, and NoSQL database systems based on the nature of the big data (volume, variety, and velocity).

Furthermore, adoption of existing datacentre cloud platforms for hosting IoT applications is yet to be realised due to a lack of techniques and software frameworks that can guarantee QoS under uncertain big data application behaviours (data arrival rate, number of data sources, decision-making urgency, etc.), unpredictable datacentre resource conditions (failures, availability, malfunction, etc.), and capacity demands (bandwidth, memory, storage, and CPU cycles). It is clear that existing data-intensive computing paradigms and related datacentre cloud resource-provisioning techniques either fall short of the IoT big data challenge or do not exist.

Special Issue on Approximate and Stochastic Computing Circuits, Systems and Algorithms

The last decade has seen renewed interest in non-traditional computing paradigms. Several (re-)emerging paradigms are aimed at leveraging the error resiliency of many systems by releasing the strict requirement of exactness in computing.

This special issue of TETC focuses on two specific lines of research, known as approximate and stochastic computing.

Approximate computing is driven by considerations of energy efficiency. Applications such as multimedia, recognition, and data mining are inherently error-tolerant and do not require perfect accuracy in computation. The results of signal processing algorithms used in image and video processing are ultimately left to human perception. Therefore, strict exactness may not be required and an imprecise result may suffice. In these applications, approximate circuits aim to improve energy-efficiency by maximally exploiting the tolerable loss of accuracy and trading it for energy and area savings.
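A classic example of an approximate circuit is the lower-part-OR adder (LOA), which ORs the low-order bits instead of adding them, eliminating the low-order carry chain. The sketch below models its behaviour in software; it is an illustration of the general idea, not a reference hardware design:

```python
def loa_add(a, b, k=4, width=16):
    """Lower-part-OR adder: OR the low k bits (no carry chain there),
    add the high bits exactly. In hardware this shortens the critical
    path and saves energy/area at the cost of a bounded error."""
    low_mask = (1 << k) - 1
    low = (a & low_mask) | (b & low_mask)   # approximate low part
    high = ((a >> k) + (b >> k)) << k       # exact high part
    return (high | low) & ((1 << width) - 1)

# Since a + b == (a | b) + (a & b), the error for the low part is
# (a_low AND b_low), so it never exceeds 2**k - 1.
print(loa_add(1234, 5678), "vs exact", 1234 + 5678)  # 6910 vs exact 6912
```

The parameter `k` is the knob the paragraph describes: a larger `k` cuts more of the carry chain (more energy saved) while the worst-case error grows as 2**k - 1.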

Stochastic computing is a paradigm that achieves fault-tolerance and area savings through randomness. Information is represented by random binary bit streams, where the signal value is encoded by the probability of obtaining a one versus a zero. The approach is applicable for data intensive applications such as signal processing where small fluctuations can be tolerated but large errors are catastrophic. In such contexts, it offers savings in computational resources and provides tolerance to errors. This fault tolerance scales gracefully to high error rates. The focus of this special issue will be on the novel design and analysis of approximate and stochastic computing circuits, systems, algorithms and applications.
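The bitstream encoding described above makes some arithmetic remarkably cheap: ANDing two independent streams multiplies the probabilities they encode, so a multiplier reduces to a single AND gate. A software model of this (illustrative only; stream length and seed are arbitrary choices):

```python
import random

def to_bitstream(p, n, rng):
    """Encode a value p in [0, 1] as an n-bit stream with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_bitstream(bits):
    """Decode: the value is the fraction of ones in the stream."""
    return sum(bits) / len(bits)

def stochastic_multiply(p, q, n=10_000, seed=0):
    """Multiply by ANDing two independent streams:
    P(a AND b) = P(a) * P(b) for independent bits."""
    rng = random.Random(seed)
    a = to_bitstream(p, n, rng)
    b = to_bitstream(q, n, rng)
    return from_bitstream([x & y for x, y in zip(a, b)])

print(stochastic_multiply(0.5, 0.8))  # close to 0.5 * 0.8 = 0.4
```

Note the graceful degradation the paragraph mentions: flipping any single bit changes the decoded value by only 1/n, so small fault rates cause proportionally small errors rather than catastrophic ones.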

Special Issue/Section on Low-Power Image Recognition

Digital images have become an integral part of everyday life. It is estimated that 10 million images are uploaded to social networks each hour and 100 hours of video are uploaded for sharing each minute. Sophisticated image/video processing has fundamentally changed how people interact.

This special issue focuses on the intersection of image recognition and energy conservation.

Special Issue/Section on Defect and Fault Tolerance in VLSI and Nanotechnology Systems

The continuous scaling of CMOS devices, as well as the increased interest in the use of emerging technologies, makes the topics related to defect and fault tolerance in VLSI and nanotechnology systems more and more important. All aspects of design, manufacturing, test, reliability, and availability that are affected by defects during manufacturing and by faults during system operation are of interest. IEEE Transactions on Emerging Topics in Computing (TETC) seeks original manuscripts for a Special Section on Defect and Fault Tolerance in VLSI Systems, scheduled to appear in the December 2016 issue.

Special Issue/Section on Emerging Computational Paradigms and Architectures for Multicore Platforms

Multicore and manycore embedded architectures are emerging as computational platforms in many application domains, ranging from high-performance computing to deeply embedded systems. The new generations of parallel systems, both homogeneous and heterogeneous, that are developed on top of these architectures represent what is called the emerging computing-continuum paradigm. A successful evolution of this paradigm is, however, imposing various challenges from both an architectural and a programming point of view. The design of embedded multicores/manycores requires innovative hardware specification and modeling strategies, as well as low-power simulation, analysis, and testing. New synthesis approaches, possibly including reliability and variability compensation, are key issues in the coming technology nodes.

Furthermore, thermally aware design is mandatory to manage power-density issues. The design of effective interconnection networks is a key enabling technology in a manycore paradigm; new solutions such as photonic and RF NoC architectures are emerging in this regard. At the same time, these new interconnection systems have to be compatible with innovative 3D VLSI packaging technologies involving vertical interconnections in 3D and stacked ICs. These design solutions enable the integration of more and more IPs, resulting in heterogeneous platforms where reconfigurable components, multi-DSP engines, and GPUs collaborate to meet the target performance and energy requirements.

Along with design and architectural innovations, many challenges have to be faced to provide an effective programming environment for manycore systems. These challenges call for innovative solutions at the various levels of the programming toolchain, including compilers, programming models, runtime management, and operating systems. Holistic and cross-layer programming approaches have to be targeted, considering not only performance but also energy, dependability, and real-time requirements. Finally, on the application side, multicore/manycore embedded systems are pushing developments in various domains such as biomedical, health care, the Internet of Things, smart mobility, and aviation.

This special issue/section invites contributions on emerging computation technology aspects related, but not limited, to the topics mentioned above. Contributions must be original and highlight emerging computation technologies in the design, testing, and programming of multicore and manycore systems.

Special Issue/Section on New Paradigms in Ad Hoc, Sensor and Mesh Networks, From Theory to Practice

Ad hoc, sensor, and mesh networks have attracted significant attention from academia and industry in the past decade. In recent years, however, new paradigms have emerged due to the large increase in the number and processing power of smartphones and other portable devices. Furthermore, new applications and emerging technologies have created new research challenges for ad hoc networks. The emergence of new operational paradigms such as Smart Home and Smart City, Body Area Networks and E-Health, Device-to-Device Communications, Machine-to-Machine Communications, Software Defined Networks, the Internet of Things, RFID, and Small Cells requires substantial changes in traditional ad hoc networking. The focus of this special issue is on novel applications, protocols and architectures, non-traditional measurement, modeling, analysis and evaluation, prototype systems, and experiments in ad hoc, sensor, and mesh networks.
