Journal:
IEEE INTERNET OF THINGS JOURNAL, 2020, 7(3):2247-2262. ISSN: 2327-4662
Corresponding author:
Qin, Hua
Author affiliations:
[Qin, Hua; Xiao, Xiang; Cao, Buwen; He, Jianxin; Chen, Weihong] Hunan City Univ, Coll Informat & Elect Engn, Yiyang 413000, Peoples R China.;[Peng, Yang] Univ Washington Bothell, Div Comp & Software Syst, Bothell, WA 98011 USA.
Corresponding institution:
[Qin, Hua] H;Hunan City Univ, Coll Informat & Elect Engn, Yiyang 413000, Peoples R China.
Keywords:
Cross interface; energy-efficiency; gateway; Internet of Things (IoT)
Abstract:
Featuring high bandwidth, high reliability, and native IP compatibility, WiFi has been recommended for a wide range of Internet-of-Things (IoT) applications. However, WiFi is inherently energy-hungry and may impose high energy consumption not only on IoT devices but also on gateways. To reduce a gateway's WiFi energy consumption, many energy-efficient solutions for WiFi tethering services can be applied. However, these solutions mainly target the energy optimization of downlink data traffic in WLANs, and they are not suitable for the uplink traffic of delivering massive IoT data from device to gateway (D2G), which is more common in IoT. When a gateway is battery-powered, the high energy consumption caused by D2G communications may deplete the gateway quickly and, as a result, render the whole system dysfunctional. Toward achieving energy-efficient D2G communications, we propose an innovative Green IoT Gateway (GIG) scheme, which aims at minimizing gateway energy consumption while meeting the specific delay requirements of devices via cross-interface collaboration. By utilizing the coexisting low-power ZigBee radios, GIG dynamically schedules the wakeup behavior of high-power WiFi radios for energy-efficient and delay-bounded D2G communications. GIG has been implemented and evaluated in a prototype system, and the experimental results show that, under moderate uplink data traffic and delay requirements, the energy consumption of GIG is 38.5% and 12.7% lower than that of a state-of-the-art WiFi tethering scheme and a simplified version of the GIG scheme, respectively. Moreover, a substantial reduction of energy consumption at the device side is also observed.
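The cross-interface idea above — low-power ZigBee control messages deciding when the high-power WiFi radio must wake — can be pictured with a toy scheduler. This is only a sketch of the scheduling principle; the class name, effective data rate, and method signatures are assumptions for illustration, not the paper's implementation:

```python
import heapq

class GreenGateway:
    """Toy sketch of cross-interface wakeup scheduling: devices announce
    buffered uplink data over ZigBee, and the gateway batches transfers
    into one WiFi active period before the tightest deadline."""

    def __init__(self):
        self.pending = []  # min-heap of (deadline_s, device_id, n_bytes)

    def on_zigbee_notify(self, device_id, n_bytes, deadline_s):
        # Low-power ZigBee control message: device has n_bytes buffered
        # that must reach the gateway by deadline_s.
        heapq.heappush(self.pending, (deadline_s, device_id, n_bytes))

    def next_wifi_wakeup(self, now_s):
        # Wake the WiFi radio just early enough to serve the earliest
        # deadline, assuming an effective uplink rate of ~1 MB/s.
        if not self.pending:
            return None
        earliest_deadline = self.pending[0][0]
        total_bytes = sum(b for _, _, b in self.pending)
        service_time = total_bytes / 1_000_000
        return max(now_s, earliest_deadline - service_time)
```

Deferring the wakeup this way keeps the WiFi radio asleep for as long as the delay bounds permit, which is the source of the gateway-side energy savings the abstract reports.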
Journal:
International Journal of Information and Communication Technology, 2020, 17(2):164-177. ISSN: 1466-6642
Corresponding author:
Xu, S.
Author affiliations:
1. College of Information and Electronic Engineering, Hunan City University, Yiyang, Hunan, 413000, China
Corresponding institution:
College of Information and Electronic Engineering, Hunan City University, Yiyang, Hunan, China
Keywords:
ant colony algorithm; mixed type; big data; subregion; abnormal detection; weighted network nodes; coordinate matrix
Abstract:
To address the low accuracy and limited freedom of anomaly location in existing big-data anomaly detection methods, a partition-based anomaly detection method for mixed big data based on the ant colony algorithm is proposed. The number of common neighbourhoods between nodes in a weighted network is redefined to partition the mixed big data into subregions. Combining the operation, vulnerability, and threat characteristics of the database, the security-situation value is substituted into the anomaly-location part to form a coordinate matrix. The pheromone concentration of each region is then calculated, and regions whose concentration decreases are flagged as abnormal, completing the big-data anomaly detection. Experimental results show that the method achieves high detection accuracy and good freedom of anomaly location, representing clear progress in big-data anomaly detection technology. Future work should build on this method to repair abnormal data effectively and to broaden its application scope.
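The pheromone-based decision rule described above can be illustrated with a minimal sketch (not the paper's exact algorithm — the evaporation rate and update form are assumptions): pheromone in each subregion evaporates and is replenished where ants visit, and a region whose concentration drops between iterations is declared abnormal.

```python
def update_pheromone(tau, visits, rho=0.5):
    """One ant-colony iteration: evaporation at rate rho plus deposits
    from ant visits. tau: region -> pheromone level; visits: region ->
    deposit amount (regions ants avoid receive no deposit)."""
    return {r: (1 - rho) * tau[r] + visits.get(r, 0.0) for r in tau}

def abnormal_regions(tau_before, tau_after):
    """A region whose pheromone concentration decreased is flagged
    as an abnormal region."""
    return [r for r in tau_before if tau_after[r] < tau_before[r]]
```

Intuitively, ants reinforce regions whose data look normal, so the concentration in anomalous subregions decays and falls below its previous value.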
Abstract:
Purpose: Nowadays, the rapid growth of information-technology strategies such as cloud computing is very noticeable in organizations. The advantages of the cloud environment are hard to ignore, given the gains in innovation, flexibility, and economy it brings. A critical topic is therefore the set of factors affecting the adoption of cloud computing. This study aims to understand the factors behind cloud computing adoption and its benefits in companies.
Design/methodology/approach: A research framework with four hypotheses was developed based on the results of previous studies. Structural equation modeling was used for data analysis.
Findings: The results verify the proposed model. In addition, they show that cloud computing adoption is affected by four factors: the human factor (with sub-indicators personal innovativeness and knowledge), the organizational factor (with sub-indicators size, adequacy of resources, and top management support), the technical factor (with sub-indicators compatibility and security), and the environmental factor (with sub-indicators regulatory environment, competitive pressure, and trading partners).
Research limitations/implications: The findings have crucial implications: they make an essential contribution to the research community, administrators, and Information and Communications Technology providers with respect to framing improved tactics for the adoption of cloud computing. The proposed model can improve service providers' understanding of why some service sectors accept cloud computing services while apparently similar ones facing the same market situations do not. In addition, providers should strengthen their interaction with the service sectors that have cloud computing experience, so as to create a well-organized setting for adoption and eliminate any ambiguity about this sort of technology. Moreover, the sample was limited to respondents from Iran.
Practical implications: Research on the usage of cloud computing has shown its effects on organizations today, and the different impacts of cloud computing on other contexts and organizations are at the center of attention. By carefully considering and managing the rationale for cloud computing adoption, organizations can obtain significant advantages.
Originality/value: Most previous studies centered on cloud computing's technical and operational issues. Some surveys have addressed the adoption of cloud computing by organizations in terms of human characteristics or contextual factors. A model is therefore needed to assess the effect of the aforesaid factors on cloud computing adoption.
Author affiliations:
[Chen, Guibin; Zhai, Zhangyin; Ma, Chunlin] Huaiyin Normal Univ, Sch Phys & Elect Elect Engn, Huaian 223001, Peoples R China.;[Wang, Xingyu; Wang, Xiaoxiong; Ma, Chunlin] Nanjing Univ Sci & Technol, Sch Sci, Nanjing 210094, Peoples R China.;[Tan, Weishi] Hunan City Univ, All Solid State Energy Storage Mat & Devices Key, Coll Informat & Elect Engn, Yiyang 413002, Peoples R China.;[Cheng, Zhenzhi; Zhou, Weiping] Nanchang Univ, Sch Mat Sci & Engn, Nanchang 330031, Jiangxi, Peoples R China.
Corresponding institution:
[Tan, Weishi] H;[Zhou, Weiping] N;Hunan City Univ, All Solid State Energy Storage Mat & Devices Key, Coll Informat & Elect Engn, Yiyang 413002, Peoples R China.;Nanchang Univ, Sch Mat Sci & Engn, Nanchang 330031, Jiangxi, Peoples R China.
Keywords:
cognitive radio networks; flow-adaptive spectrum leasing; channel aggregating
Abstract:
Cognitive radio networks (CRNs), which allow secondary users (SUs) to dynamically access the network without affecting primary users (PUs), are widely regarded as an effective approach to mitigating the shortage of spectrum resources and the inefficiency of spectrum utilization. However, SUs suffer from frequent spectrum handoffs and transmission limitations. In this paper, considering the quality-of-service (QoS) requirements of both PUs and SUs, we propose a novel dynamic flow-adaptive spectrum leasing scheme with channel aggregation. Specifically, we design an adaptive leasing algorithm that adjusts the portion of leased channels based on the number of ongoing and buffered PU flows. Furthermore, within the leased spectrum band, SU flows with access priority employ dynamic spectrum access with channel aggregation, which enables one flow to occupy multiple channels for transmission in a dynamically changing environment. For performance evaluation, a continuous-time Markov chain (CTMC) model is developed to capture the proposed strategy and support theoretical analysis. Numerical results demonstrate that the proposed strategy effectively improves spectrum utilization and network capacity while significantly reducing the forced-termination probability and blocking probability of SU flows.
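The leasing rule can be pictured with a minimal numeric sketch (illustrative only: one channel per PU flow, a fixed safety margin, and fixed aggregation bounds are assumptions, not the paper's CTMC model):

```python
def leased_channels(total, ongoing_pu, buffered_pu, margin=1):
    """Adaptive leasing: reserve one channel per ongoing or buffered PU
    flow plus a safety margin, and lease the rest to SU flows."""
    reserved = min(total, ongoing_pu + buffered_pu + margin)
    return total - reserved

def su_aggregation(leased_idle, w_min=1, w_max=3):
    """Channel aggregation: an SU flow occupies between w_min and w_max
    idle leased channels, or is blocked if fewer than w_min are free."""
    if leased_idle < w_min:
        return 0  # flow is blocked
    return min(w_max, leased_idle)
```

As PU flows arrive, the leased portion shrinks automatically; as they depart, SU flows can aggregate the freed channels, which is what raises spectrum utilization while keeping PU QoS intact.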
Abstract:
With the development of modern communication, available spectrum resources are becoming increasingly scarce, which reduces network throughput. Moreover, node mobility changes the network topology, so a considerable amount of control information is consumed, which increases network power consumption and substantially affects network lifetime. To solve the real-time transmission problem in large-scale wireless mobile sensor networks, opportunistic spectrum access is applied to adjust the transmission power of sensor nodes and the data transmission velocity. A multichannel cognitive routing and optimization protocol with a cross-layer design is proposed to study jointly optimal cognitive routing that maximizes network throughput and network lifetime. Experimental results show that the protocol achieves low computational complexity while maximizing network throughput and network lifetime, and that it can also be effectively applied to large-scale wireless mobile sensor networks.
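The power-adjustment step can be illustrated with a log-distance path-loss calculation (a textbook propagation model with assumed parameters, not the protocol's actual controller): a node transmits with the smallest power that still meets the receiver's SNR requirement on the chosen channel, which is how per-hop energy is kept low and network lifetime extended.

```python
import math

def min_power_for_snr(distance_m, snr_req_db,
                      noise_dbm=-90.0, path_loss_exp=3.0):
    """Smallest transmit power (dBm) meeting the SNR requirement over
    one hop under a log-distance path-loss model. noise_dbm and
    path_loss_exp are illustrative values."""
    path_loss_db = 10 * path_loss_exp * math.log10(max(distance_m, 1.0))
    return noise_dbm + snr_req_db + path_loss_db
```

A cross-layer router can then weigh this per-hop power against residual node energy when selecting routes, trading throughput against lifetime.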
Abstract:
Encryption plays an important role in protecting data, especially data transferred over the Internet. However, encryption is computationally expensive, which leads to high energy costs. Parallel encryption using more CPU/GPU cores can achieve high performance, and if energy efficiency is considered alongside performance, the cost problem can be alleviated effectively. Because many-core CPUs/GPUs and encryption are now pervasive, reducing the energy cost of parallel encryption has become an unavoidable problem. In this paper, we propose an energy-efficient parallel Advanced Encryption Standard (AES) algorithm for CPU-GPU heterogeneous platforms. Such platforms, including the Green500 computers, are popular in both high-performance and general computing. The algorithm parallelizes AES across both CPUs and GPUs, balancing the workload between them according to their computing capacities. It also uses the NVIDIA Management Library (NVML) to adjust GPU frequencies, overlaps data transfers with computation, and fully utilizes GPU computing resources to reduce energy consumption as much as possible. Experiments conducted on a platform with one K20M GPU and two Xeon E5-2640 v2 CPUs show that this approach reduces energy consumption by 74% compared to a CPU-only parallel AES algorithm and by 21% compared to a GPU-only parallel AES algorithm on the same platform. Its energy efficiency, 4.66 MB/Joule on average, is higher than that of both the CPU-only parallel AES algorithm (1.15 MB/Joule) and the GPU-only parallel AES algorithm (3.65 MB/Joule). As an energy-efficient parallel AES solution, it can be used to encrypt data on heterogeneous platforms to save energy, especially on computers with thousands of heterogeneous nodes. (C) 2020 Elsevier B.V. All rights reserved.
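The capacity-based workload balancing can be sketched as a proportional split of AES input blocks between the two processor types (the function and the rates are illustrative; the paper's scheduler additionally tunes GPU frequencies via NVML and overlaps transfers with computation):

```python
def split_workload(n_blocks, cpu_rate, gpu_rate):
    """Split AES blocks between CPU and GPU in proportion to their
    measured encryption throughput, so both sides finish at roughly
    the same time. Rates are in any common unit (e.g. MB/s).
    Returns (cpu_blocks, gpu_blocks)."""
    gpu_share = round(n_blocks * gpu_rate / (cpu_rate + gpu_rate))
    return n_blocks - gpu_share, gpu_share
```

Balancing on measured rates avoids the idle time that a naive 50/50 split would leave on the faster device, which is the main lever behind the reported energy savings over CPU-only and GPU-only baselines.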