Heuristic Algorithms for Optimal Distribution of Multi-Persona Components

Mobile Computation Offloading

Many approaches have proposed mobile computation offloading techniques to support mobile devices. These approaches have proved their ability to enhance application performance and minimize energy consumption on mobile devices. The mCloud framework [Zhou et al. (2016)] considers different cloud resources: mobile ad-hoc device clouds, cloudlets, and the public cloud. The work aims to find where tasks should be executed so that the overall energy consumption and execution time are the lowest among all cloud resources in the mobile cloud infrastructure, based on the current state of the device. MAUI [Cuervo et al. (2010)] is an offloading framework proposed by Cuervo et al. to reduce the energy consumption of mobile applications. The framework consists of a proxy server responsible for communicating the method state, a profiler that monitors the device, program, and network conditions, and a solver that decides whether to run a method locally or remotely.

MAUI uses its optimization framework to decide which methods to send for remote execution based on the information gathered by the profiler. The results show the ability of MAUI to minimize the energy consumption of a running application. CloneCloud [Chun et al. (2011)] is another offloading approach, presented to minimize energy consumption and speed up the execution of the running application. A profiler collects data about the threads running in the application and communicates the gathered data to an optimization solver. Based on cost metrics of execution time and energy, the solver decides on the best partitioning of these threads between local and remote execution. This approach does not require modification of the original application since it works at the binary level. The CloneCloud experiments showed promising results in terms of minimizing both the execution time and the energy consumption of an application. However, only one thread at a time can be encapsulated in a VM and migrated for remote execution, which diminishes the concurrency of executing the components of an application.

Relying on distributed shared memory (DSM) systems and virtual machine (VM) synchronization techniques, COMET [Gordon et al. (2012)] enables multithreaded offloading and overcomes the limitation of MAUI and CloneCloud, which can offload only one method/thread at a time. To manage memory consistency, a field-level granularity is used, reducing the frequency of required communication between the mobile device and the cloud. Kemp followed a different strategy and proposed Cuckoo [Kemp (2014)], which assumes computation-intensive code is implemented as an Android service. The framework includes sensors to decide, at runtime, whether or not to offload a particular service, since circumstances such as the network type and status and the invocation parameters of the service call change continuously on mobile devices, making offloading sometimes, but not always, beneficial. The Cuckoo framework has been able to reduce the energy consumption and increase the speed of computation-intensive applications. Chen et al. [Chen et al. (2012)] have proposed a framework that automatically offloads the heavy back-end services of a regular standalone Android application in order to reduce its energy loss and execution time.

Based on a decision model, the services are offloaded to an Android virtual machine in the cloud. An offloading decision-making algorithm that considers a user delay-tolerance threshold has been proposed by Xia et al. [Xia et al. (2014)]. The tool predicts the average execution time and energy of an application when running locally on the device, then compares them to the cloud-based execution cost in order to decide where the application should be executed. ThinkAir [Kosta et al. (2012)] has been introduced as a technique to improve both the computational performance and the power efficiency of mobile devices by bridging smartphones to the cloud. The proposed architecture consists of a cloud infrastructure, an application server that communicates with applications and executes remote methods, a set of profilers that monitor the device, program, and network conditions, and an execution controller that decides about offloading. ThinkAir applies method-level code offloading. It parallelizes method execution by invoking multiple virtual machines (VMs) in the cloud in a seamless and on-demand manner, achieving a greater reduction in execution time and energy consumption.
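A delay-tolerance decision model of the kind described by Xia et al. can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name, parameters, and numbers are assumptions.

```python
# Hypothetical sketch of a delay-tolerance offloading decision in the
# spirit of Xia et al.; all names and numbers are illustrative.

def should_offload(local_time, local_energy,
                   remote_time, remote_energy,
                   delay_tolerance):
    """Return True if cloud execution is preferable.

    Offloading is chosen only when it saves energy and the remote
    execution time stays within the user's delay-tolerance threshold.
    """
    if remote_time > delay_tolerance:
        return False          # too slow for the user, run locally
    return remote_energy < local_energy

# Example: remote execution saves energy and meets the 2.0 s tolerance.
print(should_offload(local_time=1.5, local_energy=8.0,
                     remote_time=1.8, remote_energy=3.0,
                     delay_tolerance=2.0))   # True
```

The comparison captures the core trade-off of such decision models: energy savings from offloading are only accepted when the remote execution time remains acceptable to the user.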

Shi et al. have presented the COSMOS system [Shi et al. (2014)] with the objective of managing cloud resources to reduce their monetary cost while maintaining good offloading performance. Through a master component, COSMOS periodically collects information about computation tasks and remote VM workloads. Based on the gathered information, COSMOS is able to control the number of active VMs over time. In particular, whenever the VMs are overloaded, the system turns on a new instance to handle the upcoming requests. It can also decide to shut down unnecessary instances to reduce the monetary cost when the remaining ones are enough to handle the mobile devices' requests. Chae et al. [Chae et al. (2014)] have proposed CMcloud, a new scheme that aims to maximize throughput or minimize the server cost at the cloud provider's end by running as many mobile applications as possible per server while still offering the user's expected acceleration of the mobile application execution. CMcloud seeks to find the least costly server that has enough remaining resources to finish the execution of the mobile application within a target deadline.

Predictive Management Techniques for Virtual Environments

Sharing a single physical end terminal between several virtual machines raises many problems, most critically the autonomic load balancing of resources.

In this context, different approaches have been proposed to predict physical machine loads. Predicting the future load enables proactive consolidation of VMs on overloaded and under-loaded physical machines [Farahnakian et al. (2015)]. In [Farahnakian et al. (2013a)] and [Farahnakian et al. (2013b)], the authors have proposed regression methods to predict the CPU utilization of a physical machine. These methods use the linear regression and the K-nearest neighbor (KNN) regression algorithms, respectively, to approximate a function based on the data collected during the lifetimes of the VMs. The formulated function is then used to predict an overloaded or under-loaded machine. A linear regression based approach has been implemented by Fahimeh Farahnakian [Farahnakian et al. (2013a)]: the CPU usage of the host machine is predicted using linear regression, and a live migration process is then used to handle under-utilized and over-utilized machines.
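The regression step above can be sketched minimally: fitting a least-squares line to recent utilization samples and extrapolating one interval ahead. This is a sketch of the general technique, not the authors' implementation; the sample data are invented.

```python
# Minimal sketch of predicting a host's CPU utilization with simple
# linear regression over recent samples (illustrative, not the
# authors' code).

def fit_line(samples):
    """Least-squares fit u = a*t + b over samples [(t, utilization)]."""
    n = len(samples)
    st = sum(t for t, _ in samples)
    su = sum(u for _, u in samples)
    stt = sum(t * t for t, _ in samples)
    stu = sum(t * u for t, u in samples)
    a = (n * stu - st * su) / (n * stt - st * st)
    b = (su - a * st) / n
    return a, b

def predict_cpu(samples, t_next):
    """Extrapolate the fitted line to a future time step."""
    a, b = fit_line(samples)
    return a * t_next + b

# CPU load rising ~5% per interval; predict the next interval.
history = [(0, 40.0), (1, 45.0), (2, 50.0), (3, 55.0)]
print(predict_cpu(history, 4))   # 60.0
```

The predicted value can then be compared against overload and underload thresholds to trigger consolidation decisions, as in the approaches cited above.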

Bala et al. [Bala & Chana (2016)] have proposed a proactive load balancing approach in which, based on prior knowledge of the resource utilization parameters and the gathered data, machine learning techniques are applied to predict future resource needs. Various approaches have been studied, such as KNN, Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Random Forests (RF); the approach with the maximum accuracy is then used as the prediction-based approach. Xiao et al. [Xiao et al. (2013)] have also used a load prediction algorithm to capture the rising trend of resource usage patterns and help identify hot-spot and cold-spot machines. After the resource needs are predicted, hot-spot and cold-spot machines are identified. When the resource utilization of a physical machine is above the hot threshold, the machine is marked as a hot spot, and some VMs running on it are migrated away to reduce its load [Xiao et al. (2013)]. Cold-spot machines, which are either idle or have an average utilization below a particular threshold, are also identified; some of those physical machines can then be turned off to save energy [Xiao et al. (2013); Beloglazov & Buyya (2010)].
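The hot-spot/cold-spot classification described above amounts to a threshold test over predicted utilizations. The following sketch illustrates it; the threshold values and host names are assumptions, not taken from the cited works.

```python
# Illustrative hot-spot / cold-spot classification (thresholds and
# host names are assumptions, not the cited authors' values).

HOT_THRESHOLD = 0.80    # utilization above this marks a hot spot
COLD_THRESHOLD = 0.20   # average utilization below this marks a cold spot

def classify_hosts(predicted_util):
    """Split {host: predicted utilization} into hot and cold spots.

    Hot spots are candidates for migrating VMs away; cold spots are
    candidates for consolidation and power-off.
    """
    hot = [h for h, u in predicted_util.items() if u > HOT_THRESHOLD]
    cold = [h for h, u in predicted_util.items() if u < COLD_THRESHOLD]
    return hot, cold

hot, cold = classify_hosts({"pm1": 0.92, "pm2": 0.55, "pm3": 0.10})
print(hot, cold)   # ['pm1'] ['pm3']
```

In the cited systems, the migration planner then acts on these two lists: offloading VMs away from hot spots and consolidating cold-spot VMs so the freed machines can be powered down.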


Table of Contents

INTRODUCTION
0.1 Motivations
0.2 Problem Statement
0.3 Main Goal
0.4 Methodology
0.5 Technical Contributions
0.6 Publications
0.7 Thesis Organization
LITERATURE REVIEW
1.1 Mobile Virtualization
1.2 Mobile Computation Offloading
1.3 Predictive Management Techniques for Virtual Environments
1.4 Dynamic Offloading Algorithms
1.5 Conclusion
CHAPTER 2 ARTICLE 1: SELECTIVE MOBILE CLOUD OFFLOADING TO AUGMENT MULTI-PERSONA PERFORMANCE AND VIABILITY
2.1 Abstract
2.2 Introduction
2.3 Background and Related Work
2.3.1 Mobile Virtualization
2.3.2 Offloading
2.3.3 Proposed Approach Positioning
2.4 Problem Illustration
2.5 Offloading Meets Multi-Persona
2.6 Multi-Objective Optimization for Multi-Persona
2.6.1 Problem Definition
2.6.2 Problem Formulation
2.7 Heuristic Algorithms for Optimal Distribution of Multi-Persona Components
2.7.1 Representation of Individuals
2.7.2 Fitness Evaluation
2.7.3 Operators
2.7.4 Algorithm and Time Complexity Analysis
2.8 Implementation and Experiments
2.8.1 Implementation
2.8.2 Experiments
2.8.2.1 Testbed Setup
2.8.2.2 Assumptions
2.8.2.3 Results and Analysis
2.9 Conclusion and Future Directions
CHAPTER 3 ARTICLE 2: SMART MOBILE COMPUTATION OFFLOADING: CENTRALIZED SELECTIVE AND MULTI-OBJECTIVE APPROACH
3.1 Abstract
3.2 Introduction
3.3 Computations Offloading Overview
3.4 Related Work
3.5 Technical Problems
3.5.1 Accuracy and Overhead of Decision Model Evaluation
3.5.2 Decision Model Metrics
3.6 Centralized Selective and Multi-Objective Offloading: Insights
3.7 Selective Mechanism
3.7.1 Hotspots Profiling
3.7.2 Hotspots Detection
3.7.3 Selection Algorithm
3.8 Centralized Selective Offloading Decision Model
3.8.1 Definition
3.8.2 Model Formulation
3.9 Intelligent Decision Making Process
3.9.1 Solution Encoding
3.9.2 Fitness Evaluation
3.9.3 Evolution Process
3.9.3.1 Selection
3.9.3.2 Crossover
3.9.3.3 Mutation
3.10 Numerical Analysis
3.10.1 Testbed Setup
3.10.2 Results
3.10.2.1 Decision Model Efficiency
3.10.2.2 Selective Mechanism and Intelligent Decision Maker Efficiency
3.11 Conclusion and Future Directions
CHAPTER 4 ARTICLE 3: COST-EFFECTIVE CLOUD-BASED SOLUTION FOR MULTI-PERSONA MOBILE COMPUTING IN WORKPLACE
4.1 Abstract
4.2 Introduction
4.3 Computation offloading to support mobile devices: Background
4.4 Related Work
4.4.1 Mobile Centric Offloading
4.4.2 Cloud Centric Offloading
4.4.3 Analysis
4.5 Illustrative Business Model and Problem Description
4.6 Cost-Effective Offloading: System Model
4.7 Collective Multi-Persona Offloading Optimization Problem (CMPO)
4.7.1 Definition
4.7.2 Formulation
4.8 Smart Cost-Effective Decision Maker
4.8.1 Solution Encoding
4.8.2 Fitness Evaluation
4.8.3 Evolution Process
4.8.3.1 Selection
4.8.3.2 Crossover
4.8.3.3 Mutation
4.9 Numerical Analysis
4.9.1 Setup
4.9.2 Generated distribution cost
4.9.3 Decision maker overhead
4.9.4 Satisfaction rate
4.9.5 Optimized decision maker speedup
4.9.6 Summary
4.10 Conclusion and Future Directions
CHAPTER 5 ARTICLE 4: PROACTIVE SOLUTION AND ADVANCED MANAGEABILITY OF MULTI-PERSONA MOBILE COMPUTING
5.1 Abstract
5.2 Introduction
5.3 Related Works
5.3.1 Predictive Virtual Instances Management Strategies
5.3.2 Dynamic Offloading Algorithms
5.3.3 Our Contributions
5.4 System Model
5.5 Machine Learning Prediction
5.5.1 Linear Regression
5.5.2 Support Vector Regression
5.5.3 Neural Network
5.5.4 Deep Neural Network
5.6 Problem Formulation
5.7 Proposed Dynamic Programming Algorithm
5.7.1 DP Table Filling
5.8 Evaluation
5.8.1 Setup
5.8.2 Numerical Analysis
5.9 Conclusion and Future Directions
CONCLUSION AND RECOMMENDATIONS
BIBLIOGRAPHY
