
What algorithms are there for scheduling tasks?

The scenario is something like this:

So I want some way of deciding which tasks to complete at a given time, given the existing tasks and their priorities, and the limits on completion within a certain category.

One greedy algorithm would be to organize each category by priority and then, at each decision point, take in as many tasks as that category's rate allows. The problem with this is that if a lot of high-priority tasks come in, low-priority tasks will just sit in the backlog, potentially forever.

I've looked at some scheduling concepts like round-robin scheduling and multilevel priority queues, but they don't seem to fit what I want: tasks can't be paused/rescheduled as in round robin, and multilevel priority queues seem like they'd still have the problem of getting to low-priority tasks (I could make the priorities more stratified, but then it's back to the original problem of deciding how many tasks to take from each queue).

Any advice on what I should look into? Maybe it's a sort of "repeated knapsack" thing?

colblitz's user avatar

3 Answers

What you can do is add another parameter to your tasks, something I would call a dynamic priority delta. Or, equivalently, you can implement some mechanism to increase or decrease the priority of each task on top of multilevel priority queues.

The basic idea is simple. "A process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation," reads section 5.3.6 of Operating System Concepts, seventh edition. For an extremely simple example, you can just increase a task's priority level by one once it has waited for an hour. Here is a longer excerpt from that book.

In general, a multilevel feedback-queue scheduler is defined by the following parameters:

- The number of queues
- The scheduling algorithm for each queue
- The method used to determine when to upgrade a process to a higher-priority queue
- The method used to determine when to demote a process to a lower-priority queue
- The method used to determine which queue a process will enter when that process needs service

The definition of a multilevel feedback-queue scheduler makes it the most general CPU-scheduling algorithm. It can be configured to match a specific system under design. Unfortunately, it is also the most complex algorithm, since defining the best scheduler requires some means by which to select values for all the parameters.
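To make the aging idea concrete, here is a minimal sketch (the queue representation and the one-hour threshold are assumptions, matching the example above):

    import time

    AGE_AFTER = 3600  # promote any task that has waited more than an hour

    def age_tasks(queues):
        """queues: list of lists, index 0 = highest priority.
        Each task is a dict carrying an 'enqueued_at' timestamp."""
        now = time.time()
        for level in range(1, len(queues)):         # level 0 cannot be promoted
            for task in list(queues[level]):
                if now - task["enqueued_at"] > AGE_AFTER:
                    queues[level].remove(task)
                    queues[level - 1].append(task)  # aging prevents starvation
                    task["enqueued_at"] = now       # restart the aging clock

    # queues[0] is the highest-priority queue
    queues = [[], [{"name": "report", "enqueued_at": time.time() - 7200}]]
    age_tasks(queues)
    print([t["name"] for t in queues[0]])   # ['report'] -- promoted after 2 h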

Your situation might be even more complex, since your tasks are also labelled with a category. Without more information, I cannot advise further on your scheduling algorithm.

I encourage you to read the chapter "CPU Scheduling" of Operating System Concepts by Avi Silberschatz, Peter Baer Galvin and Greg Gagne. For even more content, read the chapters "Uniprocessor Scheduling" and "Multiprocessor and Real-Time Scheduling" of Operating System Internals and Design Principles by William Stallings. Although they talk about CPU or processor scheduling, most of the concepts and algorithms are the same as in, or can be applied to, job/task scheduling. You will find lots of inspiration, ideas, and references.


One solution that I've read about, but haven't tried implementing myself yet, is to slowly increase the priority of tasks as time goes on. For instance, each time you pop off a request you could bump all lower-priority tasks (or do so after every X requests have been popped off, etc.). Eventually all tasks will get to execute.


Most every operating system under the sun has its own take on this. Check an operating systems text; they describe some simple variants.

For Linux, the standard is the Completely Fair Scheduler (CFS), but there are others to handle real-time tasks.



Task Scheduling in Embedded System

Tasks, threads and processes.

We have already considered the multi-tasking concept – multiple quasi-independent programs apparently running at the same time, under the control of an operating system. Before we look at tasks in more detail, we need to straighten out some more terminology.

We use the word “task” – and I will continue to do so – but it does not have a very precise meaning. Two other terms – “thread” and “process” – are more specific and we should investigate what they mean and how they are differentiated.

Most RTOSes used in embedded applications employ a multi-thread model. A number of threads may be running and they all share the same address space:

Multi-thread RTOS diagram.

This means that a context swap is primarily a change from one set of CPU register values to another. This is quite simple and fast. A potential hazard is the ability of each thread to access memory belonging to the others or to the RTOS itself.

The alternative is the multi-process model. If a number of processes are running, each one has its own address space and cannot access the memory associated with other processes or the RTOS:

Multi-process RTOS diagram.

This makes the context swap more complex and time consuming, as the OS needs to set up the memory management unit (MMU) appropriately. Of course, this architecture is only possible with a processor that supports an MMU. Processes are supported by “high end” RTOSes and most desktop operating systems. To further complicate matters, there may be support for multiple threads within each process. This latter capability is rarely exploited in conventional embedded applications.

A useful compromise may be reached, if an MMU is available, thus:

Thread-protected mode RTOS diagram.

Many thread-based RTOSes support the use of an MMU to simply protect memory from unauthorized access. So, while a task is in context, only its code/data and necessary parts of the RTOS are “visible”; all the other memory is disabled and an attempted access would cause an exception. This makes the context switch just a little more complex, but renders the application more secure. This may be called “Thread Protected Mode” or “Lightweight Process Model”.

Schedulers in Embedded Systems/RTOS

As we know, the illusion that all the tasks are running concurrently is achieved by allowing each to have a share of the processor time. This is the core functionality of a kernel. The way that time is allocated between tasks is termed “scheduling”. The scheduler is the software that determines which task should be run next. The logic of the scheduler and the mechanism that determines when it should be run is the scheduling algorithm. We will look at a number of scheduling algorithms in this section. Task scheduling is actually a vast subject, with many whole books devoted to it. The intention here is to just give sufficient introduction that you can understand what a given RTOS has to offer in this respect.

Run to Completion (RTC) Scheduler

RTC scheduling is very simplistic and uses minimal resources. It is, therefore, an ideal choice, if the application’s needs are fulfilled. Here is the timeline for a system using RTC scheduling:

Timeline for a system using RTC scheduling.

The scheduler simply calls the top level function of each task in turn. That task has control of the CPU (interrupts aside) until the top level function executes a return statement. If the RTOS supports task suspension, then any tasks that are currently suspended are not run. This is a topic discussed below; see Task Suspend.

The big advantages of an RTC scheduler, aside from its simplicity, are the need for just a single stack and the portability of the code (as no assembly language is generally required). The downside is that a task can “hog” the CPU, so careful program design is required. Although each task is started “from the top” each time it is scheduled – unlike other kinds of schedulers which allow the code to continue from where it left off – greater flexibility may be programmed by use of static “state” variables, which determine the logic of each sequential call.
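To make the idea concrete, a minimal sketch of an RTC loop follows (in Python for readability, though a real RTOS would be written in C; all names are illustrative). It shows the single scheduler loop and the static state-variable pattern just described:

    def make_blinker():
        state = {"on": False}              # static "state" survives between calls
        def blink():
            state["on"] = not state["on"]
            # ... drive the LED from state["on"] here ...
        return blink

    tasks = [make_blinker()]               # top-level function of each task
    suspended = set()

    for _ in range(3):                     # forever on a real system: while True
        for task in tasks:
            if task not in suspended:
                task()                     # runs to completion, then returns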

Round Robin (RR) Scheduler

An RR scheduler is similar to RTC, but more flexible and, hence, more complex. In the same way, each task is run in turn (allowing for task suspension), thus:

Round Robin (RR) scheduler diagram.

However, with the RR scheduler, the task does not need to execute a return in the top level function. It can relinquish the CPU at any time by making a call to the RTOS. This call results in the kernel saving the context (all the registers – including stack pointer and program counter) and loading the context of the next task to be run. With some RTOSes, the processor may be relinquished – and the task suspended – pending the availability of a kernel resource. This is more sophisticated, but the principle is the same.

The greater flexibility of the RR scheduler comes from the ability for the tasks to continue from where they left off without any accommodation in the application code. The price for this flexibility is more complex, less portable code and the need for a separate stack for each task.
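Python generators give a compact model of this "continue from where it left off" behaviour; this is a sketch of the concept, not real RTOS code:

    def task(name):
        count = 0
        while True:
            count += 1                 # ... do a slice of work ...
            print(f"{name} step {count}")
            yield                      # relinquish the CPU: context is saved

    tasks = [task("A"), task("B")]
    for _ in range(3):                 # the RR scheduler resumes each in turn
        for t in tasks:
            next(t)                    # continues right after the yield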


Time Slice (TS) Scheduler

A TS scheduler is the next step in complexity from RR. Time is divided into “slots”, with each task being allowed to execute for the duration of its slot, thus:

Time Slice (TS) Scheduler.

In addition to being able to relinquish the CPU voluntarily, a task is preempted by a scheduler call made from a clock tick interrupt service routine. The idea of simply allocating each task a fixed time slice is very appealing – for applications where it fits the requirements – as it is easy to understand and very predictable.

The only downside of simple TS scheduling is that the proportion of CPU time allocated to each task varies, depending upon whether other tasks are suspended or relinquish part of their slots, thus:

Time Slice (TS) Scheduler.

A more predictable TS scheduler can be constructed if the concept of a “background” task is introduced. The idea, shown here, is for the background task to be run instead of any suspended tasks and to be allocated the remaining slot time when a task relinquishes (or suspends itself).

Time Slice (TS) Scheduler.

Obviously the background task should not do any time-critical work, as the amount of CPU time it is allocated is totally unpredictable – it may never be scheduled at all.

This design means that each task can predict when it will be scheduled again. For example, if you have 10ms slots and 10 tasks, a task knows that, if it relinquishes, it will continue executing after 100ms. This can lead to elegant timing loops in application tasks.
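As an illustration of slot allocation with a background task, here is a minimal sketch (the names and the 10ms slot size are assumptions, echoing the example above):

    SLOT_MS = 10   # fixed slot size, as in the 10ms example above

    def on_tick(tasks, background, now_ms):
        """Pick the owner of the current slot; a suspended owner's slot
        time goes to the background task instead."""
        slot = (now_ms // SLOT_MS) % len(tasks)
        owner = tasks[slot]
        return owner if owner.get("ready", True) else background

    tasks = [{"name": "sensor"}, {"name": "comms", "ready": False}]
    background = {"name": "background"}
    for t in (0, 10, 20, 30):
        print(on_tick(tasks, background, t)["name"])
    # sensor, background, sensor, background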

An RTOS may offer the possibility for different time slots for each task. This offers greater flexibility, but is just as predictable as with fixed slot size. Another possibility is to allocate more than one slot to the same task, if you want to increase its proportion of allocated processor time.

Priority Scheduler

Most RTOSes support Priority scheduling. The idea is simple: each task is allocated a priority and, at any particular time, whichever task has the highest priority and is “ready” is allocated the CPU, thus:

Priority Scheduler.

The scheduler is run when any "event" occurs (e.g. an interrupt or certain kernel service calls) that may cause a higher priority task to be made "ready". There are broadly three circumstances that might result in the scheduler being run:

The number of priority levels varies (from 8 to many hundreds), and the significance of higher and lower values differs: some RTOSes use priority 0 as the highest, others as the lowest.

Some RTOSes only allow a single task at each priority level; others permit multiple tasks at each level, which complicates the associated data structures considerably. Many OSes allow task priorities to be changed at runtime, which adds further complexity.
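The selection step itself is tiny; a minimal sketch (task names and the priority-0-is-highest convention are assumptions):

    READY, SUSPENDED = "ready", "suspended"

    def pick_next(tasks):
        """Of all ready tasks, return the name of the highest-priority one.
        Priority 0 is treated as highest, one of the two conventions above."""
        ready = [(prio, name) for name, (prio, state) in tasks.items()
                 if state == READY]
        return min(ready)[1] if ready else None    # idle if nothing is ready

    tasks = {"motor": (1, READY), "ui": (3, READY), "watchdog": (0, SUSPENDED)}
    print(pick_next(tasks))    # -> 'motor'; 'watchdog' would win once resumed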

Composite Scheduler

We have looked at RTC, RR, TS and Priority schedulers, but many commercial RTOS products offer more sophisticated schedulers, which have characteristics of more than one of these algorithms. For example, an RTOS may support multiple tasks at each priority level and then use time slicing to divide time between multiple ready tasks at the highest level.

Task States

At any one moment in time, just one task is actually running. Aside from CPU time spent running interrupt service routines (more on that in the next article) or the scheduler, the “current” task is the one whose code is currently being executed and whose data is characterized by the current register values. There may be other tasks that are “ready” (to run) and these will be considered when the scheduler is executed. In a simple RTOS, using a Run to Completion, Round Robin or Time Slice scheduler, this may be the whole story. But, more commonly, and always with a Priority scheduler, tasks may also be in a “suspended” state, which means that they are not considered by the scheduler until they are resumed and made “ready”.

Task Suspend

Task suspension may be quite simple – a task suspends itself (by making an API call) or another task suspends it. Another API call needs to be made by another task or ISR to resume the suspended task. This is an “unconditional” or “pure” suspend. Some OSes refer to a task as being “asleep”.

An RTOS may offer the facility for a task to suspend itself (go to sleep) for a specific period of time, at the end of which it is resumed (by the system clock ISR, see below). This may be termed “sleep suspend”.

Another more complex suspend may be offered, if an RTOS supports “blocking” API calls. Such a call permits the task to request a service or resource, which it will receive immediately if it is available, otherwise it is suspended until it is available. There may also be a timeout option whereby a task is resumed if the resource is not available in a specific timeframe.

Other Task States

Many RTOSes support other task states, but the definition of these and the terminology used varies. Possibilities include a “finished” state, which simply means that the task’s outermost function has exited (either by executing a return or just ending the outer function block). For a finished task to run again, it would probably need to be reset in some way.

Another possibility is a “terminated” state. This is like a pure suspend, except that the task must be reset to its initial state in order to run again.

If an RTOS supports dynamic creation and deletion of tasks (see the next article), this implies another possible task state: “deleted”.

In the next article we will take a further look at tasks, the context switch mechanism and interrupts. Earlier articles in this series include: Introducing: RTOS Revealed and Program structure and real time.

Colin Walls has over thirty years experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Embedded [the Mentor Graphics Embedded Software Division], and is based in the UK. His regular blog is located at: http://blogs.mentor.com/colinwalls. He may be reached by email at [email protected]



Optimized algorithm to schedule tasks with dependency?

There are tasks that read from a file, do some processing and write to a file. These tasks are to be scheduled based on their dependencies. Also, tasks can be run in parallel, so the algorithm needs to be optimized to run dependent tasks serially and as much as possible in parallel.

So one way to run this would be to run 1, 2 & 4 in parallel, followed by 3.

Another way could be to run 1 and then run 2, 3 & 4 in parallel.

Another could be to run 1 and 3 in serial, and 2 and 4 in parallel.


5 Answers

Let each task (e.g. A, B, ...) be a node in a directed acyclic graph and define the arcs between the nodes based on your dependencies 1, 2, ....

http://en.wikipedia.org/wiki/Topological_sorting

You can then topologically order your graph (or use a search-based method like BFS). In your example, C<-A->B->D and E->F, so A & E have a depth of 0 and need to be run first. Then you can run F, B and C in parallel, followed by D.

Also, take a look at PERT .

How do you know whether B has a higher priority than F?

This is the intuition behind the topological sort used to find the ordering.

It first finds the root nodes (no incoming edges), since one must exist in a DAG. In your case, those are A & E. This settles the first round of jobs which need to be completed. Next, the children of the root nodes (B, C and F) need to be finished. This is easily obtained by querying your graph. The process is then repeated till there are no nodes (jobs) left to be found (finished).


Given a mapping between items, and items they depend on, a topological sort orders items so that no item precedes an item it depends upon.

This Rosetta code task has a solution in Python which can tell you which items are available to be processed in parallel.

Given your input the code becomes:
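The original code block was not preserved in this copy; a minimal sketch in the spirit of the Rosetta Code solution, assuming the A–F dependencies from the earlier answer (the asker's actual input is not shown), is:

    def toposort_levels(deps):
        """Yield sets of items whose dependencies are all satisfied;
        items within one set can run in parallel."""
        deps = {k: set(v) for k, v in deps.items()}
        # Include items that appear only as dependencies of others.
        extra = set.union(*deps.values()) - set(deps)
        deps.update({item: set() for item in extra})
        while deps:
            ready = {item for item, d in deps.items() if not d}
            if not ready:
                raise ValueError("cyclic dependency detected")
            yield ready
            deps = {item: d - ready
                    for item, d in deps.items() if item not in ready}

    # Assumed input: B and C depend on A, D depends on B, F depends on E.
    dependencies = {"B": {"A"}, "C": {"A"}, "D": {"B"}, "F": {"E"}}
    for level in toposort_levels(dependencies):
        print(" ".join(sorted(level)))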

Which then generates this output:
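For the assumed A–F input, that is:

    A E
    B C F
    D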

Items on one line of the output could be processed in any sub-order or, indeed, in parallel; just so long as all items of an earlier line are processed before the items of later lines, to preserve the dependencies.


Your tasks form an oriented graph with (hopefully) no cycles.

It contains sources and wells (sources being tasks that depend on nothing (no inbound edges), wells being tasks that unlock no task (no outbound edges)).

A simple solution would be to give priority to your tasks based on their usefulness (let's call that U).

Typically, starting with the wells, they have a usefulness U = 1, because we want them to finish.

Put all the wells' predecessors in a list L of nodes currently being assessed.

Then, taking each node in L, its U value is the sum of the U values of the nodes that depend on it, plus 1. Put all parents of the current node in the L list.

Loop until all nodes have been treated.

Then, start the task that can be started and has the biggest U value, because it is the one that will unlock the largest number of tasks.

In your example,

Meaning you'll start A first (with E if possible), then B and C (if possible), then D and F.
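The example values were not preserved above, but a minimal sketch of the U computation on the same A–F graph recovers them (helper names are made up):

    def usefulness(successors):
        """U(node) = 1 + sum of U over the tasks it directly unlocks,
        i.e. wells get U = 1 and values grow toward the sources."""
        U = {}
        def u(node):
            if node not in U:
                U[node] = 1 + sum(u(s) for s in successors.get(node, ()))
            return U[node]
        for node in successors:
            u(node)
        return U

    # A unlocks B and C, B unlocks D, E unlocks F (same example as above).
    graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": [], "E": ["F"], "F": []}
    print(usefulness(graph))   # {'D': 1, 'B': 2, 'C': 1, 'A': 4, 'F': 1, 'E': 2}

A has the largest U value (4), which matches the suggested start order.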


First generate a topological ordering of your tasks, checking for cycles at this stage. Thereafter you can exploit parallelism by looking at maximal antichains; roughly speaking, these are task sets without dependencies between their elements.

For a theoretical perspective, this paper covers the topic.


Without considering the serial/parallel aspect of the problem, this code can at least determine the overall serial solution:

If you update the loop that checks for fully satisfied dependencies so that it loops through the entire list and executes/removes all tasks that no longer have any dependencies at the same time, that should also allow you to complete tasks in parallel.




Types of Task Scheduling Algorithms in Cloud Computing Environment

Submitted: January 12th, 2019 Reviewed: May 15th, 2019 Published: April 23rd, 2020

DOI: 10.5772/intechopen.86873

From the edited volume Scheduling Problems, edited by Rodrigo da Rosa Righi.

Cloud computing is one of the most important technologies used in recent times; it allows users (individuals and organizations) to access computing resources (software, hardware, and platform) as services remotely through the Internet. Cloud computing is distinguished from traditional computing paradigms by its scalability, adjustable costs, accessibility, reliability, and on-demand pay-as-you-go services. As cloud computing serves millions of users simultaneously, it must be able to meet all users' requests with high performance and a guarantee of quality of service (QoS). Therefore, we need an appropriate task scheduling algorithm to meet these requests fairly and efficiently. The task scheduling problem is one of the most critical issues in the cloud computing environment because cloud performance depends mainly on it. There are various types of scheduling algorithms: static scheduling algorithms, which are considered suitable for small or medium scale cloud computing environments, and dynamic scheduling algorithms, which are considered suitable for large scale environments. In this research, we examine the performance of the three most popular static task scheduling algorithms: first come first serve (FCFS), shortest job first (SJF), and MAX-MIN. The CloudSim simulator has been used to measure their impact on algorithm complexity, resource availability, total execution time (TET), total waiting time (TWT), and total finish time (TFT).

Author Information

Tahani Aladwani*

*Address all correspondence to: [email protected]

1. Introduction

Cloud computing is a new technology derived from grid computing and distributed computing; it refers to providing computing resources (hardware, software, and platforms) as services to beneficiaries on demand through the Internet [1]. It is the first technology to apply the concept of the commercial implementation of computer science to public users [2]. It relies on sharing resources among users through the use of the virtualization technique. Cloud computing can provide high performance by distributing workloads across all resources fairly and effectively, to obtain lower waiting and execution times, maximum throughput, and effective exploitation of resources. Still, many challenges remain in cloud computing; task scheduling and load balancing are the biggest, because they are the main factors that control other performance criteria such as availability, scalability, and power consumption.

2. Tasks scheduling algorithms overview

Task scheduling algorithms are defined as the mechanisms used to select the resources on which tasks execute, so as to reduce waiting and execution times.

2.1 Scheduling levels

First level: the host level, where a set of policies distributes VMs among hosts.

Second level: the VM level, where a set of policies distributes tasks among VMs.

In this research we focus on scheduling tasks at the VM level. We selected task scheduling algorithms as a research field because scheduling is the biggest challenge in cloud computing and the main factor controlling performance criteria such as execution time, response time, waiting time, network bandwidth, and service cost for all tasks, as well as other factors that can affect performance, such as power consumption, availability, scalability, storage capacity, buffer capacity, disk capacity, and number of users.

2.2 Tasks scheduling algorithms definition and advantages

Task scheduling algorithms are defined as a set of rules and policies used to assign tasks to suitable resources (CPU, memory, and bandwidth) to get the highest possible level of performance and resource utilization.

2.2.1 Task scheduling algorithms advantages

Manage cloud computing performance and QoS.

Manage the memory and CPU.

Good scheduling algorithms maximize resource utilization while minimizing total task execution time.

Improving fairness for all tasks.

Increasing the number of successfully completed tasks.

Scheduling tasks on a real-time system.

Achieving a high system throughput.

Improving load balance.

2.3 Tasks scheduling algorithms classifications

Task scheduling algorithms are classified as in Figure 1.

Figure 1. Tasks scheduling classes.

2.3.1 Tasks scheduling algorithms can be classified as follows

Immediate scheduling: when new tasks arrive, they are scheduled to VMs directly.

Batch scheduling: tasks are grouped into a batch before being sent; this type is also called mapping events.

Static scheduling: is considered very simple compared to dynamic scheduling; it is based on prior information about the global state of the system. It does not take into account the current state of VMs, and divides all traffic equally among all VMs, as in round robin (RR) and random scheduling algorithms.

Dynamic scheduling: takes into account the current state of VMs, does not require prior information about the global state of the system, and distributes the tasks according to the capacity of all available VMs [4, 5, 6].

Preemptive scheduling: each task can be interrupted during execution and moved to another resource to complete its execution [6].

Non-preemptive scheduling: VMs are not re-allocated to new tasks until finishing execution of the scheduled task [ 6 ].

In this research, we focus on static scheduling algorithms, namely first come first serve (FCFS), shortest job first (SJF), and MAX-MIN, comparing their complexity and cost within a small or medium scale.

2.4 Task scheduling system in cloud computing

The first level (task level) is a set of tasks (cloudlets) sent by cloud users for execution.

The second level (scheduling level) is responsible for mapping tasks to suitable resources to get the highest resource utilization with minimum makespan; the makespan is the overall completion time for all tasks from beginning to end [7].

The third level (VM level) is the set of VMs used to execute the tasks, as in Figure 2.

Figure 2. Task scheduling system.

2.5 The scheduling level passes through two steps

The first step is discovering and filtering all the VMs present in the system and collecting status information related to them by using a datacenter broker [8].

In the second step a suitable VM is selected based on task properties [ 8 ].

3. Static tasks scheduling algorithms in cloud computing environment

3.1 FCFS

In FCFS, tasks are ordered in the task list by their arrival time and then assigned to VMs in that order [3].

3.1.1 Advantages

Most popular and simplest scheduling algorithm.

Fairer than other simple scheduling algorithms.

Depends on the FIFO rule for scheduling tasks.

Less complexity than other scheduling algorithms.

3.1.2 Disadvantages

Tasks can have high waiting times.

Gives no priority to tasks: when large tasks sit at the beginning of the task list, all other tasks must wait a long time for them to finish.

Resources are not consumed in an optimal manner.

In order to measure the performance achieved by this method, we test it and measure its impact on fairness, execution time (ET), TWT, and TFT.

3.1.3 Assumptions

Number of tasks should be more than the number of VMs, which means that each VM must execute more than one task.

Each task is assigned to only one VM resource.

Task lengths vary among small, medium, and large.

Tasks are not interrupted once their execution starts.

VMs are independent in terms of resources and control.

The available VMs are for exclusive usage and cannot be shared among different tasks; a VM cannot take on other tasks until its current tasks complete [3].

Task lengths: assume we have 15 tasks with lengths as in Table 1.

Table 1. Set of tasks with different lengths; order depends on the arrival time of each task.

3.1.4 VM properties

Assume we have six VMs with different properties based on task size:

We selected a set of VMs with different properties so that each category has VMs with the appropriate capacity to serve a specific class of tasks, improving load balance. Using VMs with the same properties for all categories leads to load imbalance, since each class differs from the others in task lengths.

3.1.5 When applying FCFS, the work mechanism will be as follows

Figure 3 shows the FCFS task scheduling algorithm's working mechanism and how tasks are executed based on their arrival time.

Figure 3. FCFS work mechanism.

Dotted arrows refer to the first set of tasks, scheduled based on their arrival time.

Dashed arrows refer to the second set of tasks, scheduled based on their arrival time.

Solid arrows refer to the third set of tasks, scheduled based on their arrival time.

Here it is clear that t1 is very large compared with t7 and t12; nevertheless, t7 and t12 must wait for t1, which increases TWT, ET, and TFT and decreases fairness.

Table 2 shows how the FCFS scheduling algorithm increases waiting time for all tasks.

Table 2. Waiting times of tasks in FCFS.

3.2 SJF

Tasks are sorted by priority, where priority is assigned based on task length, starting from (smallest task ≡ highest priority).

3.2.1 Advantages

Waiting time is lower than with FCFS.

SJF has the minimum average waiting time among all task scheduling algorithms.

3.2.2 Disadvantages

Unfair to some tasks when tasks are assigned to VMs: long tasks tend to be left waiting in the task list while small tasks are assigned to VMs.

Long tasks take a long execution time and TFT.

3.2.3 SJF work mechanism

When applying SJF, the work mechanism will be as follows:

Assume we have the 15 tasks of Table 1 above. We sort the tasks in the task list from smallest to largest based on their lengths, as in Table 3, and then assign them to the VM list sequentially.

Table 3. A set of tasks sorted based on the SJF scheduling algorithm.

3.2.4 Executing the tasks

Table 4 shows that the large tasks must wait in the task list until the smaller tasks finish execution.

Table 4. Waiting times of tasks in SJF.
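The waiting-time contrast between FCFS and SJF is easy to reproduce; a minimal sketch with hypothetical task lengths on a single unit-speed VM (not the chapter's 15-task CloudSim setup):

    def waiting_times(tasks, order_key=None):
        """Per-task waiting time on one unit-speed VM.
        tasks: dict of task name -> length; dict order = arrival order."""
        items = sorted(tasks.items(), key=order_key) if order_key else list(tasks.items())
        waits, elapsed = {}, 0
        for name, length in items:
            waits[name] = elapsed          # time spent queued before starting
            elapsed += length              # unit speed: run time == length
        return waits

    tasks = {"t1": 100, "t2": 10, "t3": 5}                    # t1 is large
    print(waiting_times(tasks))                               # FCFS order
    print(waiting_times(tasks, order_key=lambda kv: kv[1]))   # SJF order
    # FCFS: {'t1': 0, 't2': 100, 't3': 110}; SJF: {'t3': 0, 't2': 5, 't1': 15}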

3.3 MAX-MIN

In MAX-MIN, tasks are sorted based on their expected completion time; long tasks that take more completion time have the highest priority. Each is then assigned to the VM with the minimum overall execution time in the VM list.
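The chapter gives no pseudocode for MAX-MIN; a minimal sketch of one common reading of the rule, with hypothetical task lengths and VM speeds, is:

    def max_min(lengths, speeds):
        """Repeatedly pick the task whose best (minimum) completion time is
        largest and assign it to the VM achieving that completion time."""
        ready = {vm: 0.0 for vm in speeds}      # current finish time per VM
        plan, pending = {}, dict(lengths)
        while pending:
            best = None
            for task, length in pending.items():
                vm = min(speeds, key=lambda v: ready[v] + length / speeds[v])
                ct = ready[vm] + length / speeds[vm]
                if best is None or ct > best[2]:
                    best = (task, vm, ct)
            task, vm, ct = best
            plan[task], ready[vm] = vm, ct
            del pending[task]
        return plan

    # Long t1 grabs the fast VM first; short tasks queue on the slower one.
    print(max_min({"t1": 400, "t2": 100, "t3": 50}, {"vm1": 2.0, "vm2": 1.0}))
    # -> {'t1': 'vm1', 't2': 'vm2', 't3': 'vm2'}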

3.3.1 Advantages

Exploits the available resources in an efficient manner.

This algorithm performs better than the FCFS, SJF, and MIN-MIN algorithms.

3.3.2 Disadvantages

Increases waiting time for small and medium tasks: if we have six long tasks, the MAX-MIN scheduling algorithm gives them priority on six VMs in the VM list, and short tasks must wait until the large tasks finish.

When applying MAX-MIN, the work mechanism will be as follows.

Assume we have the 15 tasks of Table 1 above. We sort the tasks in the task list from largest to smallest based on highest completion time, as in Table 5, and then assign them to the VMs with the minimum overall execution time in the VM list.

Table 5. A set of tasks sorted based on the MAX-MIN scheduling algorithm.

3.3.3 Executing the tasks

Tables 6 and 7 show that the small and medium tasks must wait in the task list until the large tasks finish execution.

Tables 6 and 7. Waiting times of tasks under the MAX-MIN scheduling algorithm.

Comparison between the FCFS, SJF, and MAX-MIN task scheduling algorithms in terms of TWT and TFT.

Figure 4 shows the TWT and TFT for the three task scheduling algorithms FCFS, SJF, and MAX-MIN. The SJF task scheduling algorithm is the best in terms of TWT and TFT.

Figure 4. TWT and TFT for FCFS, SJF, and MAX-MIN.

4. Conclusion

This chapter introduces the meaning of task scheduling algorithms and the types of static and dynamic scheduling algorithms in the cloud computing environment. It also presents a comparative study of static task scheduling algorithms in a cloud computing environment, namely FCFS, SJF, and MAX-MIN, in terms of TWT, TFT, and fairness between tasks, and discusses when each is suitable to use.

Experimentation was executed on CloudSim, which is used for modeling the different task scheduling algorithms.

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.




Engineering Optimization


A task scheduling algorithm for cloud computing with resource reservation

Abstract

Resource-reservation-based services have been provided where cloud resources are reserved for tasks within specific time intervals so as fully to exploit the benefits of cloud computing. Such a practice induces an additional decision for task allocation timing and a time index to resource status. Combined with the increasing scale of cloud computing, this increases the complexity of task scheduling in cloud computing. On the other hand, the quality and responsiveness of a task scheduling process should be pursued to meet the desired service features of the market. A task scheduling problem with time-dependent resource availability is formulated, and an heuristic algorithm that derives a high-quality solution to the problem within a very short time is designed to address these challenges. The performance of the algorithm in terms of its optimality and computation efficiency is demonstrated by comparison with the optimal bound found by a commercial optimization solver.

1. Introduction

Cloud computing is a technique that provides on-demand services where a user can access shared IT resources such as servers, data storage, applications, networks and so on through the internet (Selvarani and Sadhasivam 2010). Because of its feature that allows sharing distributed resources and even services with many users, cloud computing is becoming a key and dominant technology in many industrial sectors.

However, cloud computing and the services based on cloud computing involve several issues. These include security for centralized resources and data (Khalil, Khreishah, and Azeem 2014; Li et al. 2020), the capability for real-time services (Xiao et al. 2019) and handling the bandwidth growth driven mainly by the increasing number of Internet of Things (IoT) applications (Tian et al. 2020).

From the operational capability perspective, setting the right service level to satisfy service demands without loss is an issue. Cloud service providers often assign more resources than needed and suffer from a low utilization of computing resources, which increases operational costs (Barroso, Clidaras, and Holzle 2013). Considering that ensuring a sufficient amount of available resources upon customer requests and a reasonable service price is important to the service quality, this situation calls for action for efficient cloud resource usage.


From the scheduling perspective, however, resource reservation brings new dimensions to task scheduling in cloud computing. A key difference is that the availability of resources varies over time as a result of reservation. In addition, tasks for reservation can be handled together with on-demand tasks at the moment of task scheduling. Therefore, a time dimension should be considered for both resources and tasks that demand resources at certain time periods.

The resulting scheduling problem can be formulated as a task scheduling problem with time-dependent resource availability constraints, a highly complex optimization problem. However, due to the increasing prevalence of such problems in cloud computing, there is an increasing need for the capability to solve them in a real-time or near real-time manner. This phenomenon has been observed in many applications of task scheduling, including cloud computing (Houssein et al. 2021), mobile crowdsensing (Wang et al. 2018), and automated manufacturing environments (Dang, Nielsen, and Bocewicz 2012; Nielsen et al. 2014), where tasks are continuously generated and updated, demanding near real-time responses. Inspired by these needs, a simple but effective solution algorithm to the problem is addressed in this article.

2. Related work

2.1. Target task scheduling problem classification

Figure 1. Cloud computing layers and corresponding scheduling classes.

Resource scheduling addresses the scheduling of resource distributions between the infrastructure layer (where physical resources are managed) and the virtualization layer (where virtual machines are managed). How to map physical machines to virtual machines is the main theme of a resource scheduling problem in cloud computing. Please refer to Singh and Chana (2016) for a comprehensive review on cloud computing resource scheduling.

On the other hand, task scheduling, the focus of this study, addresses how to allocate tasks for users' requests to virtual machines. A special case of task scheduling in cloud computing is workflow scheduling, where tasks are allocated to virtual resources considering precedence relationships between the tasks. Please refer to Masdari et al. (2016) and Masdari and Zangakani (2020) for extensive reviews on task and workflow scheduling in cloud computing.

Figure 2. Task scheduling classification features with highlights on relevant elements of the features to the target problem.


Based on the classification scheme presented in Figure  2 , the target task scheduling problem can be distinguished from work in the literature as follows. First, unrelated machines where processing time and quality of task execution vary across available machines are considered in the problem. The availability of machines also changes over time, adding a time dimension to resource status and task allocation decisions.

Static and offline task scheduling is considered in this study regarding the scheduling type feature, which represents a scheduling environment. In the cloud computing context, static task scheduling assumes prior knowledge of timing information about tasks. In contrast, dynamic task scheduling addresses a situation where timing information for tasks is not available at runtime. Offline task scheduling batches arriving tasks and schedules a set of tasks following an interval, whereas online task scheduling assigns tasks to resources as they enter.

In terms of the factor feature, which specifies the target objective functions and constraints of a problem, the target task scheduling problem aims to maximize the total rewards of task execution respecting the time windows of the tasks. The relevant settings of the problem feature to the task scheduling problem are highlighted with shading in Figure  2 .

2.2. Solution methodologies for task scheduling

A basic task scheduling problem, which assigns tasks to available resources subject to the resource–task compatibility constraints ( i.e. a single task can be assigned to a specific set of resources at a particular time), can be formulated as a minimum cost maximum flow problem where an optimal solution can be found in polynomial time. However, the flow-based approach is hardly applied to task scheduling problems in cloud computing because of its low scalability to the scale of cloud computing and low adaptability to additional constraints in cloud computing.

Various solution approaches have been proposed to address this challenge, which can be classified into two classes. The first class of approaches is heuristic algorithms, which are mainly based on a simple rule or a combination of rules, chosen for their speed in deriving a feasible solution. First Come First Serve (FCFS), Minimum Completion Time (MCT) and Minimum Execution Time (MET) are examples of rules that determine the priorities of tasks and resources for task scheduling (Madni et al. 2017). Despite the simplicity of the concept, they have shown promising performance on various scheduling problems (Houssein et al. 2021).
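As an illustration only (the machine names and per-machine speed model are assumptions, not from the cited works), the MCT rule reduces to a one-line greedy choice per task:

    def mct_schedule(arrivals, speeds):
        """Minimum Completion Time rule: assign each task, in arrival order,
        to the machine that would finish it earliest."""
        finish = {m: 0.0 for m in speeds}      # current finish time per machine
        plan = {}
        for task, length in arrivals:
            m = min(speeds, key=lambda v: finish[v] + length / speeds[v])
            plan[task] = m
            finish[m] += length / speeds[m]
        return plan

    print(mct_schedule([("t1", 4), ("t2", 2), ("t3", 6)], {"m1": 2.0, "m2": 1.0}))
    # -> {'t1': 'm1', 't2': 'm2', 't3': 'm1'}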

Table 1. Recent studies on meta-heuristics for task scheduling.

2.3. Contributions of this study

The target task scheduling problem is a key function in cloud computing with resource reservation. However, related studies in the literature consider only the resources available at the moment of scheduling and either keep the tasks that cannot be performed immediately for the next scheduling round or discard them (Garg and Singh 2015; PeiYun Zhang and Zhou 2018; Oliveira et al. 2019). Moreover, in resource-reservation practice, it would be natural to incorporate the present and future availability of resources into task scheduling, especially when a task has a deadline or a time window for its completion.

From the solution methodology perspective, this study belongs to the heuristic algorithms class. A local search algorithm, an heuristic algorithm that updates a solution by applying local changes, is applied to the task scheduling problem. The performance of local search algorithms on various scheduling problems has been well recognized (Chalupa and Nielsen 2019; El Yafrani et al. 2021). However, existing heuristic approaches in the literature single-mindedly focus on a rule-based approach and neglect evolving heuristics (e.g. local search algorithms) (Houssein et al. 2021).

4. The proposed solution approach

To solve the task scheduling problem, a solution approach is designed that derives a feasible solution following a rule and improves the solution by re-assigning tasks to resources.
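Algorithms 1–3 themselves are not reproduced in this excerpt. Purely to illustrate the construct-then-improve pattern, a single-move local search over a simplified makespan objective (not the paper's reward and time-window objective; all names hypothetical) might look like:

    def improve_by_reassignment(plan, lengths, speeds):
        """Move one task at a time to another machine whenever the move
        reduces the makespan; stop at a local optimum."""
        def makespan(p):
            load = {m: 0.0 for m in speeds}
            for task, m in p.items():
                load[m] += lengths[task] / speeds[m]
            return max(load.values())

        improved = True
        while improved:
            improved = False
            for task in list(plan):
                for m in speeds:
                    if m == plan[task]:
                        continue
                    candidate = {**plan, task: m}
                    if makespan(candidate) < makespan(plan):
                        plan, improved = candidate, True
        return plan

    start = {"t1": "m1", "t2": "m1", "t3": "m1"}      # a rule-based start
    print(improve_by_reassignment(start, {"t1": 4, "t2": 2, "t3": 6},
                                  {"m1": 2.0, "m2": 1.0}))
    # -> {'t1': 'm2', 't2': 'm1', 't3': 'm1'}, makespan 4 instead of 6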

4.1. Initial solution generation

Given a problem instance, the algorithm first generates an initial solution to the problem. The initial solution generation algorithm is presented as Algorithm 1.


4.2. Solution improvement

Given CandidateSchedule , a solution improvement algorithm is executed to re-allocate tasks to resources until the improvement in the solution quality becomes marginal. Detailed steps of the algorithm are described in Algorithm 3.

Figure 4. Conflicts caused by rescheduling a task.


Figure 5. Flowchart of the proposed algorithm.


5. Computational results and discussion

5.1. Problem instance generation

Table 2. Experimental setting for problem instance generation.

5.2. Performance analysis

Table 3. Experiment results.

As can be observed in Table 3, the proposed algorithm works well for all the problem classes. In particular, when the problem size is large (i.e. the M30T288, M100T72, M100T144 and M100T288 problem classes), the proposed algorithm provides solutions with less than 1% optimality gap, on average. The proposed algorithm shows slightly poorer but still acceptable performance for small-scale problem instances, providing around 3% and 2% optimality gaps for problem instances on M10T72 and M10T144, respectively. Regarding computation time, the proposed algorithm provides solutions in a very short time. For the small- and medium-scale problem instances, the algorithm generates solutions in less than a second, while it took two seconds on average, and five seconds maximum, for large-scale problems.

5.3. Embedding the proposed algorithm into a genetic algorithm

The performance of the proposed algorithm can be improved further by embedding the algorithm into a meta-heuristic algorithm. Based on this idea, a Genetic Algorithm (GA) is implemented in which a task scheduling solution is represented as a sequence of tasks and evaluated by the proposed algorithm. Recall that the proposed algorithm starts by finding a sequence of tasks to generate an initial solution (see Algorithm 1). Therefore, changing the sequence allows for the proposed solution approach to find different final solutions. It should be noted that, in principle, the proposed algorithm can be embedded into any other meta-heuristic algorithms with a proper scheme to update the sequence of tasks.

Figure 6. Performance comparison with the GA.


Table 4. Experiment results for GA.

As presented, the optimality of solutions is improved by implementing the GA. The GA can provide a solution with less than a 1% optimality gap for all the problem classes, on average. However, applying a meta-heuristic for the target task scheduling problem of this study may not be practical for a system where the arrival rate of tasks is high and immediate responses to the tasks are critical because the approach takes a relatively long time to find the final solutions.

Moreover, as observed in Figure  6 , the optimality gap of the proposed algorithm becomes close to that of the GA as the problem scale increases. This underlines the performance of the proposed algorithm in large-scale problems, important for a solution algorithm in cloud computing. Nevertheless, these results indicate the potential gain by implementing the proposed algorithm into a meta-heuristic and its simplicity, which is critical for improving the performance of the proposed algorithm for small-scale problems.

6. Discussion

6.1. The impact of resource scarcity on the algorithm performance

Figure 7. Optimality gap over resource scarcity.


In Figure  7 , a dot represents an optimality gap in a problem instance and linear regression models for the optimality gap over the resource scarcity are plotted for each problem class. One can first observe from the figure the tendency that, as the scarcity of resources increases, the optimality gap increases. This case is straightforward because, when the resource scarcity is high, it is difficult to find a resource for a task, making a task scheduling problem hard to solve. It is also observed that the slopes of the tendency in small-scale problem classes are greater than those in large-scale problem classes because, with a shorter planning horizon and fewer resources, the solution space explored by the proposed algorithm is limited, degrading the quality of a final solution.

Lastly and importantly, Figure  7 implicitly indicates a map that predicts the performance level of the proposed algorithm to a problem. In particular, it is expected that the proposed algorithm will provide near-optimal solutions to problems having low resource scarcity level, which are the dominant cases in resource-rich cloud computing services.

6.2. Initialization of the algorithm

Figure 8. Performance comparison by different initialization schemes.


This figure shows box plots of the optimality gap of the proposed algorithm with different initialization rules. Positive optimality gap differences (the shaded region in the figure) mean that the proposed initialization rule performs better than a reference rule. From the figure, it is first observed that, overall, the proposed initialization rule provides better performance than the other rules. On the other hand, the performance difference by the initialization rules shows a high variance on the small-size problem instances with almost zero difference on average. This result implies that no initialization algorithm dominates the others, thus running the algorithm sequentially or in parallel with different initialization rules can be considered for better performance.

7. Concluding remarks

Motivated by resource-reservation-based cloud services, this study addresses a task scheduling problem with task time windows and time-dependent resource availability constraints. An heuristic algorithm is designed that can provide solutions with acceptable quality quickly to resolve the issue. Considering the scalability and responsiveness levels demanded by many systems and enterprises where task scheduling plays a key role, the proposed approach can contribute to their service quality.

The task scheduling problem addressed in this study is for a single task scheduling problem instance in the cycle. Task scheduling is, in fact, a continuous process performed over a service time. While the proposed algorithm can still be applied to every task scheduling instance of the cycle with the correspondingly updated resource availability status, it can be improved further w.r.t. its practicality by incorporating the following aspects.

First, a rescheduling algorithm focusing on limited resources can be discussed to reduce the response time to a sudden update on resources or tasks. Overly tight task scheduling solutions, which assign tasks to resources without slack, may entail significant rescheduling work when a resource suddenly becomes unavailable, so preventing them is worthwhile. Adding buffers to the time interval for task execution can also be considered. Lastly, a solution algorithm can exclude tasks that demand resources in the relatively distant future from the moment of task scheduling; resource allocation to those tasks can be made later with up-to-date information.

b CPU time (sec): algorithm runtime to get a solution.

b CPU time (sec): GA runtime to get a solution.

Disclosure statement

There are no relevant financial or non-financial competing interests to report.

Data availability statement

The generated problem instance data is available from the corresponding author, upon reasonable request.


Improved Jellyfish Algorithm-based multi-aspect task scheduling model for IoT tasks over fog integrated cloud environment

Journal of Cloud Computing: Advances, Systems and Applications, volume 11, Article number: 98 (2022)


Corporations and enterprises creating IoT-based systems frequently use fog computing integrated with cloud computing to harness the benefits offered by both. These computing paradigms use virtualization and a pay-as-you-go strategy to provide IT resources, including CPU, memory, network and storage. Resource management in such a hybrid environment becomes a challenging task. This problem is exacerbated in the IoT environment, as it generates deadline-driven and heterogeneous data demanding real-time processing. This work proposes an efficient two-step scheduling algorithm comprising a bi-factor task classification phase based on deadline and priority, and a scheduling phase using an enhanced artificial Jellyfish Search Optimizer (JS), proposed as an Improved Jellyfish Algorithm (IJFA). The model considers a variety of cloud and fog resource parameters, including speed, capacity, task size, number of tasks, and number of virtual machines, for resource provisioning in a fog integrated cloud environment. The model has been tested on real-time task scenarios covering both smaller workloads and the relatively higher workloads that match real-time situations. The model addresses the Quality of Service (QoS) parameters of minimizing the batch's make-span time, lowering batch execution costs, and increasing resource utilization. Simulation results prove the effectiveness of the proposed model.

Introduction

In recent decades, the scientific community has embraced meta-heuristic optimization approaches to solve complicated optimization problems. Neural networks, data mining, industrial, mechanical, electrical and software engineering, and specific issues in location theory are some of the application domains of meta-heuristic algorithms [ 1 , 2 , 3 , 4 , 5 ]. Hussain's analysis of 1,222 papers on metaheuristics from 1983 to 2016 (33 years) suggests that the behaviour of birds, humans, plants, water, the ecosystem, electromagnetic forces, and gravitation have all been employed as metaphors in metaheuristic techniques [ 6 ]. Figure  1 presents a division of these techniques into two groups. The first group includes approaches that imitate biological or physical events and can be divided into four sub-categories: Nature-based, Physics-based, Human-based and Swarm-based methods. The second group consists of those motivated by human events. The most exciting and widely used metaheuristic algorithms are swarm-intelligence algorithms based on the collective intelligence of colonies of ants, termites, bees, flocks of birds, and so on [ 7 ]. Their success can be attributed to the fact that they leverage knowledge shared among several agents, so that self-organization, co-evolution and learning aid in producing high-quality solutions over successive iterations. Although not all swarm-intelligence algorithms succeed, a handful have been quite effective and have thus become popular tools for tackling real-world problems [ 8 ].

Figure 1. Classification of Meta-Heuristic Algorithms

Working along the same lines, cloud computing research has also leveraged the benefits of many meta-heuristics to target its complex problems, e.g., virtual machine allocation [ 9 , 10 , 11 ], virtual machine placement [ 12 , 13 ], load balancing [ 14 ], task scheduling [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 ], workload prediction [ 28 ], resource allocation [ 29 , 30 ], workflow scheduling [ 31 , 32 ], virtual machine migration [ 33 ] and many more.

Computing paradigms like the cloud and its related technologies, fog and edge computing, are built on the pay-as-you-go model. Resources are provided based on service-level agreements (SLA) between the service providers and the consumers, and are of utmost importance for these technologies. Accordingly, resource provisioning for task scheduling becomes one of the significant concerns for these paradigms, along with other challenges like security, performance, resource management and reliability. Therefore, to achieve efficient performance and to make the best use of scarce fog or cloud resources, the users' tasks must be scheduled intelligently on the available resources while meeting the desired QoS. There are numerous factors to consider when designing any task-scheduling algorithm. From a user's standpoint, some of the critical factors are task completion time, makespan, security, and response time. From the service provider's standpoint, the crucial parameters are resource utilization, fault tolerance, and power consumption, to name a few.

Owing to the large solution space and the time required to obtain an optimal solution, job scheduling, also known as resource provisioning, for the cloud and its peers has been classed as NP-hard [ 34 ]. Optimization methodologies using meta-heuristic methods driven by performance factors, e.g., completion time, cost and resource utilization, offer a way to address the resource provisioning problem. Although cloud computing meets the computational requirements of IoT operations, remote execution of these tasks on cloud servers creates multiple issues, including higher latency and limited bandwidth availability, which impact task performance. Furthermore, IoT applications conduct various tasks with varying priorities and deadlines that must be performed without delay. These challenges can be overcome by processing IoT tasks at the network's edge, an approach called fog computing [ 35 ]. The goal of fog computing is to complete work before the deadline with local workload execution. The hybrid fog-cloud architecture offers a promising arrangement to improve the QoS with a broader horizon. The fog layer assists the cloud layer in task execution: appropriate tasks are executed at the fog level, with the remainder of the workload offloaded to the cloud. Scheduling tasks across the fog-cloud layers promises reduced make-span time in executing these tasks with better resource utilization.

Looking at the immense benefits of using meta-heuristics in the computing literature, this study proposes an Improved Jellyfish Algorithm (IJFA) model for scheduling classified tasks over the fog integrated cloud environment. The work considers tasks originating from Internet of Things (IoT) devices, resulting in heterogeneous task generation. The model uses a bi-factor classification method based on task category, considering the priority, deadline and resource requirements, and aims to minimize the make-span time of the batch of jobs. This, in turn, lowers the execution cost of the tasks submitted for execution while improving overall resource efficiency and utilization.

The proposed algorithm IJFA is based on a recently developed population-based nature-inspired meta-heuristic algorithm, the artificial Jellyfish Search Optimizer (JS), inspired by the behaviour of jellyfish in the ocean. The simulation of jellyfish search behaviour includes their following of the ocean current, their motion within a jellyfish swarm as active and passive motions, a time control mechanism for switching between these movements, and their convergence into a jellyfish bloom. On benchmark functions and optimization problems, the approach performs admirably. The population size and the number of iterations are the only two control parameters in JS. As a result, it requires minimal effort to deploy and can be an excellent meta-heuristic algorithm for addressing optimization problems. This work modifies the JS algorithm to achieve faster convergence by improving its exploration phase, resulting in a wider exploration of the solution space to attain efficient scheduling decisions [ 36 ].

The remainder of this paper is organized as follows. The Literature survey  section reviews the various works reported in the domain. The proposed multi-aspect task scheduling approach  section details the proposed model, including the Improved Jellyfish Algorithm (IJFA) based task scheduling principles. The Experimental results section discusses the simulation setup, simulation results and a performance evaluation of the proposed model. The Conclusion and future works section presents the conclusions drawn from the work and possible future directions.

Literature survey

Cloud computing is the most popular distributed computing paradigm, providing self-service, dynamically scaled and metered access to a shared pool of resources with guaranteed Quality of Service (QoS) to the users. Jobs must be efficiently mapped to the offered resources to achieve QoS; otherwise, Service Level Agreements (SLA) may be violated, and users will be hesitant to pay if the desired performance is not realized. Therefore, cloud computing systems treat scheduling as a significant theme, where obtaining a near-optimal solution in a short period is desirable. No algorithm can solve the scheduling problem in polynomial time and provide optimal results, owing to the vast search space in real-world computing deployments. Therefore, combining meta-heuristic algorithms with the optimization of essential parameters reduces search space complexity and execution time. Also, the goal of task scheduling changes from one application to the next according to the QoS requirements. As a result, numerous studies in cloud and fog computing focus on meta-heuristics-based job scheduling. This section provides a comprehensive review of several scheduling techniques using various metaheuristics in the realm of cloud and fog computing.

Meta-heuristics in cloud computing

When implementing a task scheduling approach, at least one objective function ensures high performance. The most prevalent objectives are make-span, monetary cost, computational cost (i.e., CPU, memory, storage, GPU, bandwidth, etc.), reliability and availability, elasticity or scalability, energy consumption, security, resource usage, and throughput [ 37 ]. Researchers have explored both single-objective and multiobjective variants of these problems using various metaheuristics.

In [ 38 ], the authors introduced a new single-objective strategy based on the Firefly algorithm for scheduling submitted tasks in clouds to reduce make-span. Their proposed Firefly Algorithm (FFA) outperforms Simulated Annealing (SA) and the Cuckoo Search Algorithm (CSA) in the experiments. Though this work minimizes make-span successfully, other parameters, such as monetary cost, scalability, and availability, were ignored. The work in [ 22 ] suggested a hybrid task scheduling technique, MSDE, an integration of the Moth Search Algorithm (MSA) and Differential Evolution (DE) algorithms, with the single goal of minimizing make-span. The model offers exploration and exploitation capabilities based on Lévy flight and phototaxis ideas. However, since MSA's exploitation capability is restricted, the DE algorithm has been utilized for local search, offering superior exploitation capability. Their results show that the proposed hybrid MSDE algorithm outperforms state-of-the-art heuristic and meta-heuristic scheduling algorithms regarding system make-span and throughput. Likewise, in [ 39 ], the authors proposed a hybrid method that combines the benefits of Ant Colony Optimization (ACO) and Cuckoo Search to lower the make-span or completion time. The work achieved this objective using the hybrid algorithm because the jobs were completed by allocating sufficient resources inside the set time interval. The findings suggest that the hybrid algorithm outperforms the ACO method in terms of algorithm performance and time to completion.

An improved ant colony algorithm for multiobjective optimization scheduling based on a resource cost model (the relationship between the user's resource costs and the budget costs) was reported in [ 40 ]. The model achieved multiobjective performance and price optimization by including the make-span and the user's budget costs as optimization constraints. Their multiobjective optimization method performed better than similar methods on make-span, cost, deadline violation rate and resource utilization. Leveraging the speed and accuracy of the PSO algorithm, the authors in [ 41 ] proposed a comprehensive multiobjective model to give better QoS to cloud customers by reducing task execution/transfer time and cost. The authors achieved this by moving excess tasks from an overloaded VM rather than migrating the complete overload, eliminating the use of the VM pre-copy process. The simulation results reveal that the suggested method dramatically decreases load balancing time compared to standard load balancing methodologies.

Similarly, research in [ 42 ] proposes a PSO-based Adaptive Multiobjective Task Scheduling (AMTS) strategy that considers both processing and transmission time and, according to experimental results, produces a better quasi-optimal solution in terms of average cost, job completion time and energy consumption. The work reported using an adaptive acceleration coefficient to preserve particle diversity. Following the same trend, [ 43 ] also used a load-balanced scheduling strategy based on the New Particle Swarm Optimization (NPSO) method. A new cost assessment function was employed to reduce the monetary cost of processing tasks on VMs. The suggested method improves efficiency through cost optimization (minimized cost), based on a statistical analysis of the total cost (execution and transfer) on a data set with many iterations and particles.

To build a complete multiobjective optimization model for task scheduling, the authors in [ 44 ] included four conflicting objectives: task transfer time, task execution cost, power consumption, and task queue length, to lower expenses for customers and providers. Using Multiobjective Particle Swarm Optimization (MOPSO) and a Multiobjective Genetic Algorithm (MOGA), their proposed model achieves optimal trade-off solutions among the four conflicting objectives, reducing job response time and make-span significantly. The findings indicate that the proposed model is faster and more accurate, improving QoS and lowering provider costs. The authors in [ 45 ] introduced a novel Multi-objective Cuckoo Search Optimization (MOCSO) technique for the resource scheduling problem in cloud computing to lower cloud user costs and improve performance by reducing make-span time; this helps cloud providers earn revenue by maximizing utilization. The investigations and evaluation of the proposed method show that it outperforms MOACO, MOGA, MOMM, and MOPSO in balancing numerous objectives such as projected time to completion and cost. [ 46 ] proposed an ACO-, PSO-, and GA-based task-level and service-level dynamic resource scheduling technique, in which a task is assigned to a VM and a task is assigned to a service, respectively. This solution optimizes the make-span and CPU time while also lowering the overall operational cost of data centres; still, the model does not perform well when allocating resources to global tasks. Table 1 summarises some of the recent single-objective and multiobjective meta-heuristic algorithms employed in the cloud to address various QoS parameters.

Meta-heuristics in fog integrated cloud computing

Taking inspiration from the use of metaheuristics in cloud computing, researchers have explored task scheduling with metaheuristics in fog computing and fog integrated cloud environments as well. In [ 47 ], the authors offered an energy-saving method based on the Harris Hawks optimization meta-heuristic to increase QoS while maintaining the SLA. The proposed algorithm builds on the observation that task scheduling is essential and contributes to fog servers' energy usage when managing Industrial IoT (IIoT) applications. It reportedly beats other known algorithms such as Particle Swarm Optimization (PSO) and Teaching Learning Based Optimization (TLBO) in terms of energy consumption and other QoS factors. In [ 48 ], the authors introduced an Adaptive Double fitness Genetic Task Scheduling (ADGTS) algorithm to optimize task make-span and communication cost at the same time using collaborative task and fog resource scheduling. Simulation results suggest that the ADGTS algorithm can simultaneously balance communication cost and task make-span performance, and it performs better on task make-span than the Min-Min method. The authors in [ 49 ] proposed a task scheduling algorithm based on a Moth-Flame Optimization (TS-MFO) algorithm. The proposed technique assigns an appropriate set of tasks to fog nodes to meet the quality-of-service criteria of Cyber-Physical Systems (CPS) applications while minimizing task execution time. The simulation study suggests that the model outperforms the PSO, NSGA-II, and Bees Life Algorithm (BLA) techniques in terms of total task execution time. The authors in [ 50 ] provide an optimization technique for IoT-based applications using a modified version of genetic algorithms with a focus on reducing latencies.

On the other hand, a few works explored multiobjective scenarios. In one such work [ 51 ], aiming to improve the QoS supplied to users in Industrial IoT (IIoT) applications, the authors proposed an energy-aware meta-heuristic for Task Scheduling in Fog Computing (TSFC) based on a Harris Hawks Optimisation algorithm with a Local search Strategy (HHOLS), aided by a normalizing and scaling phase to handle the discrete TSFC problem. The quality of the solution was improved further by balancing workloads across all virtual machines through swap mutation. The work compared the HHOLS method with other meta-heuristics on various performance indicators, such as energy consumption, make-span, cost, flow time, and the emission rate of CO 2. In a similar direction, the authors in [ 52 ] proposed an enhanced elitism genetic algorithm (IEGA) to solve the task scheduling problem for fog computing and increase the quality of services provided to IoT device consumers. The proposed method demonstrates superior performance in make-span, flow time, fitness function, carbon dioxide emission rate, and energy consumption compared to its peers. The benefits of IEGA come from two main phases: first, the mutation rate and the crossover rate are manipulated to help the algorithm explore most of the combinations that could form the near-optimal permutation; second, several solutions are mutated based on a certain probability to avoid becoming trapped in local minima and to find better solutions.

Another work reported in [ 53 ] demonstrates a novel scheduling method based on the ant colony algorithm, allowing for more accurate job scheduling and execution. It is a three-step method in which tasks are first separated into two groups based on their completion time and cost, then prioritized on the same criteria; finally, the ant colony method is used to choose the best virtual machine to run the jobs. Simulation results suggest that the proposed method provides acceptable performance in make-span, response time, and energy usage compared to others. The work in [ 54 ] presents a Novel Bio-Inspired Hybrid Algorithm (NBIHA), a mix of Modified Particle Swarm Optimisation (MPSO) and Modified Cat Swarm Optimisation (MCSO), to lower average response time and optimize resource consumption by efficiently scheduling jobs and managing available fog resources. The proposed algorithm outperforms three other algorithms, viz. First Come First Serve (FCFS), Shortest Job First (SJF), and MPSO, by minimizing the execution time. In [ 55 ], the authors proposed a Priority-aware Genetic Algorithm (PGA), a hybrid technique combining task prioritization with a genetic algorithm implementation. The objective is to determine the best compute node for each task by considering various job requirements alongside the diverse nature of fog and cloud nodes. It is a novel fog-cloud scheduling algorithm that optimizes a multiobjective function, a weighted sum of overall computation time, energy consumption, and the Percentage of Deadline Satisfied Tasks (PDST). The work in [ 56 ] introduced a Discrete Non-Dominated Sorting Genetic Algorithm II (DNSGA-II) based optimization model to schedule tasks dynamically, aiming to reduce makespan and costs in a fog-cloud environment. The model deals with the discrete multiobjective scheduling problem, allocates computing resources on either cloud or fog nodes and organizes the distribution of workloads. Another work [ 57 ] attempts to reduce makespan and energy consumption in the fog-cloud environment by using an integration of the Cultural Evolution Algorithm (CEA) and Invasive Weed Optimization (IWO), named hybrid IWO-CA; the Dynamic Voltage and Frequency Scaling (DVFS) technique is applied to minimize energy consumption. The Whale Optimization Algorithm has been explored for a smart healthcare application in [ 58 ] to address the massive data generated in the process, aiming to minimize average energy consumption and cost. Table 2 summarises some of the recent single-objective and multiobjective meta-heuristic algorithms used in fog environments for addressing various QoS parameters.

Motivation and objective

The literature survey reveals that, owing to the NP-hard nature of the scheduling problem, works considering both single-objective and multiobjective formulations have addressed scheduling in different ways while proposing or utilizing a variety of well-known metaheuristics. However, many of them ignored the need to classify heterogeneous end-user requests. The present study takes the importance of task categorization into account and is based on the novel Jellyfish Search Optimiser (JS) [ 36 ]. JS has proven better than peers such as the Whale Optimization Algorithm (WOA), Tree-Seed Algorithm (TSA), Symbiotic Organisms Search (SOS), Teaching-Learning-Based Optimization (TLBO), Firefly Algorithm (FA), Gravitation Search Algorithm (GSA), Artificial Bee Colony (ABC), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) on mathematical benchmark tests. Additionally, JS has fewer internal parameters, which simplifies the design, implementation and tuning of the simulation in the proposed work. This work proposes a multi-aspect task scheduler based on the Improved Jellyfish Algorithm (IJFA), which improves the exploration characteristics of the traditional JS to achieve better scheduling decisions for heterogeneous IoT tasks. Further, classifying the jobs in the batch before making scheduling decisions yields a model suitable for real-time and deadline-aware tasks.

The proposed multi-aspect task scheduling approach

This section presents the system's design, problem statement, proposed model and algorithm, simulation setup, and evaluation procedures. To ensure that the objectives and aims are met, the research methodology builds on similar works in the domain of IoT task classification and scheduling employing metaheuristics.

Methodology design

The work aims to schedule jobs in the fog integrated cloud environment so as to minimize their make-span time. The main research challenge identified is scheduling requests from various IoT devices with heterogeneous demands, arriving in large volumes and seeking different QoS levels. The ample search space makes scheduling these requests an even more complicated task. Several previous works and studies have been examined to discover competent attributes that may be used to classify and schedule requests. Classification is essential before scheduling for two main reasons: first, the requests are heterogeneous, and second, it makes better use of the integrated environment of cloud and fog layers, which work with different capabilities and purposes. The fog layer aids real-time processing, offering ultra-low latency, location awareness, edge resource pooling, mobility and, most importantly, prevention of cloud server overloading. It is also helpful in places where connectivity is inconsistent. On the other hand, the cloud is more centralized and powerful than the fog, exhibiting features like availability, computing capability and storage capacity. It also aids the long-term storage of data used for analysis and decision-making. Figure  2 presents the fog integrated cloud system architecture.

Figure 2. Fog integrated cloud system architecture

All user demands are first submitted to a broker or a gateway with their respective parameter values. The centralized broker, which is also a centralized decision-maker, is an intermediary between the users and the providers and draws its inspiration from the broker used in [ 59 ]. The role of the broker is to manage fog-cloud resource provisioning upon user requests. It has the same role as a cloud broker, described as "an entity that manages the use, performance and delivery of cloud services, and negotiates relationships between cloud providers and consumers" [ 60 ]. The centralized broker records dynamically changing and uncertain resources from multiple fog and cloud providers, along with the various IoT devices generating heterogeneous requests. It therefore minimizes the complexity arising from many request handlers by providing a single-point solution in the fog integrated cloud environment. Once the broker receives the requests, the bi-factor classification algorithm first categorizes them based on deadline and priority. Next, these tasks are scheduled using the Improved Jellyfish Algorithm (IJFA), based on the task category of the requests and their resource requirements, to fog nodes or cloud nodes accordingly.

The most important properties of the requests coming from IoT devices are their deadline and priority, which can be used to classify them and then schedule them accordingly. For this purpose, this work proposes a bi-factor classification algorithm to ascertain the task category. The work then proposes an Improved Jellyfish Algorithm (IJFA) strengthening the exploration properties of the JS algorithm. JS is chosen for improvement as it is a promising meta-heuristic because of its simplicity and its use of only two control parameters, i.e., population size and the number of iterations. Figure  3 presents the workflow of the proposed model.

Figure 3. Workflow of the proposed model

Problem statement

The IoT devices in the given environment produce data that must be processed for the various applications for which they have been deployed. The computationally constrained nature of IoT devices makes it necessary for them to offload the data to other computing nodes for processing. In the proposed fog integrated cloud environment, the node assigned a task and performing the computation may be the fog node controlling the network of that geographical location or a cloud node at a remote location. These nodes treat the processing of data as a task. Each task will be assigned either to a fog node or to a cloud node and executed on a virtual machine. The VM configurations can be the same if the tasks are of the same type, or different depending on the task types. The tasks for which computation is required can be denoted as

\[ T = \left\{ {T}_{1}, {T}_{2}, \ldots , {T}_{k} \right\} \]

where \(k\) is the number of tasks submitted by the IoT devices and \({T}_{i}\) represents the \({i}^{th}\) task in the task sequence. These tasks are later clubbed together to form a batch of tasks to be executed as per the QoS requirements. For every task, a suitable fog or cloud node is selected for allocation. As with many scheduling approaches, the proposed one aims to minimize the makespan of the tasks submitted by the IoT devices. Each task \({T}_{i}\) possesses some features which can be formulated as

\[ {T}_{i} = \left\{ {T}_{i}^{ID}, {T}_{i}^{TL}, {T}_{i}^{p}, {T}_{i}^{dl} \right\} \]

Here, \({T}_{i}^{ID}\) is the task identifier, \({T}_{i}^{TL}\) the task length (unit: million instructions), \({T}_{i}^{p}\) the priority level, and \({T}_{i}^{dl}\) the execution deadline of a given task \({T}_{i}\) . A node processing tasks can be a fog node in the set \({ND}_{f}\) , represented as

\[ {ND}_{f} = \left\{ {N}_{1}, {N}_{2}, \ldots , {N}_{n} \right\} \]

or a node in the cloud data center set \({ND}_{c}\) , denoted as

\[ {ND}_{c} = \left\{ {N}_{1}, {N}_{2}, \ldots , {N}_{m} \right\} \]

Each node \({N}_{j}\) and \({N}_{r}\) possesses certain features, defined as

\[ {N}_{j} = \left\{ {N}_{j}^{id}, {N}_{j}^{MIPS}, {N}_{j}^{m}, {N}_{j}^{ds} \right\} \]

\[ {N}_{r} = \left\{ {N}_{r}^{id}, {N}_{r}^{MIPS}, {N}_{r}^{m}, {N}_{r}^{ds} \right\} \]

Here, \({N}_{j}^{id}\text{ and }{N}_{r}^{id}\) denote the identification or serial number of a node at the fog layer or the cloud layer, \({N}_{j}^{MIPS}\) and \({N}_{r}^{MIPS}\) the information processing speed of the nodes (unit: millions-of-instructions-per-second, MIPS), \({N}_{j}^{m}\) and \({N}_{r}^{m}\) the memory availability, and \({N}_{j}^{ds}\) and \({N}_{r}^{ds}\) the disk space availability of nodes \({N}_{j}\) and \({N}_{r}\) respectively. The allocation of a task \({T}_{i}\) to node \({N}_{j}\) or \({N}_{r}\) obeys certain conditions, which are expressed as follows:

\[ {A}_{ij} = \begin{cases} 1, & \text{if task } {T}_{i} \text{ is placed on node } {N}_{j} \\ 0, & \text{otherwise} \end{cases} \]

Here, \({A}_{ij}\) is the allocation vector representing the placement of task \({T}_{i}\) on node \({N}_{j}\) . Allocation is carried out only if the resource availability of the node is greater than the resource requirements of the task. The Expected Completion Time (ECT) matrix of size \([{N}_{k} \times {N}_{n}]\) holds the expected execution time of each task individually on each computing resource of the fog layer,

\[ ECT = {\left[ {ECT}_{ij} \right]}_{{N}_{k} \times {N}_{n}} \]

Similarly, a matrix of size \([{N}_{k} \times {N}_{m}]\) holds the expected execution time of the tasks on each computing resource (VM) of the cloud layer individually,

\[ ECT = {\left[ {ECT}_{ir} \right]}_{{N}_{k} \times {N}_{m}} \]

One of the aims of the proposed model while making a scheduling decision is to minimize the make-span by locating the most efficient group of tasks on virtual machines. \({ECT}_{ij}\) can be computed as

\[ {ECT}_{ij} = \frac{TL\left({T}_{i}\right)}{{N}_{j}^{MIPS}} \]

where \({ECT}_{ij}\) refers to the required execution time of the \(i\) th task on the \(j\) th virtual machine of the fog layer and \(TL\left({T}_{i}\right)\) is the task length of the \(i\) th task. Similarly, \({ECT}_{ir}\) refers to the expected execution time of the \(i\) th task on the \(r\) th VM in the cloud layer,

\[ {ECT}_{ir} = \frac{TL\left({T}_{i}\right)}{{N}_{r}^{MIPS}} \]
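For instance, under these definitions a task of length 10,000 MI placed on a fog node rated at 2,000 MIPS would have

\[ {ECT}_{ij} = \frac{10{,}000 \text{ MI}}{2{,}000 \text{ MIPS}} = 5 \text{ s} \]

(the numbers are purely illustrative and are not taken from the experiments).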

A fitness value \(Ft\) indicating the suitability of a candidate allocation of tasks to nodes is then evaluated for every solution.
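Since the scheduler minimizes make-span, a natural concrete choice, offered here as a plausible reading rather than the paper's exact equation, is to score a candidate allocation \(A\) by the largest total expected completion time over all nodes,

\[ Ft\left(A\right) = \underset{j}{\max} \sum_{i \,:\, {A}_{ij}=1} {ECT}_{ij} \]

with the analogous sum over \({ECT}_{ir}\) for tasks placed on cloud nodes; lower \(Ft\) indicates a better schedule.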

The proposed model aims to optimize the tasks' execution time and improve the utilization of resources, thereby achieving the required QoS in terms of completion time. The primary objectives can be summarised as

\[ \text{Minimize } {E}_{T}, \qquad \text{Maximize } Ru, \qquad \text{subject to the } QoS \text{ requirements} \]

where \(Ru\) is the resource utilization, \(QoS\) the Quality of Service to be met, and \({E}_{T}\) the execution time of the tasks or batch of tasks scheduled in the considered computing environment.

Bi-factor task classification

The tasks originating from the IoT devices are aggregated by the \(Broker/Gateway\) , which classifies the tasks and performs scheduling. The incoming tasks carry various priorities and deadlines that should be considered to categorize them effectively, and the classification is performed to achieve an increased completion rate. Threshold values for the priority and the deadline of a task, \(H\left({T}_{i}^{p}\right)\) and \(H\left({T}_{i}^{dl}\right)\) , are computed by mapping each task's priority and deadline values onto the range 0 to 1.

Here, \(H\left({T}_{i}^{p}\right)\) and \(H\left({T}_{i}^{dl}\right)\) are the thresholds corresponding to the priority and the task execution deadline respectively, where \({T}_{i}^{p}\) is the priority value and \({T}_{i}^{dl}\) the deadline requirement of task \({T}_{i}\) , with the threshold values ranging from 0 to 1. On each factor, a task is then classified into one of two classes, namely low and high.

Table 3 presents the task classification based on task priority and deadline. Tasks are divided into four major classes: highly intense, intense, moderate, and low, based on their priority and deadline. These categorized tasks are then scheduled using the proposed model to meet the desired QoS. A minimal sketch of this bi-factor classification is given below.
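As an illustration, the sketch below implements one plausible version of the bi-factor classification in Python. The min-max normalization of priority and deadline, the 0.5 cut-off, and the mapping of the four priority/deadline combinations onto class names are assumptions made for the sketch; the paper defines the thresholds \(H\left({T}_{i}^{p}\right)\) and \(H\left({T}_{i}^{dl}\right)\) only as values in the range 0 to 1, and the exact class boundaries of Table 3 may differ.

from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    length_mi: float   # T_i^TL, task length in million instructions
    priority: float    # T_i^p, larger means more important
    deadline: float    # T_i^dl, time remaining until the task must finish

def normalize(value, lo, hi):
    # Map a raw attribute onto [0, 1]; lo/hi are the batch-wide extremes.
    return 0.0 if hi == lo else (value - lo) / (hi - lo)

def bi_factor_classify(tasks, cut=0.5):
    # High/low priority crossed with high/low urgency yields the four
    # classes of Table 3 (class names follow the text above).
    p_lo, p_hi = min(t.priority for t in tasks), max(t.priority for t in tasks)
    d_lo, d_hi = min(t.deadline for t in tasks), max(t.deadline for t in tasks)
    classes = {"highly intense": [], "intense": [], "moderate": [], "low": []}
    for t in tasks:
        high_priority = normalize(t.priority, p_lo, p_hi) >= cut
        # A nearer deadline means higher urgency, hence the inversion.
        high_urgency = (1.0 - normalize(t.deadline, d_lo, d_hi)) >= cut
        if high_priority and high_urgency:
            classes["highly intense"].append(t)
        elif high_urgency:
            classes["intense"].append(t)
        elif high_priority:
            classes["moderate"].append(t)
        else:
            classes["low"].append(t)
    return classes

The deadline-sensitive classes would then be preferred for fog nodes, with the remaining tasks offloaded to the cloud, mirroring the offloading policy described above.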

Multi-aspect task scheduling

The scheduling of categorized tasks is performed by the \(Broker\) to achieve robust computation of tasks. The factors considered for effective scheduling of tasks are task category \(\left({T}_{i}^{C}\right)\) , and resource requirement \(({T}_{i}^{R}={T}_{i}^{TL},{T}_{i}^{p},{T}_{i}^{dl})\) .

This work proposes the Improved Jellyfish Algorithm (IJFA) to explore the search space for a better task scheduling solution with an increased convergence rate. As mentioned earlier, IJFA aims to improve search space exploration compared to JS while reducing the convergence time. To realize this computationally, the jellyfish vectors are first initialized with the logistic chaotic map used by the standard JS [ 36 ]:

\[ {\overrightarrow{K}}_{\mathfrak{J}+1} = \eta \, {\overrightarrow{K}}_{\mathfrak{J}} \left( 1 - {\overrightarrow{K}}_{\mathfrak{J}} \right) \]

where \({\overrightarrow{K}}_{0}\) denotes the initial vector of the jellyfish, with components in (0, 1), and \({\overrightarrow{K}}_{\mathfrak{J}}\) is the vector that holds the chaotic values of the \(\mathfrak{J}\) th jellyfish. \(\eta\) is a constant, taken as four in the proposed approach. Once the initialization is over, a selection based on \(Ft\) is performed to find the location with the maximum food, \({\overrightarrow{K}}^{*}\) . Following this, the time control mechanism is utilized to switch the motion of a jellyfish between following the ocean current and moving inside the swarm. The motion towards the ocean current, following the standard JS formulation [ 36 ], is

\[ {\overrightarrow{K}}_{\mathfrak{J}}\left(t+1\right) = {\overrightarrow{K}}_{\mathfrak{J}}\left(t\right) + \overrightarrow{rn} \cdot \left( {\overrightarrow{K}}^{*} - \beta \, {rn}_{1} \, \mu \right) \]

Here, \(\overrightarrow{rn}\) and \({rn}_{1}\) represent random numbers in the range 0 to 1, \(\beta\) the distribution coefficient and \(\mu\) the population mean. The motion of a jellyfish inside the swarm is emulated in two ways, using active motion and passive motion. The new location under passive motion, again following [ 36 ], is

\[ {\overrightarrow{K}}_{\mathfrak{J}}\left(t+1\right) = {\overrightarrow{K}}_{\mathfrak{J}}\left(t\right) + \gamma \, {rn}_{3} \left( {up}_{b} - {lw}_{b} \right) \]

Here, \(\gamma\) is a constant representing the motion length, \({rn}_{3}\) a random number in the range 0 to 1, \({up}_{b}\) the upper bound of the search space and \({lw}_{b}\) its lower bound. The new location under active motion is

\[ {\overrightarrow{K}}_{\mathfrak{J}}\left(t+1\right) = {\overrightarrow{K}}_{\mathfrak{J}}\left(t\right) + \overrightarrow{rn} \cdot \overrightarrow{Dr} \]

Here, \(\overrightarrow{Dr}\) denotes the motion direction of the jellyfish towards the best food and can be expressed as

\[ \overrightarrow{Dr} = \begin{cases} {\overrightarrow{K}}_{l}\left(t\right) - {\overrightarrow{K}}_{\mathfrak{J}}\left(t\right), & \text{if } Ft\left({\overrightarrow{K}}_{\mathfrak{J}}\right) \ge Ft\left({\overrightarrow{K}}_{l}\right) \\ {\overrightarrow{K}}_{\mathfrak{J}}\left(t\right) - {\overrightarrow{K}}_{l}\left(t\right), & \text{otherwise} \end{cases} \]

In the above equation, \(l\) is a randomly selected jellyfish index. The control function \(\vartheta (t)\) for switching the motion of a jellyfish, as in [ 36 ], can be written as

\[ \vartheta \left(t\right) = \left| \left( 1 - \frac{t}{{t}_{max}} \right) \left( 2 \, {rn}_{2} - 1 \right) \right| \]

where \(t\) is the current iteration, \({t}_{max}\) the maximum number of iterations, and \({rn}_{2}\) a random number in the range 0 to 1.

If the value of \(\vartheta \left(t\right)\) is greater than or equal to the constant \({\vartheta }_{0}\) , the jellyfish follows the ocean current; otherwise, it moves inside the swarm. If a further random number \({rn}_{4}\) is greater than \(1-\vartheta \left(t\right)\) , the jellyfish moves passively, else it moves actively. The enhancement of IJFA over JS lies in its exploration capability, which prevents premature convergence; IJFA aims at faster convergence while reducing the search time. This is accomplished by injecting randomization into the investigation of the swarm's locations. Initially, the population is, as usual, random to provide diversity. Later on, it exploits some local optimum, but the introduced randomness also avoids premature convergence. The new position update is built from randomly chosen members of the population, as described next.

The improved algorithm selects three random solutions from the population and updates each member accordingly, ensuring a fast convergence rate and a higher chance of moving towards the global optimum.
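As a concrete illustration of such an update, a differential-evolution-style mutation over three randomly chosen members is one plausible form (the structure and coefficients here are assumptions, not taken from the original):

\[ {\overrightarrow{K}}_{\mathfrak{J}}\left(t+1\right) = {\overrightarrow{K}}_{{r}_{1}}\left(t\right) + rn \cdot \left( {\overrightarrow{K}}_{{r}_{2}}\left(t\right) - {\overrightarrow{K}}_{{r}_{3}}\left(t\right) \right) \]

where \({r}_{1}\) , \({r}_{2}\) and \({r}_{3}\) are distinct randomly chosen population indices and \(rn\) is a random number in the range 0 to 1. The Python sketch after the pseudocode below uses this form for its exploration step.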

Figure  4 presents the proposed multi-aspect task scheduling using IJFA, which runs at the broker site. The user requests from various IoT devices are submitted, categorized using the bi-factor classification algorithm and then forwarded to the scheduler. Based on task requirements and category, the scheduler then schedules a task on the fog or cloud layer after initializing the algorithm's parameters and computing and evaluating the fitness of candidate allocations. The time control mechanism governs the switching between the two types of movement of jellyfish moving in the ocean in search of food. They are more attracted to locations where the available quantity of food is greater; the location and the corresponding objective function determine the amount of food found.

Figure 4. Multi-aspect task scheduling using IJFA

The pseudocode of the IJFA-based task scheduling process, which schedules tasks based on task category and resource requirements, is presented in the box below; a Python sketch of the same loop follows.

Figure a. Pseudocode of the IJFA-based task scheduling process
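To complement the pseudocode, here is a compact Python sketch of the optimization loop under stated assumptions: candidate solutions are real-valued vectors decoded to node indices by rounding; the fitness is the makespan form assumed earlier; the exploration step uses the assumed DE-style three-member update in place of JS's active motion; and the parameter defaults (beta, gamma, the threshold theta0) follow the standard JS of [ 36 ] rather than values from this paper.

import random

def makespan(assignment, ect):
    # ect[i][j] is the expected completion time of task i on node j;
    # the makespan is the heaviest total load over all nodes.
    load = [0.0] * len(ect[0])
    for i, j in enumerate(assignment):
        load[j] += ect[i][j]
    return max(load)

def decode(vector, n_nodes):
    # Round each continuous coordinate to a concrete node index.
    return [min(n_nodes - 1, max(0, round(x))) for x in vector]

def ijfa_schedule(ect, pop_size=30, iters=200, beta=3.0, gamma=0.1, theta0=0.5):
    n_tasks, n_nodes = len(ect), len(ect[0])
    lo, hi = 0.0, float(n_nodes - 1)

    def fitness(vec):
        return makespan(decode(vec, n_nodes), ect)

    # Chaotic (logistic map) initialization with eta = 4, as in standard JS.
    pop, x = [], random.uniform(0.01, 0.99)
    for _ in range(pop_size):
        vec = []
        for _ in range(n_tasks):
            x = 4.0 * x * (1.0 - x)
            vec.append(lo + x * (hi - lo))
        pop.append(vec)

    best = min(pop, key=fitness)
    for t in range(iters):
        mean = [sum(v[d] for v in pop) / pop_size for d in range(n_tasks)]
        theta = abs((1 - t / iters) * (2 * random.random() - 1))  # time control
        for idx in range(pop_size):
            vec = pop[idx]
            if theta >= theta0:
                # Follow the ocean current towards the best solution.
                new = [v + random.random() * (b - beta * random.random() * m)
                       for v, b, m in zip(vec, best, mean)]
            elif random.random() > 1 - theta:
                # Passive motion inside the swarm.
                new = [v + gamma * random.random() * (hi - lo) for v in vec]
            else:
                # Assumed IJFA exploration: DE-style three-member update.
                r1, r2, r3 = random.sample(range(pop_size), 3)
                new = [pop[r1][d] + random.random() * (pop[r2][d] - pop[r3][d])
                       for d in range(n_tasks)]
            new = [min(hi, max(lo, z)) for z in new]  # keep inside the bounds
            if fitness(new) <= fitness(vec):          # greedy replacement
                pop[idx] = new
        best = min(pop + [best], key=fitness)
    return decode(best, n_nodes), fitness(best)

# Illustrative example: 6 tasks on 3 nodes, ECT values in seconds.
ect = [[4, 2, 8], [3, 6, 2], [5, 5, 5], [1, 9, 4], [7, 2, 3], [2, 4, 6]]
plan, ms = ijfa_schedule(ect)
print("assignment:", plan, "makespan:", ms)

The greedy replacement keeps the population monotonically improving, which matches the convergence behaviour the paper reports, although the original pseudocode may use unconditional replacement instead.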

Experimental results

This section presents the simulation framework and the experimental results of the proposed model, featuring a multi-aspect task scheduler using the Improved Jellyfish Algorithm (IJFA) for the fog integrated cloud environment. The scheduler uses IJFA to optimize the scheduling of IoT tasks based on each task's priority, deadline and resource requirements. The requests are classified based on deadline and priority and then scheduled on either the fog or the cloud layer accordingly.

Simulation environment

Experimenting on real-world data in a real-world cloud environment is costly and complicated. Furthermore, if the experiment does not go as intended, crucial data may be lost. As a result, a simulator that can model the real-world cloud environment is necessary to test and validate the various strategies and procedures proposed for the benefit of the stakeholders. Once validated, these strategies and mechanisms can be used in a real-world context without the risk of data loss.

Classifying IoT tasks before scheduling them across the data center is essential because it helps manage execution by running high-priority real-time requests within their deadlines. This work uses the bi-factor classification algorithm to classify requests before scheduling them. The experimental parameters are based on previous work reported in the literature on cloud and fog based resource provisioning.

The experiments were conducted by simulating a hybrid fog-cloud environment and heterogeneous task transfer. Table 4 summarises the simulation attributes used in the work. For the simulation, a MATLAB toolbox named Distributed System is utilized; this toolbox can be used to emulate any distributed network, including a fog integrated cloud environment. All experiments were executed for twenty different runs, each with 3000 optimization iterations.

Different batch sizes of Real-Time (RT) and Non-Real-Time (NRT) requests were randomly generated and submitted. The distribution of VMs between fog and cloud nodes was kept at 30% and 70%, respectively. In each simulation, VMs and cloud and fog nodes were randomly generated with different processing capacities and configurations within the ranges given in Table 5, presented in the coming section. A small sketch of this generation step is given below.
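The following Python snippet illustrates how such a node population might be generated. Only the 30%/70% fog-to-cloud split is taken from the text; the capacity ranges are placeholders, since Table 5 is not reproduced here.

import random

def generate_nodes(n_vms=100, fog_share=0.3, seed=None):
    # Randomly generate fog and cloud node capacities. The MIPS, memory
    # and disk ranges below are illustrative placeholders, not the
    # values of Table 5.
    rng = random.Random(seed)
    n_fog = int(n_vms * fog_share)               # 30% fog, 70% cloud
    fog = [{"mips": rng.randint(500, 1500),      # slower, constrained nodes
            "mem_mb": rng.randint(512, 2048),
            "disk_gb": rng.randint(10, 50)} for _ in range(n_fog)]
    cloud = [{"mips": rng.randint(2000, 6000),   # faster data-centre VMs
              "mem_mb": rng.randint(4096, 16384),
              "disk_gb": rng.randint(100, 500)} for _ in range(n_vms - n_fog)]
    return fog, cloud

fog_nodes, cloud_nodes = generate_nodes(n_vms=100, seed=42)
print(len(fog_nodes), "fog nodes,", len(cloud_nodes), "cloud nodes")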

Experimental setting

The proposed work used MATLAB to realize the fog integrated cloud environment. The experiments were designed to use five batches comprising 600, 1200, 1800, 2400 and 3000 tasks respectively. The different dataset sizes allow more complicated task transfer scheduling to be tested: as the dataset grows, the scheduling problem becomes more difficult, ensuring rigorous testing of the model under a realistic fog-cloud workload. Each task is represented by four parameters: task ID, task size, task deadline, and task priority. The task ID is a unique transfer identifier, while the task size provides an estimate of the computational execution time. Expected Completion Time (ECT) is used as the parameter to compare the performance of JS and IJFA. In the computing environment, dataset placement has been evaluated for three different scenarios: (1) allocating the dataset for execution directly on the cloud, (2) allocating the dataset for execution directly on the fog, and (3) allocating the dataset for execution across the fog integrated cloud architecture using classification, the main focus of this work.

Experimental results and analysis

Experiments using the meta-heuristic scheduling algorithms JS and the proposed IJFA were conducted in the fog-cloud environment, and the findings are presented in this section. The simulation's goal is to compare the make-span of the two meta-heuristic scheduling methods; the better method is taken to be the one with the shortest make-span. Different test scenarios, built on the ECT matrix, are used to evaluate the two while ensuring a balanced workload across the VMs. In the proposed model, the task scheduling phase is preceded by the bi-factor classification phase, which improves overall performance by characterizing user tasks based on priority and deadline. Accordingly, the incoming tasks in a batch of jobs are classified as highly intensive, intensive, moderately intensive, or less intensive. These two parameters enable deadline-sensitive and essential tasks to be scheduled at the fog layer, decreasing latency and time delay. The categorized tasks are then scheduled, aiming to achieve a better makespan.

Scenario 1: batch size variation from 600 to 3000 tasks with the number of iterations fixed at 3000

In this case, five batches comprising 600 to 3000 tasks were scheduled using JS and IJFA, executed over 100 VMs and run for 3000 iterations. Figure  5 a-e presents the Expected Completion Time (ECT) of an optimized schedule for the five batches of 600, 1200, 1800, 2400 and 3000 tasks respectively, based on the JS and IJFA algorithms, through convergence curves. As discussed in  The proposed multi-aspect task scheduling approach  section, the tasks are classified into four categories, i.e., highly intensive, intensive, moderately intensive and low intensive, based on deadline and priority using the bi-factor classification algorithm. They are then offloaded to the cloud or fog layer of the hybrid architecture based on their category. In the proposed model, on average almost half of the requests generated in different trials are served by the fog, which directly improves the system's total performance. The scheduler then, using JS or IJFA, allocates requests to available nodes by calculating the expected completion time of each request of the batch and selecting the node with the minimum time among those available at the layer to which the request was offloaded, ensuring a balanced workload across the VMs. An average ECT score was computed for virtual machines at the cloud and fog layers. From Fig.  5 a-e, it is observed that IJFA converges to a better solution with a better convergence rate in all cases, from smaller batches of tasks to bigger ones. The better convergence of IJFA can be attributed to its stronger exploration, which reduces the search time compared to the traditional JS: by randomly picking any three solutions from the population and updating the population accordingly, IJFA explores locations that other members could not previously have reached.

Figure 5. Convergence graphs for JS and IJFA for batch sizes of (a) 600, (b) 1200, (c) 1800, (d) 2400 and (e) 3000 tasks

Table 5 summarises the optimized ECT for the various experiment scenarios. The results were obtained for every set of tasks over three different runs, and the average best and average worst ECT reported by JS and IJFA were observed. The results of these three random trials over different task sets indicate that the optimized ECT of the proposed IJFA is consistently lower than that of JS, owing to IJFA's efficient multi-aspect task scheduling, including the bi-factor classification. IJFA converges faster due to the improvement in the exploration phase and achieves a trade-off between exploration and exploitation with less time consumption thanks to its two-directional search strategy, resulting in better solutions. For instance, in the experiment with the batch of 600 tasks, the optimized ECT for IJFA is about 13 s in the average best scenario, roughly 24% lower than the average best of 17 s reported by JS. As far as the average worst scenario is concerned, IJFA performs on par with JS. The same trend can be seen in all the other batches of jobs. It can be concluded that IJFA clearly outperforms JS when used in the fog integrated cloud environment with randomly generated, heterogeneous tasks.

The ECT of the JS and IJFA implementations for the various task sets over 20 iterations is presented in Fig.  6 a-e as boxplots, one for each of the five batches of jobs. It is observed that the median for IJFA is lower than for JS in all cases. For instance, the median for batch I (600 tasks, 100 VMs) is 75 s for IJFA and 95 s for the traditional JS. Also, the ECT is dispersed over 13 s to 167 s for IJFA and 17 s to 186 s for JS. The same trend holds in the other four batches of jobs, where both the dispersion and the median of IJFA are lower than those of JS. The box plots reflect that IJFA performs more efficient task scheduling than JS, irrespective of task set size. The result remains the same even when the number of VMs is changed; those plots are not reproduced here to avoid redundancy.

Figure 6. Comparison of IJFA and JS over 20 iterations for batch sizes of (a) 600, (b) 1200, (c) 1800, (d) 2400 and (e) 3000 tasks

The performance of the proposed IJFA algorithm and the traditional JS has also been evaluated by feeding in different numbers of tasks, ranging from 600 to 4800, to observe the general trend of the two algorithms; the results are presented in Fig.  7 . This is significant because a wide range of data sizes supports more sophisticated testing of task transfer scheduling. As the data size grows, scheduling becomes more difficult, but IJFA still performs very well compared to the traditional JS. It is worth mentioning that the performance advantage of IJFA grows with increasing task set size: as the size of the batch increases, the difference in ECT reported by the two algorithms also increases. This increasing gap suggests that IJFA is more suitable for situations with large datasets, which is very appropriate for real-world scenarios, especially with IoT devices as the task generators.

Figure 7. ECT versus number of tasks

Scenario 2: performance evaluation of IJFA using fog integrated cloud, cloud-only and fog-only scheduling architectures

To understand the usefulness of the fog integrated cloud architecture over cloud-only and fog-only architectures, JS and the proposed IJFA were applied to the three architectures mentioned above. To realize this, five batches of tasks with sizes ranging from 600 to 3000 were created and scheduled on the three architectures separately: batch I comprising 600 tasks, batch II comprising 1200 tasks, and so on up to batch V comprising 3000 tasks. A comparative analysis of the minimized make-span trend for this study is presented in Table 6, with a bar chart representation in Fig.  8 a-c. The ECT observed with both JS and IJFA clearly shows that IJFA outperforms JS for all batch sizes over the randomly generated datasets.

Figure 8. ECT versus batch size for JS and IJFA for (a) the fog integrated cloud architecture, (b) the cloud-only architecture and (c) the fog-only architecture

Table 7 summarises the results obtained using IJFA as the scheduling strategy, in terms of ECT for varying batch sizes on the fog-only, cloud-only and fog integrated cloud architectures; these results are presented pictorially in Fig.  9 . It is observed that the low-capacity nodes of the fog-only architecture led to the maximum ECT for all five batches of jobs, followed by the cloud-only architecture. For instance, the ECT for the batch I dataset came out to be approximately 13 s, 31 s and 321 s for the fog integrated cloud, cloud-only and fog-only architectures respectively, and the same trend can be seen for the remaining batches. Therefore, the proposed IJFA algorithm performs very well in the hybrid fog-cloud environment owing to the classification of jobs prior to actual scheduling, which offloads high-priority, time-sensitive requests to the fog nodes and the rest to the cloud nodes. As per the analysis of the randomly generated requests in different trials, almost 50% of jobs were executed at the fog layer, resulting in a low ECT in the hybrid environment. In all cases, offloading all tasks to the fog layer resulted in the maximum ECT, as fog devices are limited in number and computationally constrained compared to the cloud-only and fog integrated cloud architectures. The study infers that it is best to integrate fog nodes and cloud nodes, scheduling according to task preferences, to achieve better performance.

Figure 9. ECT versus batch size for IJFA for the fog integrated cloud, cloud-only and fog-only architectures

This section has studied the experimental results of the proposed model in optimizing the execution of IoT tasks on the fog integrated cloud architecture. The effect of the batches of data on performance was analyzed under various scenarios. The experimental results show that optimization on the fog-only and cloud-only architectures faced scheduling challenges, since they did not classify and categorize the tasks based on their requirements and nature, leading to increased ECT in both scenarios. Also, with an increasing number of tasks, the complexity of job scheduling increases, leading to delays in response and, in the worst case, request failures. The integration of job classification with IJFA results in an efficient ECT reduction for the batch of jobs under numerous test cases. Compared to using the standard JS algorithm alone, the proposed two-phase scheduling algorithm allows adequate optimization time. The bi-factor classification phase characterizes the IoT requests based on deadline and priority, and the tasks are then scheduled using IJFA on the available computing resources, at either the cloud layer or the fog layer, with the selection ensuring optimal use of both for efficient job execution. This work therefore gains significance by addressing efficient resource provisioning in the fog integrated cloud environment for heterogeneous, delay-sensitive IoT tasks, reducing execution time and thereby the execution cost of the batch of jobs.

Conclusion and future works

The Internet of Things (IoT) has emerged as a prominent technology in automation, performing real-time tasks for various applications with different priorities and deadlines. An integrated fog-cloud architecture offers remote execution of these tasks with improved latency and better bandwidth availability. However, a suitable resource provisioning scheme is needed for offloading tasks to fog and cloud resources while meeting the task and resource requirements. This work proposes a multi-aspect task scheduling algorithm based on the Improved Jellyfish Algorithm (IJFA), leveraging the benefits of the integrated fog-cloud environment. The proposed bi-factor classification phase helps tasks be scheduled in the right place based on their deadline and priority. The scheduler minimizes task completion time while maximizing resource utilization, improving the overall QoS. The simulation results reveal that combining the classification phase with IJFA speeds up the completion of tasks according to their relevance, and the advantage of fog resources integrated with cloud resources in achieving superior task completion time has been demonstrated. The model was rigorously tested to evaluate its performance, from low load up to the real-world scenario of large volumes of heterogeneous IoT task requests. The proposed IJFA-based scheduler outperforms the traditional Jellyfish Search Optimizer (JS) under various test conditions, proving its effectiveness.

The authors have already started to take this work further, towards a multiobjective model minimizing response time and energy for the integrated fog-cloud architecture, aiming at improved QoS for the users while decreasing the cost for the providers.

Afshar A, Massoumi F, Afshar A, Mariño MA (2015) State of the Art Review of Ant Colony Optimization Applications in Water Resource Management. Water Resour Manage 29:3891–3904. https://doi.org/10.1007/s11269-015-1016-9


Banks A, Vincent J, Anyakoha C (2007) A review of particle swarm optimization. Part I: background and development. Nat Comput 6:467–484. https://doi.org/10.1007/s11047-007-9049-5


Fister I, Yang XS, Brest J (2013) A comprehensive review of firefly algorithms. Swarm Evol Comput 13:34–46. https://doi.org/10.1016/J.SWEVO.2013.06.001

Fister I, Perc M, Kamal SM, Fister I (2015) A review of chaos-based firefly algorithms: Perspectives and research challenges. Appl Math Comput 252:155–165. https://doi.org/10.1016/J.AMC.2014.12.006

Yang XS (2014) Preface. Studies in Computational Intelligence 585:v–vi. https://doi.org/10.1007/978-3-319-02141-6

Hussain K, Mohd Salleh MN, Cheng S, Shi Y (2019) Metaheuristic research: a comprehensive survey. Artif Intell Rev 52:2191–2233. https://doi.org/10.1007/s10462-017-9605-z

Yang X-S, Chien SF, Ting TO (2014) Computational Intelligence and Metaheuristic Algorithms with Applications. Sci World J 2014:425853. https://doi.org/10.1155/2014/425853

Fister I, Yang XS, Brest J, Fister D (2013) A brief review of nature-inspired algorithms for optimization. Elektroteh Vestn/Electrotech Rev 80:116–122


Soltanshahi M, Asemi R, Shafiei N (2019) Energy-aware virtual machines allocation by krill herd algorithm in cloud data centers. Heliyon 5:e02066. https://doi.org/10.1016/J.HELIYON.2019.E02066

Kesavaraja D, Shenbagavalli A (2018) QoE enhancement in cloud virtual machine allocation using Eagle strategy of hybrid krill herd optimization. J Parallel Distrib Comput 118:267–279. https://doi.org/10.1016/J.JPDC.2017.08.015

Usman MJ, Ismail AS, Chizari H et al (2019) Energy-efficient Virtual Machine Allocation Technique Using Flower Pollination Algorithm in Cloud Datacenter: A Panacea to Green Computing. J Bionic Eng 16:354–366. https://doi.org/10.1007/s42235-019-0030-7

Liu XF, Zhan ZH, Deng JD et al (2018) An Energy Efficient Ant Colony System for Virtual Machine Placement in Cloud Computing. IEEE Trans Evol Comput 22:113–128. https://doi.org/10.1109/TEVC.2016.2623803

Alresheedi SS, Lu S, Abd Elaziz M, Ewees AA (2019) Improved multiobjective salp swarm optimization for virtual machine placement in cloud computing. HCIS 9:15. https://doi.org/10.1186/s13673-019-0174-9

Li G, Wu Z Ant Colony Optimization Task Scheduling Algorithm for SWIM Based on Load Balancing. https://doi.org/10.3390/fi11040090

Natesan G, Chokkalingam A (2019) Optimal task scheduling in the cloud environment using a mean Grey Wolf Optimization algorithm. Int J Tech 10:126–136. https://doi.org/10.14716/ijtech.v10i1.1972

Sreenu K, Sreelatha M (2019) W-Scheduler: whale optimization for task scheduling in cloud computing. Cluster Comput 22:1087–1098. https://doi.org/10.1007/s10586-017-1055-5

Huang X, Li C, Chen H, An D (2020) Task scheduling in cloud computing using particle swarm optimization with time varying inertia weight strategies. Cluster Comput 23:1137–1147. https://doi.org/10.1007/s10586-019-02983-5

Chaudhary D, Singh Chhillar R (2013) A New Load Balancing Technique for Virtual Machine Cloud Computing Environment. Int J Comput Appl 69:37–40. https://doi.org/10.5120/12114-8498

Mohammad OKJ (2018) GALO: A new intelligent task scheduling algorithm in cloud computing environment. Int J Eng Technol (UAE) 7:2088–2094. https://doi.org/10.14419/ijet.v7i4.16486

Chaudhary D, Kumar B (2018) Cloudy GSA for load scheduling in cloud computing. Appl Soft Comput 71:861–871. https://doi.org/10.1016/J.ASOC.2018.07.046

Kaur M, Kadam S (2018) A novel multiobjective bacteria foraging optimization algorithm (MOBFOA) for multiobjective scheduling. Appl Soft Comput 66:183–195. https://doi.org/10.1016/J.ASOC.2018.02.011

Elaziz MA, Xiong S, Jayasena KPN, Li L (2019) Task scheduling in cloud computing based on hybrid moth search algorithm and differential evolution. Knowl Based Syst 169:39–52. https://doi.org/10.1016/J.KNOSYS.2019.01.023

Rajagopalan A, Modale DR, Senthilkumar R (2020) Optimal Scheduling of Tasks in Cloud Computing Using Hybrid Firefly-Genetic Algorithm. In: Satapathy SC, Raju KS, Shyamala K et al (eds) Advances in Decision Sciences, Image Processing, Security and Computer Vision. Springer International Publishing, Cham, pp 678–687


Pradeep K, Prem Jacob T (2018) A Hybrid Approach for Task Scheduling Using the Cuckoo and Harmony Search in Cloud Computing Environment. Wireless Pers Commun 101:2287–2311. https://doi.org/10.1007/s11277-018-5816-0

Gabi D, Samad Ismail A, Zainal A, et al Orthogonal Taguchi-based cat algorithm for solving task scheduling problem in cloud computing. https://doi.org/10.1007/s00521-016-2816-4

Gobalakrishnan N, Arun C (2018) A New Multi-Objective Optimal Programming Model for Task Scheduling using Genetic Gray Wolf Optimization in Cloud Computing. Comput J 61:1523–1536. https://doi.org/10.1093/comjnl/bxy009

Abualigah L, Alkhrabsheh M (2022) Amended hybrid multi-verse optimizer with genetic algorithm for solving task scheduling problem in cloud computing. J Supercomput 78:740–65. https://doi.org/10.1007/s11227-021-03915-0

Jeddi S, Sharifian S A water cycle optimized wavelet neural network algorithm for demand prediction in cloud computing. https://doi.org/10.1007/s10586-019-02916-2

Jayasena KPN, Li L, AbdElaziz M, Xiong S (2018) Multi-objective Energy Efficient Resource Allocation Using Virus Colony Search (VCS) Algorithm. 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). pp 766–773

Hamid Hussain Madni S, Shafie Abd Latiff M, Abdulhamid M, Ali J Hybrid gradient descent cuckoo search (HGDCS) algorithm for resource scheduling in IaaS cloud computing environment. https://doi.org/10.1007/s10586-018-2856-x

Elsherbiny S, Eldaydamony E, Alrahmawy M, Reyad AE (2018) An extended Intelligent Water Drops algorithm for workflow scheduling in cloud computing environment. Egypt Inform J 19:33–55. https://doi.org/10.1016/j.eij.2017.07.001

Manasrah AM, Ba Ali H (2018) Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing. Wirel Commun Mob Comput 2018:1934784. https://doi.org/10.1155/2018/1934784

Karthikeyan K, Sunder R, Shankar K et al (2020) Energy consumption analysis of Virtual Machine migration in cloud using hybrid swarm optimization (ABC-BA). J Supercomput 76:3374–3390. https://doi.org/10.1007/s11227-018-2583-3

Kalra M, Singh S (2015) A review of metaheuristic scheduling techniques in cloud computing. Egypt Inform J 16:275–295. https://doi.org/10.1016/J.EIJ.2015.07.001

Consortium O, Working A (2017) Open fog reference architecture for fog computing. Open Fog Consortium Architecture Working Group. pp 1–162

Chou JS, Truong DN (2021) A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl Math Comput 389:125535. https://doi.org/10.1016/j.amc.2020.125535

Houssein EH, Gad AG, Wazery YM, Suganthan PN (2021) Task Scheduling in Cloud Computing based on Meta-heuristics: Review, Taxonomy, Open Challenges, and Future Trends. Swarm Evol Comput 62:100841. https://doi.org/10.1016/J.SWEVO.2021.100841

Mandal T, Acharyya S (2015) Optimal task scheduling in cloud computing environment: Meta heuristic approaches. 2015 2nd International Conference on Electrical Information and Communication Technologies (EICT). pp 24–28

Raju R, Babukarthik RG, Chandramohan D et al (2013) Minimizing the makespan using Hybrid algorithm for cloud computing. 2013 3rd IEEE International Advance Computing Conference (IACC). pp 957–962

Zuo L, Shu L, Dong S, et al Special section on big data services and computational intelligence for industrial systems A Multiobjective Optimization Scheduling Method Based on the Ant Colony Algorithm in Cloud Computing. https://doi.org/10.1109/ACCESS.2015.2508940

Ramezani F, Jie, Farookh L et al (2014) Task-Based System Load Balancing in Cloud Computing Using Particle Swarm Optimization. Int J Parallel Prog 42:739–754. https://doi.org/10.1007/s10766-013-0275-4

He H, Xu G, Pang S, Zhao Z (2016) AMTS: Adaptive multiobjective task scheduling strategy in cloud computing. China Commun 13:162–171. https://doi.org/10.1109/CC.2016.7464133

Chaudhary D, Kumar B, Khanna R (2017) NPSO Based Cost Optimization for Load Scheduling in Cloud Computing. In: Thampi S, Martínez Pérez G, Westphall C, Hu J, Fan C, Gómez Mármol F. (eds) Security in Computing and Communications. SSCC 2017. Communications in Computer and Information Science, vol 746. Springer, Singapore. https://doi.org/10.1007/978-981-10-6898-0_9

Ramezani F, Lu J, Taheri J et al (2015) Evolutionary algorithm-based multiobjective task scheduling optimization model in cloud environments. World Wide Web 18:1737–1757. https://doi.org/10.1007/s11280-015-0335-3

Hamid Hussain Madni S, Shafie Abd Latiff M, Ali J, Abdulhamid M (2019) Multi-objective-Oriented Cuckoo Search Optimization-Based Resource Scheduling Algorithm for Clouds. Arab J Sci Eng 44:3585–3602. https://doi.org/10.1007/s13369-018-3602-7

Wu Z, Liu X, Ni Z et al (2013) A market-oriented hierarchical scheduling strategy in cloud workflow systems. J Supercomput 63:256–293. https://doi.org/10.1007/s11227-011-0578-4

AL-Amodi S, Patra SS, Bhattacharya S, Mohanty, JR, Kumar V, Barik RK (2022) Meta-heuristic Algorithm for Energy-Efficient Task Scheduling in Fog Computing. In: Dhawan A, Tripathi VS, Arya KV, Naik K. (eds) Recent Trends in Electronics and Communication. Lecture Notes in Electrical Engineering, vol 777. Springer, Singapore. https://doi.org/10.1007/978-981-16-2761-3_80

Liu Q, Wei Y, Leng S, Chen Y (2017) Task scheduling in fog enabled Internet of Things for smart cities. 2017 IEEE 17th International Conference on Communication Technology (ICCT). pp 975–980

Ghobaei-Arani M, Souri A, Safara F, Norouzi M (2020) An efficient task scheduling approach using moth-flame optimization algorithm for cyber-physical system applications in fog computing. Trans Emerg Telecommun Technol 31:1–14. https://doi.org/10.1002/ett.3770

Aburukba RO, AliKarrar M, Landolsi T, El-Fakih K (2020) Scheduling Internet of Things requests to minimize latency in hybrid Fog–Cloud​ computing. Future Gener Comput Syst 111:539–551. https://doi.org/10.1016/j.future.2019.09.039

Abdel-Basset M, El-Shahat D, Elhoseny M, Song H (2021) Energy-Aware Metaheuristic Algorithm for Industrial-Internet-of-Things Task Scheduling Problems in Fog Computing Applications. IEEE Internet Things J 8:12638–12649. https://doi.org/10.1109/JIOT.2020.3012617

Abdel-Basset M, Mohamed R, Chakrabortty RK, Ryan MJ (2021) IEGA: An improved elitism-based genetic algorithm for task scheduling problem in fog computing. Int J Intell Syst 36:4592–4631. https://doi.org/10.1002/int.22470

Ghaffari E (2019) Providing a new scheduling method in fog network using the ant colony algorithm

Rafique H, Shah MA, Islam SU et al (2019) A Novel Bio-Inspired Hybrid Algorithm (NBIHA) for Efficient Resource Management in Fog Computing. IEEE Access 7:115760–115773. https://doi.org/10.1109/ACCESS.2019.2924958

Hoseiny F, Azizi S, Shojafar M et al (2021) PGA: A Priority-aware Genetic Algorithm for Task Scheduling in Heterogeneous Fog-Cloud Computing. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). pp 1–6

Ali IM, Sallam KM, Moustafa N et al (2020) An Automated Task Scheduling Model using Non-Dominated Sorting Genetic Algorithm II for Fog-Cloud Systems. IEEE Trans Cloud Comput 1. https://doi.org/10.1109/TCC.2020.3032386

Hosseinioun P, Kheirabadi M, Kamel Tabbakh SR, Ghaemi R (2020) A new energy-aware tasks scheduling approach in fog computing using hybrid meta-heuristic algorithm. J Parallel Distrib Comput 143:88–96. https://doi.org/10.1016/j.jpdc.2020.04.008

Jayasena KPN, Thisarasinghe BS (2019) Optimized task scheduling on fog computing environment using meta heuristic algorithms. 2019 IEEE International Conference on Smart Cloud (SmartCloud). pp 53–58

Ghanavati S, Abawajy J, Izadi D (2022) An Energy Aware Task Scheduling Model Using Ant-Mating Optimization in Fog Computing Environment. IEEE Trans Serv Comput 15:2007–2017. https://doi.org/10.1109/TSC.2020.3028575

Cloud broker. (2022, June 30). In Wikipedia. https://en.wikipedia.org/wiki/Cloud_broker . Accessed 20 Feb 2022



Multi-Objective Task Scheduling Optimization in Spatial Crowdsourcing


1. Introduction
2. Related Work
2.1. Task Matching in SC
2.2. Task Scheduling Problem in SC
2.3. The Binary-Objective Optimization Problem in SC
3. The MOTSO Model in SC
3.1. The Ranking Strategy Algorithm
3.1.1. Task Execution Duration (TED) and Ranked Task Execution Duration (RTED)
3.1.2. Task Entropy (TE) and Ranked Task Entropy (RTE)
3.1.3. The Ranked Tables
3.2. Multi-Objective Particle Swarm Optimization
4. Performance Evaluation
4.1. The Performance of the Ranking Strategy Algorithm
4.2. Performance of the MOTSO Model
4.2.1. Maximizing the Number of Completed Tasks
4.2.2. Minimizing the Total Travel Costs (TTCs)
4.2.3. Minimizing the Standard Deviation of the Workload Balance
5. Conclusions

Alabbadi, A.A.; Abulkhair, M.F. Multi-Objective Task Scheduling Optimization in Spatial Crowdsourcing. Algorithms 2021, 14(3), 77. https://doi.org/10.3390/a14030077


Task Scheduling

You have a long list of tasks that you need to do today. To accomplish task i you need t_i minutes, and the deadline for this task is d_i. You need not complete a task at a stretch. You can complete a part of it, switch to another task, and then switch back.

You've realized that it might not be possible to complete all the tasks by their deadline. So you decide to do them in such a manner that the maximum amount by which a task's completion time overshoots its deadline is minimized.

Input Format

The first line contains the number of tasks, T. Each of the next T lines describes a task with two integers, its deadline d_i and its required time t_i.

Constraints

Output Format

Output T lines. The i-th line contains the value of the maximum amount by which a task's completion time overshoots its deadline, when the first i tasks on your list are scheduled optimally. See the sample input for clarification.

Sample Input

Sample Output

Explanation

The first task alone can be completed in 2 minutes, and so you won't overshoot the deadline. 

With the first two tasks, the optimal schedule can be: time 1: task 2 time 2: task 1  time 3: task 1

We've overshot task 1 by 1 minute, hence returning 1. 

With the first three tasks, the optimal schedule can be: time 1 : task 2 time 2 : task 1 time 3 : task 3 time 4 : task 1 time 5 : task 3 time 6 : task 3

Task 1 has a deadline 2, and it finishes at time 4. So it exceeds its deadline by 2. Task 2 has a deadline 1, and it finishes at time 1. So it exceeds its deadline by 0. Task 3 has a deadline 4, and it finishes at time 6. So it exceeds its deadline by 2.

Thus, the maximum time by which you overshoot a deadline is 2. No schedule can do better than this.

A similar calculation can be done for the case containing 5 tasks.
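Incidentally, this incremental problem yields to Jackson's rule: scheduling the pending work in non-decreasing deadline order (preemption is allowed) minimizes the maximum lateness, so after each new task arrives it suffices to re-scan that order. Below is a minimal Python sketch; the three (deadline, minutes) pairs in the demo are inferred from the explanation above, since the sample data itself is not reproduced here.

import bisect

def max_overshoots(tasks):
    # tasks: iterable of (deadline, minutes) pairs, in arrival order
    by_deadline = []                 # pending tasks, kept sorted by deadline
    results = []
    for d, t in tasks:
        bisect.insort(by_deadline, (d, t))
        elapsed, worst = 0, 0
        for deadline, minutes in by_deadline:
            elapsed += minutes                       # finish time in deadline order
            worst = max(worst, elapsed - deadline)   # overshoot of this task
        results.append(worst)
    return results

print(max_overshoots([(2, 2), (1, 1), (4, 3)]))   # -> [0, 1, 2]

For large inputs, the linear re-scan per arrival can be replaced by a balanced tree or segment tree over deadlines that maintains prefix sums of processing times.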

Activity or Task Scheduling Problem

Solution: Following the greedy strategy, we sort the jobs in decreasing order of their penalties so that the minimum total penalty is incurred.

In this problem, the uniprocessor machine will run for at most 6 time units, because 6 is the maximum deadline.

Let Ti (i = 1 to 7) denote the tasks.


T5 and T6 cannot be completed by their deadlines once the higher-penalty tasks are placed, so the penalty incurred is the sum of their penalties.

Another optimal schedule is:

(2 4 1 3 7 5 6)

There can be many other schedules but (2 4 1 3 7 5 6) is optimal.
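The greedy procedure can be sketched in Python as below: consider tasks in decreasing penalty order and place each in the latest free unit-length slot at or before its deadline; tasks that do not fit are rejected and pay their penalty. The seven-task instance in the demo is an assumption based on the classic textbook version of this problem, since the original task table is not reproduced here; under it, tasks T5 and T6 are rejected, consistent with the discussion above (the slot order produced may differ from the sequence shown while rejecting the same tasks).

def schedule_min_penalty(deadlines, penalties):
    n = len(deadlines)
    order = sorted(range(n), key=lambda i: -penalties[i])   # decreasing penalty
    slots = [None] * max(deadlines)                         # one unit slot per time step
    rejected = []
    for i in order:
        for s in range(deadlines[i] - 1, -1, -1):           # latest free slot first
            if slots[s] is None:
                slots[s] = i
                break
        else:
            rejected.append(i)                              # no slot: task is late
    return slots, rejected, sum(penalties[i] for i in rejected)

# assumed instance for T1..T7 (textbook data, not shown in the original)
d = [4, 2, 4, 3, 1, 4, 6]
w = [70, 60, 50, 40, 30, 20, 10]
slots, rejected, penalty = schedule_min_penalty(d, w)
print([s + 1 if s is not None else None for s in slots])    # accepted task per time unit
print([i + 1 for i in rejected], penalty)                   # -> [5, 6] 50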


Task scheduling in fog environment — challenges, tools & methodologies: A review

Computer Science Review (Elsevier), review article

Even though cloud computing offers many advantages, it can sometimes be a poor choice because of its slow response to existing requests, which leads to the need for fog computing. Scheduling tasks in a fog environment is a major challenge: IoT clients want their tasks executed in a timely manner and at lower cost, but they also expect them to be executed securely. In this paper, we review the advantages, limitations, and issues associated with scheduling algorithms proposed by a number of different researchers for fog environments. For fog computing developers, we compare different simulation tools to help them choose the product that is most appropriate and flexible for simulating the application they are considering. Finally, open issues and promising research directions associated with task scheduling in fog computing are discussed.



An Efficient Hybrid Job Scheduling Optimization (EHJSO) approach to enhance resource search using Cuckoo and Grey Wolf Job Optimization for cloud environment


Cloud computing has now evolved as an unavoidable technology in the fields of finance, education, internet business, and nearly all organisations. Cloud resources are practically accessible to cloud users over the internet to accomplish their desired tasks. The effectiveness and efficacy of cloud computing services depend on the tasks that cloud users submit and on the time taken to complete them. By optimising resource allocation and utilisation, task scheduling is crucial to enhancing the effectiveness and performance of a cloud system. In this context, cloud computing offers a wide range of advantages, such as cost savings, security, flexibility, mobility, quality control, disaster recovery, automatic software upgrades, and sustainability. According to a recent research survey, more and more tech-savvy companies and industry executives are recognizing and utilizing the advantages of cloud computing. Hence, as the number of cloud users increases, so does the need to regulate resource allocation. The scheduling of jobs in the cloud necessitates a smart and fast algorithm that can discover the resources that are accessible and schedule the jobs requested by different users. Consequently, for better resource allocation and job scheduling, a fast, efficient, and fault-tolerant job scheduling algorithm is required. Efficient Hybrid Job Scheduling Optimization (EHJSO) utilises Cuckoo Search Optimization and Grey Wolf Job Optimization (GWO). The cuckoo search optimization approach was inspired by the obligate brood parasitism of some cuckoo species (laying eggs in other species' nests). Grey wolf optimization (GWO) is a population-oriented AI technique inspired by grey wolf social structure and hunting strategies. Makespan, computation time, fitness, iteration-based performance, and success rate were used to compare against previous studies. Experiments show that the recommended method is superior.

Citation: Paulraj D, Sethukarasi T, Neelakandan S, Prakash M, Baburaj E (2023) An Efficient Hybrid Job Scheduling Optimization (EHJSO) approach to enhance resource search using Cuckoo and Grey Wolf Job Optimization for cloud environment. PLoS ONE 18(3): e0282600. https://doi.org/10.1371/journal.pone.0282600


1. Introduction

Cloud computing has completely revolutionised business since it enables the efficient pooling of computing resources. Cloud users can provision and distribute pay-per-use cloud computing resources using the Cloud Service Provider’s public interface [ 1 ]. Recent advancements in cloud computing enable numerous geographically distributed and interconnected cloud data centres to provide pay-per-use on-demand services to cloud customers more efficiently [ 2 ]. According to [ 28 ], cloud data centres will handle 94% of computing workload by 2021. Cloud computing’s novel concept has provided various benefits, including decreased infrastructure costs, execution time, and maintenance expenses, among others. However, the increased strain imposed by the execution of several cloud-based applications led to a decline in resource utilisation and a reduced return on investment [ 3 ]. Incorrect job scheduling among virtual machines is one of the key causes of a decline in cloud computing resource utilisation, resulting in a loss of processing performance. Therefore, task scheduling is essential in cloud computing to assure optimal resource use by providing acceptable performance under varying task restrictions, including execution deadlines.

In cloud computing, a variety of tasks may need to be scheduled on a large number of virtual machines [ 4 ] to save development time and enhance system performance. Work planning is therefore essential for ensuring the adaptability and dependability of cloud-based solutions. Task scheduling, moreover, has broad scope for optimization and contributes greatly to the development of dependable and adaptable dynamic solutions. The majority of cloud computing work scheduling algorithms are rule-based because they are simple to build, but rule-based algorithms perform badly when scheduling multidimensional jobs. Moreover, resource allocation and scheduling are not only associated with quality of service (QoS) but can also have a long-term effect on the revenue of cloud service providers. Researchers have explored a wide variety of alternatives for resource scheduling, which is currently recognised as one of the most important concerns in the field of cloud computing.

Job scheduling assigns user-supplied tasks to the correct cloud virtual machine [ 5 ]. Cloud consumers must sign a service level agreement with the cloud provider to stipulate service quality, execution timetable, budget, and work security. The user may request the computer resources needed to finish his job in compliance with the SLA [ 6 ]. The performance of cloud computing is directly affected by task scheduling. With proper work scheduling, more money can be generated, performance can be enhanced, and SLA violations may be minimised. Due to the rising complexity of cloud computing, the scheduling problem has become increasingly difficult to solve. In a cloud computing context, however, devising an effective plan to solve the issue of job scheduling becomes more difficult.

To solve the cloud computing job scheduling problem, enumeration methods or heuristic-based solutions [ 7 ] may be used. Enumeration, however, is not practical in the cloud context, since it requires generating every possible combination of work schedules before selecting the most effective one. This method is too laborious for a cloud computing setting with a significant amount of work to be scheduled.

This study’s primary objective is to search for and locate all accessible cloud resources, then rapidly distribute them in accordance with cloud users’ job requirements. In this instance, we begin by randomly assigning the solution based on the number of jobs and cloud nodes. Each solution’s goal function is then determined. This work develops a time- and quality-based objective function. After calculating fitness, we modify the solution using optimization techniques. Following the completion of the Cuckoo Search Optimization (CSO) process, the EHJSO technique then uses the Grey Wolf Job Optimization (GWO) algorithm to allocate the resources that have been found to be available. The following is a list of the significant contributions that this work has made.

This study shows that integrating CSO and GWO helps with scheduling (EHJSO). The remainder of this paper is organised as follows: Section 2 discusses related studies, Section 3 describes the recommended approach, Section 4 discusses the simulation results, and Section 5 concludes.

2. Literature survey

Sobhanayak Srichandan et al. [ 8 ] suggested a hybrid strategy for cloud job scheduling that employs generally available biologically inspired heuristic algorithms. A scheduling technique was devised to improve completion time while reducing energy use. However, the computational cost of the scheduling method was extraordinarily high.

Ratin Gautam et al. [ 9 ] established a genetic algorithm (GA) for work scheduling. The GA task scheduler evaluates the GA scheduling function in each cycle, implements job assignments, and evaluates task schedules using user satisfaction and virtual machine accessibility. The function iterates genetic operations to determine the best work plan, scheduling operations so as to minimise execution time and delay cost and thereby lower total processing costs. However, the optimization ignored multi-objective functions. T. Prem Jacob and K. Pradeep developed a hybrid of Cuckoo Search and Particle Swarm Optimization to reduce missed deadlines, but the remaining QoS parameters were not optimised.

A dynamic resource allocation system for cloud computing is discussed in the research paper of Saraswathi et al. [ 10 ]. Their studies highlight the distribution of virtual machines (VMs) to users based on an analysis of workload factors. The basic idea of this work is that lower-priority jobs (with a large time limit) should not impede the execution of higher-priority jobs (with a short deadline), and that VM resources for a consumer job should be allocated eagerly to meet the deadline. Because meeting these criteria is challenging, such scheduling solutions do not take into account the reliability and availability of the cloud computing environment.

A hybrid cloud resource provisioning technique that combines autonomic computing and reinforcement learning (RL) was proposed by Mostafa Ghobaei-Arani and colleagues [ 11 ]. The cloud layer concept created an autonomic resource provisioning architecture built around the Monitor, Analysis, Plan, and Execute (MAPE) control cycle.

Hadeel Alazzam et al. [ 12 ] proposed a hybrid Tabu-Harmony technique for cloud computing job scheduling. Combining Tabu and Harmony search improved the quality of search results: throughput was raised while the makespan and total cost were reduced. Despite this, it was unable to increase process efficiency through improved design. The scheduling problem itself can be viewed as a bin-packing problem.

An approach for load balancing that combines Firefly and improved multi-objective particle swarm optimization (IMPSO) was proposed by Francis Saviour Devaraj and colleagues [ 13 ]. The Firefly (FF) algorithm narrowed down the search space, after which the IMPSO technique determined the improved response. The IMPSO algorithm selected as the ideal particle the one with the shortest point-to-line distance to the global best (gbest); minimising this distance yielded the best particle candidates. A suitable average load was achieved to maximise resource utilisation and response time. However, memory and cost were not addressed to increase scheduling performance.

The execution time and cost of cloud-based process scheduling were optimised by Zong-Gan Chen et al. [ 14 ]. A multi-objective ant colony system was designed using a number of co-evolving colony populations. The optimization target is established using non-dominated solutions from a global archive, and a fresh pheromone-updating strategy is created. A complementary heuristic technique for colony restriction and pheromone update rule support was created to keep the search balanced. However, the approach did not adapt well to performance evaluation in multi-cloud systems.

G. N. Reddy et al. [ 15 ] introduced the whale optimization approach as a solution to the cloud data centre work scheduling problem. They used resource utilisation, energy use, and service quality as fitness indicators, and the whale optimization technique is used to optimise the suggested fitness function. The simulation findings indicate that the whale optimization method outperforms alternative methods for reducing energy consumption and optimising resource use while maintaining the service quality required by the service level agreement.

Mahendra Bhatu Gawali and colleagues came up with a heuristic methodology [ 16 ]. It utilised the Modified Analytic Hierarchy Process (MAHP) with Bandwidth Aware Divisible Scheduling (BATS) for work scheduling and resource allocation. Methodologies such as divide-and-conquer, BAR optimization, and longest expected processing time pre-emption (LEPT) are also utilised. Before being allocated to cloud resources, each work was regulated using the MAHP technique, and the BATS + BAR optimization method was then used to distribute the resources [ 17 ]. The bandwidth and load limitations of cloud services were also discussed, and resource-intensive tasks were avoided by leveraging LEPT pre-emption. However, neither the scheduling nor the load distribution was optimised by the method. An efficient work scheduling mechanism is needed to manage a pool of virtual machines in a cloud computing data centre while ensuring service quality and resource efficiency. Undoubtedly, VM failure lowers overall system throughput; this issue can be addressed by a VM recovery technique that enables VMs to be copied to another host [ 18 ]. According to Jin et al. [ 19 ], the scheduling of virtual machines (VMs) is a crucial component of the efficient administration of computing resources in a data centre. Compared to alternative resource provisioning, resource utilisation rose [ 20 ]; however, the multi-tiered cloud systems described in [ 21 ] did not include admission control approaches. According to Y. Ling et al. [ 22 ], cloud virtualization technology enables users to access computing resources through virtual machines (VMs) and rent them to businesses or individual users. As cloud faults decrease performance, they should be controlled using a fault-tolerance technique [ 23 ]. According to Gupta and Deep [ 24 ], such a problem might be brought on by bad hardware, a broken virtual machine, network congestion, or a broken application. A hybrid GGWO algorithm has been proposed [ 25 ] which combines the crossover and mutation operations of GA with the exploration and exploitation stages of GWO; that work demonstrates the superiority of GGWO, with high accuracy compared to different optimization algorithms in terms of root mean squared error (RMSE) and computational time.

Due to the discontinuous nature of the solution to the typical FMS scheduling problem, the technique of Tawhid et al. has been slightly modified in its Lévy flight operator to select the optimal task. It is reasonable to believe that the principal advantages of cloud computing (quantifiable services, adaptability, and resource pooling) offer the best solution to problems with cost, compatibility, and a lack of IT personnel [ 26 ].

As a result of the economies of scale made feasible by cloud computing, a rising number of enterprises are deciding to deploy their applications in cloud-hosted data centres [ 27 ]. Scheduling all incoming jobs while adhering to the service delay bound is difficult for a private cloud [ 28 ]. L. Maria Michael Visuwasam [ 29 ] developed a method for scheduling work that maximises the profit of a private cloud while retaining the associated latency limitation; this scheduling issue was formulated and resolved using a heuristic approach. Based on application requirements, Devi K. et al. [ 30 ] proposed a way to optimise data centre resources and enable cloud computing.

A. Sermakani [ 31 ] proposed a load balancing and scheduling method that ignored the sizes of individual jobs; the author took into account the server's refresh times when processing queries. K. Balachander [ 32 ] introduced task scheduling that treats bandwidth as a resource; the development of a nonlinear programming paradigm made resource allocation between tasks possible. Priority-based job scheduling for cloud computing was proposed by Praveen D. S. [ 33 ], in which a multiplicity of characteristics and features is evaluated when making judgments. The primary focus of the work of Subramani, Neelakandan, and colleagues [ 34 ] is task scheduling that takes numerous restrictions into consideration. Given the current state of the art, the authors of this study are inspired to continue their investigation into task scheduling and resource allocation.

3. Proposed system

3.1. Job scheduling in cloud computing

Cloud resources can be accessed from numerous sites and are not all located in one place. A user-supplied query may involve numerous tasks, and each job may require its own set of resources to run. The task scheduler assigns jobs to the appropriate resources. Fig 1 depicts the cloud-based job scheduling architecture.

Fig 1. Cloud-based job scheduling architecture. https://doi.org/10.1371/journal.pone.0282600.g001

In Fig 1 , "U1, U2,… UN" represents the users, "W1, W2,… WN" the works or queries supplied by the users, "J1, J2,… JM" the jobs required to complete each work or query, and "R1, R2,… RP" the resources required to complete them. As an illustration, a user submits a query whose completion may involve many jobs; each job is accomplished with a unique set of resources, which may be stored anywhere in the cloud. The job scheduler must assign each job to the right resource, so an effective technique is required to correctly schedule jobs in a cloud computing environment. A novel resource searching and allocation method for scheduling cloud-based workloads is proposed and depicted in Fig 2 . The Efficient Hybrid Job Scheduling Optimization (EHJSO) algorithm generates preliminary solutions based on user-supplied queries. The Cuckoo Search Optimization (CSO) and Grey Wolf Job Optimization (GWO) algorithms then evaluate each solution, and the EHJSO algorithm obtains the best solutions from them. After EHJSO searches for and finds the optimal resources suitable to execute the given tasks, the final optimal solution is used to schedule the jobs.

Fig 2. Steps of the proposed EHJSO scheduling approach. https://doi.org/10.1371/journal.pone.0282600.g002

In the section that follows, we discuss the hybrid strategy developed to plan jobs successfully in a cloud computing environment. Fig 2 depicts the steps taken to implement the proposed plan.

As shown in Fig 2 , the user-supplied queries are initially used to develop preliminary solutions for the hybrid EHJSO algorithm. The best EHJSO solutions are then sent to the algorithms Cuckoo Search Optimization (CSO) and Grey Wolf Job Optimization (GWO). The jobs are scheduled according to the optimal solution determined by the Cuckoo Search Optimization (CSO) and Grey Wolf Job Optimization (GWO) algorithms.

3.1.1. Preliminary solutions.

Based on the user’s inquiries, the initial solutions for processing the hybrid EHJSO algorithm are generated. The user submits a query to finish a task, and the task is completed in accordance with the user’s specifications. Numerous jobs will be executed by the query, each of which will use a different set of resources. It should be recognised that not every resource can complete every task at once. Although different resources can complete the same task, their efficiency and effectiveness will differ. The execution time and resource quality for a sample of jobs are shown in Table 1 .

Table 1. Execution time and resource quality for a sample of jobs. https://doi.org/10.1371/journal.pone.0282600.t001

Q_1 = {J_1, J_2, J_4}, Q_2 = {J_1, J_2, J_3}, …, Q_N = {J_1, J_3, J_4, J_M}

In the representation above, the jobs {J_1, J_2, J_4} must be carried out to fulfil the user's first query, the jobs {J_1, J_2, J_3} to fulfil the second query, and the jobs {J_1, J_3, J_4, J_M} to fulfil the Nth query. A preliminary solution is produced from these user-provided queries.
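As a rough illustration of how such preliminary solutions can be generated, the sketch below randomly assigns every job of every query to one of P resources. The query-to-jobs map and the resource count are illustrative assumptions, not values taken from the paper.

import random

QUERIES = {1: ["J1", "J2", "J4"], 2: ["J1", "J2", "J3"]}   # query -> required jobs
P = 4                                                      # assumed number of resources

def random_solution():
    # one (job -> resource) assignment per query
    return {q: {j: random.randrange(P) for j in jobs} for q, jobs in QUERIES.items()}

population = [random_solution() for _ in range(30)]        # preliminary solutions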

3.1.2 Fitness evaluation.


In the fitness function, fit ( S_i ) represents the fitness of the i-th solution; T_max is the maximum amount of time a resource may spend to complete a task; T_total is the total amount of time required by the resources to accomplish all tasks; P represents resource quality; and M represents the total number of tasks. Two tasks from different users cannot be executed concurrently on the same resource, nor can two jobs from the same user be executed concurrently within the same time frame.
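Since the fitness formula itself is not reproduced here, the following sketch only illustrates one plausible combination of the symbols just described: fitness rises as the total completion time T_total shrinks relative to the bound T_max and as the resource quality P grows. The combination is an assumption for illustration, not the paper's equation.

def fitness(assignment, exec_time, quality, t_max):
    # assignment: list of (job, resource) pairs
    t_total = sum(exec_time[(j, r)] for j, r in assignment)        # T_total
    p = sum(quality[r] for _, r in assignment) / len(assignment)   # mean quality P
    m = len(assignment)                                            # number of tasks M
    return (m * t_max - t_total) * p                               # higher is better (assumed form)

demo = [("J1", 0), ("J2", 1)]
times = {("J1", 0): 3.0, ("J2", 1): 5.0}
print(fitness(demo, times, quality={0: 0.9, 1: 0.8}, t_max=10.0))  # -> 10.2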

3.2 Cuckoo search algorithm

Cuckoo search (CS) is a population-based metaheuristic stochastic global search approach. In the algorithm, cuckoo eggs stand in for candidate solutions. The CS algorithm is characterised by three rules: each cuckoo lays one egg at a time in a randomly chosen nest; the most comfortable nests with eggs of the highest quality (solutions) carry over to the next generation; and the number of available host nests is fixed, with a probability P_a ∈ [0,1] that a host discovers at least one alien egg. If a host finds a cuckoo egg, it may throw the egg away or abandon the nest and build a new one.

x_i^(t+1) = x_i^(t) + α ⊕ Lévy(λ)

where α > 0 is the step size and a fraction P_a of the n worst nests is abandoned and rebuilt at each generation.

Algorithm 1: Cuckoo search algorithm

Initialization: set generation t = 1; create an initial population of n host nests x_i (i = 1, 2, …, n)
While (t < Maximum Generation) or (stop criterion)
    Get a cuckoo (say i) randomly by Lévy flights
    Evaluate fitness F_i for cuckoo i
    Choose a nest among n (say j) randomly
    If (F_i > F_j) then
        Replace j by the new solution
    End if
    Abandon a fraction (P_a) of the worse nests and build new ones
    Keep the best solutions (or nests with quality solutions)
    Rank the solutions and find the current best
    Update the generation number t = t + 1
End while
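For concreteness, here is a compact, self-contained Python rendering of Algorithm 1 on a toy continuous objective (the sphere function). The Lévy step uses Mantegna's approximation; the population size, abandonment fraction P_a, and step size alpha are illustrative defaults, not settings taken from the paper.

import math, random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=5, n=15, p_a=0.25, alpha=0.01, generations=200):
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(nests, key=f)
    for _ in range(generations):
        i = random.randrange(n)                               # pick a cuckoo
        new = [x + alpha * levy_step() for x in nests[i]]     # Levy flight
        j = random.randrange(n)                               # random nest
        if f(new) < f(nests[j]):                              # better egg replaces it
            nests[j] = new
        nests.sort(key=f)                                     # abandon the worst P_a nests
        for k in range(int(n * (1 - p_a)), n):
            nests[k] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=f)                     # keep the best solution
    return best

sphere = lambda x: sum(v * v for v in x)
print(sphere(cuckoo_search(sphere)))   # should approach 0

Minimisation is assumed here (lower f is better), which flips the comparison relative to the maximising pseudocode above.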

3.3. Grey Wolf Optimisation algorithm

GWO is a swarm-based metaheuristic introduced by Mirjalili et al. in 2014. It is modelled on the social behaviour of grey wolves in the wild: the pursuit of prey through stalking and hunting mirrors the pursuit of the best solution. Grey wolves prefer to live in packs, typically of five to twelve individuals. The wolves in a pack are divided into four groups according to social position, which structures the hunting process. The alpha (α), which can be male or female, leads the pack and makes decisions about hunting, waking, and sleeping. The beta (β) wolves form the second level and assist the alpha with decision-making and group coordination. The delta (δ) wolves, the third rank, fill important roles such as carer, sentinel, and hunter. The omega (ω) wolves occupy the final level; although weakest in the hierarchy, they serve as scapegoats who must obey the mandates of the higher ranks.

3.3.1. The mathematical model of Grey Wolf Optimization (GWO) algorithm.

The GWO algorithm mimics the grey wolf social order and hunting strategy, with four levels:

Level-1: Alpha (α): makes the decisions (for example, about hunting, wake-up time, and sleeping location).

Level-2: Beta (β): the strongest candidate to succeed the alpha as the new leader; acts as an advisor or consultant to the alpha.

Level-3: Delta (δ): wolves that submit to the alphas and betas; they perform a variety of roles, including scouts, sentinels, elders, group carers, and hunters.

Level-4: Omega (ω): the weakest wolves in the hierarchical model; they serve as scapegoats and must obey all higher-ranking wolves.

3.3.2. Mathematical model and algorithm.

z_i = 0 if (|y_i| mod 2) ∈ [0, 0.5) ∪ [1.5, 2); z_i = 1 if (|y_i| mod 2) ∈ [0.5, 1.5)    (19)

Here z_i denotes the binary (discrete) value, 0 or 1, and y_i denotes the continuous value produced by the initialization and updating phases of the algorithm. According to Eq ( 19 ), the binary value is 0 if the absolute value of the residual falls between 0 and 0.4999 or between 1.5 and 1.9999, and 1 if it falls between 0.5 and 1.4999.
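Under that reading of Eq (19), the continuous-to-binary conversion can be sketched in a few lines (the interval boundaries are taken from the description above):

def binarize(y):
    # Eq (19), as reconstructed: z = 0 when |y| mod 2 lies in [0, 0.5)
    # or [1.5, 2), and z = 1 when it lies in [0.5, 1.5)
    r = abs(y) % 2
    return 1 if 0.5 <= r < 1.5 else 0

print(binarize(1.7), binarize(0.8))   # -> 0 1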

After each iteration of the algorithm, the remaining wolves in the pack are drawn toward the first three optimal locations, those of the alpha, beta, and delta wolves, as seen in Fig 3 . The alpha is the solution (position) with the best classification accuracy, followed by the beta and the delta. At the end of each repetition, the classification algorithm is trained and validated, and the accuracy of the classifier is computed for each subsection (solution) of the position vector.

Fig 3. Wolves moving toward the positions of the alpha, beta, and delta leaders. https://doi.org/10.1371/journal.pone.0282600.g003
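The paper's update equations are not reproduced above, so the sketch below follows the standard GWO position update of Mirjalili et al.: every wolf moves toward the average of three positions implied by the alpha, beta, and delta leaders, with the coefficient a decaying linearly from 2 to 0 over the run.

import random

def gwo_step(wolves, fitness, a):
    # one GWO iteration: rank the pack, then move every wolf toward the
    # average of the positions suggested by the three leaders
    ranked = sorted(wolves, key=fitness)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    new_pack = []
    for x in wolves:
        pos = []
        for d in range(len(x)):
            guided = []
            for leader in (alpha, beta, delta):
                r1, r2 = random.random(), random.random()
                A, C = 2 * a * r1 - a, 2 * r2
                D = abs(C * leader[d] - x[d])      # distance to the leader
                guided.append(leader[d] - A * D)   # position implied by that leader
            pos.append(sum(guided) / 3.0)
        new_pack.append(pos)
    return new_pack

sphere = lambda x: sum(v * v for v in x)
pack = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
for t in range(50):
    pack = gwo_step(pack, sphere, a=2 * (1 - t / 50))   # a decays from 2 to 0
print(min(sphere(w) for w in pack))                     # approaches 0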

fitness = W_1 × Accuracy + W_2 × (1 − S / T)

Each subset contains a list of resources. If two subsets achieve the same accuracy but differ in the number of selected features, the subset with fewer features is chosen. Furthermore, the weights (W_1) and (W_2) in the above calculation are configurable, with (W_1) multiplying the accuracy and (W_2) the complement of the fraction of selected features (S selected out of T total).

4. Results and discussion

4.1. Experimental setup

In this section, the usefulness of the suggested job scheduling technique is reviewed and analysed. To test the technique, implemented in Java (JDK 1.8) on CloudSim, a workstation with an i5 processor running at 2.30 GHz, 8 GB of RAM, and a 64-bit version of Windows 10 was used.

4.2 Performance analysis and comparison

By varying the number of iterations and the preliminary solution provided in response to three distinct inputs, the performance of the proposed EHJSO technique is compared to that of alternative job scheduling algorithms in terms of fitness achieved and time needed to execute the scheduled jobs.

4.3. Makespan

Table 2 and Fig 4 present in full the makespan comparison between the EHJSO methodology and other methodologies. The results show that the EHJSO method outperformed the other techniques in all aspects. For example, with 20 iterations, the EHJSO method took only 96.14 sec, while existing techniques like BAT, WBAT, Firefly and BLA had makespans of 110.37 sec, 106.38 sec, 102.35 sec and 98.15 sec, respectively. Similarly, for 100 iterations, the EHJSO method had a makespan of 95.83 sec while BAT, WBAT, Firefly and BLA had 118.54 sec, 108.37 sec, 103.86 sec and 100.98 sec, respectively.

Fig 4. Makespan comparison. https://doi.org/10.1371/journal.pone.0282600.g004

Table 2. Makespan comparison. https://doi.org/10.1371/journal.pone.0282600.t002

4.4. Computation time

In comparison to other methodologies, the computation time analysis of the EHJSO methodology is displayed in Table 3 and Fig 5 . The data clearly show that the EHJSO method outperformed the other techniques in all aspects. For example, with 20 iterations, the EHJSO method took only 216.65 ms, while existing techniques like BAT, WBAT, Firefly and BLA had computation times of 417.76 ms, 336.75 ms, 302.54 ms and 263.76 ms, respectively. Similarly, for 100 iterations, the EHJSO method had a computation time of 251.54 ms while BAT, WBAT, Firefly and BLA had 458.17 ms, 392.87 ms, 359.27 ms and 289.43 ms, respectively.

Fig 5. Computation time comparison. https://doi.org/10.1371/journal.pone.0282600.g005

Table 3. Computation time comparison. https://doi.org/10.1371/journal.pone.0282600.t003

4.5. Fitness

The fitness of the EHJSO technique is compared to other methods in Table 4 and Fig 6 . The data clearly show that the EHJSO method outperformed the other techniques in all aspects. For example, with 20 iterations, the EHJSO method achieved a fitness of 7.268 sec, while existing techniques like BAT, WBAT, Firefly and BLA had fitness values of 13.764 sec, 11.437 sec, 10.873 sec and 8.517 sec, respectively. Similarly, for 100 iterations, the EHJSO method had a fitness of 8.248 sec while BAT, WBAT, Firefly and BLA had 13.378 sec, 13.427 sec, 11.864 sec and 10.817 sec, respectively.

Fig 6. Fitness comparison. https://doi.org/10.1371/journal.pone.0282600.g006

Table 4. Fitness comparison. https://doi.org/10.1371/journal.pone.0282600.t004

4.6. Performance based on iterations

Table 5 and Fig 7 describe the analysis of performance based on iterations of the EHJSO technique against the existing methods. The findings make it abundantly evident that the suggested approach is superior to the other methods in every respect. For example, with 20 iterations, the EHJSO method achieved 88.22% performance, while the existing methods BAT, WBAT, Firefly and BLA achieved 76.92%, 77.95%, 80.87% and 83.69%, respectively. Similarly, with 100 iterations, the proposed method achieved 93.86% while BAT, WBAT, Firefly and BLA achieved 78.74%, 81.54%, 83.28% and 86.28%, respectively. This proves that the EHJSO technique delivers higher performance across iterations.

Fig 7. Performance based on iterations. https://doi.org/10.1371/journal.pone.0282600.g007

Table 5. Performance based on iterations. https://doi.org/10.1371/journal.pone.0282600.t005

4.7. Success rate

In Table 6 and Fig 8 , the success rate of the EHJSO methodology is compared with the existing methods. The data demonstrate that the suggested technique performs better than the alternatives in all respects. For example, with 20 iterations, the EHJSO method achieved a success rate of 95.63%, while the existing methods BAT, WBAT, Firefly and BLA achieved 85.64%, 87.84%, 90.16% and 93.15%, respectively. Similarly, with 100 iterations, the proposed method achieved a 97.17% success rate while BAT, WBAT, Firefly and BLA achieved 86.93%, 89.72%, 92.34% and 94.89%, respectively. This proves that the EHJSO technique delivers higher performance with a greater success rate.

Fig 8. Success rate comparison. https://doi.org/10.1371/journal.pone.0282600.g008

Table 6. Success rate comparison. https://doi.org/10.1371/journal.pone.0282600.t006

5. Conclusion

In this research, we propose a hybrid solution for task scheduling that combines the advantages of the Cuckoo search method and the Grey Wolf job optimization algorithm; together, these algorithms yield the best results, as measured across different metrics. The newly generated preliminary solutions are fed into the EHJSO algorithm, which optimises job scheduling by taking into consideration execution time and the quality of currently available resources. The scheduled jobs are then executed, and the user is presented with the output relevant to the submitted query. The proposed EHJSO method is evaluated and compared with other similar works on cloud resource allocation, taking into account both the total time needed to complete the given tasks and the fitness attained. The proposed method outperformed the competing algorithms both in fitness and in the time required to carry out the scheduled jobs. To emphasise the performance of EHJSO, various relevant metrics are measured and compared with other works as well. More research is required to see whether the proposed method can be applied to a variety of large-scale and real-world optimization problems.


A back adjustment based dependent task offloading scheduling algorithm with fairness constraints in VEC networks


Computer Networks: The International Journal of Computer and Telecommunications Networking


In Vehicular Edge Computing (VEC) networks, task offloading scheduling has been drawing more and more attention as an effective way to relieve the computational burden of vehicles. However, with the intelligent and networked development of vehicles, the complex data dependency between in-vehicle tasks brings challenges to offloading scheduling. Moreover, scheduling fairness has a growing impact on the average Quality of Service (QoS) of vehicles in the network. To this end, we propose a dependent task offloading scheduling algorithm with fairness constraints based on a back adjustment mechanism. First, to solve the execution constraint problem caused by dependent tasks and the scheduling fairness problem in multi-user scenarios, a two-level task sorting algorithm is given to determine the scheduling sequence of tasks. Then, the sequential task offloading scheduling process is modeled as a Markov Decision Process (MDP) and solved by a reinforcement learning method. Finally, a back adjustment mechanism is designed to re-sort the task sequence and achieve the required scheduling fairness through an iterative process. The simulation results show that the proposed algorithm significantly improves the scheduling fairness and reduces the average application completion time compared with other algorithms.


https://dl.acm.org/doi/10.1016/j.comnet.2022.109552



Genetic job shop machine task scheduling


Job shop scheduling is a challenging problem for mathematical programming. Yet, when applicable, the operational impact can be great. In today’s blog post I will contribute to this topic with a technical example covering genetic job scheduling.

A simple job shop machine task scheduling problem

The example that I suggest comprises 3 machines (M1, M2, and M3) and 5 tasks (T1, T2, T3, T4, and T5) that must be processed in a job shop. Each task can be processed on any machine, but certain constraints limit the possible sequence of tasks on each machine.

For example, the constraints might be as follows:

I now want to find a task schedule on each machine that minimizes the total processing time for all tasks.

Job shop task scheduling with genetic algorithms

To solve this problem using a genetic algorithm I represent a schedule as a sequence of integers. Each integer represents the task to be processed next. For example, a schedule [1, 2, 3, 4, 5] means that we process task 1 on machine 1, followed by task 2 on machine 2, followed by task 3 on machine 3, and so on.

We can use a simple fitness function that calculates the total processing time for a given schedule, taking into account the constraints. For example, we might calculate the processing time as follows:
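The snippet that originally followed did not survive the page extraction. Below is a minimal sketch of such a fitness function in Python, under the encoding described above (the task in position p runs on machine p mod 3); the processing-time table, the precedence constraints, and the fixed penalty for violating them are illustrative assumptions.

```python
# Assumed data: PROCESSING_TIMES[t][m] is the time task t needs on machine m;
# PRECEDENCE lists pairs (a, b) meaning task a must finish before task b.
PROCESSING_TIMES = {
    1: [4, 6, 5], 2: [3, 7, 4], 3: [6, 2, 8], 4: [5, 5, 3], 5: [7, 4, 6],
}
PRECEDENCE = [(1, 3), (2, 5)]      # illustrative sequence constraints

def fitness(schedule, n_machines=3):
    """Total completion time of a schedule (lower is better).

    `schedule` is a task permutation such as [1, 2, 3, 4, 5]; the task in
    position p runs on machine p % n_machines, matching the encoding above.
    """
    finish = {}                      # task -> finish time
    machine_free = [0] * n_machines
    for pos, task in enumerate(schedule):
        m = pos % n_machines
        # a task starts once its machine is free and its predecessors are done
        ready = max([machine_free[m]] +
                    [finish[a] for a, b in PRECEDENCE if b == task and a in finish])
        finish[task] = ready + PROCESSING_TIMES[task][m]
        machine_free[m] = finish[task]
    # penalize schedules that order a successor before its predecessor
    penalty = sum(100 for a, b in PRECEDENCE
                  if schedule.index(a) > schedule.index(b))
    return max(finish.values()) + penalty
```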

We can then use a genetic algorithm to generate and evolve schedules until we find a good one.

Genetic scheduling algorithm example

Here’s an example of how the algorithm might work:
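The worked example is likewise missing from the scraped page; here is a minimal generate-and-evolve loop that uses the fitness function sketched above, with order crossover and swap mutation. Population size, mutation rate, and generation count are arbitrary illustrative choices.

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice from p1, fill the rest from p2."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [t for t in p2 if t not in child]
    return [rest.pop(0) if g is None else g for g in child]

def mutate(schedule, rate=0.2):
    """Swap two positions with probability `rate`."""
    s = list(schedule)
    if random.random() < rate:
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
    return s

def evolve(tasks, generations=200, pop_size=30):
    """Truncation selection + OX crossover + swap mutation."""
    pop = [random.sample(tasks, len(tasks)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # lower fitness is better
        parents = pop[:pop_size // 2]
        children = [mutate(order_crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = evolve([1, 2, 3, 4, 5])
print(best, fitness(best))
```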

Job shop scheduling problems are known to be NP-hard, which means that finding the optimal solution is computationally intractable for large problem instances. As a result, genetic algorithms may take a long time to converge, or may not be able to find the optimal solution at all.

Final remarks and related job shop scheduling content

Job shop scheduling is a challenging subject. Mathematical programming, and in this case genetic algorithms, can be effective in some cases. In other cases, problems can be too complex to solve analytically with reasonable effort. Should this be the case, discrete-event simulation can help overcome some of the challenges of applying mathematical programming. The following SCDA blog post describes the challenges of job shop scheduling:

You can read more about simulation, especially discrete-event simulation, in the following SCDA publications. Simulation modelling can implement scheduling solutions that overcome those challenges of mathematical programming that make it unfit for application to real-world problems.

Constraint programming is another approach to job shop scheduling that you can take. Here is an example that can get you started:



DRLBTS: deep reinforcement learning-aware blockchain-based healthcare system

Scientific Reports volume 13, Article number: 4124 (2023)


The Industrial Internet of Things (IIoT) is the new paradigm for performing different healthcare applications with different services in daily life. Healthcare applications based on the IIoT paradigm are widely used to track patients' health status using remote healthcare technologies. Complex biomedical sensors exploit wireless technologies and remote services, in the form of industrial workflow applications, to perform different healthcare tasks, such as monitoring heartbeat, blood pressure, and others. However, existing industrial healthcare technologies still have to deal with many problems, such as security, task scheduling, and the cost of processing tasks in IIoT-based healthcare paradigms. This paper proposes a new solution to the above-mentioned issues and presents the deep reinforcement learning-aware blockchain-based task scheduling (DRLBTS) algorithm framework with different goals. DRLBTS provides security and makespan-efficient scheduling for healthcare applications. It then shares secure and valid data between connected network nodes after the initial assignment and data validation. Statistical results show that DRLBTS is adaptive and meets the security, privacy, and makespan requirements of healthcare applications in the distributed network.

Introduction

The usage of the IIoT paradigm based on machine learning schemes has been rising day by day 1; the goal is to offer different automated, machine learning-enabled healthcare services to users. Digital healthcare applications such as COVID-19 detection systems, heartbeat monitoring systems, cancer detection systems, and many others are executed on distributed IIoT networks 2. The IIoT network is made up of different technologies, such as IoT devices (like healthcare sensors), blockchain technologies, different wireless technologies (e.g., Bluetooth and 5G/6G), and cloud computing (e.g., fog nodes and edge nodes). These technologies are exploited to run IIoT healthcare applications 3. Task scheduling is the most important mechanism of the IIoT paradigm for healthcare applications to meet their quality of service needs during execution across these technologies. There are different types of healthcare applications, such as workflow, coarse-grained, and object-based fine-grained tasks. Moreover, all medical services are carried out according to a sequential workflow pattern. Because of this, it is hard to schedule healthcare workflow applications based on the quality of service they need in the IIoT paradigm. Generally, healthcare workflow applications are scheduled on cloud computing resources distributed across different networks. Static scheduling causes significant problems for healthcare applications: for example, the assignment of healthcare workflows to cloud resources by a static scheduler cannot be changed if the performance of a healthcare application degrades while it is running. Dynamic scheduling is a good way to make changes during the execution of a healthcare workflow application on the cloud. Nevertheless, dynamic schedulers continue to fail due to the limited capacity of edge computing resources for applications.

To solve workflow healthcare application scheduling problems in distributed cloud computing, many adaptive task scheduling schemes have been suggested in the literature, including many supervised and unsupervised learning-based scheduling approaches for predicting and meeting workflows' quality of service requirements. Reinforcement learning-based schedulers optimize healthcare application performance with q-learning policies and value functions. The IIoT-based healthcare problem is not like a game theory problem or a factory automation problem; it is a dynamic problem with live data. Moreover, since these healthcare applications are distributed and offload their data to various independent nodes, security issues are the primary challenge on decentralized, uniform nodes.

Recently, many IIoT applications have been implemented on decentralized blockchain technology for independent nodes, where data tampering and security issues are somewhat reduced in the network 4. Blockchain technology comes in different types: public, private, consortium, and community. Ethereum, Corda, Hyperledger, IBM's platform, and many more are blockchain frameworks already in use. Many machine learning models have been combined with blockchain technology to control load balancing between resources and energy consumption in IoT networks 5, 6. Deep learning is a type of machine learning in which different artificial neural networks can be used to improve many IoT system metrics. However, all existing machine learning-enabled industrial IoT systems support only financial applications; healthcare applications are widely ignored in the literature 6, 7, 8.

Workloads such as fine-grained objects and cluster workloads are deployed in the healthcare system. These workloads are set up on an IoT fog cloud, which reduces processing time. The current IIoT paradigm has different variants, like the Internet of Medical Things (IoMT) and the Industrial Internet of Healthcare Things (IoHT); these paradigms only assume the workloads mentioned earlier on the edge cloud network with different constraints (like delay, energy, tardiness, and resources). The existing IoMT and IoHT systems use blockchain technology but have not paid much attention to workflow applications, which run on different computing nodes due to limited resources.

Single agent-based reinforcement learning approaches were suggested in these studies 9, 10. The goal was to use trial-and-error techniques with dynamic gain to improve the performance of distributed applications. A few studies suggested multi-agent-based policies with cooperative nodes in the network 11, 12; the goal is to transfer workflows based on their time series prediction policies and obtain the best reward for a given discount factor. As the numbers of workloads and nodes grow, the current multi-agent deep reinforcement learning with the q-learning policy shows more variance and delay. Distributed applications can run into trouble when multi-agent policies face security constraints, deadline constraints, cost constraints, and delay constraints.

This paper presents an algorithm framework for healthcare workflow applications called DRLBTS (deep reinforcement learning and blockchain-enabled task scheduling). DRLBTS consists of different schemes, such as q-learning, blockchain, and task scheduling. The goal is to minimize the makespan of the workflow applications, where the makespan is the combination of the communication time and the computation time on the different nodes (e.g., mobile, fog, and cloud) during processing. The paper makes the following contributions to the considered workflow scheduling problem in heterogeneous computing nodes.

(1) We design the system based on three types of processing nodes: mobile, fog, and cloud nodes. The goal is to run different workflow tasks, such as lightweight, delay-sensitive, and delay-tolerant tasks, on different nodes.

(2) The system provides a public blockchain scheme to maintain the data processing workflow of healthcare applications across multiple computing nodes.

(3) Application partitioning and resource profiling schemes are designed to maintain resource balancing while processing workflows on the computing nodes.

(4) Based on their current state and reward, the study creates an agent policy that helps move work from mobile devices to collaborative fog nodes and the cloud. The work implements a delay-optimal deep reinforcement learning scheme, with adaptive scheduling devised on top of q-learning.

(5) The designed model uses lightweight proof of work (PoW), focusing on data validity, security, and efficient processing among different nodes.

(6) The study also provides a mathematical model in which workflow task assignment, processing time, and communication time are determined by different equations; the objective function and constraints are analyzed in this model.

The manuscript is divided into parts that define the different steps of the work. The related work section discusses current deep reinforcement learning methods and blockchain schemes, along with their solutions and limits. The proposed DRLBTS section illustrates how the system solves the problem, and the DRLBTS algorithm part demonstrates how the problem is solved with different schemes in a given number of steps. The graphs and results of the proposed methods are compared to the existing baseline methods in the evaluation and implementation part. In conclusion, the study's accomplishments, practices, problems, results, and future work are discussed.

Related work

Many studies have exploited reinforcement learning, deep reinforcement learning, and blockchain schemes for IIoT-enabled healthcare applications. Study 1 presented a deep reinforcement learning-based scheme for IIoT healthcare applications in the fog and cloud network, where a single agent was built from a neural network, reinforcement learning, and a prediction of the state and time series in the network. Study 2 presented a deep reinforcement learning-enabled system for healthcare workloads to optimize the assignment problem; the objective was to optimize resource and allocation constraints. Studies 3, 4, 5 investigated resource allocation problems for healthcare applications to handle security issues in the mobile cloud network, devising blockchain technology to handle data validation between nodes; time series prediction and gradient descent-derived weights were used to meet the applications' needs. Dynamic allocation of workloads based on reinforcement learning with supervised learning labels was investigated in studies 6, 7, 8, 9, which reduce noise by turning unstructured data into structured data. Dynamic task scheduling based on machine learning and blockchain approaches was investigated in studies 10, 11, 12, 13, 14, 15, which devised task-scheduling methods based on different reinforcement learning variants with different features. An unsupervised learning clustering technique is used there to analyze and group similar types of resources so that data- or compute-intensive tasks can be run on them, and the aggregation time of resource clustering was determined. However, scheduling time, waiting time, and blockchain security processing time were widely ignored in these studies, which considered a single-agent scheduling scheme in which data is stored per state and the current state learns from the previous state. A stochastic gradient descent-based reinforcement learning task scheduling scheme was suggested in studies 16, 17, 18, 19, 20, where heterogeneous cloud nodes (e.g., fog and cloud nodes) and Internet of Things medical devices are considered; the objective was to optimize the deadline and resource constraints of the devices. Blockchain-enabled task scheduling based on supervised learning schemes was suggested in studies 21, 22, 23, 24, 25: in the decentralized cloud of the Internet of Things (IoT), proof of work and proof of stake methods are proposed, and the data validation at each node matches the hashes and makes network transactions immutable. Algorithms such as AES, RSA, MD5, and SHA-256 26, 27, 28, 29, 30, 31 are implemented in blockchain technology to validate the data between IoT nodes and cloud nodes in the distributed network.

Recently, many studies 31, 32, 33, 34, 35 suggested blockchain-enabled deep reinforcement learning schemes with fixed states for healthcare applications; the goal is to run healthcare workloads on fixed processing nodes and improve the applications' reward scores. Blockchain-enabled homomorphic security and privacy schemes are suggested in studies 36, 37, 38, 39, 40, which implemented blockchain systems with smart contracts and distributed mining processes to execute fine-grained and coarse-grained healthcare workloads in distributed fog cloud networks. All of these studies investigated task scheduling problems with fine-grained and coarse-grained healthcare workloads, considering delay, time, deadline, and security validation constraints in the problem formulation. However, they did not consider the workflow of healthcare applications on mobile devices. Mobile workflow applications are complicated and must run on different nodes, like the fog node and the cloud, because they are made up of small tasks that take a lot of processing power. So deadlines, costs, security, and the possibility of failure are all very important for workflow applications that run on different nodes.

The IoHT is a new healthcare application paradigm that uses data from different clinics to provide complex remote digital healthcare services to patients. The IoHT consists of complex biomedical sensors, wireless technology, and cloud computing services that can measure heartbeats, blood pressure, and other vitals. Industrial healthcare still has to deal with many problems, such as security, task scheduling, and the cost of processing in the cloud; this paper presents a new solution to these issues. Existing reinforcement learning and deep learning methods took all layers and parameters into account, but they were too time-consuming. Also, security is fundamental when different clinics that work independently collaborate on critical data to make disease prediction and patient applications more efficient. Existing security systems only consider centralized, homogeneous nodes, but when data is moved between cloud and fog, there are security issues. The study devises deep reinforcement learning-enabled blockchain-based task scheduling (DRLBTS) with multiple goals: DRLBTS provides makespan-efficient scheduling that meets the requirements of healthcare workflow applications in mobile fog cloud networks, and it shares secure and valid data between connected network nodes. The proposed q-learning-based scheduling keeps executing all workflow applications until they meet their requirements in the mobile fog cloud networks.

Proposed DRLBTS system

The study presents a deep reinforcement learning-aware blockchain-based healthcare system, as shown in Fig. 1. The study considers an industrial healthcare workflow, i.e., a single workflow with tasks v1, ..., v10. The workflow application is partitioned at design time into mobile, fog, and cloud tasks. The main reason is that the mobile device does not have enough resources to handle all tasks, fog nodes can only handle delay-sensitive tasks, and cloud computing can handle delay-tolerant tasks. The study differentiates tasks based on their annotations: blue nodes show mobile tasks, yellow nodes show fog tasks, and red nodes show cloud tasks. The patients' interfaces are their mobile devices, which they can use to request or upload data in the workflow pattern. The hospital's processing tasks are executed on the system's fog node, while cloud processing tasks store their huge data after execution on the cloud node. The multi-agent heterogeneous mobile fog cloud network executes workflow application tasks on different nodes: the mobile agent, for example, completes task 1, the fog agent completes tasks 2, 3, 4, 5, and 6, and the cloud agent completes tasks 7, 8, 9, and 10. The DRLBTS framework comprises different schemes, such as task scheduling, q-learning, blockchain schemes, and workflow task sequencing. All the computing nodes are cooperative and share data. All the nodes are connected via the blockchain network, where each piece of data is converted into a hash and shared in a valid format. B stands for the blockchain blocks, each of which has a unique identification number (ID). Each task transaction must be verified using the PoW scheme. Application status and resource profiling are schemes that notify the system about the entire execution of workflow tasks on the mobile fog cloud network.

Figure 1: DRLBTS: deep reinforcement learning-aware blockchain-based healthcare system.

Electronic health record (EHR)

In our scenario, the EHR consists of a mobile patient application where hospital services are integrated at the smart-city and global levels based on fog and cloud networks. Mobile patients can access city-level services via fog nodes, while a few services are available globally for the healthcare application based on cloud computing. The study considered healthcare application workflows and executed them on different nodes such as mobile, fog, and cloud. To ensure security, the study implemented blockchain schemes and performed valid data transactions among nodes.

Mathematical model of the mobile-fog-cloud agent problem

In this paper, a mobile workflow is represented as a graph $G = (V, E)$, where the set $V$ of workflow tasks consists of mobile, fog, and cloud tasks, and $E$ gives the communication weights between local, fog, and cloud tasks. There are $A$ workflows in this work. Each workflow is broken down into three sub-task sets: mobile tasks $V_m$, fog tasks $V_f$, and cloud tasks $V_c$. Each workflow task $v$ carries data $data_v$, broken up into subtasks and handled by the different agents. The investigation is centered on three kinds of computing nodes: mobile agents $m$, fog nodes $f$, and cloud nodes $c$. Each node has particular resources in the network, $\epsilon_m$, $\epsilon_f$, and $\epsilon_c$ for the mobile, fog, and cloud agents respectively, with respective speeds $\zeta_m$, $\zeta_f$, and $\zeta_c$. The three kinds of agents run the workflow applications according to the task annotations assigned when the applications were designed.

Equation (1) determines the execution time of a set of tasks on the different nodes, where $y_v \in \{1, 2, 3\}$ is the assignment vector indicating whether task $v$ is assigned to the mobile, fog, or cloud node. The study determines the communication time of workflow tasks in the following way.

Equation (2) determines the communication time between the mobile node and the fog node, and between the fog node and the cloud node. We assume all nodes have fixed data communication bandwidth in the network.

The state equation (3) determines that each state has a complete execution process and a blockchain scheme to transfer data between workflow tasks. The total execution time of all workflow applications is determined below.

Equation (4) determines the makespan of all workflow applications on the different computing nodes for the considered problem. The objective function of the study, to be optimized, is given in Eq. (5).

Equation (6) illustrates that all the local tasks must be executed under the resource limit of the mobile nodes in the network.

Equation (7) illustrates that all the fog tasks must be executed under the resource limit of the fog nodes in the network.

Equation (8) illustrates that all the cloud tasks must be executed under the resource limit of the cloud nodes in the network. Each workflow has a deadline, e.g., $D_G$, and satisfies the following constraint.

The deadline equation (9) requires that all workflow applications be executed within their deadlines in the network.
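The display equations themselves were lost in extraction. A plausible reconstruction of Eqs. (1)-(9) from the surrounding definitions is sketched below; the per-task workload $w_v$, resource demand $r_v$, and link bandwidths $B_{m,f}$, $B_{f,c}$ are assumed symbols, and Eq. (3), the state bookkeeping, is omitted because its form cannot be recovered. Treat this as a sketch of the stated model, not the authors' exact formulation.

```latex
% Hedged reconstruction of Eqs. (1)-(9); w_v, r_v, and B are assumed symbols.
T^{exec}_{v} = \sum_{k \in \{m,f,c\}} \mathbb{1}[y_v = k] \, \frac{w_v}{\zeta_k}
    \qquad \text{(1) execution time of task } v
T^{comm}_{v} = \frac{data_v}{B_{m,f}} + \frac{data_v}{B_{f,c}}
    \qquad \text{(2) communication time}
M_G = \sum_{v \in V} \left( T^{exec}_{v} + T^{comm}_{v} \right)
    \qquad \text{(4) makespan of workflow } G
\min \; \sum_{G=1}^{A} M_G
    \qquad \text{(5) objective}
\sum_{v \in V_m} \mathbb{1}[y_v = m] \, r_v \le \epsilon_m, \quad
\sum_{v \in V_f} \mathbb{1}[y_v = f] \, r_v \le \epsilon_f, \quad
\sum_{v \in V_c} \mathbb{1}[y_v = c] \, r_v \le \epsilon_c
    \qquad \text{(6)-(8) resource limits}
M_G \le D_G \;\; \forall G
    \qquad \text{(9) deadline constraint}
```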

DRLBTS algorithm methods

The study presents the DRLBTS algorithm framework to solve the task scheduling problem for the workflows on the different computing nodes. DRLBTS consists of different schemes that solve the problem in different steps. Initially, DRLBTS introduces the deep q-learning approach, in which tasks are divided into different categories as shown in Algorithm 1. Algorithm 1 also contains the schemes that govern the execution of applications and their workflows. The divided tasks are distributed among mobile, fog, and cloud systems based on their annotations; the annotations mark the task types (mobile, fog, or cloud), decided by the application partitioning scheme at design time. All tasks are scheduled on computing nodes based on their types. Adaptive q-learning-enabled scheduling then handles all failed tasks and reschedules them on computing nodes with available resources.

Algorithm 1
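Algorithm 1 is not reproduced here, so as a rough illustration of the adaptive q-learning assignment it describes, the following is a minimal tabular sketch in Python. The state encoding (the task's annotation), the reward (negative execution-plus-communication time), and all hyperparameters and timing numbers are illustrative assumptions rather than the authors' implementation.

```python
import random
from collections import defaultdict

NODES = ["mobile", "fog", "cloud"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # assumed hyperparameters

Q = defaultdict(float)                     # Q[(state, action)] -> value

def choose_node(state):
    """Epsilon-greedy selection of a computing node for the current task."""
    if random.random() < EPSILON:
        return random.choice(NODES)
    return max(NODES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One tabular q-learning step: Q <- Q + alpha * (target - Q)."""
    best_next = max(Q[(next_state, a)] for a in NODES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def simulate_time(task_type, node):
    """Toy execution + communication time; all numbers are invented."""
    work = {"light": 1.0, "delay_sensitive": 3.0, "delay_tolerant": 5.0}[task_type]
    speed = {"mobile": 1.0, "fog": 2.0, "cloud": 4.0}[node]
    comm = {"mobile": 0.0, "fog": 0.3, "cloud": 0.8}[node]
    return work / speed + comm

# Toy episode: the state is just the task's annotation; the reward is the
# negative of the simulated (execution + communication) time, so the agent
# learns to keep light tasks local and push heavy ones to fog and cloud.
tasks = ["light", "delay_sensitive", "delay_tolerant"] * 100
for i, task in enumerate(tasks):
    node = choose_node(task)
    next_state = tasks[(i + 1) % len(tasks)]
    update(task, node, -simulate_time(task, node), next_state)
```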

Deep reinforcement learning multi-agents

The considered task scheduling problem has a finite loop with different states; the optimal policy and value function are used to examine the execution performance of workflow tasks. In our study, the deep reinforcement learning method is a combination of two approaches, deep learning and reinforcement learning. The main reason is that we obtain the results of running applications through trial and error, because the availability of computing resources changes all the time. We use supervised learning to label all of the inputs in advance, i.e., mobile tasks $v \in V_m$, fog tasks $v \in V_f$, and cloud tasks $v \in V_c$. The study considers multi-agent systems where each agent can communicate with another agent to perform tasks and share valid task data for further execution. The study initially divides the deadlines of all workflows according to their makespan in the following way.

Equation (10) determines the deadline of each task based on its execution time. The state-transition probability is determined as follows.

where t is the timestamp of the state transition in the work. Equation (11) determines the transition probability of each agent. The policy for the objective function X is determined as follows.

Equation (12) determines the optimal policy of the workflows in different network states.
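As with Eqs. (1)-(9), the display equations are missing from the scraped text; a hedged reconstruction consistent with the descriptions of Eqs. (10)-(12) might read:

```latex
% Hedged reconstruction of Eqs. (10)-(12) from the surrounding descriptions.
D_v = D_G \cdot \frac{T^{exec}_{v}}{\sum_{u \in V} T^{exec}_{u}}
    \qquad \text{(10) per-task share of the workflow deadline}
P\left( s_{t+1} \mid s_t, a_t \right)
    \qquad \text{(11) agent transition probability at timestamp } t
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}\!\left[ \sum_{t} \gamma^{t} r_t \;\middle|\; \pi \right]
    \qquad \text{(12) optimal policy for objective } X
```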

Policy-enabled workflow assignment across different agents

Algorithm 2 demonstrates that all agents (mobile, fog, and cloud) must execute their tasks in their respective states under the optimal policy and objective. In a mobile cloud network, each state must meet the deadline, security, and makespan requirements of the workflows across the different pieces of data. All the agents are cooperative and share their data between tasks, because the system executes the tasks of a workflow application on different nodes, so the data must be shared between tasks on those nodes.

Algorithm 2

Adaptive task scheduling and blockchain mechanism

The blockchain-enabled scheme and the adaptive scheduling scheme maintain the performance of workflow tasks across the different agent states, as illustrated in Fig. 2. At design time, all tasks are split into those executed on the local machine and those executed on the fog node and the cloud, as shown in Fig. 2. The execution is likewise divided into different states that run in separate time transitions under the policy and reward constraint. Algorithm 3 is the adaptive scheduler based on blockchain technology in this work. The scheduling is divided into different states while optimizing the objective function of the workflow applications: in the initial state, all the mobile tasks are scheduled on the mobile device according to the mobile resources and speed, with the corresponding state, action, and transition. Algorithm 3 proceeds through the following steps.

Algorithm 3

Figure 2: Deep reinforcement learning-enabled blockchain mechanism.

Mobile execution: In the schedule from steps 1 to 10, all the tasks annotated as local are scheduled on the mobile device, and blockchain validation is applied there. The initial state s1, a1, t1 is the first state, action, and transition. In this state, only tasks v1 and v3 of the application are scheduled on the mobile node j1. The process is hashed using the Secure Hashing Algorithm (SHA-256), and cryptographic data is offloaded from j1 to j2 only once the current and previous hashes match. The model-free optimal policy call optimizes and adds the reward to the q-learning sequence on successful execution. The process of mobile task execution is represented in Fig. 2.

Fog execution: In the schedule from steps 11 to 23, all the tasks annotated yellow are scheduled on the fog node, and blockchain validation is applied there. The initial state s2, a2, t2 is the first state, action, and transition. In this state, only tasks v4, v9, and v10 of the application are scheduled on the fog node j2. The process is hashed using the Secure Hashing Algorithm (SHA-256), and cryptographic data is offloaded from j2 to j3 only once the current and previous hashes match. The model-free optimal policy call optimizes and adds the reward to the q-learning sequence on successful execution. The process of fog task execution is represented in Fig. 2.

Cloud execution: In the schedule from steps 24 to 35, all the tasks annotated red are scheduled on the cloud node, and blockchain validation is applied in the cloud. The initial state s3, a3, t3 is the first state, action, and transition. In this state, only tasks v2, v5, v6, v7, and v8 of the application are scheduled on the cloud node j3. The process is hashed using the Secure Hashing Algorithm (SHA-256), and cryptographic data is offloaded from j3 to j2 only once the current and previous hashes match. The model-free optimal policy call optimizes and adds the reward to the q-learning sequence on successful execution. The process of cloud task execution is represented in Fig. 2.
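To make the hash-matching step in the three phases above concrete, the following is a minimal sketch of hash-linked block validation with a lightweight proof of work, in Python. The block fields, the difficulty of two leading zeros, and the task/data values are assumptions for illustration; the paper's exact block structure is not specified here.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(task_id, data, prev_hash, difficulty=2):
    """Lightweight PoW: search a nonce whose hash has `difficulty` leading zeros."""
    block = {"task": task_id, "data": data, "prev": prev_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

def valid_chain(chain):
    """Accept an offload only if every block's `prev` equals the recomputed
    hash of the block before it (the current/previous hash match above)."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

# Offloading results from node j1 to j2: each transfer appends a block,
# and tampering with any earlier block breaks validation.
genesis = make_block("v1", "result-of-v1", prev_hash="0" * 64)
chain = [genesis, make_block("v3", "result-of-v3", prev_hash=block_hash(genesis))]
print(valid_chain(chain))   # True
```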

Evaluation and implementation part

In this section, we discuss the implementation of DRLBTS and the baseline algorithms, and their performance in the result discussion part. The experimental parameters exploited in the implementation are defined in Table 1. The study compares the obtained results, based on statistical mean values, using the relative percentage deviation (RPD) shown in Eq. (13), where X represents the objective value obtained by the adaptive scheduling and X* is the optimal objective value. RPD% measures how far the objective value initially obtained by the adaptive scheduling deviates from the optimal one.
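The display equation did not survive extraction; the standard form of relative percentage deviation, consistent with the definitions of $X$ and $X^{*}$ above, is:

```latex
\mathrm{RPD}\,(\%) = \frac{X - X^{*}}{X^{*}} \times 100
```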

Equation (13) shows the statistical analysis of the proposed methods based on the given data, with both single-variance and multi-variance analysis during the simulations.

Use-cases and baseline approaches of workflow healthcare applications

The study designed the simulation with different layers, such as the industrial healthcare workflow, multi-agent heterogeneous nodes, and mobile, fog, and cloud agents. The code base is designed on EdgeX Foundry, where layers can be implemented easily thanks to the open-source application programming interface. The industrial workflow applications on mobile devices are divided into different parts, namely mobile, fog, and cloud tasks.

For the experimental comparison, the study used existing algorithms and workflow workloads, which are described below.

DQN (Deep Q-Network), implemented as Baseline 1, is a widely used deep reinforcement learning approach for heterogeneous computing problems. This method is widely exploited in studies 1, 2, 3, 4 that consider similar workflow problems on different computing nodes.

The DDPG machine learning approach is implemented as the Baseline 2 scheme; it is widely used for similar workloads and problems in the problem formulations of studies 5, 6, 7, 8.

DDPG with blockchain-enabled approaches is implemented as Baseline 3; such approaches are deployed to improve dynamic scheduling performance for healthcare applications in studies 9, 10.

The Asynchronous Advantage Actor-Critic algorithm with decentralized Ethereum scheduling is implemented as Baseline 4, following the simulations of studies 9, 10. Its state search strategy is designed to cope with the dynamic uncertainty of resources.

Parameter variation

Table 2 shows that variation in states, actions, transitions, and blockchain blocks increases resource consumption. In this study, we set S = 20, a = 20, t = 20, N = 5, and B = 15 based on the available mobile, fog, and cloud nodes, with two failed transactions during the experiment for each workflow. Increasing the states, actions, transactions, and blocks improves the scheduling chances, but resource consumption increases and leads to higher resource leakage.

Result discussion with different approaches

In this part, the study analyzes the performance of the different workflow applications on the various computing nodes in four separate cases. In case 1, the study executes all workflow applications on mobile devices, as shown in Fig. 3(a). Figures 3(b), (c), and (d) (cases 2, 3, and 4) depict the scheduling and offloading performance of workflow applications on the other nodes. It can be seen that the deep offloading method suggested by the study outperforms the existing methods for running workflow applications on different nodes. The implemented strategies, namely deep offloading, static offloading, and dynamic offloading, migrate workload from resource-constrained local devices to available nodes for execution based on the given requirements. Figure 3 shows that four applications, G1, G2, G3, and G4, offload their workloads from local devices to proximate computing nodes for execution; the y-axis represents the RPD% values of the objective function. Figure 3(a) shows that simple parameter-based static offloading incurs higher delays when the parameters (resources, traffic, and waiting time during execution) change at runtime, as shown in Fig. 3(a,b,c,d); the same holds for simple parameter-based dynamic offloading. Dynamic offloading schemes adapt to some changes, but they still suffer higher delays due to the failure of workflow tasks and the unavailability of resources on particular nodes. The deep offloading method, by contrast, considers all aspects while processing workflows on the computing nodes, such as parameter changes, resource and task failures, and deadlines, and therefore has lower delays, as shown in Fig. 3(a,b,c,d).

Figure 4 shows that deep learning-enabled blockchain for the workflow healthcare application is more optimal in terms of resource leakage, data validation, transaction failure, and inter-dependency, as shown in Fig. 4(a), (b), (c), and (d). The existing static Ethereum blockchain and dynamic Corda blockchain did not address resource leakage, task failure, or workflow dependency in their models, because these blockchain frameworks only considered coarse-grained and fine-grained workloads executed on distinct nodes. Figure 4(a) shows the performance of the blockchain technologies during data validation of healthcare workflows among different computing nodes. We observed considerable delay due to resource-constraint issues among the computing nodes while validating data transactions of healthcare workflows; furthermore, resource leakage on different computing nodes is the biggest issue when implementing the existing blockchain technologies for data transactions of workflow healthcare applications. The proposed deep learning-based blockchain technology therefore performs better with respect to the above issues. Figure 4(b,c,d) shows the data validation performance of the blockchain technologies and the objective function values obtained for workflow healthcare applications.

Still, existing blockchain technologies exploit algorithms such as proof of work, proof of stake, and proof of credibility, which consume far more resources and incur higher delays in their data validation and consensus models; failures and the independence of nodes during validation cause long delays for different transactions. Figure 4(a,b,c,d) therefore shows that the deep blockchain scheme optimizes all aspects of processing transactions among nodes compared with existing studies. The study implemented four baseline approaches that schedule workflow tasks on different computing nodes based on their pre-set requirements. Figure 5 shows the makespan performance for 1000 workflow tasks under the different parallel and distributed scheduling schemes (Baselines 1-4) and DRLBTS; the evaluation shows that DRLBTS outperformed all existing schemes. Figures 5 and 6 show that workflows of 1000 and 2000 tasks are scheduled on the different computing nodes with different objective values (e.g., makespans) using both the initial and the dynamic adaptation scheduling strategies at runtime, and that DRLBTS minimized the makespan of all workflows compared with existing strategies. Figures 7 and 8 show that workflows of 1000 and 2000 tasks are scheduled with different makespans using both the initial and the deep learning strategies at runtime, and again DRLBTS minimized the makespan of all workflows. Overall, Figs. 5, 6, 7, and 8 compare the baseline approaches with the proposed DRLBTS scheme across different performance measures of healthcare workflow applications, such as makespan, failure rate, and deadline; Fig. 5 reports makespan in terms of RPD%. The model-free optimal policy-enabled scheduling proposed by the study reduces the mean delay of workflow applications compared to static and dynamic scheduling, mainly because static and dynamic schedulers cannot handle uncertainty in resource capacity and leakage on the heterogeneous mobile fog cloud nodes. The existing static Ethereum and dynamic Corda blockchains only validated single-type data on homogeneous nodes, with data offloaded only to servers with standardized nodes. Hence, all the simulation results show that blockchain-based data validation and transactions consume far more resources and incur higher delays on different computing nodes; it is practically impossible for all computing nodes to have the same resource capability and to expect the same performance when implementing blockchain-based data validation schemes for workflow applications.

Therefore, the partitioning of workflow applications across nodes and the adaptive scheduling-based blockchain scheme proposed in this work achieve more optimal results, minimizing the overall makespan of applications and handling all failures, resource leakage, and deadlines of tasks and nodes compared with existing frameworks and schemes.

Figure 3: Blockchain-enabled validation performance in multi-agent cooperative nodes.

Figure 4: Blockchain-enabled data transitions in mobile fog cloud networks.

Figure 5: Makespan performance of workflows in mobile fog cloud networks.

Figure 6: Adaptation performance of scheduling schemes for workflows.

Figure 7: Performance of workflows based on the given policy.

Figure 8: Baseline deep and reinforcement learning schemes for workflows.

Findings and limitations

The study designed the DRLBTS algorithm scheme, which divides workflows into different tasks (mobile tasks, fog tasks, and cloud tasks) and executes them on different nodes. The divided tasks are executed based on the required resources, blockchain validation, and the time available for each application. However, the proposed DRLBTS has several limitations: it consumes considerable energy while executing the workflows on different nodes. Therefore, in future work we will consider further constraints such as energy consumption, electricity cost, and a mobility-enabled power efficiency scheme for applications.

The cooperative multi-agent mechanism used in the study obtained optimal results, as shown in the simulation part. The proposed DRLBTS algorithm framework obtained optimal results in terms of the objective functions of the healthcare workflows compared with existing methods. The current version of the study focuses on multi-agents in distributed mobile fog and cloud networks. The study showed that DRLBTS executes workflows on different computing nodes, where the proposed deep blockchain scheme achieves optimal transaction delays compared with existing blockchain technologies, and handles all situations such as resource failure, task failure, resource leakage, and deadlines. However, this framework does not yet support real-time healthcare applications and incurs overhead on different nodes. Therefore, in future work we will implement real-time profiling, which analyzes application requirements at runtime; divide applications into parts based on the resources of each node and schedule them based on the quality of service they need; and optimize the energy consumption of the nodes while minimizing electricity cost during the execution of healthcare workflows across different time zones in distributed mobile fog cloud networks.

Data availability

The datasets generated and analyzed during the current study are not publicly available. The main reason is that we designed the workflows ourselves and experimented with them on local machines, and some of the datasets were gathered from local clinics, which allowed only experiments that are not publicly available for analysis. The study implemented the DRLBTS system for healthcare workflow applications in Java. As explained at the link below, the algorithm is based on deep reinforcement learning and blockchain technology. https://github.com/ABDULLAH-RAZA/Assignment-/tree/master .

Heuillet, A., Couthouis, F. & Díaz-Rodríguez, N. Explainability in deep reinforcement learning. Knowl. Based Syst. 214 , 106685 (2021).


Dai, Y., Wang, G., Muhammad, K. & Liu, S. A closed-loop healthcare processing approach based on deep reinforcement learning. Multimedia Tools and Applications 81 , 3107–3129 (2022).

Chen, H., Chen, Z., Lin, F. & Zhuang, P. Effective management for blockchain-based agri-food supply chains using deep reinforcement learning. IEEE Access 9 , 36008–36018 (2021).

Xiaoding, W. et al. Enabling secure authentication in industrial iot with transfer learning empowered blockchain. IEEE Trans. Ind. Inform. 17 , 7725–7733 (2021).

Gazori, P., Rahbari, D. & Nickray, M. Saving time and cost on the scheduling of fog-based iot applications using deep reinforcement learning approach. Futur. Gener. Comput. Syst. 110 , 1098–1115 (2020).

Lakhan, A. et al. Blockchain-enabled cybersecurity efficient iioht cyber-physical system for medical applications. IEEE Trans. Netw. Sci. Eng. 1–14, (2022).

Farahbakhsh, F., Shahidinejad, A. & Ghobaei-Arani, M. Multiuser context-aware computation offloading in mobile edge computing based on Bayesian learning automata. Trans. Emerg. Telecommun. Technol. 32 , e4127 (2021).


Sodhro, A. H., Sennersten, C. & Ahmad, A. Towards cognitive authentication for smart healthcare applications. Sensors 22 , 2101 (2022).


Qurat, Khan, F. A., Abbasi, Q. H. et al. Dynamic content and failure aware task offloading in heterogeneous mobile cloud networks. In 2019 International Conference on Advances in the Emerging Computing Technologies (AECT) , 1–6 (IEEE, 2020).

Tiwari, P., Zhu, H. & Pandey, H. M. Dapath: Distance-aware knowledge graph reasoning based on deep reinforcement learning. Neural Netw. 135 , 1–12 (2021).


Alatoun, K. et al. A novel low-latency and energy-efficient task scheduling framework for internet of medical things in an edge fog cloud system. Sensors 22 , 5327 (2022).

Weng, J. et al. Deepchain: Auditable and privacy-preserving deep learning with blockchain-based incentive. IEEE Trans. Dependable Secur. Comput. 18 (5), 2438–2455 (2019).

Ferrag, M. A. & Maglaras, L. Deepcoin: A novel deep learning and blockchain-based energy exchange framework for smart grids. IEEE Trans. Eng. Manag. 67 , 1285–1297 (2019).

Singh, M., Aujla, G. S., Singh, A., Kumar, N. & Garg, S. Deep-learning-based blockchain framework for secure software-defined industrial networks. IEEE Trans. Ind. Inform. 17 , 606–616 (2020).

Li, X. Mobility and fault aware adaptive task offloading in heterogeneous mobile cloud environments. EAI Endorsed Trans. Mobile Commun. Appl. 5 , -16 (2019).

Sajnani, D. K., Tahir, M., Aamir, M. & Lodhi, R. Delay sensitive application partitioning and task scheduling in mobile edge cloud prototyping. In International Conference on 5G for Ubiquitous Connectivity , 59–80 (Springer, 2018).

Waseem, M. Data security of mobile cloud computing on cloud server. Open Access Libr. J. 3 , 1–11 (2016).

Khoso, F. H., Arain, A. A., Lakhan, A., Kehar, A. & Nizamani, S. Z. Proposing a novel iot framework by identifying security and privacy issues in fog cloud services network. Int. J. 9 (5), 592–596 (2021).

Khoso, F. H. et al. A microservice-based system for industrial internet of things in fog-cloud assisted network. Eng. Technol. Appl. Sci. Res. 11 , 7029–7032 (2021).

Vahdat Pour, M., Li, Z., Ma, L. & Hemmati, H. A search-based testing framework for deep neural networks of source code embedding. arXiv e-prints arXiv–2101 (2021).

Tapas, N., Merlino, G. & Longo, F. Blockchain-based iot-cloud authorization and delegation. In 2018 IEEE International Conference on Smart Computing (SMARTCOMP) , 411–416 (IEEE, 2018).

Uddin, M. A., Stranieri, A., Gondal, I. & Balasubramanian, V. A survey on the adoption of blockchain in iot: Challenges and solutions. Blockchain Res. Appl. 2 , 100006 (2021).

Nartey, C. et al. On blockchain and iot integration platforms: current implementation challenges and future perspectives. Wirel. Commun. Mob. Comput. 2021 (2021).

Qiu, C., Yao, H., Jiang, C., Guo, S. & Xu, F. Cloud computing assisted blockchain-enabled internet of things. IEEE Trans. Cloud Comput. 10 (1), 247–257 (2020).

Wu, H. et al. Eedto: an energy-efficient dynamic task offloading algorithm for blockchain-enabled iot-edge-cloud orchestrated computing. IEEE Internet Things J. 8 , 2163–2176 (2020).

Haque, R. et al. Blockchain-based information security of electronic medical records (emr) in a healthcare communication system. In Intelligent Computing and Innovation on Data Science , 641–650 (Springer, 2020).

Chelladurai, U. & Pandian, S. A novel blockchain based electronic health record automation system for healthcare. J. Ambient Intell. Humaniz. Comput. 1–11 (2022).

Sathio, A. A. et al. Pervasive futuristic healthcare and blockchain enabled digital identities-challenges and future intensions. In 2021 International Conference on Computing, Electronics & Communications Engineering (iCCECE) , 30–35 (IEEE, 2021).

Lakhan, A. et al. Smart-contract aware ethereum and client-fog-cloud healthcare system. Sensors 21 , 4093 (2021).

Lakhan, A., Mohammed, M. A., Kozlov, S. & Rodrigues, J. J. Mobile-fog-cloud assisted deep reinforcement learning and blockchain-enable iomt system for healthcare workflows. Trans. Emerg. Telecommun. Technol. e43–63 (2021).

Singh, A. P. et al. A novel patient-centric architectural framework for blockchain-enabled healthcare applications. IEEE Trans. Ind. Inform. 17 , 5779–5789 (2020).

Oh, S. H., Lee, S. J. & Park, J. Effective data-driven precision medicine by cluster-applied deep reinforcement learning. Knowl. Based Syst. 256 , 109877 (2022).

Wang, L., Xi, S., Qian, Y. & Huang, C. A context-aware sensing strategy with deep reinforcement learning for smart healthcare. Pervasive Mob. Comput. 83 , 101588 (2022).

Rjoub, G., Wahab, O. A., Bentahar, J., Cohen, R. & Bataineh, A. S. Trust-augmented deep reinforcement learning for federated learning client selection. Inf. Syst. Front. 1–18 (2022).

Talaat, F. M. Effective deep q-networks (edqn) strategy for resource allocation based on optimized reinforcement learning algorithm. Multimed. Tools Appl. 81 (28), 39945–39961 (2022).

Ali, A. et al. An industrial iot-based blockchain-enabled secure searchable encryption approach for healthcare systems using neural network. Sensors 22 , 572 (2022).

Almaiah, M. A., Hajjej, F., Ali, A., Pasha, M. F. & Almomani, O. A novel hybrid trustworthy decentralized authentication and data preservation model for digital healthcare iot based cps. Sensors 22 , 1448 (2022).

Ali, A. et al. Security, privacy, and reliability in digital healthcare systems using blockchain. Electronics 10 , 2034 (2021).

Ali, A. et al. Deep learning based homomorphic secure search-able encryption for keyword search in blockchain healthcare system: A novel approach to cryptography. Sensors 22 , 528 (2022).

Ali, A. et al. A novel secure blockchain framework for accessing electronic health records using multiple certificate authority. Appl. Sci. 11 , 9999 (2021).



This research work was partially supported by the Ministry of Education of the Czech Republic (Project No. SP2023/001 and No. SP2023/002).

Author information

Authors and affiliations

Department of Computer Science, Dawood University of Engineering and Technology, Sindh, Karachi, 74800, Pakistan

Abdullah Lakhan

College of Computer Science and Information Technology, University of Anbar, Anbar, 31001, Iraq

Mazin Abed Mohammed

Department of Telecommunications, VSB-Technical University of Ostrava, 70800, Ostrava, Czech Republic

Abdullah Lakhan, Mazin Abed Mohammed & Jan Nedoma

Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 70800, Ostrava, Czech Republic

Abdullah Lakhan, Mazin Abed Mohammed & Radek Martinek

School of Information Technology, Halmstad University, Halmstad, Sweden

Prayag Tiwari

Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology (Deemed University), Patiala, Punjab, India

Neeraj Kumar

School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India

Department of Computer Science and Information Engineering, Asia University, Taichung, Taiwan


Contributions

A.L., M.A.M., J.N., R.M., P.T., and N.K. discussed the idea and wrote the manuscript. A.L. did the experimental work and prepared the figures along with P.T. M.A.M., P.T., and N.K. did the final proofreading.

Corresponding author

Correspondence to Prayag Tiwari .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Lakhan, A., Mohammed, M.A., Nedoma, J. et al. DRLBTS: deep reinforcement learning-aware blockchain-based healthcare system. Sci Rep 13 , 4124 (2023). https://doi.org/10.1038/s41598-023-29170-2


Received : 25 April 2022

Accepted : 31 January 2023

Published : 13 March 2023

DOI : https://doi.org/10.1038/s41598-023-29170-2


CPU Scheduling in Operating Systems

Scheduling of processes/work is done to finish the work on time. CPU scheduling allows one process to use the CPU while another process is delayed (on standby) due to the unavailability of some resource, such as I/O, thus making full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.

Tutorial on CPU Scheduling Algorithms in Operating System

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue for execution. The selection is carried out by the short-term (CPU) scheduler, which picks from among the memory-resident processes that are ready to execute and allocates the CPU to one of them.


What is Process Scheduling?

Why do we need to schedule processes?

   

What is a process?

In computing, a process is the instance of a computer program that is being executed by one or many threads . It contains the program code and its activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.

How is process memory used for efficient operation?

The process memory is divided into four sections for efficient operation:

To learn more, you can refer to our detailed article on states of a process in an operating system.

Process scheduling is the activity of the process manager that handles the removal of an active process from the CPU and the selection of another process, on the basis of a particular strategy.

Process scheduling is an integral part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

There are three types of process schedulers:

What is the need for a CPU scheduling algorithm?

CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main goal of CPU scheduling is to ensure that, whenever the CPU is idle, the OS selects for execution one of the processes available in the ready queue.

In multiprogramming, if the long-term scheduler selects mostly I/O-bound processes, then the CPU remains idle most of the time. The job of an effective scheduler is to improve resource utilization.

If most processes switch their status from running to waiting, the CPU sits idle and the system may even fail to make progress. To minimize this waste, the OS needs to schedule tasks so that the CPU is used fully while avoiding the possibility of deadlock.

Objectives of Process Scheduling Algorithm:

The typical objectives are to maximize CPU utilization and throughput, to minimize turnaround time, waiting time, and response time, and to allocate the CPU fairly among processes.

What are the different terminologies to take care of in any CPU Scheduling algorithm?

Arrival time is when a process enters the ready queue, burst time is the CPU time it requires, and completion time is when it finishes executing. From these:

Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time
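
To make the two formulas concrete, here is a minimal Python sketch that applies them to a few invented processes; all of the numbers are hypothetical.

    # Applying the two formulas above; the process data is made up.
    processes = [
        ("P1", 0, 4, 4),   # (name, arrival time, burst time, completion time)
        ("P2", 1, 3, 7),
        ("P3", 2, 1, 8),
    ]

    for name, arrival, burst, completion in processes:
        turnaround = completion - arrival   # Turn Around Time
        waiting = turnaround - burst        # Waiting Time
        print(f"{name}: turnaround={turnaround}, waiting={waiting}")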

What should be taken care of while designing a CPU Scheduling algorithm?

Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Many criteria have been suggested for comparing CPU scheduling algorithms.

The criteria include CPU utilization, throughput, turnaround time, waiting time, and response time.

What are the different types of CPU Scheduling Algorithms?

There are mainly two types of scheduling methods: preemptive scheduling, where a running process can be interrupted and moved back to the ready queue, and non-preemptive scheduling, where a process keeps the CPU until it terminates or switches to the waiting state.

Different types of CPU Scheduling Algorithms

Let us now learn about these CPU scheduling algorithms in operating systems one by one:

1. First Come First Serve:  

FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented with a FIFO queue.
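
As a minimal sketch (not a production scheduler), the following Python function runs invented processes to completion in arrival order, which is all FCFS does:

    # FCFS sketch: run processes to completion in arrival (FIFO) order.
    # The (name, arrival, burst) tuples are hypothetical example data.
    def fcfs(processes):
        time, schedule = 0, []
        for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
            time = max(time, arrival)      # CPU may idle until the process arrives
            time += burst                  # run the process to completion
            schedule.append((name, time))  # record (process, completion time)
        return schedule

    print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
    # [('P1', 5), ('P2', 8), ('P3', 16)]

Note how every later arrival simply waits behind earlier ones; a single long job at the front delays everything else (the convoy effect).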

Characteristics of FCFS:

Advantages of FCFS:

Disadvantages of FCFS:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on First Come, First Serve Scheduling.

2. Shortest Job First(SJF):

Shortest job first (SJF) is a scheduling algorithm that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive, and it significantly reduces the average waiting time for the other processes waiting to be executed.
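
Here is a hedged sketch of the non-preemptive variant, again with invented data: among the processes that have already arrived, it always dispatches the one with the smallest burst time.

    # Non-preemptive SJF sketch: among arrived processes, pick the one
    # with the smallest burst time. The example data is hypothetical.
    def sjf(processes):                    # processes: (name, arrival, burst)
        remaining = sorted(processes, key=lambda p: p[1])
        time, order = 0, []
        while remaining:
            ready = [p for p in remaining if p[1] <= time]
            if not ready:                  # nothing has arrived yet: jump ahead
                time = remaining[0][1]
                continue
            job = min(ready, key=lambda p: p[2])   # shortest burst first
            remaining.remove(job)
            time += job[2]
            order.append((job[0], time))
        return order

    print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
    # [('P1', 7), ('P3', 8), ('P2', 12)] -- P3 jumps ahead of P2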

Characteristics of SJF:

Advantages of Shortest Job first:

Disadvantages of SJF:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Job First.

3. Longest Job First(LJF):

Longest Job First (LJF) scheduling is the opposite of shortest job first (SJF): as the name suggests, this algorithm processes the process with the largest burst time first. Longest Job First is non-preemptive in nature.
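
Since LJF only inverts the selection rule, a sketch needs a single change relative to the SJF code above: take the maximum burst instead of the minimum.

    # LJF differs from the SJF sketch above only in the selection rule:
    def pick_ljf(ready):                        # ready: (name, arrival, burst)
        return max(ready, key=lambda p: p[2])   # longest burst first

    print(pick_ljf([("P1", 0, 7), ("P2", 2, 4)]))   # ('P1', 0, 7)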

Characteristics of LJF:

Advantages of LJF:

Disadvantages of LJF:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on Longest Job First Scheduling.

4. Priority Scheduling:

Preemptive priority CPU scheduling works based on the priority of a process: the scheduler assigns each process a priority, the most important process is executed first, and a newly arrived process with a higher priority preempts the running one. In the case of a conflict, that is, where more than one process has the same priority, the algorithm falls back on FCFS (First Come First Serve) order.
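
A minimal sketch of the selection rule follows, assuming the common (but not universal) convention that a lower number means higher priority: a heap ordered by (priority, arrival time) automatically falls back on FCFS for ties. A preemptive version would additionally re-examine the heap whenever a new process arrives.

    import heapq

    # Priority selection sketch: lower number = higher priority (an assumed
    # convention); arrival time is the FCFS tie-breaker. heapq always pops
    # the smallest (priority, arrival, name) tuple first.
    ready = []
    heapq.heappush(ready, (2, 0, "P1"))
    heapq.heappush(ready, (1, 3, "P2"))
    heapq.heappush(ready, (1, 1, "P3"))   # same priority as P2, arrived earlier

    while ready:
        priority, arrival, name = heapq.heappop(ready)
        print(f"run {name} (priority {priority}, arrived at {arrival})")
    # run P3, then P2 (FCFS breaks the tie), then P1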

Characteristics of Priority Scheduling:

Advantages of Priority Scheduling:

Disadvantages of Priority Scheduling:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on the Priority Preemptive Scheduling algorithm.

5. Round robin:

Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot. It is the preemptive version of the First Come First Serve CPU scheduling algorithm and is designed primarily for time-sharing systems.
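
A minimal round-robin sketch with a fixed quantum; for simplicity it assumes all of the invented processes are already in the ready queue at time 0.

    from collections import deque

    # Round-robin sketch with a fixed time quantum; the job data is made up.
    def round_robin(processes, quantum):
        queue = deque(processes)           # items are [name, remaining burst]
        time, completions = 0, []
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            time += run
            if remaining > run:
                queue.append([name, remaining - run])  # back of the queue
            else:
                completions.append((name, time))
        return completions

    print(round_robin([["P1", 5], ["P2", 3], ["P3", 1]], quantum=2))
    # [('P3', 5), ('P2', 8), ('P1', 9)]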

Characteristics of Round robin:

Advantages of Round robin:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on the Round Robin Scheduling algorithm.

6. Shortest Remaining Time First:

Shortest remaining time first (SRTF) is the preemptive version of the shortest job first algorithm discussed earlier, in which the processor is allocated to the job closest to completion. In SRTF, the process with the smallest amount of time remaining until completion is selected to execute.
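
One simple way to sketch SRTF is a unit-time simulation: at every tick, run the arrived process with the least remaining time, so a shorter newcomer preempts the current job. The data below is invented.

    # SRTF sketch: unit-time simulation; processes are (name, arrival, burst).
    def srtf(processes):
        remaining = {name: burst for name, _, burst in processes}
        arrival = {name: arr for name, arr, _ in processes}
        time, finished = 0, []
        while remaining:
            ready = [n for n in remaining if arrival[n] <= time]
            if not ready:                  # nothing has arrived yet
                time += 1
                continue
            job = min(ready, key=lambda n: remaining[n])  # least remaining time
            remaining[job] -= 1
            time += 1
            if remaining[job] == 0:
                del remaining[job]
                finished.append((job, time))
        return finished

    print(srtf([("P1", 0, 5), ("P2", 1, 2)]))
    # [('P2', 3), ('P1', 7)] -- P2 preempts P1 at time 1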

Characteristics of Shortest remaining time first:

Advantages of SRTF:

Disadvantages of SRTF:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Remaining Time First.

7. Longest Remaining Time First:

Longest remaining time first (LRTF) is the preemptive version of the longest job first scheduling algorithm. The operating system uses it to schedule incoming processes systematically: the process with the longest processing time remaining for completion is always scheduled first.
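
LRTF is the SRTF simulation above with the selection rule flipped; a sketch of just that rule:

    # LRTF differs from the SRTF sketch above only in the selection rule:
    def pick_lrtf(ready, remaining):
        return max(ready, key=lambda n: remaining[n])  # longest remaining first

    print(pick_lrtf(["P1", "P2"], {"P1": 4, "P2": 2}))  # P1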

Characteristics of longest remaining time first:

Advantages of LRTF:

Disadvantages of LRTF:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on Longest Remaining Time First.

8. Highest Response Ratio Next:

Highest Response Ratio Next (HRRN) is a non-preemptive CPU scheduling algorithm and is considered one of the most favorable scheduling algorithms. As the name states, we compute the response ratio of all available processes and select the one with the highest response ratio. A process, once selected, runs to completion.

Characteristics of Highest Response Ratio Next:

Response Ratio = (W + S) / S, where W is the waiting time of the process so far and S is its burst (service) time.
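
A small sketch of the selection step, with invented numbers, shows how a short job that has waited only a little can overtake a long job that has waited longer:

    # HRRN selection sketch: pick the highest (W + S) / S among ready jobs.
    def pick_hrrn(ready, time):            # ready: (name, arrival, burst)
        def ratio(p):
            _, arrival, burst = p
            waiting = time - arrival       # W: waiting time so far
            return (waiting + burst) / burst   # (W + S) / S
        return max(ready, key=ratio)

    print(pick_hrrn([("P1", 0, 10), ("P2", 4, 2)], time=8))
    # ('P2', 4, 2): ratio (4 + 2) / 2 = 3.0 beats P1's (8 + 10) / 10 = 1.8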

Advantages of HRRN:

Disadvantages of HRRN:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on Highest Response Ratio Next.

9. Multiple Queue Scheduling:

Processes in the ready queue can be divided into different classes, where each class has its own scheduling needs. For example, a common division is foreground (interactive) processes and background (batch) processes. For this kind of situation, multilevel queue scheduling is used.

In the typical arrangement, system processes occupy the highest-priority queue, interactive processes the next one, and batch processes the lowest-priority queue; each process is permanently assigned to one queue.
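
A minimal sketch with two fixed-priority queues and hypothetical job names: the foreground queue is always served before the background queue, and jobs never change queues.

    from collections import deque

    # Multilevel-queue sketch: two queues with strict, fixed priority.
    foreground = deque(["editor", "shell"])    # interactive processes
    background = deque(["backup", "report"])   # batch processes

    while foreground or background:
        if foreground:
            job = foreground.popleft()         # foreground has strict priority
        else:
            job = background.popleft()         # batch runs only when idle
        print("run", job)
    # run editor, shell, backup, report

With strict priority like this, a busy foreground queue can starve the batch queue indefinitely, which is exactly the weakness feedback queues address.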

Advantages of multilevel queue scheduling:

Disadvantages of multilevel queue scheduling:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel Queue Scheduling.

10. Multilevel Feedback Queue Scheduling:

Multilevel Feedback Queue (MLFQ) CPU scheduling is like multilevel queue scheduling, but here processes can move between the queues, which makes it much more flexible and efficient than plain multilevel queue scheduling.
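
Here is a hedged MLFQ sketch: new jobs enter the top queue, and a job that uses up its whole quantum is demoted one level. The number of queues, the quanta, and the job data are all assumptions made for illustration.

    from collections import deque

    # MLFQ sketch: jobs start in the top queue; a job that uses its full
    # quantum is demoted one level. Quanta and job data are invented.
    def mlfq(jobs, quanta=(2, 4, 8)):          # one time quantum per level
        queues = [deque() for _ in quanta]
        for name, burst in jobs:
            queues[0].append([name, burst])    # everything starts at the top
        time, done = 0, []
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)
            job = queues[level].popleft()
            run = min(quanta[level], job[1])
            time += run
            job[1] -= run
            if job[1] == 0:
                done.append((job[0], time))
            else:                              # used the full quantum: demote
                queues[min(level + 1, len(queues) - 1)].append(job)
        return done

    print(mlfq([("P1", 10), ("P2", 3)]))
    # [('P2', 9), ('P1', 13)] -- the short job escapes from behind the long one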

Characteristics of Multilevel Feedback Queue Scheduling:

Advantages of Multilevel feedback queue scheduling:

Disadvantages of Multilevel feedback queue scheduling:

To learn how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel Feedback Queue Scheduling.

Comparison between various CPU Scheduling algorithms

Here is a brief comparison between different CPU scheduling algorithms. As a worked example of the waiting-time formula, if a process P2 arrives at time 15, needs 25 units of execution time, and completes at time 55, then:

Total waiting time for P2 = Completion time - (Arrival time + Execution time) = 55 - (15 + 25) = 15

Video explanation: https://www.youtube.com/watch?v=wO2O3WY5uYc

Please write a comment if you find anything incorrect or you want to share more information about the topic discussed above.

Related resources

  1. 9.2: Scheduling Algorithms

    The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms.

  3. Task Scheduling in Embedded System

    The way that time is allocated between tasks is termed "scheduling". The scheduler is the software that determines which task should be run next. The logic of the scheduler and the mechanism that determines when it should be run is the scheduling algorithm. We will look at a number of scheduling algorithms in this section.

  4. Optimized algorithm to schedule tasks with dependency?

    There are tasks that read from a file, do some processing and write to a file. These tasks are to be scheduled based on their dependencies. Also, tasks can be run in parallel, so the algorithm needs to be optimized to run dependent tasks serially and independent tasks in parallel as much as possible, e.g.: A -> B, A -> C, B -> D, E -> F

  5. (PDF) An Efficient Firefly Algorithm for Optimizing Task Scheduling in

    The various processes in the heuristic-based algorithm of scheduling will result in varied makespans when heterogeneous resources are utilized. As a result, a smart method of scheduling must...

  6. Types of Task Scheduling Algorithms in Cloud Computing Environment

    Task scheduling algorithms are defined as a set of rules and policies used to assign tasks to suitable resources (CPU, memory, and bandwidth) to achieve the highest possible level of performance and resource utilization. Their advantages include managing cloud computing performance and QoS, and managing memory and CPU.

  7. PDF Embedded Systems Task Scheduling Algorithms and Deterministic Behavior

    The task scheduler is the part of the operating system that responds to requests by programs and interrupts for processor attention and gives control of the processor to those processes. The scheduling algorithm is the algorithm followed to decide who gets the next turn on the CPU. The program that does this is called the ...

  8. Generalized Real-time Task Scheduler

    The scheduler used for handling or scheduling all three types of real-time tasks, i.e., periodic, sporadic and aperiodic tasks, is known as a generalized task scheduler. It schedules periodic, sporadic and aperiodic tasks in the most proficient way.

  9. Scheduling in Greedy Algorithms

    Algorithm 1: The first idea is to select events that are as short as possible. In the example, this algorithm selects a particular set of events; however, selecting short events is not always a correct strategy. For example, the algorithm fails in a case where choosing the short event makes it possible to select only one event in total.

  10. Task Scheduling Algorithms in Cloud Computing: A Survey

    Task Scheduling Algorithms in Cloud Computing: A Survey. Abstract: In the cloud architecture, the virtual machines (VMs), data centers, hosts and brokers are combined to establish communication. The most reliable VM is searched for through brokers in order to execute the cloudlet. When working in a cloud environment the aim is to schedule tasks ...

  11. Full article: A task scheduling algorithm for cloud computing with

    Based on this idea, a Genetic Algorithm (GA) is implemented in which a task scheduling solution is represented as a sequence of tasks and evaluated by the proposed algorithm. Recall that the proposed algorithm starts by finding a sequence of tasks to generate an initial solution (see Algorithm 1).

  12. Improved Jellyfish Algorithm-based multi-aspect task scheduling model

    Simulation results suggest that the ADGTS algorithm can simultaneously balance communication cost and task make-span performance. It performs better considering task make-span than the Min-Min method. Authors in [49] proposed a task scheduling algorithm based on a Moth-Flame Optimization (TS-MFO) algorithm.

  13. Scheduling Algorithm

    Basically, the algorithm starts by creating a list of tasks to schedule, from the highest to the lowest priority: (1) proactive failure tolerance, (2) reinitiating of failed tasks, and (3) scheduling of new tasks. Power-optimizing scheduling has the least priority, being performed when there are no other tasks.

  14. Algorithms

    A spatial crowdsourcing (SC) scenario of the multi-objective task scheduling optimization (MOTSO) model. Each task has a location and a time duration, and the server maintains two sets, tasks and workers. The requesters send a query (SC-Query), which includes the task and its constraints, to the SC ...

  15. Task Scheduling

    Explanation. The first task alone can be completed in 2 minutes, and so you won't overshoot the deadline. With the first two tasks, the optimal schedule can be: time 1: task 2. time 2: task 1. time 3: task 1. We've overshot task 1 by 1 minute, hence returning 1. With the first three tasks, the optimal schedule can be:

  16. Task Scheduling Algorithms in Cloud Computing: A Review

    The principal idea behind scheduling is to minimize lost time and workload, and to maximize throughput. So, the scheduling task is essential to achieve accuracy and correctness on task...

  17. Activity or Task Scheduling Problem

    Activity or Task Scheduling Problem. This is the problem of optimally scheduling unit-time tasks on a single processor, where each job has a deadline and a penalty that must be paid if the deadline is missed. A unit-time task is a job, such as a program to be run on a computer, that requires precisely one unit of time to complete.

  18. Task scheduling in fog environment

    An algorithm for task scheduling is proposed that identifies the overall latency based on the priority of each task and allocates the tasks accordingly. A simulation built on CloudSim evaluates the proposed scheduler. The proposed scheme achieves ultralow latency for tasks with high priority, and effectively categorizes tasks by priority. ...

  19. An Efficient Hybrid Job Scheduling Optimization (EHJSO) approach to

    Scheduling tasks, on the other hand, has a broad scope of optimization and greatly contributes to the development of dependable and adaptable dynamic solutions. The majority of cloud computing work scheduling algorithms are rule-based because they are simple to build. Rule-based algorithms perform badly in the preparation of multidimensional jobs.

  20. A back adjustment based dependent task offloading scheduling algorithm

    To this end, we propose a dependent task offloading scheduling algorithm with fairness constraints based on a back adjustment mechanism. First, to solve the execution constraint problem caused by dependent tasks and the scheduling fairness problem in multi-user scenarios, a two-level task sorting algorithm is given to determine the scheduling ...

  21. Genetic job shop machine task scheduling

    A simple job shop machine task scheduling problem. The example that I suggest comprises 3 machines (M1, M2, and M3) and 5 tasks (T1, T2, T3, T4, and T5). The jobs must be processed in a job shop. Each task can be processed on any machine, but there are certain constraints that limit the possible sequence of tasks on each machine.

  22. Parallel task scheduling

    Parallel task scheduling (also called parallel job scheduling or parallel processing scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines while trying to minimize the makespan ...

  23. DRLBTS: deep reinforcement learning-aware blockchain-based healthcare

    This paper proposes a new solution to the above-mentioned issues and presents the deep reinforcement learning-aware blockchain-based task scheduling (DRLBTS) algorithm framework with different ...

  24. Scheduling (computing)

    In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows. The scheduling activity is carried out by a process called a scheduler.
