2014-01-26

# Process scheduling

Process scheduling is a very important problem in operating system design. Each process requires some amount of resources to run, and releases all the resources allocated to it when it completes. Different resource-allocation strategies vary greatly in efficiency, and an unsuitable strategy can even result in deadlock.
Now there are n processes and m kinds of resources. At the beginning, each process has been allocated some amount of each kind of resource. However, the allocated resources might not be enough: each process still needs some extra amount of each kind. You are also given the amount of each kind of resource currently available in the computer. Can you tell whether it is possible to schedule these processes in some order so that all of them can be executed successfully?

The input has four parts.
The first part contains two positive integers n (n <= 50000) and m (m <= 3), the number of processes and the number of resource kinds.
The second part consists of the following m lines, each containing n integers. These integers form the allocation table.
The third part likewise consists of m lines of n integers each, forming the request table.
The fourth part is a final line of m integers, giving the amount of each kind of resource currently available.
You may assume all integers that appear are less than or equal to 10^9.


Sample input:

```
4 3
1 6 2 0
0 1 1 0
0 2 1 2
2 0 1 4
2 0 0 2
2 1 3 0
0 1 1
4 3
2 5 2 0
0 1 1 0
1 1 1 2
1 1 1 4
2 0 0 2
1 2 3 0
0 1 1
```

Sample output:

```
Yes
No
```

The key difficulty here is the input size. With the plain banker's algorithm, the worst-case time complexity is O(N\*N\*M), which clearly times out. The correct approach is to treat each resource separately and build M queues: for each resource, store the processes in ascending order of their request for that resource, so that on each pass only the processes at the head of each queue need to be checked. Sorting costs O(M\*N\*logN), and each process is dequeued at most M times, for a total of O(N\*M\*M).

Unfortunately, HDU's test data is so weak that this algorithm's advantage doesn't really show…

```cpp
#include <cstdio>
#include <cstring>
#include <algorithm>
#include <utility>

using namespace std;

int allo[3][50005], reqs[3][50005], avai[3], indx[3];
pair<int, int> queu[3][50005];   // (request amount, process id), sorted per resource
bool flag[3][50005];             // flag[i][p]: resource i can satisfy process p
int n, m;

int main() {
    int i, j, k, cunt;
    bool sign;
    while (scanf("%d %d", &n, &m) != EOF) {
        // allocation table: allo[i][j] = amount of resource i held by process j
        for (i = 0; i < m; i++)
            for (j = 0; j < n; j++)
                scanf("%d", &allo[i][j]);
        // request table: reqs[i][j] = extra amount of resource i process j needs
        for (i = 0; i < m; i++)
            for (j = 0; j < n; j++) {
                scanf("%d", &reqs[i][j]);
                queu[i][j].first = reqs[i][j];
                queu[i][j].second = j;
            }
        for (i = 0; i < m; i++)
            scanf("%d", &avai[i]);
        // one queue per resource, sorted by request size ascending
        for (i = 0; i < m; i++)
            sort(queu[i], queu[i] + n);
        sign = true;
        cunt = 0;
        memset(indx, 0, sizeof(indx));
        memset(flag, 0, sizeof(flag));
        while (sign) {            // repeat until a full pass makes no progress
            sign = false;
            for (i = 0; i < m; i++) {
                // pop every queue head whose request fits the available amount
                for (j = indx[i]; j < n; j++) {
                    if (queu[i][j].first > avai[i])
                        break;
                    flag[i][queu[i][j].second] = true;
                    for (k = 0; k < m; k++)
                        if (!flag[k][queu[i][j].second])
                            break;
                    if (k == m) {   // all m resources satisfied: process finishes
                        sign = true;
                        cunt++;
                        for (k = 0; k < m; k++) {
                            // reclaim its allocation; cap at 10^9 since no
                            // request exceeds that, avoiding int overflow
                            avai[k] += allo[k][queu[i][j].second];
                            if (avai[k] > 1000000000)
                                avai[k] = 1000000000;
                        }
                    }
                }
                indx[i] = j;      // satisfied heads are never re-examined
            }
        }
        puts(cunt == n ? "Yes" : "No");
    }
    return 0;
}
```

1. On question 2: TCP does not support multicast; multicast and broadcast apply only to UDP, so option B is incorrect.