2014-03-23

At 0th millisecond, task 1 gets the CPU. After running for 1 millisecond, it still needs 0.5 milliseconds to complete.

At 1st millisecond, task 2 gets the CPU. After running for 1 millisecond, it still needs 3.2 milliseconds to complete.

At 2nd millisecond, task 3 gets the CPU. After running for 1 millisecond, it still needs 1.8 milliseconds to complete.

At 3rd millisecond, task 1 comes back to the CPU again. After 0.5 milliseconds of running, it is finished and will never need the CPU.

At 3.5 millisecond, task 2 gets the CPU again. After running for 1 millisecond, it still needs 2.2 milliseconds to complete.

At 4.5 millisecond, it’s time for task 3 to run. After 1 millisecond, it still needs 0.8 milliseconds to complete.

At 5.5 millisecond, it’s time for task 2 to run. After 1 millisecond, it still needs 1.2 milliseconds to complete.

At 6.5 millisecond, it’s time for task 3. It needs 0.8 milliseconds to complete, so task 3 is finished at 7.3 milliseconds.

At 7.3 millisecond, task 2 takes the CPU and keeps running until it is finished.
At 8.5 millisecond, all tasks are finished.
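The timeline above is plain round-robin scheduling with a 1 ms quantum, and it can be checked with a direct slice-by-slice simulation (a minimal sketch; the function name `roundRobinFinish` is ours, not part of the problem):

```cpp
#include <vector>
#include <cmath>
#include <cassert>
using namespace std;

// Naive round-robin simulation with a 1 ms quantum: each unfinished
// task runs for at most 1 ms in turn; a task that needs <= 1 ms
// finishes mid-slice, and the next task starts immediately.
vector<double> roundRobinFinish(vector<double> need)
{
    int n = need.size(), rest = n;
    vector<double> finish(n, 0.0);
    double cur = 0.0;                    // current time in ms
    while (rest > 0)
        for (int j = 0; j < n; j++)
        {
            if (need[j] <= 0) continue;  // already finished
            if (need[j] <= 1.0 + 1e-9)   // finishes within this slice
            {
                cur += need[j];
                need[j] = 0;
                finish[j] = cur;
                rest--;
            }
            else                         // uses its whole 1 ms slice
            {
                need[j] -= 1.0;
                cur += 1.0;
            }
        }
    return finish;
}
```

Feeding it the first sample, `{1.5, 4.2, 2.8}`, reproduces the finish times in the walkthrough: 3.5, 8.5 and 7.3 ms. This naive version is too slow for the real limits (times up to 10^7 ms), which is what the optimizations below address.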

Tuntun decided to make a simple iPhone multi-tasking OS himself, but at first, he needs to know the finishing time of every task. Can you help him?

The first line contains a single integer T, the number of test cases.
The following 2×T lines describe the T test cases. The first line of each test case is an integer N (0 < N <= 100), the number of tasks, and the second line contains N real numbers giving the time each task needs. Times are in milliseconds, greater than 0 and less than 10000000.

For each test case, print "Case k:" on a line by itself, followed by N lines giving the finishing time of each task in input order, with two digits after the decimal point.

2
3
1.5 4.2 2.8
5
3.5 4.2 1.6 3.8 4.4

Case 1:
3.50
8.50
7.30
Case 2:
14.10
17.10
7.60
15.90
17.50

This is a simulation problem, but the data range is large, so a plain millisecond-by-millisecond simulation is too slow; the details need careful handling. The core idea: at each step, find the smallest remaining time among the unfinished tasks and floor it to get min; then subtract (min - 1) from every unfinished task's remaining time. This fast-forwards the clock in one jump and keeps the finish-time bookkeeping simple.
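The fast-forward step in isolation looks roughly like this (a sketch under our own naming; finished tasks are marked by a remaining time of 0 or less). Since every unfinished task has at least min = floor(smallest) milliseconds left, all of them can run min - 1 full 1 ms slices without any task finishing, so the clock can jump by (min - 1) × rest at once:

```cpp
#include <vector>
#include <cmath>
#include <cassert>
using namespace std;

// One fast-forward step: skip floor(smallest)-1 whole rounds.
// Returns the advanced clock; `remaining` is updated in place.
double fastForward(vector<double>& remaining, double cur)
{
    int rest = 0;                 // unfinished tasks
    double mn = 1e18;             // smallest remaining time
    for (double r : remaining)
        if (r > 0) { rest++; if (r < mn) mn = r; }
    int skip = (int)floor(mn) - 1;
    if (rest == 0 || skip <= 0)
        return cur;               // nothing can be skipped this round
    for (double& r : remaining)
        if (r > 0) r -= skip;     // skip full slices for every task
    return cur + (double)skip * rest;
}
```

After the jump, every task still has at least 1 ms left, so one ordinary round of simulation (handling any finishes) follows before the next jump.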

#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cstring>
using namespace std;
double a[110], finish_time[110];
bool done[110];
int main()
{
    //freopen("in.txt","r",stdin);
    int cas, n, rest;
    scanf("%d", &cas);
    for (int i = 1; i <= cas; i++)
    {
        scanf("%d", &n);
        for (int j = 0; j < n; j++)
            scanf("%lf", &a[j]);
        memset(done, 0, sizeof(done));
        memset(finish_time, 0, sizeof(finish_time));
        rest = n;                        // unfinished tasks
        double cur = 0;                  // current time in ms
        while (rest)
        {
            // smallest remaining time among unfinished tasks
            double mn = 1e9;
            for (int j = 0; j < n; j++)
                if (!done[j] && a[j] < mn)
                    mn = a[j];
            // every unfinished task can run floor(mn)-1 full 1 ms
            // slices without anyone finishing: skip those rounds at once
            int skip = (int)floor(mn) - 1;
            if (skip > 0)
            {
                cur += (double)skip * rest;
                for (int j = 0; j < n; j++)
                    if (!done[j])
                        a[j] -= skip;
            }
            // then simulate one real round, slice by slice
            for (int j = 0; j < n; j++)
            {
                if (done[j])
                    continue;
                if (a[j] <= 1 + 1e-9)    // finishes within this slice
                {
                    cur += a[j];
                    finish_time[j] = cur;
                    done[j] = true;
                    rest--;
                }
                else                     // uses its whole 1 ms slice
                {
                    a[j] -= 1;
                    cur += 1;
                }
            }
        }
        cout << "Case " << i << ":" << endl;
        for (int j = 0; j < n; j++)
            printf("%.2f\n", finish_time[j]);
    }
    return 0;
}

The code below handles the min - 1 step differently: it subtracts floor(min) outright, and then back-computes the finish time of any task whose remaining time lands exactly on zero, since that task actually finished during the last skipped round.
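To see why the cur - temp correction works, take two tasks needing {2.0, 3.0} ms: floor(min) = 2, so the clock jumps 2 rounds × 2 tasks = 4 ms, and task 1's remaining time lands exactly on 0. But its last slice ended at 3.0 ms, because task 2's final 1 ms slice still ran after it. In general the task finished temp slices before cur, where temp is the number of still-unfinished tasks scheduled after it in that round. A tiny sketch (the function name is ours; it assumes every task with index > j was still running in that round):

```cpp
#include <cmath>
#include <cassert>

// Finish time of task j when its remaining time hit exactly 0 during
// the last fast-forwarded round of n still-running tasks: the n-1-j
// tasks after it each took one more 1 ms slice before time cur.
double finishInSkippedRound(int n, int j, double cur)
{
    return cur - (double)(n - 1 - j);
}
```

For the {2.0, 3.0} example, `finishInSkippedRound(2, 0, 4.0)` gives 3.0 ms, matching the hand simulation; task 2 then finishes its last 1.0 ms at 5.0 ms.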

#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cstring>
#define efs 1e-6
using namespace std;
double a[110], finish_time[110];
bool done[110];
int main()
{
    //freopen("in.txt","r",stdin);
    int cas, n, rest;
    scanf("%d", &cas);
    for (int i = 1; i <= cas; i++)
    {
        scanf("%d", &n);
        for (int j = 0; j < n; j++)
            scanf("%lf", &a[j]);
        memset(done, 0, sizeof(done));
        memset(finish_time, 0, sizeof(finish_time));
        rest = n;
        double cur = 0;
        while (rest)
        {
            double mn = 1e9;
            for (int j = 0; j < n; j++)
                if (!done[j] && a[j] < mn)
                    mn = a[j];
            // skip floor(mn) whole rounds in one step
            int skip = (int)floor(mn);
            if (skip > 0)
            {
                cur += (double)skip * rest;
                for (int j = 0; j < n; j++)
                    if (!done[j])
                        a[j] -= skip;
                // a task whose remaining time lands exactly on 0
                // finished during the last skipped round: the
                // unfinished tasks after it still ran 1 ms each
                for (int j = 0; j < n; j++)
                {
                    if (done[j] || a[j] > efs)
                        continue;
                    int temp = 0;    // unfinished tasks after j
                    for (int h = n - 1; h > j; h--)
                        if (!done[h])
                            temp++;
                    finish_time[j] = cur - temp;
                    done[j] = true;
                    rest--;
                }
            }
            // one real round for the remaining fractional parts
            for (int j = 0; j < n; j++)
            {
                if (done[j])
                    continue;
                if (a[j] <= 1 + efs)   // finishes within this slice
                {
                    cur += a[j];
                    finish_time[j] = cur;
                    done[j] = true;
                    rest--;
                }
                else
                {
                    a[j] -= 1;
                    cur += 1;
                }
            }
        }
        cout << "Case " << i << ":" << endl;
        for (int j = 0; j < n; j++)
            printf("%.2f\n", finish_time[j]);
    }
    return 0;
}

#!/usr/bin/env python
# cou(n): total number of '1' digits appearing in the numbers 1..n-1,
# built as a running sum of per-number digit counts
def cou(n):
    arr = [0]
    i = 1
    while i < n:
        arr.append(arr[i - 1] + selfcount(i))
        i += 1
    return arr[n - 1]

# selfcount(n): how many '1' digits n itself contains
def selfcount(n):
    count = 0
    while n:
        if n % 10 == 1:
            count += 1
        n //= 10  # integer division (the original Python 2 "/" truncated too)
    return count