Office migrations move the easy workloads first. Email moves to Microsoft 365 or Google Workspace. Files move to SharePoint or Dropbox. The line-of-business application gets swapped for a SaaS that does roughly the same job. The migration tracker shows green ticks. The server in the closet is scheduled for decommission. The team feels good.
The workload that does not move easily, and that the migration tracker is the worst at representing, is the scheduled task. Scheduled tasks live in Windows Task Scheduler on the office server, run overnight, produce an output that someone downstream depends on, and emit no signal except the output itself. There is no dashboard, there is no log that anyone reads, and there is no error report when the task quietly stops working. The signal that something has gone wrong is the absence of the output, and absence is the hardest signal for most monitoring stacks to recognize.
The pattern we see in office migrations is that the scheduled tasks get migrated late, get migrated under time pressure, and get monitored thinly. The team copies the script to a cloud scheduler, confirms that the scheduler runs it on the right schedule, and moves on. The cutover happens, the server is decommissioned, and the scheduled task is presumed to be working. Three months later, somebody downstream notices that they have not received the file in a while, asks the office about it, and the office discovers that the task has been failing every night since the migration.
The reason the failure is invisible is that cloud schedulers report "did the function run" rather than "did the function do the work." Every nightly run shows up as a green tick in the scheduler's job history. The function returned without throwing. The function did not actually produce the output, because the credentials it was using expired, or the network path it was writing to changed, or the upstream data source moved, or the schedule itself was set in UTC when it needed to be set in local time and the work skipped a window. The scheduler is happy. The downstream consumer is not.
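The mechanics are easy to reproduce. Below is a minimal sketch of how a nightly handler can show a green tick while producing nothing; the function names and the failure (a dead network path) are hypothetical, but the shape, a broad exception catch intended to keep the job "robust", is the common one:

```python
import logging

def export_report():
    # Hypothetical nightly job: fetch data and write the output file.
    # After the migration, the old network path no longer exists,
    # so the work fails every night.
    raise FileNotFoundError("//old-server/reports no longer exists")

def nightly_handler():
    """Entry point the cloud scheduler invokes each night."""
    try:
        export_report()
    except Exception:
        # The catch-all swallows the failure. The handler returns
        # normally, so the scheduler's job history records success.
        logging.exception("export failed")
    return "ok"  # the scheduler sees this, not the missing file
```

Nothing in the scheduler's view distinguishes this run from a working one. The only artifact that differs is the output, which the scheduler never looks at.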
The working pattern for office migrations that survive their scheduled tasks is to monitor each migrated task for the absence of its expected output, not for the presence of an error in its logs. The expected output is concrete. It is a file in a folder, an email in a mailbox, a row in a database, a record in a SaaS that the task is supposed to insert. A monitor that knows the expected output and the expected schedule can fire when the output is late or missing, regardless of whether the underlying scheduler reported success.
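For the common case where the expected output is a file, the check is a few lines. This is a sketch, assuming the monitor runs from its own schedule shortly after the task's window closes; the path and the freshness window are placeholders:

```python
import time
from pathlib import Path

def output_is_fresh(path: str, max_age_hours: float) -> bool:
    """Return True if the expected output exists and was written
    within the last max_age_hours. False means the task is late
    or silently failing, regardless of what the scheduler reports."""
    p = Path(path)
    if not p.exists():
        return False
    age_seconds = time.time() - p.stat().st_mtime
    return age_seconds <= max_age_hours * 3600
```

A monitor like this runs on a separate schedule from the task it watches and fires an alert when the check returns False. The same shape works for the other output types: query the mailbox, the database table, or the SaaS API for the newest record and compare its timestamp against the expected window.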
We add this monitoring as a deliverable for every scheduled task we migrate. The overhead is small. The protection is real. The first time a monitor catches a task that has been silently failing for a week, the cost of installing the monitor on the whole fleet has paid for itself many times over.
The office migrations that go badly are not the ones where the email cutover bumped a day. Those are visible and recoverable. The ones that go badly are the ones where a scheduled task has been silently failing for a quarter and the office is the last to know, usually because a regulator or a vendor or a customer has noticed first. The absence-watch is the protection against that failure mode, and the migration is the right time to install it.
This is a guest post from the team at Modern Serverless, who migrate offices off on-premises servers and onto modern cloud software. The work covers the audit, the replacement selection, the migration itself, and the decommission of the old server.