Events:

2011-12-24 at 02:37
Due to a recently discovered flaw in one of the major telnet implementations, all telnet logins to PDC have been disabled.
2011-12-20 at 14:39 [xxx (Ferlin)]
The login nodes of Ferlin have been restarted due to overload from heavy use around lunch today.
2011-12-19 at 18:32
The network problem has now been resolved. A failing cable between two switches was identified and replaced.
2011-12-19 at 18:08
We currently have some network problems affecting a number of servers, among them some file servers that host data for the queuing systems on our resources. Investigation is in progress.
2011-12-17 at 00:00
At around 23:17 earlier tonight there was a dip in access to /cfs/klemming, and quite a few jobs were likely affected.
2011-12-09 at 22:25 [xxx (lindgren)]
The system has been restarted and has been running jobs for about 20 minutes.
2011-12-09 at 21:31 [xxx (lindgren)]
We seem to have an issue with the interconnect; rather than pursuing a fault search at this hour, we will bounce and restart the system.
2011-12-09 at 14:10
An AFS server associated with Povel is misbehaving. This might cause problems for jobs and users using storage volumes on that server.
2011-12-05 at 11:30 [xxx (lindgren)]
There has been a series of continuing problems with jobs overnight. Many stale jobs have now been purged, as have the few remaining running jobs. Job starts, as well as submissions, now seem to be working again.
2011-12-04 at 23:23 [xxx (lindgren)]
Last evening a large number of jobs were submitted to the system while few (in fact zero) other jobs were eligible. This made the scheduler hit its limit on the number of jobs to consider for starting, and jobs submitted afterwards were not considered. The limit has been raised (from 4k to 8k jobs), and jobs from all users are now being considered again.
2011-11-30 at 11:29 [xxx (Ferlin)]
The Ferlin login node suddenly stopped working. It has been restarted and is available again.
2011-11-26 at 11:35 [xxx (Ekman)]
You may currently observe a decreasing number of available nodes on the cluster. This is due to maintenance work on servers at the CSC school, which delays the re-use of nodes for new jobs. Running jobs are not affected and continue to work correctly.
2011-11-26 at 11:31 [xxx (Ferlin)]
You may currently observe a decreasing number of available nodes on the cluster. This is due to maintenance work on servers at the CSC school, which delays the re-use of nodes for new jobs. Running jobs are not affected and continue to work correctly.
2011-11-24 at 22:18 [xxx (lindgren)]
The system has now been restarted and is running jobs again.
2011-11-24 at 20:55 [xxx (lindgren)]
Correction: it was not only the router nodes that were in bad shape. We will need some time to figure out what went wrong before we can try to bring the system back on-line.
2011-11-24 at 20:43 [xxx (lindgren)]
At roughly 19:20 both router nodes connecting lindgren with /cfs/klemming failed. They are being restarted now. Access to files under /cfs/klemming/ is stuck, and your jobs may well have been affected.
2011-11-09 at 13:14 [xxx (lindgren)]
One of the network nodes connecting lindgren to /cfs/klemming got overloaded ~30 minutes ago and was just restarted. Normal access to /cfs/klemming should soon be resumed.
2011-10-29 at 12:32 [xxx (Ekman)]
Ekman's login server (ekman.pdc.kth.se) will be restarted within a few minutes to avoid potential problems from an earlier hardware stutter.
2011-10-26 at 16:54 [xxx (lindgren)]
Service maintenance finished; the system is open for use again.
2011-10-29 at 10:00
This affects users with their $HOME in /afs/nada.kth.se/... or who in other ways depend on CSC infrastructure. --- Maintenance work on some CSC servers (AFS cell nada.kth.se) will be performed on Saturday, October 29, starting at 10 am. Forwarded from the CSC Systems Group; further questions to system@csc.kth.se, thanks.
2011-10-26 at 10:14 [xxx (lindgren)]
We will soon start the announced maintenance of lindgren. The system will be unavailable for large parts of today. We will announce when it is open for general use again.
2011-10-25 at 18:29 [xxx (lindgren)]
A suspect piece of network equipment between lindgren and klemming was found and re-seated. Job starts are enabled again.
2011-10-25 at 15:50 [xxx (lindgren)]
For about 60 minutes there have been severe problems between /cfs/klemming/ and several computational systems, lindgren included. Access to files under /cfs/klemming/ mostly hangs.
2011-10-25 at 12:07 [xxx (Ekman)]
The disturbance reported last night has been resolved. A severe hard-disk failure on the scheduler node could not be handled even by the redundant disks. The scheduler has been reinstalled on other hardware, and operations resumed half an hour ago. A few of the previously submitted jobs may have experienced problems during startup. E-mails should have been sent out for affected jobs.
2011-10-24 at 23:36 [xxx (Ekman)]
Resending, as the previous message was grey-listed: Most likely the machine running the scheduler has a severe failure on its disk(s). No running jobs should be affected, but no changes to jobs can take effect. You can still submit/cancel jobs, but you will not see any action take place until the hardware has been repaired or the service moved to another machine.
2011-10-24 at 23:28 [xxx (Ekman)]
Most likely the machine running the scheduler has a severe failure on its disk(s). No running jobs should be affected, but no changes to jobs can take effect. You can still submit/cancel jobs, but you will not see any action take place until the hardware has been repaired or the service moved to another machine.
2011-10-19 at 09:17 [xxx (lindgren)]
The maintenance work planned for tomorrow, Thursday, October 20, has been postponed. The next slot is Wednesday, October 26, at 10:30.
2011-10-17 at 14:13 [xxx (lindgren)]
Maintenance work is planned for Thursday, October 20, starting around 10:30. The system will go off-line.
2011-10-10 at 14:44 [xxx (Ferlin)]
The login node of Ferlin had to be restarted, very likely due to some parallel applications that were accidentally started on the login node.
2011-10-06 at 12:04 [xxx (Ekman)]
The update work on Ekman is ongoing. It turned out that we have to do more checks on the file servers. For this reason, and because operations on the large file system generally take a long time, we can already see that the filesystem will not be available until tonight. Ekman will likely become available in the course of Friday.
2011-10-06 at 09:17 [xxx (Ekman)]
Batch processing on Ekman has now been stopped for today's system maintenance. Access to the login nodes will soon be stopped as well.
2011-10-04 at 11:08 [xxx (lindgren)]
The node exporting dsl (dynamic shared libraries) is stuck and is being restarted. Jobs relying on dsl have likely experienced problems.
2011-09-25 at 00:42 [xxx (lindgren)]
We had an extended dropout between the lindgren compute nodes and /cfs/klemming during the past hour. Several jobs terminated during that time and likely failed (roughly 1/3 of all running jobs). The file system and lindgren can now reach each other again.
2011-09-23 at 22:17 [xxx (Ferlin)]
System operation will be paused for a system upgrade on 2011-10-10, 07:00-16:00. The operating system will be upgraded to CentOS 5.7 during this break.
2011-09-23 at 22:13 [xxx (Ekman)]
System operation will be paused for a system upgrade on 2011-10-06, 07:00-16:00. The operating system will be upgraded to CentOS 5.7 during this break. Furthermore, quotas will be turned on on the Lustre file system.
2011-09-19 at 20:15 [xxx (Ferlin)]
The login node "ferlin.pdc.kth.se" failed - very likely with a damaged harddisk. We continue the operation with the alternative login node "ferlin2.pdc.kth.se" that will be made available also under the name "ferlin.pdc.kth.se". The defect of this server implies that all jobs that wait in the queue and have been submitted from "ferlin.pdc.kth.se" would fail during the start. We will remove them therefore from the queue.
2011-09-15 at 00:48 [xxx (lindgren)]
The somewhat extensive maintenance window is over. Lindgren is now available and running jobs again.
2011-09-14 at 09:56 [xxx (lindgren)]
The maintenance window will start in minutes. Lindgren will be taken down without further notice.
2011-09-12 at 10:43 [xxx (Ekman)]
All jobs on Ekman have been terminated to ensure secure system operation, as noted in an earlier flash news. Overnight the Lustre fill level reached >= 97%. Please free disk space when possible. Batch system processing will be restarted at a fill level of < 95% (as reported by the command "df"). Please continue to free disk space to avoid new stops of operation.
2011-09-12 at 10:26 [xxx (lindgren)]
Service window on Lindgren, Wed Sep 14, 10:00. We have set a service window for the coming Wednesday, starting at 10:00. No jobs will be allowed to start if they would need to execute across that time. We will perform service on the external /cfs/klemming/ Lustre hardware, replacing a few pieces that have malfunctioned, and also service a few of the compute nodes of Lindgren itself.
2011-09-12 at 09:34 [xxx (Ekman)]
All jobs on Ekman have been terminated to ensure secure system operation. Overnight the Lustre fill level reached >= 97%. Please free disk space when possible.
2011-09-11 at 23:07 [xxx (Ekman)]
The Lustre problem was solved earlier this evening. The batch system remains stopped due to the fill level of >= 96%. Running jobs can continue to work overnight.
2011-09-11 at 16:38 [xxx (Ekman)]
We are facing a decreasing number of available compute nodes due to problems with the Lustre filesystem. In addition, the batch system has been stopped because the file system has again reached a fill level of 96%.
2011-08-31 at 06:47 [xxx (Ekman)]
Batch system processing has been restarted. For more information, please contact the representatives of your respective user groups; more detailed information was provided to them last night.
2011-08-30 at 22:53 [xxx (Ekman)]
The start of new jobs has been postponed due to a 95% fill level in the Lustre file system.
2011-08-30 at 22:00 [xxx (Ekman)]
Maintenance is planned for Wednesday, September 7, 2011, 06:00 - 16:00. Purpose: Update of the compute nodes from CentOS 5.5 to CentOS 5.6.
2011-08-30 at 14:13 [xxx (Ferlin)]
The queue on Ferlin is currently stalled. We are looking into the issue.
2011-08-30 at 11:41 [xxx (Ekman)]
No errors were found during the filesystem check and the queue has now been started again. The problem was similar to the last time and to a known Lustre problem. We will schedule an upgrade of the fileservers soon and combine it with an update of the OS on the rest of Ekman.
2011-08-30 at 10:23 [xxx (Ekman)]
No new jobs are currently allowed to start on Ekman due to problems with a fileserver for /cfs/ekman. Expected downtime is around 30-45 min to run a filesystem check to make sure no data has been damaged.
2011-08-25 at 22:17 [xxx (lindgren)]
The service node exporting the dsl environment went down earlier tonight and is being restarted. Jobs requiring dsl might have experienced problems. As this is the second time this summer, we will likely increase the number of nodes exporting dsl at the next maintenance break, some time after the summer school and PRACE schools.
2011-08-12 at 15:22 [xxx (lindgren)]
/cfs/klemming is on-line, jobs allowed to start again.
2011-08-12 at 07:43 [xxx (lindgren)]
We seem to have an issue with /cfs/klemming (operations freeze). No new jobs will be allowed to start while we investigate.
2011-08-08 at 17:55 [xxx (Ekman)]
The queue on Ekman has now been started again. One of the file servers had gotten into a strange state and needed a full reboot to come to its senses about the state of its filesystems. The filesystem check went fine, and no files seem to have been harmed.
2011-08-08 at 15:50 [xxx (Ekman)]
The Lustre file system of Ekman has problems: it is not possible to write to one of the disk servers. We are now starting a check and recovery procedure. For that reason some files may be unavailable, and some jobs may be affected. We will give an update on the situation in about an hour. The start of new batch jobs remains stopped until then.
2011-08-07 at 21:57 [xxx (lindgren)]
The system has been restarted and is running again.
2011-08-07 at 20:57 [xxx (lindgren)]
As announced earlier, we will stop and restart the system now.
2011-08-06 at 23:31 [xxx (lindgren)]
We likely hit a pbs_mom and/or torque and/or alps limit a couple of hours ago when a very large number of 1-node jobs came in. No new jobs can currently start. We will likely let running jobs finish and restart the system within the next 24 hours.
2011-07-24 at 15:22 [xxx (lindgren)]
The node exporting the DSL environment went down at about 08:35 this morning. It has now been restarted. Jobs depending on dsl are expected to have failed. As compute nodes are flagged down when lacking dsl, most jobs were not allowed to start.
2011-07-23 at 18:39 [xxx (Ekman)]
The start of new batch jobs has been stopped due to problems accessing the servers that provide account information to the cluster's batch system.
2011-07-23 at 18:39 [xxx (Ferlin)]
The start of new batch jobs has been stopped due to problems accessing the servers that provide account information to the cluster's batch system.
2011-07-18 at 15:08 [xxx (lindgren)]
Klemming is now back on-line (with reduced bandwidth and redundancy for now) and the queue on Lindgren has been re-activated.
2011-07-18 at 07:59
/cfs/klemming is unavailable. No jobs will be allowed to start on lindgren.
2011-06-29 at 16:22 [xxx (Ekman)]
Status update for the cluster Ekman: the repair of the Lustre filesystem was completed without data loss. Batch processing has been started again.
2011-06-29 at 03:43 [xxx (lindgren)]
Informational: user access to lindgren has been enabled again after the upgrades/updates. Please report anomalies; we anticipate a few.
2011-06-28 at 09:50 [xxx (Ekman)]
Status update for the cluster Ekman: the first part of the filesystem rebuild was successful. The second half of the rebuild, which will hopefully correct the errors from another failed disk, is now running. Judging from the repair time needed so far, we expect to be back in operation tomorrow around lunch.
2011-06-28 at 06:21 [xxx (lindgren)]
Informational: as announced, we will start the upgrade/update of lindgren now. The system will soon go off-line.
2011-06-27 at 11:44 [xxx (Ekman)]
Status update for the cluster Ekman: the rebuild of the Lustre filesystem was started in the morning. This process can take the whole day. If it works well and all data can be rebuilt, normal operation can be expected again tomorrow.
2011-06-26 at 22:47 [xxx (Ekman)]
Status update for the cluster Ekman: repair work on the Lustre filesystem will be done in the course of Monday (2011-06-27). You can access your volumes in the AFS filesystem via the login node "ekman.pdc.kth.se" and the staging node "ekman-rsync.pdc.kth.se". The batch system remains stopped for the time being.
2011-06-24 at 23:20 [xxx (Ekman)]
Status update for the cluster Ekman: after multiple disk failures in Ekman's storage servers yesterday and today, all attempts to keep the Lustre file system operational without a service interruption have failed. The repair will continue after the weekend. For the time being no new jobs will be started.
2011-06-24 at 19:23 [xxx (Ekman)]
The Lustre file system on Ekman is not working correctly at the moment. The scheduling of new batch jobs has therefore been stopped for the time being.
2011-06-20 at 07:40
The effects of the Kerberos server problems that started on 19/6 should now be contained, with the exception that password changes are currently not possible.
2011-06-19 at 12:03
We currently have a problem with one of the servers handling the authentication of logins to PDC's resources. We are working on resolving this.
2011-06-17 at 21:35
Our mail server seems to have had problems handling mail since roughly 14:00 today. It has now been restarted and is quite busy processing overdue mail.
2011-06-16 at 21:49 [xxx (lindgren)]
The system is on-line again. The direct cause of the module failures has been found. There is a small risk that there is an unknown indirect cause as well, but we think not.
2011-06-16 at 13:52 [xxx (lindgren)]
Informational: we have disabled all new logins to lindgren. The situation is being investigated locally as well as from overseas. We will send an update within the next 24 hours.
2011-06-16 at 07:33 [xxx (lindgren)]
Re-issued flash: the module/interconnect problem of yesterday occurred again overnight. No new jobs will be spawned, and we will start gathering information on the fault.
2011-06-16 at 07:16 [xxx (lindgren)]
The module/interconnect problem of yesterday occurred again overnight. No new jobs will be spawned, and we will start gathering information on the fault.
2011-06-15 at 14:48 [xxx (lindgren)]
lindgren is on-line and available for use again.
2011-06-15 at 06:11 [xxx (lindgren)]
Likely due to a switch fail-over last night, the internal Lustre file system /cfs/emil got stuck overnight. A restart will be initiated now.
2011-06-04 at 15:30 [xxx (lindgren)]
We have now restarted lindgren. The CCM queue is disabled. Please report anomalies to support@pdc.kth.se.
2011-06-04 at 14:44 [xxx (lindgren)]
During a reboot of a set of down compute nodes, we seem to have lost the fast interconnect. We will restart the system now.
2011-05-31 at 17:20 [xxx (Ferlin)]
The ferlin login node (ferlin) is being rebooted.
2011-05-27 at 13:51 [xxx (lindgren)]
System on-line for login again. We will resume batch-jobs within minutes.
2011-05-27 at 13:05 [xxx (lindgren)]
As announced, lindgren is going off-line now.
2011-05-23 at 14:45 [xxx (Ferlin)]
There is now a second login node available in the cluster Ferlin. Its name is "ferlin2.pdc.kth.se".
2011-05-23 at 11:57 [xxx (Ferlin)]
High load, suspected to be due to intense I/O activity, was again observed on the login node. The node got into a state that made a reboot necessary. After the restart, work can now continue. In the course of the afternoon we will provide a second login node so that you can choose between different nodes.
2011-05-20 at 22:47 [xxx (lindgren)]
Lindgren is now on-line and executing jobs again. No analysis of the cause has been made.
2011-05-20 at 21:54 [xxx (lindgren)]
Lindgren is behaving sluggishly. We will likely reboot the system shortly. We would rather get the system back on-line than initiate an extensive fault search.
2011-05-19 at 18:15 [xxx (lindgren)]
We have a partial failure in /cfs/rydqvist affecting a limited set of users. These have been informed separately.
2011-05-17 at 15:56 [xxx (lindgren)]
Informational; lindgren is available for use again.
2011-05-17 at 10:09 [xxx (lindgren)]
Informational; lindgren is now being brought down for today's maintenance.
2011-05-05 at 15:20 [xxx (lindgren)]
Informational; lindgren is on-line again.
2011-05-05 at 10:01 [xxx (lindgren)]
Informational; we will soon stop lindgren for the planned maintenance.
2011-04-29 at 15:09 [xxx (Ekman)]
Access to the cluster Ekman is possible again. For information about changes, see the e-mail to the VagnEkman mailing list.
2011-04-29 at 11:06 [xxx (Ekman)]
Reminder: access to the login node (ekman.pdc.kth.se) and the staging node (ekman-rsync.pdc.kth.se) is temporarily not possible due to installation work.
2011-04-28 at 17:00 [xxx (lindgren)]
Informational; lindgren has been started again after the second maintenance stop.
2011-04-28 at 12:22 [xxx (Ekman)]
At about 14:00 we will stop staging on the host ekman-rsync.pdc.kth.se for installation work on the network connection.
2011-04-28 at 10:09 [xxx (lindgren)]
Informational; we will stop lindgren for the second maintenance now.
2011-04-26 at 15:11 [xxx (lindgren)]
Informational; lindgren started again.
2011-04-26 at 10:43 [xxx (lindgren)]
Informational; we will stop lindgren for maintenance now.
2011-04-21 at 23:20 [xxx (Ferlin)]
Today we started an upgrade of the operating system on the cluster Ferlin to CentOS 5.5. Based on our experience with the update of other systems, your programs should work afterwards as before. The procedure: since earlier this evening we are not starting new jobs. All nodes that become free after their running jobs terminate will be set to service status and reinstalled. Tomorrow (expected around lunch) we will resume starting new jobs on the updated nodes. The update of the remaining nodes will continue over the weekend, during which time the number of available nodes will be reduced. Access to the login node and the interactive nodes will be disabled tomorrow during their respective update windows.
2011-04-20 at 11:50 [xxx (Ferlin)]
Access to the login node of Ferlin is possible again.
2011-04-20 at 11:07 [xxx (Ferlin)]
Access to the login node of Ferlin is not possible at the moment.
2011-04-12 at 23:36 [xxx (Ekman)]
The update of the operating system is ongoing. Access to the login node has now been disabled.
2011-04-06 at 12:51 [xxx (Ferlin)]
Login to the cluster Ferlin (ferlin.pdc.kth.se) is possible again. We were able to repair the problem with accessing job data in the filesystem that caused the login node crashes.
2011-04-06 at 09:59
Informational, affecting support at all SNIC sites: due to maintenance on the electrical systems at the site hosting RT, the RT support server will be out of service between 09:00 and 11:00 on Wednesday, April 6. Our apologies for forwarding this information so late.
2011-04-06 at 09:47 [xxx (Ferlin)]
The login node of Ferlin (ferlin.pdc.kth.se) has had problems with file system access and has been crashing since 09:37. We are investigating the problem and will inform you whether it will be restored or replaced by another system.
2011-04-05 at 19:12 [xxx (Ferlin)]
Informational; between about 15:00 and 15:20 earlier today we experienced timeouts between one AFS server and mostly Ferlin compute nodes. Aside from leaving a stale lock file (for node allocations), this might have had an impact on running jobs.
2011-04-01 at 12:14
Informational: over the past week, parts of KTH at large have been the target of what are often called (D)DoS attacks. For example, between 03:00 and 06:00 this morning, 10 out of 10 Gbit/s into KTH were flooded, and after blocking parts of the Internet around 06:00 this morning, 'only' 9 out of 10 Gbit/s is DoS traffic. The impact on users of PDC resources can be none at all, reduced/disturbed, or on occasion completely blocked, depending on where on the Internet you are located and the current rate of DoS traffic. KTHLAN and NUNOC/SUNET are working on this; see e.g. http://www.nunoc.net/nunocweb/open_trouble_tickets.html, SUNET ticket 1240.
2011-03-27 at 08:46 [xxx (lindgren)]
All running jobs were removed last night (2011-03-26) while freeing up space on /cfs/emil. Details have been sent to the lindgren-users list.
2011-03-25 at 21:56 [xxx (Ekman)]
At least one site-wide culprit server has been identified. Information is being propagated to all compute nodes and jobs are able to start again, without being pushed back due to time-outs.
2011-03-25 at 21:04 [xxx (Ekman)]
For the past couple of hours all job starts have failed due to timeouts. Whether this is related to (PDC-internal) network problems, file-system problems, or other infrastructure problems (name servers, KDCs, ...) has not yet been determined.
2011-03-25 at 15:19 [xxx (Ferlin)]
The batch system is working well again.
2011-03-25 at 13:37 [xxx (Ferlin)]
Due to a problem with a fileserver no jobs can be started at the moment.
2011-03-16 at 15:24
The problem with the overloaded file server has been identified, and job starts have resumed on ferlin and ekman. The problem is not solved, but has been temporarily worked around.
2011-03-16 at 12:26
Job starts on systems ekman and ferlin paused while we investigate network and/or file-server problems.
2011-03-11 at 15:52 [xxx (Ferlin)]
The login node of Ferlin needs to be restarted because of problems accessing the system with telnet/ktelnet.
2011-03-04 at 12:59
We have general problems with networks/routers. Things have been partially worked around, and we are working on getting the remaining systems reachable.
2011-03-03 at 11:05 [xxx (lindgren)]
Informational: the 1/4 of the compute nodes on lindgren set aside for measurements are now back in the batch pool.
2011-03-03 at 10:02 [xxx (lindgren)]
Informational: staff are running a heat-generating application on 1/4 of the system.
2011-03-03 at 10:02
Informational: staff are running a heat-generating application on 1/4 of the system.
2011-02-19 at 17:46 [xxx (Ferlin)]
The nodes in the Ferlin cluster have been updated with new packages that close a potential security hole. All running jobs had to be stopped because a node reboot was necessary after the update to make it effective. The cluster is now back in operation.
2011-02-19 at 12:36 [xxx (Ferlin)]
An update of the cluster to fix a security exploit is ongoing. For that reason, access to the system has been disabled.
2011-02-19 at 12:34 [xxx (Ekman)]
The system is back in operation.
2011-02-19 at 10:12 [xxx (Ekman)]
An update of the cluster to fix a security exploit is ongoing. For that reason, access to the system has been disabled. More details will be distributed over the user mailing list after the update.
2011-02-18 at 15:26 [xxx (lindgren)]
Login to lindgren has been enabled again.
2011-02-18 at 14:45 [xxx (lindgren)]
We will start the announced patching of the lindgren login node shortly.
2011-02-14 at 09:17 [xxx (lindgren)]
We are rebooting one of the lindgren service nodes that runs jobs through pbs_mom; all jobs on that node will be lost.
2011-02-08 at 13:25 [xxx (Hebb)]
Hebb is currently down due to problems with the parallel file system, /gpfs/scratch, again. No more jobs will start until we have figured out what the actual problem is.
2011-02-04 at 20:32 [xxx (Hebb)]
Due to some network problems the GPFS filesystem was down for a while, which meant that jobs couldn't run. It is now up again but with reduced performance due to one broken server.
2011-02-02 at 18:34
Informational, forwarded for CSC-affiliated users: maintenance work on some CSC servers will be performed on Saturday, February 12, starting at 10 am. Most computers at CSC will be heavily affected during this time. Services like email and www will also be affected.
2011-02-02 at 18:04 [xxx (lindgren)]
Informational; the lindgren service window is closed and the system is back on-line again.
2011-02-02 at 09:24 [xxx (lindgren)]
Informational; the announced service window on lindgren will soon be in effect.
2011-01-24 at 12:31
The AFS server pompano is now back in business! If you still experience any continued problems (login or file access related), please contact support@pdc.kth.se
2011-01-24 at 10:36
The AFS server pompano has been having trouble this morning. Users who have their AFS volumes on this server will have noticed problems accessing home directories and/or project volumes located there. These problems should all go away shortly. Sorry for the inconvenience.