News

Winter 2004 Changes

6 Jan 2004:

  1. Workstation Changes

    Thanks to the hard work of lab assistants David Wyman and George Orriss, who gave up part of their winter break, the lab workstations were updated in time for the first day of classes, despite inclement weather.

    1. Database clients

      Students in TCSS445 and TCSS545 can access database client software for these DBMS products: Oracle 9.0.2, SQL Server 2000 SP3a, and DB2 8.1 FP4.

      Separate database accounts were created even though most of these DBMSes support Windows authentication with your login account. We chose this mechanism so that you can use JDBC connection strings with a plain-text password that is not your Windows login password, thereby protecting your login password from anyone eavesdropping on the network.

      Your passwords can be found in the ~/.pw file on cssgate; you'll need to log in to cssgate to view them.
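
      For example, here is a minimal sketch of opening a JDBC connection to the lab's Oracle server using your separate database password. The driver class is the standard Oracle 9i thin driver, but the host name, SID, account name, and password below are illustrative placeholders, not the lab's actual values -- substitute your own account and the password from your ~/.pw file.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class JdbcExample {
            public static void main(String[] args) throws Exception {
                // Load the Oracle thin driver (shipped with Oracle 9i as classes12.jar).
                Class.forName("oracle.jdbc.driver.OracleDriver");

                // Use your separate database password here, never your
                // Windows login password.
                Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost.example.edu:1521:ORCL", // placeholder host/SID
                    "your_db_account",                                // placeholder account
                    "your_db_password");                              // from ~/.pw on cssgate

                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SELECT 'connected' FROM dual");
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
                rs.close();
                stmt.close();
                conn.close();
            }
        }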

    2. Poseidon UML and ArgoUML

      The Poseidon UML Community Edition v. 2.1.2 tool has been installed on all workstations; it was previously available only in SCI106. Dr. Chung found that ArgoUML was not a very robust tool, so he asked us to install Poseidon instead; ArgoUML has been removed.

      Please note that Poseidon may prompt you to register -- you can cancel the registration and still use Poseidon.

    3. SAS Enterprise Miner

      SAS is licensed for 24 copies. It is available only on the Dell 330s in SCI106/108 and should be used only for TCSS545.

    4. Visual Studio .Net

      We believe we have fixed the problem that prevented you from easily creating an executable because you lacked debugging or administrative privileges.

    5. Eclipse Now Has UML Support

      A UML component for the popular Eclipse IDE was installed.

    6. BlueJ

      BlueJ was updated to version 1.3.5.

  2. Repository Server Is Now Clustered

    Cormac successfully installed and configured the Kimberlite clustering software on Linux, which we hope will increase the availability of the repository server (cssgate). The cluster relies on software and hardware that track the state of both computers participating in this failover pair; both computers share a common data store.

    Normally, we are running on the primary node (called turing). If it fails due to a hardware problem, the secondary node (called vonneumann) should detect this, turn off turing's power to ensure data integrity for the shared data, and assume control of the 128.208.250.8 IP address and the domain name(s) associated with it (e.g., cssgate.tacoma.washington.edu).
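
    To make that sequence concrete, here is a conceptual sketch (in Java, purely for illustration -- this is not Kimberlite's actual code or configuration) of what the secondary node does: watch the primary's heartbeat and, after enough missed beats, fence the failed node before claiming the shared address. The fenceNode and takeOverAddress methods are hypothetical placeholders for the power-switch and IP-takeover steps.

      import java.io.IOException;
      import java.net.InetAddress;

      public class FailoverMonitor {
          private static final int MISSED_BEATS_LIMIT = 3;  // beats missed before failing over
          private static final int INTERVAL_MS = 5000;      // time between heartbeat checks

          public static void main(String[] args) throws Exception {
              InetAddress primary = InetAddress.getByName("turing");
              int missed = 0;
              while (true) {
                  // isReachable() stands in for the cluster's real heartbeat channel.
                  boolean alive;
                  try {
                      alive = primary.isReachable(2000);
                  } catch (IOException e) {
                      alive = false;
                  }
                  missed = alive ? 0 : missed + 1;
                  if (missed >= MISSED_BEATS_LIMIT) {
                      // Cut the failed node's power first so it cannot keep
                      // writing to the shared data store, then take over the
                      // service address so clients can still reach "cssgate".
                      fenceNode("turing");
                      takeOverAddress("128.208.250.8");
                      break;
                  }
                  Thread.sleep(INTERVAL_MS);
              }
          }

          // Hypothetical placeholder: trigger the remote power switch.
          private static void fenceNode(String node) { }

          // Hypothetical placeholder: bring up the shared IP on this node.
          private static void takeOverAddress(String ip) { }
      }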

    From your point of view, the system will look like it has crashed -- you will lose what you were working on; all running processes, including any Tomcat instances, will be killed. This is because the primary node (turing) really has crashed. However, within about five minutes you can log in to cssgate again, recover from the failover (e.g., re-install your tomcat/JSP project), and continue working. This is because the old secondary node (vonneumann) is now functioning as cssgate, and is therefore now the primary node.

    When we are on-site, we will investigate the cause of the crash, fix it, and eventually bring the failed node (in this case, turing) back into the cluster. If the primary node fails again later, the cluster will switch to the secondary node, we will investigate again, and so on. If we can do it with little impact, we may force a failover after the investigation so that turing is always the primary.

    The idea is to let you continue working -- although not transparently -- until we get a chance to investigate the source of the failure, since failures may occur when the lab is not staffed (e.g., nights, weekends, and holidays).

    We do know of a couple of problems with the clustering, which may affect the high availability we had anticipated. Please be patient with us as we try to diagnose and solve them.

    Consequently, we do not consider this cluster to be rock-solid, but we think it is better than relying on a single computer whose hardware has failed in the past. We will attempt to improve availability even further in the next few months by removing the single point of failure for the shared data and by adopting a new data storage technology called iSCSI (if it works!).

