Product Documentation

Now, prepare the DRB node by installing the necessary software stack.

  1. Install Rocky Linux 9.x using the kickstart ISO image.
  2. Log into the appliance as the root user.
  3. Configure networking: /etc/sysconfig/network-scripts/ifcfg-*, /etc/resolv.conf, /etc/hosts, etc.
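    As an illustration only (the interface name, addresses, and DNS server below are placeholders, not values from your environment), a minimal static configuration for one interface in /etc/sysconfig/network-scripts/ifcfg-eth0 might look like:
    DEVICE=eth0
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.10.15
    PREFIX=24
    GATEWAY=192.168.10.1
    DNS1=192.168.10.2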
  4. Reboot the appliance
    shell> init 6
  5. Copy the current Tellaro distribution from an existing PROD appliance into the /usr/local/software directory.
  6. Change directory to /usr/local/software
    shell> cd /usr/local/software
  7. Unarchive StrongKey Tellaro distribution
    shell> tar zxvf SAKA-4.X.X-dist.tgz
  8. Change directory to /usr/local/software/saka.
    shell> cd /usr/local/software/saka
  9. Using a text editor (gedit or vi), edit the install-saka.sh script to customize Server names, passwords, database size, etc.

    NOTE: Add the FQDNs of all servers in the PROD cluster, in the same order as defined in the servers table, plus the FQDN of the new server at the end with an additional server ID. This new server ID is the next value in sequence in the servers table. Assuming there are four PROD nodes defined as SERVER1, SERVER2, SERVER3, and SERVER4, a new entry needs to be added as SERVER5=<hostname>.

    Also, to match the module configurations of the PROD appliances, update the module flags as necessary.
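
    For illustration only (the hostnames below are placeholders, not values from your environment; the SERVERn variable pattern follows the NOTE above), the server entries for a four-node PROD cluster gaining a fifth node might look like:
    SERVER1=saka01.example.com
    SERVER2=saka02.example.com
    SERVER3=saka03.example.com
    SERVER4=saka04.example.com
    SERVER5=saka05.example.com    # the new DRB node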
  10. Run the install-saka.sh script:
    shell> ./install-saka.sh
  11. Log out of the StrongKey Tellaro.
  12. Log in to the StrongKey Tellaro appliance as the 'strongauth' user.
  13. Open two shell windows.
  14. In Window 1, copy the database dumps created in step# 1 (MariaDB as well as OpenLDAP) onto the new appliance
    shell> scp <domain-name>:/usr/local/strongauth/dbdumps/strongkeylite-newserver.db /usr/local/strongauth/dbdumps
    shell> scp <domain-name>:/usr/local/strongauth/dbdumps/conf-<DATE>.ldif /usr/local/strongauth/dbdumps
    shell> scp <domain-name>:/usr/local/strongauth/dbdumps/databackup-<DATE>.ldif /usr/local/strongauth/dbdumps
  15. Copy the keystore files necessary for SKFS functionality from the same PROD node from which the database backup was copied:
    shell> scp -r <domain-name>:/usr/local/strongauth/skfs/keystores/* /usr/local/strongauth/skfs/keystores
    shell> scp -r <domain-name>:/usr/local/strongauth/skce/keystores/* /usr/local/strongauth/skce/keystores
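    As a suggested check (beyond the documented steps, and assuming the destination directories shown above), list the copied files and compare names and sizes against the PROD node:
    shell> ls -l /usr/local/strongauth/skfs/keystores /usr/local/strongauth/skce/keystores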
  16. In Window 1, log into the MariaDB database 'strongkeylite' as the 'skles' user
    shell> mysql -u skles -p strongkeylite
  17. Source the database dump to bring the new server up to date with the others in the cluster
    mysql> source /usr/local/strongauth/dbdumps/strongkeylite-newserver.db
    When the dump has finished sourcing, log out of mysql.
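    As a suggested sanity check (beyond the documented steps), the imported server entries can be confirmed from the mysql prompt:
    mysql> select * from servers\G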
  18. A new entry must be added to the server_domains table for each Domain ID (DID) that is present in the cluster. For instance, if two domains exist in the cluster, there must be a new record in the server_domains table for SID=5 DID=1 and another for SID=5 DID=2.

    Add an entry in the server_domains table for each domain
    mysql> insert into server_domains values (SID, DID, 'STARTING_PSEUDONUMBER','Active',null,null);
    SID must be the numeric server ID of the new server being added to the cluster.

    DID must be the value of a domain that already exists in the cluster. You can see which domains currently exist with the mysql command
    mysql> select * from domains\G
    STARTING_PSEUDONUMBER is the first token to be used by the new server. This value can be any number that is the same length as the appliance's configured token length (16 digits by default). The value can be reused across multiple domains. A value of '5000000000000001' is the suggested format for SID 5.

    When adding a new server with SID = 5 to a cluster with DIDs 1 and 2, the commands would be
    mysql> insert into server_domains values (5,1, '5000000000000001','Active',null,null);
    mysql> insert into server_domains values (5,2, '5000000000000001','Active',null,null);
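
    A suggested verification (beyond the documented steps) is to review the table after the inserts:
    mysql> select * from server_domains\G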
  19. If any custom configurations have been added to the existing appliances in the configuration properties files, these should be duplicated on the new server.
    /usr/local/strongauth/appliance/etc/appliance-configuration.properties
    /usr/local/strongauth/crypto/etc/crypto-configuration.properties
    /usr/local/strongauth/skcc/etc/skcc-configuration.properties
    /usr/local/strongauth/skce/etc/skce-configuration.properties
    /usr/local/strongauth/skfs/etc/skfs-configuration.properties
    /usr/local/strongauth/strongkeylite/etc/strongkeylite-configuration.properties
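
    One way to spot such differences (a suggestion, assuming SSH access from the new node to the PROD node, where <domain-name> is the PROD node's hostname as in the earlier scp steps) is to diff each local file against the PROD copy, for example:
    shell> ssh <domain-name> cat /usr/local/strongauth/skfs/etc/skfs-configuration.properties | diff /usr/local/strongauth/skfs/etc/skfs-configuration.properties -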
  20. In Window 1, restart the Payara application server
    If using payara6, use the following command:
    shell> sudo systemctl restart payara
    
    If using payara5, use the following command:
    shell> sudo service glassfishd restart
    
  21. In Window 2, go to the /usr/local/strongauth/<payara-version>/glassfish/domains/domain1/logs directory
    shell> aslg
    Or
    shell> cd /usr/local/strongauth/<payara-version>/glassfish/domains/domain1/logs
  22. In Window 2, run the tail -f command on the server.log file
    shell> tail -f server.log
  23. In Window 1, change directory to /usr/local/strongauth/bin
    shell> cd ~/bin
  24. In Window 1, execute the Secondary-SAKA-Setup-Wizard.sh
    shell> ./Secondary-SAKA-Setup-Wizard.sh
  25. Follow the wizard steps to completion, ensuring there are no errors in Window 1 or Window 2. If there are any errors, determine the cause, log out of the session, log back in as root, and execute the cleanup.sh script to clean out the installation. Fix the cause of the error and restart the installation process from Step N-1-9.

    NOTE: The step after submitting all Key Custodians will be to create a MASK file. Please store this mask file on a USB drive.

  26. In Window 1, restart the Payara application server
    shell> sudo systemctl restart payara
    OR
    shell> sudo service glassfishd restart
    
  27. In Window 1, execute the KC-SetPINTool.sh
    shell> KC-SetPINTool.sh
  28. Using the Key Custodian flash drives, set the PINs for the required minimum number of Key Custodians to activate the cryptographic hardware module on the appliance, ensuring there are no errors in Window 1 or Window 2.