
    Microsoft HPC Pack 2012 R2 and HPC Pack 2012

    1 Microsoft HPC Pack: Getting Started

    1.1 Release Notes for Microsoft HPC Pack 2012

    These release notes address late-breaking issues and information about Microsoft® HPC Pack 2012. In this topic:

    • Download and install Microsoft HPC Pack 2012
    • Install the Microsoft HPC Pack 2012 web components
    • Install the HPC soft card key storage provider
    • Uninstall HPC Pack 2012
    • Known issues

    1.1.1 Known issues

    The following issues are known to affect this release of HPC Pack 2012:

    • Adding a head node configured for high availability may cause loss of security accounts from the HpcUsers and Administrators groups
    • iSCSI deployment is not currently enabled
    • Setup requires .NET Framework 3.5
    • Cluster management tools are not supported on Windows XP or Windows Server 2003
    • English (United States) locale must be set for remote SQL Server database instance
    • Bare metal deployments with a large number of nodes may fail
    • The Windows Azure VM role is retired
    • Windows Azure Connect is not supported in Microsoft HPC Pack 2012
    • Windows Azure HPC Scheduler Web Portal is not available immediately after deployment
    • Heat map and metrics may stop updating
    • Cluster metrics collection may affect performance-sensitive applications
    • Task output shows most recent 4000 characters of output
    • Capture Image operation may cause HPC Cluster Manager to hang
    • Capture Image operation may incorrectly report errors
    • Activation and submission filters must be accessible to all head nodes
    • Custom HPC Pack 2008 R2 diagnostic tests must be updated to work with HPC Pack 2012
    • Apostrophe cannot be used in a node template name
    • Remote connection credentials to cluster nodes may not be stored
    • Node preparation task may run repeatedly
    • SOA diagnostics can fail after installation of HPC Pack 2012 using nondefault folders

    Adding a head node configured for high availability may cause loss of security accounts from the HpcUsers and Administrators groups

    Adding a head node to an HPC cluster in which the head node is configured for high availability can cause the loss of security accounts in the HPC cluster user and administrator groups that are configured on the cluster. If accounts are removed and precautions were not taken before disconnecting from an HPC cluster management tool session (HPC Cluster Manager or Windows HPC PowerShell) on a high availability head node, a cluster administrator may be unable to reconnect to the cluster by using the HPC cluster management tools.

    To work around or avoid this problem, do the following in your high availability configuration:

    1. As a best practice, create and maintain domain user groups for the HPC cluster users and administrators, and use standard processes in your organization to manage the user groups.

    2. Use the HPC cluster management tools to add these domain user groups to the HPC cluster users and HPC administrators groups (HpcUsers and the local Administrators group) on each head node.

    3. When you add a head node to the HPC cluster that is configured for high availability, before running Setup for HPC Pack 2012 on the failover cluster node, use the local user and group management tools on the computer to add these domain groups to the HpcUsers and the local Administrators groups (see the example commands after this list). You may need to create the HpcUsers group on a new head node. Failure to do so will result in the removal of the domain groups from the HPC cluster configuration.

    4. If you cannot connect as an HPC administrator to a high availability head node with HPC Cluster Manager or Windows HPC PowerShell, log on locally to the computer. Then, use the local user and group management tools to add the appropriate domain groups to the HpcUsers and the local Administrators groups. After you do this, you can connect to the head node by using the HPC cluster management tools.

    5. As an additional best practice, only one HPC cluster administrator should work on the cluster at a time when adding or removing head nodes.
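    For steps 3 and 4, the domain groups can be added from an elevated command prompt on the head node. A minimal sketch, assuming hypothetical domain group names (CONTOSO\HPCClusterUsers and CONTOSO\HPCClusterAdmins are placeholders):

    rem Create the HpcUsers group if it does not yet exist on a new head node
    net localgroup HpcUsers /add

    rem Add the domain groups (placeholder names) to the local groups
    net localgroup HpcUsers CONTOSO\HPCClusterUsers /add
    net localgroup Administrators CONTOSO\HPCClusterAdmins /add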

    iSCSI deployment is not currently enabled

    Because of a known issue in this HPC Pack 2012 release, the deployment of iSCSI boot nodes on your HPC cluster is not supported. The iSCSI deployment features in the HPC Pack 2012 cluster management tools (such as iSCSI Deployment, under Configuration, in HPC Cluster Manager) are not currently enabled. iSCSI deployment features are enabled in clusters that are created with HPC Pack 2008 R2.

Setup requires .NET Framework 3.5

    SQL Server 2012 Express requires .NET Framework 3.5 to install successfully. If you do not have a database already set up for use with HPC Pack 2012, or if your head node does not have Internet access during setup, install .NET Framework 3.5 manually before you attempt to install HPC Pack 2012. You can install .NET Framework 3.5 by using the Add Roles and Features Wizard GUI or a command-line tool. For more information, see Install .NET Framework 3.5 and other features on-demand.
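    For example, if the Windows Server installation media is mounted as drive D: (an assumption; adjust the source path for your environment), the following command installs .NET Framework 3.5 from local media:

    Windows PowerShell

    # Install .NET Framework 3.5 from local installation media (source path is an assumption)
    Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs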

    Cluster management tools are not supported on Windows XP or Windows Server 2003

    You cannot run Setup for HPC Pack 2012 to install the client utilities on a computer that is running the Windows XP or the Windows Server 2003 operating system. The HPC Pack 2012 management tools are not supported on those operating systems.

    You can use job submission APIs to submit jobs to an HPC Pack 2012 cluster from a computer that is running the Windows XP or the Windows Server 2003 operating system. To enable this, install the HPC Pack 2012 Client Redistributable (HpcClient_x64.msi or HpcClient_x86.msi, depending on the operating system) on the computer. The installation files are available from the Microsoft Download Center, and they are installed in the Reminst file share on an HPC Pack 2012 head node.
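    As an illustration only, a minimal job submission through the .NET scheduler API might look like the following sketch. The head node name is a placeholder, and the snippet assumes that the HPC Pack 2012 Client Redistributable has installed the Microsoft.Hpc.Scheduler assembly:

    Windows PowerShell

    # Sketch: submit a one-task job through the HPC scheduler API (head node name is a placeholder)
    [Reflection.Assembly]::LoadWithPartialName("Microsoft.Hpc.Scheduler") | Out-Null
    $scheduler = New-Object Microsoft.Hpc.Scheduler.Scheduler
    $scheduler.Connect("HEADNODE01")            # connect to the cluster head node
    $job = $scheduler.CreateJob()               # create an empty job
    $task = $job.CreateTask()
    $task.CommandLine = "echo Hello from the cluster"
    $job.AddTask($task)
    $scheduler.SubmitJob($job, $null, $null)    # null credentials use cached credentials or prompt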

    English (United States) locale must be set for remote SQL Server database instance

    When preparing the databases for an HPC Pack 2012 head node on a remote server that is running Microsoft® SQL Server, the account for the HPC cluster database instance must be configured to use English (United States) as the system locale. If a different locale is configured (such as an English locale other than the United States), the deployment of the head node can fail.

    To configure the English (United States) locale for the SQL Server login for the HPC cluster database instance, use the SQL Server management tools. The SQL Server login should be a domain account that you will use for the installation of HPC Pack 2012 on the head node.
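    If the SQL Server PowerShell tools (Invoke-Sqlcmd) are available, one way to set the login's default language is shown in the following sketch; the instance and login names are placeholders, and us_english corresponds to English (United States):

    Windows PowerShell

    # Sketch: set the default language of the HPC setup login (placeholder names)
    Invoke-Sqlcmd -ServerInstance "SQLSERVER01\HPC" -Query "ALTER LOGIN [CONTOSO\HpcInstall] WITH DEFAULT_LANGUAGE = us_english;"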

    Bare metal deployments with a large number of nodes may fail

    Under certain conditions when deploying compute nodes from bare metal, the deployment of some nodes can fail. This is more likely to occur if you are deploying a large number of nodes at one time. You might see an error message similar to “The data is invalid,” or “The file or directory is corrupted and unreadable.”

    To work around this problem, redeploy the affected nodes, or try deploying a smaller number of nodes at one time.

The Windows Azure VM role is retired

    The VM Role feature (beta) in Windows Azure is being retired on May 15, 2013. Also now deprecated are the settings in Microsoft HPC Pack 2008 R2 and Microsoft HPC Pack 2012 to deploy a custom VHD to VM role nodes from a Windows HPC cluster. After the retirement date, VM role deployments from an HPC cluster will fail or be inaccessible. To add Windows Azure nodes to an HPC cluster, use the Windows Azure worker role.

    Windows Azure Connect is not supported in Microsoft HPC Pack 2012

    Windows Azure Connect features are not supported on Windows Azure node deployments with HPC Pack 2012. The Windows Azure Connect page has been removed from the Create Node Template Wizard and Node Template Editor. Windows Azure Connect settings will be removed from Windows Azure node templates that are imported from HPC Pack 2008 R2.

    HPC Pack 2012 supports Windows Azure Virtual Network in deployments where Windows Azure Virtual Network is available.

    Windows Azure HPC Scheduler Web Portal is not available immediately after deployment

    After the initial deployment of the Windows Azure HPC Scheduler is completed, if you immediately attempt to reach the Web Portal, you might see a “Permission denied” error message. The Web Portal can take up to 10 minutes to start functioning after the deployment is completed.

    Heat map and metrics may stop updating

    If a network using IPsec has Connection Security Rules enabled in Windows Firewall, communication between a head node and a compute node may be blocked after the HPC Monitoring Server Service is restarted on the head node. Because of this, the heat map in HPC Cluster Manager and other cluster metrics may stop updating.

    To work around this problem, restart the HPC Monitoring Client Service on each affected compute node. To restart the service on all nodes, you can run the following clusrun commands:

    clusrun net stop HpcMonitoringClient

    clusrun net start HpcMonitoringClient

    Cluster metrics collection may affect performance-sensitive applications

    Collection of cluster metrics, in particular the counters for HpcNetwork, can negatively affect job throughput for performance-sensitive applications, such as compute-intensive MPI applications. To avoid this problem, disable the HpcNetwork usage counter, if it is in use. To do this, first export the current metric definition list to an XML file, in case you want to add it back later, by running the following HPC PowerShell cmdlet:

    Windows PowerShell

    Export-HpcMetric -name HpcNetwork -path hpcnetwork.xml

    Then, remove the HpcNetwork metric, as follows:

    Windows PowerShell

    Remove-HpcMetric -name HpcNetwork
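    If you later decide to restore the metric, and the Import-HpcMetric cmdlet is available in your version of HPC PowerShell (an assumption to verify), you can re-import the exported definition:

    Windows PowerShell

    # Re-import the previously exported metric definition (cmdlet availability is an assumption)
    Import-HpcMetric -Path hpcnetwork.xml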

    Task output shows most recent 4000 characters of output

    HPC Pack 2012 caches and shows the most recent 4000 characters of output per task, not the first 4000 characters as in previous versions of HPC Pack. This change can break existing scripts that monitor task output. The new behavior in HPC Pack 2012 makes it easier to see the current status or exit information of tasks in your HPC jobs.
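    If a script needs the complete output of a task rather than the cached tail, one option is to redirect the task's standard output to a file. A minimal HPC PowerShell sketch, in which the command and share path are placeholders:

    Windows PowerShell

    # Sketch: capture full task output in a file instead of relying on the 4000-character cache
    $job = New-HpcJob -Name "OutputDemo"
    Add-HpcTask -Job $job -CommandLine "mytool.exe" -Stdout "\\headnode\outputs\task1.txt"
    Submit-HpcJob -Job $job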

    Capture Image operation may cause HPC Cluster Manager to hang

    The Capture Image operation in HPC Cluster Manager may continue running without completing, and HPC Cluster Manager may appear to hang, if there is not sufficient disk space on the head node to store the disk image. The operations log may show errors or warnings that are related to the Robocopy command and the .wim file.

    To work around this problem, use Task Manager to exit HPC Cluster Manager. Ensure that there is sufficient hard disk space on both the head node and the computer on which HPC Cluster Manager is running. If the head node is configured for high availability, check that there is sufficient space in the shared disk in the failover cluster. Then, try the operation again.

Capture Image operation may incorrectly report errors

    On a cluster with a head node configured for high availability in the context of a failover cluster, the Capture Image operation may complete successfully but report one or more errors. For example, you might see a message similar to the following:

    “[Error] Could not find a part of the path ''.

    [Error] The operation failed and will not be retried.

    [Warning] The operation failed due to errors during execution.”

    To determine if the image was captured successfully, review the list of images in the image store.

    Activation and submission filters must be accessible to all head nodes

    In an HPC cluster configured for high availability of the head node in the context of a failover cluster, any activation or submission filter program files (.exe files and scripts) must be accessible to all of the head nodes in the cluster. It is recommended to install the filters to a folder in the shared storage of the failover cluster. Alternatively, you can create a local folder on each head node to store the filter files. If you use duplicated local folders, ensure that you synchronize them whenever you make changes to the filters.

    Custom HPC Pack 2008 R2 diagnostic tests must be updated to work with HPC Pack 2012

    Because of a change in the supported .NET Framework version, any custom diagnostic test binaries that you created for HPC Pack 2008 R2 and that reference HPC assemblies need to be updated for HPC Pack 2012.

    To update an HPC Pack 2008 R2 diagnostic test to work with HPC Pack 2012, recompile the diagnostic test source code by using the HPC Pack 2012 API and .NET Framework 4. Then, retest the diagnostic test to ensure full functionality.

    Alternatively, a workaround for each HPC Pack 2008 R2 diagnostic test executable file is to create a new application configuration file, or update the existing configuration file, to enable HPC Pack 2012 to recognize and run the existing test. Each configuration file has a name of the form TestName.exe.config, where TestName.exe is the name of the diagnostic test executable file, and is located in the same folder as the test executable. For example, the following sample configuration file enables an HPC Pack 2008 R2 diagnostic test to support .NET Framework 4 and to bind the HPC Pack 2012 dependent assembly microsoft.hpc.diagnostics.helpers.

    XML

    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0"/>
        <requiredRuntime version="v4.0" safemode="true"/>
      </startup>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="microsoft.hpc.diagnostics.helpers"
                              publicKeyToken="31bf3856ad364e35"
                              culture="neutral" />
            <bindingRedirect oldVersion="3.0.0.0"
                             newVersion="4.0.0.0"/>
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

    Apostrophe cannot be used in a node template name

    The apostrophe (single quote) character cannot be used in the name of a node template. Using the apostrophe character in a template name can cause HPC Cluster Manager to crash or may lead to unexpected results when attempting to filter nodes by template name.
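    To check an existing cluster for node template names that contain an apostrophe, a quick HPC PowerShell sketch:

    Windows PowerShell

    # Sketch: list node templates whose names contain an apostrophe
    Get-HpcNodeTemplate | Where-Object { $_.Name -like "*'*" } | Select-Object Name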

    Remote connection credentials to cluster nodes may not be stored

    If you previously stored domain credentials for remote desktop connections to cluster nodes from HPC Cluster Manager, and those credentials expired or changed recently, you will be prompted to change credentials the next time you start a remote connection to cluster nodes by using the Remote Desktop action in HPC Cluster Manager. However, if you enter the new credentials and select the Remember my credentials option, the new credentials are not stored by HPC Cluster Manager. If you later attempt to make a remote desktop connection to another node by using HPC Cluster Manager, you will be prompted to enter credentials again. You will also be prompted to enter credentials again to make a later connection to the original node.

    To update the remote connection credentials to the cluster nodes so that they are stored by HPC Cluster Manager, click Change password in the HPC Cluster Manager dialog box that appears when you use the Remote Desktop action. Then, in the Windows Security dialog box that appears, enter your remote connection credentials, and select Remember my credentials.

Node preparation task may run repeatedly

    A job may repeatedly attempt to run a node preparation task and continue to run, instead of failing, if certain types of errors occur. If you notice that a job continues to create a node preparation task that fails immediately, you should manually cancel the job. Verify that you have specified a valid working directory for the job. If the job runs on graphics processing units (GPUs), or is another job type that runs in a console session created on the compute nodes, also ensure that no user is currently logged in to the console session of the nodes on which you are attempting to run the job.

    SOA diagnostics can fail after installation of HPC Pack 2012 using nondefault folders

    The HPC SOA Diagnostics Monitoring Service (HpcSoaDiagMon) can fail to start if the data directory configured on the head node during installation of HPC Pack 2012 is set to a folder that is not under the installation folder for HPC Pack. In this case, the path for the CCP_DATA environment variable is not under the path for the CCP_HOME environment variable. This problem will prevent the monitoring of SOA jobs by HPC Pack 2012.

    If the data directory is not located under the default installation folder for HPC Pack, you can create a file system link to resolve the problem. From an elevated command prompt, type the following command:

    mklink /j "%CCP_HOME%\Data" "%CCP_DATA%"
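    To confirm whether this condition applies on your head node, a quick check of the two environment variables (a sketch):

    Windows PowerShell

    # Sketch: warn if CCP_DATA is not located under CCP_HOME
    $ccpHome = $env:CCP_HOME
    $ccpData = $env:CCP_DATA
    if (-not $ccpData.StartsWith($ccpHome, [System.StringComparison]::OrdinalIgnoreCase)) {
        Write-Warning "CCP_DATA ($ccpData) is not under CCP_HOME ($ccpHome); create the junction described above."
    }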

    1.2 Getting Started Guide for Microsoft HPC Pack 2012 R2 and HPC Pack 2012

    This guide provides basic conceptual information and general procedures for installing a high performance computing (HPC) cluster by using Microsoft® HPC Pack 2012 R2 or Microsoft® HPC Pack 2012. These versions of HPC Pack allow you to create and manage HPC clusters consisting of dedicated on-premises compute nodes, part-time servers, workstation computers, and on-demand compute resources that are deployed in Windows Azure.

    The steps in this guide will help you to deploy a new head node and on-premises compute nodes. Deployment procedures for HPC Pack 2012 R2 and HPC Pack 2012 are generally identical, except for some differences in system requirements for the different cluster roles. For information about migration from an HPC Pack 2008 R2 cluster or advanced deployment scenarios, see the content in the Microsoft TechNet Library.

    Checklist: Deploy an HPC cluster

    The following checklist describes the overall process of designing and deploying an on-premises HPC cluster. Each task in the checklist is linked to the section in this document that describes the steps to perform the task.

    Step 1: Prepare for Your Deployment
        Before you start deploying your HPC cluster, review the list of prerequisites and initial considerations.

    Step 2: Deploy the Head Node
        Deploy the head node by installing Windows Server and HPC Pack.

    Step 3: Configure the Head Node
        Configure the head node by following the steps in the Deployment To-do List in HPC Cluster Manager.

    Step 4: Validate Your Environment Before Deploying Nodes
        If you will be deploying on-premises nodes from bare metal, run a set of diagnostic tests to identify common problems that can affect node deployment.

    Step 5: Add Nodes to the Cluster
        Add on-premises nodes to the cluster by deploying them from bare metal, by importing an XML file, or by manually configuring them.

    Step 6: Run Diagnostic Tests on the Cluster
        Run diagnostic tests to verify that the deployment of the cluster was successful.

    Step 7: Run a Test Job on the Cluster
        Run a basic job on the cluster to verify that the cluster is operational.
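    For Step 7, a basic test job can be submitted from HPC PowerShell on the head node; a minimal sketch (the command is only an example):

    Windows PowerShell

    # Sketch: submit a one-task test job and check its state
    $job = New-HpcJob -Name "TestJob"
    Add-HpcTask -Job $job -CommandLine "hostname"
    Submit-HpcJob -Job $job
    Get-HpcJob -Id $job.Id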

    Appendices

    The following appendices provide additional information that might be needed to complete certain steps in this guide.

    • Appendix 1: HPC Cluster Networking
    • Appendix 2: Creating a Node XML File
    • Appendix 3: Node Template Tasks and Properties
    • Appendix 4: Job Template Properties
    • Appendix 5: Scripted Power Control Tools
    • Appendix 6: Using HPC PowerShell

    1.3 Deploying a Windows HPC Cluster with Remote Databases Step-by-Step Guide

    Starting in Microsoft® HPC Pack 2008 R2, you can install the Windows high performance computing (HPC) cluster databases on one or more remote servers that are running Microsoft® SQL Server®, instead of installing them on the head node of your cluster. The advantage of this type of installation is that it saves resources on the head node, helping ensure that it can efficiently manage the cluster. This configuration can also enable you to configure and run jobs on clusters that could exceed the capabilities of SQL Server Express, which is installed by default on the head node to host the cluster databases.

    This guide provides step-by-step procedures for installing HPC Pack on the head node of your cluster, with the HPC databases stored on remote servers. This guide also provides information about how to prepare the remote database servers for the installation of the head node.

    Scenario overview

    HPC Pack uses the following Microsoft SQL Server databases to support cluster operations. The default names of the databases are in parentheses.

    • Cluster management database (HPCManagement)
    • Job scheduling database (HPCScheduler)
    • Reporting database (HPCReporting)
    • Diagnostics database (HPCDiagnostics)
    • Monitoring database (HPCMonitoring)

    During the installation process, you can select the location where you want to install each of the HPC databases: on the head node or on a remote server. You select the installation location per database, so you can install some of the databases on the head node and the remaining databases on one or more remote servers.

    To install the HPC databases on a remote server, that server must be running a version of SQL Server that is supported by your version of HPC Pack. Also, you need to create the databases and configure them for remote access before you start the deployment process for your HPC cluster. You can install as many databases as you want on each remote server. Also, you can install as many databases as you want in each instance of SQL Server (there can be more than one instance on a given server).
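    For example, pre-creating one of the databases on a remote instance might look like the following sketch; the server and instance names are placeholders, and the full preparation steps (including sizing and permissions) are described in Step 1 of this guide:

    Windows PowerShell

    # Sketch: create one HPC database on a remote SQL Server instance before head node setup
    Invoke-Sqlcmd -ServerInstance "SQLSERVER01\HPC" -Query "CREATE DATABASE HPCManagement;"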

    There are no restrictions on the names that you can give the HPC databases, except the restrictions that are imposed by SQL Server. The same rule applies to the names of the instances where these databases are created. During the installation of HPC Pack on your head node, you are asked to specify the names of the databases that you created and the name of the instance where you created them.

    Sections in this guide

    This guide contains the following sections:

    • Requirements to Deploy a Windows HPC Cluster with Remote Databases
    • Steps to Deploy an HPC Cluster with Remote Databases
        o Step 1: Prepare the Remote Database Servers
        o Step 2: Install HPC Pack with Remote Databases

    1.4 Reinstall Microsoft HPC Pack Preserving the Data in the HPC Databases

    1.5 Enable HPC PowerShell with the Client Redistributable Package

2 Microsoft HPC Pack: Node Deployment

    2.1 Adding Nodes to a Cluster

    2.1.1 Deploy Nodes from Bare Metal
