The Bright Computing booth was a bustling hub of happy activity at SC15. Solving cluster deployment issues “headache free” has brought Bright Computing to the forefront of the under-appreciated world of HPC and cloud provisioning, and has generated a global customer (and fan) base. In short, Bright Computing provides comprehensive software for provisioning and managing HPC clusters, Hadoop clusters, and OpenStack private clouds, either inside the data center or in the cloud. Requesting a live demo is easy: just click here.
According to Earl Joseph, VP for High Performance Computing at IDC, the HPC community appears to have transitioned from an experimental phase to an applied phase of cloud computing. The expectation is that we will now see accelerated adoption of cloud computing by the HPC, scientific, and academic communities.
Cloud computing is attractive because it gives instant access to hardware resources, but users still have to provide the software. Bright Computing addresses that software pain point by letting users provision new cloud instances without sacrificing the ability to provision local hardware. Fundamentally, this appears to be the reason for the excitement shown by HPC experts visiting the Bright Computing booth. With an approach that is agnostic to the underlying hardware and infrastructure, Bright Computing appears to be taking over the cluster deployment marketplace for customers who need to stand up HPC and cloud solutions.
A large number of Customer, Partner, and Analyst testimonials demonstrate how widely the Bright Computing platform has been adopted. (Click here for the marketing brochure.) Just follow the previous hyperlinks to get a sense of how many organizations already use the Bright Computing software. The following videos should give a sense of the toolset and applications:
- Infrastructure
- Monitoring
- HPC applications
- Apache Spark
- Apache Hadoop
Interested in giving Bright a try?
Contact Bright Computing to get a trial software key; it will work for a cluster containing two nodes (for example, a head node plus one compute node, or two head nodes in an HA configuration), so your teams can try things out in VMs or KVM, in environments separate from production (a KVM sketch follows the list below):
- Sign up to access the customer portal: https://www.brightcomputing.com/Customer-Create-Account
- Download the ISO: http://www.brightcomputing.com/Download (See note below)
- Download the cmgui to control the cluster: http://support.brightcomputing.com/cmgui/
- Review the cluster manager manuals: http://support.brightcomputing.com/manuals
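For teams that want to run the two-node trial inside KVM, the following is a minimal sketch of creating one head-node guest and one compute-node guest with virt-install. The VM names, memory/disk sizes, ISO path, and libvirt network are illustrative assumptions for a trial environment, not Bright Computing requirements.

```python
#!/usr/bin/env python3
"""Sketch: create two KVM guests (head node + compute node) for a two-node trial.
Names, sizes, ISO path, and the libvirt network are assumptions, not Bright defaults."""
import subprocess

ISO = "/var/lib/libvirt/images/bright-trial.iso"  # path to the downloaded ISO (assumption)
NET = "default"                                   # libvirt network used for provisioning (assumption)

def virt_install(*args):
    """Run virt-install with the given arguments, echoing the command first."""
    cmd = ["virt-install"] + list(args)
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Head node: boots the installer ISO.
virt_install(
    "--name", "bright-head",
    "--memory", "8192", "--vcpus", "4",
    "--disk", "size=60",
    "--cdrom", ISO,
    "--network", f"network={NET}",
    "--os-variant", "centos7.0",
    "--graphics", "vnc",
)

# Compute node: empty disk that PXE-boots from the head node once provisioning is up.
virt_install(
    "--name", "bright-node001",
    "--memory", "4096", "--vcpus", "2",
    "--disk", "size=40",
    "--pxe",
    "--network", f"network={NET}",
    "--os-variant", "centos7.0",
    "--graphics", "vnc",
)
```

Both guests sit on the same libvirt network so the compute node can be provisioned from the head node, mirroring the head-node-plus-compute layout described above.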
Here is a video showing how to set up a basic cluster:
Linux cluster planning checklist
| Item | Value |
| --- | --- |
| FQDN | |
| Location Address | |
| Room | |
| Rack | |
| U Location in Rack | |
| Bright Version | |
| Base OS | |
| NETWORK | |
| Internal network (provisioning/management, eth0): head node IP address on the internal network | |
| Netmask | |
| Gateway | |
| Domain | |
| External network (login, eth1): head node IP address on the external network | |
| Netmask | |
| Gateway | |
| Domain name | |
| Name Server 1 | |
| Name Server 2 | |
| IPMI/iLO/DRAC/BMC | |
| IP Address | |
| Netmask | |
| Gateway | |
| Domain name | |
| High-speed interconnect: OFED (Mellanox/QLogic/base) | |
| IP Address | |
| Netmask | |
| Gateway | |
| Domain name | |
| Time servers | |
| Graphical Head | Yes |
| Graphical Nodes | No |
| Ethernet Switches | |
| Are the switches managed or unmanaged? | |
| Accessible? | |
| In the case of a blade chassis, are the switches internal to the chassis? | |
| Have the switches been configured (IP address and SNMP enabled)? | |
| IB Switches | |
| Does the IB switch have an embedded subnet manager? | |
| Nodes | |
| List of MAC addresses: is a list of MAC addresses for the nodes available? This information can optionally be used to identify nodes. | |
| List of switch port mappings: is a list of switch-port-to-node mappings available? This information can optionally be used to identify nodes. | |
| Name of the default node category | Default |
| Default software image | default-image |
| Additional/special kernel modules | |
| Disk setup: RAID setup, partitioning, and filesystem layout (if non-standard, specify details) | |
| Mount points: any additional mount points that should be mounted across the nodes, given as (NFS server, path on server, local mount point), e.g. (nfsserver, /path/on/server, /local/mount/point); see the sketch after this checklist | |
| Workload manager | |
| Workload Management System | PBS Pro |
| Queues: specify the workload management queues that should be created | |
| Third-party applications: list the third-party applications that should be installed | |
| High Availability | |
| Disk setup: disk setup that should be used for the secondary head node | |
| Heartbeat network: the dedicated heartbeat network between the two head nodes for the heartbeat ping; normally a direct connection between the two head nodes | |
| Shared storage: type of shared storage | |
| GPUs | |
| Install CUDA in the software images? | |
| Xeon Phi | |
| Install Xeon Phi packages in the software images? | |
| Configure Xeon Phi devices? | |
| Rack View | |
| Populate the rack information? | |
| Custom Requirements | |
| Fill in any additional custom requirements that are needed. | |
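The mount-point rows in the checklist ask for entries in the form (NFS server, path on server, local mount point). As a small illustration of how such entries translate into mounts on the nodes, here is a Python sketch that turns those tuples into /etc/fstab-style NFS lines; the hosts, paths, and mount options are made-up examples, not Bright Computing defaults or a Bright tool.

```python
#!/usr/bin/env python3
"""Sketch: turn (NFS server, path on server, local mount point) tuples from the
planning checklist into /etc/fstab-style NFS entries. Hosts, paths, and mount
options are illustrative assumptions."""

# Example checklist entries (hypothetical values).
mounts = [
    ("nfsserver", "/path/on/server", "/local/mount/point"),
    ("storage01", "/export/home",    "/home"),
]

OPTIONS = "defaults,_netdev"  # assumed mount options; adjust per site policy

def fstab_line(server, remote_path, local_mount, options=OPTIONS):
    """Return a single fstab entry for an NFS mount."""
    return f"{server}:{remote_path}  {local_mount}  nfs  {options}  0 0"

for entry in mounts:
    print(fstab_line(*entry))
```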
Acceptance Criteria
| Test Case | Description |
| --- | --- |
| Test Case 1 | (example) Deploy 6 nodes in AWS/EC2 and run a Monte Carlo simulation (see the sketch below) |
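The example test case mentions running a Monte Carlo simulation on the deployed nodes. As a purely illustrative workload (not part of Bright's software), here is a minimal Python sketch that estimates pi by Monte Carlo sampling across worker processes; on a real cluster it would be wrapped in a job script and submitted through the configured workload manager (for example, PBS Pro) across the six EC2 nodes.

```python
#!/usr/bin/env python3
"""Illustrative acceptance-test workload: estimate pi with Monte Carlo sampling.
Generic example only; worker count and sample size are assumptions."""
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points in the unit square that fall inside the quarter circle."""
    rng = random.Random()
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers = 4                      # e.g. one worker per core on a node (assumption)
    samples_per_worker = 1_000_000
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [samples_per_worker] * workers))
    total = workers * samples_per_worker
    print(f"pi ~= {4.0 * hits / total:.5f} from {total} samples")
```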