Thursday, April 04, 2013
Just a heads up that Joyent has released Smart Data Center 6.5.6, as noted here:
http://wiki.joyent.com/wiki/display/sdc/Upgrading+SDC+6.5.3+or+6.5.4+to+SDC+6.5.6
My first upgrade attempt failed at the very end, when selecting the correct platform. Joyent Support noted that they have seen this before, and that an "sdc-restore" from the pre-upgrade backup followed by a reattempt should work. In my case, that did the trick. I did the quick restore (no -F here), and I am rebooting the head node as I write this.
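For reference, the rough shape of the recovery, as a sketch only (the backup file path is a placeholder for whatever sdc-backup produced before the upgrade, and sdc-restore's exact arguments may differ in your SDC version):
sdc-restore /path/to/pre-upgrade-backup.tgz    # quick restore; -F would request the full restore instead
Then reboot the head node and rerun the upgrade.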
Tuesday, April 02, 2013
Save time in backing up Joyent Smart Data Center
One does not frequently back up the head node USB key with Joyent's Smart Data Center; generally you do it only prior to upgrades. As a result, it's commonly a "do it twice" process, thanks to a quirky bug with regard to terminal emulation that I never seem to remember until it's too late:
[root@headnode (CIS:0) ~]# sdc-backup -U c2t0d0p0
Disk c2t0d0p0 will relabled, reformatted and all data will be lost [y/n] y
labeling disk
creating PCFS file system
mounting target disk c2t0d0
mounting source disk c0t0d0
copying files
setting up grub
Sorry, I don't know anything about your "xterm-256color" terminal.
Error: installing grub boot blocks
Yep, my OS X default of xterm-256color is not known, and the many-minutes-long backup process dies at the very end. To address this, override the terminal setting in the root user's .bash_profile file with the line:
export TERM=vt100
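Alternatively, if you would rather not touch the profile, the same override can be scoped to the single invocation:
TERM=vt100 sdc-backup -U c2t0d0p0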
Simple, but it's not every day you can increase performance by 50%, so to speak.
Labels:
Joyent,
SDC,
smart data center,
SmartOS
Thursday, February 28, 2013
The fine art of SmartOS image creation in SDC
I suspect many organizations that run Joyent's Smart Data Center have it operated by Joyent staff themselves. Creating SmartOS image templates is something any private cloud operator will need to do, and Joyent has basic information on how to do so. However, certain steps require tools and code generally only available to, or known by, Joyent staff. I wanted to write up how to go about it here, both as notes for myself and for others.
First, one can follow the instructions at http://wiki.joyent.com/wiki/display/jpc2/Creating+Your+Own+SmartMachine+Image
I found that creating the snapshot locally on the compute node, as mentioned near the bottom of that page, was insufficient, but your mileage may vary. I used the UI to snapshot the template VM instead. In my case, I used the Smart64 image as the base OS image and then customized it as described, adding Tomcat, services, and configuration.
One step I found problematic is the metadata creation. The commands for doing this exist only in Smart64 or similar instances, not on the underlying compute nodes or in the SDC head node zones. I created a new Smart64 instance for template manipulation, pointed it at my cloudapi host using the "sdc-setup" command, and, once configured, used sdc-updatemachinemetadata from /opt/local/bin. The specific command I used for my metadata example was:
sdc-updatemachinemetadata -m image_name="tomcat" -m image_version="1.0.1" -m image_description="tomcat appserver" 99199472-bae6-4c89-a7ef-d6d4cf736feb
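The zone has to be shut down before the image is captured (see below). From the same Smart64 client, something along these lines should do it — a sketch only, reusing the example UUID and assuming the node-smartdc tools sdc-listmachines and sdc-stopmachine are installed alongside sdc-updatemachinemetadata:
sdc-listmachines                                        # find the zone UUID and check its state
sdc-stopmachine 99199472-bae6-4c89-a7ef-d6d4cf736feb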
The last argument to sdc-updatemachinemetadata is the zone UUID, taken after the zone has been shut down. The final step is to run sdc-create-image, a script that is only available internally at Joyent; contact your Joyent representative to get it. Once you have it, publishing the image is a trivial command, run from the head node:
./sdc-create-image 99199472-bae6-4c89-a7ef-d6d4cf73757
With that, your new template is created, and your users can now pick your new application image to instantiate.
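As a hedged aside, a quick sanity check: the published image should show up when listing datasets from the same node-smartdc client used earlier (sdc-listdatasets is part of that toolset):
sdc-listdatasets        # the new tomcat 1.0.1 image should appear among the datasets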
Labels:
Joyent,
SDC,
smart data center,
SmartOS
Saturday, January 12, 2013
Joyent Anniversary: What's Next
It was just over a year ago that I wrote about some of my initial thoughts regarding Joyent's Smart Data Center product and SmartOS in general. A lot has changed since then, and the product has both matured and found more acceptance.
I never really got into the "Why" of using the product. We make decisions on product use entirely based on technical and strategic directions. Our use of virtual machine technology is not typical of what one would see among Amazon AWS or Joyent Public Cloud customers. Rather, our requirements have consisted of large, non-transient VMs used for simulations, CAD (of the chip variety) layout, and large data manipulation. We provide hardware for VMs that each require 16-64 GB of RAM, dozens of gigabytes of local storage, and easily 12-16 cores apiece. Up to now, the solutions chosen in academia have been the likes of VMware ESX, Xen, etc. The problem is that our performance, stability, and data integrity requirements tend to push us toward the more expensive end of those products' matrices. Perhaps a novel scale-out, on-premises cloud VM solution would work out better. That second option can be found either in Joyent, which is ready to go now, or in OpenStack and friends, which will some day reach similar levels of maturity and flexibility.
Not everything is rosy in Joyent land, though. Its primary focus up to now has been to match Amazon AWS on most customer requirements, including a multitude of transient VMs. This has left the 6.5 revision of the product wanting in areas such as conversion/migration of existing third-party VMs into the cloud instance, migration of VMs and their settings/state between compute nodes, and even migration of head nodes between hardware. Over time, and with help from staff at Joyent, I've worked my way around these edge cases using dataset and package templating, low-level use of ZFS snapshots and send/recv, and sometimes just plain old reinstallation of components.
With the announcement of Joyent 7, most of the above issues are being addressed, and we hope both to utilize the newer version and to push for more change down the line, making this our go-to tool for virtualization of our entire environment. Where we most hope to help is in the network space, as we have an obvious preference for the adoption of OpenFlow (SDN) to ease multi-datacenter deployments.
2013 looks like a good year for our VM directions, and I expect others out there will see similar benefit if they just give this technology a try. I didn't give Joyent much thought until the KVM port. A year later, I'm glad we did.
Wednesday, January 04, 2012
Getting hands dirty with Joyent SDC: first lesson learned
Finally getting into Joyent's private cloud technology. I'll talk more about what all of this is useful for some other time; this post is more of a note to self / note of warning. I repurposed some beefy ESX nodes for testing out Smart Data Center, but those didn't have disks worth anything. Instead, I took some disks that had been evacuated out of ZFS pools in favor of larger drives. They would still be fine here...
The problem arises when setting up compute nodes, and later in any reinstallation of the head node. Compute node configuration would quietly fail without any errors, and reinstalls of the head node dug a deeper hole. It turns out that Joyent is overly cautious in creating data pools for the head and compute nodes, and won't attempt to create the necessary local disk pools if the disks were previously associated with active ZFS pools. Silent errors are never good.
The workaround is to bring up the head node in recovery mode, which notably does not import any pools. Next, attach the drives, import the pools (if they are fully intact) or create a new pool on each individual disk, and then "zpool destroy" them. Rinse, repeat (a rough sketch is below). I finally got my head node installed in a sane way, and now it's on to some remaining problems with compute nodes and testing out KVM and vCPU support. More on that later.
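For reference, a rough sketch of that per-disk cleanup, with example pool and device names (adjust for your hardware):
zpool import                                       # list any stale pools the old disks still advertise
zpool import -f oldpool && zpool destroy oldpool   # if a stale pool imports cleanly, destroy it
zpool create -f scratch c1t2d0                     # otherwise, stamp a throwaway pool over the old labels...
zpool destroy scratch                              # ...and destroy it, leaving the disk clean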