Deployment Composition In Cloudify


Dewayne Filppie May 23, 2015
5 minutes read

In Cloudify, “deployments” define an isolated namespace that contains a collection of nodes and relationships. These nodes and relationships are typically visualized as a complete “stack” of technologies that together deliver a computing platform — for example, the classic load balancer, web server, app server, and database stack. In some cases, however, it is desirable to have these islands *not* represent a complete stack, but rather a portion of one (a tier, for example).

In this model, a database deployment (for example) can be instantiated independently from other tiers, and those tiers can come and go without affecting the database. Cloudify has no built-in capability to express such a model, but its flexible plugin architecture makes it rather simple to add.

Quick Walkthrough
The DeploymentProxy node lets you set up a startup dependency between deployments. A DeploymentProxy node is inserted in the dependent blueprint and is configured to refer to the outputs of the independent blueprint — or more precisely, the independent deployment. The source for the plugin is on GitHub and includes an example that demonstrates a NodeJS blueprint depending on a MongoDB blueprint. The details of the dependency are somewhat contrived, but good enough for a demonstration.

The DeploymentProxy uses the blueprint “outputs” feature as the integration point.  So in this example, the first step is to establish meaningful outputs in the MongoDB blueprint.
[gist id=845b0c7ee7b8f33e490e]
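The gist above is not reproduced in this capture. As a hedged sketch, a Cloudify blueprint outputs section exposing the MongoDB endpoint might look like the following; the node names (mongod_host, mongod) are illustrative placeholders, not taken from the original gist:

```yaml
# Hypothetical sketch of the MongoDB blueprint's outputs section.
# Node names below are assumptions for illustration only.
outputs:
  ip:
    description: MongoDB server IP address
    value: { get_attribute: [ mongod_host, ip ] }
  port:
    description: MongoDB server port
    value: { get_property: [ mongod, port ] }
```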
Once the outputs are established, all work moves to the dependent blueprint (NodeJS), which contains the DeploymentProxy node. To begin with, the NodeJS blueprint includes the plugin definition and the TOSCA node definition for DeploymentProxy.

[gist id=d778f35d9a88f87b4fdb]
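The gist is not rendered here. Based on the parameters described later in this post, the imported definitions might look roughly like this sketch; the plugin name, source, and task module path are assumptions:

```yaml
# Hypothetical sketch of the plugin and node-type declarations.
# The plugin source and task module path are assumed, not from the gist.
plugins:
  proxy:
    executor: central_deployment_agent
    source: cloudify-proxy-plugin

node_types:
  cloudify.nodes.DeploymentProxy:
    derived_from: cloudify.nodes.Root
    properties:
      deployment_id:
        description: the deployment to depend on
      wait_for:
        description: either "exists" or "expr"
        default: exists
      test:
        description: an output name, or a Python boolean expression
      timeout:
        description: seconds to wait before raising RecoverableError
        default: 30
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: proxy.plugin.tasks.wait
```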
Next, the DeploymentProxy node itself is added. The DeploymentProxy node represents the independent blueprint (MongoDB) in the NodeJS blueprint. Its only function, during the built-in install workflow, is to wait (if necessary) for the referenced blueprint/deployment and provide information about it.

[gist id=bf213e60b9bd3e6b9be2]
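As a hedged sketch, such a node template might look like the following, using the expr-style condition quoted later in this post; the deployment id and timeout value are illustrative:

```yaml
# Hypothetical node template: wait until the target deployment's
# "port" output is greater than zero, or time out.
node_templates:
  mongo_proxy:
    type: cloudify.nodes.DeploymentProxy
    properties:
      deployment_id: mongodb    # assumed deployment id
      wait_for: expr
      test: outputs['port'] > 0
      timeout: 60
```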
This particular node demonstrates a Python boolean expression being used to determine when the proxy will return successfully during the install workflow. In other words, the NodeJS install will wait for that condition to be true, or time out. The expression is supplied the “outputs” dict of the target deployment. The other kind of condition is “exists”, which returns successfully if the named property exists in the outputs.

The last step is to connect, via a relationship, the NodeCellar application to the MongoDB database represented by the proxy.  Beyond simply waiting for MongoDB to be available, the example also demonstrates accessing the outputs in order to connect to the database.  The DeploymentProxy node returns the outputs from its target blueprint in its runtime properties.

[gist id=d7badd6942b693d1fd19]
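The gist is not shown in this capture. A hypothetical sketch of the relationship wiring (the application node type name is an assumption; the relationship name comes from the walkthrough):

```yaml
# Hypothetical sketch: the NodeCellar application connects to the proxy
# node rather than to a MongoDB node in the same blueprint.
node_templates:
  nodecellar:
    type: nodecellar.nodes.NodecellarApplicationModule   # assumed type name
    relationships:
      - type: node_connected_to_mongo
        target: mongo_proxy
```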
In the “node_connected_to_mongo” relationship, slightly modified from the original version in the standard NodeCellar blueprint, the postconfigure lifecycle method gets the MongoDB host and port.  In the original version, it gets the values from the MongoDB nodes that are in the current blueprint.  In this version, since MongoDB has a completely separate blueprint, it gets the host and port from the proxy node.  This is shown in the NodeJS blueprint in the relationship implementation at /scripts/mongo/set-mongo-url.sh.

[gist id=25f49a7f5d2e798e7d56]

A Little Deeper
The plugin has but a single implementation function, “wait”, that waits for conditions on the outputs of the target deployment. When the “start” method is invoked, “wait” receives the following parameters:

  • deployment_id : the deployment to depend on.
  • wait_for: either “exists” or “expr”.
    • If “exists”, it waits for an output whose name matches the value of the “test” property.
    • If “expr”, it interprets the “test” property as a Python boolean expression, in which the name “outputs” is bound to the outputs dict (e.g. expr: outputs['port'] > 0).
  • test : either the name of an output, or a boolean expression (see wait_for)
  • timeout : number of seconds to wait. When timeout expires, a “RecoverableError” is thrown. Default=30.

The “wait” function calls the Cloudify REST API to get the outputs from the configured deployment id. It either checks whether a specific output property exists, or evaluates a supplied Python boolean expression to check more complicated conditions. If an expression is configured, a dict named “outputs”, containing the target deployment’s outputs, is in scope when the expression is evaluated. The function attempts to satisfy the condition for “timeout” seconds, at which point a “RecoverableError” is raised. This causes the Cloudify install workflow to enter its own retry loop, which continues until the install workflow finally gives up or the expression evaluates as true. When the DeploymentProxy completes, it copies the outputs of the target deployment into its own runtime properties. This gives other nodes in the containing blueprint easy access to the outputs, where, for example, a server IP address and port might be located.
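The condition-checking logic described above can be sketched in plain Python. This is a simplified stand-in, not the plugin itself: the real task fetches outputs over the Cloudify REST API and raises a RecoverableError on timeout; here a plain dict stands in for the REST call.

```python
# Simplified sketch of the plugin's condition check. The real "wait"
# task retrieves outputs via the Cloudify REST client and retries
# until the condition holds or the timeout expires.

def check_condition(outputs, wait_for, test):
    """Evaluate the configured readiness condition against a deployment's outputs."""
    if wait_for == "exists":
        # Succeed as soon as an output named by "test" is present.
        return test in outputs
    elif wait_for == "expr":
        # Interpret "test" as a Python boolean expression in which the
        # name "outputs" is bound to the outputs dict.
        return bool(eval(test, {"outputs": outputs}))
    raise ValueError("wait_for must be 'exists' or 'expr'")

# Example: the condition used in the walkthrough.
outputs = {"port": 27017, "ip": "10.0.0.5"}
print(check_condition(outputs, "expr", "outputs['port'] > 0"))  # True
print(check_condition(outputs, "exists", "ip"))                 # True
print(check_condition({}, "exists", "port"))                    # False
```

In the plugin, a failed check simply raises a RecoverableError so that Cloudify's install workflow drives the retry loop, rather than the plugin sleeping and polling itself.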

Conclusion and Future Directions

The cloudify.nodes.DeploymentProxy node provides a basic dependency mechanism between deployments. It masquerades as a local deployment node while accessing another deployment, waiting for a ready state described by that deployment’s outputs. This is just the tip of the iceberg for the concept, since the communication is limited to outputs and is uni-directional. There is no reason, in principle, that this plugin couldn’t be extended to actually trigger the install of the target deployment, access and expose runtime properties, and update outputs and other properties continuously. The source is available on GitHub, along with the usage example from the walkthrough in this post.
