Yesterday, we held a webinar about some of the enhancements to Fuel in Mirantis OpenStack 7.0. Fuel has come a long way in this iteration, and we detailed both the operational and interface improvements — including some of the possibilities of the plugin framework — which you can hear about here. We also entertained questions from the audience, and we thought you might find the answers helpful, so here they are.
Q: How does Fuel discover new nodes? Does this happen automatically or do I have to register them as new nodes first?
A: Fuel discovers nodes that boot to its PXE network environment. (In other words, nodes that PXE boot on the same network occupied by Fuel.) When a node powers on and is set to PXE boot, Fuel bootstraps that node and discovers many of the physical attributes such as the CPUs, RAM, HDD, NICs, MAC address, and so on. Fuel then adds the node to the pool of available systems.
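Once a node has been bootstrapped, it shows up in the pool and can be inspected or assigned from the Fuel master. Here is a rough sketch of the 7.0-era CLI; the flags are from memory and worth verifying with `fuel node --help`, and the node and environment IDs are hypothetical:

```shell
# List all nodes Fuel knows about; freshly discovered nodes
# appear with a "discover" status and an auto-generated name.
fuel node list

# Assign discovered node 4 to environment 1 as a compute node.
# (Flag names are an assumption based on the 7.0-era CLI.)
fuel node set --node 4 --role compute --env 1
```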
Q: I assume there is a way for a plugin to define what other roles it can share a node with?
A: Yes. The plugin framework allows full control over how a plugin is deployed. Fuel already limits which of the default roles can coexist on a node (Controller and Compute cannot, for instance), and plugins give developers the same capability. In the case of our LMA plugins, for example, the roles can currently coexist only with each other. We can stack the Kibana and Grafana roles on a single dedicated node, but we can’t stack them with the Storage-Cinder role.
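To illustrate, a plugin declares its roles, and the roles they conflict with, in its metadata. The snippet below is only a sketch of a 7.0-era `node_roles.yaml` entry for a hypothetical dashboard role; check the exact keys against the Fuel Plugin SDK:

```yaml
# node_roles.yaml (illustrative sketch, not from a real plugin)
my-dashboard:
  name: "My Dashboard"
  description: "A UI role that can share a node with other dashboard roles"
  conflicts:          # roles this role can NOT be combined with on one node
    - controller
    - cinder
```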
Q: Does Fuel support different deployment modes?
A: Earlier versions of Fuel asked you to choose between HA and non-HA environments; if you chose non-HA, there was no going back later. Current versions of Fuel instead default to an HA-capable deployment mode, though the cluster only truly has high availability once it has at least three controllers. If you add new controllers to an existing cluster, you can achieve HA later. This way, you get the benefits of an HA-ready deployment without having to install a minimum of three controllers up front if you don’t need them.
Q: Each node role has its own network template file. What if a node has multiple roles?
A: For the current release, network templates are defined by a single role. Nodes with multiple roles will take the first role template that matches alphabetically. If the node needs configuration from multiple templates, it will require deployment customization. We are looking at adding this type of flexibility to future releases.
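For context, a network template maps each role to a list of sub-templates, which is why a multi-role node currently ends up with only one role’s set. Below is a heavily trimmed, hedged sketch of the 7.0-era `network_template_<env_id>.yaml` layout; see the Fuel reference documentation for the full schema:

```yaml
# network_template_1.yaml (trimmed sketch)
adv_net_template:
  default:
    nic_mapping:
      default:
        if1: eth0            # logical interface -> physical NIC
    templates_for_node_role:
      controller:            # sub-templates applied to controller nodes
        - public
        - common
      compute:               # a controller+compute node would currently
        - common             # pick up only one of these lists
```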
Q: What is the difference between the name that is displayed on the node and the hostname in the extended settings screen?
A: While the current implementation can be somewhat confusing, the name shown on the node in the UI is the “node name”. This value can be changed at any point, before or after deployment, and is used only to visually identify nodes. The name in the extended settings screen can only be changed prior to deployment and is propagated down to the node as its actual hostname, which is visible both in the UI and on the node itself.
Q: Are these labels locally significant, given that they don’t correspond to the hostname or any other server identifier?
A: Yes, labels are locally significant. They are also visible in the CLI, but they are not tied to the hostname. Separately, the ability to define a node’s hostname in the UI is a new feature as well.
Q: Are there any plans to support the GlusterFS plugin with MOS 7.0?
A: The GlusterFS plugin was initially developed as a proof of concept to showcase the plugin capabilities in 6.0, and we do not have plans at this time to create a GlusterFS plugin compatible with 7.0. (That said, the whole point of the plugin architecture is that you can build one yourself if you need it.)
Q: When using ESXi compute (and multiple clusters) do you recommend putting nova-compute processes on the controllers or dedicated nodes?
A: The honest answer is “it depends on your workload,” but we are exploring the ability to deploy individual services, including RabbitMQ (RMQ) and Keystone, onto dedicated nodes based on the workloads that will run in your environment.
Q: Are the HealthChecks automatically constructed based on the contents of the deployment script(s)?
A: No. HealthChecks are a predefined, selectable subset of relevant Tempest, Rally, and OSTF tests exposed in the Fuel UI; they are not generated from the deployment scripts.
Q: Can multiple environments share the same network?
A: Network Templates are environment-specific, but once defined, they can be copied to multiple environments.
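From the Fuel master, the copy can be done with the network-template CLI commands. Here is a sketch of the 7.0-era workflow; the syntax is from memory, and the file-renaming step is an assumption worth verifying with `fuel network-template --help`:

```shell
# Download the template from environment 1; Fuel writes it
# to ./network_template_1.yaml.
fuel --env 1 network-template --download

# Rename it for the target environment and upload it there.
# (The per-environment file-naming convention is an assumption.)
cp network_template_1.yaml network_template_2.yaml
fuel --env 2 network-template --upload
```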
Q: Why are the Ceph and Telemetry options greyed out in this example?
A: Fuel grays out selections that are incompatible with the currently selected roles. Availability also depends on which components were chosen when the environment was created. In our demo, for example, we had not selected Ceph as the storage backend, and we had not enabled the optional Ceilometer service.
Q: Are there plans to support installing plugins (or updating existing ones) in already-deployed environments?
A: Yes, this is one of our high value targets for improving our plugin framework in upcoming releases. (We do not have a release date planned for these features yet.)
Q: I thought there were some more enhancements coming in Mirantis OpenStack 8.0 with respect to plugin management?
A: Yes, there are more enhancements coming with respect to plugin management, including the ability to provide plugin options in the wizard and a further improved Settings tab, where plugins can define where they are presented. This is all being worked on right now, and we hope to see these changes in MOS 8.0.
Q: Will the webinars on December 3 and December 10 be the same as this one? Or do they have different content?
A: The webinars in December will be different. One will be about Kubernetes/Murano integration, while the other will be more VMware-centric. Feel free to check out our landing page for more info: http://content.mirantis.com/Mirantis-OpenStack-7-Webinar-Series-Landing-Page.html