53 Things to look for in OpenStack Train

It’s been a while since we did one of these articles marking a new OpenStack release, but with last week’s announcement of the updated Certified OpenStack Administrator exam, we thought it was high time to bring back the tradition.

OpenStack Train was released last week, and includes more than 25,500 code changes by 1,125 developers from 150 different companies. Most components have received many bug fixes and performance improvements, and some are finishing their transition to Python 3 by announcing that this is the last release to support Python 2.7.

Here’s the list, excerpted from the OpenStack Train release notes, organized by project.

Cinder – Block Storage service

Cinder provides on-demand, self-service access to Block Storage resources.

  1. A number of drivers have added support for new features such as multi-attach and consistency groups.
  2. Cinder now has upgrade checks for possible compatibility issues when upgrading to Train.

Designate – DNS service

Designate provides scalable, on-demand, self-service access to authoritative DNS services in a technology-agnostic manner.

  1. Designate now provides full IPv6 support for both the API control plane and the DNS data plane.

Glance – Image service

Glance provides services and associated libraries to store, browse, share, distribute and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions.

  1. The Glance multi-store feature is now considered to be stable.
  2. Cache prefetching is now performed as a periodic task by glance-api, removing the need to schedule it via cron.

Horizon – Dashboard

Horizon provides an extensible unified web-based user interface for all OpenStack services.

  1. Volume multi-attach is now supported.
  2. Horizon now supports the optional automatic generation of a Kubernetes configuration file.

Ironic – Bare Metal service

Ironic consists of an OpenStack service and associated libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner.

  1. Ironic now provides basic support for building software RAID.
  2. Ironic includes a new tool for building ramdisk images: ironic-python-agent-builder.

Keystone – Identity service

Keystone facilitates API client authentication, service discovery, distributed multi-tenant authorization, and auditing.

  1. All keystone APIs now use the default reader, member, and admin roles in their default policies. This means that it is now possible to create a user with finer-grained access to keystone APIs than was previously possible with the default policies. For example, it is possible to create an “auditor” user that can only access keystone’s GET APIs. Please be aware that depending on the default and overridden policies of other OpenStack services, such a user may still be able to access creative or destructive APIs for other services.
  2. All keystone APIs now support system scope as a policy target, where applicable. This means that it is now possible to set [oslo_policy]/enforce_scope to true in keystone.conf, which, with the default policies, will allow keystone to distinguish between project-specific requests and requests that operate on an entire deployment. This makes it safe to grant admin access to a specific keystone project without giving admin access to all of keystone’s APIs, but please be aware that depending on the default and overridden policies of other OpenStack services, a project admin may still have admin-level privileges outside of the project scope for other services.
  3. Keystone domains can now be created with a user-provided ID, which allows for all IDs for users created within such a domain to be predictable. This makes scaling cloud deployments across multiple sites easier as domain and user IDs no longer need to be explicitly synced.
  4. Application credentials now support access rules, a user-provided list of OpenStack API requests for which an application credential is permitted to be used. This level of access control is supplemental to traditional role-based access control managed through policy rules.
  5. Keystone roles, projects, and domains may now be made immutable, so that certain important resources like the default roles or service projects cannot be accidentally modified or deleted. This is managed through resource options on roles, projects, and domains. The keystone-manage bootstrap command now allows the deployer to opt into creating the default roles as immutable at deployment time, which will become the default behavior in the future. Roles that existed prior to running keystone-manage bootstrap can be made immutable via resource update.
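
Item 4 above can be made concrete with a short sketch. This is an illustrative example, not the official client: the request shape follows Keystone's application credential API, but the credential name and path are hypothetical.

```python
import json

# Hypothetical sketch: an application credential restricted by access rules.
# Each rule names a service, an HTTP method, and a URL path; the credential
# can then only be used for matching API requests.
app_cred_request = {
    "application_credential": {
        "name": "monitoring-read-only",   # illustrative name
        "access_rules": [
            {
                "service": "compute",     # only the Compute API...
                "method": "GET",          # ...and only read-only requests...
                "path": "/v2.1/servers",  # ...against the servers path
            }
        ],
    }
}

body = json.dumps(app_cred_request, indent=2)
print(body)
```

A credential like this supplements role-based policy: even if the owning user holds broader roles, requests outside the listed rules are rejected.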

Manila – Shared File Systems service

Manila provides a set of services for management of shared file systems in a multitenant cloud environment, similar to the way OpenStack provides for block-based storage management through the Cinder project.

  1. Manila share networks can now be created with multiple subnets, which may be in different availability zones.
  2. The NetApp back end has added support for replication when DHSS=True.
  3. The GlusterFS back end has added support for extending and shrinking shares when using the directory layout.
  4. The Infortrend driver with support for NFS and CIFS shares has been added.
  5. The CephFS backend now supports IPv6 exports and access lists.
  6. The Inspur Instorage driver with support for NFS and CIFS shares has been added.
  7. Support for modifying share type name, description and/or public access fields has been added.

Neutron – Networking service

Neutron implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.

  1. OVN can now send ICMP “Fragmentation Needed” packets, allowing VMs on tenant networks using jumbo frames to access the external network without any extra routing configuration.
  2. When different subnet pools participate in the same address scope, the constraints disallowing subnets to be allocated from different pools on the same network have been relaxed. As long as subnet pools participate in the same address scope, subnets can now be created from different subnet pools when multiple subnets are created on a network. When address scopes are not used, subnets with the same ip_version on the same network must still be allocated from the same subnet pool.
  3. A new API, extraroute-atomic, has been implemented for Neutron routers. This extension enables users to add or delete individual entries in a router’s routing table, instead of having to update the entire table as a whole.
  4. Support for L3 conntrack helpers has been added. Users can now configure conntrack helper target rules to be set for a router. This is accomplished by associating a conntrack_helper sub-resource to a router.
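
To illustrate item 3, here is a minimal sketch of what an atomic route addition looks like on the wire, assuming the extraroute-atomic extension's add_extraroutes action; the router UUID is hypothetical and a real deployment would send this over an authenticated session.

```python
import json

ROUTER_ID = "9f3b1c2e-example"  # hypothetical router UUID


def add_extraroutes_request(router_id, routes):
    """Build the URL path and JSON body for adding routes atomically,
    rather than PUTting the router's entire routes list."""
    path = f"/v2.0/routers/{router_id}/add_extraroutes"
    body = {"router": {"routes": routes}}
    return path, json.dumps(body)


path, body = add_extraroutes_request(
    ROUTER_ID,
    [{"destination": "10.0.1.0/24", "nexthop": "192.168.0.10"}],
)
print(path)
print(body)
```

The payload carries only the entries being added, so two clients updating different routes no longer race to overwrite each other's full table.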

Nova – Compute service

Nova implements services and associated libraries to provide massively scalable, on-demand, self-service access to compute resources, including bare metal, virtual machines, and containers.

  1. Nova now includes live migration support for servers with a NUMA topology, pinned CPUs and/or huge pages, and/or SR-IOV ports attached when using the libvirt compute driver.
  2. Support for cold migrating and resizing servers with bandwidth-aware Quality of Service ports attached has been added.
  3. This release includes improved multi-cell resilience with the ability to count quota usage using the Placement service and API database.
  4. A new framework supporting hardware-based encryption of guest memory to protect users against attackers or rogue administrators snooping on their workloads when using the libvirt compute driver has been added. Currently this framework only has basic support for AMD SEV (Secure Encrypted Virtualization).
  5. Nova now has improved operational tooling for tasks such as archiving the database and healing instance resource allocations in Placement.
  6. Coordination with the baremetal service during external node power cycles has been improved.
  7. Support for VPMEM (Virtual Persistent Memory) has been added when using the libvirt compute driver. This provides data persistence across power cycles at a lower cost and with much larger capacities than DRAM, especially benefitting HPC and in-memory databases such as Redis, RocksDB, Oracle, SAP HANA, and Aerospike.
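
As a rough sketch of how the memory-encryption framework in item 4 is driven, SEV is requested via a flavor extra spec or image property; the dicts below stand in for those resources, and the helper mirrors the question the scheduler effectively asks. The property names are the ones documented for the libvirt driver, but treat this as an assumption-laden illustration rather than Nova's actual code.

```python
# Hypothetical stand-ins for a flavor's extra specs and an image's properties.
flavor_extra_specs = {"hw:mem_encryption": "True"}
image_properties = {"hw_mem_encryption": "True"}


def wants_memory_encryption(extra_specs, image_props):
    """Return True if either the flavor or the image asks for an
    SEV-encrypted guest (a simplified mirror of Nova's check)."""
    return (
        extra_specs.get("hw:mem_encryption", "False").lower() == "true"
        or image_props.get("hw_mem_encryption", "False").lower() == "true"
    )


print(wants_memory_encryption(flavor_extra_specs, {}))
```

Instances matching this check can only land on hosts whose hardware advertises SEV support, which is why Train ships it as basic support for AMD SEV specifically.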

Octavia – Load-balancer service

Octavia provides scalable, on-demand, self-service access to load-balancer services in a technology-agnostic manner.

  1. You can now apply an Access Control List (ACL) to the load balancer listener. Each port can have a list of allowed source addresses.
  2. Octavia now supports Amphora log offloading. Operators can define syslog targets for the Amphora administrative logs and for the tenant load balancer connection logs.
  3. Amphorae can now be booted using Cinder volumes.
  4. The Amphora images have been optimized to reduce image size and memory consumption.
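
The listener ACL from item 1 can be sketched as a request payload. This is an illustrative example: the listener name and load balancer UUID are hypothetical, and the allowed_cidrs field is the Train API's per-listener list of permitted source networks.

```python
import ipaddress
import json

# Source networks that may connect to this listener.
allowed_sources = ["192.0.2.0/24", "198.51.100.0/24"]

# Validate the CIDRs locally before handing them to the API.
for cidr in allowed_sources:
    ipaddress.ip_network(cidr)

listener_request = {
    "listener": {
        "name": "web-listener",                # illustrative name
        "protocol": "HTTPS",
        "protocol_port": 443,
        "loadbalancer_id": "lb-uuid-example",  # hypothetical UUID
        "allowed_cidrs": allowed_sources,
    }
}
print(json.dumps(listener_request, indent=2))
```

Connections from addresses outside the listed CIDRs are dropped at the listener, so the filtering happens before traffic ever reaches the back-end members.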

Placement – Placement service

The Placement service tracks cloud resource inventories and usages to help other services effectively manage and allocate their resources.

  1. Placement now includes support for forbidden aggregates, which allows groups of resource providers to only be used for specific purposes, such as reserving a group of compute nodes for licensed workloads.
  2. Support has been added for a suite of features which, combined, enable targeting candidate providers that have complex trees modeling NUMA layouts, multiple devices, and networks where affinity between and grouping among the members of the tree are required. These features will help to enable NFV and other high performance workloads in the cloud.

Swift – Object Storage service

Swift provides software for storing and retrieving lots of data with a simple API. It is built for scale and optimized for durability, availability, and concurrency across the entire data set.

  1. Log formats are now more configurable and include support for anonymization.
  2. Swift-all-in-one Docker images are now built and published to https://hub.docker.com/r/openstackswift/saio.

Tacker – NFV Orchestration service

Tacker implements Network Function Virtualization (NFV) Orchestration services and libraries for end-to-end life-cycle management of Network Services and Virtual Network Functions (VNFs).

  1. Tacker now includes support for force deleting VNF and Network Service instances.
  2. Partial support for VNF packages has been added.

Blazar – Resource reservation service

Blazar’s goal is to provide resource reservations in OpenStack clouds for different resource types, both virtual (instances, volumes, etc) and physical (hosts, storage, etc.).

  1. Blazar now includes support for a global request ID which can be used to track requests across multiple OpenStack services.

Cyborg – Accelerator resources for AI and ML

  1. Cyborg (previously known as Nomad) is an OpenStack project that aims to provide a general-purpose management framework for acceleration resources, that is, various types of accelerators such as GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

Karbor – Data Protection Orchestration Service

Karbor implements services and libraries to provide project aware data-protection orchestration of existing vendor solutions.

  1. Karbor now includes event notifications for plan, checkpoint, restore, scheduled and trigger operations.
  2. Karbor now enables users to back up image-booted servers, including newly added data located on the root disk.

Kolla

Kolla provides production-ready containers and deployment tools for operating OpenStack clouds.

  1. This release introduces images and playbooks for Masakari, which supports instance High Availability, and Qinling, which provides Functions as a Service.

Kuryr

Kuryr provides a bridge between container framework networking and storage models to OpenStack networking and storage abstractions.

  1. Support has been added for tagging all the Neutron and Octavia resources created by Kuryr.

Senlin – Clustering service

Senlin implements clustering services and libraries for the management of groups of homogeneous objects exposed by other OpenStack services.

  1. Senlin now has support for webhook v2: previously, the webhook API introduced microversion 1.10 to allow callers to pass arbitrary data in the body along with the webhook call.

Trove – Database service

Trove provides scalable and reliable Cloud Database as a Service functionality for both relational and non-relational database engines, and continues to improve its fully-featured and extensible open source framework.

  1. The cloud administrator can now define the management resources for the Trove instance, such as keypair, security group, and network. Creating a Trove guest image is also now much easier for the cloud administrator or developer using the trovestack script, and users can expose the Trove instance to the public while limiting which source IP addresses may access the database.

Vitrage – RCA (Root Cause Analysis) service

Vitrage’s purpose is to organize, analyze and visualize OpenStack alarms and events, yield insights regarding the root cause of problems, and deduce their existence before they are directly detected.

  1. This release adds new datasources for Kapacitor and Monasca, a new API for Vitrage status and template versions, and support for database upgrades for Vitrage with the Alembic tool.

Watcher – Infrastructure Optimization service

Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

  1. This release adds a ‘force’ field to Audit; the user can pass --force to enable the new option when launching an audit. In addition, Grafana has been added as a data source that can be used for collecting metrics, and Watcher can now get data from Placement to improve its compute data model.

Zun – Containers service

Zun provides an OpenStack containers service that integrates with various container technologies for managing application containers on OpenStack.

  1. The Zun compute agent now reports local resources to the Placement API, and the Zun scheduler gets allocation candidates from the Placement API and claims container allocations.