
Admit it. VNF Certification is Marketing.

Boris Renski - May 21, 2018
Just because something seems like common sense doesn't mean that it's right. It just takes a while to realize it sometimes. Until now, Mirantis, like the majority of other NFVI vendors, ran a standard marketing program to “certify” VNFs against MCP.
That makes sense, right? You want to make sure that the VNFs you want to use are going to work on the NFV Infrastructure you choose. So we had a program that involved deploying a default configuration of Mirantis Cloud Platform in a lab and running tests to make sure that VMs with various VNF components come up. And like good corporate citizens, we followed with an official press release / blog announcement and a logo on a website.
We did this with NetScaler.
We did this with Palo Alto Networks.
We did this with Harmonic.
We did this with Avi Networks.
We did this with dozens of others.
We weren't unique in running this program. The majority of other vendors in the space currently practice a very similar approach.
It's great marketing.
However, as our customers started on-boarding some of those “certified” VNFs, we realized that the broad and shallow approach, while showing how awesome MCP is in press releases, does little to create real customer value.
Why? Because nobody uses the default configuration. NFVI environments differ customer to customer, and for good reason. The throughput and latency goals, network chaining architectures, and physical network and compute infrastructure implemented by service providers are finely tuned to particular business use cases. Even within a single service provider you'll usually find multiple hardware specifications and reference architectures for hub data centers vs. edge NFVI environments.
All of this variation means that generic certifications simply do not work. VNFs are, after all, applications, and unless a VNF is natively built to be multi-cloud ready (and most are not), there is no way to ensure it will run on the NFVI layer of a particular operator through “certifications.” The entire story is similar to OpenStack Interop efforts, which I shared my opinion on in the past.
More importantly, we're not the only ones saying so; our telco customers, such as AT&T and Vodafone, have alluded to this problem in their call to action to drive standards for VNF on-boarding.
In light of this realization, we are announcing today that we are evolving our generic VNF certification program into a VNF validation approach that tests VNFs against the customer's actual NFV Infrastructure, which lets us take customer-specific business objectives and individual circumstances into account.
The new VNF validation program will be a three-way effort between Mirantis, our telco customers, and VNF vendors. It consists of five phases:
 
  • Discovery and analysis: At the start of the project, we talk with the customer to define the workloads that will run on the system, their requirements, and the PNFs they are going to replace. We'll also identify the various options for VNFs that may be fit for purpose, and engage with appropriate vendors.
  • Design and implementation: Once the customer's specific goals are clear, we can determine a specific service blueprint that matches the VNF to those goals and define the validation and onboarding process within the customer's environment.
  • Validation and certification: Now we can develop CI/CD pipelines that automate all of the testing needed to make sure that each VNF the customer chooses functions properly in their specific environment -- even if the environment changes (see the sketch after this list). The result of this stage is a set of Validation Reports the customer and Mirantis can use to decide which VNFs to deploy to production.
  • Production readiness: The decisions have been made, and the chosen VNFs get added to the repository, where they can be safely added to the production environment.
  • Performance audit and optimization: With a known-good baseline, we can now move on to optimizing performance. Audits and analysis show what changes might lead to increased performance, and those changes are tested using the CI/CD pipelines that would be used to actually deploy to production. If the results are an improvement, those changes are promoted to production, and this becomes the new baseline.
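To make the validation stage a bit more concrete, here is a minimal, hypothetical sketch (in Python) of the kind of check such a pipeline could run: comparing metrics measured for a VNF in the customer's environment against the targets captured in the service blueprint, and emitting a simple per-metric report. The metric names, thresholds, and report format are illustrative assumptions, not part of MCP or any vendor's tooling.

```python
# Hypothetical sketch: compare measured VNF metrics against the per-customer
# targets from the service blueprint and produce a pass/fail validation report.
# Metric names and thresholds below are made-up examples.

from dataclasses import dataclass


@dataclass
class Target:
    metric: str
    threshold: float
    higher_is_better: bool = True  # e.g. throughput up, latency down


def validate(measured: dict[str, float], targets: list[Target]) -> dict:
    """Return a per-metric report for one VNF in one specific environment."""
    report = {}
    for t in targets:
        value = measured.get(t.metric)
        if value is None:
            report[t.metric] = {"status": "MISSING"}
            continue
        ok = value >= t.threshold if t.higher_is_better else value <= t.threshold
        report[t.metric] = {
            "measured": value,
            "target": t.threshold,
            "status": "PASS" if ok else "FAIL",
        }
    return report


if __name__ == "__main__":
    # Example targets for an edge NFVI site; the numbers are illustrative only.
    targets = [
        Target("throughput_gbps", 9.0),
        Target("p99_latency_ms", 2.0, higher_is_better=False),
    ]
    measured = {"throughput_gbps": 9.4, "p99_latency_ms": 1.7}
    for metric, result in validate(measured, targets).items():
        print(metric, result)
```

In a real engagement, the measured values would come from test runs against the VNF deployed in the customer's own NFVI, and the pipeline would re-run the same checks whenever that environment changes, keeping the validation reports current.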
The end result of all this is an architecture that actually adds value in the real-world environment of our customers, rather than a generic "certification" and a press release that sounds good, but is ultimately meaningless.
