

How to manage your OpenStack projects using keystone domains in Havana

Sergey Kashaba - February 03, 2014

In OpenStack, domains let you aggregate projects (called tenants in Grizzly) into completely separate spaces. Domains also enable you to limit access to a particular domain. For example, as you probably remember, without domains in place, a user who was assigned the admin role in one project was an admin for the entire cluster, able to do anything. With domains, you can assign a user the admin role for a single domain, and that user will only have admin privileges within that domain.

In this article, we'll briefly discuss some of the situations in which domains might help you organize your projects. Once you understand that, we'll look at how to actually use domains to make that happen.

Motivation

We first encountered domains about a year ago, when one of our enterprise customers requested a feature very similar to OpenStack domains. This was during the Grizzly cycle, so we looked at the request and said 'YoHoHo, we can use the domains feature,' which had just been introduced.

But OpenStack is ... OpenStack, and the domains feature wasn’t supported by all of the required components. So we patched Keystone to get the required use cases to work, and resolved to try again later.

Six months later Havana was released. Since the domain feature is quite useful for enterprise companies, we decided to build a proof-of-concept using the vanilla Havana release.

We wanted to cover these use cases:

  1. As an IaaS admin I should be able to create domains to aggregate tenants (now called projects).

  2. As an IaaS admin I should be able to add a user as a domain admin.

  3. As a domain admin I should be able to:

    • Create, update, and delete projects inside the domain. Only the domain's projects should be listed in the project list.
    • Apply CRUD operations to user-to-project role assignments.
  4. LDAP should be used as the source for username/password.

  5. Only the REST API is required. No UI, no CLI should be required.

And this time, we did succeed. I can’t say that we didn't encounter any issues, but the project showed that domains in Havana are at least workable.

Ready/Steady/Go

We tested domains using the stable/havana branch of Devstack. The localrc file contains nothing but parameters to provide passwords.

During the investigation, we found that the CLI is only partially ready for domains. It provides full functionality for a regular user, but not all administration functionality. (You can see the related issue and its status here.)

You need to do several things in order to use domain features.

  1. Change the token format to UUID (keystone.conf, [signing] section, token_format = UUID). Glance has issues with PKI tokens (at least with v3 tokens), and fails authentication saying ‘body is too large’ if the default, PKI, is used. (This is not specific to domains.)

  2. Apply the correct policy.json file for keystone. Domain validation is unsupported in the default policy file. We re-used the one that comes with keystone from git, and then modified it.

  3. Reconfigure the Nova, Glance API, and Glance registry paste configuration files to use v3.0 authentication by adding 'auth_version = v3.0' to the [filter:authtoken] section (a sample of these edits appears after this list). In the test environment, we only used Nova and Glance to boot the VM. If you are using more services, make sure to modify the appropriate configuration files. If you don't do this, the Nova and Glance clients will use V2 token validation and fail with the message 'Only default domain is allowed over the V2.0'.

  4. Update the keystone endpoints for the identity service to point to v3 instead of v2.0

  5. Restart the services that you reconfigured.

Note that as per customer requirements, we only used the REST interface, and did not use the CLI/PythonAPI/UI clients.
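For reference, here is one way to script the configuration changes above. The file locations and the crudini tool are assumptions based on a typical devstack layout, not part of the original test; adjust them to your environment.

#1. switch keystone to UUID tokens
crudini --set /etc/keystone/keystone.conf signing token_format UUID

#3. make the services' auth_token middleware validate tokens against v3
crudini --set /etc/nova/api-paste.ini filter:authtoken auth_version v3.0
crudini --set /etc/glance/glance-api-paste.ini filter:authtoken auth_version v3.0
crudini --set /etc/glance/glance-registry-paste.ini filter:authtoken auth_version v3.0

#2 and 4 (the keystone policy.json and the identity endpoints) were changed by hand,
#and the affected services were then restarted (step 5).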

Understanding how to control domains

After spending about a week investigating domains, I would say there are two key points to understanding how domains work:

  1. How a token is created, and in particular, the “token scope”.

  2. How policies are invoked.

In this article, we'll provide all examples using curl commands, for two reasons. First, when this document was first written, the CLI was unable to specify domain-related parameters from the command line. Second, and perhaps more importantly, the CLI hides some details that are essential for understanding how things work.

Token creation

The first thing you need to do is get a token, and that's quite simple. This is the request to get a token:

curl -si -X POST -H "Content-Type: application/json" -d '{"auth": {"scope": {"domain": {"id": "XXXXX"}}, "identity": {"password": {"user": {"domain": {"name": "default"}, "password": "qwerty", "id": "YYYYYY"}}, "methods": ["password"]}}}' http://127.0.0.1:5000/v3/auth/tokens | awk '/X-Subject-Token/ {print $2}' | tr -d '\r'

Here we can see the important part of the JSON that is sent as the body: the scope. The scope can be either a domain or a project. The token scope participates in policy enforcement, so it's important to understand and remember this. If you're interested, you can look at the function keystone/auth/controllers.py:AuthInfo._validate_and_normalize_scope_data to find the actual code used to build the object that is later used to produce the policy credentials.
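If you need a project-scoped token instead, the request looks the same except that the scope names a project (here ZZZZZ is just an illustrative project ID, not something from the original test):

curl -si -X POST -H "Content-Type: application/json" -d '{"auth": {"scope": {"project": {"id": "ZZZZZ"}}, "identity": {"password": {"user": {"domain": {"name": "default"}, "password": "qwerty", "id": "YYYYYY"}}, "methods": ["password"]}}}' http://127.0.0.1:5000/v3/auth/tokens | awk '/X-Subject-Token/ {print $2}' | tr -d '\r'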

Policy invocation

Policy invocation is more complex, and it is central to how domain-based access control works. Fortunately, once you have a solid understanding of it, you should be able to perform most operations in the domains area. Invoking the policies involves three new keywords: credentials, policy, and target. Now let's see how each of these parts is built and used.

Credentials

The credentials object is a dictionary built from the token; it includes the scope of the token that was issued, plus the roles assigned to the user.
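For a domain-scoped token, the credentials dictionary looks roughly like this (a simplified illustration, not a verbatim dump from Keystone):

{
    "user_id": "YYYYYY",
    "domain_id": "XXXXX",
    "roles": ["admin"]
}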

Policy

Policy is just a set of rules combined with and/or logic. The format should become more readable in future releases, but for now we'll use the one currently in use. Below you can find selected pieces from an example policy file to illustrate the point.

"admin_required": [["role:admin"]],

"owner" : [["user_id:%(user_id)s"], ["user_id:%(target.entity.user_id)s"]],

"identity:get_project": [["rule:admin_required",
"domain_id:%(target.project.domain_id)s"]],

"identity:list_projects": [["rule:admin_required", "domain_id:%(domain_id)s"]],

"identity:list_user_projects": [["rule:owner"], ["rule:admin_required",
"domain_id:%(domain_id)s"]],

Note that every rule body is enclosed in brackets ([]).

The smallest piece of a rule (we can call it an "atomic" rule) is a string split by ':' into a left and a right part. It can be:

  1. A reference to another rule. In this case, the left part is the ‘rule’ and the right part is the rule name.
  2. A definition of the validation logic. In this case, the left side defines the key to look up in the credentials object, and the right side is either something to be found in the 'target' or a constant. Think of it in terms of credentials_object[left_side] == right_side % target_object.

Several atomic rules can be combined by using the comma (,). This means that those rules are joined using ‘and’ logic (such as ["rule:admin_required", "domain_id:%(domain_id)s"]). Let’s call it an ‘and rule’.

If there are two sets of rules encapsulated by [], they are joined using 'or' logic. For example, [["rule:owner"], ["rule:admin_required"]] means that the rule passes if either the owner rule passes or the admin_required rule passes.

If a rule references another rule, the referenced rule must be described in the same way.

To summarize, the example "identity:list_user_projects": [["rule:owner"], ["rule:admin_required", "domain_id:%(domain_id)s"]] means 'listing a user's projects is allowed only if the owner rule passes, OR if the admin_required rule passes AND the domain_id from the token scope is equal to the domain_id from the target.' We'll talk about the target next.

Target

Target is also a dictionary, and it's built based on two things:

  • query string provided in the request

  • the objects the request operates on.

For example:

  • If we try to get information about the project, then the target is that project

  • If we try to assign a user to a project with some role, then the target is the role + project + user.

  • If we try to list all projects, the target is empty. (This disappointed me at first because I thought that some operations required by our customers were not available in Havana. Luckily for us, I was wrong.)

  • If we try to list all projects using a filter by some value (for example, by domain_id), then the target includes all of the filters. (This one surprised me, but helped a lot.)

An example query:

curl -sX GET -H "X-Auth-Token:$OS_MYTOKEN" http://127.0.0.1:5000/v3/projects?domain_id=$OS_DOMAIN_ID

For this query the target includes only the domain_id. The rule to be included in the policy.json can be "identity:list_projects": [["rule:admin_required", "domain_id:%(domain_id)s"]]

Having all this information, you can easily manipulate access by modifying the policy.json file.
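For example, a rule along these lines (modeled on the v3 sample policy rather than the stock default, so treat it as an illustration) lets a domain admin create projects only inside their own domain:

"identity:create_project": [["rule:admin_required", "domain_id:%(project.domain_id)s"]],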

Also, you can put debug messages in keystone/policy/backends/rules.py:enforce to log the credentials, rules, and target in order to investigate things more deeply. This technique helped me understand what was going on in some use cases.

Cloud admin

Looking at the existing policy.json, it was unclear to me what the 'cloud_admin' rule meant:

"cloud_admin": "rule:admin_required and domain_id:admin_domain_id",
"identity:get_domain": "rule:cloud_admin",

After some research, I found that this is a quite elegant workaround for defining a 'super administrator' without using a service token. You just create a domain which acts as a 'super domain', and specify its ID as the admin_domain_id value in the rule. For example, you might use 'default' if you use devstack and want the default domain's admin to be a 'super' admin.
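With devstack, where the default domain's ID is literally 'default', the rule would then look something like this (an illustration; substitute the ID of your own admin domain):

"cloud_admin": "rule:admin_required and domain_id:default",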

According to a discussion with Keystone core leaders, the cloud admin concept is a workaround and will be replaced with a service admin concept.

Horizon support

According to the blueprints, Horizon supports multiple domains as well. You only need to add a few changes to either settings.py or local_settings.py (which is what we tested in our local lab):

OPENSTACK_API_VERSIONS = {
"identity": 3
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
#OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

After making these changes, you will get an additional input field on the login screen for entering the domain name. Unfortunately, admin actions (such as managing domains and assignments) are not really possible with Horizon if the V3 policy is applied to keystone (link). However, this at least provides a regular user with the ability to use the UI instead of sending curl commands.

CLI

In mid-January 2014, domain support was added to the CLI, and it now supports this additional set of options:

 --os-domain-name <auth-domain-name>

Domain name of the requested domain-level authorization scope (Env: OS_DOMAIN_NAME)

 --os-domain-id <auth-domain-id>

Domain ID of the requested domain-level authorization scope (Env: OS_DOMAIN_ID)

 --os-user-domain-name <auth-user-domain-name>

Domain name of the user (Env: OS_USER_DOMAIN_NAME)

 --os-user-domain-id <auth-user-domain-id>

Domain ID of the user (Env: OS_USER_DOMAIN_ID)

 --os-project-domain-name <auth-project-domain-name>

Domain name of the project which is the requested project-level authorization scope (Env: OS_PROJECT_DOMAIN_NAME)

 --os-project-domain-id <auth-project-domain-id>

Domain ID of the project which is the requested project-level authorization scope (Env: OS_PROJECT_DOMAIN_ID)

Using these parameters, we can specify the token scope, the project domain, and the user domain.
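For example, a domain-scoped CLI call might look like this (a sketch only, reusing the user and domain names from the test below; as noted earlier, our test itself used the REST API rather than the CLI):

openstack --os-identity-api-version 3 --os-auth-url http://127.0.0.1:5000/v3 \
  --os-username user0 --os-user-domain-name default --os-password qwerty \
  --os-domain-name dom0 project list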

Putting it all together

After applying all of the changes above, you are able to run all of the required functions. Below is the list of curl commands. Remember that for this test, I only needed to provide the REST queries to be invoked.

Prerequisites:

I used the jq tool for convenient JSON response parsing.

sudo apt-get install jq

We also have users already created (from LDAP): user0 and demo.

Here's what the test looks like:

#export start variables.

export OS_AUTH_URL=http://127.0.0.1:5000/v3

export OS_SERVICE_TOKEN=openstack

 

#list domains

curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/domains | jq '.domains'

#or

openstack --os-identity-api-version 3 --os-url  http://127.0.0.1:35357/v3 --os-token openstack domain list

 

#create domain

curl -sX POST -H "X-Auth-Token:$OS_SERVICE_TOKEN" -H "Content-Type: application/json" -d '{"domain": {"enabled": true, "name": "dom0"}}' http://127.0.0.1:35357/v3/domains | jq '.'

 

#show domain dom0

curl -s -X GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/domains?name=dom0 | jq '.domains'

 

#assign a user as a domain admin

#get user id

curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/users?name=user0 | jq '.users'

export OS_USER_ID=`curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/users?name=user0 | jq '.users[].id' |tr -d '"'`

 

curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/domains?name=dom0 | jq '.domains'

export OS_DOMAIN_ID=`curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/domains?name=dom0 | jq '.domains[].id' |tr -d '"'`

 

curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/roles?name=admin | jq '.roles'

export OS_ROLE_ID=`curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/roles?name=admin | jq '.roles[].id' |tr -d '"'`

curl -sX PUT -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/domains/$OS_DOMAIN_ID/users/$OS_USER_ID/roles/$OS_ROLE_ID

curl -sX GET -H "X-Auth-Token:$OS_SERVICE_TOKEN" http://127.0.0.1:35357/v3/domains/$OS_DOMAIN_ID/users/$OS_USER_ID/roles/ | jq '.roles'

 

#authenticate as a domain admin

export OS_MYTOKEN=`curl -si -X POST -H "Content-Type: application/json" -d "{\"auth\": {\"scope\": {\"domain\": {\"id\": \"$OS_DOMAIN_ID\"}}, \"identity\": {\"password\": {\"user\": {\"domain\": {\"name\": \"default\"}, \"password\": \"qwerty\", \"id\": \"$OS_USER_ID\"}}, \"methods\": [\"password\"]}}}" http://127.0.0.1:5000/v3/auth/tokens | awk '/X-Subject-Token/ {print $2}' | tr -d '\r'`
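From here, the domain-scoped token can drive the domain admin use cases from the list at the beginning of the article. As a sketch (assuming the policy.json rules discussed earlier are in place, and using proj0 as an example project name), creating and listing projects inside the domain looks like this:

#create a project inside the domain (as the domain admin)

curl -sX POST -H "X-Auth-Token:$OS_MYTOKEN" -H "Content-Type: application/json" -d "{\"project\": {\"enabled\": true, \"name\": \"proj0\", \"domain_id\": \"$OS_DOMAIN_ID\"}}" http://127.0.0.1:5000/v3/projects | jq '.'

#list only this domain's projects

curl -sX GET -H "X-Auth-Token:$OS_MYTOKEN" http://127.0.0.1:5000/v3/projects?domain_id=$OS_DOMAIN_ID | jq '.projects'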
