
Design Summit
Wednesday, April 17
 

4:30pm

Cinder API v2: a new hope for better validation
In this session I'll be presenting the state of the Cinder REST API v2 for Grizzly. In addition, I'll be going over some small things that need to happen in Havana.

Duncan Thomas will then be leading a discussion of the improvements that need to happen in Havana for better validation in the REST API. The guiding principle: users don't usually have access to the logs, so we need to present errors up front when possible.
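A minimal sketch of the kind of up-front validation being proposed, assuming hypothetical field names and limits: all problems with a request are collected and surfaced in the API response, rather than left in server logs the user can't see.

```python
# Hypothetical sketch: validate a volume-create request before any work
# is queued, so the caller sees the errors instead of finding them in logs.

class ValidationError(Exception):
    """Raised back to the API caller with a clear message."""

def validate_create_volume(body):
    errors = []
    size = body.get("size")
    if not isinstance(size, int) or size <= 0:
        errors.append("'size' must be a positive integer (GB)")
    name = body.get("display_name", "")
    if len(name) > 255:
        errors.append("'display_name' must be at most 255 characters")
    if errors:
        # Surface every problem at once in the HTTP response.
        raise ValidationError("; ".join(errors))
    return body
```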

(Session proposed by Mike Perez)


Wednesday April 17, 2013 4:30pm - 5:10pm
B110

5:20pm

Cinder Smart Shutdown
Cinder needs a nice way to gracefully shut down its services and queue up new tasks for the next restart. Once services come back up, any queued tasks should be kicked off. Any tasks already running when shutdown is requested should be allowed to finish before the cinder process terminates.
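The proposal above could be sketched roughly as follows; the class and method names are hypothetical, and a real implementation would persist the queued tasks (e.g. to the database) rather than hold them in memory.

```python
# Hypothetical sketch of "smart shutdown": once shutdown is requested,
# new tasks are deferred to the next restart instead of being started,
# while in-flight tasks are allowed to run to completion.

class SmartService:
    def __init__(self):
        self.shutting_down = False
        self.in_flight = []
        self.queued = []        # would be persisted in a real service

    def submit(self, task):
        if self.shutting_down:
            self.queued.append(task)    # defer to the next restart
        else:
            self.in_flight.append(task)

    def request_shutdown(self):
        self.shutting_down = True

    def drain(self):
        # Let tasks that were already running finish before process exit.
        for task in self.in_flight:
            task()
        self.in_flight = []

    def restart(self):
        # On startup, kick off anything queued during the last shutdown.
        self.shutting_down = False
        pending, self.queued = self.queued, []
        for task in pending:
            self.submit(task)
```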

(Session proposed by Walt)


Wednesday April 17, 2013 5:20pm - 6:00pm
B110
 
Thursday, April 18
 

9:00am

Cinder Update for Disk Encryption
There is a Nova blueprint called Encryption of Attached Cinder Volumes
that has a working prototype. Encrypted volumes add new challenges for
Cinder operations such as snapshots and cloning, particularly with
regard to key management. The goal of this session is to enumerate the
Cinder operations that must become "encryption-aware" in Havana and to
outline different implementation strategies to support these operations.
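As one illustration of what "encryption-aware" might mean in practice (field names here are hypothetical, not from the blueprint): a snapshot of an encrypted volume must carry a reference to the same key, or the snapshot data can never be decrypted.

```python
# Hypothetical sketch: an encryption-aware snapshot operation copies the
# key *reference* from the source volume; the key itself stays in the
# key manager. Field names are illustrative.

def snapshot_volume(volume):
    snapshot = {
        "source": volume["id"],
        "encrypted": volume.get("encrypted", False),
    }
    if snapshot["encrypted"]:
        # Without this, the snapshot would be undecryptable garbage.
        snapshot["encryption_key_id"] = volume["encryption_key_id"]
    return snapshot
```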

(Session proposed by Laura Glendenning)


Thursday April 18, 2013 9:00am - 9:40am
B110

9:50am

Multi-Attach and Read Only Volumes
There have been some discussions about introducing a special read-only volume that can be attached to multiple instances simultaneously. Initial thoughts are something like creating an R/O volume from an existing volume, or converting an existing volume.

Additionally, a number of folks would like to see R/W multi-attach, which can be discussed as well.
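The attachment rule being floated could be sketched as below (class and attribute names are hypothetical): a read-only volume accepts any number of attachments, while a read/write volume stays single-attach.

```python
# Hypothetical sketch of read-only multi-attach semantics: R/O volumes
# may attach to many instances, R/W volumes to at most one.

class Volume:
    def __init__(self, read_only=False):
        self.read_only = read_only
        self.attachments = set()

    def attach(self, instance_id):
        if not self.read_only and self.attachments:
            raise RuntimeError("R/W volume is already attached")
        self.attachments.add(instance_id)

    def detach(self, instance_id):
        self.attachments.discard(instance_id)
```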

(Session proposed by John Griffith)


Thursday April 18, 2013 9:50am - 10:30am
B110

11:00am

Refactor Attach Code
Cinder currently has code copy/pasted from Nova's libvirt driver to attach a volume to the Cinder node. This is done so Cinder can copy volume contents into an image and then upload the image to Glance. This currently only works for iSCSI, but we need the same capability for Fibre Channel.

We shouldn't copy/paste the code from Nova's libvirt driver. We should look at migrating the attach code from Nova into Oslo and reusing that code in both Nova and Cinder.


Other options are using a worker VM to do the copying, or a library in Cinder.
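The shared-library option might look something like this sketch (class names and the connector registry are hypothetical): one connector module, selectable by transport protocol, usable from both Nova and Cinder, so the copy-to-image path is no longer iSCSI-only.

```python
# Hypothetical sketch of a shared attach library: connectors are chosen
# by transport protocol instead of being hard-coded for iSCSI.

class ISCSIConnector:
    def connect_volume(self, connection_info):
        # Would run iscsiadm in a real implementation; here we just
        # derive the device path the attach would produce.
        return "/dev/disk/by-path/iscsi-%s" % connection_info["iqn"]

class FibreChannelConnector:
    def connect_volume(self, connection_info):
        return "/dev/disk/by-path/fc-%s" % connection_info["wwn"]

CONNECTORS = {
    "iscsi": ISCSIConnector,
    "fibre_channel": FibreChannelConnector,
}

def get_connector(protocol):
    try:
        return CONNECTORS[protocol.lower()]()
    except KeyError:
        raise ValueError("unsupported transport: %s" % protocol)
```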

(Session proposed by Walt)


Thursday April 18, 2013 11:00am - 11:40am
B110

11:50am

Standardizing vol type extra spec as driver input
Cinder back-end drivers can provide differentiated service by looking into volume type extra specs. Right now, how a driver treats extra specs is still up to the driver developer. To make volume type/extra spec definitions more portable, it is very important to standardize the way drivers extract requirements from volume type extra specs. In this session, we'll discuss possible solutions.
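One possible standardization, sketched here with illustrative key names: extra-spec keys carry a scope prefix (e.g. `capabilities:compression` for generic features, a vendor prefix for back-end-specific options), and a single shared helper does the parsing so every driver interprets volume types the same way.

```python
# Hypothetical sketch of a standardized extra-spec extractor. Scoped
# keys are split uniformly into generic capabilities and vendor options;
# unscoped keys are ignored. Key names are illustrative.

def extract_driver_specs(extra_specs, vendor_prefix=None):
    """Split extra specs into generic capabilities and vendor options."""
    capabilities, vendor = {}, {}
    for key, value in extra_specs.items():
        scope, _, name = key.partition(":")
        if scope == "capabilities" and name:
            capabilities[name] = value
        elif vendor_prefix and scope == vendor_prefix and name:
            vendor[name] = value
    return capabilities, vendor
```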

(Session proposed by Huang Zhiteng)


Thursday April 18, 2013 11:50am - 12:30pm
B110

1:30pm

Cinder Capability Standardization
Currently volume types are used for two main purposes: for the capability filter scheduler to decide where to place volumes, and for drivers to create volumes with certain parameters/capabilities.

It is important for volume types to be both standardized and flexible. For example, if two different back-ends support feature foo, then they should report it in the same way. The scheduler should choose either back-end, and that back-end should have a key to enable feature foo for the given volume.

We propose to:
- Maintain a list of mandatory capabilities that all drivers must report (one current example is free space). These capabilities must be generic and make sense for all back-ends.
- Maintain a list of recommended capabilities that drivers may report. These should still be generic, but well-defined, and used across back-ends.
- Drivers may report any additional capabilities that they want where they are specific to that back-end.
- Administrators should be able to specify capabilities for storage via the configuration file if the driver doesn't report them.

The goal of this session is to discuss the mechanisms for managing capabilities (e.g., proposing new ones, listing existing ones), and perhaps to come away with an initial list.
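The three proposed tiers could be enforced with something like the sketch below. The tier membership (which keys are mandatory or recommended) is exactly what the session would decide; the key names here are only illustrative.

```python
# Hypothetical sketch of capability tiers: mandatory keys every driver
# must report, well-known recommended keys, and anything else treated as
# back-end specific. Key names are illustrative.

MANDATORY = {"free_capacity_gb"}
RECOMMENDED = {"total_capacity_gb", "reserved_percentage"}

def classify_capabilities(reported):
    missing = MANDATORY - reported.keys()
    if missing:
        raise ValueError(
            "driver missing mandatory capabilities: %s" % sorted(missing))
    well_known = MANDATORY | RECOMMENDED
    known = {k: v for k, v in reported.items() if k in well_known}
    custom = {k: v for k, v in reported.items() if k not in well_known}
    return known, custom
```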

(Session proposed by Avishay Traeger)


Thursday April 18, 2013 1:30pm - 2:10pm
B110

2:20pm

ACL(access control list) Rule for Cinder Volumes
Currently, a volume can only be accessed by a certain user in a certain project; there is no ACL rule for Cinder volumes. Adding ACL configuration would allow a volume to be read or written by other users or other projects. The volume creator has the ability to edit the ACL rules. The ACL model can be similar to the one in Amazon S3.
Use case: several users can share the data in one volume.
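An S3-style grant list along the lines suggested might look like this sketch (class, method, and permission names are all hypothetical): the owner always has access and is the only one who may edit the grants.

```python
# Hypothetical sketch of an S3-style ACL for a volume: a list of
# (grantee, permission) grants, editable only by the volume creator.

class VolumeACL:
    def __init__(self, owner):
        self.owner = owner
        self.grants = []    # (grantee, permission) pairs

    def grant(self, actor, grantee, permission):
        if actor != self.owner:
            raise PermissionError("only the volume creator may edit the ACL")
        self.grants.append((grantee, permission))

    def allows(self, actor, permission):
        if actor == self.owner:
            return True
        return (actor, permission) in self.grants
```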

(Session proposed by houshengbo)


Thursday April 18, 2013 2:20pm - 3:00pm
B110

3:20pm

Cinder FC SAN Zone/Access Control Manager
Fibre Channel (FC) block storage support was added in the Grizzly release (cinder/fibre-channel-block-storage blueprint).

Currently there is no support for automated SAN zoning, which requires FC SANs to be either pre-zoned or open-zoned.

We propose to:

- Add support for FC SAN Zone/Access Control management feature - allowing automated zone lifecycle management in the attach/detach entry points of volume manager (when fabric zoning is enabled).
- Introduce a FibreChannelZoneManager plug-in interface API for automated zone lifecycle management - enabling SAN vendors to add support for pluggable implementations.

Use Cases

- Defaults and capabilities - support for FC SAN configuration settings (e.g. zoning mode, zoning capabilities etc.)
- Add active zone interface to add the specified zone to the active zone set
- Remove active zone interface to remove the specified zone from the active zone set
- Support for provisioning and enumerating SAN/Fabric contexts

In a future OpenStack release, this can be extended as a general SAN access control mechanism to support both iSCSI and FC.
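The proposed plug-in interface might be shaped roughly like the sketch below; the method names and the fake in-memory implementation are assumptions for illustration, not the blueprint's actual API.

```python
# Hypothetical sketch of a FibreChannelZoneManager plug-in interface:
# vendors implement add/remove of zones in the active zone set, and the
# volume manager calls these from its attach/detach paths when fabric
# zoning is enabled. Method names are illustrative.

import abc

class FibreChannelZoneManager(abc.ABC):
    @abc.abstractmethod
    def add_connection(self, initiator_wwn, target_wwn):
        """Add a zone for this initiator/target pair to the active set."""

    @abc.abstractmethod
    def remove_connection(self, initiator_wwn, target_wwn):
        """Remove the zone from the active zone set."""

class FakeZoneManager(FibreChannelZoneManager):
    """In-memory stand-in for a vendor implementation."""

    def __init__(self):
        self.active_zones = set()

    def add_connection(self, initiator_wwn, target_wwn):
        self.active_zones.add((initiator_wwn, target_wwn))

    def remove_connection(self, initiator_wwn, target_wwn):
        self.active_zones.discard((initiator_wwn, target_wwn))
```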

The goal of this session is to discuss the mechanisms for automated zone lifecycle management in OpenStack/Cinder.

(Session proposed by Varma Bhupatiraju)


Thursday April 18, 2013 3:20pm - 4:00pm
B110

4:10pm

Cinder Volume Migration
We propose to add a volume migration feature to OpenStack - allowing for a volume to be transferred to a location and/or have its type changed. The following use cases will clarify the intent:

1. An administrator wants to bring down a physical storage device for maintenance without interfering with workloads. The admin can first migrate the volumes to other back-ends, which would result in moving volume data to the other back-ends.
2. An administrator would like to modify the properties of a volume (volume_type). In this use case, the back-end that stores the volume supports the new volume type, so the driver may choose to make the change internally, without moving data to another back-end.
3. Same as case #2, but where the current back-end does not support the new volume type. In this case, the data will be moved to a back-end that does support the new type, in a manner similar to case #1.
4. A Cinder component (be it the scheduler or a new service) may optimize volume placement for improved performance, availability, resiliency, or cost.

The goal is for the migration to be as transparent as possible to users and workloads - it should work for both attached and detached volumes, as well as (possibly) attaching/detaching mid-operation.

Volume migration will be enabled for all back-ends (for example, by using a generic copy function for detached volumes, and QEMU's live block copy feature for attached volumes). In addition the API will allow for storage back-ends to implement optimizations depending on source and target support (for example, using the storage controllers' remote mirroring function); this will require additions to the driver interface to report on the capabilities of the storage (based upon existing mechanisms) and to invoke storage functions.
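The dispatch described above can be sketched as follows, assuming a driver method that returns whether it handled the migration itself (the exact signature and names are assumptions for illustration):

```python
# Hypothetical sketch of migration dispatch: ask the source back-end to
# migrate using its own optimization (e.g. remote mirroring); if it
# declines, fall back to a generic host-side copy that works anywhere.

def migrate_volume(volume, src_driver, dest_host, generic_copy):
    # The driver returns (handled, model_update); handled=False means it
    # has no optimized path for this source/destination pair.
    handled, _ = src_driver.migrate_volume(volume, dest_host)
    if handled:
        return "driver-optimized"
    generic_copy(volume, dest_host)   # generic path for any back-end
    return "generic-copy"
```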

(Session proposed by Avishay Traeger)


Thursday April 18, 2013 4:10pm - 4:50pm
B110

5:00pm

Cinder Library for Local Storage
Problems
========
a) storage related code is in multiple locations
b) available space and quotas are managed in two locations
c) locality specification is limited to availability zones
d) volume and instances can't be scheduled together

Goals
=====

a) share code relating to volumes
b) manage storage related quotas and data in one place
c) enable better locality specification
d) allow scheduling of instances and volumes as a unit


(Session proposed by Duncan Thomas)


Thursday April 18, 2013 5:00pm - 5:40pm
B110