In this session, we will review the current status of Oslo and discuss improvements to the concept and process.
Topics up for discussion will include releasing individual library packages, versioning, PyPI uploads, our "managed copy and paste" process, and what shared code we should be focusing our efforts on.
More topics are welcome, please come armed with your ideas!
At the Grizzly summit I proposed replacing the WSGI framework in Oslo with a combination of Pecan and WSME. We have done that in ceilometer's v2 API, and this session will discuss lessons learned.
Operators want to keep services running while deploying upgrades, or at least close enough to running that clients don't notice. Shutting down all services, upgrading them all, then running migrations, then starting them all up is an uptime nightmare.
Let's detail the steps needed to allow migrations to happen without [non-trivial] visible downtime.
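One common approach to zero-downtime migrations is the "expand/migrate/contract" pattern, where old and new service versions can run side by side during the upgrade. The sketch below illustrates the three phases on a dict standing in for a database row; the column names are illustrative, not taken from any actual OpenStack schema.

```python
# A minimal sketch of the "expand/migrate/contract" pattern. Each phase is
# safe to run while services are live: old code ignores the new column,
# new code tolerates unmigrated rows, and the old column is only dropped
# once no old services remain.

def expand(row):
    # Phase 1: add the new column with a default; old code ignores it.
    row.setdefault("display_name", None)
    return row

def migrate(row):
    # Phase 2: backfill the new column from the old one, online.
    if row.get("display_name") is None:
        row["display_name"] = row.get("name")
    return row

def contract(row):
    # Phase 3: once all services read the new column, drop the old one.
    row.pop("name", None)
    return row

row = {"id": 1, "name": "vm-1"}
for phase in (expand, migrate, contract):
    row = phase(row)
# row is now {"id": 1, "display_name": "vm-1"}
```

The key property is that between any two phases the schema is readable by both the old and the new service version, so migrations never require stopping everything at once.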
During the Grizzly timeframe, Rootwrap moved to Oslo and that version was adopted by Nova and Cinder. This session will look into further work planned during the Havana timeframe:
- Make Quantum's rootwrap use the common version (including introducing the extra features from quantum-rootwrap into the common one)
- Add a PathFilter for obvious path-constrained operations
- Execute snippets of Python instead of shelling out
- Graduate oslo.rootwrap off Oslo incubation
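For context, rootwrap decides which commands a service may run as root via declarative filter files. The fragment below shows the existing CommandFilter style alongside one possible shape for the proposed PathFilter; the PathFilter syntax, filter names, and paths here are assumptions for illustration, not a finalized design.

```ini
[Filters]
# Existing style: allow "kill" to be run as root, with no argument checks.
kill: CommandFilter, /bin/kill, root
# Proposed path-constrained style: allow chown only when its file
# arguments stay under /var/lib/images (hypothetical syntax).
chown_images: PathFilter, /bin/chown, root, nova, /var/lib/images
```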
At the end of the Grizzly cycle, work was done to make openstack.common.setup/version and hacking.py standalone projects - i.e. oslo.packaging and oslo.hacking.
This session will discuss the scope of these, whether they should actually be in Oslo or not, and how we should manage them going forward - as well as the technology choices involved in making them separate (such as d2to1 and flake8).
The RPC API is one of the next APIs we propose to release as a new standalone library in the Oslo family.
Once released standalone, we will be making a firm commitment to the stability of the API. This session will review the current API and discuss areas where we're not comfortable about our ability to maintain the API into the future.
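The part of the API we would be committing to is essentially the call/cast semantics: a blocking invocation that returns a value versus a fire-and-forget notification. The sketch below illustrates those semantics with a plain in-process dispatcher; the class and method names are illustrative, not the actual openstack.common.rpc interface.

```python
# Illustration of RPC call/cast semantics using an in-process dispatcher.
# All names here are hypothetical stand-ins for the real RPC library.

class FakeRPC:
    def __init__(self):
        self.endpoints = {}

    def register(self, topic, handler):
        self.endpoints[topic] = handler

    def call(self, context, topic, msg):
        # call: invoke the remote method and wait for its return value.
        handler = self.endpoints[topic]
        return getattr(handler, msg["method"])(context, **msg["args"])

    def cast(self, context, topic, msg):
        # cast: fire-and-forget; any return value is discarded.
        self.call(context, topic, msg)

class Compute:
    def reboot(self, context, instance_id):
        return "rebooted %s" % instance_id

rpc = FakeRPC()
rpc.register("compute", Compute())
result = rpc.call({}, "compute",
                  {"method": "reboot", "args": {"instance_id": "i-1"}})
# result == "rebooted i-1"
```

Committing to API stability means this message shape and these semantics could not change incompatibly once the library ships standalone, which is why reviewing the rough edges now matters.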
The ZeroMQ RPC driver was initially developed for Nova and its actor and scheduler driven design. Since moving into Oslo, new projects have begun using the RPC code for different messaging patterns. Some of those patterns are particularly un-actor-like, which poses challenges for the ZeroMQ driver. I've been attacking these challenges head-on and we've made good progress in Grizzly. I would like to discuss what challenges remain, identify new and upcoming challenges for Havana, set expectations, and identify gaps where new blueprints may be necessary.
The AMQP server provides the message bus for OpenStack, so its security significantly affects the security of OpenStack as a whole. A lot of effort has been spent on authentication of the sender/recipient and on confidentiality/integrity protection of the messages. However, a compromised Nova component, e.g. a hypervisor, can pass authentication as normal (before the compromise is detected and corrected) and send malicious but legitimate messages to the bus, disrupting the OpenStack system. Fine-grained access control and throttling of messages for authenticated AMQP clients is needed to counter this.
One way to do that is to implement the access control, authorization and throttling in the Nova code, but that implementation would be duplicated everywhere AMQP messages are examined and/or consumed.
This session proposes implementing access control with flexible authorization based on roles and other metrics, message authenticity, and throttling/rate-limiting at the AMQP level, via either an AMQP proxy or a plugin to an AMQP server. It can also help with access control in multi-cluster scenarios.
If accepted, a 45-minute talk will be prepared or a brainstorming session will be conducted to outline and discuss the details of how it works. Note that this is not just for OpenStack; any system that uses AMQP as a message bus can leverage the capabilities provided here.
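To make the proposal concrete, here is a rough sketch of per-sender policy enforcement as it might run inside an AMQP proxy or broker plugin: a role-to-method policy table combined with a simple rate limiter. The roles, method names, and rate numbers are illustrative assumptions, not an existing OpenStack API.

```python
# Hypothetical role-based message filtering plus rate limiting, as an
# AMQP proxy might apply it. Policy contents are invented for illustration.

import time

POLICY = {
    "compute-node": {"run_instance", "report_state"},
    "scheduler": {"run_instance", "select_destinations"},
}

class RateLimiter:
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.window_start = time.time()
        self.count = 0

    def allow(self):
        # Reset the counting window every second.
        now = time.time()
        if now - self.window_start >= 1.0:
            self.window_start, self.count = now, 0
        self.count += 1
        return self.count <= self.max_per_second

def permit(sender_role, method, limiter):
    # Reject messages outside the sender's role policy, or over its budget.
    return method in POLICY.get(sender_role, set()) and limiter.allow()

limiter = RateLimiter(max_per_second=100)
permit("compute-node", "report_state", limiter)  # True
permit("compute-node", "delete_user", limiter)   # False: not in role policy
```

Because the check sits at the bus rather than in each service, a compromised compute node is confined to the small set of messages its role legitimately sends, even though it still authenticates successfully.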
About the author: Jiangang Zhang, a.k.a. JZ, veteran in architecting and managing the whole development lifecycle of highly scalable, highly available and highly performant software and systems and in practicing pretty much all aspects of information security, currently Distinguished Architect at Yahoo. JZ can be reached via jz@yahoo-inc.com (business) or jgzhang@hotmail.com (personal).
With message signing on the horizon to provide confidence in message authenticity, it is expected that many will desire the next step: encrypted messaging to provide confidentiality. It is not expected we can have confidentiality in Havana, but we will need to plan for it in our changes to the rpc envelope, matchmaker, etc.
With message envelopes introduced in Grizzly and to be enabled with Havana, we have the ability to make the envelope immutable. I am proposing a blueprint to work toward immutable messaging and would like to discuss the challenges of achieving this goal and the requirements for the next version of the RPC envelope.
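One way to make an envelope tamper-evident is to sign a canonical serialization of the message, so any mutation after sealing is detectable. The sketch below uses an HMAC for this; the `oslo.signature` field and the key handling are assumptions for illustration, not the actual Grizzly envelope design.

```python
# Sketch of a tamper-evident RPC envelope: the body is serialized
# canonically and signed with an HMAC. Field names beyond oslo.version /
# oslo.message, and the key management, are hypothetical.

import hashlib
import hmac
import json

SECRET = b"shared-signing-key"  # would come from a key manager in practice

def seal(payload):
    body = json.dumps(payload, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"oslo.version": "2.0", "oslo.message": body, "oslo.signature": sig}

def verify(envelope):
    expected = hmac.new(SECRET, envelope["oslo.message"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["oslo.signature"])

env = seal({"method": "reboot", "args": {"instance": "i-1"}})
assert verify(env)
# Any in-flight mutation of the sealed body invalidates the signature:
env["oslo.message"] = env["oslo.message"].replace("reboot", "destroy")
assert not verify(env)
```

Immutability in this sense is a property enforced by verification at the consumer, which is why the envelope format and key distribution both need to be settled before the next RPC envelope version.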
At y! we have been working on integrating Zipkin into OpenStack to get a 'live' tracing mechanism hooked into the various OpenStack components.
This session will cover how we did this, what the benefits are, and how it can be expanded in the future to provide more in-depth tracing with more context, something sorely lacking in OpenStack.
We'd like to share this code with others and let others see the potential of such a system and also be able to use it for themselves...
This session will start with a quick overview of the current approach that OpenStack services take to i18n and some of the challenges faced.
We will then step back and look at the bigger picture - OpenStack services currently do immediate translation of messages to the local server locale, yet there are two use cases for the translation of messages from OpenStack services:
1) As an OpenStack technical support provider, I need to get log messages in my locale so that I can debug and troubleshoot problems.
2) As an OpenStack API user, I want responses in my locale so that I can interpret the responses.
If we want to translate log messages (i.e. use case 1), they should be in a separate translation domain from the messages we return to users of the REST API (i.e. use case 2). The problem with having them both in the same translation domain is that translators have no way of prioritizing the REST API messages nor do administrators have any way of disabling the translation of log messages without the translation of the REST API messages.
Another tactic that may help is to delay the translation of messages by creating a new Oslo object that saves away the original text and injected information to be translated at output time. When these messages reach an output boundary, they can be translated into the server locale to mirror today's behavior or to a locale determined by the output mechanism (e.g. log handler or HTTP response writer).
As part of this session we will look at some of the difficulties encountered in the implementation of delayed messages: use of gettext.install(…) to install the _() function into Python's __builtin__ namespace (also known as "domain change issue") and the expectation that _() is returning a string.
(Session proposed by Mark McLoughlin)
Wednesday April 17, 2013 11:00am - 11:40am PDT
B119
Nova and Cinder now both have code to access XenAPI that builds on the standard XenAPI library. This code can make assumptions about the way OpenStack uses XenAPI.
It would be good to have this code shared in a managed way between the projects. Something like oslo-xenapi seems like one possible home. The code that has been added to Cinder is probably a good place to start when bringing together the code from Nova and Cinder.
(Session proposed by John Garbutt)
Wednesday April 17, 2013 11:50am - 12:30pm PDT
B119