Is this your first OpenStack Summit? Unlike most conferences, you are invited to participate and play an active role. But... where to start? We will explain the rationale and the organization that make such a unique collaborative design process possible. This is your chance to get answers and make the most of it!
During this session we will give attendees some details about the summit, including who will be attending, the different tracks and their purposes, which sessions and talks are most suitable for beginners, and how they can participate. This short introduction will be followed by a lively presentation of the most common situations and how to behave when facing them. It will be a miniature experience of a first participation in the summit.
We currently have an architecture that should support scaling, but some code is missing.
How does the heat-api find the correct engine to talk to?
How would a "heat list" work?
https://etherpad.openstack.org/heat-multiple-engines
(Session proposed by Angus Salkeld)
In this session, I will talk about some of the problems we face in deploying OpenStack by using OpenStack. I'll point at areas where the bare metal driver needs better integration with other services (e.g. Quantum and Cinder), how we really need an inventory database (I'm looking at you, HealthNMon), how the Nova scheduler needs to be aware of hardware, and how Heat is taking over the world. I might even propose that it's possible to bootstrap yourself right out of your own boots!
(Session proposed by Devananda van der Veen)
1. Identify test gaps for all core services: the Swift, Nova, Keystone, Cinder and Quantum projects.
2. From those gaps, identify the new tests to be written to achieve coverage.
3. Discuss these as part of a design session, leading to new blueprints and blueprint ownership for the Havana release.
Etherpad: https://etherpad.openstack.org/havana-gap-analysis
(Session proposed by Ravikumar Venkatesan)
This session will include the following subject(s):
Distributed & scalable alarm threshold evaluation:
A simple method of detecting threshold breaches for alarms is to do so directly "in-stream" as the metric datapoints are ingested. However, this approach is overly restrictive when it comes to wide-dimension metrics, where a datapoint from a single source is insufficient to perform the threshold evaluation. The in-stream evaluation approach is also less suited to the detection of missing or delayed data conditions.
An alternative approach is to use a horizontally scaled array of threshold evaluators, partitioning the set of alarm rules across these workers. Each worker would poll for the aggregated metric corresponding to each rule it has been assigned.
The allocation of rules to evaluation workers could take into account both locality (ensuring rules applying to the same metric are handled by the same workers if possible) and fairness (ensuring the workload is evenly balanced across the current population of workers).
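One way to get the locality property cheaply is to hash the metric name rather than the rule id, so that all rules on the same metric land on the same worker. The sketch below illustrates this; the worker and rule names are hypothetical, not existing Ceilometer code.

```python
import hashlib
from collections import Counter

def assign_worker(metric_name, workers):
    """Hash the metric name (not the rule id) so every rule on the
    same metric is handled by the same worker -- the locality property."""
    h = int(hashlib.md5(metric_name.encode()).hexdigest(), 16)
    return workers[h % len(workers)]

workers = ["evaluator-0", "evaluator-1", "evaluator-2"]
rules = [("cpu_util", "alarm-high"), ("cpu_util", "alarm-low"),
         ("disk_io", "alarm-sat"), ("net_in", "alarm-flood")]

assignment = {rule: assign_worker(metric, workers) for metric, rule in rules}

# Locality: both cpu_util rules are evaluated by the same worker.
assert assignment["alarm-high"] == assignment["alarm-low"]

# Fairness would be judged over the whole rule population:
load = Counter(assignment.values())
```

Note that plain modulo hashing reassigns almost every rule when the worker pool is resized; a consistent-hash ring would limit that churn, which bears directly on the rebalancing question for this session.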
Logical combination of alarm states:
A mechanism to combine the states of multiple basic alarms into overarching meta-alarms could be useful in reducing noise from detailed monitoring.
We would need to determine:
* whether the meta-alarm threshold evaluation should be based on notification from basic alarms, or on re-evaluation of the underlying conditions
* what complexity of logical combination we should support (number of basic alarms; &&, ||, !, subset-of, etc.)
* whether an extended concept of simultaneity is required to handle lags in state changes
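The notification-driven option from the first bullet could be sketched as follows: each basic alarm pushes its state transitions to the meta-alarm, which re-evaluates a boolean combination over the cached states. The class and alarm names are illustrative, not existing Ceilometer code.

```python
class MetaAlarm:
    """Hypothetical meta-alarm combining the states of basic alarms."""

    def __init__(self, basic_alarms, combine=all):
        # combine=all models &&; combine=any models ||
        self.states = {name: False for name in basic_alarms}
        self.combine = combine

    def notify(self, name, in_alarm):
        """Called on a basic alarm's state transition."""
        self.states[name] = in_alarm
        return self.evaluate()

    def evaluate(self):
        return self.combine(self.states.values())

meta = MetaAlarm(["cpu-high", "mem-high"], combine=all)
meta.notify("cpu-high", True)          # meta-alarm still quiet
fired = meta.notify("mem-high", True)  # both conditions hold, meta-alarm fires
```

The alternative (re-evaluating the underlying conditions on each cycle) avoids trusting cached states at the cost of duplicating the basic evaluations, which is part of the trade-off to be decided.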
The polling cycle would also provide a logical point to implement policies such as:
* correcting for metric lag
* gracefully handling sparse metrics versus detecting missing expected datapoints
* selectively excluding chaotic data.
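The second policy above hinges on telling legitimately sparse metrics apart from metrics that have gone missing. A minimal sketch, assuming each metric may declare an expected publication period (the `check_missing` helper and its `grace` factor are hypothetical):

```python
def check_missing(last_seen, now, period, grace=1.5):
    """Return True if an expected datapoint appears to be missing.

    `period` is the expected publication interval in seconds, or None
    for sparse metrics with no fixed cadence; `grace` corrects for lag
    by tolerating datapoints up to grace * period late.
    """
    if period is None:          # sparse metric: never report as missing
        return False
    return (now - last_seen) > grace * period

assert check_missing(last_seen=100, now=190, period=60) is False   # within grace
assert check_missing(last_seen=100, now=200, period=60) is True    # overdue
assert check_missing(last_seen=100, now=900, period=None) is False # sparse
```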
This design session will discuss and agree on the best approaches to managing this distributed threshold evaluation, while seamlessly handling up- and down-scaling of the worker pool (i.e. rebalancing fairly and avoiding duplicate evaluation).
This session will include the following subject(s):
Time series data manipulation in nosql stores:
Ceilometer currently supports multiple storage drivers (MongoDB, SQLAlchemy, HBase) behind a well-defined abstraction.
The purpose of this design session is to discuss how well suited the existing NoSQL stores are to the efficient manipulation of time-series data.
The questions to be decided would include:
* whether we could optimize/improve our existing schemas in this regard
* whether we should consider a storage driver based on Cassandra in order to take advantage of its well-known suitability for TSD
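One schema optimization worth discussing is the wide-row bucketing pattern commonly used for time series on HBase and Cassandra: the row key combines the metric with a coarse time bucket, and each column is the offset within the bucket, so a range scan touches few, densely packed rows. A minimal in-memory sketch (the bucket size and helper names are assumptions, not a Ceilometer schema):

```python
from collections import defaultdict

BUCKET = 3600  # one row per metric per hour; a tunable design choice

store = defaultdict(dict)  # (metric, bucket) -> {offset: value}

def write(metric, timestamp, value):
    bucket, offset = divmod(int(timestamp), BUCKET)
    store[(metric, bucket)][offset] = value

def read_range(metric, start, end):
    """Range read over [start, end): scans only the buckets covered."""
    out = []
    for bucket in range(int(start) // BUCKET, int(end) // BUCKET + 1):
        for offset, value in sorted(store[(metric, bucket)].items()):
            ts = bucket * BUCKET + offset
            if start <= ts < end:
                out.append((ts, value))
    return out

write("cpu_util", 3599, 0.41)
write("cpu_util", 3600, 0.97)   # lands in the next row
print(read_range("cpu_util", 0, 7200))
```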
(Session proposed by Eoghan Glynn)
The dotted line between metering and metric/alarms:
There is clear commonality in the data acquisition & transformation layers for gathering metering and metric data.
However, the further we venture along the pipeline, there are also operational concerns around over-sharing common infrastructure in the transport and storage layers.
We need to tie down exactly where we see the dotted line between the handling of metering and metric data, deciding whether:
* a common conduit in the form of AMQP should be used for publication (for example given that during a brownout in the RPC layer, we would need a timely metric flow more than ever)
* a common storage layer should be used for persistence (for example given that data retention requirements may differ significantly)
* a common API layer should provide aggregation (for example given that certain aggregations such as percentile may make far more sense for metric rather than metering data)
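The percentile point in the last bullet can be made concrete with the standard library: a percentile is informative over sampled metric data, but meaningless over the monotonically increasing counters typical of metering, where sums and deltas over a billing period are the useful aggregates. (The sample values below are invented for illustration.)

```python
import statistics

# Metric-style data: sampled latencies, where a 95th percentile
# is a meaningful aggregation.
latencies = list(range(1, 101))  # pretend ms samples
p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile cut point

# Metering-style data: a monotonically increasing usage counter.
# A percentile over counter readings tells you nothing useful;
# the meaningful aggregation is the delta over the billing period.
counter = [10, 20, 30, 40]
usage = counter[-1] - counter[0]
```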
We need to tie down the requirements for managing the state and history of alarms, for example providing:
* an API to allow users to define and modify alarm rules
* an API to query current alarm state and modify this state for testing purposes
* a period for which alarm history is retained and is accessible to the alarm owner (likely to have less stringent data retention requirements than regular metering data)
* an administrative API to support across-the-board querying of state transitions for a particular period (useful when assessing the impact of operational issues in the metric pipeline)
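The state and history requirements above can be sketched with a single append-only transition log backing both the owner-facing history query and the administrative across-the-board query. The class and field names below are hypothetical, not an existing Ceilometer model.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Transition:
    alarm_id: str
    old_state: str
    new_state: str
    timestamp: float

@dataclass
class AlarmRegistry:
    """Hypothetical store for alarm states and their transition history."""
    states: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def set_state(self, alarm_id, new_state, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        old = self.states.get(alarm_id, "insufficient data")
        if old != new_state:  # only genuine transitions are recorded
            self.history.append(Transition(alarm_id, old, new_state, ts))
        self.states[alarm_id] = new_state

    def transitions(self, start, end, alarm_id=None):
        """Administrative query: all transitions within [start, end)."""
        return [t for t in self.history
                if start <= t.timestamp < end
                and (alarm_id is None or t.alarm_id == alarm_id)]

reg = AlarmRegistry()
reg.set_state("cpu-high", "ok", timestamp=100)
reg.set_state("cpu-high", "alarm", timestamp=160)
reg.set_state("mem-high", "alarm", timestamp=170)
incident = reg.transitions(150, 200)  # transitions during the incident window
```

A retention period would then simply be a periodic purge of `history` entries older than the cutoff, with the cutoff potentially looser than the one applied to metering data.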