Requirements:
https://docs.openstack.org/upstream-training/upstream-trainees-guide.html
Training Content:
https://docs.openstack.org/upstream-training/upstream-training-content.html
Start your day learning about the evolution of OpenStack, both as a software project and as a community providing open infrastructure and a collaborative environment for the industry. You will also learn how you can join, bring your own ideas, and become part of the latest innovations spearheaded by the community.
The OpenStack Technical Committee is the body responsible for "the management of the technical matters" in OpenStack. What does that really mean in practice? In this presentation, we'll explore the history of the Technical Committee, the evolution of its role over time, where it stands right now, and its vision for the future.
"Technical Stewardship: A look inside the OpenStack Technical Committee"
Designing Ceph clusters is hard. Throw OpenStack into the mix, and it gets even harder. There are many different ways in which a Ceph cluster can be configured, and the same goes for selecting the right hardware. This session will give you a solid start into the world of Ceph-based storage (in the context of OpenStack).
You will learn the lessons we learned the hard way while operating our two Ceph clusters over the past several years, including recommendations on picking the right hardware for Ceph, as well as tips on optimising a deployed Ceph cluster for OpenStack.
During the session we will concentrate on what to consider when creating your very own Ceph cluster, which common Ceph problems to be aware of, how to benchmark and profile Ceph, what to know about fat nodes versus thin nodes, which SSD is best for the journal (spoiler alert: one does not exist, and here's why) and, last but not least, where the sweet spot lies between storage density, price, and performance.
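To make the configuration discussion concrete, here is a minimal Python sketch of the classic community heuristic for sizing placement groups (roughly 100 PGs per OSD, divided by the replica count and rounded to a power of two); the numbers are illustrative, not a recommendation from the presenters:

```python
# Rough placement-group sizing for a Ceph pool, using the classic
# community heuristic: total PGs ~= (OSDs * 100) / replica count,
# rounded to a power of two. Illustrative only -- verify against
# pgcalc and your own benchmarks before applying.
import math

def suggest_pg_num(num_osds: int, replica_size: int = 3,
                   target_pgs_per_osd: int = 100) -> int:
    """Suggest a power-of-two pg_num for a single pool."""
    raw = (num_osds * target_pgs_per_osd) / replica_size
    return 2 ** max(1, round(math.log2(raw)))

# Example: 40 OSDs with 3x replication -> ~1333 raw -> 1024 PGs.
print(suggest_pg_num(num_osds=40))
```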
The session will be a good blend of high-level information and various useful technical details which we've learned the hard way, so that you don't have to.
As with any IT project, clear goals are important, so we'll start by discussing the biggest pitfalls of private cloud projects.
Hardware specs
Your private cloud will need some hardware to run on. I'll present a standard OpenStack hardware configuration/bill of materials for different capacity targets: what to buy, how many, and how much it costs: servers + storage + network.
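As a taste of the arithmetic behind such a bill of materials, here is a hedged sketch using Nova's documented default overcommit ratios (cpu_allocation_ratio 16.0, ram_allocation_ratio 1.5); the server spec and VM flavor below are placeholders, not a recommendation:

```python
# Back-of-the-envelope compute sizing: how many hypervisors does a
# target VM count need? Uses Nova's documented default overcommit
# ratios; the hardware numbers are placeholders.
import math

CPU_ALLOCATION_RATIO = 16.0   # Nova default cpu_allocation_ratio
RAM_ALLOCATION_RATIO = 1.5    # Nova default ram_allocation_ratio

def hypervisors_needed(target_vms, vm_vcpus, vm_ram_gb,
                       cores_per_server, ram_gb_per_server):
    vcpus_per_server = cores_per_server * CPU_ALLOCATION_RATIO
    ram_gb_avail = ram_gb_per_server * RAM_ALLOCATION_RATIO
    by_cpu = target_vms * vm_vcpus / vcpus_per_server
    by_ram = target_vms * vm_ram_gb / ram_gb_avail
    # Whichever resource binds first determines the server count.
    return math.ceil(max(by_cpu, by_ram))

# Example: 500 VMs of 4 vCPU / 8 GB on 32-core, 256 GB servers -> 11.
print(hypervisors_needed(500, 4, 8, cores_per_server=32,
                         ram_gb_per_server=256))
```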
Deployment methods
The next step is to set up OpenStack on your new hardware. Here we'll discuss and compare the different options: vendor distributions, Kolla (OpenStack on containers), Fuel, OpenStack-Ansible, …
Operations
Now that your OpenStack deployment is up and running, let's cover monitoring and capacity management for OpenStack: what should you monitor, and how do you update your capacity management processes for Infrastructure as a Service?
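As one concrete starting point for the monitoring question, the sketch below polls per-hypervisor utilisation with openstacksdk. It assumes a cloud named "mycloud" in clouds.yaml (a placeholder), and note that the vcpus fields can vary with the compute API microversion:

```python
# Poll per-hypervisor utilisation as a seed for capacity monitoring.
# Assumes openstacksdk is installed and clouds.yaml defines "mycloud";
# vcpus/vcpus_used may differ across compute API microversions.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

for hv in conn.compute.hypervisors(details=True):
    if hv.vcpus:  # skip entries reported without stats
        usage = 100.0 * hv.vcpus_used / hv.vcpus
        print(f"{hv.name}: {hv.vcpus_used}/{hv.vcpus} vCPUs ({usage:.0f}%)")
```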
OpenStack is here to stay. Many companies leverage the advantages of having a private cloud: lower costs, more control, no vendor lock-in. In addition, you get a better understanding of how cloud platforms work. However, you also need to take care of operations and maintenance. OpenStack troubleshooting is a nontrivial task that takes a lot of time, knowledge, and experience. Hundreds of log files are written by numerous services, each with several configuration files, across countless virtual and physical machines - the possibilities for errors seem endless. Manual root cause analysis is like looking for a needle in a haystack. We're going to look at common problems in OpenStack environments, analyze their root causes, and discuss options for effective and efficient operation and troubleshooting. Participants of this session will learn how to find and fix such problems efficiently.
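One common thread to pull on when troubleshooting is the request ID (req-<uuid>) that oslo.log stamps on log lines as a request travels across services. As a minimal sketch, assuming default log formatting, the following groups ERROR lines by request ID:

```python
# Group ERROR lines from OpenStack service logs by request ID
# (req-<uuid>), the identifier oslo.log threads through a request's
# journey across services. Assumes default log formatting.
import re
import sys
from collections import defaultdict

REQ_ID = re.compile(r"req-[0-9a-f-]{36}")

errors_by_request = defaultdict(list)
for path in sys.argv[1:]:               # e.g. nova-*.log neutron-*.log
    with open(path, errors="replace") as f:
        for line in f:
            if " ERROR " in line:
                m = REQ_ID.search(line)
                key = m.group(0) if m else "no-request-id"
                errors_by_request[key].append(line.rstrip())

for req, lines in errors_by_request.items():
    print(f"{req}: {len(lines)} error line(s)")
```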
The market for container software solutions will soon approach $3bn and is growing at 40% a year, largely driven by the promise of portability, micro-services-based cloud-native apps, and a growing focus on DevOps. Container technologies are now available on every major public and private cloud platform. So why are OpenStack users adopting containers 3x faster than the rest of the enterprise market?
In this session, we’ll explore why containers are such a hot topic and why OpenStack is the ideal platform for developing, orchestrating and running containers for CI/CD and DevOps.
City Network AB, together with OP5 AB, will share some insights into the daily challenges of planning, deploying, upgrading, and servicing a geographically distributed, multi-data-center OpenStack production environment. We'll go over a few specific problems we've had, how we responded and what we learned, the post-mortems we conducted, and what we could've done better. We'll also talk about the tools we use to support, monitor, and troubleshoot our environment, and what we'd like to see.
We’ll also talk about where we see our environment going and about the challenges of monitoring, visualising, and reacting to streaming metrics at scale. We’ll touch on how OpenStack stacks up from a traditional sysadmin perspective.
This session will be a good blend of high-level information and some hard-fought lessons from the field, along with pointers to where we see production OpenStack deployments going in the future.
In many applications of open source software, components developed by independent communities are tested only within the context of their own communities, without integrating components from other communities. This results in very limited or nonexistent end-to-end testing and causes problems for the seamless interworking of these components when they are eventually integrated with each other.
OPNFV has created a comprehensive CI that was originally based on the consumption of stable artifacts. We will present a recent extension of the OPNFV CI, established through close collaboration between the OpenStack, OPNFV, and OpenDaylight communities: Cross Community CI (XCI).
XCI enables timely verification of ongoing upstream development work in a full OPNFV system context. The latest versions of the upstream components can be integrated and tested on bare metal, significantly cutting the time it takes to introduce new features, identify bugs, and issue fixes. Apart from deploying and testing from master, OPNFV XCI enables patchset verification, providing better visibility and significantly faster feedback on potential system-level issues to OPNFV itself and the upstream communities it works with, even before the patches are merged to master.
We will also share the experiences we gained while establishing XCI, the contributions we made to the upstream communities' infrastructures, and the tools we reused from those communities, such as bifrost, openstack-ansible, and ansible-opendaylight.
The XCI activity has already started changing how the communities work with each other when it comes to CI/CD, DevOps, and infrastructure, making things faster and bringing the communities even closer together. This talk looks at collaborative development from a different perspective in order to increase awareness, and it aims at a real CD and DevOps way of working in open source.

This talk is about removing hurdles for deploying OpenStack in 24/7, large-scale enterprise environments.
Our customers run OpenStack at very large scale, and sharing their and our experiences is of course part of the story: real-life stories and examples of integration with SDNs, and why analytics, HA, and real-time monitoring are crucial for their 24/7 network operations centers. And since we don't just talk the talk, we also walk the walk: we end with a demonstration of load balancing, DDoS mitigation, SSL analysis, and more in OpenStack.
What if the community or vendor provided cloud images do not fit your requirements?
Like when your cloud needs to offer:
- high performance drivers
- additional security hardening
- support for different hypervisors
- images with the latest available patches
- cloud-specific pre-configuration (e.g. repositories)
For our public cloud "Open Telekom Cloud" (OTC) we decided to build our own images, utilizing open-source tools like OpenBuildService, kiwi, DiskImageBuilder, and the OpenStack tooling to build public images optimized to run on OTC.
This includes fully automated building, testing, and publishing of the images. Following this approach, we are able to incorporate changes quickly, including security patches, hardening, and customization.
The image configurations are signed and published along with the resulting images.
This session presents how we build images and how you could build upon this work to set up your own factory, maybe even extending it to build container images as well.
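For illustration, one stage of such a factory might look like the sketch below, built around diskimage-builder's disk-image-create; the element list and output name are illustrative, and the OTC pipeline described above additionally tests and signs the results:

```python
# Build an image with diskimage-builder and record its checksum for
# publication. Element list and output name are illustrative only.
import hashlib
import subprocess

def build_image(elements, output="custom-image"):
    # disk-image-create writes <output>.qcow2 by default.
    subprocess.run(["disk-image-create", "-o", output, *elements],
                   check=True)
    return f"{output}.qcow2"

def sha256sum(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

image = build_image(["ubuntu", "vm"])
print(image, sha256sum(image))
```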
As OpenStack embraces Kubernetes as the container platform of choice, it becomes imperative to connect VMs and containers seamlessly.
This talk outlines Project Calico’s pure-L3 network approach combined with declarative (Kubernetes) Network Policy to interconnect OpenStack with Kubernetes.
As the OpenStack community embraces Kubernetes as the container orchestration platform of choice, it becomes imperative to seamlessly connect applications across VMs and containers.
Early OpenStack Neutron plugin implementations approached network isolation by creating network overlays on top of convoluted paths through vswitches, bridges and L3-agents on compute nodes, along with coarse security groups. In contrast, Kubernetes and other container orchestrators provide rich network policy constructs while assuming IP connectivity between nodes and containers/pods, facilitating simple, scalable network design and isolation with network policy.
This talk explores a pure-L3 network approach combined with declarative (Kubernetes) Network Policy, espoused by Project Calico, to interconnect OpenStack with Kubernetes across three scenarios:
1. Kubernetes alongside OpenStack (with application instances in either)
2. Kubernetes running within OpenStack VMs
3. OpenStack as a (containerized) application orchestrated by Kubernetes
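To make the declarative construct concrete, here is a sketch that creates such a policy with the official kubernetes Python client; the labels and the "demo" namespace are hypothetical, and any NetworkPolicy-capable CNI, Calico included, enforces the same object:

```python
# Admit ingress to app=web pods only from app=api pods on TCP/80.
# Labels and the "demo" namespace are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="web-allow-api", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(
                    match_labels={"app": "api"}),
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=80)],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("demo", policy)
```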