IBM Power Systems and IBM i @ LinuxDay 2019

Estimated reading time: 2 mins

IBM Power Systems and IBM i @ LinuxDay / Carinthia / Austria

As I’m a graduate of the HTL-Villach, and the LinuxDay was co-organized by Mario Kleinsasser and Bernhard Rausch - both coworkers at STRABAG SE - I submitted a CFP: Node.js on Midrange Server?

The technical setup was the easy part. I had already set up Java code deployment to IBM i systems with GitLab CI/CD pipelines. Thanks to the efforts of Jesse Gorzinski, Kevin Adler, and the rest of the IBMiOSS team at IBM, the installation of Node.js was done in a minute. With YUM and RPM on IBM i PASE, this is no miracle at all.
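For readers who have never touched IBM i: a minimal sketch of such an installation from an SSH shell into PASE, assuming the IBM i open source RPM repository is already configured. The package name nodejs10 reflects what was available around 2019 and is an assumption here; check the repository for current versions.

# from an SSH shell on IBM i (PASE), with the open source RPM repo configured
yum install nodejs10   # assumed package name as of 2019; try `yum search nodejs`
node -v                # verify the installation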

I asked Roman, also a business colleague, to write a simple Node.js web page and an interacting 5250 screen. Thanks for presenting this part of our talk.

The idea for the outline was to show that open source software in a business context has to work together with legacy systems, and that this can be done with a popular open source framework. The second goal was to reach the students and show them that there is a big IT company within STRABAG SE as well.

So all was fine until I realized that this would be my first talk about IBM i in front of an audience that knows little to nothing about IBM Power Systems and/or the IBM i operating system.

It took me four after-work sessions to end up with the final outline:

Followed by Roman’s part:

I really loved doing the session and talking about the long and lasting story of IBM i, Power Systems, and the active open source initiative. It was nice to chat with some of the visitors afterwards, and I’m looking forward to LinuxDay 2020 (Carinthia / Austria) - be there!

P.S. Thanks to IT Power Services and IBM Österreich for organizing IBM-related giveaways for the event.

Posted on: Sun, 19 May 2019 20:10:59 +0200 by Markus Neuhold

  • IBM i
Markus Neuhold
IBM i (AS/400) SysAdmin since 1997, Linux fanboy, loving open source, Docker, and all things tech and science.

LinuxDay 2019

Estimated reading time: 2 mins

Yesterday was a great start for all future LinuxDays in Carinthia/Austria!

Franz Theisen from Red Hat Austria, together with Reiner Rabensteiner from HTL-Villach and Mario Kleinsasser from STRABAG SE, managed to reboot the Open Source & Linux community in our small country. They put in a lot of effort to turn the idea of a LinuxDay in Carinthia into reality: creating a website, calling for CFPs, organizing the location, putting together a program for the day, getting in contact with other companies, and trying to motivate the pupils of the HTL Villach.

There were many cool talks and workshops which you really missed if you weren’t there, and you also missed the opportunity to talk with nice and experienced people about cool projects like Docker, OpenStreetMap, Ansible, OpenShift, KiCad, Puppet, and more.

For me it was a great day to get back in contact with my old school and with cool and motivated people! I tried to capture the day in some photos and to help where help was needed, but I was a little sad that only 3 people showed up at my “Puppet Basics” workshop… But I’m still motivated for future LinuxDays and will definitely fill out a CFP for the next one!

I hope to see you there the next time! :)

Posted on: Sat, 18 May 2019 21:39:06 +0200 by Bernhard Rausch

Bernhard Rausch
CloudSolutionsArchitect/SysOps; loves to get things ordered the right way: "A tidy house, a tidy mind."; configuration management fetishist; loving backups; impressed by Docker; always up to get in contact with interesting people - do not hesitate to write a comment or to contact me!

Build a Docker Swarm on AWS with Ansible in 1 minute and 47 seconds

Estimated reading time: 8 mins

Is it possible to build a five-node (3 manager nodes, 2 worker nodes) Docker Swarm in under 2 minutes? Yes, it is! Some weeks ago, Henning Jacobs, who works at Zalando Technology, posted a tweet referencing an article he wrote called “Why Kubernetes?”. This article discusses another post, “Maybe You Don’t Need Kubernetes”, written by Matthias Endler, who works at Trivago. There are always pros and cons for every solution, but I missed Docker Swarm in his article. And there was another thing that triggered my brain: he wrote that “[…] creating a cluster on DigitalOcean takes less than 4 minutes and is reasonably cheap ($30/month for 3 small nodes with 2 GiB and 1 CPU each).” In addition, he wrote that at Zalando they “[…] run 100+ Kubernetes clusters […]”.

Therefore I asked myself how long it would take to set up a Docker Swarm cluster with 3 managers and 2 workers on AWS myself, and furthermore, whether it would be possible to start 101 (100+) Docker Swarm clusters too. Short answer: yes, it is 😎! But let’s start with the idea. And as a side note, “3 small nodes” are not a production setup for Kubernetes, whereas 3 manager nodes and 2 worker nodes are a production setup for Docker Swarm.

The plan

Every time I do some creative brainstorming, I take pen and paper to order my thoughts. You can have a look at the picture on the left to see what this means in this case 😁. After some thinking I was pretty sure that some things would need to be done with Ansible in parallel. Since we have been using Ansible at work for the last one and a half years, I already knew that I would have to use some tricks to get things up and running fast. Setting up compute resources takes the most time, as you have to specify your needs and, of course, wait until you can access the compute resource to install additional software like Docker on it. The simplest way to parallelize something under Linux is to use BASH forks. Obviously this is resource intensive, but more on this point later!

After some testing it was clear to me that I would use a script to run multiple Ansible Playbooks in parallel. I ended up with the following BASH script:

 1  #!/bin/bash
 2  export AWS_ACCESS_KEY=AK....
 3  export AWS_SECRET_KEY=ai....
 4
 5  # create 5 Docker nodes - 3 managers and two workers
 6  # This is the first manager where all other nodes will join
 7  ansible-playbook --extra-vars "swarmnodename=$1-mm" aws_ec2_create_docker_swarm_node.yml &
 8
 9  # Add two additional managers and tag them
10  ansible-playbook --extra-vars "swarmnodename=$1-mn1" aws_ec2_create_docker_swarm_node.yml &
11  ansible-playbook --extra-vars "swarmnodename=$1-mn2" aws_ec2_create_docker_swarm_node.yml &
12
13  # Add two worker nodes and tag them
14  ansible-playbook --extra-vars "swarmnodename=$1-wn1" aws_ec2_create_docker_swarm_node.yml &
15  ansible-playbook --extra-vars "swarmnodename=$1-wn2" aws_ec2_create_docker_swarm_node.yml &
16
17  wait
18
19  # Join them together - first get join tokens from main manager and store them
20  ansible-playbook --extra-vars "swarmnodename=$1-mm" aws_ec2_get_docker_swarm_join_token.yml
21
22  # Add additional managers
23  ansible-playbook --extra-vars "swarmnodename=$1-mn1" aws_ec2_join_swarm_as_manager.yml &
24  ansible-playbook --extra-vars "swarmnodename=$1-mn2" aws_ec2_join_swarm_as_manager.yml &
25
26  # Add additional workers
27  ansible-playbook --extra-vars "swarmnodename=$1-wn1" aws_ec2_join_swarm_as_worker.yml &
28  ansible-playbook --extra-vars "swarmnodename=$1-wn2" aws_ec2_join_swarm_as_worker.yml &
29
30  wait

This script uses the variable $1, which is provided by a wrapper script (to create n Docker Swarms) that is a simple counter loop. First of all, I need the mm node, the master-manager node. The master manager node is the node where the docker swarm init command is issued after the EC2 instances are created - see line 20. To make things easier, the script waits on line 17 for all forked runs to finish. The & at the end of the lines indicates that these lines run in parallel. In line 20 the join tokens for manager and worker joins are created, and from line 23 to line 28 they are used to register the nodes as manager or worker, again in parallel.
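The wrapper script itself is not shown in this post, so here is a minimal sketch of what such a counter loop could look like - the file name create_swarm.sh for the script above is an assumption:

#!/bin/bash
# Hypothetical wrapper: create n Docker Swarm clusters in parallel by
# calling the script above (assumed to be saved as create_swarm.sh)
# once per cluster number. Usage: ./create_swarms.sh <n>
for i in $(seq 1 "$1"); do
    ./create_swarm.sh "$i" &
done
wait

Called as ./create_swarms.sh 101, this forks one creation run per cluster - which is exactly the resource-hungry BASH fork mechanism discussed in the 100+ swarms section below.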

Some tricks

To get this up and running smoothly and fast, I had to use some tricks 🤩😁 - they might be useful out there!

Trick #1: “dynamic” inventory

The EC2 instances use dynamic IP addresses. Therefore the script and also the Ansible Playbooks cannot rely on static Ansible inventories! There are dynamic inventory scripts for Ansible and AWS (and many others) out there, and they are officially supported, but they are often not that fast. Thankfully, there is ec2_instance_facts for AWS to filter (find) instances which meet certain requirements. If instances are found, we add them to an in-memory Ansible inventory. Look at the Ansible Playbook below, lines 10-23.

Trick #2: Name your instances

The second trick is to tag the instances you create with names that are dynamic but predictable. We can create not only one Docker Swarm cluster with this Ansible Playbook, we can create hundreds if we like. Look at the Ansible Playbook below, line 14.

Trick #3: Save Ansible data (variables) to a local file

This is huge! You can save Ansible output to a local file and afterwards load the data from this file to use it in another playbook. Look at the Ansible Playbook below, lines 40-44. If you are clever, you can create very smart playbooks. In this case, I save the Docker join tokens to files that are named according to the Docker Swarm cluster that is currently being created. Therefore, you can use this information during the parallel creation of Docker Swarms!

 1  # get the main manager
 2  - hosts: localhost
 3    gather_facts: no
 4    connection: local
 5    vars:
 6      ssh_key_name: pwd-m4r10k
 7      region: eu-central-1
 8      ansible_user: ubuntu
 9    tasks:
10      - name: List instances
11        ec2_instance_facts:
12          region: "{{ region }}"
13          filters:
14            "tag:name": "{{ swarmnodename }}"
15            instance-state-name: running
16        register: ec2
17
18      - name: Add all instance public IPs to host group
19        add_host:
20          name: "{{ item.public_ip_address }}"
21          groups:
22            - ec2training
23        with_items: "{{ ec2.instances }}"
24
25  - hosts: ec2training
26    gather_facts: no
27    vars:
28      ansible_user: ubuntu
29    tasks:
30      - name: Get join command for manager
31        shell: docker swarm join-token manager | grep join
32        become: yes
33        register: joinmanager
34
35      - name: Get join command for worker
36        shell: docker swarm join-token worker | grep join
37        become: yes
38        register: joinworker
39
40      - name: Save manager join command local
41        local_action: copy content={{ joinmanager }} dest=/tmp/{{ swarmnodename }}-join-as-manager
42
43      - name: Save worker join command local
44        local_action: copy content={{ joinworker }} dest=/tmp/{{ swarmnodename }}-join-as-worker

Trick #4: Load Ansible data (variables) from a local file

It is easy (once you have found out how) to load Ansible data from local files saved previously. Look at the Ansible Playbook below, lines 30-38.

Trick #5: Use Ansible built-in functions

Ansible comes with a lot of handy functions. In this example, I use split to extract the number of the Docker Swarm this Ansible Playbook is running for, in order to load the correct Docker Swarm join command. See line 32 below.

 1  # get the main manager
 2  - hosts: localhost
 3    gather_facts: no
 4    connection: local
 5    vars:
 6      ssh_key_name: pwd-m4r10k
 7      region: eu-central-1
 8      ansible_user: ubuntu
 9    tasks:
10      - name: List instances
11        ec2_instance_facts:
12          region: "{{ region }}"
13          filters:
14            "tag:name": "{{ swarmnodename }}"
15            instance-state-name: running
16        register: ec2
17
18      - name: Add all instance public IPs to host group
19        add_host:
20          name: "{{ item.public_ip_address }}"
21          groups:
22            - ec2training
23        with_items: "{{ ec2.instances }}"
24
25  - hosts: ec2training
26    gather_facts: no
27    vars:
28      ansible_user: ubuntu
29    tasks:
30      - name: Load vars
31        include_vars:
32          file: /tmp/{{ swarmnodename.split("-")[0] }}-mm-join-as-manager
33          name: joinmanager
34        register: input
35
36      - name: Join as manager
37        shell: "{{ joinmanager.stdout }}"
38        become: yes

The video

Here is the video of the run, finishing in 1 minute and 47 seconds.

Create 100+ Docker Swarms

AWS raised the limit of EC2 nano instances from 28 (default) to 550 - the only thing you have to do for this is open a support ticket. The next problem is that the BASH fork mechanism really exhausts the resources of our Ansible host - and this is OK, as copying the Python processes is expensive. Just for a test, I put in 32 cores and 64 GB of memory (VMware). With this configuration I was able to start the creation of 101 Docker Swarms, but then I got locked out by the AWS API - “Too many requests” 😂 Maybe in the future I will try to create only 10 or 20 Docker Swarms at a time to stay below this limit, as sketched below.
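A minimal sketch of such a batched wrapper, again assuming the creation script from above is saved as create_swarm.sh:

#!/bin/bash
# Hypothetical batched wrapper: create 101 swarms, at most BATCH at a time,
# to ease the load on the Ansible host and stay below the AWS API limit.
BATCH=10
for i in $(seq 1 101); do
    ./create_swarm.sh "$i" &
    # after every BATCH forked creation runs, wait for them to finish
    if (( i % BATCH == 0 )); then
        wait
    fi
done
wait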

Conclusion

Ansible in combination with AWS and Docker Swarm is pretty awesome! It was a lot of fun to optimize the playbook runs to get all of this up and running in parallel. I will upload the playbooks to a GitLab repository in the next weeks. If you need them earlier, let me know!

Have fun!

Posted on: Tue, 23 Apr 2019 10:21:00 +0200 by Mario Kleinsasser

Mario Kleinsasser
Doing Linux since 2000 and containers since 2009. Like to hack new and interesting stuff. Containers, Python, DevOps, automation and so on. Interested in science and I like to read (if I find the time). Einstein said "Imagination is more important than knowledge. For knowledge is limited." - I say "The distance between faith and knowledge is infinite. (c) by me". Interesting contacts are always welcome - nice to meet you out there - if you like, do not hesitate and contact me!

DevOps Gathering 2019

Estimated reading time: 4 mins

For the second time I attended the DevOps Gathering as a speaker. This year I shared the stage with Alexander Ortner, a colleague and friend of mine, and we did our talk together. Bernhard Rausch was also with us, but we had to leave him behind during our travelling challenge. 😮

Our travelling challenge started on Tuesday, the 12th of March, at 5 am at our workplace. The plan was to travel to Salzburg airport by car to catch our flight to Düsseldorf. Normally this ride takes about 90 minutes, and as our flight was scheduled for 8:25 am, we usually would have had plenty of time in reserve. But this day was one of those days where hardly anything works as expected. The first thing that happened was that a truck crash stopped our ride to Salzburg airport. Unfortunately we were stuck on the motorway for more than two hours and therefore were not able to catch our flight.

As the original plan was that we would attend the DevOps Gathering 2019 as private persons, all chances to reach our talk on Wednesday seemed gone. But between the booking of the flight some months ago and the day this story happened, our employer, STRABAG BRVZ GmbH, had kindly agreed to support our travel. Therefore Alex called his boss, and we got the go-ahead to book flights for Alex and me (Mario) from Munich. MANY, MANY THANKS FOR THIS SUPPORT TO STRABAG BRVZ GMBH! 💗

But we had to leave Bernhard behind at Salzburg train station 😓. Nevertheless Alex and I (Mario) went on to Munich to catch the flight from there. After the ride to Munich we were able to check in in time, and after a rough flight with a nice side-wind landing we caught the train to Bochum without any problems. After thirteen hours we finally arrived at the DevOps Gathering location in Bochum (G-Data).

As we arrived, we received a really huge welcome from the other attendees! Special thanks to Xinity - you are always welcome, my friend! We talked a lot with the other attendees and were able to catch up on the latest information. After some hours we left the venue and went back to our hotel, where Alex and I updated our presentation with a special slide to honor Bernhard for everything he tried in order to be with us. It’s always about the people and friends - people matter!

The next day we started early to get everything up and running and to test our equipment at the conference location. Then it was stage time, and overall everything went smoothly! You can find the slides from our presentation on Speaker Deck | C4 - Continuous Culture Change Challenges! It’s different whether you do a talk alone or share the stage - both ways have their own challenges. After the talk we got a load of positive feedback, and we would like to say THANK YOU for all of it!

An hour or so later, I checked the trains from Düsseldorf to Bochum for our return travel, and that’s when I noticed that all trains between Düsseldorf and Bochum were cancelled for the whole day because of the storm (trees on the tracks). Niclas Mietz from bee42 (the DevOps Gathering organizer) was so kind to bring us to Essen, where we were able to catch our flight to Munich. After the ride back to our working location we arrived home happily.

The DevOps Gathering 2019 was a great conference for us, even if we were not able to stay for long. It was very, very nice to see how everyone tried to help us, and our short stay was very intense. Many, many thanks to all who supported us! 🤗

And here is the recording of our talk!

Here are some pictures from the conference!

Posted on: Mon, 18 Mar 2019 19:21:00 +0100 by Mario Kleinsasser , Alexander O. Ortner , Bernhard Rausch

Mario Kleinsasser
Doing Linux since 2000 and containers since 2009. Like to hack new and interesting stuff. Containers, Python, DevOps, automation and so on. Interested in science and I like to read (if I find the time). Einstein said "Imagination is more important than knowledge. For knowledge is limited." - I say "The distance between faith and knowledge is infinite. (c) by me". Interesting contacts are always welcome - nice to meet you out there - if you like, do not hesitate and contact me!
Alexander O.
Alexander O. Ortner is the team leader of a software development team within the IT department of the STRABAG BRVZ construction company. Before joining STRABAG SE he obtained a DI from the Department of Applied Informatics, Klagenfurt, in 2011 and another DI from the Department of Mathematics, Klagenfurt, in 2008. He has been a software engineer for more than 10 years and, besides the daily business, is mainly responsible for introducing new secure cloud-ready application architecture technologies. He furthermore contributes to introducing a fully automated DevOps environment for a highly diverse set of applications.
Bernhard Rausch
CloudSolutionsArchitect/SysOps; loves to get things ordered the right way: "A tidy house, a tidy mind."; configuration management fetishist; loving backups; impressed by Docker; always up to get in contact with interesting people - do not hesitate to write a comment or to contact me!

DEVCONF.cz 2019

Estimated reading time: 4 mins

This weekend, from the 25th until the 27th of January, DEVCONF.cz took place at the Faculty of Information Technology in Brno (Czech Republic), and I got the chance to attend. As an open-source-addicted, community-driven conference, which is mainly sponsored by Red Hat, there was no ticket charge, but a free ticket registration was required. Now, after the conference, I know why, but more on that later. Red Hat runs a large office in Brno (around 1200 employees) and most of them work in a technical area. Therefore there is an intense partnership between the Brno University of Technology and Red Hat.

I learned about this conference from colleagues who work at Red Hat Vienna. A while ago they told me that there is a large annual conference in Brno and asked if I would be interested in attending. I said yes, because the conference is free of charge and community driven, the schedule was very interesting, and my company (STRABAG) paid the hotel expenses - many thanks for that at this point! ❤️ 😃

My journey started on Friday morning at the company, and after a 6-hour drive I arrived at my hotel in Brno around 3 pm, safe and sound. I checked in and immediately went off to the conference venue by tram. What should I say - there were lots of people there! As written above, now I know why a free registration is needed and recommended before the conference. DEVCONF.cz used the system provided by Eventbrite, which worked perfectly!

The first track I listened to was Ansible Plugins by Abhijeet Kasurde, which was very informative because it showed that it is possible to easily extend Ansible, for example by plugging in filters. The second and last track on Friday was Convergence of Communities: OKD = f(Kubernetes++) by Daniel Izquierdo and Diane Mueller. This one was really interesting, as it gave a cool insight into how people contribute to various open source projects, based on GitHub repositories, commits, and comments.

After that, I went back to the hotel and met the colleagues from Red Hat, Franz Theisen and Armin Müellner, and after some chatting we went to the dinner, which was really delicious! During the dinner I had the chance to talk to other colleagues who were with us.

On Saturday I got the chance to visit the Red Hat office in Brno, and after a delicious coffee we went on to the conference.

I had a fully packed day with a lot of sessions, which are listed afterwards. The full schedule can be found here.

All of the tracks I visited were great! But I would like to highlight two of them. The Containers Meetup with Daniel Walsh was super interesting because of the discussion about cgroups v2, which might cause a lot of problems for container software. The problem herein is that the cgroups v2 interface of the Linux kernel is not compatible with the v1 version. This means that software which relies on libraries implementing cgroups v1, like Docker and others, will break if the new kernel interface is enabled. In the meetup it was discussed whether the upcoming Fedora version should go this way. Well, we will see what’s coming up…

The Insiders info from the Masters of Clouds is the second one I would like to mention, because it gave lots of insight into how Red Hat manages their infrastructure. For me it was mega cool to see that Red Hat also uses Zabbix heavily for system monitoring, like we do on-premises too!

On Saturday evening we had a very nice dinner and the opportunity to continue our chats from Friday. On Sunday I went back to Austria early, as I had to drive the 6 hours back. 🚗😃

In summary, I am very happy that I got the chance to attend this conference, and I will try to attend next year too! I met a lot of cool people, like Akihiro Suda, the Docker Community Leader of Tokyo, which I am really proud of. DEVCONF.cz, I will come back!

Posted on: Sun, 27 Jan 2019 16:32:12 +0100 by Mario Kleinsasser

  • DevOps
  • Conference
Mario Kleinsasser
Doing Linux since 2000 and containers since 2009. Like to hack new and interesting stuff. Containers, Python, DevOps, automation and so on. Interested in science and I like to read (if I find the time). Einstein said "Imagination is more important than knowledge. For knowledge is limited." - I say "The distance between faith and knowledge is infinite. (c) by me". Interesting contacts are always welcome - nice to meet you out there - if you like, do not hesitate and contact me!