Since it’s possible to run .NET Core apps on Linux, Docker immediately comes into play.
The typical approach is to execute dotnet run, which writes the application output to the console:
…which is nice for debugging, but in real scenarios you’ll probably redirect application output to CloudWatch Logs, Loggly, or ELK.
So what if you want to run the app without this console output?
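A minimal sketch of one way to do it (MyApp.dll is a hypothetical published assembly, not a name from this post): start the process detached and discard its console output.

```shell
# Start the app in the background, sending stdout and stderr to /dev/null
# (MyApp.dll is a placeholder for your published assembly)
nohup dotnet MyApp.dll > /dev/null 2>&1 &
APP_PID=$!
echo "started with PID $APP_PID"
```

In Docker the same effect comes from `docker run -d`: the container is detached, and the console output goes to the Docker log driver instead of your terminal.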
If you’re interested in a Tableau installation on AWS, you should have a look at the CloudFormation templates from Tableau.
The single-server installation suits a trial well, but it has a number of limitations, including being tied to the default VPC. What if you want to deploy it into a dedicated VPC, or you don’t have a default one?
It’s not a big deal. I’ve updated the template, and you can use it:
If you have an MS SQL Server in your environment and have to perform some actions against it (execute migrations, change data, etc.) during your CI/CD, it might be quite inconvenient to use a Windows machine.
Fortunately, sqlcmd is available for Linux, and Microsoft provides instructions for popular Linux distributions: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools
But what if you run AWS Linux? If you follow the instructions from Microsoft, you will fail, and there’s not much information on this topic across the Internet. The only useful link is here.
Since my environment is highly automated, I decided to create a simple script that installs sqlcmd on AWS Linux and share it with you:
When you run CI/CD jobs, you might want to mark some builds with the name of the current (active) Jira sprint.
We have a dozen components in the project, each with a dedicated Jira project, and the sprint names look like “Backend sprint 12”, so you probably don’t want to add useless information to the build and need only the number to identify it.
Jira has a nice REST API, so you can get what you want in a very simple way:
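A minimal sketch of that approach, assuming the Jira Agile REST API at a placeholder URL, a placeholder board id 42, placeholder user:api_token credentials, and jq available for JSON parsing:

```shell
# Turn a sprint name like "Backend sprint 12" into just "12"
sprint_number() { echo "$1" | grep -oE '[0-9]+$'; }

# Ask Jira for the board's active sprint
# (URL, board id, and credentials are placeholders)
SPRINT_NAME=$(curl -s -u user:api_token \
  "https://jira.example.com/rest/agile/1.0/board/42/sprint?state=active" \
  | jq -r '.values[0].name')

echo "Build number suffix: $(sprint_number "$SPRINT_NAME")"
```

The trailing-number extraction is the part that strips the “Backend sprint” prefix, so only the sprint number ends up in the build name.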
Although Xcode Server is almost perfect for building iOS apps, Jenkins is still more popular. If your application consists of several parts, such as a database, backend, frontend, and Android and iOS apps, you typically want the same CI/CD for all components.
My Jenkins master runs in the AWS cloud together with a dozen Linux and Windows slaves. However, an iOS app can only be built on macOS, so you have to use an Apple computer, typically a Mac mini located in the office.
In this post we’ll set up a secure and reliable connection between the Mac in the office and the Jenkins master in the AWS cloud.
Both Jenkins and GitHub are very popular, so integrating them shouldn’t be a problem. It still might be a bit confusing if you’re doing it for the first time. That’s why I decided to spend a few minutes showing you how it can be done.
A Jenkins master can be accessed through a URL different from the one specified in the Jenkins configuration.
Why might we need this? Well, you probably want your Jenkins server to be publicly accessible (this is required for GitHub integration, by the way), and since it’s public, you typically want an encrypted HTTPS connection.
You could install an nginx proxy to achieve this, but then you’d have to maintain the SSL certificates yourself, which is a pain, especially when you could use AWS Certificate Manager with an AWS ELB instead.
Another reason to use a different URL is to save time: when you connect Windows slaves via JNLP, there are well-known issues with both nginx and load balancers.
And the last but not least reason: the “LAN” connection between the Jenkins master and its slaves is both more secure and faster, so it’s preferable in most cases.
So let’s start implementing the Jenkins and GitHub integration under these conditions!
We use OpenVPN to connect to some resources in AWS private subnets.
It works as expected on macOS, iOS, and even on old Windows 7, but not on Windows 10.
On Windows 10, when you’re connected to a VPN (no matter whether it’s OpenVPN, L2TP, or even PPTP), you will still get responses from the DNS server configured on your Ethernet (or Wi-Fi) adapter.
The block-outside-dns option for OpenVPN didn’t fix this problem, so I decided to provide a solution for all VPN types. Here it is:
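For context: since Windows 8, “smart multi-homed name resolution” sends DNS queries to the DNS servers of all adapters in parallel, which is why the physical adapter keeps answering even while the VPN is up. One commonly cited mitigation, sketched here as a .reg fragment (my assumption about the root cause, not necessarily a complete fix for every VPN type), is disabling that behavior via the policy key:

```
Windows Registry Editor Version 5.00

; Disable "smart multi-homed name resolution" so DNS queries are not
; sent to every adapter's DNS server in parallel
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient]
"DisableSmartNameResolution"=dword:00000001
```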
In this post we’ll consider a rather common situation: you’ve got an application with a dedicated configuration file and at least two environments, dev and prod.
The application in the dev environment uses the dev database, and the prod application uses the prod database.
The database connection strings are stored in the app.config file, so you have to keep different app.config files in the dev and prod Git branches.
The most common suggestion for managing this challenge is to use a Git attributes merge policy, so let’s see how to deal with it.
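A sketch of that recipe in a throwaway repo (the dev/prod branch names and the db= file contents are just examples): the custom “ours” merge driver keeps the current branch’s app.config whenever both sides have changed it.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you
git config merge.ours.driver true              # the "ours" driver: keep our version as-is
printf 'app.config merge=ours\n' > .gitattributes
printf 'db=base\n' > app.config
git add -A && git commit -qm 'initial config'
git branch -M dev                              # make the branch name predictable
git checkout -qb prod
printf 'db=prod\n' > app.config
git commit -qam 'prod connection string'
git checkout -q dev
printf 'db=dev\n' > app.config
printf 'some change\n' > feature.txt
git add -A && git commit -qm 'dev work'
git checkout -q prod
git merge -q --no-edit dev                     # feature.txt merges in; app.config stays prod's
cat app.config                                 # -> db=prod
```

In a real repo you only need the one-line .gitattributes entry plus the one-time `git config merge.ours.driver true` on each clone; the driver fires only when the file changed on both sides of the merge.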
In this post I’ll show how a Git repo can be completely transferred from Bitbucket to Visual Studio Team Services.
However, the instructions are relevant for a Git repo transfer between any origins, such as GitHub, Stash, GitLab, etc.
Before the migration starts, it’s important to commit and push all changes from local repositories to the current origin (which happens to be Bitbucket in our example).
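The transfer itself boils down to a mirror clone followed by a mirror push. Here is a sketch against local bare repos standing in for the Bitbucket and VSTS remotes (in practice you would use the two remotes’ clone URLs instead of the local paths):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q --bare old-origin.git              # stands in for the Bitbucket remote
git init -q --bare new-origin.git              # stands in for the VSTS remote

# Seed the old origin with a commit
git clone -q old-origin.git seed && cd seed
git config user.email you@example.com && git config user.name you
echo 'hello' > README.md
git add README.md && git commit -qm 'initial commit'
git push -q origin HEAD:refs/heads/main
cd ..

# The actual transfer: a mirror clone grabs all branches and tags,
# and a mirror push replays every ref to the new origin
git clone -q --mirror old-origin.git transfer.git
cd transfer.git
git push -q --mirror ../new-origin.git

git ls-remote ../new-origin.git                # all refs now exist on the new origin
```

With real remotes the sequence is just `git clone --mirror` from the old origin’s URL, then `git push --mirror` to the new origin’s URL from inside the resulting `*.git` directory.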
If you want a really fast disk and have enough RAM (for example, I have 32 GB), you can create a so-called “RAM disk”:
diskutil erasevolume HFS+ RAMDisk1 $(hdiutil attach -nomount ram://41943040)
This will create a 20 GB drive: ram:// takes the size in 512-byte sectors, and 20 GB = 20480 MB × 2048 sectors per MB = 41943040 sectors.
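The sector arithmetic can be double-checked in the shell (each megabyte is 2048 sectors of 512 bytes):

```shell
# 20 GB = 20480 MB; each MB is 2048 sectors of 512 bytes
SIZE_GB=20
SECTORS=$((SIZE_GB * 1024 * 2048))
echo "$SECTORS"   # -> 41943040
```

So for a 10 GB RAM disk, for instance, you would pass ram://20971520 instead.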
When you finish working, just unmount the drive in Finder or Terminal ;)