With so many applications and systems to keep an eye on nowadays, IT staff often spend too much time performing repeated manual tasks. This is valuable time that is then missing for communicating with stakeholders, key users and operational departments, or for solving other problems.
It’s often easy to say: “I’ll just do this manually because it only takes me half an hour a week.”
In the end, however, it always takes more than the assumed time, as manual actions are error-prone and require additional quality assurance.
On top of that, what if you go on vacation? Who will take care of these actions in the meantime?
What if the system has an error and your colleagues don’t feel comfortable working on it? What if your colleagues have a high workload themselves? And don’t forget that, in order to hand these actions over to them, you need detailed documentation.
Agreed, automating processes also takes time, but it is, at the end of the day, the step forward, as automation keeps improving.
Regarding time and effort, considering the points mentioned above, appropriate automation is, in the long run, less time-consuming.
Starting with manual actions is, of course, good for a quick start and for gathering experience with the specific process – in particular, which exceptions can occur. From there on, however, automation covering all the experience gained during manual execution should be considered, for the advantages mentioned above.
Let’s have a look at the following automation examples.
Automated Deployments
With all the main and satellite systems needed to provide all services for an application nowadays, the installation effort is higher than expected. Following the main installation path is quick, but all the detailed work around it requires extra effort: configuring the local firewall, optimizing performance-related settings for the web server, ensuring separate cache directories, and so on. Not only does all of this have to be executed, it also has to be documented.
Nowadays, there are good concepts like containers which make the results of deployment scripts predictable, reliable and reusable.
Container technology enforces standards, dramatically reducing installation documentation as a result. Automating deployments greatly increases agility in deploying software, a highly sought-after outcome, especially in pharmaceutical companies.
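As a minimal sketch of the idea, the "detail work" around the main installation path can be captured as ordered, self-documenting steps in a script. All step names and messages below are hypothetical examples, not tied to any specific product:

```python
# Hypothetical deployment steps: each function records what it did, so the
# resulting state doubles as installation documentation.

def configure_firewall(state):
    state["firewall"] = "ports 80/443 opened"

def tune_web_server(state):
    state["web_server"] = "performance-related settings applied"

def prepare_cache_dirs(state):
    state["cache"] = "separate cache directories created"

# The ordered list of steps IS the deployment procedure.
DEPLOYMENT_STEPS = [configure_firewall, tune_web_server, prepare_cache_dirs]

def deploy():
    """Run every step in order and return the recorded state."""
    state = {}
    for step in DEPLOYMENT_STEPS:
        step(state)
        print(f"done: {step.__name__}")
    return state
```

Running the same script on every environment makes the result predictable and reusable, which is exactly what container images enforce at a larger scale.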
Automated Upgrades
From automated deployments, why not take a step further and automate the upgrade process as well? The new software is deployed as described in the previous chapter. Your data/repository then has to be upgraded due to changed schemas or data structures. This can also be done in an automated way, since most software vendors offer a silent/unattended/headless upgrade process in addition to their classic graphical upgrade utilities. This upgrade can also be performed by containers and thus integrates seamlessly into your container architecture.
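The core of such a headless upgrade is usually simple: compare the stored schema version with the target version and apply the missing migration steps in order. A minimal sketch, with purely illustrative version numbers and step names:

```python
# Hypothetical migration catalog: version -> description of the schema change.
MIGRATIONS = {
    2: "add index on document table",
    3: "rename column owner -> owner_id",
    4: "introduce audit trail table",
}

def upgrade_schema(current_version, target_version):
    """Return the list of migrations applied, mimicking a silent upgrade run."""
    if current_version >= target_version:
        return []  # repository is already up to date, nothing to do
    applied = []
    for version in range(current_version + 1, target_version + 1):
        applied.append((version, MIGRATIONS[version]))
    return applied
```

Because the function is idempotent for an up-to-date repository, it can safely run on every container start.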
Automated Unit Tests
Automated unit tests are no new discovery; they have been around for many years. But with build pipelines, the concept of automated unit tests is having a comeback and gaining significance in the context of fully automated processes.
I like the idea of “code a little, test a little”. A developer has to test their code anyway, and usually does so with temporary test code written as a mini application (basically a short main method). Why not use this code directly for a persistent unit test, which can be run repeatedly whenever we need to make sure the code still works as expected?
Writing persistent unit tests sometimes seems to put pressure on the developer to create perfect test cases. Instead, you can apply the same level of detail you would use for your temporary test code, but keep it as persistent test code. Following this paradigm means you can always write new unit tests, e.g. for detected errors, to make sure they stay fixed. Over time, you will accumulate more and more test cases. It’s important to focus on relevant unit tests, though, as we can sometimes be tempted to test scenarios that never actually happen.
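To illustrate, here is a small sketch of turning throwaway checks into a persistent test, using Python’s standard unittest module. The helper function and its test cases are hypothetical examples:

```python
import unittest

# Hypothetical unit under test: the kind of helper a developer would poke at
# with a temporary main method.
def normalize_name(raw):
    """Trim surrounding whitespace and collapse internal runs of spaces."""
    return " ".join(raw.split())

# The same checks, kept as a persistent unit test instead of temporary code.
class NormalizeNameTest(unittest.TestCase):
    def test_trims_and_collapses(self):
        self.assertEqual(normalize_name("  John   Doe "), "John Doe")

    def test_regression_empty_input(self):
        # Added after a detected error, to make sure the fix stays fixed.
        self.assertEqual(normalize_name(""), "")
```

Such a test runs with `python -m unittest` locally or in the build pipeline, with no extra effort compared to the temporary code it replaces.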
Automated End-to-End Tests
With build pipelines, end-to-end tests are also becoming more popular. End-to-end test engines control the client component by filling out forms, invoking actions, etc., and check the outcome from the client component’s perspective. They are called end-to-end tests because actions are performed at one end (the client component), processed by the server component (the other end), and the results are then checked by the client component again.
There are several advantages to using end-to-end tests:
- The client component is used for performing actions. This ensures that the client component behaves as expected from a user’s perspective.
- Test cases can, where appropriate, be consolidated into a single test script or flow.
- Unit tests only cover the objectives of a single unit. If multiple units are woven together, e.g. by a higher-level framework, it is more efficient to test the combined units with an end-to-end test.
Unit tests and end-to-end tests work best together: use the former for detailed tests and the latter for more abstract ones.
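The client-to-server-and-back flow described above can be sketched with an in-memory fake standing in for a real test engine (such as a browser automation tool). All class and method names here are illustrative assumptions:

```python
class FakeServer:
    """The 'other end': receives and processes the submitted data."""
    def __init__(self):
        self.records = []
    def create_record(self, data):
        self.records.append(data)
        return {"status": "created", "count": len(self.records)}

class FakeClient:
    """Drives the client end: fill out a form, invoke the action, read the result."""
    def __init__(self, server):
        self.server = server
        self.form = {}
        self.last_response = None
    def fill_form(self, field, value):
        self.form[field] = value
    def submit(self):
        self.last_response = self.server.create_record(dict(self.form))

def test_create_record_end_to_end():
    client = FakeClient(FakeServer())
    client.fill_form("title", "Invoice 42")             # act at the client end
    client.submit()                                     # processed by the server end
    assert client.last_response["status"] == "created"  # checked at the client end again
```

A real end-to-end engine replaces the fakes with the actual client component, but the shape of the test stays the same.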
Automated Delivery Builds
Developers often focus on programming, but not on automated packaging of a delivery. Deliveries are often assembled manually, again and again, without any documentation of the assembly at all, as it seems too obvious and boring to the original developer. But what happens three months after the project has finished? Or when a new member joins the team? What about absence days, such as vacation or training? Such deliveries often look quite similar, but they are rarely identical. There are great build management tools on the market that help organize project assets and build a standardized delivery.
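Even without a full build management tool, the principle is easy to sketch: a manifest lists the project assets, and the same code produces an identical package every time. The file names and contents below are illustrative assumptions:

```python
import io
import zipfile

# Hypothetical manifest of project assets: path in the delivery -> content.
MANIFEST = {
    "app/main.py": b"print('hello')\n",
    "docs/README.txt": b"How to install...\n",
}

def build_delivery(manifest):
    """Assemble all manifest entries into an in-memory zip archive."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w") as archive:
        for path in sorted(manifest):  # sorted for reproducible packages
            archive.writestr(path, manifest[path])
    return buffer.getvalue()
```

Because the manifest is code, it is versioned alongside the project and documents the assembly by itself.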
Automated Build Pipeline
If you put all the points above together, the next obvious step is setting up a build pipeline for continuous integration and continuous delivery. In short, this means:
- Creating a delivery
- Testing this delivery
- If all tests pass, deploying the delivery to a target system
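The steps above can be sketched as a sequence of stages that stops at the first failure, so a failing delivery never reaches the target system. The stage bodies are placeholders; a real pipeline would call your build and test tools there:

```python
def run_pipeline(stages):
    """Run stages in order; return (succeeded stage names, overall success)."""
    done = []
    for name, stage in stages:
        if not stage():
            return done, False  # stop: do not deploy a failing delivery
        done.append(name)
    return done, True

# Illustrative stages; each returns True on success.
stages = [
    ("build", lambda: True),    # create the delivery
    ("test", lambda: True),     # run unit and end-to-end tests
    ("deploy", lambda: True),   # only reached if all tests pass
]
```

This fail-fast ordering is the essence of continuous integration and continuous delivery, regardless of which pipeline tool implements it.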
You can find many more automation examples out there. In this blog post, I focused on the most obvious tasks IT people try to solve manually, and tried to show that much of the work you have been doing manually for so long could be automated. Why? Because automation offers you the following distinct advantages:
- It’s less error-prone
- It’s repeatable and reproducible
- It follows standards
- It offers quicker time-to-market
So, let’s reduce your operational or testing effort in order to gain more value and time for other topics!
Do you want/need more information? Get in touch with us.
- Blog article: How can Linux containers save your day?
- Blog article: Why OpenText Documentum and Cloud Services are complementary?
- Blog article: Automated End-to-End Testing
- Datasheet: Distributed Environments With Containers
- Datasheet: Optimal Deployment And Efficient Operation Of Your OpenText Documentum Environment Through Containerization