Everything as Code!

Why “Everything as Code”?

Simplification, integration and automation deliver far more benefit than siloed technologies, manually managed silos, and unautomated processes for design, deployment and day-two operations. The recent success of DevOps adoption makes these benefits evident, and they now span multiple domains of the entire SDLC. The key advantages are:

  • Stay safe at speed and scale: CI/CD pipelines, cloud and container auto-scaling, and on-demand infrastructure resources make it far easier to deploy quickly and at scale without compromising the security footprint of the workloads or the consistency and effectiveness of the deployments.
  • Pre-packaged, best-practice modules of secure automation across all layers of the stack are a bonus of deploying everything as code: the configuration, conformance and credibility of deployments improve remarkably fast and integrate seamlessly, thanks to standardised processes, technologies and their dependencies.
  • Reduce the cost of managing: this is of paramount importance, since resource utilisation is highly optimised, components are reusable, and scalability across every layer of the stack drives cost reduction with less management overhead.
  • Transparency and collaborative trust: adopting an everything-as-code approach tends to shift organisations towards a more collaborative culture, much like a digital transformation. Bringing configurations and processes out of application dashboards and exposing them as shared, versioned code increases transparency and trust at every layer and for every stakeholder involved.

Simplifying the tenets..

To further our understanding of “Everything as Code”, we must look at the entire software development life cycle / value chain and its associated tenets..

  • The Development frameworks and toolsets deployed in an everything-as-code approach should be controlled and configured programmatically via APIs or SDKs, so that the process can be codified into a shared, normalised representation of the workflow and on-demand resource enablement. Any logic implemented as code should be validated with tools that run syntax and inter-dependency checks, and every change should be captured and versioned, so that when an error is identified downstream in the delivery process, teams can immediately roll back to the last known-good version, diagnose the error and correct it quickly and in an agile manner.
  • Application Environments & Infrastructure: codifying environments automates many manual steps, saves significant time, reduces human error and provides a clear “state” of how the software interacts with the infrastructure and its configuration. Common deployments today use Chef, Puppet, Ansible and similar tools that let multiple teams orchestrate systems and environment variables/configurations quickly and at large scale, alongside software-defined networks for programmatically initialising, controlling, changing and managing network behaviour dynamically via open interfaces and APIs. All layers of compute, memory, storage and networking are pre-integrated, pre-configured, mobilised dynamically in a seamless and scalable way, and stored as boilerplate code.
  • Build & Test Automation: code generation, compilation and packaging. Development and operations teams model software delivery pipelines in code to enable continuous delivery and speed up release cycles. Version control and test automation, for functional tests (including unit tests, acceptance tests and tests created for behaviour-driven development) as well as non-functional tests (including performance, regression and load tests), are defined and managed programmatically via a wide variety of tools and scripting languages. A key advantage is an automated dashboard that provides detailed reports and insights on operations and their interactions, such as the number of tests, the duration and success or failure of each, setup changes to the environments, and a concise record of deployments and dependencies across the entire system.
  • The CI/CD pipelines & configuration management tools are already well addressed by DevOps best practices; a proper implementation strategy for continuous integration and delivery makes software delivery more predictable by catching and correcting issues with agility. A successful CI/CD pipeline streamlines code creation, versioning, testing, deployment and post-deployment activities without human intervention. Toolsets such as Jenkins, Bamboo, Octopus Deploy, GitLab, Travis CI and UrbanCode make migrations, deployments and configuration changes simple and easy to replicate. Tight integration with the underlying infrastructure, reacting to resource demand with provisioning, orchestration, load balancing and scaling up or down, adds tremendous value to agility and performance. Proven configuration tools such as Chef, Puppet, Ansible and Salt simplify the tasks of complex orchestration and configuration management via their recipes, modules or playbooks, which enforce the codification.
  • The Security & Compliance aspects of distributed systems are a real challenge unless addressed systematically, with automated enforcement of policies and entitlements so that each microservice or container receives its credentials and secrets while multi-tenant workloads remain isolated behind access profiles and controls. Security as code covers comprehensive governance, from privilege management through to authentication and authorisation of privileged users, while monitoring and managing critical security for this dynamic ecosystem. Accounts and the privileges assigned to resources, spanning all software components, users, groups, roles and entitlements, should be carefully crafted so that “least privilege” is applied across all layers of the workload, for example via scoped tokens and keys issued to workloads rather than embedded in code. Proactively, best practice adds SAST/DAST, threat modelling and hunting, resource hardening, scanning of code and deployment pipelines, and compliance policies, audits and checks wired inline to all of the above, with automated monitoring and controls (Tripwire, OSSEC, UpGuard and the like) to remediate any anomalies or conformance issues observed.
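
The first tenet above, validating codified logic before it moves downstream, can be sketched in a few lines. This is a minimal illustration, not a real tool: the workflow schema (a JSON object with `name` and `stages` keys) and the function name are invented for the example.

```python
import json

REQUIRED_KEYS = {"name", "stages"}  # hypothetical schema for a workflow definition


def validate_workflow(source: str) -> list[str]:
    """Return a list of validation errors; an empty list means the definition is sound."""
    try:
        doc = json.loads(source)  # syntax check
    except json.JSONDecodeError as exc:
        return [f"syntax error: {exc.msg} (line {exc.lineno})"]
    if not isinstance(doc, dict):
        return ["definition must be a JSON object"]
    # shape check: every required key must be present
    return [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - doc.keys())]


# A well-formed definition passes; a truncated one is caught before deployment.
ok = validate_workflow('{"name": "build", "stages": ["compile", "test"]}')
bad = validate_workflow('{"name": "build"')
```

Because every change to the definition is plain text, it versions cleanly in source control, which is what makes the roll-back-to-last-known-good step in the tenet practical.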
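
The infrastructure tenet rests on declaring a desired “state” in code and letting a tool converge the system towards it, which is the model Chef, Puppet and Ansible share. The sketch below simulates that reconciliation loop in plain Python; the package names and versions are invented, and a real tool would talk to actual hosts rather than a dictionary.

```python
# Desired state, declared in code as the single source of truth.
desired = {"nginx": "1.25", "redis": "7.2"}


def reconcile(current: dict, desired: dict) -> list[str]:
    """Compute and apply the actions needed to move `current` to `desired`."""
    actions = []
    for pkg, version in desired.items():
        if current.get(pkg) != version:
            actions.append(f"install {pkg}=={version}")
            current[pkg] = version  # simulate the apply step
    for pkg in set(current) - set(desired):
        actions.append(f"remove {pkg}")  # anything not declared is drift
        del current[pkg]
    return actions


system = {"nginx": "1.24"}           # the "live" environment, out of date
first = reconcile(system, desired)   # converges the system
second = reconcile(system, desired)  # already converged: no actions
```

The second run returning no actions is the key property: applying the same code repeatedly is idempotent, so the codified state can be re-applied safely at any scale.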
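
The build-and-test and CI/CD tenets both describe a pipeline modelled as code: stages defined in order, executed without human intervention, with pass/fail captured for the dashboard. A minimal sketch, with stage names invented for illustration and plain functions standing in for real build steps:

```python
def compile_stage():
    return True  # stand-in for a real compile/package step

def unit_tests():
    return True

def package():
    return True

def broken_tests():
    return False  # simulated failing stage


def run(pipeline) -> dict:
    """Execute stages in order, recording pass/fail; stop at the first failure."""
    report = {}
    for stage in pipeline:
        ok = stage()
        report[stage.__name__] = ok
        if not ok:
            break  # fail fast, as a CI server would
    return report


green = run([compile_stage, unit_tests, package])
red = run([compile_stage, broken_tests, package])  # `package` never runs
```

The returned report is exactly the raw material the tenet's dashboard aggregates: which stages ran, which passed, and where the pipeline stopped.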
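
Finally, the security tenet's “least privilege” idea becomes testable once entitlements are declared as data rather than configured by hand. The sketch below is illustrative only: the role and resource names are invented, and a real deployment would back this with a policy engine or IAM service.

```python
# Entitlements declared as code: each role gets only the actions it needs.
ENTITLEMENTS = {
    "ci-runner": {"artifact-store:read", "artifact-store:write"},
    "dashboard": {"artifact-store:read"},  # least privilege: read only
}


def is_allowed(role: str, action: str) -> bool:
    """Deny by default; allow only what the codified policy explicitly grants."""
    return action in ENTITLEMENTS.get(role, set())
```

Because the policy lives in code, it is versioned and reviewable like everything else, and the deny-by-default check can itself be exercised by the same automated tests as the rest of the pipeline.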

Way forward..

The velocity of change in frameworks, platforms, resource libraries, user communities and techniques is driven by the large-scale adoption of distributed systems and databases: the complexity of erstwhile problems has moved from the state of a single machine to the state of multiple heterogeneous services and the orchestration of the relationships between them.
