Configurable Deployment

Amazon Web Services (AWS) offers an extremely configurable and scalable infrastructure. You can configure database deployments, Kubernetes clusters, and Dash, RShiny, or JavaScript applications, all at the click of a button.

Scientific software infrastructure gets very complex, very fast. Your application may include databases and Dask or Spark clusters.

RShiny or Dash Deployment on Kubernetes

Several frameworks, such as RShiny and Dash, offer enterprise hosting and deployment. Their solutions may be too expensive, or may not offer the level of flexibility you need.

If you need more customization, or just prefer to maintain control over your deployment, deploying your RShiny application to AWS on Kubernetes is an ideal solution.

This solution gives you complete control. You choose the power of the machines you deploy to and the number of instances to deploy for autoscaling and load balancing, and you can mix and match any other services you need.
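As a rough sketch of what that control looks like (the image name, ports, and replica counts below are illustrative assumptions, not a prescription), a Kubernetes Deployment paired with a HorizontalPodAutoscaler for an RShiny app might resemble:

```yaml
# Illustrative only: image name, resource sizes, and replica counts are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shiny-app
spec:
  replicas: 2                        # baseline instance count
  selector:
    matchLabels:
      app: shiny-app
  template:
    metadata:
      labels:
        app: shiny-app
    spec:
      containers:
        - name: shiny-app
          image: myregistry/shiny-app:latest   # hypothetical image
          ports:
            - containerPort: 3838              # Shiny's default port
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shiny-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shiny-app
  minReplicas: 2        # floor for load balancing
  maxReplicas: 10       # ceiling for autoscaling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% CPU
```

Every number here (machine sizing, replica floor and ceiling, scaling threshold) is a knob you own and can tune to your workload.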

What You Get

We will go over your requirements (databases, Spark/Dask clusters, filesystems, etc.) over email or a Zoom call to fit your needs.

Once the requirements are set, they will be coded into a Terraform recipe, along with scripts for building, deploying, and testing your application's ecosystem, which can be uploaded to any CI/CD platform of your choosing (CircleCI, AWS CodeCommit, Travis, etc.).
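To illustrate how such a recipe plugs into CI/CD (job names, image tags, and the branch filter below are placeholder assumptions), a minimal CircleCI-style pipeline that plans and applies a Terraform recipe could look like:

```yaml
# Hypothetical CircleCI config: image tag and branch name are assumptions.
version: 2.1
jobs:
  plan:
    docker:
      - image: hashicorp/terraform:light
    steps:
      - checkout
      - run: terraform init
      - run: terraform plan -out=tfplan   # record the proposed changes
      - persist_to_workspace:
          root: .
          paths: [tfplan]
  apply:
    docker:
      - image: hashicorp/terraform:light
    steps:
      - checkout
      - attach_workspace:
          at: .
      - run: terraform init
      - run: terraform apply -auto-approve tfplan   # apply the reviewed plan
workflows:
  deploy:
    jobs:
      - plan
      - apply:
          requires: [plan]
          filters:
            branches:
              only: main    # only apply from the main branch
```

Splitting plan and apply into separate jobs means every change is recorded before it touches your infrastructure.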

All the code and configuration you receive is 100% yours, with no intellectual-property restrictions. Each deployment comes with a one-month support contract for tweaking parameters and keeping up with API changes.

Prefer to DIY?

That's totally fine. ;-) I have several guides to get you started.

Next Steps

Fill out the form below to get in touch and let me know your preferred way forward.

Get Started

Free Initial Consult

I schedule projects on a first-come, first-served basis, with priority given to my current clients. Book now to make sure your application lives on!

Why Work With Me?

Over the course of my career, I have earned a strong reputation for outstanding genomics and bioinformatics DevOps. I am known for designing and integrating innovative, flexible infrastructures, using in-depth client and business consultation to uncover each program's critical, unique needs. Over the years I've watched datasets grow in size and complexity (who remembers microarrays?) and worked with researchers to build analysis infrastructure that keeps up with the ever-growing demand for number crunching.

I have consulted with the Bioinformatics Contract Research Organization (CRO) and BitBio to design and deploy a major labor-saving HPC cluster with an integrated SLURM scheduler and user and software stacks, along with elastic computational infrastructure for genomics analysis, empowering a greater focus on high-priority projects and activities.

I also designed and deployed complex data visualization applications on AWS, such as NASQAR. I am both a contributor and core team member of Bioconda, as well as a contributor to the Conda ecosystem and EasyBuild.
