Do you need to build out infrastructure on AWS for your Bioinformatics Analyses?
Do you have a WGS, exome, single-cell, or other dataset that needs to be analyzed on the cloud?
Do you need to build a production-ready system that can handle large-scale computations, programmatically execute workflows based on data input, incorporate data visualization, and generate reports?
Do you have manual processes that you're sick of?
Are you using services with no logging or reporting, so that when things go wrong you have no idea why?
Imagine knowing which compute option is right for your analysis without having to wade through all the options.
Instead of spending countless hours researching different compute options, have a ready-to-execute plan for rolling out and scaling your compute infrastructure.
What you’ll learn:
- How to deploy applications such as RShiny, Dash, and Apache Airflow on AWS.
- How to incorporate elastic computing options into your data visualization applications for real-time visualization of large datasets in the browser.
- How to get started with High Performance Computing on AWS.
- When to use traditional HPC methods such as SLURM and when to break out the newer, shinier services such as AWS Batch.
- How to combine multiple AWS services and data science applications to build hybrid systems that can wait for data to appear, execute decision trees based on various metrics, submit jobs to various compute services, and more.
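The last bullet can be sketched in a few lines of plain Python: poll until an input file appears, then route the job to a compute target based on a simple metric. This is a toy sketch, not a production implementation; the threshold, target names, and function names are all illustrative assumptions, and a real system would hand the "aws-batch" branch off to something like boto3's Batch client.

```python
import time
from pathlib import Path

# Hypothetical threshold -- tune for your own pipeline.
BATCH_THRESHOLD_BYTES = 5 * 1024**3  # inputs this large go to AWS Batch

def wait_for_data(path: Path, timeout_s: float = 60.0, poll_s: float = 1.0) -> bool:
    """Poll until an input file appears, or give up after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if path.exists():
            return True
        time.sleep(poll_s)
    return False

def choose_compute(input_size_bytes: int) -> str:
    """Toy decision rule: small inputs run locally, large ones go to AWS Batch."""
    return "aws-batch" if input_size_bytes >= BATCH_THRESHOLD_BYTES else "local"
```

In practice the same wait/decide/submit shape is what an Airflow sensor plus a branching operator gives you, which is one reason Airflow shows up in the stack above.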
Deploy all the Things (on AWS)!
Sign up below to get instant access to a PDF download with an in-depth explanation of the above flowchart, along with lists of AWS services, so you can get started immediately. In addition, you’ll receive a short series of email messages explaining how and why each AWS service was chosen.