There has never been a better time in human history to write scalable applications. We have access to so many technologies that are drop-in replacements for our familiar and beloved toolsets.
There is no need to write C bindings or hand-vectorize code anymore!
Your team built an amazing web application that runs an analysis in real time. Unfortunately, the app simply doesn't scale to larger datasets:
computations take longer than any sane user is willing to wait on a web page.
In this case, we will identify the offending computations and offload them to a parallel cluster using Spark or Dask. It is also possible that your codebase needs no parallelization at all, and can instead take advantage of existing APIs already optimized for speed in libraries like NumPy or pandas.
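As a minimal sketch of the second path, here is a pure-Python loop replaced by the equivalent NumPy expression. The function names and data are hypothetical; the point is that the vectorized version performs the same computation in a single C-level pass, which is often enough to avoid a cluster entirely:

```python
import numpy as np

def mean_ratio_loop(xs, ys):
    # Pure-Python loop: interpreter overhead on every element.
    return sum(x / y for x, y in zip(xs, ys)) / len(xs)

def mean_ratio_vectorized(xs, ys):
    # Same computation expressed with NumPy array operations.
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    return float(np.mean(xs / ys))
```

Both functions return the same result; on arrays with millions of elements, the vectorized version is typically orders of magnitude faster. Only when a computation cannot be expressed this way, or the data no longer fits on one machine, does offloading to Spark or Dask become the right move.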
Your application runs the same computations over and over again, slowing down its responsiveness as a whole.
I will work with you to add an in-memory data store to your infrastructure, and cache the results of your computations there.