I'm working on Azure Databricks. Currently my PySpark project lives on DBFS. I configured a spark-submit job to execute my PySpark code (a .py file). However, according to the Databricks documentation, spark-submit jobs can only run on new automated (job) clusters, which is probably by design. A rough sketch of my job configuration is shown below.
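For context, the job's spark-submit task is set up roughly like this (the path, cluster size, and runtime version below are placeholders, not my real values):

```json
{
  "new_cluster": {
    "spark_version": "7.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2
  },
  "spark_submit_task": {
    "parameters": ["dbfs:/path/to/my_job.py"]
  }
}
```

Note that the task only accepts a `new_cluster` block; I don't see an option to point it at an `existing_cluster_id` the way notebook tasks allow.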
Is there a way to run my PySpark code on an existing interactive cluster?
I also tried running the spark-submit command from a notebook in a %sh cell, to no avail.
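The cell looked roughly like this (the path is just a placeholder for my actual .py file on DBFS):

```
%sh
# placeholder path; points at my PySpark script stored on DBFS
spark-submit /dbfs/path/to/my_job.py
```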
