# Job Execution Overview
Execute individual tasks across different environments with flexible runtime configurations.
## Jobs vs Pipelines
**Jobs**: Execute single tasks in isolation

```python
from runnable import PythonJob

from examples.common.functions import hello


def main():
    job = PythonJob(function=hello)
    job.execute()  # Single task execution
    return job


if __name__ == "__main__":
    main()
```
**Pipelines**: Orchestrate multiple connected tasks

```python
from runnable import Pipeline, PythonTask

from examples.common.functions import hello


def main():
    pipeline = Pipeline(steps=[
        PythonTask(function=hello, name="task1"),
        PythonTask(function=hello, name="task2"),
    ])
    pipeline.execute()  # Multi-task workflow
    return pipeline


if __name__ == "__main__":
    main()
```
## Available Job Executors
| Executor | Use Case | Environment | Execution Model |
|---|---|---|---|
| Local | Development | Local machine | Direct execution |
| Local Container | Isolated development | Docker containers | Containerized execution |
| Kubernetes | Production | Kubernetes cluster | Distributed execution |
## Configuration Pattern
All job executors use this configuration pattern:
**Recommended usage (via environment variable):**

```shell
# Keep configuration separate from code
export RUNNABLE_CONFIGURATION_FILE=config.yaml
uv run my_job.py

# Or inline for different environments
RUNNABLE_CONFIGURATION_FILE=production.yaml uv run my_job.py
```
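The referenced `config.yaml` selects which executor runs the job. The sketch below is a hypothetical example: the key names (`job-executor`, `type`, `config`) and the image value are assumptions, so check the configuration files in `examples/11-jobs/` for the exact schema.

```yaml
# Hypothetical sketch -- key names and values are assumptions;
# verify against the YAML files shipped in examples/11-jobs/.
job-executor:
  type: local-container
  config:
    docker_image: python:3.11-slim
```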
**Alternative (inline in code):**
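One way to keep the configuration choice inside the script is to set the same environment variable programmatically before executing the job. This is a sketch, assuming `RUNNABLE_CONFIGURATION_FILE` is read at execute time exactly as in the shell examples above:

```python
import os

# Point runnable at the desired configuration before the job runs.
# RUNNABLE_CONFIGURATION_FILE is the same variable used in the shell
# examples above; setting it here keeps the choice in code.
os.environ["RUNNABLE_CONFIGURATION_FILE"] = "config.yaml"

# from runnable import PythonJob  # then build and execute the job as usual
```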
**Examples Directory**: Complete working examples are available in `examples/11-jobs/`. Each example includes both Python code and YAML configuration files you can run immediately.
## Custom Job Executors
Need to run jobs on your unique infrastructure? Runnable's plugin architecture makes it simple to build custom job executors for any compute platform.
**No Vendor Lock-in**: Your infrastructure, your way. Execute jobs on AWS Batch, Azure Container Apps, HPC clusters, or any custom compute platform:
- 🔌 Cloud batch services: AWS Batch, Azure Container Apps, Google Cloud Run Jobs
- 🏢 HPC integration: Slurm, PBS, custom job schedulers
- 🎯 Specialized hardware: GPUs, TPUs, edge devices
- 🔐 Enterprise platforms: Custom orchestrators, proprietary compute services
### Building Custom Job Executors
Learn how to create production-ready custom job executors with our comprehensive development guide:
📖 Custom Job Executors Development Guide
The guide covers:
- AWS Batch integration example showing key integration patterns
- Job submission & monitoring workflow that works with any compute platform
- Plugin registration and configuration for seamless integration with Runnable
- Testing and debugging strategies for custom executors
**Quick Example**

Create a custom executor in just three steps:

1. **Implement the interface** by extending `GenericJobExecutor`
2. **Register via entry point** in your `pyproject.toml`
3. **Configure via YAML** for your users

Ready to build? See the full development guide for implementation patterns and examples.
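Step 2, registering through an entry point, uses standard Python packaging. The fragment below is only a sketch: the entry-point group name (`job_executor`), the plugin name, and the class path are all assumptions, so confirm them against the development guide before publishing.

```toml
# Hypothetical registration sketch -- the group name "job_executor",
# the plugin key, and the class path are assumptions, not confirmed API.
[project.entry-points."job_executor"]
my-batch = "my_package.executor:MyBatchJobExecutor"
```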
## Choosing the Right Executor
### Development & Testing
- Local: Quick development, debugging, simple tasks
- Local Container: Isolated development, dependency consistency
### Production Deployment
- Kubernetes: Production scale, resource management, distributed execution
## When to Use Job Execution
Choose job execution when you need:
- Single task execution without workflow orchestration
- Independent tasks that don't share data with other steps
- Simple execution without complex dependencies
## When to Use Pipeline Execution Instead
For multi-task workflows, consider Pipeline Execution:
- Multi-step workflows with dependencies between tasks
- Cross-step data passing via parameters or catalog
- Complex orchestration with parallel branches or conditional logic
## Next Steps
- Start simple: Begin with Local execution for development
- Add isolation: Move to Local Container for consistent environments
- Scale up: Deploy with Kubernetes for production workloads
**Multi-Task Workflows**: For orchestrating multiple connected tasks, see Pipeline Execution, which provides workflow management and cross-step data passing.