Joining Camunda and Python gives an ordinary Camunda scripting user far more freedom than just utilizing the alternatives such as:
- Writing the business logic in Java, compiling it and embedding it into Camunda's image
- Utilizing existing engines such as:
  - Nashorn - which still lives in an age where even `for...of` is non-existent (and has been removed anyway)
  - Groovy - which feels like "Java meets Bash" but still drags along the unreadability and unnecessary robustness for the use case of scripting a task
  - others defined in JSR 223
- Hoping for the Jython integration, which still requires you to bring custom JARs and to ensure they don't break your Camunda engine along the way when something gets upgraded
Meet CaPython ([ka.piˈt̪ãn]), combining the pleasant experience of the Camunda orchestration layer with a way of scripting that doesn't require you to question your own sanity in the process.
CaPython will take you out of the insane seas of strict typing for scripting and of gluing JARs on top of an already complex engine (a.k.a. creating a monolithic application) by utilizing two simple concepts under the hood:
- the good old Camunda REST API, via the Camunda External Task Client in Python
- the even older and even better `exec()`
Disregard anything scary you've read about the `eval()` / `exec()` functions. In this case `exec()` is the perfect tool, because you'll be executing custom code anyway and it's up to you to secure the execution access before the flow even reaches any `exec()` or similar functionality.
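To make the idea concrete, here is a minimal, self-contained sketch of the concept - not CaPython's actual implementation: a handler takes the script delivered with the task, prepares a set of globals (CaPython's richer set is documented below), and runs it with `exec()`. All names apart from `exec()` itself are illustrative.

```python
# Minimal illustration of the concept (not CaPython's actual source):
# take the script delivered with the task and run it with exec().

def run_script(script: str, task_id: str, topic: str, task_variables: dict) -> dict:
    # Globals the script will see; CaPython exposes a similar, richer set
    # (see the variable reference below).
    script_globals = {
        "__task_id__": task_id,
        "__topic__": topic,
        **task_variables,  # variables passed into the task from Camunda
    }
    exec(script, script_globals)  # the script mutates script_globals in place
    return script_globals


if __name__ == "__main__":
    result = run_script(
        script="greeting = f'hello from task {__task_id__} on {__topic__}'",
        task_id="42",
        topic="topic",
        task_variables={"amount": 3},
    )
    print(result["greeting"])
```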
However! As with any piece of software, even this can be misused, therefore be aware of the entrypoint variable (called `capython` by default) and ensure nothing from outside of the process is passed into it. Even though it's dockerized and silly `rm -rf /` attempts won't do real harm, this is a way for an attacker to essentially hijack your Python interpreter by inserting malicious code instead of, or appended to, yours.
Then, based on the environment you prepare, an attacker can:
- create a fork bomb just to make your container eat more resources, and therefore money if you pay for the deployment
- utilize the CPU/GPU for processing (miners are trendy nowadays)
- read/write to the container's filesystem (and anything mounted to it)
- reach the network (and anything that's in it) the container is deployed on
- reach the network (and anything that's in it) the pod is deployed on (if using Kubernetes)
- reach the Internet and, depending on the networking rules, upload and/or download content
- execute anything present or downloaded into the container
- access Camunda via its API
That being said, the same applies to any other scripting engine in Camunda, including the existing ones, therefore if you have already been ignoring these points successfully, your system either already has hole(s) in it or might have been compromised. Depending on your scenario, this level of risk might be acceptable.
Each script has two sets of variables present.
Global variables hold useful values so you don't need to dig for them manually. Although accessing and overwriting them is allowed, recovery within a single script instance isn't guaranteed:
- all of the variables passed into the task from Camunda
- `BpmnException`
- `__task__` - instance of `ExternalTask`
- `__task_id__` - from the task
- `__topic__` - from the task
These (global) variables are the only ones whose values are passed back to the task handler after the code has executed; they are used by Camunda for failed-task recovery:
- `__task_retries__`, defaults to `0`
- `__task_retry_timeout__`, defaults to `CAPYTHON_RETRY_TIMEOUT`
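For illustration, a script stored in the entrypoint variable could look like the sketch below. `amount` and `quantity` are made-up process variables, and raising `BpmnException` with a single message argument is an assumption about its use; only the dunder names come from the lists above.

```python
# Illustrative script body (the value of the `capython` task variable).
# `amount` and `quantity` are assumed process variables passed in from Camunda;
# the dunder globals are the ones documented above.
print(f"handling task {__task_id__} on topic {__topic__}")

total = amount * quantity
if total <= 0:
    # Tell Camunda how to recover this failed task, then fail it.
    __task_retries__ = 2
    __task_retry_timeout__ = 60000  # overrides the CAPYTHON_RETRY_TIMEOUT default
    # The exact BpmnException signature is an assumption.
    raise BpmnException("order total must be positive")
```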
This is the phase of running the Docker container's entrypoint. Among other things, it gives you a way to install 3rd-party packages via `pip`.
Currently it supports only `requirements.txt` files, and the file(s) first have to get into the container somehow (mount them directly, via a `ConfigMap` in Kubernetes, etc.), after which you need to specify the full, preferably absolute, path via `CAPYTHON_REQUIREMENTS`.
Example:
CAPYTHON_REQUIREMENTS=/tmp/reqs-1.txt,/opt/reqs-2.txt,/mnt/reqs-3.txt
CaPython will then install the 3rd-party packages in this specific order, so for example if your library requires Cython and the maintainer(s) haven't set its `setup.py` to pull Cython, you should put Cython into the first file and the package requiring Cython for installation into the second file.
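Continuing the Cython example (the package name below is purely illustrative), the split might look like this, using the first two paths from the example above:

```text
# /tmp/reqs-1.txt - installed first
Cython

# /opt/reqs-2.txt - installed second; its package needs Cython already present
#                   while it builds
some-package-that-builds-with-cython
```

with `CAPYTHON_REQUIREMENTS=/tmp/reqs-1.txt,/opt/reqs-2.txt` pointing at the two files in order.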
Specifies the locations of `requirements.txt`-like files within the container for CaPython to install prior to handling tasks.
Specifies the separator to use for splitting multiple paths from `CAPYTHON_REQUIREMENTS`. Defaults to `,`.
Specifies the input variable of a task that holds the full script to execute. Defaults to `capython`.
Specifies the base URL of the `engine-rest` Camunda service handling the REST API. Defaults to `http://camunda:8080/engine-rest`.
Specifies topics in Camunda to listen to. Defaults to `topic`.
Specifies the separator to use for splitting multiple topics from `CAPYTHON_TOPICS`. Defaults to `,`.
Specifies the unique ID of an executor. Defaults to `str(uuid.uuid4())`.
tbd, defaults to `1`.
tbd, defaults to `10000`.
tbd, defaults to `5000`.
tbd, defaults to `3`.
tbd, defaults to `5000`.
tbd, defaults to `30`.
CaPython is available as a standalone Docker image which can be used in a Docker engine, via Docker Compose, in Kubernetes, or any other engine that supports Docker images.
You can find the tags here.
Both the `{version}-{python-tag}` and `{python-tag}` tag formats are always present.
- Navigate to the `examples` folder.
- `docker-compose up -d`
- `docker-compose logs -f capython`
- Open a browser at http://localhost:8080/camunda (user: demo, pass: demo).
- Open `sample-flow.bpmn` in Camunda Modeler and deploy it to http://localhost:8080/engine-rest.
- Run the flow by pressing the "play" button in Camunda Modeler and selecting `Start process instance`.
- Observe the logs of the `capython` service/container and the progress in Camunda Cockpit (if you can catch it).
- Don't forget to spin the resources down with `docker-compose down`.